Various approaches to metadata quality assessment divide the assessment criteria into sections, for example accuracy, consistency, and completeness. However, one should ask whether a quantitative approach to metadata quality assessment is better than a qualitative approach. Some may point out that the two are not mutually exclusive and therefore not in direct competition with each other. However, I wonder whether that is true. For example, if one has limited reading time, does one benefit more from reading the percentage of one error type relative to another, or does one learn more by reading about the presumed noncompliance or disharmony across metadata records?
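To make the contrast concrete, a quantitative assessment in this sense might amount to something like the following minimal sketch, where the required fields and the sample records are invented for illustration. It produces a percentage but says nothing about why fields are missing.

```python
# Hypothetical sketch of a purely quantitative completeness metric.
# The field names and records below are invented for illustration only.

REQUIRED_FIELDS = ["title", "creator", "date", "subject"]

records = [
    {"title": "Report A", "creator": "Smith", "date": "2021"},
    {"title": "Report B", "creator": "", "date": "2021", "subject": "policy"},
    {"title": "Report C"},
]

def completeness(record, required=REQUIRED_FIELDS):
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for field in required if record.get(field))
    return filled / len(required)

scores = [completeness(r) for r in records]
mean_completeness = sum(scores) / len(scores)
print(f"mean completeness: {mean_completeness:.2f}")
print(f"missing-field rate: {1 - mean_completeness:.2f}")
```

The numbers rank the records, but the report ends there: nothing in the output explains whether the gaps reflect the workflow, the scheme in force at the time, or the checking process.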
The second reason for suggesting that a qualitative description of metadata quality might be better than a quantitative one relates to root causes, which are presumably the purpose of the investigation in the first place.
It seems to me that a quantitative approach makes the data itself the discussion and ignores the processes by which the data arrived in its observed form. For example, what were the human factors under which the metadata was produced? What was the workflow? What was the target metadata scheme at the time the records were created? What checking process did management implement; that is, what were they checking for, and what were their metrics for success?
A qualitative analysis can show where the current process meets management's expectations. Essentially this is a problem-solution fit analysis, in which metadata quality is a trailing performance indicator for business processes. It gets interesting here because the prevailing thought is that metadata is also a way that customers are serviced through the organization. That is, it is like a loss-leader product: a product that brings a customer to the main product.
A purely quantitative analysis simply announces that issues exist and ranks them in relative order. It does not seek to explain the shortcomings through contextual analysis.