

01-05-2018
Approaching quality as a black box

Automation enables us to take major steps, such as introducing a new LIMS. Many labs use not only a LIMS, but also a document management system, which allows them to digitize their QA processes. Ideally, the lab's LIMS and the QA system are linked, so the management can trace the progress and quality of the analyses.

They say every cloud has a silver lining. I would turn it around, though, and say every silver lining has a cloud; even a document management system that enables you to do a completely different type of auditing has a downside. It's just as easy for you to collect data for a horizontal audit as it is for an external auditor to identify and locate holes in the system. Yet, this is not what happens in practice. Most auditors are happily surprised about the management system's openness and scope. Apparently, the program satisfies the need for demonstrable quality assurance.

The disadvantage is that auditors are making increasing demands. Their tendency is to require ever more exhaustive tracking & tracing, simply because we have the technology to supply it. The LIMS can easily tell us who received which sample, when, in what state, what it weighed, and what its temperature was. But how long did that sample lie unrefrigerated on the table? In which refrigerator was it stored, at what temperature, and for how long? Who ground the sample? With which grinder, which sieve, and for how long? By the time the results are reported, we have so much big data that every IT guy would be delighted. But what is its actual contribution to the quality of the analysis result?

We're all familiar with error analysis from analytical chemistry, where it is used to determine each action or instrument's contribution to the total measurement error. But no one uses this approach anymore, because statistics have given us the black box method: when re-analyzed, the same sample must yield the same result within the bandwidth of pre-set performance characteristics. It's only when the results deviate from the specifications that you need to start looking for causes. Many people will assume that you can find the cause in that big data pool. But frankly, I doubt that, because such data is often imprecise and can't be quantified. Much of our current tracking & tracing is purely administrative. That's not very helpful for quality assurance. Error analysis teaches us to be accurate and error-conscious, but large-scale administrative tracking & tracing is neither. Auditors could inadvertently ignore the main objective of the ISO 17025 standard by setting requirements based on popular management objectives rather than on monitoring whether analysis results are reported accurately and precisely.
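The two ideas above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any lab's actual procedure: the classical error-analysis view sums individual error contributions in quadrature, while the black-box view only asks whether a re-analysis of the same sample agrees with the original result within a pre-set bandwidth. All names, values, and the coverage factor are illustrative assumptions.

```python
import math

def total_error(contributions):
    """Classical error analysis: combine the standard uncertainty of each
    step or instrument (weighing, grinding, extraction, measurement...)
    in quadrature to get the total measurement uncertainty."""
    return math.sqrt(sum(c * c for c in contributions))

def within_bandwidth(result_a, result_b, repro_sd, k=2.0):
    """Black-box check: two independent analyses of the same sample must
    agree within k * sqrt(2) * reproducibility SD (k=2 gives roughly a
    95% acceptance band). Only a failure triggers a root-cause search."""
    return abs(result_a - result_b) <= k * math.sqrt(2) * repro_sd

# Hypothetical uncertainty budget (same units, e.g. mg/kg):
budget = [0.3, 0.5, 0.4]          # weighing, sample prep, instrument
sd = total_error(budget)          # ~0.71 mg/kg combined

print(within_bandwidth(12.4, 13.1, sd))   # agrees within bandwidth
print(within_bandwidth(12.4, 16.0, sd))   # deviates: investigate causes
```

The point of the black-box view is visible in the code: none of the administrative tracking data appears in `within_bandwidth` at all; only the pre-set performance characteristic and the two results matter.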

Henk Heijthuijsen is a member of the LABinsights editorial advisory board.