
ISO9001 changes and their effect on calibration processes

19 July 2016

Heikki Laurila explores the changes in the 2015 revision of ISO9001 and explains how these changes affect calibration processes.

The ISO9001 standard was the first quality management standard of its kind. Its main focus has been to provide consistently conforming products and services. Its most recent revision came in September 2015, when it saw some major changes. Certified companies have a three-year transition period – until September 2018 – to update their quality systems to meet the standard. 

A general change is the new High-Level Structure (HLS): the standard was updated to a structure that will also be common to other management standards, such as the ISO14000 environmental management standard. An important management-level change in the revision is that the older term ‘management’ has been replaced with ‘leadership’. This is a fundamental change, and a modernisation of the obligations of the highest-level management – its commitment, responsibility and resource allocation. 

Another big change is the risk-based thinking approach of the standard. Risk-based thinking has been included in many parts of the standard, and it is one of the changes that will affect calibration processes. 

Other changes, like the process-based approach, which was always included in the standard, require more details about the processes, such as determination of inputs and outputs, resources, responsibilities, risks and opportunities. Services are now explicitly highlighted as being part of the term ‘products’. A quality manual as its own document is no longer required; it can now be included in an electronic system. The term ‘quality management system’ has been replaced with the term ‘context of the organisation’, so instead of describing a separate quality system, the standard now requires a description of the company’s organisation. 

While the old standard revision had some specific requirements for test and measuring equipment, the new revision takes a higher-level approach and requires that all the necessary resources (human and equipment) be made available and be adequate for use in measurement and monitoring. 

The role of management has been updated to the more modern thought process of leadership. This puts new responsibilities on the senior management, who must demonstrate commitment and take responsibility for the effectiveness of the quality system, while enhancing customer satisfaction. One could conclude that management’s role has changed from focusing on things to focusing on people, and from doing things right to doing the right things. Also, instead of planning, organising and directing, the role has moved to inspiring, influencing and motivating. 

Risk-based thinking is one of the key elements in the new revision and it affects many elements throughout the standard. This is also what affects calibration processes the most. 

Although risk-based thinking has been included in earlier revisions of the standard, it is very much highlighted in the 2015 revision. Some companies may already be familiar with the risk-based approach from other standards, such as the Good Manufacturing Practices (GMP) guidelines and Food and Drug Administration (FDA) regulations. It is worth remembering that although a risk is often considered a negative thing, risk-based thinking also helps to reveal new opportunities. 

Risk analysis is often divided into two parts: the impact (severity) of something happening, and the probability (likelihood) of it happening. Often, both of these are rated on a scale of 1 to 5. Multiplying the impact by the likelihood gives a risk index on a scale of 1 to 25. The bigger the index, the bigger the risk.
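As a minimal sketch, the 1-to-25 risk index described above can be computed as follows (the function name and example ratings are illustrative assumptions, not from the standard):

```python
# Hypothetical sketch of the risk index: impact (severity) and
# likelihood are each rated on a 1-5 scale, and their product gives
# a 1-25 risk index.

def risk_index(impact: int, likelihood: int) -> int:
    """Multiply impact by likelihood; both must be on a 1-5 scale."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be rated 1-5")
    return impact * likelihood

# A severe but unlikely failure:
print(risk_index(5, 2))  # 10
# A moderate, fairly likely failure:
print(risk_index(3, 4))  # 12
```

Measurement points can then be ranked by this index so that calibration effort is focused where the index is highest.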
 
ISO does offer a dedicated standard for risk management: ISO/IEC 31010:2009, Risk management – Risk assessment techniques. 

There is also a vocabulary available for risk management: ISO Guide 73:2009, Risk management – Vocabulary. A GAMP guide is available on how to implement the risk-based approach for calibration management: The GAMP Good Practice Guide: A Risk-Based Approach to Calibration Management. 
Risk and calibration 
When analysing calibration processes it is important to analyse all the measurement points and loops in the plant. 

For example, first evaluate the consequences (impact/severity): what will happen if this measurement fails, and what would the consequences be? Secondly, how likely is it that this measurement will fail? Multiplying these two gives the risk index. Keep in mind how the risk can be diminished, or even eliminated. The risk can come from many sources: human error, an equipment failure, an accident, or a total surprise that cannot be predicted. 
There can be some measurements in a process plant that will have very severe consequences if they fail, so these will be the most critical to consider. Sometimes the most critical measurements are provided with redundant independent measurements, so if one measurement fails, the others will continue working. This is most often the case in critical safety measurements, which also typically have dedicated safety certified/approved measurement equipment. In practice, it is not always possible to add redundant measurements into all loops which are considered critical or important. 

For an important measurement, start by installing measuring equipment known to be the most reliable. Next, calibrate the most important measurements more often than those that are less critical. After the risk analysis, less important measurements can be calibrated less often, leaving more time for the calibration and maintenance of the most important measurements.
 
The measurement may fail at any time between periodic calibrations, and the longer the calibration period is, the longer the measurement may have been faulty. It is easy to notice if an instrument fails totally, but if the instrument slowly drifts outside the allowed tolerances, the error is often not discovered until the next calibration. When a measurement fails in calibration, perform an analysis of the impact of that failure. For example, if there is a one-year calibration period and the measurement fails in calibration, then in the worst-case scenario this measurement could have been bad for the whole year, assuming it failed right after the previous calibration. 

Different measurement points in the process will have different criticality and accuracy needs. These acceptance/accuracy limits should be set by the process specialists. Often, the same kind of transmitters are installed in locations with different accuracy needs, and the calibration acceptance limits are set according to the transmitter’s specifications, when they should instead be derived from the process requirements. This creates a risk that the more and less critical measurement points will end up having the same acceptance limits in calibrations. 

Best practice 
Manual documentation of the actual calibration creates a risk of human error. Automatically documenting electronic calibration equipment is recommended in order to reduce this risk. It will also save time. 

One essential aspect in any calibration is to analyse the calibration results and compare the found errors to the allowed accuracy limits. To reduce the risk, electronic calibration equipment should be used to automatically calculate the error, compare it to the allowed limit and make the Pass/Fail decision automatically. 
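As an illustration of such an automatic Pass/Fail decision, here is a minimal sketch (the function and parameter names are hypothetical, not taken from any particular calibrator or software):

```python
# Minimal sketch of an automatic Pass/Fail decision: compute the
# found error against a reference value and compare it to the
# allowed tolerance.

def pass_fail(reference: float, reading: float, tolerance: float) -> str:
    """Return 'Pass' if the absolute error is within the allowed tolerance."""
    error = reading - reference
    return "Pass" if abs(error) <= tolerance else "Fail"

# A transmitter reads 100.3 at a 100.0 reference, with ±0.5 allowed:
print(pass_fail(100.0, 100.3, 0.5))  # Pass
print(pass_fail(100.0, 100.7, 0.5))  # Fail
```

Automating this comparison removes the chance of a calculation or judgement error at the moment of calibration.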

All measurements tend to drift over time, so it is important to follow the history trend of each measurement. A history trend is a tool to analyse whether the instrument is likely to stay within the tolerance limit over the next calibration period, or whether the period should be adjusted. If the history shows a risk, meaning the instrument is unstable and the measurement could fail during the next period, the calibration period should be made shorter. Otherwise an OOT (Out Of Tolerance) situation could occur. On the other hand, if the instrument is stable and the measurement is not critical, the calibration period could be made longer and resources released for more important measurements. 
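The history-trend check described above can be sketched as a simple linear extrapolation of the errors found at past calibrations (a hypothetical illustration; real calibration management software may use more sophisticated statistics):

```python
# Hypothetical sketch of history trending: fit a straight line to the
# errors found at past calibrations and extrapolate one period ahead,
# to see whether the instrument is likely to stay within tolerance.

def predicted_error(times: list[float], errors: list[float],
                    horizon: float) -> float:
    """Least-squares linear fit of error vs. time, evaluated at `horizon`."""
    n = len(times)
    mean_t = sum(times) / n
    mean_e = sum(errors) / n
    slope = (sum((t - mean_t) * (e - mean_e) for t, e in zip(times, errors))
             / sum((t - mean_t) ** 2 for t in times))
    return mean_e + slope * (horizon - mean_t)

# Errors (in % of span) found at three yearly calibrations:
history_t = [0.0, 1.0, 2.0]
history_e = [0.05, 0.12, 0.19]
tolerance = 0.25

# Projected error at the next scheduled calibration (year 3):
projection = predicted_error(history_t, history_e, 3.0)
print(projection > tolerance)  # True: drift suggests shortening the period
```

If the projected error exceeds the tolerance before the next scheduled calibration, the period should be shortened; if the instrument is very stable, the same projection can justify lengthening it.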

Manual history trending is labour intensive. Some calibration management software offers easy ways to analyse the history trend of measurements. 

Total uncertainty 
Analysis of the total uncertainty is an essential aspect of any calibration. The idea of calibration is lost if the reference equipment is not traceable or is not accurate enough for the calibration work. 

Calibration procedures may be carried out differently when they are done by different people, or at different times. This adds uncertainty to the calibration results and makes it difficult to compare or trend them. Automated calibration procedures, using calibration management software and automated documenting calibrators, ensure that calibrations are always performed in the same manner and are repeatable, reliable and comparable. 

Heikki Laurila is product marketing manager at Beamex Oy Ab.

