
Sponsored Article

A specification guide to inclinometers, accelerometers and load cells

03 June 2014

Sensors are an integral component of any measurement and automation application, ensuring accuracy, reliability, efficiency and communications capability. This has fuelled research and development in the sensor industry, and continued innovation in sensor technology has sustained a thriving market and a growing demand for custom solutions. By Mike Baker, managing director, Sherborne Sensors

According to analyst firm Frost & Sullivan, the sensors market in Europe is estimated to reach $19 billion by 2016, creating opportunities for technological advancements and ultimately new applications for sensors.

Ultra-reliable, long-life precision sensors such as inclinometers, accelerometers and load cells enjoy an unrivalled status within the sensors industry and are critical in industries spanning aerospace, military and defence, transport, construction and civil engineering, to name but a few. In this whitepaper we outline the key factors you should consider when specifying inclinometers, accelerometers and force transducers. In part 2 we examine how location, calibration and application influence the accuracy of inertial sensors, and we also outline some typical applications.

Specmanship vs. Reality 
If you want to either specify a sensor or acquire meaningful data from a sensor, you first need to understand terms such as total error band, accuracy, and precision. 

Sensors and transducers are used throughout the worlds of science, technology, and industry to measure and control physical events. They range from simple devices such as thermocouples to sophisticated sensors used in aerospace applications. Most sensors output their data in an electronic form of one sort or another and it is this signal that forms the analog of the physical quantity being monitored.

When specifying a sensor, its accuracy and its precision are paramount among a multitude of other parameters. These terms are often used interchangeably but it is critical to recognize the fundamental differences between the two. Accuracy, a qualitative concept, indicates the proximity of measurement results to the true value, while precision reflects the repeatability or reproducibility of the measurement.

ISO 3534-1:2006 defines precision as the closeness of agreement between independent test results obtained under stipulated conditions, and views the concept of precision as encompassing both repeatability and reproducibility. The standard defines repeatability as precision under repeatability conditions, and reproducibility as precision under reproducibility conditions.

Nevertheless, precision is often taken to mean repeatability. The terms precision, accuracy, repeatability, reproducibility, variability, and uncertainty represent qualitative concepts and thus should be applied with care. The precision of an instrument reflects the number of significant digits in a reading; the accuracy of an instrument reflects how close the reading is to the true value being measured.

To the decimal point
It is common in science and engineering to express the precision of a measurement in terms of significant figures, but this convention is often misused. Armed with an inexpensive calculator, you can produce results to 16 decimal places or more. For example, an electronic calculator or spreadsheet may yield an answer of say 6.1058948 and this implies that we are confident of the precision of the measurement to 1 part in 61,058,948.

Similarly, stating a figure of 6 implies that we know the answer to a precision of 1, whereas we may really know it to a precision of three decimal places if it were written 6.000. Neither of these answers may be accurate because the true value may well be 5 but measured inaccurately as 6 in both instances. An accurate instrument is not necessarily precise, and instruments are often precise but far from accurate.

The chart in Figure 1 illustrates the difference between accuracy and precision pictorially and shows that the precision of the measurement may not be constant but may instead vary in proportion to signal level.

Concepts of accuracy
Sensor manufacturers and users employ one of two basic methods to specify sensor performance:
• Parameter specification
• The Total Error Band envelope
Parameter specification quantifies individual sensor characteristics without any attempt to combine them. The Total Error Band envelope yields a solution much nearer to that expected in practice, whereby sensor errors are expressed in the form of a Total Error Band or Error Envelope into which all data points must fit regardless of their origin. As long as the sensor operates within the parameters specified in the data sheet, the sensor data can be relied on, giving the user confidence that all sensor data acquired will be accurate within the stated error band and therefore avoiding the need for lengthy and error-prone data analysis. The diagram in Figure 2 illustrates the total error band concept.
However, many manufacturers specify individual error parameters, unless there are legislative pressures compelling them to state the total error band of their sensors. In the weighing industry, for instance, if products or services are sold by weight, the weighing equipment is subject to legal metrology legislation and comes under the scrutiny of weights and measures authorities around the world. The Organisation Internationale de Métrologie Légale (OIML) requires that load cells used in weighing equipment are accuracy-controlled by enforcing strict adherence to an error-band performance specification. Typically, such an error band will include parameters such as nonlinearity, hysteresis, nonrepeatability, creep under load, and thermal effects on both zero and sensitivity. The user of such a sensor can rest assured that its measurement precision will be within the total error band specified, provided all the parameters of interest are included.
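
As an illustration of how the two approaches differ in practice, the short Python sketch below combines a set of purely hypothetical data-sheet parameters (all expressed as a percentage of full range output) into a single band, using either a worst-case sum or a root-sum-square estimate. Neither method is mandated by OIML or by any particular manufacturer; which combination is appropriate is an engineering judgement for the user.

    # Combining hypothetical individual error parameters into a single band.
    # All figures are illustrative only, quoted in % of full range output (FRO).
    import math

    errors_pct_fro = {
        "nonlinearity": 0.05,
        "hysteresis": 0.02,
        "nonrepeatability": 0.02,
        "creep_under_load": 0.03,
        "thermal_zero": 0.04,          # over the compensated temperature range
        "thermal_sensitivity": 0.04,   # over the compensated temperature range
    }

    worst_case = sum(errors_pct_fro.values())                      # pessimistic bound
    rss = math.sqrt(sum(e ** 2 for e in errors_pct_fro.values()))  # statistical estimate

    print(f"Worst-case error band: ±{worst_case:.3f} % FRO")
    print(f"Root-sum-square estimate: ±{rss:.3f} % FRO")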

Unless there is external pressure to comply, manufacturers do not generally specify their products using the error band method, even though it yields results more representative of how the product will respond during real-world use. Instead, deep-rooted commercial pressures result in manufacturers portraying their sensors in the most favorable light when compared to those of their competitors. The commonly used parameter method allows you to make a direct comparison between competing products by examining their specifications as detailed in the product data sheets. If you are selecting a sensor, you must carefully examine all performance parameters with respect to the intended application to ensure that the sensor you ultimately choose is suitable for its specific end use.

Predictable error sources
A typical sensor data sheet will list a number of individual error sources, not all of which affect the device in a given situation. Given the plethora of data provided, you may find it difficult to decide whether a given sensor is sufficiently accurate for your desired application. Ideally, the mathematical relationship between a change in the measurand and the output of a sensor over the entire compensated temperature and operational range should include all errors due to parameters such as zero offset, span rationalization, nonlinearity, hysteresis, repeatability, thermal effects on zero and span, thermal hysteresis, and long-term stability. Typically, users will focus on just one or two of these parameters, using them as benchmarks with which to compare other products.

One of the most commonly selected parameters is nonlinearity, which describes the degree to which the sensor's output (in response to changes in the measured parameter) departs from a straight-line correlation. A polynomial expression describing the true performance of the sensor would, if manufacturers provided it, yield accuracy improvements of perhaps an order of magnitude. Many sensors do, in fact, have a quadratic relationship between sensor output and measured value, with a response that is linear only to a first-order approximation. Thus, if you substitute the quadratic equation y = ax² + bx + c for the manufacturer's advertised sensitivity data, supplied in the linear form y = ax + b, you can improve the accuracy. In another example, although many gravity-referenced inertial angular sensors have a sine transfer function (the relationship between the output and the measured angle follows a sine law), the manufacturer's data sheet will still list a linear expression, because the output is linearly related to the sine of the angle rather than to the angle itself.
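
To make the arithmetic concrete, the sketch below (with entirely hypothetical coefficients) contrasts inverting the linear data-sheet characteristic y = ax + b with inverting a quadratic characteristic y = ax² + bx + c, and shows the arcsine conversion for a gravity-referenced inclinometer whose output follows the sine of the measured angle. The coefficient values and the 5 V full-scale output are assumptions for illustration, not figures from any real data sheet.

    import math

    # Hypothetical calibration coefficients for one particular sensor.
    A, B, C = 1.2e-4, 0.5000, 0.0012   # quadratic characteristic: y = A*x^2 + B*x + C
    B_LIN, C_LIN = 0.5000, 0.0012      # linear data-sheet form:   y = B_LIN*x + C_LIN

    def measurand_from_output_linear(y):
        """Invert the linear (data-sheet) characteristic."""
        return (y - C_LIN) / B_LIN

    def measurand_from_output_quadratic(y):
        """Invert the quadratic characteristic, taking the physically meaningful root."""
        disc = B ** 2 - 4.0 * A * (C - y)
        return (-B + math.sqrt(disc)) / (2.0 * A)

    # Gravity-referenced inclinometer: output is proportional to the sine of the angle.
    FULL_SCALE_V = 5.0                 # assumed output at 90 degrees of tilt

    def angle_deg_from_output(v):
        return math.degrees(math.asin(v / FULL_SCALE_V))

    print(measurand_from_output_linear(2.5), measurand_from_output_quadratic(2.5))
    print(angle_deg_from_output(2.5))  # 30 degrees for a sine-law device at half of full output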

If the specific thermal effects contributing to both zero and sensitivity errors are stated, then the measurement errors may be minimized by considering the actual errors rather than the global errors quoted on the sensor specification or data sheet, together with the actual temperature range encountered in the application. Often, both errors are quoted in terms of the percentage of Full Range Output (FRO). In reality, sensitivity errors are normally a function of a percentage of reading. Thermal errors may be further minimized by actively compensating for temperature by using a reference temperature sensor installed near to or on the sensor being used. Some manufacturers provide an onboard temperature sensor expressly for this purpose.
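
As a sketch of what such active compensation might look like in software, the fragment below assumes the manufacturer quotes a thermal zero coefficient (in % of FRO per °C) and a thermal sensitivity coefficient (in % of reading per °C), both referred to a stated reference temperature, and that a temperature reading is available from an onboard or nearby sensor. All numerical values are hypothetical.

    # Active thermal compensation using a reference temperature measurement.
    # Coefficients and values below are hypothetical, for illustration only.
    FRO = 10.0        # full range output, e.g. volts
    T_REF = 20.0      # reference (calibration) temperature, deg C
    ZERO_TC = 0.002   # thermal zero coefficient, % FRO per deg C
    SENS_TC = 0.005   # thermal sensitivity coefficient, % of reading per deg C

    def compensate(raw_output, temperature_c):
        """Remove the predictable thermal zero and sensitivity (span) errors."""
        dt = temperature_c - T_REF
        zero_shift = (ZERO_TC / 100.0) * FRO * dt     # absolute shift, referred to FRO
        corrected = raw_output - zero_shift
        corrected /= 1.0 + (SENS_TC / 100.0) * dt     # span correction, % of reading
        return corrected

    print(compensate(4.987, temperature_c=45.0))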

It is important to distinguish between the contribution of zero-based and sensitivity errors. Thermal zero errors are absolute errors and are generally quoted as a percentage of full scale (F.S.). In most cases, sensors are not used to their full-scale capacity and therefore, when expressed as a percentage of reading, errors can become very large indeed. For example, a sensor used at 25% F.S. will have a thermal zero error of four times its data sheet value as a percentage of reading. A similar mistake occurs when users specify sensors with an operating range much higher than that which will be encountered in practice "just to be safe."
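
The scaling is simple but easy to overlook; the few lines below show how a hypothetical zero error of 0.05% F.S. grows when re-expressed as a percentage of reading at reduced utilisation of the range.

    # A zero error quoted in % of full scale (F.S.), re-expressed as % of reading.
    zero_error_pct_fs = 0.05   # hypothetical data-sheet figure
    for utilisation in (1.00, 0.50, 0.25, 0.10):
        error_pct_of_reading = zero_error_pct_fs / utilisation
        print(f"Used at {utilisation:.0%} of F.S.: zero error = {error_pct_of_reading:.2f} % of reading")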

These examples illustrate that you can improve both accuracy and precision because you can minimize predictable errors mathematically. Stability errors and errors that are unpredictable and nonrepeatable present the largest obstacle to achievable accuracies.

Unpredictable errors
Unpredictable errors—such as long-term stability, thermal hysteresis, and nonrepeatability—cannot be treated mathematically to improve accuracy or precision and are far more difficult to deal with. While thermal hysteresis and nonrepeatability can be quantified at the point of manufacture under controlled conditions, long-term stability cannot.

Various statistical tools are available to help define long-term stability, but ultimately you have to make a decision that will depend in part on how critical the measurement is. Routine recalibration may be the only reliable way of eliminating the consequences of long-term deterioration in the sensor's performance.

Part Two: Gravity and Sensor Calibration Accuracy
How location, calibration and application influence the accuracy of inertial sensors.

The calibration accuracy of many sensors is fundamentally dependent upon the force of gravity at the site of operation. Examples of such sensors include accelerometers, inclinometers, force transducers and load cells. Because of the principles on which these sensors work, their sensitivity is fundamentally proportional to the force of gravity where they are being used; their absolute sensitivity in situ may well differ from that at their place of manufacture. The acceleration due to gravity varies across the Earth's surface for a number of reasons and, in the extreme, this can translate to a variation of up to 0.5% depending on where in the world it is measured.
For example, electronic weigh scales that use load cells as weight sensors effectively measure the force of gravity acting upon a mass. If on-site gravity compensation is not taken into account, the scales will have an error proportional to the difference in the acceleration due to gravity between the installation site and the original calibration site.
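
A minimal sketch of the correction, assuming the values of g at both sites are known (the figures used here are illustrative only):

    # Correcting an indicated mass for the difference in gravity between the
    # calibration site and the installation site. The g values are examples only.
    g_calibration_site = 9.8123    # m/s^2, where the scale was originally calibrated
    g_installation_site = 9.7803   # m/s^2, where the scale is actually used

    indicated_mass_kg = 100.000    # reading obtained at the installation site

    # A load-cell scale measures force (m * g), so its indication scales with local g.
    true_mass_kg = indicated_mass_kg * g_calibration_site / g_installation_site
    print(f"Corrected mass: {true_mass_kg:.3f} kg")   # about 0.33 % higher in this example
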
Universal Laws
Sir Isaac Newton’s Law of Universal Gravitation resulted from mostly empirical observations and was first stated in his Philosophiae Naturalis Principia Mathematica in 1687. Here, he stated that: “Every point mass attracts every other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them.”

Mathematically, the force due to gravity is expressed by the formula: F = Gm₁m₂/r²

(Where F is the force between the masses, G is the gravitational constant, m₁ and m₂ are the first and second masses, and r is the distance between the centres of the masses.)

Although Einstein’s Theory of General Relativity has since superseded this law, it continues to be applied, unless there is a need for extreme precision or when dealing with gravitation for extremely massive and dense objects.

The gravitational constant G is actually very difficult to measure but, in 2010, CODATA (the Committee on Data for Science and Technology) recommended the value G = 6.67384 × 10⁻¹¹ m³ kg⁻¹ s⁻², with an uncertainty of 1 part in 8,300. Thus, with knowledge of the mass of the Earth and its radius, the force due to gravity can be ascertained.
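
As a first approximation, treating the Earth as a point mass at its mean radius (approximately 5.972 × 10²⁴ kg and 6,371 km respectively) gives a familiar figure:

    # First-approximation estimate of g from the Law of Universal Gravitation.
    G = 6.67384e-11     # m^3 kg^-1 s^-2 (the CODATA 2010 value quoted above)
    M_EARTH = 5.972e24  # kg, approximate mass of the Earth
    R_EARTH = 6.371e6   # m, mean radius of the Earth

    g = G * M_EARTH / R_EARTH ** 2
    print(f"g ≈ {g:.3f} m/s^2")   # roughly 9.82 m/s^2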

It should be noted that the Law of Universal Gravitation deals with point masses; the Earth is neither uniform in shape nor even in mass distribution, and can only be treated as a point mass to a first approximation. Indeed, the Earth is not spherical but takes the form of an oblate spheroid, the equatorial diameter being larger than that at the poles. Gravity therefore varies with latitude (see Figure 3). Secondly, the height above sea level of the various landmasses making up the inhabitable surface of the Earth varies according to location, and, following the Law of Universal Gravitation, the acceleration due to gravity varies with altitude too (see Figure 4).

Thirdly, the Earth is spinning on its axis, and consequently the apparent force of gravity at the equator is reduced by the centripetal acceleration needed to keep a mass rotating with the Earth, the effect diminishing to zero at the poles.

By way of example, and using the Law of Universal Gravitation, the force due to gravity acting on a mass of 1 kg at sea level and at the equator is 9.7958 N. However, at an elevation of 2,000 m, the force reduces to 9.7897 N, a reduction of 0.06%. Similarly, due to the Earth's geometry, the force acting on a mass of 1 kg at the poles increases by 0.70% compared to that at the equator. Complicating matters further is the effect of centripetal acceleration due to the Earth's rotation, which varies from a maximum at the equator to zero at the poles. Varying according to latitude, this reduces the apparent force due to gravity at the equator by 0.35% compared to that at the poles.
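
For readers who need a working estimate of local gravity rather than the point-mass approximation above, one commonly used approach (not specific to any sensor manufacturer) is the 1980 International Gravity Formula combined with a free-air altitude correction. The sketch below captures the combined effect of the Earth's shape and rotation, and is consistent with the roughly 0.5% worldwide variation mentioned earlier; for high-accuracy work a surveyed or published value for the site is preferable.

    import math

    def local_gravity(latitude_deg, altitude_m):
        """Approximate local g (m/s^2) from latitude and altitude using the
        1980 International Gravity Formula plus a free-air correction."""
        phi = math.radians(latitude_deg)
        g_lat = 9.780327 * (1.0
                            + 0.0053024 * math.sin(phi) ** 2
                            - 0.0000058 * math.sin(2.0 * phi) ** 2)
        return g_lat - 3.086e-6 * altitude_m   # free-air (altitude) correction

    print(local_gravity(0.0, 0.0))      # equator, sea level: ~9.780 m/s^2
    print(local_gravity(90.0, 0.0))     # pole, sea level:    ~9.832 m/s^2
    print(local_gravity(0.0, 2000.0))   # equator at 2,000 m: ~9.774 m/s^2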

What it all means

The application of force transducers to weighing is perhaps the easiest way to understand the effects of gravity on sensor calibration and accuracy. Typically, a weighing instrument is calibrated using test masses. But, as has been shown, the forces produced by test masses will vary according to location, and it is reasonably commonplace for accurate weighing systems to have a means of adjustment built in to allow for multi-location deployment throughout the world.

Inertial inclinometers and accelerometers also use gravity as their fundamental calibration reference, but it is important to note that the calibration will only be truly valid when the sensors are used at the original calibration site. 

When making accurate measurements, unless the sensors can be calibrated locally, it is essential to consider the latitude and altitude at which the sensors will be installed and to adjust their sensitivity accordingly if the manufacturer's calibration data is to be the sole source of reference.

Fortunately, there are several convenient sources of gravitational data available for reference that may be used for this purpose. One of the most useful and flexible information sources available on the Internet is that provided on the PTB's website.

The use of a value for g of 9.81 m/s² is commonplace, but when high accuracy is required from a sensor that relies upon gravity for its function, it is important to understand and consciously correct for variations in gravity between the original calibration site and the site where the sensors are installed. Reputable sensor manufacturers will always provide users with the value of g at the calibration site so that variations can be taken into consideration.
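
A minimal sketch of that correction, assuming the manufacturer has stated g at the calibration site and local g has been obtained (for example from the PTB resource mentioned above, or estimated as sketched earlier); the numbers are illustrative only:

    # Re-scaling the calibrated sensitivity of a gravity-referenced sensor for
    # use at a different site. All values are illustrative only.
    g_calibration_site = 9.8123    # m/s^2, stated by the manufacturer
    g_installation_site = 9.7803   # m/s^2, local value at the point of use

    factory_sensitivity = 2.5000   # e.g. output at the 1 g reference condition, from the certificate
    site_sensitivity = factory_sensitivity * g_installation_site / g_calibration_site

    print(f"Sensitivity to use on site: {site_sensitivity:.4f}")
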
Appendix 
Inclinometers - Inclinometers can measure horizontal and vertical angular inclination to very high levels of precision and output the data in the form of analogue or digital signals. Inclinometer technology has broadened the scope of sensor applications in recent years and there is a wide array of standard products available.

Customised products can offer the customer many advantages, including performance specifications that meet exacting requirements and direct fitment into the application. This results in better performance, with both time and expense saved in the installation process.

Below are listed some typical inclinometer applications.
• Military Fire Control Systems – require robust sensors which can deliver precision measurements following exposure to severe levels of mechanical shock from the firing process.
• Rail Track Monitoring – to survey rail tracks to determine deterioration and the need for safety critical maintenance.
• Satellite Communications – used on ship, vehicle and land based antenna applications to track communication satellites.
• Civil Engineering – to monitor movement over time of bridges, buildings and other large structures.
• Continuous Casting Monitor – to confirm that guide tracking is within dimensional tolerances, ensuring continuous, high-quality production.
Accelerometers - Accelerometers measure linear acceleration and deceleration of dynamic systems. These can be used in the development phase of projects to ensure design calculations correlate with actual measurements in the application and also in the safe operation and control of equipment in service. 
The careful selection of an accelerometer is essential to ensure its frequency response is matched to the application requirements. Products are available with bandwidths of up to many kHz, and specialist low-frequency accelerometers can measure down to DC (0 Hz).
Some typical examples of accelerometer applications include:
• Aircraft Health Monitoring – acceleration levels applied to aircraft structures in service are monitored for determining the safe flying life of the aircraft.
• Civil Engineering – low ‘g’ range accelerometers utilised to monitor accelerations induced into bridges and other structures to check design calculations and long term critical safety.
• Railway – control of braking and cornering of trains to ensure safety and passenger comfort.
• Flight Simulators – to control actuation systems to ensure the programmed ‘g’ levels are achieved. 
• Accident Data Collection – acceleration data recording for future reference in the event of an accident or incident.

Load Cells - A load cell is a transducer used to convert a force into an electrical signal, offering measurement of tension, compression and shear forces. The majority of today's designs use strain gauges as the sensing element, whether foil or semiconductor, and feature low deflection and high-frequency response characteristics, which are especially beneficial for both materials testing and high-speed load measurement applications, particularly where peak forces are being monitored.

Load cells are available in many physical shapes and forms to suit particular applications and forms of loading. Miniature form and very low force ranges are invariably available as custom designs where specific customer applications require them. 
Some of the major load cell applications include: 
• Aerospace – fatigue testing of airframe structures, internal engine forces and ejector seat force measurements. 
• Paper Mill – bearing force load cells to monitor and maintain correct roller tension. 
• Marine – hoist loads, platform retention, towing forces and mooring loads and systems.
• Civil Engineering – bridge lifting/weighing, vehicle/crane load monitoring and earthquake force monitoring.
• Pharmaceutical Industry – to control the compressive forces during tablet manufacture over many millions of cycles.

