Calibrating sensors... My irrigation supervisory system has four pressure sensors that measure water pressure at various points in the system. Those sensors have 0.5% accuracy over the full range of the sensor (0 - 150 psi) and over the full temperature range (-40 to 105 °C). At normal operating pressures of around 40 psi, the maximum absolute error is therefore about 0.2 psi. However, because two sensors could be off in opposite directions, the maximum error between a pair of them is about 0.4 psi.
That's still a small error, and not really significant for my system's purposes – but I got to wondering if I could do better. There are two places in the system where I'm measuring the pressure difference between two gauges (to sense how dirty a filter is), and there I'm looking at pressure differences as small as a few psi. The 0.4 psi error looms more significantly there than you might think at first blush: on a 2 psi differential, that's a potential 20% error.
The obvious way to correct the sensor error would be to calibrate each sensor against a “gold standard”. One could imagine, then, simply making an equivalence table for each sensor, showing the actual pressure that corresponds to each reading. You could make a few calibrated readings and then linearly interpolate for intermediate pressures. Simple! Except, that is, for one slight problem: the lowest-cost pressure calibrator I could find is nearly $500! That's way more than this problem is worth to me (and several times the cost of all four of my pressure sensors!). I can't think of anyone who might have one of these little beasties lying around, either.
So we need another approach. One thing occurred to me that might lead to a solution: the sensors' intrinsic absolute accuracy is more than sufficient – what I need to improve is the relative accuracy. What if ... I took one of these sensors and called it my gold standard? Then I could calibrate the other three sensors against that one, and then my differential pressure measurements should be more accurate. I have an easy way to have all four sensors simultaneously measure the exact same pressure (whatever the Paradise Irrigation system happens to be providing at that moment, which varies from 0 psi to about 45 psi over time): I just close the outlet valve from my pump shed, which guarantees there's no flow through the system (and therefore all the pressures are identical). I can also easily get absolute zero pressure to all four gauges by closing the inlet and the outlet valves, then draining the pump.
I know a bit about these sensors from past experience. If P is the absolute pressure, then the sensor's measured pressure S can be approximated by
AP + B,
where A is the scale coefficient (generally very close to one), and B is the offset (generally very close to zero). If you take a series of measurements at different pressures for a sensor you're calibrating, while your gold standard is measuring the same pressure, you can get a series of pairs of measurements. That series can then be used as the input to a linear regression, the output of which is A and B for the sensor you're calibrating. Then you've got that sensor's equation relative to your gold standard sensor.
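The regression here is a simple one-variable least-squares fit, so it can be sketched in a few lines without any libraries. This is just an illustration in Python (the function name and data layout are mine, not from the actual system): given pairs of (gold-standard reading, sensor reading) taken at the same moment, it returns the A and B of the line S = AP + B.

```python
def fit_sensor(pairs):
    """Least-squares fit of S = A*P + B, where each pair is
    (gold_standard_reading, sensor_reading) taken simultaneously."""
    n = len(pairs)
    sum_p = sum(p for p, _ in pairs)
    sum_s = sum(s for _, s in pairs)
    sum_pp = sum(p * p for p, _ in pairs)
    sum_ps = sum(p * s for p, s in pairs)
    # Standard closed-form simple linear regression; requires at least
    # two pairs at distinct pressures (else the denominator is zero).
    a = (n * sum_ps - sum_p * sum_s) / (n * sum_pp - sum_p * sum_p)
    b = (sum_s - a * sum_p) / n
    return a, b
```

(Python 3.10+ also ships `statistics.linear_regression`, which does the same fit.)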
At that point it's just some simple math to “correct” the sensor you're calibrating. Its equation is
S = AP + B.
The gold standard's equation is
S = P
(by definition). The difference between them (the error, or E) is
E = (A-1)P + B.
Therefore the corrected pressure is
C = S - E = S - ((A-1)P + B).
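One wrinkle when implementing this: the P on the right-hand side is the true pressure, which the running system doesn't know – it only has the reading S. Solving S = AP + B for P gives the algebraically equivalent form C = (S - B)/A, which is computable from the reading alone. A minimal Python sketch (the function name is mine):

```python
def corrected(s, a, b):
    """Invert the sensor equation S = A*P + B to recover the pressure
    (relative to the gold-standard sensor) from a raw reading s."""
    return (s - b) / a
```

Since A is very close to one and B very close to zero, substituting S for P in the subtraction form above gives nearly the same answer, but the inverted form is exact.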
That's not so bad! I'm implementing a class that does this right now. The most challenging bit of that is the user interface: a way to let the user (me!) click a button to capture a new set of data. The irrigation supervisor can't do that on its own, because it doesn't know when all the sensors should be showing the same pressure – it needs a human assist for that...
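For what it's worth, here's a minimal sketch of what such a class might look like – the names and structure are my guess at an illustration, not the actual irrigation supervisor's code. The user's button press would call `capture` with the two simultaneous readings; the fit is redone after each capture.

```python
class SensorCalibration:
    """Linear calibration of one sensor against a gold-standard sensor."""

    def __init__(self):
        self.pairs = []  # (gold_standard_reading, sensor_reading)
        self.a = 1.0     # scale coefficient; identity until calibrated
        self.b = 0.0     # offset

    def capture(self, gold_reading, sensor_reading):
        """Record one simultaneous pair (e.g. on a user button press)
        and refit once there are enough points."""
        self.pairs.append((gold_reading, sensor_reading))
        if len(self.pairs) >= 2:
            self._fit()

    def _fit(self):
        # Closed-form simple linear regression of S = A*P + B.
        # Assumes at least two captures at distinct pressures.
        n = len(self.pairs)
        sum_p = sum(p for p, _ in self.pairs)
        sum_s = sum(s for _, s in self.pairs)
        sum_pp = sum(p * p for p, _ in self.pairs)
        sum_ps = sum(p * s for p, s in self.pairs)
        self.a = (n * sum_ps - sum_p * sum_s) / (n * sum_pp - sum_p * sum_p)
        self.b = (sum_s - self.a * sum_p) / n

    def corrected(self, sensor_reading):
        """Reading corrected to agree with the gold-standard sensor."""
        return (sensor_reading - self.b) / self.a
```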