SediMeter Accuracy Revisited

The second and third generations of the SediMeter™ were developed with the express purpose of serving as siltation monitoring instruments on coral reefs, where 1 mm of sediment accumulation can be enough to kill corals. With its detection threshold of 0.1 mm, the SediMeter™ can detect such siltation in real time. It was designed as part of a system that allows networking, including wireless networking, of up to hundreds of instruments. It is manufactured in Miami and was developed with the protection of the South Florida environment in mind.

Figure 1. A SediMeter™ SM3A in the test tank that was used for the accuracy tests. On Monday February 1st, 2016, a public demonstration of the accuracy will be made using this tank.

Recently we found out that the US government decided not even to consider using the SediMeter™ for monitoring siltation on the coral reefs while dredging the harbor of Miami, because a US government study had concluded that it was unsuitable for the purpose, allegedly lacking sufficient accuracy in distance measurements. Had the SediMeter™ been used to monitor siltation during the dredging, some or most of the costly damage to the coral reef might have been avoided.

It should be noted that the evaluation of the SediMeter™ was not conducted to assess its usefulness for the Miami dredging project, but for a research project (cf. page 8). It was nevertheless excluded from consideration in the Miami project precisely because of that earlier study by the same organization. Since the US government study was off by two orders of magnitude, we feel obliged to set the record straight.

The US government measured the error in distance measurements (differential level) over various lengths from 1 mm and up; calculated the standard deviation; assumed a normal distribution (thus justifying multiplying the standard deviation by 3 to estimate the 99.7% confidence interval); and then, in the conclusions, stated that the calculated error value referred to a level (rather than a distance, as stated earlier in the report), which is why they doubled the error there in order to get the uncertainty of distance measurements. Based on these flawed arguments they concluded that the SediMeter™ is only suitable for cm-scale measurements, two orders of magnitude coarser than the detection threshold guaranteed by the manufacturer.
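To see why that final step matters: if each level reading has a standard deviation σ, the difference between two independent readings has a standard deviation of σ√2, and an uncertainty that already describes a distance must not be scaled again. A minimal Monte Carlo sketch with synthetic numbers (not the report's data) illustrates the point:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0                        # std of a single level reading (arbitrary units)
level_a = rng.normal(0.0, sigma, 100_000)
level_b = rng.normal(0.0, sigma, 100_000)

distance = level_b - level_a       # differential level, i.e. a distance

print(f"std of one level reading: {level_a.std():.3f}")    # ~1.000
print(f"std of their difference:  {distance.std():.3f}")   # ~1.414 = sigma * sqrt(2)
# An uncertainty that already describes a distance needs no further doubling;
# doubling it anyway, as the report's conclusions did, inflates it by a factor 2.
```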

Dr. Ulf Erlingsson of Lindorm, Inc. has recently studied the accuracy by adding a small amount of sand to a tank with a SediMeter 68 times, once every 2 minutes. In contrast, the US government moved the SediMeter rather than adding sand, which is known to introduce small variations in the measurements. On the other hand, adding sand suffers from several sources of error in the “true” value: the sand falls randomly and may not settle identically each time; consolidation may begin after deposition; and a fine fraction may stay in suspension, only to be deposited after the measurements were made. These effects were minimized by depositing the sand at an approximately even rate, and by depositing several batches before starting to measure, so that the bottom adjusted to the spatial pattern of deposition (making the new layers conformal). Values visibly affected by consolidation (the bottom level falling over time) were also excluded from the final analysis. The sand was weighed on an electronic scale with 0.1 g resolution.

Figure 2: Blue dots: sand added on the X axis (g) against level measured on the Y axis (cm). Black line: best linear fit. Red dots: difference between the blue dots and the black line (mm). The graph shows only the 32 values recorded before the sand started to compact under its own weight (corresponding to 3 cm).
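In outline, the analysis behind Figure 2 is a straight-line fit of level against added sand, with the residuals taken as the measurement error. A minimal sketch with placeholder numbers (the conversion 2.2 g ≈ 0.1 mm comes from the tank geometry given later in this post); the real input would be the 32 plotted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data standing in for the 32 pre-compaction values of Figure 2:
# cumulative sand added (g) and measured level (cm). In the 162 cm2 tank,
# 2.2 g corresponds to roughly 0.1 mm (0.01 cm) of deposit.
sand_g = 2.2 * np.arange(1, 33)
level_cm = 10.0 + (0.01 / 2.2) * sand_g + rng.normal(0, 0.002, 32)

slope, intercept = np.polyfit(sand_g, level_cm, 1)            # black line: best linear fit
residual_mm = (level_cm - (slope * sand_g + intercept)) * 10  # red dots, in mm

print(f"fitted slope:   {slope:.5f} cm/g")
print(f"max |residual|: {np.abs(residual_mm).max():.3f} mm")
```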

Most of the errors detected were found to be systematic with a period of 1 cm, not random (Figure 2). They arise from the non-linearity of the interpolation between the detectors, and since the detectors are spaced 1 cm apart, the error is cyclic. The US government report assumed that the errors were random, which they are not. Since the errors are not random, one cannot use a multiple of the standard deviation to calculate the confidence interval; instead, the confidence level must be analyzed based on an understanding of the nature of the error.
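The effect of treating a periodic error as if it were random can be illustrated with a simple simulation. A cyclic interpolation error with a 1 cm period nearly cancels over a 0.5 mm step but peaks at half the detector spacing, while its overall standard deviation, multiplied up into a confidence interval, suggests a far larger uncertainty. The 0.4 mm amplitude below is an assumption for illustration, not the instrument's actual calibration:

```python
import numpy as np

PERIOD_MM = 10.0   # detector spacing: 1 cm
AMPL_MM = 0.4      # assumed amplitude of the cyclic interpolation error

def cyclic_error(level_mm):
    """Systematic error that repeats once per detector spacing."""
    return AMPL_MM * np.sin(2 * np.pi * level_mm / PERIOD_MM)

levels = np.linspace(0, 100, 10_001)

# Treated as random noise, the spread looks large ...
print(f"std of the error: {cyclic_error(levels).std():.3f} mm")   # ~0.28

# ... but the actual error of a distance depends on the distance itself:
# it largely cancels over short steps and peaks at half the period.
for d in (0.5, 5.0):
    dist_err = cyclic_error(levels + d) - cyclic_error(levels)
    print(f"max error over {d} mm: {np.abs(dist_err).max():.3f} mm")  # 0.125 / 0.800
```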

Figure 3. The distance error in mm (red) and percent (blue), plotted as a function of distance. The 338 error values were calculated by comparing level measurements in Figure 2 over spans of 1 to 14 steps. The maximum absolute error varies cyclically, with peaks at 5 mm, 15 mm, etc. The maximum relative error approaches 50% at distances below 1 mm, but decreases to ≤5% at 10 mm and all longer distances. The shortest distance in the data set is 0.64 mm, the longest 17.3 mm.

Figure 3 shows that the maximum absolute error in distance measurements varies with the same period as the spacing between the optical backscatter detectors, again revealing that the error is not random. The maximum error was under 1 mm, and it occurred at distances between 4 and 6 mm. The relative error, by contrast, increases towards shorter distances. The largest calculated relative error was 44%: the SediMeter recorded 1.66 mm when the sediment accumulation calculated from the added sand was only 1.15 mm, and (1.66 − 1.15)/1.15 ≈ 44%.

It may be noted that the level determination in the SediMeter was originally made only in software, and the process allowed for user input and interactivity. While the user can still adjust the correction of non-linearity errors, new SediMeters output a level directly. They also allow burst sampling, with 20 burst samples in a single measurement. Averaging those samples reduces the noise further; it is perfectly possible to get many measurements in a row with the same level value to 1/100th of a millimeter.
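Averaging n independent samples reduces random noise by a factor of √n; with 20 burst samples that is roughly 4.5. A sketch assuming independent, normally distributed per-sample noise (the 0.05 mm figure is an assumption, not a specification):

```python
import numpy as np

rng = np.random.default_rng(0)
true_level_mm = 123.45
noise_mm = 0.05                       # assumed per-sample random noise

bursts = rng.normal(true_level_mm, noise_mm, size=(10_000, 20))
burst_means = bursts.mean(axis=1)     # one averaged level per 20-sample burst

print(f"per-sample noise: {bursts.std():.4f} mm")
print(f"after averaging:  {burst_means.std():.4f} mm")   # ~noise / sqrt(20)
```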

The only way to determine the detection threshold is to measure very small differences and calculate whether the change in the measurement data is statistically significant. The SM3 instrument resolution is 10 µm, and the detection threshold is advertised as 100 µm. Previous measurements have shown the detection threshold to be 50 g/m² at the 95% confidence level and 100 g/m² at the 99% confidence level, corresponding to about 37 µm and 75 µm, respectively. For the SM2 instruments tested by the US government, the only advertised sensitivity value was the resolution of 0.1 mm.
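The conversion between areal density and layer thickness follows from the deposit's bulk density; the figures above imply roughly 1.35 g/cm³ (50 g/m² ÷ 1.35 g/cm³ ≈ 37 µm). A sketch of that conversion:

```python
BULK_DENSITY = 1.35  # g/cm^3, implied by the 50 g/m^2 <-> ~37 um figures above

def areal_density_to_thickness_um(grams_per_m2: float) -> float:
    """Convert deposited mass per area to layer thickness in micrometres."""
    g_per_cm2 = grams_per_m2 / 1e4          # 1 m^2 = 1e4 cm^2
    thickness_cm = g_per_cm2 / BULK_DENSITY
    return thickness_cm * 1e4               # 1 cm = 1e4 um

print(areal_density_to_thickness_um(50))    # ~37 um, the 95% confidence threshold
print(areal_density_to_thickness_um(100))   # ~74 um, the 99% confidence threshold
```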

The new study confirms that the instrument is indeed capable of detecting 0.1 mm, and that the conclusion of the US government study is simply wrong. Surely many are asking by now: how could they reach a result that is off by two orders of magnitude? We can identify four contributing factors:

First, they measured by moving the SediMeter, which introduced additional errors and appears to have increased the standard deviation of the data by about a factor of 2.

Second, not realizing the distance-dependence of the error, they assumed the data were random, which is why they calculated the standard deviation without taking the distance into account. This exaggerates the short-distance error by about a factor of 25.

Third, again assuming that their measured error was random, they doubled the erroneous standard deviation to calculate the 95% confidence level. This also doubled the error, but since the same doubling is applied to the lower, truly random error values in the Lindorm detection threshold study, this factor enters the equation as 1.

Fourth, they forgot that the measured standard deviation referred to a distance; thinking it referred to a level, they erroneously doubled it in order to make it apply to a distance (differential level). This increased the error by exactly a factor of 2.

Multiplying the factors, 2 × 25 × 1 × 2 = 100: two orders of magnitude. Thus, instead of the US government value of 3.4 mm error at the 95% confidence level, the Lindorm estimate of the 95% uncertainty in detecting siltation is 37 µm (50 g/m²), and the data show that sedimentation of less than 1 mm can be measured well within the accuracy (±20%) required for siltation monitoring, QED.
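As a sanity check, dividing the government figure by the four factors (treated as the rough estimates they are) lands close to the Lindorm estimate:

```python
govt_error_mm = 3.4              # US government 95% confidence estimate
factors = 2 * 25 * 1 * 2         # the four factors identified above
corrected_um = govt_error_mm / factors * 1000
print(f"{corrected_um:.0f} um")  # 34 um, in line with the 37 um (50 g/m^2) estimate
```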


On February 1st, 2016, at 10 AM, a public demonstration of the SediMeter accuracy will be made. Anybody who doubts its ability to detect the sedimentation of 0.1 mm of sand (corresponding to 2.2 g in the 162 cm² tank) is welcome to come and see with his or her own eyes. UPDATE: The demonstration was carried out and a video was made of it. See later posts.
