Guest Editorial: Relative Positional Accuracy


With the unveiling of the new 2005 Minimum Standard Detail Requirements for ALTA/ACSM Land Title Surveys, there has been a tremendous amount of confusion over Relative Positional Accuracy (RPA). Given that RPA (under the earlier terms "Positional Uncertainty" and "Positional Tolerance") has been required, with very limited exceptions that remain exceptions in the 2005 standards, since the 1999 standards took effect more than six years ago, some discussion of the topic is clearly needed.

The 2005 Accuracy Standards for ALTA/ACSM Land Title Surveys define Relative Positional Accuracy as follows:
"Relative Positional Accuracy" is the value expressed in feet or meters that represents the uncertainty due to random errors in measurements in the location of any point on a survey relative to any other point on the same survey at the 95 percent confidence level.

The Standards also state: Relative Positional Accuracy may be tested by:
(1) comparing the relative location of points in a survey as measured by an independent survey of higher accuracy, or
(2) the results of a minimally constrained, correctly weighted least squares adjustment of the survey.

The application of RPA assumes that the surveyor "(1) compensate[d] or correct[ed] for systematic errors, including those associated with instrument calibration, (2) select[ed] the appropriate equipment and methods, and use[d] trained personnel and (3) use[d] appropriate error propagation and other measurement design theory to select the proper instruments, field procedures, geometric layouts and computational procedures to control random errors." In short, this means that the surveyor’s equipment was in adjustment, that proper procedures were used, and that the technicians were trained and competent. It is critically important that this be understood, because if all of that was assured, then the majority of the inaccuracy remaining in the survey should be a function of the random, accidental errors; and that is what RPA is about.
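To make that last point concrete, here is a minimal sketch, in Python with purely hypothetical numbers, of the kind of error propagation the standards refer to: the random errors of a single distance-and-direction observation are combined into a positional uncertainty for the observed point.

```python
import math

# Hypothetical single observation of a corner by distance and direction
# from a known station; the standard deviations are illustrative only.
distance_ft = 650.00        # measured distance (feet)
sigma_distance_ft = 0.02    # estimated std. dev. of the distance (random error)
sigma_direction_sec = 5.0   # estimated std. dev. of the direction (arc-seconds)

# The angular uncertainty acts across the line of sight; convert it to feet.
sigma_direction_rad = math.radians(sigma_direction_sec / 3600.0)
sigma_transverse_ft = distance_ft * sigma_direction_rad

# Combine the two independent random-error components (propagation of variance).
sigma_position_ft = math.hypot(sigma_distance_ft, sigma_transverse_ft)

print(f"longitudinal component: {sigma_distance_ft:.3f} ft")
print(f"transverse component:   {sigma_transverse_ft:.3f} ft")
print(f"combined (1-sigma):     {sigma_position_ft:.3f} ft")
```

Note that this is a one-sigma figure; expressing it at the 95 percent confidence level, as RPA requires, means scaling it by the appropriate statistical factor.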

With regard to testing for RPA, the standards do not say that the two methods listed above are the only ways to test RPA; they say that RPA may be tested in those ways. In fact, running an independent survey of higher accuracy would usually be an economically ridiculous thing to do. If we were going to check the accuracy of our transit and chain survey with an electronic total station, why would we not simply use the total station in the first place? But that misses the point. A higher accuracy survey would, in fact, be a way to check the results of a lower accuracy survey; it’s a simple factual statement.

In the case of a simple, closed loop traverse with no redundant measurements, if all of the above criteria were met, the linear error of closure of such a traverse is probably a reasonably reliable estimate of the RPA of the survey. Why? Because we have eliminated the sources of significant error except for those random, accidental errors. In fact, in such a scenario, the RPA of each corner, at least with respect to the adjoining corners connected by direct measurements, could probably be reliably estimated by some hand computations and a good understanding of random, accidental errors.
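As an illustration, the sketch below (Python, with made-up courses and hypothetical distances) sums the latitudes and departures around a closed loop and reports the linear error of closure mentioned above, along with the resulting precision ratio.

```python
import math

# Hypothetical closed-loop traverse: (azimuth in decimal degrees, distance in feet).
courses = [
    (  0.0000, 300.00),
    ( 90.0150, 400.02),
    (180.0080, 299.97),
    (270.0040, 400.00),
]

# Sum the latitudes (north components) and departures (east components).
sum_lat = sum(d * math.cos(math.radians(az)) for az, d in courses)
sum_dep = sum(d * math.sin(math.radians(az)) for az, d in courses)

# A perfect loop would sum to zero; the misclosure vector is what is left over.
closure = math.hypot(sum_lat, sum_dep)
perimeter = sum(d for _, d in courses)

print(f"linear error of closure: {closure:.3f} ft")
print(f"precision ratio: 1 : {perimeter / closure:,.0f}")
```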

A reliable estimate of the RPA for a more complicated survey (e.g., one with redundant measurements, or one that uses a combination of GPS and conventional measurements) quickly becomes too difficult to do efficiently by hand, and software that allows the entry of measurement parameters (like the standard deviation of a pointing, the standard deviation of a reading, etc.) is required.
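The underlying computation such software performs can be sketched briefly. The example below (Python with NumPy, using a hypothetical three-observation leveling network rather than a real project) holds one benchmark fixed as the minimal constraint, weights each observation by the inverse of its estimated variance, and recovers both the adjusted values and their standard deviations from the inverse of the normal matrix.

```python
import numpy as np

# Hypothetical minimally constrained leveling network: benchmark A is held
# fixed at 100.000 ft; the unknowns are the elevations of B and C.
# Each observation is a measured height difference with an a priori
# standard deviation (ft) that determines its weight.
H_A = 100.000
sigmas = [0.003, 0.003, 0.004]          # a priori std. devs. of the observations

# Design matrix A and observation vector L for unknowns x = [H_B, H_C]:
#   obs 1: H_B - H_A = 2.345   obs 2: H_C - H_B = 1.012   obs 3: H_C - H_A = 3.360
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])
L = np.array([H_A + 2.345, 1.012, H_A + 3.360])
W = np.diag([1.0 / s**2 for s in sigmas])   # weights = 1 / sigma^2

# Solve the weighted normal equations.
N = A.T @ W @ A
x = np.linalg.solve(N, A.T @ W @ L)

# Residuals, a posteriori variance factor, and standard deviations of the unknowns.
v = A @ x - L
dof = A.shape[0] - A.shape[1]
s0_sq = (v @ W @ v) / dof
sd = np.sqrt(s0_sq * np.diag(np.linalg.inv(N)))

print(f"adjusted H_B = {x[0]:.4f} ft  (std. dev. {sd[0]:.4f} ft)")
print(f"adjusted H_C = {x[1]:.4f} ft  (std. dev. {sd[1]:.4f} ft)")
```

A production adjustment of a boundary survey works with angles, distances and GPS vectors rather than level runs, and reports the uncertainties at the 95 percent confidence level, but the structure of the computation is the same.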

There have been a number of questions on the topic of RPA that indicate a broad misconception of the role of adjustments in the process of computing a survey.

It has been suggested that the use of a minimally constrained, correctly weighted least squares adjustment will often result in unreliable or "erroneous values" for the coordinates of the corners on a survey. This is true, but it is true of any adjustment if the systematic errors, blunders and effects of misadjusted instruments were not removed from the system. That is, however, not an inherent problem with the adjustment; it happens because our attempt to correctly weight the measurements was invalidated by the bad data.

The comment above about a "good understanding" of random, accidental errors means being able to reliably estimate the magnitude and sources of the random, accidental errors in the survey measurements. If that is done (and we’re not talking about "guesses," we’re talking about reliable estimates), using a minimally constrained, correctly weighted least squares adjustment should not result in unreliable or "erroneous values."

In the simple closed loop traverse described above (and assuming there are no anomalies such as severely unbalanced or short sights), the results of a compass rule adjustment and a minimally constrained, correctly weighted least squares adjustment will be similar. They almost have to be, since the linear error of closure of such a survey with today’s technologies is usually very small, and the resulting adjustments are equally small.
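For reference, the compass (Bowditch) rule simply distributes the misclosure in latitude and departure to each course in proportion to its length; here is a minimal sketch, reusing the hypothetical traverse from the earlier example.

```python
import math

# Hypothetical closed-loop traverse: (azimuth in decimal degrees, distance in feet).
courses = [(0.0000, 300.00), (90.0150, 400.02), (180.0080, 299.97), (270.0040, 400.00)]

lats = [d * math.cos(math.radians(az)) for az, d in courses]
deps = [d * math.sin(math.radians(az)) for az, d in courses]
mis_lat, mis_dep = sum(lats), sum(deps)
perimeter = sum(d for _, d in courses)

# Compass (Bowditch) rule: each course absorbs a share of the misclosure
# proportional to its own length.
adj_lats = [lat - mis_lat * d / perimeter for lat, (_, d) in zip(lats, courses)]
adj_deps = [dep - mis_dep * d / perimeter for dep, (_, d) in zip(deps, courses)]

print(f"residual misclosure after adjustment: "
      f"{math.hypot(sum(adj_lats), sum(adj_deps)):.6f} ft")
```

Because the corrections involved are on the order of hundredths of a foot, the coordinates it produces for such a small loop will rarely differ from those of a correctly weighted least squares adjustment by an amount that matters.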

But the primary issue with RPA is not how the measurements are ultimately adjusted; it’s the integrity of the measurements themselves. A minimally constrained, correctly weighted least squares adjustment is the best way to arrive at good estimates of the accuracy of those measurements, but only if you understand the contributing factors in each of your individual measurements in the first place. —Gary Kent, LS

Editor’s Note:
To me, the bright line of divergence in the surveying profession came with the advent of least squares adjustments. For the first time, we had a tool that, if improperly used, would give us a wrong answer. In the transit and chain days we touched every distance with our hands, and an instrument operator could stand there and intuitively tell if an angle was in the ballpark before writing it in the book. Even the EDM took a little getting used to, but as soon as we had confidence, the chain was a thing of the past.

Those who took the time with least squares learned that the skill lay in the analysis of the adjustment, and that the software provided several indicators of measurement quality. Additionally, for the first time, we had a tool that would allow us to locate blunders. Previously, a traverse that wouldn’t close generally meant re-running the entire traverse unless the instrument man could guess the offending angle, as in, "Yeah, I remember I turned that angle when the pretty girl walked by."

The bright line I refer to is the education divide; it is why, as a surveyor without a degree, I started supporting more education for surveyors. And it all came down to math. In the transit and chain days, and even into the total station days, only a rudimentary knowledge of math was needed. But least squares and GPS are tools that can give wrong answers if improperly used.

For me, the proof of the validity of least squares came when we began doing 3D total station traverses. The guys didn’t understand why I wanted them to set up their traverses so they could make cross ties, or why I wanted them to redundantly shoot the same points. But when they came back in and announced that the levels they ran between the traverse points fit, I knew we were on the right track.

Kent’s statement above, in which he notes that some believe least squares will often result in unreliable or erroneous values, describes a belief that is simply inconceivable to me. It really tells me that there are still surveyors who are on the other side of the divide and simply do not wish to step across.

In This Issue
I have yet to meet a surveyor who doesn’t enjoy working outdoors, or at least didn’t at some point in their career. When I worked in various undeveloped areas of the West, I often sensed that the only people who had ever occupied the ground where I was standing may have been Native Americans or others whose paths had been erased by time. I’m sure that many of you have experienced this.

Many parallels can be drawn between surveyors and explorers, since many of the places we visit have remained untouched for thousands of years. Our surveys become part of the legacy of the surveyors who went before us, and the land deeds we work with reach back through history.

The ongoing celebration of Lewis and Clark’s exploration has clearly drawn attention to the grueling conditions and adverse circumstances under which explorers labored and the fruits that their explorations yielded. While many expeditions included a surveyor or a survey crew, others did not, and the explorers relied on their knowledge of astronomy to determine positions and create maps.

The Rendezvous article on page 38 demonstrates how the lives and accomplishments of numerous influential men were intertwined, even those who never met. Andrew Ellicott advised Thomas Jefferson on what instruments he should purchase for Lewis and Clark. The work of Alexander Mackenzie and David Thompson was used by Lewis and Clark, and, in turn, Thompson later used knowledge gained by Lewis and Clark.

Commerce, specifically the fur trade, fueled the exploration by Mackenzie and Thompson. Lewis and Clark made their journey so our young nation could establish its rightful place on the globe. Tributes like the Lewis and Clark commemorations and the Surveyors Rendezvous programs stir a surveyor’s soul, the part that just wants to "see what lay beyond the next bend of the river."

Also in this issue is a new department, Surveyors Report. Readers often send us interesting pieces that are longer than a traditional "letter to the editor," but shorter than a traditional article. To give the surveyors who submit these articles a voice, we have created this new department. You can read the first installments starting on page 24. —Marc Cheves, LS
