A 631Kb PDF of this article as it appeared in the magazine—complete with images—is available by clicking **HERE**

Surveyors wrestle with a variety of situations which are vague and uncertain, yet decisions must be made and actions taken. The surveyor and those who rely on the surveyor understand that the skillful application of specialized knowledge will carry the project forward, but an outcome cannot be predicted with certainty. This arrangement presents some level of risk to the parties involved. The parties attempt to assess the likelihood an outcome will be favorable, adverse or somewhere in between. In some situations past precedents offer some guidance on how things might turn out, but predicting future outcomes largely rests on subjective assessments of likelihood. In other situations the science of uncertainty can be used to predict likely outcomes. This type of prediction also offers some insight into the level of confidence that can be assigned to the prediction. Both types of prediction are only as good as the underlying assumptions and techniques. For a variety of reasons it is prudent to also ponder the improbable. There is always some chance the unlikely will happen despite predictions to the contrary.

The following discussion outlines a couple of different areas of practice that rely on assessments of likelihood. The focus will be on the science of uncertainty that is summarized in most elementary texts on surveying. The goal is to show how measurement error theory can be applied to operating a business. In order to explain how this can be done, a few simple Monte Carlo simulations are introduced. This technique allows a surveyor to define a problem, assign a probability distribution to one or more variables and then solve for a range of likely outcomes. These simulations are intended to approximate reality. This offers the surveyor an opportunity to better understand the behavior of uncertainty before venturing into the real world.

**Uncertainty and the professional surveyor**

Surveyors are accustomed to working with uncertainty. The analysis of documents yields ambiguous, conflicting and incomplete information. Physical investigations produce findings that are inconsistent with written records and verbal descriptions. Survey measurements are subject to systematic and random errors. Surveyors use different techniques to contend with different types of uncertainty. These techniques may involve logic, math and principles that guide a certain practice. Perhaps one of the surveyor’s most challenging tasks involves weighing different bits of evidence and judging the likelihood these bits prove a fact. Probabilistic reasoning plays an important role in determining the location of a boundary, as well as other tasks.

Surveyors use a slightly different version of probabilistic reasoning in measuring. Measurement techniques are built on the principles of probability and statistics. Instrument specifications and geospatial data accuracy standards are expressed in statistical metrics. Each day surveyors put their fingers on an interface and unleash algorithms to do work on the surveyor’s behalf. Some of these algorithms use probability and statistics to derive a solution. The mathematics used in these solutions is largely hidden in the background, but the results are presented to the surveyor for consideration. The surveyor must analyze a few statistics and judge the likelihood that measurement objectives were achieved. Surveyors are better able to make these decisions when they understand the uncertainty embedded in the technology and methods they use.

Surveyors also encounter uncertainty in their business practices. Much of what they understand about weighing evidence and analyzing measurements can be applied to business activities. Few business decisions come with a guarantee of success. The firm’s financial statements paint a picture of order and certainty. Behind these statements is a multitude of activities whose outcomes vary from day to day. One day an activity is adding to the bottom line and the next it might not. Understanding the variations in day to day business activities helps owners better manage overall business performance.

**Monte Carlo simulations–another tool in the surveyor’s toolbox**

A lot of good work gets done every day without knowing much about probability and statistics. Calculating an average is common practice. Standard deviations, error of the mean and confidence intervals come up in conversations once in a while. Most statistical problems are now solved with the press of a button. Uncertainty is also about shapes and patterns. Simply looking at the precision-accuracy diagram found in an elementary surveying text offers insights that are hard to express in numbers and words. Microsoft EXCEL includes functions that make it easier to understand random variations. It is possible to simulate random outcomes and present these in graphic form. By repeatedly pressing F9 a new set of random outcomes is calculated and displayed. This is commonly called a Monte Carlo simulation. It can be used to model the random error patterns of a point measurement. This enables a static precision-accuracy diagram to become one that is dynamic. Simply recalculating the model and watching random changes in point patterns enhances learning. These simulations can also be used to better understand practical problems that surveyors contend with. Some of the following graphics were made with simulations.
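The same dynamic experiment can be sketched outside of EXCEL. The snippet below is a minimal Python version, assuming a hypothetical point with true coordinates (100.00, 100.00) and a normally distributed random error of 0.02 ft (1 sigma) in each coordinate; re-running the script plays the role of pressing F9, producing a new random point pattern each time.

```python
import random

# A minimal sketch of a Monte Carlo point-measurement simulation.
# Assumptions (illustrative only): a "true" point at (100.00, 100.00)
# and a random error of 0.02 ft (1 standard deviation) per coordinate.
random.seed()  # a fresh pattern on every run, like pressing F9

true_x, true_y = 100.00, 100.00   # assumed true coordinates (ft)
sigma = 0.02                      # assumed random error per coordinate (ft)

# Simulate 50 repeated measurements of the same point.
points = [(random.gauss(true_x, sigma), random.gauss(true_y, sigma))
          for _ in range(50)]

# Summarize the scatter: the mean should land near the true point.
mean_x = sum(x for x, _ in points) / len(points)
mean_y = sum(y for _, y in points) / len(points)
print(f"mean of simulated points: ({mean_x:.3f}, {mean_y:.3f})")
```

Plotting the point list reproduces the dynamic version of the familiar precision-accuracy diagram described above.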

**Win some, lose some, in the end it all balances out–maybe, maybe not**

Before jumping into a measurement example, it may be helpful to explore a very simple model of a coin toss. There is a 50% probability a head will appear with each toss. After 10 coin tosses we would expect to count 5 heads. This reasoning encourages a rule of thumb that can be summarized as–win some, lose some, in the end it all balances out. This line of reasoning tends to be true when there are a large number of coin tosses. When there are a small number of tosses, the heads and tails are likely to be out of balance. As it turns out, random events are not perfectly random. A sequence of random events may even seem nonrandom–patterns can appear in a sequence of random events. This can easily be observed in a series of coin tosses. Take a coin and try it! Figure 1 shows a sequence of 100 coin tosses produced with the pseudo random number generator in EXCEL. A running total of heads was calculated and then converted to a percentage. Note how the percentage of the first few tosses deviates significantly from 50% and, as the number of tosses increases, the running average converges on 50%. This experiment suggests the "win some, lose some, in the end it all balances out" rule of thumb should be applied with some caution. On small projects the laws of chance may be less likely to work in your favor.
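The coin-toss experiment behind Figure 1 can also be reproduced with a short script. This is a sketch in Python rather than EXCEL; the tosses come from a pseudo random number generator, so each run produces a different sequence, and the early percentages typically wander well away from 50%.

```python
import random

# Sketch of the Figure 1 experiment: 100 fair coin tosses with a
# running percentage of heads.
random.seed()

tosses = [random.random() < 0.5 for _ in range(100)]  # True = heads

heads = 0
running_pct = []
for n, is_heads in enumerate(tosses, start=1):
    heads += is_heads
    running_pct.append(100.0 * heads / n)

# Early values often deviate sharply from 50%; later values drift
# toward 50% as the tosses accumulate.
print(f"after  10 tosses: {running_pct[9]:.0f}% heads")
print(f"after 100 tosses: {running_pct[-1]:.0f}% heads")
```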

**The uncertainty in small amounts of data**

Figure 2 shows the error of the mean. It rapidly decreases during the first 5 observations. At 10 observations the rate of change slows to less than 1% for each additional observation. Often uncertainty can be significantly reduced by acquiring some additional data. More data tends to be better than less data, but sometimes not much better.
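The curve in Figure 2 follows directly from the error of the mean formula, sigma divided by the square root of n. A short sketch, assuming an observation standard deviation of 1 unit, shows the steep early drop and the flattening after about 10 observations:

```python
import math

# Error of the mean = sigma / sqrt(n), for an assumed observation
# standard deviation of 1 unit. Note the steep drop over the first
# few observations and the small gains after about n = 10.
sigma = 1.0
for n in (1, 2, 5, 10, 20, 50):
    print(f"n = {n:3d}  error of the mean = {sigma / math.sqrt(n):.3f}")
```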

**Estimating the accumulation of random errors in differential leveling**

Differential leveling is a survey technique without too many moving parts. The sources of systematic and random errors are listed in any introductory text on surveying. Theoretical error propagation is a straightforward calculation. Figure 3 shows the results of theoretical error propagation and a Monte Carlo simulation. The problem assumes the level run involves 20 turns (BS+FS) and a random error of 0.0028′ (1 standard deviation) per turn. The EXCEL random number generator RAND() provides a real number greater than or equal to 0 and less than 1. This number is assumed to be the cumulative probability of the normal distribution (NORM.INV) having a mean of 0.00 and a standard deviation of 0.0028. Given these functions and inputs, the application will calculate a different random error for each turn.

Figure 3 shows the following:

• The light blue shading represents the theoretical propagation of random errors through 20 turns. This is the error of a series at 1 standard deviation.

• The red bars are the errors per turn produced by the random number generator. Each is drawn at a different probability, which is why the signs and magnitudes differ. This is intended to approximate the behavior of actual random errors.

• The green bars are the running sum of the random errors shown by the red bars.

• The black line represents the theoretical error of a sum using the errors shown by the red bars.

This line should approximate the outside edge of the blue shading. It does not, because the magnitude of the random error per turn varies with probability. This learning experiment shows the potential randomness of random measurement errors. Now imagine what happens to the elevations after a simple adjustment. It offers insights into why check measurements sometimes find larger errors than the original measurements indicate.
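The level-run model described above can be approximated outside of EXCEL as well. The following Python sketch uses the same assumptions (20 turns, 0.0028 ft random error per turn at 1 standard deviation) and compares a simulated closure against the theoretical error of a series; each run yields a different closure, just as each press of F9 does in the spreadsheet.

```python
import math
import random

# Sketch of the differential leveling Monte Carlo model: draw a normally
# distributed random error (mean 0.00 ft, sigma 0.0028 ft) for each of
# 20 turns, sum them, and compare against the theoretical error of a
# series, sigma * sqrt(n).
random.seed()

sigma_per_turn = 0.0028   # ft, 1 standard deviation per turn (BS + FS)
turns = 20

errors = [random.gauss(0.0, sigma_per_turn) for _ in range(turns)]
closure = sum(errors)                            # the running sum's final value
theoretical = sigma_per_turn * math.sqrt(turns)  # edge of the blue shading

print(f"simulated closure after {turns} turns: {closure:+.4f} ft")
print(f"theoretical 1-sigma error of series:  {theoretical:.4f} ft")
```

Roughly one run in three will produce a closure that falls outside the 1-sigma theoretical envelope, which is the behavior Figure 3 illustrates.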

**Business activities, statistical wizardry and getting skewed**

Before surveyors get to run differential levels, someone has to engage in business development. Some call this winning work for the firm. A common business practice is to keep score by calculating the percentage of proposals accepted by clients or prospective clients. This score keeping practice resembles the process of calculating the probability of coin tosses. Preparing a winning proposal is important, but so is completing work on schedule and within budget. This presents another score keeping game–tracking project variances. Most accountants seem not to have heard about the "win some, lose some, in the end it all balances out" theory of business. A project budget variance is often viewed as an existential crisis rather than as the uncertainty inherent in conducting business. Managing a crisis involves a different response than managing uncertainty. However, if uncertainty is overlooked it has a tendency to precipitate a crisis. A project budget variance is the result of comparing two outcomes that have different uncertainties. Once ink is put to paper and a commitment is made to the client, the uncertainty inherent in the estimate is transformed into a certainty. This statistical wizardry has the potential to hinder learning from experience, which is vital to improving business performance.

Estimating labor costs resembles measurement. The time it takes to complete a task varies, although the range of variation is typically known, more or less. A common estimating practice is to use an average production rate for the task and multiply it by the number of times the task is completed to arrive at a total amount of time. In some instances an estimator will apply a contingency on the belief some uncertainty in the base estimate was not accounted for. This technique could be characterized as adding a subjective assessment of uncertainty to an objective estimate for reasons of safety–an attempt to guard against the possibility of financial loss. The merits of adding two different types of uncertainty together are worth pondering; however, the estimator’s urge to apply a contingency might be related to the way the base estimate was derived.

Figure 4 shows the duration of a section corner recovery expressed as a triangular probability distribution. This distribution was selected because it is simple and can be produced with a calculator simply by assuming the area of the triangle equals 100% probability or 1. The distribution is based on the assumption the minimum corner recovery time is 1 hour, the most likely is 2 hours and the maximum is 6 hours. The blue bars indicate the probability frequency of the various times. The red line indicates the cumulative probability over the total range.

Note the time with a 50% probability is well to the right of the most likely time of 2 hours. The average of 1, 2 and 6 equals 3 hours. This time has a 55% probability of happening. If the estimator wants to take advantage of the pluses and minuses cancelling out then 2.84 hours is the magic number. This example displays too many significant digits, but it helps show the math.
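For readers who want to check the arithmetic, the sketch below computes the same quantities from the triangular distribution's closed-form cumulative probability, using the assumed minimum of 1 hour, most likely time of 2 hours and maximum of 6 hours:

```python
import math

# Arithmetic behind Figure 4, for the assumed triangular distribution
# with minimum a = 1, most likely c = 2 and maximum b = 6 hours.
a, c, b = 1.0, 2.0, 6.0

def triangular_cdf(x):
    """Cumulative probability of the triangular(a, c, b) distribution."""
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))

mean = (a + c + b) / 3.0                         # average of the 3 durations
median = b - math.sqrt(0.5 * (b - a) * (b - c))  # the 50% time

print(f"mean = {mean:.2f} hours, P(t <= mean) = {triangular_cdf(mean):.2f}")
print(f"median (50% time) = {median:.2f} hours")
```

With these inputs the script reproduces the article's numbers: a mean of 3.00 hours at 55% cumulative probability, and a 50% (median) time of 2.84 hours.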

A Monte Carlo simulation resembling Figure 3 was created (not shown). It assumed 20 section corners would be recovered. Ten (10) simulations were run using the triangular probability distribution shown in Figure 4. The average corner recovery time was 3.1 hours with a 0.3 hour standard deviation. All this proves is the estimate may be within about 10%, provided the 3 durations used in making the triangular probability distribution are correct.
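A rough Python equivalent of that simulation can be built with the standard library's triangular sampler. This sketch repeats a 20-corner project 10 times, as in the article; because the draws are random, the averages will differ slightly from run to run and from the article's figures.

```python
import random
import statistics

# Sketch of the corner-recovery simulation: each of 20 corner recovery
# times is drawn from the triangular distribution with minimum 1 hour,
# most likely 2 hours and maximum 6 hours; the whole project is then
# simulated 10 times.
random.seed()

project_averages = []
for _ in range(10):                   # 10 simulated projects
    times = [random.triangular(1.0, 6.0, 2.0) for _ in range(20)]
    project_averages.append(sum(times) / len(times))

print(f"mean of project averages:    {statistics.mean(project_averages):.1f} hours")
print(f"std dev of project averages: {statistics.stdev(project_averages):.1f} hours")
```

The results hover around a 3-hour average, echoing the point above: the simulation calibrates expectations against the assumed durations rather than guaranteeing them.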

The above exercises should be viewed as calibrating expectations based on some assumptions rather than mathematically perfecting certainty. Change the assumptions and recalibrate your expectations. The Monte Carlo simulations offer an inexpensive way to learn more about uncertainty.

**A day late and a dollar short**

We have all encountered service providers who seem to be a day late and a dollar short. The reasons they provide are about as predictable as a fast food menu. In most instances the reasons have a common theme–something unexpected came up. In other words, the uncertainties inherent in their business are most likely not being accounted for and managed to their advantage. Tracking a few key business statistics provides feedback on performance and the opportunity to learn from doing. Developing an understanding of uncertainty can improve decision making. Sometimes you can position yourself so some luck can smile on you.

*Note:* The Excel file used by the author in preparing this article can be found at

**https://amerisurv.com/docs/LovellTAS-DIFF-LEVEL.xlsx**

*Lee Lovell is a registered land surveyor in Colorado and Nebraska and has accumulated 34 years of professional experience. He resides in Parker, Colorado where he was part of Western States Surveying for 20+ years.*
