The American Surveyor

Model Behavior: The How-To Guide to Successful Surface Modeling Part 2


Welcome back! In this second article, we’re going to take a look at the actual mechanics of constructing a good digital terrain model, how best to verify the source data that we plan to use to construct that model, and how to verify our resulting work once we’re finished. If you recall from our last discussion, we learned some of the basics involved in constructing a digital terrain model (I’ll refer to it as a DTM from here on out), such as points with elevations and the required breaklines that limit the "visibility" of points in relation to other points. In other words, we use breaklines as barriers to prevent a point from "seeing" points on the other side of the breakline and erroneously constructing a triangle edge between itself and those non-related points. As we learned, with our typical survey data the DTM is most often constructed as a series of contiguous triangles (a process known as triangulation), although there are other methods of constructing DTMs such as gridding, kriging, etc. So in this article, I’d like us to examine some possible sources of point and breakline data, how to evaluate their potential for use in our model, and how to set up the data for a successful DTM process.
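To make the triangulation idea concrete, here is a minimal sketch in Python, assuming NumPy and SciPy are available. Note that scipy.spatial.Delaunay forms triangles purely from point positions and has no breakline support, which is exactly why breaklines matter in real modeling packages. The coordinates are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical survey shots: x, y, z (z is carried along, but the
# triangulation itself works on x, y only)
points = np.array([
    [0.0,  0.0, 100.2],
    [50.0, 0.0, 101.5],
    [50.0, 40.0, 99.8],
    [0.0,  40.0, 100.9],
    [25.0, 20.0, 103.1],  # a high point near the middle of the site
])

tri = Delaunay(points[:, :2])  # triangulate on x, y only
print(tri.simplices)           # each row holds the indices of one triangle's vertices
```

Constrained triangulation, where specified edges (our breaklines) are forced into the network, is layered on top of this basic process by production software.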

Data Sources
Before we can model any surface (whether ground elevation, population density, temperature variations, etc.), we must have some data that defines the surface shape. In our case of modeling surface ground elevations, those data typically take the form of point locations (coordinates) and edges (breaklines), both having elevation values associated with them. Digital terrain modeling data is found in a variety of sources, including:
• Digital files of previous processing (sometimes, simply the resulting triangles of prior triangulation)
• Files of elevation data, often termed a DEM or Digital Elevation Model, which are usually text files containing a regularly spaced set of elevation values
• Remote sensing sources such as aerial photogrammetry, light detection and ranging (LiDAR), laser scanning, etc.
• Previous digital and paper mapping (usually contours and spot elevations)
• Occupation and observation of point locations on the ground

Figures 1 and 2 show some examples of these data sources.
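As a small illustration of the DEM case above, here is a hedged sketch of reading a regularly spaced elevation grid stored as plain text. DEM formats vary widely; this assumes one row of elevation values per line plus a known cell size and origin, and the file name, spacing, and coordinates are all hypothetical.

```python
import numpy as np

cell_size = 10.0               # assumed ground spacing between grid posts
x0, y0 = 500000.0, 4200000.0   # assumed coordinates of the lower-left post

z = np.loadtxt("dem_grid.txt")  # hypothetical file: rows x cols of elevations
rows, cols = z.shape

# Expand the grid into x, y, z points usable as DTM input
xs = x0 + np.arange(cols) * cell_size
ys = y0 + np.arange(rows) * cell_size
xx, yy = np.meshgrid(xs, ys)
dem_points = np.column_stack([xx.ravel(), yy.ravel(), z.ravel()])
print(dem_points.shape)  # (rows * cols, 3)
```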

When we use previously developed data from sources such as DTMs, DEMs and contour mapping, it is vital that we evaluate the data for correctness and suitability to our purposes. In fact, it’s vital that we evaluate all data that we use for our modeling. We’ll learn some of those techniques next.

Evaluate, Evaluate, Evaluate!
I can’t stress enough how vital it is to evaluate our data prior to using it to construct DTMs, for the simple reason that the computer, being the mindless obedient servant that it is, will dutifully calculate some kind of surface from our data, usually without much complaint. This does not mean that our model is correct, though. Although some software applications have some built-in error checking for invalid triangles, excessively large triangles, vertical edges, crossing breaklines, etc., it is usually possible to override most of those errors and "accept" what the computer gives us. Some software simply won’t catch any errors at all, and dutifully reports "file’s done!"

So, how can we best perform some quick checks of our data prior to processing, and how can we evaluate the resulting DTM once processing is complete? One answer is surprisingly simple, and you carry the best tools available for the job with you at all times: your own eyes.

The best way to evaluate potential data sources is to look at the data in a 3D perspective view and, if your software allows, rotate it around to different viewpoints. Look for spikes, holes, entities at visibly wrong elevations, and obvious discontinuities. Make sure things look like they did when you were out there on the ground. Most software applications used in surveying allow some kind of perspective viewing, either as part of the underlying CADD system (AutoCAD’s 3D Orbit command, for example) or as a command tool within the application, such as Carlson’s excellent 3D Data viewer. Unlike 3D Orbit, Carlson’s viewer only images point, line, and 3D face objects and will not show block inserts, text, and the like. For this reason it is much faster, since it doesn’t have to render more complex objects. If your software has some tool for viewing in 3D, learn to use it!
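If your software lacks a viewer, a quick stand-in is easy to build. Here is a minimal sketch using matplotlib’s interactive 3D view, assuming the points array from the earlier triangulation sketch.

```python
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=4)
ax.set_xlabel("Easting")
ax.set_ylabel("Northing")
ax.set_zlabel("Elevation")
plt.show()  # drag to rotate; hunt for spikes, holes, and flat linework
```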

Figure 3 shows what appears to be good data, all in the expected relationship to other objects. Figure 4, on the other hand, shows a very common problem with digital data from outside sources: the road edges and streams have been compiled at some constant elevation (often at zero) which does not correspond to their actual place in three dimensions; obviously, they are unusable as breaklines as originally intended. This discrepancy won’t show up in plan view, of course, so if we didn’t look at this data beforehand, we would have blithely gone down the path to DTM Purgatory! To resolve this, we would need to "elevate" these lines back to their original elevation positions, and this may often require additional field collection if there is not enough data in the file to "snap" them to.
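A simple automated companion to the eyeball check is to flag linework compiled at a constant (often zero) elevation. This is a hypothetical sketch; the breaklines structure, a list of polylines given as (x, y, z) vertex tuples, is an assumption about how your data is held.

```python
def flag_flat_lines(breaklines, suspect_z=0.0, tol=0.01):
    """Return indices of polylines whose vertices all sit at suspect_z."""
    bad = []
    for i, line in enumerate(breaklines):
        if all(abs(z - suspect_z) < tol for (_x, _y, z) in line):
            bad.append(i)
    return bad

# Example: the second line was digitized flat at elevation zero
breaklines = [
    [(0.0, 0.0, 100.2), (10.0, 0.0, 100.5)],
    [(0.0, 5.0, 0.0), (10.0, 5.0, 0.0)],
]
print(flag_flat_lines(breaklines))  # [1]
```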

Another good means of evaluating potential elevation data is to look at a listing of the elevation values of the selected objects. Some software can display this as a property of the objects, sometimes as a range or spectrum (low and high values) that gives an indication of the spread of the data. If the elevations aren’t what you are expecting, find out what needs to be resolved prior to building the model.
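The same listing check is a one-liner if the data is already in an array. A minimal sketch, again assuming the (N, 3) points array from the earlier sketches:

```python
z = points[:, 2]  # elevation column
print(f"count: {z.size}  min: {z.min():.2f}  max: {z.max():.2f}  mean: {z.mean():.2f}")
# A minimum of 0.00 on a site that sits near elevation 100 is a strong hint
# that some entities were compiled flat, as in Figure 4.
```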

Remember, the key rule here is simple: all the data we wish to use as our source of point elevations and breaklines must be at their proper elevations! And it’s often easiest to simply look at it from a different angle.

So Let’s Get Mod’lin’!
Other errors can still creep in even after we’ve evaluated our data and pronounced it good, and this next example is actually not an error in the data or the processing, but rather one of omission on our part. If you remember from the first article, we talked about breaklines being used to limit a point’s ability to "see" beyond it to other points. A classic case of this occurs on the outside edges of our data sets, in areas where the data is "concave," or inset from the general outline shape of the total data. Consider Figure 5, which shows the eastern limits of our site with an area obviously not compiled in our data set.

Triangles dutifully formed in this outside area, and in fact, algorithmically, this is not an error; the software simply did what we told it to do.

We need to resolve this by creating what we commonly call a boundary or an inclusion polyline that serves as an outside limit on the triangulation. Behind the scenes, this is nothing more than yet another breakline, keeping the points on one edge from "seeing" the far points. This will prevent the long "sliver" triangles from forming.

Now, some software allows you to simply draw in a polyline to serve as this boundary, and for a "way cool" tool to create such inclusion polylines, if you happen to use Carlson software, check out "shrinkwrap entities." This command outlines a selected set of objects in a flash. Others, such as the Bentley-based GeoPAK and InRoads, have a post-processing switch to discard sliver triangles or triangles over a certain length. This can be dangerous, however, as sometimes long triangles are actually valid when they occur in the interior of our site.
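For illustration, here is a hedged sketch of that post-processing idea: discard any triangle with an edge longer than a threshold. It assumes the tri and points objects from the earlier Delaunay sketch, and the threshold is hypothetical; as noted above, use this with care, since long interior triangles can be perfectly valid.

```python
import numpy as np

def drop_long_triangles(tri, points, max_edge):
    """Keep only triangles whose longest edge is within max_edge."""
    keep = []
    for simplex in tri.simplices:
        p = points[simplex, :2]  # the triangle's three 2D vertices
        edges = [np.linalg.norm(p[a] - p[b]) for a, b in ((0, 1), (1, 2), (0, 2))]
        if max(edges) <= max_edge:
            keep.append(simplex)
    return np.array(keep)

slim = drop_long_triangles(tri, points, max_edge=60.0)
print(f"kept {len(slim)} of {len(tri.simplices)} triangles")
```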

The same holds true for areas in the interior of a site where we do not have adequate data to model the surface. Some examples include ponds and lakes, building footprints, and obscured areas under dense foliage (most often encountered in photogrammetric data sets). Here we use another type of limiting breakline known as an exclusion polyline. These areas are omitted from the processing, and keep in mind that when computing earthwork volumes, they will contribute no quantities! Be aware of what’s going on in our modeling efforts, and don’t just trust the computer… as far as I know, we still have good minds, and we’re expected to continue to use them!
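As a rough sketch of the exclusion idea, one simple approach is to drop triangles whose centroids fall inside a closed polygon such as a pond outline. The polygon coordinates are made up, and tri and points follow the earlier sketches; production software handles this internally, so this is only a conceptual stand-in.

```python
from matplotlib.path import Path

# Hypothetical pond outline as a closed 2D polygon
pond = Path([(20.0, 15.0), (30.0, 15.0), (30.0, 25.0), (20.0, 25.0)])

centroids = points[tri.simplices, :2].mean(axis=1)  # one centroid per triangle
kept = tri.simplices[~pond.contains_points(centroids)]
# Remember: any earthwork volume computed from 'kept' carries no quantity
# inside the pond.
```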

Evaluate Yet Again!
After we process our data into a DTM, we’re still not finished. How many of us simply accept what the computer cranks out, assume it’s correct, and head off to deliver our project? Sometimes the processed surface may even generate decent-looking contours as a final map product, but there may still be issues that we need to check on. A common problem I’ve encountered in past engineering design work, when the site survey was subcontracted to someone who didn’t use proper QA/QC techniques prior to delivery, involves "hard edges." Very often I would receive mapping that had nice-looking contours, but things like curb lines, steps, ditches, etc., weren’t showing up in the model. In fact, in one such DTM, a major drainage channel that I needed for my design was completely missing! It was certainly drawn and indicated in the mapping, and I suspect the surveyors simply hand-altered the resulting contours to indicate the presence of the channel after the fact, but I was relying on the DTM as the base for my design. While it could be argued that the delivery was sufficient since the mapping showed the feature and had contours that indicated it, as I mentioned in our first article, those days are fast becoming a thing of the past. We need the model, and we need it to be right.
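One way to automate the hard-edge check is to verify that every breakline segment survived into the triangulation as a triangle edge. This is a hypothetical sketch; it assumes you can express each breakline segment as a pair of indices into the point list, bookkeeping that real packages track internally.

```python
def missing_breakline_edges(tri, breakline_index_pairs):
    """Return breakline segments that are absent from the triangulation."""
    # Collect every undirected edge present in the triangulation
    edges = set()
    for a, b, c in tri.simplices:
        for e in ((a, b), (b, c), (a, c)):
            edges.add(tuple(sorted(e)))
    return [pair for pair in breakline_index_pairs
            if tuple(sorted(pair)) not in edges]

# Example: confirm the curb line between points 2 and 4 is honored
print(missing_breakline_edges(tri, [(2, 4)]))  # [] means all segments present
```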

So… I’ll say it again… "Let’s use our eyes, use our minds, and use our software tools to make sure that what we deliver downstream is the best possible product for the intended purpose." Don’t simply rely on looking at the resulting contours spit out at the end of generating a DTM. Certainly DO look at them, but this is just one check. While I realize that most of us have been trained since time began on contours, contours, contours, and we have developed a good eye on how to evaluate them, they don’t always tell the complete story. And in fact, the day is coming when we won’t be producing contours at all as part of our deliverable. I predict that some day we will deliver a digital model of a site and leave it to the end user to decide how to display it. It may be going to a design engineer who may never need to have a printed map. She may simply start using it as the base grade underpinning her design grades. So it is imperative that we deliver the "right stuff!"

Figure 6 shows my favorite evaluation tool, the 3D perspective viewer. I can make sure that the work I bring in from the field or from any of our alternate sources has been modeled as intended and that it supports the work being planned downstream. I find it very helpful to look at the surface in its solid or rendered form, and this can be done using specialized viewers or by drawing the triangulation into your CADD file as 3D faces and rendering them there.
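For a do-it-yourself rendered view, here is a minimal sketch using matplotlib’s trisurf, again assuming the points and tri objects from the earlier sketches:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_trisurf(points[:, 0], points[:, 1], points[:, 2],
                triangles=tri.simplices, cmap="terrain")
ax.set_zlabel("Elevation")
plt.show()  # rotate the solid surface; missing hard edges stand out quickly
```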

As you can see in this view of a progress as-built survey of a construction project, I need to either resolve some issues in the processing or supplement the data set with additional observations. Or, I can satisfy myself that this is indeed the way things looked at the site during my visit. Whatever the case, I can be assured that I will deliver a DTM that will serve the project downstream of my efforts.

Coming Up Next
Our next installment will look more in-depth at common errors in source data, in data collection, and in the DTM process itself, along with ways to resolve them. We’ll discuss the philosophy behind whether we edit the resulting bad model or go back and fix the data that caused the problem in the first place. (I think you can probably guess my perspective on that issue!) And we’ll set aside some time for a thorough discussion of contours, both as input to our model (not recommended!) and as the resulting output of our process. Following articles will look at some advanced methods of merging pieces of separate models into a composite DTM without requiring an entirely new field collection effort. And the finale will address ways and means to package everything up for delivery and sharing with other users and other software platforms.

Thanks for the visit, and remember to be on your best "model behavior!"

Ken Crawford is a Registered Professional Engineer in Pennsylvania and a U.S. Army Corps of Engineers Quality Construction Manager (QCM). Ken’s experience includes 25 years in civil engineering, surveying, and construction, with significant background in GIS applications. He is a principal in the firm Harken-Reidar, Inc.

