RTN101: Network Geometry - Design Meets Reality (Part 12)


"We will never be an advanced civilization as long as rain showers can delay the launching of a space rocket."–George Carlin

The best design principles and best intentions may initially drive RTN design, but certain realities rule in the end. Ultimately the test of functionality reigns supreme. Most RTNs have to some degree been "self-designing," some in an almost organic manner.

That is not to say that good practices should be ignored; indeed the same principles used to design good static GPS campaigns are those you employ first. From a rough framework of "wish" locations, the vetting, triage, compromises, and unexpected opportunities morph your original vision into the final draft for a working RTN.

Spacing: The First but Most Puzzling Question
The first challenge one must confront in designing this RTN latticework is workable spacing between stations. There is no clearly defined or authoritative answer to this question; there are only the recommendations of the respective manufacturers and the experience of successful network operators.

The manufacturers exercise caution in recommending station spacing; a conservative answer of 30km-50km is typical. While cynics might surmise that conservative spacing is recommended to drive the purchase of more stations, it is more a case of the manufacturers simply not wanting to oversell the capabilities of their systems.

Which is it to be? 30km, 40km, 50km, or beyond? That depends a lot on who you talk to and what they are comparing. There are a number of technical approaches to network-derived corrections (see the May 2007 installment of RTN101 in The American Surveyor), each of which may have its own limitations and considerations for station spacing.

For the most part, each respective approach provides the best service only within the "triangles" formed between the stations. But the RTN may be usable for certain distances beyond the outermost stations (this may vary with the approach, the spacing of the constituent stations, the number of nearby stations, and many other factors like station quality).

There is always the option to single-base to edge stations as a fallback when outside the "triangles." Many RTNs have reported fairly consistent network-corrected results as far as 30km beyond the edge. Success adjacent to an RTN of fairly uniform spacing has often yielded the best "out-of-network" experiences. If the adjacent part of the RTN comes to a sharp point (i.e., a station hanging way out on its own), then there are fewer stations within a reasonable range to augment the solution. The conventional wisdom is to rely on single-base ranges beyond the edge and to treat extended network capabilities as a bonus when achievable.
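For readers who like the logic spelled out, here is a minimal sketch of that edge-versus-interior decision, assuming planar station coordinates in kilometers and a hypothetical 30km out-of-network allowance; the function name and thresholds are ours for illustration, not any vendor's specification.

```python
import numpy as np
from scipy.spatial import Delaunay

def correction_mode(stations_km, rover_km, edge_allowance_km=30.0):
    """Classify a rover position against an RTN footprint (illustrative only).

    stations_km: (n, 2) array of station coordinates on a planar km grid
                 (needs at least three non-collinear stations)
    rover_km:    (x, y) rover coordinates on the same grid
    """
    stations = np.asarray(stations_km, dtype=float)
    rover = np.asarray(rover_km, dtype=float)

    # Inside one of the "triangles" formed between stations: full network service.
    tri = Delaunay(stations)
    if tri.find_simplex(rover[None, :])[0] >= 0:
        return "network"

    # Outside the triangles: how far past the edge are we?
    nearest_km = float(np.min(np.linalg.norm(stations - rover, axis=1)))
    if nearest_km <= edge_allowance_km:
        # Network corrections are often still usable here, with single-base
        # to the nearest edge station as the conventional fallback.
        return "edge (network corrections possible; single-base fallback)"
    return "beyond recommended range"
```

A denser, more uniform edge pushes more rover positions into the first two categories, which is consistent with the uniform-spacing experience noted above.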

A half-dozen years ago, when the debate raged over the emerging network solutions, the proponents of long-baseline single-base RTK lobbed volley after volley of claims and counterclaims over just how far single base could stretch. Everyone has an "iono-free" solution, and everyone can cite anecdotal instances of 100km+ successes; but these must be tempered with the overwhelming number of cases where the conventional wisdom of the "10km RTK tether" really does apply. A number of early networks did work with as little as 20km spacing, so that the user was never much more than 10km from a station at any given time. This single-base contingency is often still included in the design of recent networks, even with wider spacing. Now that RTNs are a proven amenity, the debate has switched from RTN vs. RTK (one can certainly complement the other) to just how far apart the RTN stations can be.

There have been only a few specific studies on baseline length; some have been commissioned or executed by manufacturers to highlight the benefits of a particular approach to network corrections, or their own flavor of a particular approach. Conditions for studies can only be controlled to a certain degree, and in particular with respect to multiple constellations and third frequencies (which may not yet be fully implemented), studies may take place entirely in a "laboratory" with generated or simulated elements.

The Real Test
The best source is actual field results from a successfully running network. Much of the theory (and rhetoric) goes out the window when you can actually test it (or find reliable testimony) for yourself. I highly recommend that you contact network operators and users directly (and not just the cherry-picked ones); you need to know about RTNs, warts and all.

There are more than 50 RTNs in some stage of development in North America and over twice that number overseas. Start calling around. Better yet, go visit some RTNs. Though it has seemed like a tourist destination at times, our own network in Washington State (www.wsrn.org) does not discourage visits and will provide test accounts to all who want to try. Incidentally, many of our baselines are in the 70km range, as are those of many other U.S. and overseas networks. Indeed there are networks, like the one recently implemented in Turkey, which are successfully solving at 100km and even 120km by design; the only caveat is that very high ionospheric activity would likely cut that back to 70km.

One may take the research a bit too far, though; there was the case of a group that hired a consultant to study many elements of a proposed RTN (a consultant with no specific background in the subject) and ended up spending more on the study than it would have cost to implement a good pilot network. More can be answered by running a pilot than by any amount of theorizing.

Balancing Cost and Risk
Too few stations can result in poor and inconsistent service, but too many stations can run up projected costs and perhaps scuttle an initiative (see Figure 1). Finding that fine balance should start with an exercise in purely theoretical spacing (which often turns out to be more wishful thinking than actual design, but we’ll get into that later).

Many RTNs have taken the philosophy of "design wide, test, and fill in where it doesn’t work." But that’s not always very helpful in coming up with overall projected costs for an RTN; the total costs are needed in a comprehensive cost-benefit analysis. One could balance the spacing risk with usage projections; put stations closer together where the RTN will see the most use. We have seen this with many RTNs; heavily populated areas have tighter spacing, though it’s often more for redundancy than anything else.
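To see how quickly station count (and therefore cost) moves with spacing, consider this back-of-envelope sketch; the per-station cost, the 250,000 sq. km area (about the size of the exercise region in the next section), and the 25% boundary/contingency allowance are hypothetical placeholders, not quotes.

```python
# Station count grows with the inverse square of spacing, so halving the
# spacing roughly quadruples the bill. All figures here are hypothetical.
AREA_SQKM = 250_000        # hypothetical subject area
PER_STATION_USD = 15_000   # hypothetical installed cost per station

for spacing_km in (30, 40, 50, 70, 100):
    interior = AREA_SQKM / spacing_km**2      # one station per grid cell
    stations = round(interior * 1.25)         # assumed 25% edge/contingency allowance
    print(f"{spacing_km:>3} km spacing -> ~{stations:>3} stations, "
          f"~${stations * PER_STATION_USD:,}")
```

Running the numbers this way makes the risk explicit: the jump from 70km to 50km spacing roughly doubles the station count for the same coverage.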

Designing Outside the Vacuum
For an exercise, we can take an actual geographic area and apply some local conditions to the mix. In Figure 2 we see the state of Idaho, an extended subject area of over 250,000 sq. km. Note this is just an exercise and that there are actually more stations active in the state and surrounding regions than we show as "existing." It was a coincidence that someone from Idaho just happened to ask for help in developing a rough estimate when I started writing this.

We can take an outline of the region and lay a 100km grid over it. Then for argument’s sake we’ll work with the 70km spacing (that is currently working just fine in the adjoining eastern Washington region). This theoretical network would have 77 stations. Next we would overlay the "reality" features, like highways, population centers, communication network coverage, schools, and other potential host sites. For purposes of this exercise we have overlaid the map with state highways, as they are the best indicators of the probability that other features key to site selection might exist in a given area: power, secure facilities, communications, potential users/sponsors. Further, you will find that these elements are even more likely to exist at highway intersections.
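The grid overlay itself is easy to mechanize. The sketch below lays a regular grid over a region outline and keeps the nodes that fall inside; the outline here is a toy stand-in, not real Idaho geometry, and a production version would go on to snap nodes to the highway-correlated host sites described above.

```python
import numpy as np
from matplotlib.path import Path

def grid_stations(outline_xy_km, spacing_km=70.0):
    """Lay a regular grid over a region outline; keep nodes inside the outline.

    outline_xy_km: list of (x, y) boundary vertices on a planar km grid
    Returns an (n, 2) array of theoretical station positions.
    """
    poly = Path(outline_xy_km)
    xs, ys = zip(*outline_xy_km)
    gx = np.arange(min(xs), max(xs) + spacing_km, spacing_km)
    gy = np.arange(min(ys), max(ys) + spacing_km, spacing_km)
    nodes = np.array([(x, y) for x in gx for y in gy])
    return nodes[poly.contains_points(nodes)]

# Toy stand-in outline (roughly 500 km x 600 km), not real Idaho geometry.
outline = [(0, 0), (500, 0), (500, 600), (250, 650), (0, 600)]
print(len(grid_stations(outline)), "theoretical stations at 70 km spacing")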

The result (see Figure 3) is a mix of shorter baselines (in the range of 40km-50km) around more populated areas, and longer (70km-90km) baselines in more remote areas. It may well be possible to tighten up the spacing in smaller, more populated parts of the country (where key elements like communications, secure sites, and large potential user bases apply). Even with the risks assumed in undertaking a long-baseline design, there may only be a need to augment with a few more stations in key areas after the fact (i.e., add a few contingency stations to the estimate).

The numbers for this design exercise indicate only 50 "intra" stations (those needing to be established by the hosting network) and 21 "inter" (or collaborative) stations operated by others. It is likely that other stations existing within the region will make themselves known as a network gets under way, or that others will jump on the bandwagon once the momentum starts rolling. While one cannot directly count on this in developing an initial design estimate, there is a high likelihood that it will happen, and these additional stations may well fill the gap for the final augmentation.

Leaving Holes on Purpose
If the driver for an RTN is purely commercial, or the network is otherwise designed to suit a specific function, as many are (e.g., state highway work, purely subscription-based, or for a specific industry or resource management), then the latticework may only cover narrow corridors or regions, leaving deliberate holes. But if a network was also designed to serve a wider purpose such as geodetic control (as "active control," as the new buzz term goes), or other statewide or regional interests (tectonic studies, floodplains, height modernization, or cadastral registration initiatives), then there is a more compelling need to fill in the holes.

Whether a network is intended to serve a narrow purpose or to become a wider-area amenity, it can benefit from inter-network data sharing, or cooperative efforts with other completely disparate industry, academic, and scientific concerns. The broader the base, the better the chances for survivability of the RTN. Enough soapboxing…

Multi-Constellation Considerations
Most new RTNs are planning to eventually be GNSS (or multi-constellation) capable. Like anything cool, there is an added expense for such capabilities. Right now this generally means the addition of GLONASS-capable (GLN) stations (as the other constellation and frequency capabilities have scarcely been productized to date, though most manufacturers tout "placeholders" for such capabilities in their new gear). While the extra satellites do not necessarily equate to better results, they give you the ability to work in more situations, as more satellites are available in tight-sky situations (yes, trees are evil when it comes to RTNs).

Upgrading an existing GPS-only RTN to GNSS, or starting a new one with GNSS, involves a not-insignificant cost differential. Many RTNs are upgrading by attrition, or as opportunities arise. Or a new RTN may use a phased approach and target specific sub-regions for initial GNSS. There must be a method to the phasing to ensure the capability is immediately, or very shortly, made available (don’t waste an expensive station in an isolated area where its added capabilities will not be used; if you have to, then shuffle the stations around).

GNSS is immediately usable in single-base mode in the general vicinity of the individual GNSS-capable station, even if it is isolated far from any other GNSS station. It is not until you have, say, three or more GNSS stations clustered together that those additional satellites get mixed into network-style corrections. Note that at this time some flavors of the various network-style approaches do not use GLN (not forever, we hope); this is mostly the case where there are isolated GLN stations, and then only their GPS satellites are used in the respective network solutions.
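A quick way to see which GLN stations would actually contribute to network-style corrections is to check, for each one, how many GLN neighbors sit within a workable range. This is only a rough sketch of the clustering rule of thumb above; the 70km neighbor radius and the cluster size of three are our assumptions, not vendor specifications.

```python
import numpy as np

def gln_contributes(stations_km, is_gln, neighbor_km=70.0, min_cluster=3):
    """Flag GLN stations likely to feed GLONASS into network-style corrections.

    A GLN station counts only if at least `min_cluster` GLN stations (itself
    included) sit within `neighbor_km` of it. An isolated GLN station still
    serves single-base GNSS nearby, but contributes only GPS to the network
    solution. Thresholds here are illustrative assumptions.
    """
    pts = np.asarray(stations_km, dtype=float)
    gln = np.flatnonzero(np.asarray(is_gln))
    contributes = np.zeros(len(pts), dtype=bool)
    for i in gln:
        dists = np.linalg.norm(pts[gln] - pts[i], axis=1)
        contributes[i] = np.count_nonzero(dists <= neighbor_km) >= min_cluster
    return contributes
```

Run against a candidate design, a flag map like this shows where one shuffled station would tip an isolated GLN pair into a working cluster.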

As GLN is steadily improving, and with Galileo, Compass, et al., in the works, RTN developers are coming up with creative ways to help in the initial and phased implementation of GNSS for network-style corrections. One creative solution in the Trimble GPSNet suite is the "Sparse GLONASS" option. This allows for the use of otherwise isolated GLN stations among existing GPS stations; the GLN stations may be as much as 100km apart. This really helps when the network is facing a phased upgrade, or when there are many legacy receivers in the mix. Some creative shuffling of stations makes for a viable interim GLN implementation.

Future: How Far Apart Will the Stations Be?
Even the experts seem to disagree. While many more satellites and three or more frequencies will most definitely improve capabilities as a whole, there are many underlying considerations that would need to be taken into account beyond just sheer numbers. How these improved and additional elements equate to baseline lengths does not appear to be a linear progression, or at least that is what some studies have concluded.

Studies to date have had to employ a lot of simulated signals and other "laboratory" exercises; even in those, there have been indications that thresholds of diminishing returns will likely present themselves early on as these new elements go live. Early studies see a much improved field user experience (e.g., working in poor sky conditions, quicker initializations), but not dramatically elongated baselines. With many of these new elements years away from full implementation, and with other considerations like the upcoming "solar maximum" on the horizon, one would not be wise to start planning 200km baselines for the foreseeable future. When confronted with this inevitable question, current RTN developers seem only to be comfortable predicting 100km-125km baselines for the next decade.

Even if RTNs are able to stretch baselines to 200km (as some scientific and international initiative proponents are preaching), one has to consider the value of closer spacing for other reasons: redundancy, relative positional integrity monitoring (tectonic, et al.), and geodetic reference framework integrity and maintenance. More important, it is a losing proposition to bet on an uncertain future, or to try to save a small amount by waiting; you lose the potential for substantial cost savings in the interim.

Gavin Schrock is a surveyor in Washington State where he is the administrator of the regional cooperative real-time network, the Washington State Reference Station Network. He has been in surveying and mapping for more than 25 years and is a regular contributor to this publication.
