
Choosing an RF Propagation Model

Author: Cameron Mickell

A common question from novice planners is "which RF propagation model is best for my technology?"

We have many users employing diverse technologies, with different time constraints and accuracy requirements, so there is no quick answer. However, knowing the key types of models and where to use them makes a big difference to accuracy. There isn't a one-size-fits-all approach to model selection for radio planning, but there are definitely good defaults.

TL;DR: We now recommend ITU-R P.1812 as a default model.

To answer this question in detail, we will explain a little about each propagation model, describe some relevant use cases and then conduct a series of measurable experiments to compare model performance. From these we offer practical recommendations for users who need a clear starting point so they can hit the ground running with their radio planning. In this blog, we will look at model types, when to use them and how to make an educated decision on which model to use for your radio project.

Communications Technologies across the EM spectrum

First it is important to understand that there are vastly different use cases for radio technologies across the electromagnetic spectrum. Each of these technologies has its own spectrum requirements, frequency, bandwidth and power limits, which strongly influence any potential coverage or point-to-point link. More impactful still, however, is the environment and the varied ways in which it interacts with radio signals.

Terrain, buildings and vegetation all interact differently with radio waves of varying frequency, and different propagation models attempt to capture these behaviours in different ways. Older models from the 1960s pre-date developments in high resolution data, so while they may adapt well to situations like their intended use, such as downtown Tokyo in the case of Okumura-Hata, they will underperform in other scenarios without adjustments.

Because of this complexity, choosing the right model depends not only on your radio system but also on the environment you're operating in. Below is a quick overview of common technologies and where they sit in the spectrum. We will look at the environment later.

VHF (30–300 MHz)

Use case: Wide area voice comms, typically extending to the radio horizon.

Propagation at VHF frequencies is highly effective over long distances due to strong diffraction, good performance over undulating terrain, and relatively low attenuation through vegetation. These characteristics make VHF particularly well-suited to wide area narrow band voice networks and maritime or land mobile radio.

VHF applications cover both broadcast and two-way communications, with the former using significantly larger antenna masts and transmission power.

LoRa / LPWAN (433 MHz, 868 MHz EU, 915 MHz US)

Use case: IoT devices, low power sensors, hobbyist networking

Propagation at these frequencies is generally better through vegetation compared to higher frequencies, allowing signals to penetrate foliage with relatively low attenuation. This gives good overall range while supporting the modest data rates that are well suited to low power IoT telemetry applications.

L/S Band (1–2 GHz / 2–4 GHz)

Technologies: Wi-Fi, broadcasting, tactical radios, microwave links

Use case: IP based networking, voice, short to medium range data links

These frequencies typically support distances up to several kilometres, depending on antenna height, power and environmental clutter. Propagation in this range is sensitive to buildings and clutter, which limits range in dense areas but still provides reliable line-of-sight performance for short to medium distance networking. Compared with VHF or sub-GHz bands, these bands trade reduced penetration through walls and vegetation for the higher data rates needed by technologies such as Wi-Fi, video streaming or autonomous drones/robots.

LTE / 4G / 5G (700 MHz – 2.6 GHz)

Use case: Mobile phones, tablets, broadband services

Propagation in the LTE bands offers a balanced compromise between range and capacity, allowing signals to travel several kilometres in outdoor environments while still maintaining the bandwidth needed for modern mobile broadband services.

Lower frequency LTE bands propagate further and diffract more effectively over terrain, whereas higher frequency bands are more affected by clutter and require denser cell deployments. This is why the uplink from the low power handset favours the lower bands, where path loss is smaller.

Because of this, LTE cells can have very different performance characteristics around terrain and clutter which makes choosing the right propagation model important.  

Across all these technologies, the environment is a key factor in determining how far or how well you can communicate. Propagation models attempt to quantify just how much the environment is going to affect the behaviour of a signal to help engineers build out these complex communications systems.

How do Propagation Models work?

Radio propagation models provide mathematical formulas to predict the behaviour of radio waves between two points. Typically, each model aims to estimate the path loss along a link. By repeating the calculation across many adjacent points, a wide area can be studied to produce a signal map.

Prediction of path loss is necessary for radio engineers and operators to create accurate link budgets and build functional communication systems and sensor networks. Across all models, there are two principles:

The first principle, free space loss, is that path loss increases with both distance and frequency. The plotted curves below demonstrate this well.

The next principle is that each model has a unique path loss for an identical link. These curves are representative of an ideal test case of transmitting to a receiver with line of sight across uniform terrain.  

Graph of Path loss for ITM, Okumura-Hata, ITU-R P.1812, SUI, COST 231, Ericsson 9999, Egli and ITU-R P.525 for Choosing a Propagation Model.
Most models have similar curves with P.525 and SUI as the outliers

We can see different models give different results before budgeting for other sources of variation. To understand why this occurs, we need to look at the key features of a model so we know when to select each one and how to use it effectively.
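As a baseline, the first principle is easy to verify yourself. Below is a minimal Python sketch of the free space path loss formula, where the constant 32.44 applies for distance in kilometres and frequency in megahertz; the models above add terrain, clutter and context effects on top of this:

import math

def fspl_db(distance_km: float, frequency_mhz: float) -> float:
    """Free space path loss (Friis / ITU-R P.525) in dB."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz)

print(fspl_db(1, 446))    # ~85.4 dB at 1 km
print(fspl_db(10, 446))   # ~105.4 dB: +20 dB per decade of distance
print(fspl_db(10, 4460))  # ~125.4 dB: +20 dB per decade of frequency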

Parts of a propagation model

Each model is essentially an attempt to solve a communications planning problem. Sometimes these are very generic problems and others are tied to a specific technology and frequency range, which gives the models very different reasons for existing. Bear in mind that some pre-date consumer computing! Researchers past and present have looked for practical solutions, drawn from theory or from practice, to a wide range of communications research problems. This has led to two main types of radio propagation model: deterministic and empirical.

Deterministic Models

Deterministic models are formulas which take input variables and consistently produce the same output, as opposed to "stochastic" models, which are probabilistic. Researchers derive deterministic models from first principles and observed phenomena to give the best possible representation of radio wave behaviour for a given set of assumptions and inputs. Both inputs and assumptions vary from model to model due to the complexity and motivation of the model.

For planners, this means the model always treats input factors consistently. It means that accurate inputs will lead to a high degree of accuracy in the output. The opposite, stochastic models, are more commonly used in fields like finance or weather modelling where there is uncertainty around a given input or future conditions.

Empirical Models

Empirical models are data driven, built from survey data which is refined to produce a prediction of wave behaviour based on prior observations. The advantage of these models is that they can act as 'black-box' predictors which do not require describing the internal physics of the system, while still producing outputs that fit observed conditions.

The risk of using an empirical model is that if it was built from tower data in a Japanese city and you use it with handheld radios in a desert, it will not perform well at all.

Input Parameters

For both types of model, there are assumed input parameters that planners need to choose for a model to be applicable to their use case. For users, it is often unclear what each setting controls or how to choose an appropriate context.

While propagation models provide the mathematical basis for predicting radio performance, their accuracy is ultimately constrained by the quality of the environmental data fed into them. 

Even the most sophisticated model cannot compensate for incomplete or low‑resolution terrain and clutter inputs. This makes environmental data one of the biggest contributory factors in successful RF planning.

Terrain Data

Terrain refers to the physical shape of the earth such as hills, valleys, ridges and slopes. These features directly affect radio propagation through shadowing, diffraction, and reflection. Planning tools represent terrain using tiles sized according to their chosen resolution. In CloudRF, the resolution can be adjusted from the Output section, with higher resolution leading to longer compute times, bigger output files and a more accurate representation of the world.

So, when should you use a certain resolution? In CloudRF there are resolutions from 1m to 300m, but the key thresholds to note are 2m, 10m and 30m, which map to our source data.

  • 30m global datasets. Suitable for coarse planning or large areas. Limited detail often causes over‑optimistic coverage in built‑up or rugged environments. CloudRF is preloaded with 30m DSM coverage for most of the globe up to 60N with additional high latitude data for Scandinavia and Alaska.
  • 10m national datasets and space based land cover (trees etc). The balance between performance and accuracy for tactical and commercial use. Well suited for coverage maps with radii up to 10s of kilometres.
  • 2m LiDAR. Highly accurate and excellent for urban, industrial or complex terrain analysis. Particularly beneficial for UHF deployments in cities or complex industrial/agricultural sites. Because most propagation issues occur when line‑of‑sight is obstructed, a high terrain resolution gives a close fit to the real environment.

Clutter Data and Contexts

Clutter describes man‑made or natural surface features that are above the terrain dataset—buildings, trees, industrial areas, bodies of water, or open ground. Different wavelengths interact with clutter in mostly predictable ways:

VHF and lower tend to penetrate vegetation more effectively but are still attenuated by dense structures. UHF, LTE and Wi‑Fi suffer greater attenuation from foliage and urban environments. LoRa and LPWAN rely heavily on clutter accuracy for predicting street‑level performance.

Within CloudRF, clutter is represented as classification layers with associated nominal heights and attenuation values. Selecting the correct clutter model ensures that urban and rural areas are treated appropriately, since the losses applied can vary dramatically between tree canopy, suburban housing, or high‑rise commercial zones. This allows for clutter tuning which can help with fitting survey/calibration data to a prediction.

Instead of clutter, empirical models (Okumura-Hata, COST 231 and Ericsson 9999) use contexts as factors to tune their attenuation to an environment. These contexts are fixed empirical curves intended to represent the average path loss for a typical environment: urban, mixed (suburban) and unobstructed (open ground). Because of this they are not terrain aware and, in our experience, do not adapt well to real clutter. The graphs below show how contexts can vary path loss in ways that aren't always intuitive.

Empirical high-low models with a suburban context

Now that we know what kinds of inputs our models expect, it is worth understanding the differences between the models available on CloudRF.

Model Bios

Irregular Terrain Model (Longley-Rice) 

The Longley-Rice model is an old but trusty general-purpose model developed to meet the needs of television broadcasting during the 1960s. As such, its input parameters focus on longer range high-low use cases. The model is named for its ability to account for terrain variations along the signal path, so it naturally requires quality terrain data for best performance. It can be used from 20MHz to 20GHz and has a range of 1-2000km for antennas 0.5m to 3km in height.

ITU–R P.1812

The P.1812 model covers the VHF and UHF bands and has been recommended by the ITU since 2007 for terrestrial point-to-area services. The model incorporates Bullington multi-obstacle diffraction and is effective from 30 MHz to 3 GHz, making it well suited to modern commercial wireless technologies. Like ITM, it factors in changes in terrain, and it also incorporates clutter data into its calculations, allowing it to perform very well when supplied with high quality terrain and clutter data.

General Purpose

The General Purpose model on CloudRF is the ITU-R P.525-2 model with an additional 20dB of attenuation. The P.525-2 model is the ITU recommended free space attenuation model and can be used across all RF frequencies from VHF up to 100GHz. With accurate clutter and land cover data, this model can be tuned to achieve single digit variation from field measurements in rural or suburban environments. It is well suited to signals where both ends of a link are at ground level, like portable radio networks, which is outside the comfort zone of typical high-low cellular models.

Okumura-Hata

The Okumura-Hata model is an empirically derived model for path loss prediction in and around urban environments.

It assumes that the transmitter is much higher than the receiver: specifically, 30-200m transmitter and 1-10m receiver heights over 1-20km distances. The frequency range of the original model is 150MHz - 1.5GHz. These assumptions and ranges make this model best suited to cellular or broadcast environments. It uses an environment context to set its attenuation.

COST 231-Hata

This model is a popular extension of the Okumura-Hata model which raises the upper frequency to 2 GHz. COST (COopération européenne dans le domaine de la recherche Scientifique et Technique) began the Action 231 project to address the need to accurately model 2G mobile systems like GSM around 1995-1999. It was based on data collected from multiple European cities to tune the model for urban environments. Because of this, it is best used in the 1500-2000MHz range where the user is looking to model dense urban environments where LOS is often obstructed. Like Okumura-Hata, it uses environmental contexts to tune its attenuation.

Ericsson 9999

Ericsson extended the Hata model to 1900 MHz with special attention to 4G and LTE use cases in urban environments. Like the COST and Hata models, its environmental parameters can be adjusted to account for different scenarios such as rural, suburban or urban environments.

Egli VHF/UHF

The Egli model was developed by John Egli based on his research with the US Army Signal Corps labs in the 1950s. This old model was empirically derived by capturing real world path loss across irregular terrain with dispersed clutter such as trees, buildings and other structures. The model typically expects 30-300m tall base stations transmitting to a mobile station at 1.5-10m height. Egli is suitable for VHF and UHF high-low cases below 1.5GHz. Unlike the other empirical models on this list, it doesn't use environmental contexts, so it is best suited to open rural settings.

Model Bios Quick Reference Table

| Model | Frequency Range | Best Environments/Use | Terrain-Aware? | Clutter or Context Use | Strengths |
| --- | --- | --- | --- | --- | --- |
| Irregular Terrain Model (Longley-Rice) | 20MHz - 20GHz | Mixed terrain, rural, long-range | Yes - includes hybrid smooth earth diffraction | Use CloudRF clutter profiles | Good for hilly/mountainous terrain; adaptable to many use cases |
| ITU-R P.1812 | 30MHz - 3GHz | VHF/UHF area coverage, suburban-rural, mixed paths | Yes - includes Delta Bullington diffraction | Use CloudRF clutter profiles | Excellent general-purpose model; robust diffraction; needs accurate clutter |
| General Purpose | 1MHz - 100GHz | Simple LOS, open areas, clutter-tuned scenarios | Yes (with clutter added) | Use CloudRF clutter profiles | Easy to use; fully wideband; predictable behaviour; optimistic without clutter |
| Okumura-Hata | 150MHz - 1.5GHz | Urban macro cells | No | Urban/Suburban/Rural contexts | Assumes high transmitter; behaves poorly outside operating conditions |
| COST 231-Hata | 1.5GHz - 2.0GHz | Urban macro cells | No | Urban/Suburban/Rural contexts | Well validated for cities; good for obstructed LOS macro networks |
| Ericsson 9999 | ~800MHz - 1900MHz | Urban macro cells (GSM/LTE) | No | Urban/Suburban/Rural contexts | Flexible; needs calibration measurements; good for early LTE/GSM |
| Egli VHF/UHF | < 1.5GHz | Rural VHF/UHF | No | Nil | Useful for open rural coverage; good for broadcast-like paths; assumes tall base stations |

Propagation Model Bake Off

To help us make an informed model choice, we will conduct a series of tests using real world measurements, comparing model performance against our measured data. From this we can compare results across models and see how well they work without diving into clutter tuning. This will lead us to a clear recommendation on which propagation model to choose when starting a project.

Defining accuracy

To grade a model, we need to understand what values indicate an accurate one. When collecting measurements from the real world, there is always a hardware measurement error. Expensive test equipment is expensive for a reason, and conversely a cheap SDR is unusable for power measurements.

For our tests, we expect a measurement error around 3 dB, which would represent absolute accuracy.

A score of 3-6 dB would indicate an excellent result, 6-9 dB is a good match and up to 12 dB is ok. A score higher than 12 dB would indicate an inaccurate model and/or measurements.

Both the statistical mean and the Root Mean Square (RMS) are compared. Achieving a low mean is easy enough through over fitting results but a low RMS is much harder in an urban environment as high resolution clutter must be tuned to match diverse coverage results.
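For clarity, both statistics can be computed as below. This is a minimal sketch with hypothetical values, where positive error means the model predicts a stronger signal than was measured:

import numpy as np

# Hypothetical survey points: measured vs model-predicted power (dBm)
measured = np.array([-72.0, -80.5, -91.0, -76.2])
predicted = np.array([-70.1, -85.0, -88.3, -79.9])

error = predicted - measured              # per-point error in dB
mean_error = error.mean()                 # bias: sign shows over/under prediction
rms_error = np.sqrt((error ** 2).mean())  # spread: penalises large misses

print(f"Mean: {mean_error:.1f} dB, RMS: {rms_error:.1f} dB")

A model can be biased yet consistent (large mean, small RMS) or unbiased yet erratic (small mean, large RMS); the second case is why a low RMS is the harder target.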

We will look at 41.5MHz, 200MHz, 800MHz, 1800MHz and 2100MHz which give us a broad frequency range to test across.

VHF (41.5 & 200MHz)

VHF broadcasting is an old and difficult problem where long ranges and varying terrain can disrupt line of sight to the receiver. Power levels are significantly higher and the antennas are mounted on very tall masts.

To test performance of models in this range, we referenced a dataset collected by the ITU's Study Group 3, which has VHF broadcast data for various locations around the UK, US and Europe. We will be using data from Ashkirk, Croydon and Emley Moor (41.5, 191.25 and 196.25 MHz). Each area typically has over 1000 data points collected around the broadcast region, measured in field strength (dBμV/m). Using CloudRF, we can model expected field strength using a selection of models to see which best fits the data.

It should be noted that at this radius, terrain and clutter resolution is reduced on CloudRF due to commercial limits not present on a private server. However, as we have multiple large datasets, we can still be confident in our predictions if we see consistent performance from case to case.

The first test involves a dataset collected from the Ashkirk broadcasting tower in Selkirkshire, Scotland. The VHF antenna sits 192m above the ground, so it is very high compared to cellular or handheld radio use cases. The receive antenna is fixed at 4.3m, making it taller than most trees and clutter in the area. The data set contains 534 data points within an 80km radius of the tower.

Ashkirk (41.5MHz)

  • ITU-R P.1812 (Mean: -3.7 dB, RMS: 7.2 dB)
  • General Purpose (Mean: -4.7 dB, RMS: 8.3 dB)
  • ITU-R P.525 (Mean: 9.3 dB, RMS: 11.6 dB)
  • Egli (Mean: -11.6 dB, RMS: 13 dB)
  • ITM (Mean: -0.5 dB, RMS: 16.5 dB)

Croydon (191.25 MHz)

The second test involves a dataset collected from the Croydon transmitting station in Upper Norwood, London. The VHF antenna sits at 137m above the ground, so it is very high compared to usual use cases. The receive antenna sits at 9.8m, which places it well above most buildings and landcover except for dense urban areas like London. The data set contains 2000 data points within a 145km radius of the tower.

  • ITU-R P.1812 (Mean: -2.1 dB, RMS: 11.1 dB)
  • General Purpose (Mean: -0.2 dB, RMS: 12.9 dB)
  • Egli (Mean: -11.7 dB, RMS: 17.4 dB)
  • ITU-R P.525 (Mean: 13.7 dB, RMS: 18.8 dB)
  • Irregular Terrain Model (Mean: -14.3 dB, RMS: 42.8 dB)

In this second test, we can see that our only acceptable prediction is P.1812, which would require further calibration to be tuned for this environment.

The third test uses data from the Emley Moor transmitter which broadcasts to the Yorkshire area. The data set contains 2000 points within a 100km radius. The transmit height is 305m and the receive height is 10m.

Emley Moor (196.25 MHz)

  • ITU-R P.1812 (Mean: -2.5 dB, RMS: 8.3 dB)
  • ITM (Mean: -1 dB, RMS: 10 dB)
  • ITU-R P.525 (Mean: -5.2 dB, RMS: 11.1 dB)
  • Egli (Mean: 11 dB, RMS: 14.5 dB)
  • General Purpose (Mean: -14.8 dB, RMS: 17.7 dB)

From our third data set we can see that P.1812 gives the best prediction again for these conditions. The significant heights involved worked against the ground based GP model but favoured ITM, developed for TV broadcasting.

VHF Conclusion

From our testing, we can see that without calibration, the models produce variable results against the test data sets. The one consistent exception is ITU-R P.1812, which gives a mean measurement error of -2.76 dB with an RMS of 8.8 dB. For this range and complex environment, this is a good result which can be improved further with clutter tuning.

We can also see that our mean and root mean square values are higher than the few dB we would expect in a cellular model, e.g. 6dB. This is acceptable in this case as we are working over a very large area where the standard deviation of our results will increase as resolution is reduced. With a large number of diverse data points, localised errors are diluted, establishing consistent performance across data sets.

Looking at our selection of models, it is not surprising to see P.1812 outperforming the rest. Egli is a 1950s empirical model for VHF broadcasting, but it is not terrain aware so will tend to under or over attenuate through irregular terrain. Free space (P.525) will tend to be over optimistic over long distances, and the added attenuation in General Purpose is better suited to handheld radios amongst clutter. So naturally, for CloudRF users, we'd recommend starting with ITU-R P.1812 when working with VHF.

800 MHz (LoRa, UHF, Cellular)

For this test, we will be using an LTE band 20 (806MHz) transmission tower with RSRP measurements taken from test handsets located within a 3km radius of the antenna. The antenna itself sits at 12m above the ground. This serves as an excellent test of lower frequency LTE, 3G and LoRa (868MHz). Using field measurements, we will make predictions using CloudRF and then use the calibration tool to check the average and RMS errors for a good fit between our model and our data.

The models we will test are: General Purpose, Irregular Terrain Model, ITU-R P.1812, Okumura-Hata, Ericsson 9999 and Egli. We won't be testing COST 231 as the test data is below its intended frequency range of 1.5-2 GHz.

The area of interest lies south of the village of Wroughton, which is south of Swindon. The site sits in an open field surrounded by fields and a solar farm, with good inter-visibility around the former airfield. The village of Wroughton sits to the north in the shadow of a hill, so we would expect only a little coverage through diffraction to the north, with stronger coverage to the west, south and east, broken up by hedgerows and sparse buildings.

Wroughton (806 MHz)

  • General Purpose (Mean: 2.6 dB, RMS: 3.6 dB)
  • Egli (Mean: -3.4 dB, RMS: 5.7 dB)
  • ITU-R P.1812 (Mean: -6.5 dB, RMS: 7.7 dB)
  • Irregular Terrain Model (Mean: 12.1 dB, RMS: 12.7 dB)
  • Okumura-Hata (Mean: -37.7 dB, RMS: 37.9 dB)
  • Ericsson 9999 (Mean: -37.7 dB, RMS: 37.9 dB)

Results

From the test, we can see that General Purpose and ITU-R P.1812 are good fits for the data, offering single digit variance. The ITM prediction is under attenuating, giving stronger coverage over similar areas to General Purpose and ITU-R P.1812. We can also see that Okumura-Hata and Ericsson 9999 are over attenuating, so much so that we aren't seeing coverage around our readings at all.

To understand these results, we can go back to their intended use cases: the Okumura-Hata and Ericsson 9999 models are intended for built up urban environments and expect more obstacles and chances for diffraction. For the test template, we are using an average/mixed context which may be over attenuating our predictions without the environment providing enough paths for diffraction. If we look at the test area, we can see there are very few buildings, with plenty of open fields and trees. If the context is adjusted to unobstructed, both Okumura-Hata and Ericsson 9999 should yield a better fit to our test data.

  • Ericsson 9999 Unobstructed (Mean: 2.4 dB, RMS: 3.5 dB)
  • Okumura-Hata Unobstructed (Mean: -6.8 dB, RMS: 7.6 dB)

By changing the context, we can see that both models now fit the data well.
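To see why the context choice swings results so dramatically, below is a sketch of the published Okumura-Hata formulas with their context corrections. This is the textbook small/medium city variant for illustration only; CloudRF's implementation may differ, and our 12m mast is actually below Hata's 30m minimum:

import math

def hata_path_loss(f_mhz, tx_h_m, rx_h_m, d_km, context="urban"):
    """Okumura-Hata median path loss in dB (150-1500 MHz, small/medium city).
    Assumes a 30-200m transmitter, so a 12m mast is out of specification."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * rx_h_m - (1.56 * math.log10(f_mhz) - 0.8)
    urban = (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(tx_h_m) - a_hm
             + (44.9 - 6.55 * math.log10(tx_h_m)) * math.log10(d_km))
    if context == "suburban":
        return urban - 2 * math.log10(f_mhz / 28) ** 2 - 5.4
    if context == "open":
        return urban - 4.78 * math.log10(f_mhz) ** 2 + 18.33 * math.log10(f_mhz) - 40.94
    return urban

# Same 806 MHz link at 2 km: the context alone moves the prediction by ~28 dB
for ctx in ("urban", "suburban", "open"):
    print(ctx, round(hata_path_loss(806, 12, 1.5, 2, ctx), 1), "dB")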

UHF conclusion

CloudRF recommends the ITU-R P.1812 or General Purpose model for the 800 MHz range. Our experiment supports this, demonstrating that both models provide reliable results when paired with quality clutter and land cover data.

As this test shows, empirical models such as Okumura‑Hata and Ericsson 9999 can be difficult to use without reference data because they depend heavily on selecting the correct environmental context. Without field measurements, you must rely on your interpretation of the environment to decide whether a model should be treated as urban, suburban, rural, or unobstructed. This requires time, experience, and careful reading of the model documentation especially when planning in remote or complex areas.

Deterministic models, on the other hand, have been shown to perform consistently when supplied with good-quality terrain and clutter data. As we continue conducting field tests, we are becoming increasingly confident in recommending ITU-R P.1812 as a robust starting point for modelling LTE Band 20 (800 MHz) and similar low-frequency systems. Because it is terrain aware, it offers good accuracy even before calibration, which makes it highly useful for time sensitive planning tasks. Additionally, as better LiDAR and DTM data becomes available, these models will only become more effective while legacy empirical models become obsolete.

Snow covered trees

Taking the test up a gear to the Arctic circle, we collected LTE survey data using the RantCell survey app from the top of Finland across multiple bands to investigate the accuracy impact of thick snow on trees.

Snow is a lattice of water which reflects and attenuates RF, so it is challenging to simulate, especially as it changes!

The field data collected gives us RSRP (Reference Signal Received Power) for two LTE bands (band 1 and band 3) from our tower of interest. This gives us a good opportunity to use one data set to calibrate a model and then use the second set to see if prediction performance remains consistent across frequency. The frequencies of the two bands do make model selection more limited, as band 1 (~2.1GHz) sits above the upper threshold for Okumura-Hata, its extension COST-231 and Ericsson 9999.

For the data itself we are looking at a small section of coverage near the tower surrounded by large snow-covered trees in undulating terrain. The collection was performed on a ski track under the trees which was often covered by a tree canopy.

The signal RSSI was calculated as 30dB above measured RSRP using the known 20MHz bandwidth, and the data was fed into our calibration tool to plot the points. From the app's data, we know the LTE bands for each of our data sets, so we have a centre frequency and bandwidth. Using a photograph of the mast we can approximate its height at 60m. With the mast location set, we can then make two sets of predictions for the 1820 MHz and 2140 MHz downlinks and compare model performance across both. We will use P.525 as our free space reference model.
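The 30dB offset follows from the LTE carrier structure: RSRP is measured per resource element, and a fully occupied 20MHz carrier has 100 resource blocks of 12 subcarriers each. A sketch of the standard approximation (our 30dB figure is this value rounded):

import math

def rsrp_to_rssi_offset_db(n_resource_blocks: int) -> float:
    """Approximate RSSI minus RSRP for a fully loaded LTE carrier.
    RSRP is per resource element; each resource block spans 12 subcarriers."""
    return 10 * math.log10(12 * n_resource_blocks)

print(rsrp_to_rssi_offset_db(100))  # 20 MHz carrier (100 RBs) -> ~30.8 dB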

1820 MHz

  • ITU-R P.1812 (Mean: 2.3 dB, RMS: 6.8 dB)
  • Irregular Terrain Model (Mean: -2.8 dB, RMS: 7.1 dB)
  • Ericsson 9999 (Mean: -8 dB, RMS: 10.5 dB)
  • ITU-R P.525 (Mean: -9.3 dB, RMS: 11.3 dB)
  • General Purpose (Mean: -23.3 dB, RMS: 24.2 dB)
  • COST-231 (Mean: -43.9 dB, RMS: 48 dB)

When comparing the predictions to the 1820 MHz data set, we can see that P.1812 and ITM are close predictors of the measured values. Additionally, when Ericsson 9999 is used with an average/suburban context, it gives an okay estimate but has a much smaller coverage area overall, suggesting that more tuning is required to match the attenuation caused by the large snow-covered trees. General Purpose is over attenuated, which was not surprising given our free space path loss is a close fit and the 20 dB offset was added following tests with ground based tactical radio networks, not 60m masts. COST-231 is unusable, which was expected given it is well outside its intended environment.

To test consistency, we can now look at the test results at 2140 MHz. Unfortunately, we can’t include Ericsson 9999 or COST-231 as the operating frequency is too high. However, we can test the Stanford University Interim (SUI) model which is rated for above 1.9 GHz.

2140 MHz

  • ITU-R P.1812 (Mean: -5 dB, RMS: 9.2 dB)
  • Irregular Terrain Model (Mean: -5.1 dB, RMS: 9.6 dB)
  • ITU-R P.525 (Mean: -9.4 dB, RMS: 12 dB)
  • General Purpose (Mean: -23.4 dB, RMS: 24.6 dB)
  • SUI (Mean: -69.3 dB, RMS: 72.9 dB)

From this comparison, we can again see similar results. ITU-R P.1812 is again providing the best prediction, followed closely by ITM. The observation for P.525 and General Purpose remains the same. The SUI model is heavily over attenuating, even with an unobstructed context. This is not surprising when looking at the generic path loss graphs shown previously. SUI has consistently been the most conservative microwave model in our collection and, based on its performance in comparison with other models, it will be retired from our API in due course.

LTE conclusion

Looking at the two sets of predictions, we can see consistency in performance from both P.1812 and ITM, with P.1812 giving the best fit. Their coverage maps are generally consistent in shape with each other, and we see more attenuation through the trees at the higher frequency, as expected.

Our two models are showing their utility by giving accurate predictions despite heavy snow based on terrain and clutter data alone. The next question for these two models now is how to tune the clutter for each frequency for a better match.

Key findings for choosing a Propagation Model

Having conducted tests across six datasets at different locations and frequencies, we've gained insights into how each propagation model performs. The results of those tests have been broadly consistent, with deterministic models like ITU-R P.1812 and its legacy predecessor ITM being consistently accurate before calibration and clutter tuning.

The old empirical models can be accurate, but they require the correct context to make an accurate prediction, and without test data it is difficult to tune them to their respective environments due to their fixed path loss curves. This is why we are recommending ITU-R P.1812 as our default model for VHF, LoRa and LTE propagation when using CloudRF. You can still use empirical models, but you'll have to commit to collecting field data for tuning.

To further improve accuracy, users can tune our clutter profiles with variables such as tree heights or average attenuation through buildings. To understand where these values come from, please check out our past model and clutter improvements blogs or if you want to accelerate the process, see our calibration with machine learning demo with sample code on our Github.

What about Machine Learning?

The promise of Machine Learning models to improve accuracy (and speed) is tempting but it depends upon an enormous quantity of accurate training data. In our experience, ML researchers struggle to generate the vast quantity of accurate and expensive test data needed to develop even small demos.

Given enough training data, an ML model could be quicker and just as accurate as physics based simulation or potentially a drive survey.

However, it is naive to criticise the performance of physics based simulation in favour of ML as the model generation relies upon the former to train the model which creates a dichotomy whereby ML developers need to both criticise, and rely upon, simulation tools to develop an accurate model (and secure funding). There is a solution to this which requires academic honesty and a mature and scalable API but one of those requirements is harder to come by than the other.

Further Reading

Fast simulation calibration with Machine Learning

Model and clutter improvements

SG 3 Databanks – ITU

CloudRF model menu


Live network mapping endurance test

Summary

We conducted a field test in the mountains with SOOTHSAYER focused on automation and endurance. The test generated quality data and revealed altitude issues with our plugin which we have since fixed.


During the test we created 925 multi-site coverage heat maps and 4625 links to maintain a live map of the network. We established model accuracy on previous field tests, so the focus here was on endurance.

Live network mapping is radio planning without user interaction where radio locations and coverage are updated dynamically via an API. It requires fast and economical edge compute like our SOOTHSAYER API to be effective and is not possible with legacy desktop tools.

This offline edge capability is implemented in our ATAK plugin via the Co-Opt feature. This new feature updates network coverage automatically using live map data to provide a current view of communications problems and opportunities, akin to a moving weather layer. It is useful for deploying radio networks into challenging terrain.

Test objectives

  • Collect performance data
  • Prove software stability
  • Test altitude logic

Test setup

Hardware

The edge compute used was an Nvidia Jetson NX 16GB onboard a Cardshark rugged computer, with an external Wi-Fi adaptor providing an access point for the phone client.

The computer was powered by budget USB-C powerbanks rated at 13000mAh and 25000mAh respectively.

The test phone was a Samsung Galaxy S23 connected via the Jetson’s Wi-Fi.

Cardshark computer with USB-C power cable, batteries and phone

Software

The offline software running on the Jetson’s Jetpack 6.1 OS was SOOTHSAYER v1.10, deployed as Docker containers. The resource intensive 3D engine container was not needed here so was disabled.

Services for a CoT simulator, a low power 5GHz Wi-Fi access point and a performance logging utility were running.

On the phone we ran ATAK 5.6.0.12 (Play store) with our SOOTHSAYER ATAK plugin version 2.7a.

Reference data

The Jetson was pre-loaded with 30m SRTM1 DTM and 10m ESA Land cover data for the mountainous test area. The phone used cached Openstreetmap mapping and 30m SRTM1 terrain data.

Test route

The area chosen was the Glenshee ski resort in the Cairngorms national park, Scotland, during early February when temperatures up the mountain were -10°C (14°F). A 16km circular route was followed which provided challenging conditions to meet the objectives.

Test data

Using the onboard Tegrastats utility we collected detailed data about workload, temperature and power consumption which will inform future designs and recommendations.

The day was split between two test profiles for the morning (0900 to 1300) and the afternoon (1345 to 1600).

Each profile used 0.5 megapixel resolution, which for a multi-site request with 5 nodes requires the analysis of 2.5 million points using the ITU-R P.1812 VHF/UHF propagation model, which includes diffraction.

In the first test profile, a calculation was triggered if a radio moved more than 200m. This is an economical way of working designed to extend battery life and reduce bandwidth if working across a network.

In the second test profile, a calculation was triggered on a 10 second interval. This is a more intensive way of working which provides regular updates.
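The two trigger policies can be sketched as below. This is illustrative logic, not the plugin's actual code:

import math
import time

EARTH_RADIUS_M = 6371000.0

def distance_m(p1, p2):
    """Haversine ground distance between two (lat, lon) points in metres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Profile 1 (responsive): recalculate only when a radio moves more than 200 m
def movement_trigger(last_pos, pos, threshold_m=200.0):
    return distance_m(last_pos, pos) > threshold_m

# Profile 2 (fixed interval): recalculate every 10 seconds regardless of movement
def interval_trigger(last_calc_s, interval_s=10.0):
    return time.monotonic() - last_calc_s >= interval_s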

Processor load

As expected, the CPU and GPU load was more intense for the fixed interval than the responsive profile. The spacing during the responsive profile in the morning shows our slower progress on the ascent followed by rapid progress as we moved across the plateau and the server worked harder to keep up with us.

GPU load
CPU load

SOC Temperature

The internal SOC temperature chart was also predictable as the unit was inside a waterproof bag inside a rucksack. It climbed steadily during the ascent, then dropped sharply as we stopped to make a video where it was removed from the rucksack briefly.

The unit temperature leveled out at 53 degrees Celsius inside the rucksack. It was not an ideal place to achieve cooling but given the winter conditions, quite acceptable judging by the data. A temperature of 80 degrees would be hot.

In the afternoon the unit was attached outside the rucksack in the waterproof bag where it leveled out at 32 degrees Celsius. The moment the rucksack was placed inside the vehicle before 1600 is evident as the temperature climbed steadily. This coincided with it being placed under an intense load during a driving demonstration we have published on our Youtube channel.

SOC Temperature

Power consumption

The most valuable data was power consumption which showed some interesting features and very encouraging mean values. The afternoon profile was more intense but did not increase peak power consumption which was actually lower than the morning for reasons which were not immediately obvious.

Following inspection of memory consumption (~25%), the reason was assessed to be a GPU memory leak triggered by unplanned “Above Sea Level” (ASL) calculations which occurred on the ascent. A large calculation needs more memory, which draws more power. Unlike the CPU engine which is called on-demand, the GPU engine runs continuously and its memory consumption can grow with use. In this case power consumption grew by 250mW whilst delivering 925 heat maps which we’re happy with.

The afternoon was not affected by the ASL memory leak so we maintained a steady even profile at around 7.5W power consumption, well within the range of a phone power bank.

Power consumption throughout the day

Test videos

A video of select moments from the edge compute field test has been published on our Youtube channel. Following the field test, we created a bonus video of the drive home as the server was still running in the vehicle. This second video demonstrates the Co-Opt feature running with a 5 second refresh on a vehicle moving at up to 50 mph.

Issues

Plugin altitude logic

During the ascent we noted our own position marker, sourced from GPS, jumped from the Above-Ground-Level (AGL) altitude defined within our template to Above-Sea-Level (ASL). This was due to logic inside our plugin designed to handle aircraft.

The logic compares the reported (GPS) altitude, measured in WGS-84 Height Above Ellipsoid (HAE) and known to be inaccurate, with the (ATAK) terrain height. If the difference exceeds 120m / 400ft, it switches to ASL units and overrides the template's receiver altitude with the local terrain altitude.

For more information on Height Above Ellipsoid see this article.

// If Height AGL is > 120m / 400ft, this is probably flying so we switch
// units to meters AMSL and use GPS altitude
if (altitude - terrain > 120.0) {
    marker.markerDetails.transmitter?.alt = altitude.toDouble() // raw GPS altitude for the "aircraft"
    marker.markerDetails.receiver.alt = terrain + 1             // ground receiver just above local terrain
    marker.markerDetails.output.units = "m_amsl"                // override template units to meters AMSL
} else {
    marker.markerDetails.output.units = "m"                     // normal case: meters AGL from the template
}

This risky logic made sense from the comfort of the office with a GPS simulator but was a mess on the mountain with real GPS altitudes. The synthetic CoT markers were unaffected as they report a height above ground level.

As we went on to find, we were comparing an inaccurate GPS altitude with an inaccurate terrain altitude. Not exactly a recipe for success :/

ATAK API inconsistencies

Whilst on the mountain we noted a disagreement between our GPS altitude and ATAK’s reported altitude which warranted a deeper investigation. During the investigation we discovered inconsistencies with ATAK Elevation data.

The ElevationData class we, and no doubt other developers, were using was deprecated and apparently removed in 5.6 despite being in the Hello World demo for that release. We were using this with the getElevation() method which returns the height in meters above the WGS-84 ellipsoid (HAE).

Deprecation warning on ElevationData
val dtmFilter = ElevationManager.QueryParameters().apply {
    elevationModel = ElevationData.MODEL_TERRAIN // request the bare-earth terrain model
}
val terrain = ElevationManager.getElevation(
    currentMarker.point.latitude, currentMarker.point.longitude, dtmFilter
) // returns height in meters above the WGS-84 ellipsoid (HAE)

The recommended replacement for ElevationData, based upon public source code for 5.5, is the ElevationChunk API. There are no public examples of implementing the recommended alternative at the time of writing, although references were noted in documentation before it was deprecated in 5.3, which needs clarification.

Version 5.5.1 recommends ElevationChunk
Documentation for 5.6 does not contain elevationchunk

DTED or SRTM?

The deprecated method was evidently returning a value based upon low resolution DTED0 data at 1km resolution. This was less accurate than the 30m SRTM1 (DTED2) data which ATAK tools like the range-bearing elevation profile or cross marker (X) use. SOOTHSAYER also uses SRTM1.

ATAK 5.6 was found to be referencing different datasets for the same position between its interface tools and its programming API, which was frustrating. To prove this we pulled both raster tiles and used GDAL's gdallocationinfo utility to plot the differences for the route.

At location 56.875932, -3.377869 we experienced a large height error when the plugin fetched a height of 879m HAE yet the GPS reported 997m HAE (visible in the screenshot above). The massive 118m difference is just shy of the 120m needed to trigger our “airborne” logic.

The GPS measurement in the z-axis was inaccurate by at least 10m, as the real altitude was 1007m ASL, so it appears we had an altitude discrepancy greater than 120m during the ascent. This is a notable error which affects other GPS apps.

gdallocationinfo n56.dt0 -wgs84 -3.377869 56.875932
Report:
  Location: (37P,15L)
  Band 1:
    Value: 879

Digging into ATAK's preferences we found validation for our theory via the "Pull Elevation Mode" option within Elevation Overlays Preferences. The choice between DTED and Highest Resolution suggests this was implemented to support better than DTED data, e.g. SRTM1. It is not clear whether it references DTED0 or SRTM1, which as we've shown could be a +100m error.

Some batteries are better than others

Our goal was to use flight-safe USB power banks, which can easily provide the 7-8W of power needed to run the capability. We ran soak tests in the office to establish a battery life of 6 hours for the smaller 45Wh battery. We expected this to be reduced on the mountain so purchased a larger 92Wh battery, but it failed after 4 hours despite being kept warm inside an insulated jacket.

The smaller battery proved its worth and ran for two hours with only 35% consumption suggesting it would have lasted longer than the larger battery.

During subsequent charging of the large battery it reported unexpected levels, suggesting it was likely defective and underperforming.

Our conclusion was that you can live-map a network for more than four hours on a small battery, but it must be a reliable one.

Conclusion

We were happy with the test and the results which showed solid stability and good power economy. It validated the hardware, the SOOTHSAYER API and most importantly the concept of edge coverage mapping.

Given the accuracy issues experienced with GPS data and the deprecated-yet-still-going elevation APIs using low resolution data sources, we have commented out the error prone “airborne” logic in our plugin and will only use the fixed altitude(s) defined within the radio template until further notice.

Plugin users can edit both the transmit and receive altitudes above ground level by selecting the marker then clicking the pencil.

The ATAK plugin has already been updated and pushed to our Github repository and Google Play as version 2.7.1.


Enhancing Radio Direction Finding with RF simulation

Background

Radio Direction Finding (DF) is the art of determining the location of an emitter and is used in search and rescue, coastal surveillance, law enforcement and defence. There are different techniques using power and phase but the output for a single sensor is normally a Line of Bearing (LoB) which points towards the emitter.

If you've ever seen DF depicted in marketing or an infographic, you've likely seen three geometrically distributed sensors surrounding an emitter, producing a high accuracy position fix (PF) where their lines of bearing converge.

In the real world, DF systems are expensive and require specialist training so are in short supply. It is far more common for these systems to be used in isolation so operators must determine an emitter’s location with a single LoB and a map study. For powerful signals, the search area could be vast.

A Line of Bearing displayed on ATAK

Guessing the signal power

For a signal to be tasked for DF, its frequency is already known. With signal classifiers increasingly integrated into receivers, and now even open source, the signal type may well be known, which helps answer a key question: what is the signal's transmit power?

When a new signal is detected, it could be in the room next door or in the next county. Knowing the signal type and ideally the hardware is key to estimating the distance, as you can lookup the possible power levels from a data sheet.

A portable radio has variable power levels. For a DMR radio with low and high power at 0.1W and 4W, these can be put into a basic path loss model to determine the possible distance. Using the Friis reference model with a detected signal of -80dBm, for example, a 1GHz signal could be 2.4km or 15km away in free space.

Spectrum analyser up mountain
Strong LTE signals seen from a mountain

This significant variation with the possible distance is where modelling can add value to reduce the vast search area.

For the example radio, these power values in watts must be converted to decibel milliwatts (dBm) for consistency with the path loss modelling, and to establish the range in decibels which will inform simulation parameters. In this case, low power is 20dBm (0.1W) and high power is 36dBm (4W), giving 16dB of uncertainty.
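A short sketch of this arithmetic, which reproduces the free space distances quoted earlier (0 dBi antenna gains assumed):

import math

def watts_to_dbm(p_watts: float) -> float:
    return 10 * math.log10(p_watts * 1000)

def friis_range_km(tx_dbm: float, rx_dbm: float, f_mhz: float) -> float:
    """Invert free space path loss to estimate distance."""
    loss_db = tx_dbm - rx_dbm
    return 10 ** ((loss_db - 32.44 - 20 * math.log10(f_mhz)) / 20)

print(watts_to_dbm(0.1), watts_to_dbm(4.0))  # 20.0 dBm and ~36.0 dBm
print(friis_range_km(20, -80, 1000))         # ~2.4 km at low power
print(friis_range_km(36, -80, 1000))         # ~15 km at high power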

In an obstructed environment such as a forest, this uncertainty represents a shorter distance than in free space where again, modelling can add value. A counter drone system is an example of a free space problem.

Path loss variation due to clutter attenuation

Link reciprocity

A radio link is not symmetrical due to how and where obstacles impact the Fresnel zone, the cone of power an antenna element radiates. Even with line of sight (LOS) between two stations of equal power, you can still get different received power levels from A to B than from B to A.

A to B != B to A

This matters as we cannot model the emitter since we don’t know where it is! We can only model the receiver location.

In our experience, the difference is measured in single digits and is small compared with noise which will make a bigger impact on a link’s viability. If you are operating at the edge of a system’s link budget then the reciprocal difference may be enough to make a link one way only.

For modelling a receiver we need uplink (talk-in) measurements instead of downlink (talk-out) which we normally collect for clutter and model calibration.

Field testing

We conducted several field tests to integrate our API using a budget commercial DF receiver, the KrakenSDR. This compact entry level unit gave us a LoB (with 8 degrees of error) we could work with, but as it used 8-bit SDRs, we could not rely upon the received power level since low resolution SDRs cannot represent weak signals.

After a false start with a 12-bit SDR designed for the amateur community and interfaced with SoapySDR, we used a professional RFEye receiver which aside from having superior measurement accuracy and sensitivity is a turnkey solution with a web API which we have integrated with our API previously.

Test system

Our test system grew in scope from a Kraken with a Pi to a network-in-a-box with a bespoke management and signal logging interface. The key innovation was not creating a budget DF system, which we needed simply to collect data, but the employment of an edge modelling capability on a Raspberry Pi 5.

Our goal was to develop a hardware agnostic script which our customers could use to enhance their DF data.

Hardware

  • The Line of Bearing came from a KrakenSDR with a circular 5 element array upon a 2m telescopic mast.
  • The processor was a Raspberry Pi 5 running our test software and SOOTHSAYER v1.10.
  • The radio traffic was generated by a Tait DMR portable radio equipped with a programming cable connected to a Pi4.
  • The power measurements came from a CRFS RFEye connected to an elevated monopole antenna.
  • A pair of SenseCAP Meshtastic LoRa trackers were used for GPS tracking.
  • A laptop and tablet running ATAK were used to manage the system and observe the output as a KML.

Software

To automate data collection, we developed test software to collect data from the SDR and DF receiver simultaneously and model them using our API. The DMR radio was configured to broadcast telemetry periodically, which provided a regular target signal, and the out-of-band Meshtastic tracker provided a precise location within the trees.

We couldn’t use a second DMR radio to receive the telemetry as bi-directional radio traffic risked spoiling the data.

The modelling came from SOOTHSAYER 1.10 which was installed upon the Raspberry Pi 5. This also provided the map tiles for a web based logging system which displayed live signal readings. Only one (CPU) API call was necessary per test cycle to generate a grey scale Path Loss map in decibels (dB) from which subsequent received power heat maps in decibel milliwatts (dBm) could be rapidly derived using a simple formula.

The path loss simulation needs refreshing if either the location, frequency or height change but is power agnostic. The client script queries this path loss map using known (or assumed) radio power levels.

Results are presented as a network KML which can be consumed on standards based geo-viewers like ATAK.

Challenges

We took our ‘Temu DF system’ out twice but we couldn’t collect as much data as we wanted in the time available due to different constraints such as the weather or just running a small business.

A decision to avoid vehicles and buildings was made to avoid reflections which meant we had to run the equipment from travel batteries. The power budget for the Pi5 (30W), KrakenSDR (12W) and RFEye (5W) was 47W which was more than we normally test with so it reduced our endurance.

We encountered local radio traffic on our licensed channels due to the choice of locations overlooking the city. This was easy to discount at the start of the test when our signal was obvious but became a nuisance as it faded into the trees and ultimately tainted our test data since we were triggering on power.

Old data to the rescue

After several frustrating tests where a lot of time was spent climbing local hills, calibrating DF and chasing false positives, we elected to reuse a rich data set from an antenna field test last year which included bi-directional links for a UHF radio on a moving vehicle.

This data was attractive as it included the uplink and a good variety of obstacles including houses, trees and hills, as well as LOS links, which are all useful for calibration. Before we could conduct DF analysis with the uplink, we calibrated the local clutter using the downlink, as we do routinely. This is a standard process for which we have developed a feature in the web interface, as well as a supporting video tutorial. Using our new 2m tree height data, we were able to improve upon last year's score.

As we did not collect lines of bearing during that model test, we had to simulate them using the known vehicle location, for which we used 10 degrees of azimuth error.

Somerton UHF calibration, 2024

Analysis technique

To compute the effectiveness of this technique, we calculated the area of the 10 degree arc where the vehicle could have been, using a radius of 6km to represent the maximum range in this test.

This gave us a search area for a given LoB of (10/360) × π × 6000² ≈ 3,141,593 m².

Our analysis script calculated a high resolution grey scale heatmap using SOOTHSAYER's API, which was referenced against collected power readings. To compare path loss (dB) with received power (dBm), we used the known radio power of 2W (33dBm) within a link budget formula to generate received power, which was compared with measurements.

RSSI (dBm) = Radio Power (dBm) + Gain (dBi) - Path Loss (dB) - Losses (dB) + Receiver Gain (dBi) - Receiver Loss (dB)

Where the difference between measurement and simulation was within the tolerances of our colour key, we styled that pixel; otherwise we eliminated it from the search area and set it to transparent.

The result is an accuracy heatmap defined by a traffic light colour key. The levels we chose for our “known power” assessment were 1, 2 and 3dB. By showing 3dB of error we allow for receiver error and reduce the risk of false negatives where a matching location might be discounted.
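A simplified sketch of that pixel elimination is below. The function name, the tiny example tile and the zero-gain link budget are illustrative; the real script drives the SOOTHSAYER API over a full resolution path loss map:

import numpy as np

def accuracy_grade(path_loss_db, measured_dbm, tx_power_dbm, tolerances=(1, 2, 3)):
    """Grade each pixel of a path loss grid against one power measurement.
    Returns 0 where a pixel is eliminated (transparent), else the band matched."""
    predicted_dbm = tx_power_dbm - path_loss_db   # simplified link budget, 0 dBi gains
    error = np.abs(predicted_dbm - measured_dbm)
    grade = np.zeros_like(path_loss_db, dtype=int)
    for band, tol in enumerate(sorted(tolerances, reverse=True), start=1):
        grade[error <= tol] = band                # tighter bands overwrite looser ones
    return grade                                  # e.g. 3 = within 1 dB (green)

# Hypothetical 3x3 path loss tile (dB), 2W (33dBm) radio, -92dBm measured
tile = np.array([[120.0, 124.0, 126.0],
                 [125.0, 130.0, 140.0],
                 [122.0, 127.0, 150.0]])
print(accuracy_grade(tile, measured_dbm=-92.0, tx_power_dbm=33.0))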

When the radio power is known, we can produce more accurate results.

When the radio power is unknown and the hardware/signal is known, we can simulate the minimum and maximum power to generate a dynamic range for the analysis. We used a low power value of 20dBm (0.1W) and a high power value of 36dBm (4W) for a possible power range of 16dB so our “low accuracy” colour key was 14/15/16dB.

We repeated the analysis with known and unknown power levels to compare accuracy.

Results

Analysis of data revealed the simulation heatmap significantly reduced the search area. As expected, knowing the radio power helps greatly but even with unknown power the search area was reduced to 32% of what it could have been for a conventional 6km arc.

Even when radio power is unknown, the search area is reduced significantly

| | Known Power (2W) | Unknown Power (0.1 or 4W) |
| --- | --- | --- |
| Best case | 0.01% | 0.03% |
| Worst case | 27.33% | 64.37% |
| Average area | 7.93% | 31.51% |

Improved search area as a percentage of the original arc area

The amount of benefit was relative to the terrain and clutter: for example, where there were no obstacles, or a single consistent obstacle such as a forest, the result was a focused band of probability without any false positives.

Where there were multiple obstacles, such as a hill and a forest, false positives appeared which, depending upon the ground, could be discounted by an observer. This was to be expected given the pixel picking taking place.

A tight traffic light schema with tuned clutter was better than a loose schema with larger error margins, as it produces far fewer false positives.

Video and KMZ

This video is a sped-up compilation of time stamped KMZ layers viewed on Google Earth showing the vehicle’s route around the sensor. Where the vehicle disappears, no signal was detected.

The KMZ is available here and works best in Google Earth.

Demo video of Enhanced DF

Conclusion

This testing proved that the effectiveness of a single LoB can be improved greatly with modelling, but the concept is only an improvement if the analysis is automated; doing this manually would not be faster than a map study.

The reason this analysis isn’t performed regularly by DF systems today isn’t for a lack of LoBs and RSSI measurements but rather a lack of APIs with which to exploit this information. Current RF planning software exists as a user interface which requires manual, and skilled, operation. Furthermore, the capability often exists in the wrong location on a high performance desktop computer, disconnected from edge sensors.

By putting this API at the edge on single board computers (SBCs) such as the Raspberry Pi 5 or Nvidia Jetson, a DF system’s effectiveness can be improved. Through open GIS standards like KML, the result can be consumed in open standard GIS systems like ATAK, requiring minimal integration effort to add a powerful capability.

Looking forward, we are speaking with open minded vendors about adding this API to enhance existing systems.

If you’d like to improve your LoBs, get in touch with us or one of our regional resellers.

Links

SOOTHSAYER server: https://cloudrf.com/soothsayer

Kraken SDR: https://www.krakenrf.com/

DF integration demo: https://github.com/Cloud-RF/CloudRF-API-clients/tree/master/integrations/DF

API schema: https://cloudrf.com/documentation/developer

Posted on

Fast simulation calibration with Machine Learning

To Survey or Simulate

Whether it’s survey drones, drive test vehicles or a police analyst with a backpack full of phones, the problem is the same: RF propagation surveys are very resource intensive. They’re more accurate than a simulation, but not more efficient.

Survey data is typically a GPS log with signal metadata including a signal strength value. The measurement units can differ but the principle is the same: it is data which shows the signal at a given point.

Surveying is preferred for good reason by some industries, especially for evidential purposes where the variables in simulation open the door to uncertainty which nobody wants in court. Another reason is legacy desktop simulation software is slow and often inaccurate, especially amongst clutter which is more complex than a topographical study.

For example, relying upon a high-low empirical model like Hata which pre-dates developments in clutter will get you ~8dB accuracy whilst calibrated survey equipment can get you ~2dB, or ~3dB for an app on a standard phone.

Manual calibration

Using survey data like the output of Rantcell’s survey app, we can load this into our web interface to perform calibration manually. This process involves adjusting model and/or clutter settings until the error between the simulation and the measurements is as low as possible. It’s an efficient process, as you can test thousands of points in a single API call, but it is also repetitive, interface based and requires engineer input, for example to adjust clutter values. You can see manual calibration in action here.

A good calibration would be below 8dB. For more on calibration of survey data, see one of our many field test blogs.

Good survey data is thousands of points all around a site of interest. When we’re field testing, we choose our route to ensure we collect a diverse range of data.

The Pizza Problem

The pizza problem is when you only have a slice of data but need to infer the rest. This is very common in the real world where a customer may not be able to collect data all around a site for various reasons:

  • Lack of time
  • Lack of access
  • Lack of resource

This limited data is then used to estimate what the rest of the coverage looks like. For an omni-directional antenna, it’s a good assumption. For a directional antenna, it’s clearly less accurate but crucially, it is about making the best estimate using available data.

If you can get more measurements, then do it. If you don’t have the time or resources, then simulation with calibration offers the best compromise. After all, a ~7dB accuracy prediction is much more useful than no coverage map at all.

A Machine Learning genetic algorithm

Using a slice of data, we can employ a basic Machine Learning model which uses a genetic algorithm to optimise settings. It works by starting with a range of fixed inputs (tower frequency, location etc) and a range of variables (power, tree height, building thickness etc) which it uses to make pseudo-random requests to our Area API.

Script parameters showing variables and their ranges

The responses are fetched as greyscale open standard GeoTIFF images which are analysed using the rasterio library against the survey measurements. The delta between points is recorded as both mean and root mean square error.

The RMSE is the key figure which describes the error between two arrays of results. A low mean is easy to achieve by over-fitting a model, but lowering the RMSE is harder for a diverse environment as clutter must be tuned to allow for trees, buildings and open ground. The script constantly updates a custom clutter profile at the API with new values in between requests.

Many tools won’t publish performance data as it would make it hard to justify their price tag. Some cannot do diffraction which disqualifies them for signals below 2GHz.

The requests are scored individually using RMSE and ranked. The best scores from the generation are selected to breed the next generation and so on until the (user defined) limit is reached. Therefore, the process can be scaled to offer a quick result for a live map or a thorough result for enumeration of unknowns such as tower height, power or even tower location using a search box.
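Here is a minimal sketch of that loop, assuming a user-supplied request_area() which calls the Area API and returns a GeoTIFF path; the variable names and ranges are illustrative, not our production client:

import random

import numpy as np
import rasterio

# Illustrative variable ranges; names and bounds are examples only
VARIABLES = {
    "tree_height_m": (2, 20),
    "building_loss_db": (10, 40),
    "power_w": (1, 10),
}

def rmse(simulated_db, measured_db):
    # Root mean square error between two aligned arrays of signal values
    diff = np.asarray(simulated_db, dtype=float) - np.asarray(measured_db, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def sample_geotiff(path, lonlat_points):
    # Sample the greyscale GeoTIFF at each survey (lon, lat) point
    with rasterio.open(path) as src:
        return [float(v[0]) for v in src.sample(lonlat_points)]

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in VARIABLES.items()}

def breed(a, b):
    # Take each gene from either parent and mutate it slightly, within bounds
    child = {}
    for k, (lo, hi) in VARIABLES.items():
        gene = random.choice((a[k], b[k])) + random.gauss(0, (hi - lo) * 0.05)
        child[k] = min(max(gene, lo), hi)
    return child

def evolve(request_area, points, measured_db, generations=10, population=20):
    # request_area(genome) must call the Area API and return a GeoTIFF path
    genomes = [random_genome() for _ in range(population)]
    best = None
    for _ in range(generations):
        scored = [
            (rmse(sample_geotiff(request_area(g), points), measured_db), g)
            for g in genomes
        ]
        scored.sort(key=lambda t: t[0])
        best = scored[0]
        parents = [g for _, g in scored[: population // 4]]
        genomes = parents + [
            breed(random.choice(parents), random.choice(parents))
            for _ in range(population - len(parents))
        ]
    return best  # (lowest RMSE, best genome)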

At the end of the process, the best values for the variables are shown which can either be used to build a custom clutter profile, as we do in the demo video, or scripted further to make a final layer for a third party interface.

Demo video

Conclusion

Developments in data and performance, accelerated by GPUs, mean accurate and fast simulation is more viable than ever. By delivering this capability in seconds instead of hours, new integrations and capabilities are possible. Furthermore, using a mature API with public examples makes an MVP viable in days.

Using a SOOTHSAYER server, this can be done at the edge without internet access.

A few ideas for new integrations which can leverage this concept:

  • A spectrum analyser with a living coverage map
  • A signal classifier which shows more than metadata
  • An app which shows the impact of antenna adjustments in real time
  • A robot which maintains a live coverage map which can inform route selection
  • An RFPS analyst freed up from walking around so they can focus on analysis

Credits

Thank you to Rantcell for providing rich LTE drive test data and our resident Machine Learning guru, AppyBara, for developing our automatic calibration client. If you would like a copy, get in touch.

Posted on

Live RF coverage mapping with ATAK

Highlights

  • Dynamic radio coverage visualisation
  • Vendor agnostic radio integration via ATAK
  • 450 heat-maps delivered without issue
  • Sub-second computation via Cardshark computer

Background

Three years ago we developed a “live” simulation capability using location-aware MANET radios, described as dynamic radio planning, which fused real and planned radio positions. This feature required a third party hardware API with restrictive terms, common in commercial radio, so it exists as a video demo only.

We’ve refreshed and field tested this concept, using modern edge compute and open standards.

ATAK as the common API

Using ATAK as a broker for proprietary APIs, we are now able to do the same via our plugin. The technology agnostic capability can be used with any radio, vehicle or marker on the map and, by starting with an open information standard, Cursor-on-Target (CoT), it eliminates the commercial friction of NDAs, proprietary APIs and different vendors.

Open standards unlock low-cost cross-vendor interoperability in a way proprietary standards never can. For example, two vendors can achieve compatibility without knowledge of each other’s products. Better still, compatibility with future products, not yet deployed, can be assured.

Mapping live radios in the SOOTHSAYER ATAK plugin

The field test

We picked a local forest to field test this concept using a Cardshark computer, a rugged Jetson Orin with a 1024 core GPU. We need the GPU to efficiently compute our ‘Multisite‘ network heat maps; the ‘Points‘ links are CPU powered. We’ve worked with Jetsons on previous field tests but under manual control. The automation we’ve added here makes periodic API requests and places the computer under a sustained load.

The radio network was four Tait 9300 DMR portables on a 2W channel, with one donor radio connected via a USB programming cable. GPS locations were fetched using our Tait script, which outputs CoT broadcasts to make the radios appear (and move) upon the map.
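For illustration, a minimal CoT position broadcast looks something like this; the event type, fields and multicast group (239.2.3.1:6969 is ATAK’s default mesh SA address) are typical examples, not necessarily what our Tait script emits:

import socket
import uuid
from datetime import datetime, timedelta, timezone

def cot_event(callsign, lat, lon):
    # Build a minimal Cursor-on-Target XML event for a radio position
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    now = datetime.now(timezone.utc)
    stale = now + timedelta(minutes=5)
    return (
        f'<event version="2.0" uid="{uuid.uuid4()}" type="a-f-G-U-C" '
        f'time="{now.strftime(fmt)}" start="{now.strftime(fmt)}" '
        f'stale="{stale.strftime(fmt)}" how="m-g">'
        f'<point lat="{lat}" lon="{lon}" hae="0" ce="10" le="10"/>'
        f'<detail><contact callsign="{callsign}"/></detail></event>'
    )

# Broadcast to ATAK's default multicast group (example coordinates)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(cot_event("DMR-1", 51.0871, -0.7530).encode(), ("239.2.3.1", 6969))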

The testing went well and produced 450 heat-maps to validate both the concept and the computer. Crucially, our 9Ah battery depleted by only 25% during 2 hours of intensive testing. Our conclusion is that with a reasonable load and refresh rate this edge capability can be scaled to run all day, as we found in Scotland earlier this year.

Live RF coverage mapping with ATAK

Speed test!

Five years ago we asked for the ATAK KML refresh rate to be lowered to enable a “follow me” demo we published. Our understanding is it was capped by design at 10s (compared with 1s for Google Earth) due to a concern over excessive bandwidth, which was understandable – at the time. A lot has changed in five years of software and radios, and now that we’re doing the compute locally, this concern is obsolete.

Our GPU engine can model a heatmap in under a second so we bypassed the network KML functionality (which is still a valid way of refreshing heatmaps on ATAK as our Tait plugin does) and implemented our own refresh system, designed for fast moving data. During our speed test, we refreshed the heat-map every 5s which both ATAK and the Cardshark handled comfortably. Logs showed each simulation took under a second with another second for pre/post processing and another for communication. The points requests take 150ms and are called for each radio so four radios would be 600ms, excluding communication.

Issues identified

We identified issues relating to USB tethering which weren’t apparent in the office: the Cardshark does not have the WiFi we relied upon for previous field tests, so communication was more challenging.

We were able to work around this for the test with a WiFi hotspot to fool the plugin into thinking it was on a network. As we were using dynamic IP addresses provided by the phone and the Cardshark has no interface, we ex-filtrated the IP information we needed via ATAK, which is why there is an IP-address callsign visible in the video.

The Cardshark is a fanless design which requires airflow to cool it. We deployed it in a bum bag / fanny pack where it unsurprisingly became hot during intensive use but still functioned well. For enduring use, this would need to be mounted externally and the workload throttled accordingly.

Our radio template needed work: only afterwards did we note we had not set the DMR template’s noise floor, which defaulted to -133dBm based upon the narrow 12.5kHz bandwidth. This was why there were blue 50dB links visible in the video when in reality the noise floor was likely closer to -113dBm and these links were a more realistic 30dB SNR. This issue did not affect the heatmap, which used received power units, and we’re satisfied from calibrating with large data sets that the modelling is accurate.
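For reference, that -133dBm default follows directly from thermal noise (kTB) at the channel bandwidth:

N = -174\,\mathrm{dBm/Hz} + 10\log_{10}(12{,}500\,\mathrm{Hz}) \approx -174 + 41 = -133\,\mathrm{dBm}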

Credits

A special thanks to GoTak LLC who helped us develop and test the live Co-Opt feature in ATAK and Carnegie Robotics for producing the Cardshark and providing timely support.

Links

SOOTHSAYER self hosted server: https://cloudrf.com/soothsayer

Cardshark computer: https://carnegierobotics.com/cardshark

SOOTHSAYER ATAK plugin: https://github.com/Cloud-RF/SOOTHSAYER-ATAK-plugin

Tait ATAK plugin: https://github.com/Cloud-RF/CloudRF-API-clients/tree/master/integrations/Tait

Fanny pack: https://www.osprey.com/gb/osprey-seral-7-s23?size=One+Size&colour=Black

Posted on

Mapping Noise

SDR radios

Noise is the single biggest factor in determining the quality of a communications link. It’s also the reason why there is low confidence in the accuracy of (RF) simulation in complex environments as it’s rarely done well, if at all.

Budgeting for noise is critical to achieve desired signal levels. Historically, it was done with a single figure to satisfy all locations, eg. ‘-100dBm’. This simplification is a time/accuracy trade-off and is no longer relevant in the age of dynamic spectrum management and cognitive radio.

Noise varies widely between locations, and changes constantly, so we have invested in developing living noise maps to reflect this dynamic nature. Like a terrain layer that moves, noise data can be used to improve the accuracy and relevance of planning in dynamic environments.

Using SDRs and APIs to improve simulation accuracy with live noise

Evolution of simulating noise

A noise figure (2022)

Back when we added Signal-to-Noise (SNR) output units in API v2.7, we needed to express the noise floor as a (dBm) figure to provide a reference for a signal’s quality eg. 15dB. Users interested in SNR enter a single value like -100dBm, hopefully based on the local environment, to describe noise across the entire area, or link. As this guesswork is prone to error, we automatically recommended a conservative value, to budget for high noise.

For example if the thermal noise for a narrow channel is -133dBm, our interface automatically recommends -113dBm as a floor for planning which provides 20dB for unknown noise.

The noise figure could be measured direct from a networked sensor which we published in early 2023.

A Noise database (2023)

Noise varies by location (and frequency) and the previous method didn’t scale so we developed a noise API to store noise data and reference it in calculations. The private data was used on a per-site basis so you could model a network with different noise at each site. A marked improvement on a single figure.

This development represented a leap forward in network planning as each node could be configured for the local environment, which can vary drastically. Two different users might have different needs so it is isolated to the user’s account. A multisite API call accepts different noise values for each site.

A Noise map (2025)

Building upon our Noise API, stored data was used to generate a noise map, specifically a raster layer of measurements, similar to clutter which our API can reference. This noise map describes thousands of noise points across the area or link of interest and provides high resolution noise. Now you can see the real impact of noise with minimal effort at each location covered.

Any calculation requested with the database option versus the legacy single figure method will create and use a noise map at the API. The quality of the noise map is determined by the data you can provide; any missing values will be interpolated. The maximum resolution is 12m, supporting dense urban planning, so you can have different noise levels in adjacent streets, which is common in urban canyons.

Better still, with live noise data, you get live coverage. Ideal for autonomous systems and future spectrum management systems which will need to be automated to remain relevant.

SNR

Collecting noise with DORA

DORA (Distributed Open Receiver API) is an open source project sponsored by CloudRF designed to collect noise measurements using various Software Defined Radio (SDR) receivers and mature open source utilities.

It uses consumer grade SDRs via a remote service present on the DragonOS operating system. Designed for SBCs like Raspberry Pis, DORA presents a common API for RF sensing across different radios. Nodes perform a local FFT to measure average power (with configurable bandwidth) and then publish the PSD data via an API endpoint. A server fetches and collates these to present them in an interface to provide spectrum visibility.
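As a sketch of that measurement step, here is a simplified Welch-style average over FFT segments; real SDRs also need a per-device calibration offset to report true dBm, which we’ve omitted:

import numpy as np

def average_power_db(iq, sample_rate, fft_size=1024):
    # Split complex IQ samples into windowed FFT segments
    segments = len(iq) // fft_size
    x = np.asarray(iq)[: segments * fft_size].reshape(segments, fft_size)
    window = np.hanning(fft_size)
    spectra = np.fft.fft(x * window, axis=1)
    # Power spectral density, averaged across segments
    psd = np.mean(np.abs(spectra) ** 2, axis=0) / (np.sum(window ** 2) * sample_rate)
    # Integrate the PSD over all bins to recover total power
    total_power = np.sum(psd) * (sample_rate / fft_size)
    return 10 * np.log10(total_power)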

When a CloudRF API key is provided, the server sends data to our noise API giving a user live noise data for accurate planning. DORA’s low cost (£200 BOM per node) makes it scalable and cost effective. It won’t give you a pretty waterfall like Government spec hardware but it will provide the scale needed for autonomous spectrum management, powered by the CloudRF API.

If you do want an open source FFT and waterfall we recommend OpenWebRx.

You can contribute to the future of scalable spectrum sensing over on Github with issues, feedback and features.

Summary

This noise map feature is live now and works with any receiver capable of reporting noise as dBm. The benefit of using live noise in planning is improved accuracy but also relevance, and in time confidence, as the simulation will match the environment.

Posted on

The art of HF

We’ve published a series of video tutorials for HF novices to bring HF theory to life, covering frequency selection, antenna fundamentals and forecasting.

HF communications is very different to terrestrial communications and given the right frequency, time of day, and antenna, you can achieve long range links in excess of 1000km with only a few watts of power on a HF Dipole.

CloudRF’s API uses the proven VOACAP engine to create accurate HF predictions considering a number of factors. Using this tool, you can plan long range resilient links and anticipate the (time based) surprises HF throws up…

Frequency Selection

Time is critical to HF communications. As sunlight changes throughout the day, so does the range of usable frequencies. A common strategy for round the clock communications is to maintain a day and night frequency. These can be identified using the VOACAP powered path tool in CloudRF.

This tool reports the Signal-to-Noise ratio against time for different HF frequencies.

Antenna Basics

Once a frequency has been identified, an antenna must be constructed to the right dimensions.

The antenna of choice for many long range HF links is the half wave dipole. This simple and efficient design uses two fixed length elements and a center feeder to bounce signals off the ionosphere.

To get the wavelength for your frequency divide 300 by the frequency in MHz.
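For example, for the 11MHz signal used in the Height section below:

\lambda = \frac{300}{f_{\mathrm{MHz}}} = \frac{300}{11} \approx 27.3\,\mathrm{m},\qquad \frac{\lambda}{2} \approx 13.6\,\mathrm{m},\qquad \frac{\lambda}{4} \approx 6.8\,\mathrm{m}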

Height

The height of the antenna will change its radiation pattern and the take off angle.

Achieving at least a quarter wavelength is recommended for efficiency (and practicality) as the long HF wavelengths make a half wavelength too high for most masts. For an 11MHz signal, the height would need to be 6.8m for example.

Azimuth

HF patterns are directional and must be orientated towards a distant station. For some antennas, like long end fed wires, this is as simple as pointing the wire towards the station.

For a half wave dipole, which has a donut shaped radiation pattern, it must be broadside to the station as it has nulls off the wire ends. A bad azimuth can be forgiven at short range but will limit the potential range.

Feeder loss

Using a feeder co-axial cable will reduce the efficiency of your system. The effect increases with length so you should aim for the shortest, low loss, feeder possible. The impact of the feeder can be simulated even if you don’t know the cable type by entering 1 or 2dB into the feeder loss option.

Forecasting

Sunspot R12

The Sunspot index number describes solar activity, which follows a roughly 11 year cycle. When the number is high, there is increased solar activity and better refraction. The difference between a good and bad year within the cycle is around 6dB, which on the S Meter scale is equivalent to two levels, or the difference between success and failure.

This can be predicted and the random element budgeted for using the model’s reliability value.

Posted on

Phase Tracing interface

Phase Tracing Interface

Simulating indoor radio coverage for first responders has been made simpler thanks to a new capability called Phase Tracing.

The novel design was influenced by the 2017 Grenfell Tower inferno, where radio communication in concrete stairwells was highlighted as a major problem. The Grenfell inquiry highlighted radio and training issues in the report, which had a section dedicated to communications.

During the inquiry, expert witnesses were unable to demonstrate how far a signal would travel within the tower, even with the availability of indoor planning tools. Estimated distances offered to the inquiry were based upon empirical measurements from elsewhere and were at odds with witness statements from firefighters who reported losing communication after only four floors and communicating with paper notes.

Multi-path in a stairwell

The intensive computation required to perform a true 3D simulation with reflections has been made practical through developments in graphics processing. As a result, accurate radio coverage in stairs, tunnels and elevator shafts can be simulated, at the network edge, by an operator with minimal training.

In contrast to legacy indoor planning tools, which use floor plans and images, Phase Tracing is designed for critical communications and industrial markets in challenging and dynamic 3D environments, represented by digital models.

Models not floor plans

Phase tracing in a multi floor open plan office

Phase Tracing represents a leap forward for radio simulation from overlaying images upon a 2D map or floor plan, suitable for an estate agent, to using a digital twin 3D model which considers all floors, and the obstructions in between from stairs, to air ducts and pylons. Simulating reflections is critical for indoor modelling which is a pillar of the design.

There is also a huge gap in the market between indoor simulation packages, with the skill required to use them effectively, and first responders who are left guessing where they will lose communications on a stairwell. This gap has been closed by developments in computation, namely GPU processors, and web technologies which mean this powerful API can be used from a low power touchscreen device.

A little movement…

RF theory students who are taught the impact of multi-path now have a tool to visualise and explore this important concept, so they can see why “a little movement may cure a dead spot”. Better still, they can identify constructive “good” multipath they didn’t know about.

Tarana antenna at a railway station with a bridge and pylons

The GPU accelerated engine reads and writes open standard glTF models and uses ray tracing techniques from computer games to bounce photons around the model. With the addition of phase, multi-path artefacts such as signal “dead spots”, where out of phase signals on the same wavelength cancel each other out, can be modelled.

The number of reflections, material attenuation and scattering properties can be configured. This is essential for modern buildings which are built with materials which disrupt radio communication.

Applications

Phase Tracing has a distinct advantage over 2D modelling for the following 3D obstacles in most wireless industries.

  • Stairs
  • Tunnels
  • Bridges
  • Towers
  • Pylons

Design

The Phase Tracing capability is built upon our 3D API, which we launched last year with a Blender plugin. The API can be called directly to integrate the output into other model based systems, or even viewed in a standalone HTML5 viewer.

Touchscreen interface on a tablet

The interface and API are radically different to our map based Globe. For starters, there are no geographic coordinates; positions are in Cartesian XYZ co-ordinates relative to the origin at 0,0,0. This is so you can work with models which might not have a geo reference or, in the case of design, might not even exist yet.

Photons and Phase

The 3D engine is a CUDA accelerated pipeline, like our 2D GPU engine, which processes jobs asynchronously to service multiple users. It creates a voxel model from a glTF file which it then radiates photons around. A photon will reflect from obstacles until it runs out of energy or reaches a reflection limit. Unlike Ray Tracing, a legacy technique for indoor modelling, these photons maintain their phase so multi-path can be simulated in all directions.

Each reflection typically costs several decibels of power so there is a practical limit, depending on the material, after which the photon is too weak to be useful and should be killed. The engine can model up to 30 reflections per photon; reflections do not impact performance as much as the number of photons, currently set to 2e6. The required number of photons depends upon the model: if you have a small office and need to decide where best to put a Wi-Fi Access Point, you don’t need many.
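As a toy illustration of that energy budget (the loss per bounce and cut-off here are hypothetical; the engine’s real thresholds are internal):

def reflections_survived(power_db, loss_per_bounce_db, floor_db=-120, limit=30):
    # Count bounces until the photon is too weak to be useful,
    # or the engine's reflection limit is reached
    bounces = 0
    while bounces < limit and power_db - loss_per_bounce_db >= floor_db:
        power_db -= loss_per_bounce_db
        bounces += 1
    return bounces

print(reflections_survived(0, 6))  # 20 bounces before dropping below -120dB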

Reflections in a Microwave oven

If however, you need to model reflections up a stairwell, along a corridor and into a flat you need millions. This isn’t fast, or pretty, but such is the nature of critical communications. We’ve fixed the photon limit on CloudRF to deliver a calculation in under 30 seconds for a large model. A small model will be quicker.

VR/AR support

The cross platform interface uses three.js and the WebXR library, which supports Virtual Reality and Extended Reality devices. We have an XR branch we’re playing with on a Meta Quest but are having a literal headache of an issue: it is so immersive you get vertigo exploring tall models. Once this is sorted, likely by AR, we’ll merge it. Last year the 3D output was integrated into a third party Hologram interface.

VR controllers in a development emulator

Demo Gallery

We have an interactive demo gallery of 3D models you can explore on our Github pages. To use these demos you will need a WebGL capable web browser like Chrome. You can use your mouse to zoom in and explore the models or download them as GLB to view on your phone using an app like glTF viewer. iPhones support these GLB models natively.

Roadmap

The API and version 1.0 of the interface have been published. The API can be used by Silver and Gold customers and the interface is restricted to Gold only presently whilst we build more infrastructure to support this.

June 2024 – 3D API

  • Upload glTF model
  • Perform multi-site simulation using transmitter parameters
  • Configurable material attenuation
  • Configurable reflections and attenuation
  • Blender plugin
  • 1e6 photons
  • Mega voxel limits

Jan 2025: Phase Tracing 1.0

  • Cross platform web interface
  • GLB Model management (Add, Remove)
  • Local model caching
  • 3D antenna models built from user’s antennas
  • Click to aim
  • Configurable reflections, resolution and default material density
  • 2e6 photons
  • Save/Load settings as JSON

TBC: Phase Tracing 1.1

  • Official VR/XR support
  • GLB download
  • Material manager for construction materials
  • Biasing for speed boost
  • Configurable photon limits – linked to plan

Sample GLB models

Upload these glTF binary models into the interface or another tool such as this handy free viewer.

You can validate your models with another free tool here.

Posted on

Interference analysis

Microwave dishes

Interference is one of the biggest issues in radio and limits the potential of a system or network.

There are different types of interference but the problem of interference visualisation is common to all. With simulation software you can model your system, and an interfering system, but understanding the interplay where the coverage of the two overlap is crucial. Like many radio engineering concepts it’s a complex topic so making it simple requires abstraction which our API provides.

Up until now we offered a basic interference capability, capable only of colour promotion. It was unable to consider signal parameters or to show the level of interference.

Enhanced Interference API

The upgraded interference API considers the signal parameters frequency, bandwidth and power. It accepts two arrays of sites, one for the “signal” network and another for the “noise” network so you can compare two sites or scale the concept for two networks.

Frequency is obvious as two local signals on the same wavelength will interfere. This technology agnostic API considers the signal as a constant carrier. This means it does not consider features of the waveform since modern technologies, like 802.11, employ back-off mechanisms in the PHY to manage collisions whereby a transmission will pause momentarily if it detects noise.

Bandwidth is important as even if the signals are on different channels, their bandwidth may overlap. In 802.11, adjacent channels overlap by design when using wide (20MHz) signals but the amount is small enough that the spread spectrum signal can overcome it in error recovery mechanisms. As a result, many signals can operate in a dense slice of spectrum.

Power is harder to plan for in spectrum planning, where the focus is normally on frequency management, and it is the source of most interference reports. Even if two signals are on different channels, with non-overlapping bandwidth, they can still interfere if one of them is sufficiently powerful. This is because a signal produces frequency harmonics at multiples of itself and its power in the spectrum appears as a Gaussian function which looks like a bell curve. A powerful signal will bleed power into the spectrum adjacent to it and, if a receiver does not have an adequate filter, it will receive this power even if it’s on an adjacent channel!

Presenting interference

We use decibels (dB) as the measurement unit to describe interference along with a special purpose colour key called JS (Jam to Signal). The J/S ratio, as the name implies, shows the interference (Jammer) power over the signal power. A bad JS ratio implying strong interference would be greater than 0 eg. 12dB and a good ratio would be negative eg. -12dB.
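The ratio itself is just a subtraction of received powers in dB; a trivial sketch with example values:

def js_ratio_db(jammer_dbm, signal_dbm):
    # Jam-to-Signal ratio in dB: positive means the jammer dominates
    return jammer_dbm - signal_dbm

print(js_ratio_db(-80, -92))   # 12dB: strong interference
print(js_ratio_db(-104, -92))  # -12dB: signal dominates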

The level at which this interference presents a problem to a given waveform varies. Some waveforms, such as LoRa, are designed to operate within noise; others, like WiFi, fail gradually as noise increases. When people say “the WiFi is slow” yet they have a strong signal, the problem is interference, which causes sampling errors and reduces data bandwidth.

Using -3dB as an interference limit in planning is recommended. This is green on our colour key.

Anything higher than this and there will be reduced performance and speeds. An interference ratio higher than 0dB will likely stop you communicating altogether if your signal requires a positive SNR – as most do. For reference, high capacity data waveforms require 20dB SNR whereas commercial telemetry requires less, at 3dB SNR.

Demo: Signal jammer (Frequency)

This high resolution frequency demo shows the impact of a 10W signal jammer against a high powered urban rooftop cell tower radiating ten times more power at 100W EIRP.

Despite being near to the strong, and elevated, cell, the lower powered omni directional jammer is able to overcome the cell in building shadows and coverage nulls caused by the directional antenna pattern.

Where the interference is equal to or greater than 0dB, it is very likely that cell coverage would be disrupted.

Demo: FM broadcasting (Power)

This is a power problem whereby channels have been separated in frequency but there is interference from neighbouring channels. This is because a signal is shaped with a Gaussian function resembling a bell curve and has power either side of it in the spectrum. The stronger the signal, the more power bleeds into neighbouring channels.

Demo: Microwave link (Bandwidth)

A high power microwave link uses parabolic dishes to focus a high bandwidth beam towards a distant point.

On the path of the link is a relatively low power 3GHz cellular system, separated in frequency by 45MHz. There is no guard channel so the two signals are adjacent to each other. The directional pattern experiences interference at the edge but is not affected on the main beam.

API demo

We have published a new API demo to demonstrate this scalable capability with vehicles using PMR 446 radios, which are interfered with by other vehicles using a different technology in the 446 band.

It uses our Multisite API to model each network for the Signal (Blue) and Noise (Red). When a transmitter, or vehicle in this case, moves, the network is updated and the interference simulated.

Link: https://cloud-rf.github.io/CloudRF-API-clients/slippy-maps/leaflet-interference.html

Documentation

API reference

https://cloudrf.com/documentation/developer/#/Analyse/interference

User documentation

https://cloudrf.com/documentation/04_web_interface_functions.html#interference-analysis

Complete Code example

https://github.com/Cloud-RF/CloudRF-API-clients/blob/master/slippy-maps/leaflet-interference.html

Posted on

HF Skywave

HF Ionospheric propagation, known as Skywave, has been added to the CloudRF API.

While Satcom offers clear advantages in terms of speed, bandwidth, and the ability to provide real-time, high-quality data transmission, HF remains a crucial alternative where cost, independence from satellites, portability, power efficiency, and resiliency are important.

Wide area simulation

The new /hf/area API endpoint uses the proven VOACAP engine to simulate coverage for a given transmitter at a given time of day. It combines CloudRF’s familiar JSON interface structure (transmitter, antenna, receiver, environment) with VOACAP antenna patterns and temporal parameters (Month and hour).

The maximum range for the simulation is 10,000km, which takes several seconds to compute, and like other (area) layers it can be exported to KMZ, GeoTIFF and SHP.

API reference: https://cloudrf.com/documentation/developer/#/Create/hf%2Farea

Link frequency prediction

The new /hf/prediction API endpoint provides a HF frequency prediction capability, powered by VOACAP, for a given link. It uses the same workflow as the path tool whereby the transmitter is placed upon the map, the path tool is selected and the receiver is placed.

The output is a chart of frequency SNR values for the link across the HF band from 2 to 20MHz and hours of the day. As ionospheric activity varies considerably between day and night, the chart helps select an optimal frequency and can be exported to PNG.

SNR graph for a HF link

API reference: https://cloudrf.com/documentation/developer/#/Create/hf%2Fprediction

HF parameters: Time and Sunspots

Time is critical with ionospheric communications so the “model” section has been extended to include three new variables.

Diffraction is disabled and the two familiar reliability and context variables can be tuned as with other models to match empirical measurements.

Month

The month is defined as an integer from 1 to 12.

The relevance to HF is that during the summer, solar activity, and therefore refraction, is increased.

Hour

The hour is defined as an integer from 0 to 23 and is in UTC.

The relevance to HF is that during darkness, the lower “D” layer collapses so RF which was previously attenuated is free to reach the higher “E” and “F” layers which increases both noise and global reach.

Sunspot R12 number

Solar activity follows a roughly eleven year cycle which can be predicted. The year 2024 is at the peak of this cycle so the “R12” number is high at ~150. In 2030 this will be low (~25) and equivalent HF links will be more difficult.

Solar activity is subject to random bursts of radiation (Sporadic E) which makes predicting and calibrating HF communications harder than short range terrestrial links. Using empirical live measurements from sounders and crowd sourced networks, the random element can be mitigated but not removed.

Sunspot chart © Australian Bureau of Meteorology

HF Antennas

HF antenna theory is a complex art form. For example, the same physical antenna will produce very different propagation depending upon its height above the ground where reflection takes place.

The reflection depth varies by terrain: a swamp has a water table at ground level but a desert could have a reflection point much deeper, so an antenna may not need elevating much at all to achieve a link.

This reflection height determines the “take off angle” and skip zone: an antenna slung high between two tall masts has a lower take off angle and longer range, compared with a waist height antenna, which has a steep angle and a relatively short range suited to Near Vertical Incidence Skywave (NVIS), for which we have a separate model.

When selecting the HF model, you will be given a fixed list of VOACAP antenna patterns with which you can manipulate the gain, height and the azimuth. A typical gain figure for a dipole will be between 1 and 2.15dBi whereas a LPA could be 9dBi depending on elements.

  • ITSA-1 Horizontal Dipole (API code 2): Center fed symmetrical design with arms matched to half a wavelength. Deployed broadside to the target.
  • ITSA-1 Horizontal Yagi (API code 3): Directional array which can be steered to a target.
  • ITSA-1 Vertical Log Periodic (API code 4): Highly directional array with a focused beam pointed at a target.
  • ITSA-1 Sloping Vee (API code 5): Simple directional antenna which only requires one mast. Arms are measured to 1/2 the wavelength and staked out towards the target.
  • ITSA-1 Inverted L (API code 6): End-fed mixed polarisation antenna capable of local groundwave and distant Skywave. 1/4 wavelength up and 1/2 wavelength along.
  • ITS-78 Inverted L (API code 8): End-fed mixed polarisation antenna capable of local groundwave and distant Skywave. 1/4 wavelength up and 1/2 wavelength along.
  • ITS-78 Sloping Long Wire (API code 9): Simple end-fed wire orientated towards the target.
  • ITS-78 Arbitrary Tilted Dipole (API code 10): Dipole tilted to achieve maximum gain.
  • ITS-78 Terminated Sloping Vee (API code 11): Simple directional antenna which only requires one mast. Arms are staked out towards the target and terminated with a load.

Table of HF antennas and CloudRF antenna codes

Skip zone for a low take off angle

Example HF API request

This simulation request is for a 10MHz half wave dipole, 8m above the ground, radiating 30W of RF power.

The month is November and the time is 8am UTC.

The sunspot R12 number is ~150 for late 2024 based on the solar cycle.

{
    "site": "Tx",
    "network": "HF",
    "transmitter": {
        "lat": "46.3936",
        "lon": "6.8835",
        "alt": "8",
        "frq": "10",
        "txw": "30",
        "bwi": "0.012"
    },
    "receiver": {
        "alt": "1",
        "rxg": "1",
        "rxs": "-125"
    },
    "antenna": {
        "txg": "2",
        "txl": "0",
        "ant": "2",
        "azi": "45"
    },
    "model": {
        "pm": "13",
        "pe": "3",
        "rel": "50",
        "month": 11,
        "hour": 8,
        "sunspots_r12": 150
    },
    "output": {
        "units": "m",
        "col": "HF.dBm",
        "out": "2",
        "nf": "-112",
        "res": "20",
        "rad": "5000",
        "bounds": {
            "north": 89,
            "east": 62.25,
            "south": 1.42,
            "west": -48.49
        }
    }
}

HF Calibration

Using crowd sourced link measurements from the amateur Weak Signal Propagation Reporter (WSPR) network, the VOACAP engine can be calibrated using our native calibration utility. Data can be filtered by station and processed ready for import to the calibration utility. The data reports Signal-to-Noise (SNR) measurements which reference an unknown noise floor. We recommend an average value of -100dBm for the 10MHz noise, as any measurement error is smaller than the other variables, such as the variable solar radiation and the distant station’s antenna gain.

The WSPR data required pre-processing to convert the 6 figure Maidenhead grid squares to WGS-84 coordinates. The maidenhead python library is recommended for this.
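For example, with the maidenhead library (the locator here is an arbitrary example, not a WSPR station):

import maidenhead

# Convert a 6 figure Maidenhead locator to WGS-84 (lat, lon)
lat, lon = maidenhead.to_location("IO91wm")
print(lat, lon)  # approximately 51.5, -0.17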

Filtering a time window is critical due to the propagation changes which occur throughout the day. For example, you cannot calibrate measurements from day and night together but you can calibrate an hour as a separate file.

Calibrated 10MHz signal from UK across Atlantic at 9am UTC
WSPR data for station G8ORM
HF propagation on 10MHz over 24hrs

A look ahead

Now that we have published an API, we want to integrate it into some systems. ATAK is an obvious candidate for starters, but imagine a HF radio which can see into the future, or an ALE modem which doesn’t need to radiate to know whether it can, or cannot, communicate.

This capability will be available with the next SOOTHSAYER release.

Credits

Credit to the many VOACAP developers and maintainers who have sustained this powerful capability over the years. It is arguably one of the most senior pieces of operational software in the commercial world.

We have only put a shell on this incredible engine but hope our API will introduce a new generation of software developers and communicators to the magic of HF.