Smart farming is the use of Internet of Things (IoT) technologies in agriculture to enable efficient use of resources.
For this blog we're focused on cattle farming on large, fence-less farms in New Zealand. The farms in question are vast and remote, so connectivity options are limited. This is why an off-grid sub-GHz LPWAN network is ideal: it offers long range, and only small, infrequent packets of data need to be sent.
For the solution to be cost-effective compared with satellite, as little infrastructure as possible is needed: in this case an LPWAN gateway on a pole, some collars for the herd and an app to manage the system via a web service.
Siting the LPWAN gateway(s) properly is critical not only to achieving coverage across the farm(s) but to reducing the number of gateways, which reduces complexity and cost.
Sub-GHz LPWAN on the farm
An 868MHz LPWAN signal can go for many miles under the right conditions. We know this well from powering the Helium LPWAN network’s planning tool, Helium Vision, where people can communicate data 50 miles with a fraction of a watt of RF power and an omni directional antenna.
Despite its useful diffraction properties, which enable it to work non-line-of-sight (NLOS), it's still sensitive to obstructions, so clutter on the farm such as buildings and trees needs modelling accurately. CloudRF has 10m landcover for New Zealand from the European Space Agency and 10m DSM from the LINZ geospatial agency.
These data sets are adequate for most outdoor scenarios but are not fine enough to model a farm complex of buildings, such as tall grain silos, metal sheds and seasonal obstacles. For high resolution you could source your own surface model, as our customer Halter did…
Use case: Halter
Halter are a novel agri-tech startup focused on cattle management with a unique solar powered collar.
They needed accessible RF planning software to help their engineers site LPWAN gateways. Having used and liked Cloud-RF, they needed higher resolution surface models of the farms, and no pesky API restrictions!
They also planned to build their own tools on top of our powerful physics based API which is smart as it allows their R&D team to focus on their primary product, and not waste time reinventing the wheel.
Their options were either to buy expensive commercial data or to self-generate it using a drone and photogrammetry software such as Pix4D. Given the prohibitive cost of high resolution commercial LiDAR, it would only take a few jobs to make a return on the purchase of a decent drone!
Halter purchased a private Keyhole Radio server from us which included the API they needed. The server runs as a virtual machine and, crucially, lets them import their own terrain data.
They were quickly able to import high resolution, organic data into their server as GeoTIFF files. This allowed them to work with data which was very current, even hours old, so would be an accurate model of tree heights and man made obstructions.
The terrain format accepted by Keyhole Radio and SOOTHSAYER is GeoTIFF, Int16 resolution and WGS84 (EPSG:4326) projection.
It wasn't all plain sailing though: they found there was a limit to the physical tile sizes our server could use, caused by memory. The solution was to reprocess the large tile into smaller tiles to make it digestible.
A 5000 x 5000 GeoTIFF at Int16 resolution will require 50MB of disk space. If this is 5m LiDAR, the physical width is 25km x 25km. Our engine can super-sample, so if you used this tile, but requested 1m resolution, it would create a raster in memory measuring 25,000 x 25,000 pixels which would need 1.25GB of memory.
For 1m resolution however, tiles measuring 1000 x 1000px would only require 2MB of disk and memory. You may need to load in a few, let's say 16, to do your model but that's still only 32MB.
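The tile arithmetic above can be sketched in a few lines of Python. This is a back-of-envelope helper for sizing your own GeoTIFF tiles, not part of our product:

```python
# Back-of-envelope sizing for uncompressed Int16 GeoTIFF terrain tiles,
# reproducing the figures quoted above.

def tile_bytes(width_px: int, height_px: int, bytes_per_pixel: int = 2) -> int:
    """Size of an uncompressed Int16 raster in bytes (2 bytes per pixel)."""
    return width_px * height_px * bytes_per_pixel

def supersampled_bytes(width_px: int, height_px: int,
                       native_res_m: float, requested_res_m: float,
                       bytes_per_pixel: int = 2) -> int:
    """Memory needed when the engine super-samples a tile to a finer resolution."""
    scale = native_res_m / requested_res_m
    return tile_bytes(int(width_px * scale), int(height_px * scale), bytes_per_pixel)

if __name__ == "__main__":
    # A 5000 x 5000 Int16 tile: 50MB on disk
    print(tile_bytes(5000, 5000) / 1e6, "MB")
    # The same 5m tile requested at 1m: 25,000 x 25,000 px = 1.25GB in memory
    print(supersampled_bytes(5000, 5000, 5, 1) / 1e9, "GB")
    # A 1000 x 1000px 1m tile: 2MB
    print(tile_bytes(1000, 1000) / 1e6, "MB")
```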
You could also resolve this by increasing the memory available to the server but it’s recommended to prepare data into smaller parcels. We support 1m resolution in our API but don’t hold a lot of 1m data sets due to their substantial cost and size. If you already have 1m data, a Keyhole Radio or SOOTHSAYER server is the answer.
Cloud-RF’s powerful API is ideal for efficient smart farming.
Our private servers will let you take it to the next level with terrain data you can source yourself, no API restrictions and as a bonus, they work without an internet connection!
We took a field trip to model LTE (4G) coverage in order to collect data we can use to develop calibration utilities and improve modelling, because modelling is only as accurate as your inputs.
We focused on a single remote cell in the Peak District national park, identified through cellmapper.net. We expected to find one cell only but were surprised to be serviced by several distant LTE cells not evident in the crowd-sourced app. Equally significant, we established limited or no coverage in an area where a national map suggested coverage was available.
Data revealed the crowd-sourced coverage app was conservative in rural areas
Data revealed the operator's network map was optimistic in rural areas
Modelling was matched with 2.5dB RMSE for a cell 12km away
Modelling was on average accurate to 5.5 dB RMSE
Improvements to modelling have been identified
Equipment and process
We used a rooted Samsung Galaxy Tab with an integrated Qualcomm X11 LTE modem, running both Network Signal Guru (NSG) and Cellmapper. NSG requires root access to lock to a cell, which was necessary to stop our survey tablet hopping between not only protocols (2G, 3G) but also neighbouring cells.
Cellmapper is a crowd-sourcing app which writes signal strength readings to a CSV file, convenient for our analysis. Before embarking we planned a route around a remote cell on the edge of available coverage maps.
Both apps record various LTE power levels such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ) and Received Signal Strength Indicator (RSSI). For this test we used RSSI, which is typically a stronger value than the others as it is the measured carrier, irrespective of bandwidth.
Receiver measurement calibration
Radio receivers are subject to measurement error, typically ranging from 0.5dB for very expensive equipment to 3.0dB for consumer grade kit. As we were using a consumer grade Snapdragon 662 SoC with an X11 LTE modem, we needed to find out its measurement error. The Qualcomm datasheets we could find didn't list this value so we used empirical measurements to establish it.
During our survey we paused at a site 3km from a tower with line of sight where we recorded continuous power readings with the tablet static on the ground for about 15 minutes. Consistency is essential for calibration. We have analysed these readings to establish a standard deviation in readings of 3.1dB for the X11 LTE modem, which puts it at the consumer end of the spectrum for survey device accuracy, in accordance with its price.
We use the error of 3.1dB in our analysis by subtracting it from the Root Mean Square Error (RMSE).
Setting off uphill from Derwent Dam car park with Sheffield’s man-of-the-mountain, Chris, we approached our target cell located on the hillside below us.
As we neared 500m we used NSG to lock onto a strong LTE signal which we believed was the target (CID 130256660, PCI 270), based on proximity and strength (-60dBm RSSI). It was in fact a tower on a distant hill 12 km to the south with line of sight, CID 131413770. Surprise number 1.
At the top of the hill we could see the target tower’s directional panels which confirmed it was configured to serve the A57 “Snake pass” road below. One panel was oriented north-west towards Manchester (CID 130256660) and the other south-east towards Sheffield (CID 130256650). Based on the dimensions of the panels we estimated their beamwidth as at least 120 degrees and gain of at least 10dBi.
As we passed the eNodeB along the hill top we were conscious of a number of cell neighbours and performed a targeted re-selection, where we ended up briefly attached via the antenna's back-scatter to *6650, made possible by our proximity. This didn't last for long before we re-selected to a strong signal (PCI 337) which we were convinced was CID *6660. It wasn't. Surprise number 2.
We later found out this was also a distant cell with line of sight near Hope, 7km due south of us!
Marching on happily with a great signal, we started a gentle descent until we lost the horizon behind us. At this point the neighbours we observed disappeared and our serving cell (in Hope) became very weak as we entered the signal’s (diffracted) beyond-line-of-sight (BLOS) shadow. As predicted in the video based on the surrounding high plateau we could see, we lost the signal as we continued to head north toward Alport Castles, a local feature. We descended into the valley below without a signal (despite a national map suggesting otherwise) and continued the next 3.5km without any coverage at all 🙁
As we exited the remote valley, heading towards the A57 road, we reacquired a signal and finally locked onto our target, *6660, with an excellent signal and line of sight (Credit to Chris for spotting the tower in the trees at 2.7km!). As observed, the directional pattern was focused on the road and we were in the main beam.
A quick map study and we elected to march west until we lost the signal. Thankfully this occurred after only 1.5km as we handrailed the road over undulating terrain. We followed the same route back to the acquisition point, which doubled our measurements for this section.
From the A57, in the main lobe, we climbed the hill to the south east and headed back toward the target cell. Knowing it had a directional pattern, we anticipated signal strength decreasing as we exited the main lobe which we confirmed as we drew parallel with the cell, eventually circumnavigating it to the south.
As we exited the beam of *6660 and entered the influence of *6650 we re-selected for the final phase of our journey, which would take us into a steep ravine and then up the hill, right past the cell.
A sweaty climb up a steep hill behind the cell saw signal strength and field testing enthusiasm collapse, which was fixed with some fizzy snakes. We lost the cell for good only 500m behind it due to the convex hill and directional pattern.
Moral of the story is, in RF, proximity to an access point is no guarantee of service!
Hagg Farm (South East) – LTE Band 20 10MHz
Hagg Farm (North west) – LTE Band 20 10MHz
Distant cell 12km south (CID 131413770) – LTE Band 20 10MHz
Distant cell near Hope (7km south) – LTE Band 20 10MHz
Table of serving cells featured in our data
The cells all have a downlink and an uplink frequency. As these four cells share the same downlink, they are separated in time using a multiplexing schema and identified by the Physical Cell Identifier (PCI) code. If we only took out a spectrum analyser we'd never know which cell we were looking at.
We chose to model after field testing. We could have done it before, but it would have ruined all the surprises that came up during analysis, like the serving cell 12km away!
We extracted the CSV data (1034 rows) from the survey tablet which for cell mapper was located at /storage/emulated/0/Android/data/cellmapper.net.cellmapper/files/.
We sorted it by cell and created clean CSV files for each cell with only the location and RSSI.
We used our new "coverage check" CSV import tool. This tool allows the import of customer locations which can be tested against visible coverage layers to report a correlation.
This is a binary yes/no comparison with a summary report eg. “87% coverage” which is handy for comparing options.
It cannot automatically calibrate field test measurements but is useful for gap analysis as a “first pass” toward calibration.
This tool is handy for manually aligning the modelling until it matches visually but is too simplistic for calibration.
Fine tuning inputs
Our confidence level for the inputs started around 50%, based on known frequencies, heights and power levels for the UK network. For the first cell, we used a combination of known, observed and assumed values.
You could be forgiven for asking why not do field testing with known transmission parameters, but even then you must calibrate, as old batteries, weathered connectors and battered antennas will all impact a transmitter's actual effective radiated power (ERP).
As we were working with LTE800 we used the ITM model, designed for this UHF band when it was conceived for TV broadcasting. This general purpose model has built-in diffraction and also has a reliability variable which we can use for fine tuning.
Known values: frequency, location, approximate height, approximate azimuth
Estimated values: antenna azimuth, beamwidth, gain, RF power, exact height
Once we had a coverage plot using some sensible power values, and the coverage-check tool reported a correlation > 90%, we rendered it using the Greyscale GIS colour schema and downloaded a GeoTIFF raster. This contains fine-grained signal values to 1dB resolution.
We suggest this workflow for the calibration process.
We also have an API capable of returning data in open vector and raster formats including SHP and GeoTIFF so there are other ways to do this...
1. Gap analysis with the coverage check tool in the web interface and approximate/rough inputs
2. Power balancing with the path profile tool for selected points only (recommend a LOS link at long range)
3. Gap analysis with the coverage check tool in the web interface and power balanced inputs
4. Regenerate the layer with the GIS schema and export for precision offline calibration
5. Make minor (1-2dB) adjustments to either the loss or gain values for LOS links, and/or clutter profiles for BLOS until the calibration script reports an RMSE value < 10.
Using a Python script and the rasterio library we were able to query each row from the CSV data against the GeoTIFF raster instantly, negating the need for many recursive API calls.
The offline method is more efficient when working with large point-to-multipoint layers and spreadsheets than calling the API directly. It computes a mean error which can be positive or negative and a more useful root mean square error (RMSE) which is always positive. A lower figure is better with 0dB being ideal (and also impossible).
The API method is still valid for testing select points or calibrating dynamically.
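The statistics behind our Offline_Calibration.py output can be sketched as below. This is a minimal illustration of the error maths only, using the four sample rows shown further down; the actual script additionally opens the GeoTIFF with rasterio and samples the modelled signal at each surveyed lat/lon:

```python
import math

RECEIVER_ERROR_DB = 3.1  # measured std deviation of the X11 LTE modem

def calibration_stats(measured_dbm, modelled_dbm):
    """Mean error (signed) and RMSE (always positive) between survey
    readings and modelled values."""
    errors = [m - p for m, p in zip(measured_dbm, modelled_dbm)]
    mean_error = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_error, rmse

if __name__ == "__main__":
    # Four example rows; the blog's printed running mean covers all 84 rows
    measured = [-65.0, -59.0, -61.0, -63.0]   # survey RSSI readings from the CSV
    modelled = [-68.0, -68.0, -68.0, -68.0]   # values sampled from the GeoTIFF
    mean_error, rmse = calibration_stats(measured, modelled)
    print(f"Mean error: {mean_error:.1f}dB, RMSE: {rmse:.1f}dB, "
          f"adjusted for receiver error: {max(rmse - RECEIVER_ERROR_DB, 0):.1f}dB")
```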
python3 Offline_Calibration.py 131413770.csv 131413770.tiff
Lat: 53.402250 Lon -1.760703 Measured: -65.0dBm Modelled: -68.0dBm Error: 3.0dB Mean error: 0.5dB
Lat: 53.402252 Lon -1.760699 Measured: -59.0dBm Modelled: -68.0dBm Error: 9.0dB Mean error: 0.6dB
Lat: 53.402253 Lon -1.760699 Measured: -61.0dBm Modelled: -68.0dBm Error: 7.0dB Mean error: 0.7dB
Lat: 53.402252 Lon -1.760698 Measured: -63.0dBm Modelled: -68.0dBm Error: 5.0dB Mean error: 0.8dB
Model error is mean 0.8dB, pure RMSE 5.6dB based upon 84 measurements
Receiver measurement error: 3.1dB
RMSE adjusted for receiver error: 2.5dB
RMSE (dB)   Interpretation
0 to 3      The modelling inputs are excellent.
3 to 6      Inputs are very close. Fine tuning needed.
6 to 9      Inputs are good but more tuning needed.
9 to 12     Inputs are OK but not tuned.
Over 12     Inputs are bad. Check basics eg. height, frequency, power.
Suggested interpretation of calibration scores. Requirements will vary by scenario.
We achieved results better than expected. We were aiming for under 6dB RMSE and achieved 3dB, at 7km range, which is excellent, and coincidentally as accurate as the measurement accuracy of our survey device.
Manual calibration can be time consuming, and collecting good data definitely is. We felt we could have improved the scores further with more data, like the antenna data sheets for starters, but were happy with our 3dB.
The best results came from the distant cells where LOS was achieved. This makes sense as without obstacles to complicate things the path loss decays at a predictable rate, based on wavelength, which can be plotted as a clean curve. Once we had this power balanced using the path profile tool and manual adjustments, it produced a great match with the data due to the open nature of the high plateau.
The other cells, like our target at Hagg Farm (South East), served a more complex piece of ground in the valley which had steep ravines and tall trees. As expected we didn't fare as well here, achieving 10dB RMSE. Analysis of where we lost accuracy can be summarised as follows:
Trees. We found 2m LiDAR to be too conservative here as this contains the tree canopy. We tried smoother DSM with clutter profiles which gave a better result but didn’t go as far as adjusting our clutter profile. This is a future trees blog!
Proximity. Counter-intuitively, being closer to the cell is not better for measurements and calibration than being further away. This is due both to the way path loss decays on a curve and the highly directional panels cells use. Small differences near a cell produce large differences in data, compared with very small differences up on the plateau with the distant LOS cells. We can model directional panels but are guessing what the *actual* beamwidths are.
Table of scores for the calibration of the four measured cells
The "so what" of all this is that we have proven the software is capable of high accuracy, given the right inputs, and we have identified areas to improve it further.
We will be adjusting our LiDAR to soften it in areas where this is available to fully exploit recent developments with our custom clutter profiles.
We will be integrating recent developments with offline calibration into our user interface to make this manual process smoother, simpler and faster.
We will work on automating calibration. Some might call this machine learning blah but it’s just software.
Expect another field testing blog all about……trees.
Scripts and data
You can download our field test data and Python scripts here, as well as Google Earth KMZs showing the route, cells and measurements.
We have developed a fast GPU RF propagation engine.
We’ve been busy behind the scenes designing and developing the next generation of fast radio simulation engines for urban modelling with NVIDIA CUDA technology and Graphics Processing Units (GPU).
The engine was made to meet demand across many sectors for speed and accuracy and to enable an automated best-site-analysis capability, which will accelerate planning and improve efficiency whilst keeping a human in the loop.
Designed for 5G
5G networks are much denser than legacy standards due to limited range of mmWave signals, necessary for high bandwidth data. The same limitation means these signals are very sensitive to obstructions, and line of sight coverage is essential for performance.
A dense network means more low power (small) cells are needed, which means more efficient planning is needed.
You can’t just place 5G cells on big hills and crank up the power like it’s the 1990s as the low power handsets would not be able to talk back to them. To achieve an economic and balanced low power urban network requires careful and thorough planning.
Our GPU engine has several modes, for different use cases. Here’s two we’re focusing on for this quarter.
The simplest mode is a fast line of sight "2.5D" viewshed (with a path loss model) which creates a point-to-multipoint heatmap around a given site using LiDAR data. This is comparable to using the current CPU engine with LOS mode, only much quicker – up to 50 times faster than our multi-threaded CPU engine, SLEIPNIR.
Best Site Analysis (BSA) is a monte-carlo analysis technique across a wide area of interest to identify the best locations for a transmitter. Now we have the GPU speed, this can be done quickly with a new /bsa API call. Presently our GPU based BSA implementation can search a radius around a location, using the 2.5D viewshed technique, to grade locations. The output will identify optimal sites and, just as important, inefficient sites.
This feature will replace the "best site" tool currently in the web interface, which is not GPU accelerated.
This feature is powerful for IoT gateway placement, 5G deployments and ad-hoc networking where the best site might presently be determined by a map study based on contours as opposed to a LiDAR model.
ETA: March 2022
Our GPU engine is up to 50 times faster through the API than the current (CPU) engine SLEIPNIR™
By harnessing the power of high performance graphics cards, we are able to complete high resolution LiDAR plots in near real time, negating the need for a "start" button, or even a progress bar! This speed enables API integration with autonomous drones, which will need to model propagation to make better decisions, especially when they're off the grid. The engine was designed around consumer grade cards like the GeForce series but will scale to enterprise Tesla grade cards.
When it goes live, it will be an option in our /area API so you can use it from any interface by setting the engine option in the request body. The OpenAPI 3.0 compliant API returns JSON which contains a PNG image so for existing API integrations using our PNG layers there will be no code changes required to enable it.
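As a sketch, an /area request selecting the GPU engine might look like the excerpt below. The "engine" key name and "gpu" value are assumptions based on the description above, as are the example parameter values; consult the published OpenAPI 3.0 schema for the exact field names:

```python
import json

# Hypothetical /area request body illustrating the engine switch.
# Field names in "transmitter"/"receiver" follow the CloudRF convention;
# the "engine" key and its "gpu" value are assumptions, not confirmed API.
area_request = {
    "transmitter": {"lat": 53.402, "lon": -1.760, "alt": 2, "frq": 806, "txw": 5},
    "receiver": {"alt": 2, "rxg": 2, "rxs": -100},
    "engine": "gpu",  # assumed flag: select the GPU engine over the CPU engine
}

body = json.dumps(area_request)
# response = requests.post("https://api.cloudrf.com/area", data=body, ...)
# The JSON response embeds a PNG layer, so existing PNG-based
# integrations would work unchanged.
```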
At the time of writing the API integration is undergoing bench testing (see video). This feature is scheduled for public Beta testing in February 2022.
Using GPU cards to model Physics, including EM propagation, is an established concept dating back 20 years, despite sales-first businessmen claiming otherwise. Advances in gaming in particular have made ray tracing a mainstream term but there’s a big difference between ray tracing a visual perspective (in view) and modelling a high resolution raster or voxel map to generate a deliverable output. One is pretty and good for games, investors and technology hype-beasts and the other is actually useful for radio engineering.
What is novel here is making this exciting technology accessible to users priced out of premium tools using consumer grade GeForce cards.
Staying true to CloudRF’s accessible and affordable principles, we’ll include it in our service as an optional processing engine this year. Quite what this means for market incumbents and upstarts who currently charge SMEs a small fortune for a basic capability will be interesting. We’ll let the market answer that one.
In radio planning, accurate terrain data is only half the story.
The other data you need, if you want accurate results, is everything above the surface such as buildings and trees.
This is known as land cover or, in radio engineering, clutter data.
In October 2021, the European Space Agency released a global 10m land cover data set called WorldCover with 9 clutter bands.
In our opinion, the ESA data is a sharp improvement on a similar ESRI/Microsoft 10m land cover data set also published this year.
The land cover can be used to enhance coarse 30m data sets to distinguish between homes and gardens, or crops and rivers. It’s space mapped so has every recent substantial building unlike community building datasets which can be patchy outside of Europe.
This data was previously very expensive, a price reflected in the eye-watering pricing of legacy Windows™ planning tools.
The data has 9 bands which have been mapped to 9 land cover codes in Cloud-RF™. Combined with our recent 9 custom clutter bands, we have 18 unique bands of clutter which you can use simultaneously.
We have integrated the 10m data into our SLEIPNIR™ propagation engine which, as of version 1.5, can work with third party and custom clutter tiles simultaneously, in different resolutions.
This is significant as it means you can have a 30m DSM base layer, enhanced with a 10m land cover layer, enriched further with a 2m building layer which you created yourself. Effectively this gives you a 10m global base accuracy with potential for 2m accuracy if you add custom obstacles. The interface will let you upload multiple items as a GeoJSON or KML file.
Demo 1 – The Jungle
Always a tricky environment to communicate in, and model accurately due to dense tree canopies. In this demo, a remote region of the Congo has been selected at random for a portable VHF radio on 75MHz with a 3km planning radius.
This area has 30m DSM which out of the box produces an unrealistic plot resembling undulating flat terrain. This is because the thick tree canopy is represented as hard ground and the signal is diffracting along as if it were bare earth. The result therefore is that 3km is possible in all directions.
By adding our “Tropical.clt” clutter profile, calibrated for medium height, dense trees, we get a very different view which shows the effective range through the trees to be closer to 1km, or less, with much better coverage down the river basin, due to the lack of obstructions.
Demo 2 – A region without LiDAR
Scotland has very poor public LiDAR compared with England which has good coverage at 1m and 2m.
For this demo, Stirling was chosen which has 30m DSM only. A cell tower on a hill serving the town produces an optimistic view of coverage by default but when enhanced with a “Temperate.clt” clutter profile, calibrated for solid and tall town houses and pine forests (eg. > 50N, Northern Europe, Northern USA) we get a much more conservative prediction. As a bonus, the base resolution has improved three fold to 10m.
Demo 3 – A region with 2m LiDAR
You might think that if you've got high resolution LiDAR data, that's enough. Wrong. Soft obstacles, trees especially, will produce excessive diffraction as if they were spiky terrain. This manifests itself as optimistic 'great' coverage. By adding our "Temperate.clt" profile again we make trees absorb power and see where there are nulls in our coverage – beyond the houses and woods.
Despite our land cover being only 10m resolution, we are able to benefit from the full LiDAR resolution with 2m accuracy.
Inspecting a profile
The path profile tool will now show you colour coded land cover as well as custom clutter and 3D buildings. Crops are yellow, grass is green(!), trees are dark green, built-up areas are red, 3D buildings are grey, water is blue…
The most significant feature in this image isn't the coloured land cover, or the custom building (both are features we've done before), or the fact we know the tidal river Severn sits lower than the man-made canal beside it. It's the fact that both are being used in the same model at the same time. They are different sources, different resolutions, different densities…
Using and editing a profile
Once you’ve got the hang of switching profiles you may find it needs optimising for your region. With the clutter manager in the web interface, premium customers can create their own profile based on field measurements for highly accurate predictions. After all no two forests or neighbourhoods are the same density.
Create your perfect profile and save it to your account. The system has 5 regional profiles ready for all users and you can add your own.
To use them, pick from the Clutter > Profile menu and ensure “Landcover” is set to “Enabled”.
If you have created custom clutter and want to use that, set Custom clutter to “Enabled” to blend it in.
We played with a few designs before settling on this very simple template method where you set a profile within the environment menu as follows. This is a new value “clt” and you can still use the existing “cll” and “clm” values to manage the system clutter and custom clutter layers.
JSON request excerpt for a temperate “European” profile, with custom clutter, with 3D buildings and a 3D building density of 0.25dB/m
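As an illustration only: the "clt", "cll" and "clm" keys are described above, but the values shown here and the building-density key name ("clb") are assumptions, so check the current API reference for the exact schema.

```json
{
  "environment": {
    "clt": "Temperate.clt",
    "cll": 2,
    "clm": 1,
    "clb": 0.25
  }
}
```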
Now that we have highly configurable environment profiles, it's time to tune them with field testing. We've bought a heap of comms equipment and will be using it to optimise these profiles with real world measurements.
In this study we look at modelling long range microwave links and the key parameters which help get the best out of mobile microwave terminals. When sited properly, a low power microwave terminal can communicate over 100km. When sited badly the same terminal can fail to communicate 5km…
A brief history of microwave
Commercial terrestrial microwave links spread in the 1950s during post-war radio innovation and are used today as backhaul in many key public and commercial networks. A microwave station typically consists of a large tower on high ground with round parabolic dishes communicating in UHF (300MHz) bands and above. As Wi-Fi spread after the millennium, outdoor fixed wireless access (FWA) terminals for long-range (>2km) consumer wireless links became increasingly popular, especially around ISM bands, but only recently have portable tracking terminals like the AVwatch MTS become available, intended for mobile ground to air use at distances exceeding 150km.
A microwave link is designed to be high capacity and focused in order to carry a large amount of information from one point to another. For this reason they need a short wavelength so are found in UHF and SHF bands above 300MHz.
The signal has a Fresnel zone around it which is sensitive to obstructions. Achieving a line-of-sight link is not a guarantee of a good connection if the Fresnel zone is obstructed by trees or buildings. The size of this zone is inversely proportional to the frequency: a higher frequency has a smaller zone, akin to a laser beam, while a lower frequency has a larger zone and so needs to be higher above the earth to clear it.
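The first Fresnel zone radius can be estimated with the standard formula using the 17.3 coefficient (distances in km, frequency in GHz, radius in metres). A quick sketch, with frequencies chosen purely for illustration:

```python
import math

def fresnel_radius_m(d1_km: float, d2_km: float, freq_ghz: float) -> float:
    """First Fresnel zone radius (m) at a point d1/d2 km from each end
    of a link, using the standard formula r = 17.3 * sqrt(d1*d2 / (f*D))."""
    total_km = d1_km + d2_km
    return 17.3 * math.sqrt((d1_km * d2_km) / (freq_ghz * total_km))

# Midpoint of a 40km link: the zone shrinks as frequency rises
print(fresnel_radius_m(20, 20, 2.4))   # ~35m at 2.4GHz
print(fresnel_radius_m(20, 20, 5.8))   # ~23m at 5.8GHz
```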
The maximum distance a microwave link can cover over the earth has little to do with RF power and much more to do with the dish heights and the horizon, which limits how far a (short wavelength) signal can go. Whilst refraction can extend a link beyond the horizon, it is as variable as the weather so impractical to model accurately and in a timely fashion. A simple formula for the combined radio horizon in kilometres is 4.12 x (√ht + √hr), where ht and hr are the transmitter and receiver heights in metres. This formula produces a table of horizons which shows that an improvement in height of several metres translates to a range improvement of several kilometres due to the earth's curvature.
Table of radio horizons: transmitter height (m) and receiver height (m) against radio horizon (km)
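Radio horizons for any pair of heights can be computed with the 4.12 coefficient (which accounts for standard 4/3 earth refraction), one square-root term per antenna. The heights below are chosen for illustration:

```python
import math

def radio_horizon_km(tx_height_m: float, rx_height_m: float) -> float:
    """Combined radio horizon in km: 4.12 * (sqrt(ht) + sqrt(hr)),
    heights in metres, using the 4/3 earth (refracted) coefficient."""
    return 4.12 * (math.sqrt(tx_height_m) + math.sqrt(rx_height_m))

print(radio_horizon_km(1, 1))     # ~8km: two terminals at head height
print(radio_horizon_km(100, 1))   # ~45km: one end on a mast or cliff
print(radio_horizon_km(100, 300)) # ~113km: ground station to a drone
```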
As signal attenuation is substantial at these frequencies they require a highly directional antenna to improve forward gain and cancel noise from other angles. The larger the dish size the greater the gain and the smaller the beam.
A microwave dish antenna is easily recognised as a polar plot by its prominent main lobe, symmetrical side lobes and minimal back-scatter. It has a very high front-to-back ratio, which describes the ratio of forward power to rear, in the order of +50dB. Due to its high directional gain it only needs to be driven with a modest amount of RF power to generate an effective radiated power of several hundred watts.
Using and creating a directional pattern
In CloudRF you can choose from thousands of crowd sourced patterns, upload your own in TIA/EIA-804-B / NSMA standards or create your own using a few parameters.
To select a template, open the Antennas menu in the web interface and click the database icon. This will open a search form. Search by manufacturer, eg. Cambium, or model. When you find a pattern you want click the green plus symbol to add it to your favourites list. You can now proceed to set the azimuth and tilt as if you were affixing it to a pole.
If the pattern does not exist, you can choose to use a “custom pattern” and define the horizontal and vertical beamwidths in degrees as well as the gain and front-to-back ratios in decibels to generate polar plots. These can be downloaded as a legacy .ant text file which you can upload in the service as a private pattern. A custom pattern is quick to self-generate but lacks side lobes and the full accuracy of a detailed pattern from a manufacturer.
An over the horizon link
For this demo, we’re simulating a link from the cliffs of Dover in England across the English Channel to Calais, France, a distance of 40km across the sea with no obstructions. The 18dBi terminal is 1m off the ground and is using only 3 Watts / 34.7dBm power for a total effective radiated power of 189W / 52.8dBm. A receiver threshold of -100dBm was used. This is too low for high speed waveforms but would be ok for a telemetry fallback waveform like QPSK.
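The dBm arithmetic in that link budget is easy to check; a quick sketch using the figures above:

```python
import math

def watts_to_dbm(watts: float) -> float:
    """Convert watts to decibel-milliwatts."""
    return 10 * math.log10(watts * 1000)

def dbm_to_watts(dbm: float) -> float:
    """Convert decibel-milliwatts back to watts."""
    return (10 ** (dbm / 10)) / 1000

tx_power_dbm = watts_to_dbm(3)       # 3 Watts = ~34.8dBm
erp_dbm = tx_power_dbm + 18          # add the 18dBi antenna gain: ~52.8dBm
print(round(dbm_to_watts(erp_dbm)))  # ~189W effective radiated power
```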
A bad link
With a ground receiver on top of the cliff, the link just reaches Calais. It is obstructed on the radio horizon at ~25km, a full 10km before the coastline, but the height advantage of the cliff makes line of sight just possible to some parts of the town. Despite just achieving line of sight, this link would still be unsuitable because the majority of the Fresnel zone is obstructed.
A good link
With the same cliff top terminal and RF parameters, the distant receiver is swapped for a drone 300m above the ground. The increase in height extends the link from ~25km to 75km, deep into France with good LOS.
An ugly link
This time the same terminal which just achieved 75km was misused down on the beach to communicate with a small boat in the channel. Its effective range was less than 6km due to the radio horizon. As you can see from the normalised path profile chart below, the curvature impact is substantial when the stations are on the earth!
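The dramatic difference between the beach and the drone comes straight from the standard 4/3-earth radio horizon approximation, d ≈ 4.12√h km (h in metres). A quick sketch, using illustrative antenna heights:

```python
import math

def radio_horizon_km(height_m):
    """Radio horizon over a 4/3 effective earth: d ~= 4.12 * sqrt(h) km."""
    return 4.12 * math.sqrt(height_m)

# An antenna ~2m above the beach: the horizon arrives in under 6km
print(round(radio_horizon_km(2), 1))    # 5.8
# A drone at 300m pushes its own horizon out past 70km
print(round(radio_horizon_km(300), 1))  # 71.4
```

Because the maximum line-of-sight range is the sum of both ends' horizons, raising either station pays off, which is why swapping the ground receiver for a 300m drone extended the link so far.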
Thresholds and modulation
The simplest way to limit the modelling is with received power measured in decibel-milliwatts. In this common scale, -100dBm is a sensible threshold for most digital systems. For planning purposes, a 10dB fade margin should be added for a -90dBm threshold. The actual thresholds needed will vary by system and waveform. Many commercial microwave links operate very high symbol rate modulation schemas which need received power above -70dBm to function.
You can also use Bit-Error-Rate (BER) as a threshold. This unit is used in conjunction with the noise floor and the desired signal-to-noise ratio (SNR) to derive a threshold. A modulation schema like QAM64 requires a relatively high 15dB SNR compared with 5dB for QPSK which can function on weak links. These thresholds are not absolute which is why we set the desired error rate. Errors are inevitable and the relationship between the BER and SNR is best visualised as curves. If you know you want QPSK, for example, but are not sure what error rate to use, use a mid-level error rate such as 10⁻³ (one bad bit for every 1,000 bits) which will give you a 7dB SNR. If the local noise floor is -120dBm your equivalent receiver threshold is a pretty low -113dBm.
Forget RF power, height is everything in creating a successful microwave link. This might mean moving a terminal several kilometres away from the distant station in order to gain a few metres in height, but the benefit will be many more kilometres in range.
Mobile ad-hoc networks (MANET) are an increasingly popular architecture in emergency services and Defence communications. Unlike classic repeater based networks, MANET radio network communications do not have fixed infrastructure so must form self-healing, self-routing networks.
MANET radio modules are well suited to working off-grid in remote areas or to providing resilience and independence in well served cities which may be suffering from power and/or network failure.
The bandwidth requirements and throughput of MANET networks varies substantially by waveform. Some are designed for range, others maximum throughput. For this reason, manufacturers offer a range of frequency modules.
Why RF planning tools don’t get used for emergency networks
RF planning software has evolved substantially in the 30 years it’s been used to build out fixed infrastructure networks. Time sensitive customers such as the emergency services have a difficult relationship with these tools. They need them, and often buy them, but don’t have the time to use them to their full capabilities. As a result they rarely get used on anything except training and exercises. Even then the numbers of staff directly interfacing with them will be very small, even in very large organisations of radio users.
The focus for most RF tools is planning with static sites. Whether that’s clicking on a map or uploading a spreadsheet of hundreds of locations it’s still static. MANET requires dynamic inputs and continuous computation which is where APIs come to the fore…
An API for MANET
Cloud-RF’s latest API has a function designed for ad-hoc networks called ‘points’. The points API functions like a point-to-point profile in terms of its input and output, except it accepts an array of transmitters. This means you can test 10, 50 or 500 transmitter nodes back to a single receiver in a single API call. It’s also fast, as you’d expect, and can model a link every millisecond, so the 870 distinct links demonstrated in the video were processed in under a second, every second.
In this video, we demonstrate the Cloud-RF points API to model a MANET network (Mobile Ad-hoc Network). For this demo 30 nodes were moved around a 16km track covering a variety of terrain. Each node was tested against 29 siblings for a total of 870 links per second.
Coloured links denote good (green) average (amber) and poor (red) links between the nodes and map to 5dB, 10dB and 20dB signal-to-noise ratios. Only links exceeding 5dB SNR are shown or it looks like a bad game of kerplunk!
The radio settings used were L band (1-2GHz) with only 1 watt of power. This conservative start setting was chosen to show a dynamic range of links. Later in the video the template is switched at the database to demonstrate the impact or gain of using different bands such as 2.4GHz and 500MHz.
Integrating your data
The demo video used mock data and an unpublished script to present the results as a KML. The source of the data is irrelevant so long as it’s accurate and time sensitive. This could be a radio vendor’s dashboard or database. Many of the leading vendors such as DTC, Harris, Persistent Systems, Silvus and Trellisware have location aware GPS modules and software interfaces to display reported radio positions.
The required format for a point is WGS84 decimal degrees. The height is taken from the template which is defined within the body of the points request. The new APIv2 makes defining a template as easy as a JSON object so you can have a local archive of template .json files.
A suggested workflow for API integration for dynamic points is as follows:
Fetch a list of all radio locations as decimal degrees
Choose a template as a JSON object
Make an API request using the data and a client script to https://api.cloudrf.com/points
Parse the JSON response to extract the results for each node
Put the results on a map as lines
Style the lines based upon your own local rules for your equipment, QoS and waveform eg. < 5dB is red
For assistance with integration and hosting options email email@example.com
Where this points API will really add value is in mapping and assisting autonomous vehicles, which are invariably fitted with MANET radio modules. Whether it’s a drone or a UGV, this API can be used to rapidly exercise multiple routes to help make better decisions.
During infantry training, soldiers are shown firsthand the impact of different weapons upon different materials to help them make better decisions about good cover versus bad cover. Spoiler: The railway sleeper doesn’t make it 🙁
As tactical radios have moved several hundred megahertz up the spectrum from their cold-war VHF roots, material attenuation has become a serious issue which needs demonstrating to enable better route selection and siting. Unlike shooting at building materials, it’s hard to visualise invisible radio signals, and therefore to teach good siting, but it’s equally important as ground-based above-VHF signals are easily blocked in urban environments.
This blog provides a visual demonstration of the physical relationship between different wavelengths and attenuating obstacles only. It does not compare modulation schemas, multi-path, radios or technologies.
Bricks and wavelengths
Clutter data refers to obstacles above the ground such as trees and buildings. Cloud-RF has 9 classes of clutter data within the service which you can use and build with. Each class has a different attenuation rate measured in decibels per metre (dB/m). This rate is a nominal value based upon the material density and derived from the ITU-R P.833-7 standard and empirical testing with broadcast signals in European homes.
A signal can only endure a limited amount of attenuation before it is lost into the noise floor. In free space attenuation is minimal but with obstacles it can be substantial. This is why a Wi-Fi router in a window can be hard to use within another room in the house but the same router is detectable from a hill a mile away.
The attenuation rate is an average based upon a hollow building with solid walls.
Common building materials attenuate signals to different amounts based on their density and the signals wavelength.
A shorter wavelength (higher frequency) signal such as L band (1-2GHz) will be attenuated more than VHF (30-300MHz), for example.
A long wavelength signal like HF will suffer minimal attenuation making it better suited to communicating through multiple brick walls.
The layer cake house
A brick house is not just brick. It’s bricks, concrete blocks, glass, insulation, stud walls, furniture and surfaces of varying absorption and reflection characteristics. Modelling every building material and multi-path bounce precisely is possible given enough data and time, but the exponential complexity of multi-path makes it wholly impractical.
A trade-off for accurate urban modelling is to assign a local attenuation value. It’s local since building regulations vary by country and era so a 1930s brick house in the UK has different characteristics to a 1960s timber house in Germany. Taking the brick house we can identify the nominal value by adding up the materials and dividing it by the size.
For example, 2 x solid 10dB brick walls plus a 5dB margin for interior walls and furniture would be 25dB. Divide this by a 10m size and you have 2.5dB/m. Using some local empirical testing you can quickly refine this to a useful value for an entire city (assuming consistent architecture), but in reality the *precise* value will vary by each property, even on a street of the same design, due to interior layouts and furniture.
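The arithmetic above is simple enough to wrap in a helper, which is handy when trying several candidate values during local empirical testing:

```python
def nominal_attenuation_db_per_m(wall_losses_db, interior_margin_db, size_m):
    """Sum the material losses, add the interior margin, divide by size."""
    return (sum(wall_losses_db) + interior_margin_db) / size_m

# The brick-house example above: two solid 10dB walls + 5dB interior margin
rate = nominal_attenuation_db_per_m([10, 10], 5, 10)
print(rate)  # 2.5 dB/m
```

Swapping in a different wall count, margin or building size gives you the nominal value for a different local architecture.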
We created nine 4 metre tall targets, one for each of the 9 clutter classes in attenuation order from left to right, each measuring 10x10m, and fired radio-bullets™ at them from a distance of 300m using the same RF power of 1W.
The following bands were compared: HF 20MHz, VHF 70MHz, UHF 700MHz, UHF 1200MHz, UHF 2.4GHz, SHF 5.8GHz.
The ITU-R P.525 model was used to provide a consistent reference.
Only the stronger direct-ray is modelled. Multipath effects mean that reflections will reach into some of the displayed null zones, with an inherent reflection loss for each bounce, but these are nearly impossible to model accurately and in a practical time.
Here are the results.
Dense materials, especially concrete, attenuate higher frequency signals more than natural materials like trees
Lower UHF signals perform much better than SHF with the same power
Higher frequencies with low power can be blocked by a single house, even after only 300m
HF eats bricks for breakfast!
Modern tactical UHF radios, and their software eco-systems, are unrecognisable from their cold-war VHF ‘voice only’ ancestors in terms of capabilities but have an Achilles heel in the form of material penetration. To get the best coverage the network density must be flexed to match the neighbourhood.
This is obvious when comparing rolling terrain with an urban environment, but the building materials and street sizes in the urban environment will make a significant difference too. Ground units which communicated effectively in a city in one country may find the same tactics and working ranges ineffective in another city with the same radios and settings. Understanding the impact of material penetration will help planning and communication.
When simulating radio propagation you can choose to model results in a variety of ways: Path loss will show you the attenuation in decibels (dB), Received Power will show you the signal strength at the receiver in dBm and field strength will show you the signal strength in decibel-microvolts per metre (dBuV/m). If you are using a digital modulation schema such as Quadrature-Amplitude-Modulation (QAM) your effective coverage will be dictated by the desired Bit Error Rate (BER) and local noise floor. This blog will describe these concepts and show you how to apply them to model a given modulation schema.
Bit Error Rate (BER)
The Bit Error Rate (BER) is the number of acceptable errors you are prepared to tolerate. This is typically a number between 0.1 (every 10th bit is bad!) and 0.000001 (Only one in a million is bad). This ratio is closely linked to the Signal-to-Noise-Ratio (SNR) which is measured in decibels (dB). A high SNR is required for a low BER. A low SNR will have an increased BER. Put simply a strong signal is better than a weak one and has less chance of errors.
The reason error increases as the SNR falls is noise. The closer you get to the noise floor for your band (about -100dBm at 2.4GHz), the more unstable and unpredictable things become.
The noise floor
The noise floor is the ambient power present in the RF spectrum for your location, frequency, temperature and bandwidth. Understanding the noise floor is important when modelling Bit Error Rate as it is subject to change and will determine your SNR. The SNR will determine your BER so if you want good coverage you need to know your noise floor so you can set your power accordingly. There are several factors that influence noise floor:
A lot of noise is man-made so the noise floor is higher in a city than in the mountains. The difference varies not just by city but by country, as countries have different spectrum authorities and regulate spectrum usage differently. The difference between a city and the countryside for a popular band like 2.4GHz is huge and can be over 6dB. Using a calibrated spectrum analyser with averaging is a good way to measure the noise floor. Ensure you set the bandwidth to your system’s bandwidth for best results.
If you don’t own a spectrum analyser you can use Boltzmann’s Constant (see bandwidth section) and add an arbitrary margin to it depending on your location. This table has some suggested generic values:
Rural / Remote
Thermal noise is spread uniformly over the entire frequency spectrum but man-made noise is not. The 2.4GHz ISM band is much busier than neighbouring bands for example due to its unlicensed nature. As a result the noise floor is several dB higher than a ‘quieter’ piece of the RF spectrum. Some of the quietest spectrum is co-incidentally the most tightly regulated, which keeps users down, which reduces noise, and improves performance.
Thermal noise increases with temperature so in general you will get slightly more distance for your power in northern Scandinavia than in central Africa. The difference is about 1dB between a cold day and a hot day so can be considered negligible when compared with other factors. Budget for a hot day with an extra dB in your planning.
Bandwidth has a direct influence on noise power because of Boltzmann’s Constant. This simple formula lets you calculate the absolute noise from the bandwidth. There are different ways to apply the formula but if you use dBm then the simplest form is:
Noise floor dBm = -114dBm + 10 Log(Bandwidth in MHz)
Using this formula you get the following results: a 1MHz receiver bandwidth gives a -114dBm thermal noise floor, a 20MHz bandwidth gives -101dBm, and a spectrum analyser with a 100MHz FFT sees -94dBm.
If in doubt, use a noise floor of -98dBm
A low power 20MHz wide 64QAM signal is being simulated in a city. The noise power is computed from the bandwidth with Boltzmann’s Constant as -101dBm to which we add +3dB for man-made noise putting the noise floor at -98dBm.
When selecting a BER of 0.1 / 10⁻¹ the SNR is 11dB, which equates to a receiver threshold of -87dBm.
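The worked example above chains together neatly: Boltzmann noise from the bandwidth, a man-made margin, then the SNR your modulation and BER demand. A minimal sketch reproducing those numbers:

```python
import math

def noise_floor_dbm(bandwidth_mhz, manmade_margin_db=0):
    """Thermal noise floor: -114dBm + 10log10(BW in MHz), plus any margin."""
    return -114 + 10 * math.log10(bandwidth_mhz) + manmade_margin_db

def receiver_threshold_dbm(bandwidth_mhz, snr_db, manmade_margin_db=0):
    """Threshold = noise floor + the SNR the modulation/BER requires."""
    return noise_floor_dbm(bandwidth_mhz, manmade_margin_db) + snr_db

# The 64QAM city example: 20MHz bandwidth, +3dB urban margin, 11dB SNR
print(round(noise_floor_dbm(20, 3)))             # -98 dBm
print(round(receiver_threshold_dbm(20, 11, 3)))  # -87 dBm
```

Swap the 11dB SNR for the ~15dB that 64QAM needs at a stricter BER, or the higher SNR of 1024QAM, and the threshold climbs accordingly, shrinking the usable coverage.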
The difference in propagation between the two error rates is noticeable with 64QAM but what happens if you switch up the modulation to 1024QAM which carries a higher SNR?
High resolution LIDAR data is great but it’s also limited in coverage and generally focused on urban areas. What happens when you need to see coverage beyond the city limits where the data stops?
What happens when you live in Cornwall?
With the Signal Server propagation engine you can model LIDAR high resolution data but where the data has a void or stops entirely, you will receive an ugly hole in your coverage prediction or even worse a failure if the requested radius far exceeds the available LIDAR. To see wide area coverage without the voids a coarser resolution would need to be used which won’t have the detail of the LIDAR.
Customers using CloudRF’s LIDAR capabilities occasionally report terrain data anomalies which upon closer inspection reveal voids. The UK 2m LIDAR, for example, was flown by light aircraft in strips, like mowing a lawn, so it’s not uncommon to see nice coverage around a city but voids out in the country, especially in remote regions like Cornwall.
A Cornish customer highlighted a region with some prominent voids which we used to develop a solution to this tricky problem. Previously Signal-Server worked with legacy SRTM DEM converted to the unique SPLAT! SDF raster format, and then later ASCII Grid tiles, but the two datasets could only be used exclusively. The solution required a fundamental re-design of the engine and its relationship with the data, and is one of the biggest changes in the history of CloudRF…
Sleipnir (‘Slayp-nir’): the fastest horse in Norse mythology, capable of traversing any terrain on eight legs.
Working with senior C++ developer, Gareth Evans, the loading of data was re-designed from scratch to not only make it faster but format and resolution agnostic. By doing this from the outset, different text data sources could be rapidly loaded into memory to form a single, seamless elevation model. For urban LIDAR this means a city tile with voids at the edges can be layered on top of a 100km 30m DSM tile to fill voids. The step between the two formats is indistinguishable to the human eye.
This benefits not only city planners frustrated by the unsightly hard edges at city limits but also the emerging DIY Drone LIDAR market where users might submit their own very small tile to CloudRF covering just a forest block for example. With this engine you can backfill the surrounding area to fit your high resolution 1m Drone LIDAR onto some lower resolution DSM like 20m for example.
The changes required to make this solution could not be done by adding more code to Signal-Server which as a fork of the much older SPLAT! engine was becoming difficult to maintain and has well documented problems in its public commit history with handling rectangular LIDAR tiles or tiles which span the Greenwich Meridian.
CloudRF has a long and proud history of using and supporting open source software, starting with SPLAT! in 2011, but this re-design and re-build from scratch allowed for a fresh licence. Sleipnir will not be open source and for this reason will not contain GPL licensed code from SPLAT! or Signal-Server. It has faster, Intel CPU optimised, implementations of the same public domain models found in Signal-Server (ITM, Hata, COST231, SUI, ECC, Ericsson, ITU-R P.525), except for the ITWOM 3.0 model which has been excluded as its license and provenance is unclear.
Signal-Server LOS model
Sleipnir Free space path loss including LOS
A key difference in how Sleipnir’s models work is line-of-sight (LOS) analysis. With Signal-Server, LOS was an optional mode, comparable to a propagation model which meant basic models like ITU-R P.525 (Free space path loss) would continue to show coverage behind obstacles unless knife-edge-diffraction was explicitly enabled.
With Sleipnir, LOS is factored into every model by default so you will always see the impact of obstacles. Diffraction beyond obstacles can also be modelled with optional knife-edge diffraction.
What this means for basic models is that you now get a result which combines line of sight with the model. Perfect for microwave links requiring a high SNR.
Case study #1: Patchy LIDAR, Cornwall
Tregony in Cornwall is served by three datasets presently: 90m DEM, 30m DSM and 2m LIDAR.
The LIDAR covers the town but has a void to the north west and a giant vertical strip missing to the east. The 30m DSM covers everything because it was mapped from space but lacks the detail of the 2m LIDAR in the town. Using Sleipnir the town and surrounding area were modelled using the 2m LIDAR resampled to 5m and rural LIDAR voids were (automatically) filled with 30m DSM. Not only were the voids filled but due to the redesigned engine it was executed in a fifth of the time on the same processor.
Case study #2: Localised LIDAR, Sweden
Malmo in Sweden is served by 4m LIDAR but this is limited to the city centre only for now. Beyond the tile, coverage falls back to 30m DSM. This scenario is common since LIDAR is expensive and difficult to justify beyond cities. It is also a problem faced by the DIY LIDAR market made possible by drones and photogrammetry suites like Pix4D, which you will see more of in the future. When your drone has a 15 minute battery life your LIDAR tile isn’t going to take long to upload, but as drones and laws improve this potential will grow.
Aside from the obvious difference with coverage beyond the tile limit, you may notice the propagation in the city is different also. This is because the basic models like Hata have all been enhanced to include Line-Of-Sight (LOS) as standard now.
Performance test: Signal-Server vs Sleipnir
Tests were conducted on a hex-core Intel(R) Xeon(R) CPU E5-1650 v2 clocked at 3.50GHz. Signal-Server used four threads and Sleipnir was limited to eight, although it can use n threads, hardware permitting. Times include image post processing conducted by the CloudRF service.
30m DSM, 10km Path Profile
30m DSM, 10km radius
30m DSM, 30km radius
5m LIDAR, 5km radius
5m LIDAR + 30m DSM, 5km radius (Multi-mode)
30m DSM, 50km radius
60m DSM, 100km radius
Sleipnir is currently available in CloudRF with the new ‘model’ parameter. For API users, model=1 is Signal-Server and model=2 is Sleipnir. In the web interface the choice is easier in the model section. For now, Sleipnir is available for area coverage only with Signal Server used for path profile but we’re working on Sleipnir path-profile along with new ‘best server’ features to exploit its incredibly fast path profile capability.
A popular question when modelling GSM / UMTS / TETRA / LTE networks is: how can I show the coverage from the mobile subscribers (uplink)? Showing a tower’s coverage (downlink) is easy, but how do you go the other way back to the tower? If you know your equipment capabilities (tower and subscribers) you can calculate link budgets for both the uplink and downlink and then use those values to perform an area prediction. Here’s a simple example with the GSM900 band, without some of the other gains and losses which can complicate this, for the benefit of novices. You can always add in your own gains and losses where you like to suit your needs.
1. Calculate the total effective radiated power for the BTS tower by adding the power and antenna gain (Limited to 33dBm in the UK)
2. Repeat for handset (Limited to 23dBm in the UK)
3. Calculate the minimum receive level for the BTS by subtracting the receive antenna gain from the receiver sensitivity eg. -110 - 10 = -120
4. Repeat for handset
5. Calculate the maximum allowed path loss (MAPL) by subtracting the minimum receive level from the ERP.
6. Repeat for handset
A balanced network will have similar values. If your base station can radiate for miles but your handsets cannot you have an unbalanced and inefficient network.
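The six steps above can be sketched as a short helper. The 33dBm/23dBm limits and the -110dBm BTS sensitivity are from the example; the handset sensitivity and antenna gains here are illustrative assumptions.

```python
def max_allowed_path_loss(erp_dbm, rx_sensitivity_dbm, rx_antenna_gain_db):
    """MAPL = ERP minus the minimum receive level (steps 3-6 above)."""
    min_rx_level = rx_sensitivity_dbm - rx_antenna_gain_db
    return erp_dbm - min_rx_level

# Downlink: 33dBm BTS ERP to a handset (assumed -104dBm sensitivity, 0dBi)
downlink = max_allowed_path_loss(33, -104, 0)
# Uplink: 23dBm handset ERP to the BTS (-110dBm sensitivity, 10dBi antenna)
uplink = max_allowed_path_loss(23, -110, 10)
print(downlink, uplink)  # 137 143
```

Comparing the two MAPL values is the balance check: here the uplink can actually tolerate a few dB more path loss than the downlink, so the handsets are not the limiting factor.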
Finally, to see this on a map, use the ‘Path loss (dB)’ output mode in CloudRF along with the ‘Custom RGB’ colour schema. Enter the uplink value into the green box and the downlink value into the blue box and run the calculation. A typical cell site will have a greater reach (blue) than its subscribers (green). The system will automatically factor in the effect of terrain, ground absorption and antenna heights to give you an accurate prediction.