
3D radio propagation API

3D RF API

True 3D multi-path

Following two years of R&D, our new 3D engine and API are live.

It uses an advanced volumetric design to simulate propagation in all directions, going beyond 2D engines which can only produce flat images. By design it supports multi-path and phase tracking to model “fast fading” when signals collide, making it well suited to challenging urban and subterranean environments.

Key features include:

  • 3D antenna patterns
  • Configurable reflections (up to 10)
  • Configurable material attenuation (dB/m)
  • Configurable material reflectivity (dB)
  • Configurable material diffusivity (Metal v Stone)
  • Multi-site support (n transmitters)
  • Phase tracking for multi-path effects (Constructive and destructive multipath)
  • Configurable resolution from 10cm, subject to model size

CloudRF Blender plugin

An open file standard and an Open API

We’ve chosen the growing glTF 3D standard by the Khronos Group for our input and output. It is supported by most devices, GIS software, graphics engines and 3D viewers.

You can transform LiDAR point cloud scans into a glTF mesh using a number of free packages to exploit popular formats like LAS and LAZ.
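As a minimal sketch of one such route, assuming the free laspy and Open3D Python packages (other converters such as PDAL or CloudCompare also work), a LAZ scan can be meshed and exported to glTF like this; the Poisson settings are illustrative and will need tuning for your scan:

# Sketch: LAZ point cloud to glTF mesh with laspy + Open3D (illustrative settings)
import numpy as np
import laspy
import open3d as o3d

las = laspy.read("scan.laz")                       # reading LAZ needs the lazrs/laszip backend
xyz = np.vstack((las.x, las.y, las.z)).T           # N x 3 array of points

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.estimate_normals()                             # Poisson reconstruction needs normals

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("scene.glb", mesh)      # .glb / .gltf output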

As per our open architecture and API-first design, the 3D API is available now as an open API. You will require a premium CloudRF account and an API key to use it.

With the API, you can push up a model, perform coverage analysis, and view the output using a hosted viewer supported by popular browsers.
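A rough sketch of that workflow with Python is below. The endpoint paths and JSON keys shown here are placeholders rather than the published schema, so check the Swagger documentation linked at the end of this post for the real routes:

# Hypothetical 3D API workflow: upload a glTF model, then request coverage.
# Paths (/3d/model, /3d/area) and payload keys are placeholders - see the Swagger docs.
import requests

API = "https://api.cloudrf.com"
HEADERS = {"key": "YOUR-API-KEY"}          # premium account API key

# 1. Push up a glTF model (placeholder route)
with open("scene.glb", "rb") as f:
    model = requests.post(f"{API}/3d/model", headers=HEADERS, files={"file": f}).json()

# 2. Run a coverage analysis against it (placeholder route and keys)
payload = {"model": model.get("id"),
           "transmitter": {"x": 0.0, "y": 2.0, "z": 0.0, "frq": 3500, "txw": 1}}
result = requests.post(f"{API}/3d/area", headers=HEADERS, json=payload).json()

# 3. The response references glTF output which the hosted viewer, or any glTF viewer, can open
print(result)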

Multi-path visualisation

Everyone talks about multi-path, the behaviour of colliding radio waves, but can they visualise it? Signals schools teach students that a “little movement” will cure a dead spot. That’s good advice, but when a little more movement puts them back in a dead spot it doesn’t solve the real issue: current software cannot practically model multi-path.

There are expensive ray tracing solutions aimed at design engineers, but not at operators deploying equipment, or at students who are taught to move but don’t know where to!

In the screenshot below, a directional 5G antenna is pointed towards Tower Bridge in London. Where there is line of sight on the River Thames the signal is a solid green. Where there are reflections, the signal is patchy, and adjacent to the bridge there is a notable area of destructive multipath in the middle of the river. This huge dead spot is caused by strong reflections coming from the south tower of Tower Bridge. The north tower isn’t as affected because its angle of incidence creates a longer reflection path and a weaker reflection.

Destructive multipath in the middle of a river.

Viewer agnostic

As an open standards API, our mesh output can be consumed by browsers, third party apps and AR viewers. We’ve already integrated it with a hologram display and the popular Blender 3D software, and will add it to our web interface soon since we use Cesium, which has supported glTF since 2014.

You can add glTF models directly into ESRI’s ArcGIS Maps SDK. Demo code is right here!

Blender plugin

Whilst developing this we used the popular open source 3D platform Blender to create obstacles and inspect output. We developed a Blender plugin, which we have open sourced, so you can drive the API directly from Blender. The plugin will upload your model and allow you to use it, along with CloudRF radio templates, to simulate RF coverage.

For more information on the plugin see here.

3D properties

Coordinates

Instead of a geographic projection like WGS-84, this engine uses XYZ coordinates relative to the model origin (x=0, y=0, z=0). This is better for modelling buildings in isolation, such as architectural designs for buildings which don’t physically exist yet!

When a building is placed upon the earth, the translation to these local coordinates should be performed by the client, such as Blender, so ideally the user does not need to know what or where they are – it just looks right.

Up and Forward

Cartesian vector coordinates are more complex to express than latitude and longitude.

This is best left to a client such as Blender. As a minimum, the position is required as XYZ; for advanced usage the API also accepts “up” or “forward” XYZ directions to express rotation. Different platforms handle coordinates differently, so we have opened up our API to support as many as possible.
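As an illustration only (the key names below are placeholders, not the published schema), a transmitter placed in model space with its rotation expressed as up and forward vectors might look like this:

# Placement in model space; key names are illustrative placeholders.
transmitter = {
    "position": {"x": 12.0, "y": 1.5, "z": -4.0},   # metres from the model origin
    "up":       {"x": 0.0,  "y": 1.0, "z": 0.0},    # which axis the platform treats as "up"
    "forward":  {"x": 0.0,  "y": 0.0, "z": -1.0},   # boresight / facing direction
}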

Materials

We’ve supported configurable clutter for years, but with multi-path support we have extended this to cover attenuation, reflection and diffusion. The glTF standard includes materials with human readable names eg. “Wood”. You should match the materials you are using in the “Keys” section to capture variants in your model, for example: “Wood”, “Oak Wood”, “Timber”.

Reflection loss

Measured in decibels, this is loss incurred by a reflection from a surface. Solid surfaces like Metal reflect most of the energy so have a low loss value of between 1 and 3 dB and softer surfaces like timber absorb more energy so have higher loss values of 3 to 6 dB.

Transmission loss

Measured in decibels per meter, this is absorption loss. These figures will be much higher than what you might have used with our other APIs, since those are nominal values based on average attenuation through a house, whereas these are the actual values for the material(s) (eg. brick), not the parent obstacle (a brick house).

For example, a brick house measuring 10m wide might have two blocking walls at 10dB each for a given UHF frequency.

With the 2D API, this would be represented as an attenuation figure of 20dB / 10m = 2dB/m. In the 3D API each brick wall is simply 10dB!

The advantage of this is we can now model inside rooms with different materials and furniture – if you have the model…

Diffusion

Radio waves scatter when they hit a wall. The behaviour varies by material, so you can define it with the diffusion parameter. It’s a randomisation ratio from 0 to 10. At 0 there is no diffusion and only the input ray is considered. At 0.1 a small amount of randomisation occurs, so the reflection is very predictable, like a game of pool.

With 10.0 the reflections are truly random in all directions. This would be suitable for a gravel path for example.
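To pull the three material properties together, the sketch below shows how reflection loss (dB), transmission loss (dB/m) and diffusion (0 to 10) might be tabulated per glTF material name. The key names and numbers are illustrative, not a published schema; only the ranges follow the guidance above:

# Illustrative material table; names should match the glTF materials in your model.
materials = {
    "Metal":  {"reflection_dB": 1.5, "transmission_dB_per_m": 40.0, "diffusion": 0.1},
    "Brick":  {"reflection_dB": 4.0, "transmission_dB_per_m": 10.0, "diffusion": 2.0},
    "Wood":   {"reflection_dB": 5.0, "transmission_dB_per_m": 6.0,  "diffusion": 1.0},
    "Gravel": {"reflection_dB": 6.0, "transmission_dB_per_m": 20.0, "diffusion": 10.0},
}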

Performance

Modelling a volume is harder than modelling a 2D plane, so this takes longer. How long depends upon your model size, resolution and reflections. Increasing reflections doesn’t add as much work as you might expect, due to our efficient design, but asking for 20cm resolution for an entire neighbourhood will leave you waiting for a few minutes.

Performance tips

  • Start with 1m resolution and a small model
  • Keep your input model minimal. If you have every pot plant in the town it will be a big mesh file and will take longer!
  • Photo realistic 3D tiles are pretty but take a long time to model. Go ugly early with basic models for speed.

Documentation

The swagger documentation is located here.

The blender plugin is located here.


3D simulation roadmap

The problem with tunnels and stairs

Whenever there has been a major incident involving emergency services in a complex urban environment, the inquiry report has consistently highlighted radio communications failure, despite significant developments in radio communications and 3D technology since the infamous 1988 King’s Cross fire on the London Underground. These tragic incidents have all featured tunnels, stairs and communications failure.

Limitations of (2D) radio planning tools

Radio planning tools are not used in emergencies. They’re complicated, slow and require a lot of knowledge to produce an accurate output. Even if a skilled operator were able to model a site before the event, currently they would be expected to model each floor of a multi-story building in isolation due to the “floorplan” design of current software.

The problem is indoor planning tools are built for corporate clients to achieve seamless Wi-Fi in every corner of the office, not to help a fire chief deploy a mesh radio network down stairs and then along a tunnel. The top end tools can do limited multipath, slowly, but not as an API which can be consumed by a third party viewer…

Most radio planning tools on the market, ourselves included, have limitations when it comes to complex urban modelling, which we will explore in detail below.

Using LiDAR as a 2.5D surface model

The abundance of free LiDAR data has made this high resolution data the standard for accurate outdoor RF planning, and for several Fixed Wireless Access (FWA) tools, including free LiDAR based path tools, it is the core feature. We started using LiDAR in 2015 and know its limitations well: once point cloud LiDAR has been rasterised into a GeoTIFF it is no longer 3D but a 2.5D surface model, which is useful for building heights and unsuitable for bridges, arches and tunnels.

A bridge or arch in a rasterised LiDAR model extends to the ground like a wall. In the screenshot below, a large Ferris wheel blocks line of sight through it, and the elevated rail bridge across the river casts a shadow much larger than it would in reality.

London eye and bridges in LiDAR

Using a floor plan to model a building

Expect us

For indoor Wi-Fi planning tools, the start point is typically a floor plan. This does not scale well with multi-story buildings or support vertical planning as it produces a 2D image of a 2D plan.

Many tools present 2D images in a 3D viewer, as we do, but the output remains 2.5D, as with rasterised LiDAR. The significant Wi-Fi attenuation presented by solid floors makes this simplified 2D floor-by-floor planning viable for corporate clients in offices but not in challenging environments or where a floor plan does not exist.

Direct ray only

Attenuation is good, reflections are better

Modelling multipath, or fast fading, is much more complex than modelling the direct ray. For this reason, most tools only model the dominant direct ray, and even then some cannot do diffraction or obstacle attenuation as we already do. For the previously mentioned Wi-Fi planning tools, the current standard is to model obstacle attenuation only. By doing this a tool is able to simulate most of the coverage quickly for a given floor, but for complete accuracy it must be augmented by a walk survey, which isn’t so quick. For some customers, a walk survey is just not possible.

Multipath effects will increase coverage beyond a direct ray simulation and cause phase issues like signal dead-spots and doppler spread where reflections increase bandwidth and overall noise. This effect can be observed indirectly via customer reviews for urban WISPs where people state their once good link quality reduced as more neighbours subscribed.

A 3D multipath API for 2024

We’ve been working on this full 3D capability since the 2022 Grenfell inquiry with valuable input from firefighters, mining experts and MANET radio OEMs. The first version of the engine is done and we’re onto API integration now.

Our GPU based design takes a 3D model and simulates propagation in all directions irrespective of floors, with configurable reflections, surface reflectivity and material attenuation, and crucially it outputs to the open 3D standard glTF. It scales from small rooms to suburbs and everything in between, so it will be used for tunnels, multi-story buildings and outdoor multipath.

It will be integrated into our API first so other standards compliant viewers can visualise it and will then be integrated into our own 3D user interface. We can’t say what interfaces people will be using in the future but are confident that by aiming for open standards APIs we will ensure compatibility with phones, glasses and holograms.

  • Read LiDAR into a 3D volume (Done): Prepare a volume from a LAS/LAZ LiDAR scan.
  • Direct ray with attenuation (Done): Model direct ray with configurable attenuation in dB/m for obstacles.
  • Reflections (Done): Model reflections accurately based on the wavelength and angle of incidence.
  • Phase tracking (Done): Track the phase to show constructive and destructive interference (fast fading) eg. dead spots, cured by a little movement 😉
  • BIM / glTF support (Done): Read and write BIM models as the open standard glTF “3D tiles” format.
  • API integration (Under development): Integrate engine into the CloudRF API so a BIM/LAS model can be uploaded and used via our standard JSON requests.
  • 3D tiles web interface integration (Under development): Add 3D tiles output to 3D web interface. Some interfaces already supported 🙂
  • Multisite support (To do): Model many sites at once.
  • Antenna pattern integration (To do): Add 3D antenna pattern loss.

Commercial plan

The 3D engine API will be a new feature within CloudRF Gold plans and our SOOTHSAYER server at no additional cost. It requires a GPU. We’re aiming to get a beta up on CloudRF in May/June and to ship this with the next major SOOTHSAYER release, currently scheduled for September.

Users will be allowed to upload models within their storage limits and execution time / accuracy will be scaled to fit within a reasonable time. Limits will be relaxed on SOOTHSAYER.

We are partnering with open standards based companies to integrate this into different viewers. One exciting partner we are working with now is Avalon Holographics. Their revolutionary display can present our rich engine output as a hologram, so it can be explored in three dimensions for maximum spatial awareness without viewers needing additional hardware.

If you would like to get our open standard glTF models into your viewer, get in touch. If you can bring challenging BIM models or LiDAR scans of real tunnels and large buildings we would really like to talk to you.

Demo video

3D simulation engine demo video


SOOTHSAYER server performance testing

Brown bears

We have lab tested three different sized hardware profiles for running our SOOTHSAYER™ RF planning server to find out how they compare under load. These profiles cater for different setups, ranging from an enterprise with rack mounted servers, to a small office, to a vehicle.

Enterprise server

This server is a standard Dell PowerEdge R740 with an Intel Silver CPU and a 24GB Nvidia A5000 GPU running a Proxmox hypervisor. SOOTHSAYER 1.7 is installed as a virtual machine and LiDAR data is mapped via a network share.

Datasheet.

Mini desktop PC

This small form factor desktop PC is a HP z2 G9 mini with an Intel i9 CPU and a Nvidia A2000 GPU running Ubuntu 22 server.

SOOTHSAYER 1.7 is installed as a docker container and LiDAR data is local.

Datasheet.

Embedded PC

This portable server is an Nvidia Jetson AGX Orin with an ARMv8 64 bit CPU and a 2048 core GPU. The server has three selectable power settings and was run in the mid-range 30W mode.

SOOTHSAYER 1.7 is installed as a docker container and LiDAR data is local.

Datasheet.

The tests

The tests used were designed to benchmark both the hardware and our software’s capability for high performance RF planning. We’ve picked challenges other tools would struggle with, like Bullington diffraction and double digit megapixel resolutions.

High resolution area

The test parameters here were for a 5m resolution coverage heatmap out to a 10km radius, a total image size of 16 megapixels (a 20km x 20km area at 5m per pixel is 4000 x 4000 pixels). This was repeated with and without Bullington diffraction and soft clutter data, which are computationally expensive, to compare diffraction with soft clutter against basic line-of-sight (LOS) speed.

This would exercise our Area API.

Long range path profile

The test parameters here were for a 2m resolution LiDAR path out to a point 10km away. This would test 5000 points and would exercise our Path API.

Ten links at once

The test parameters here were for 10 random transmitters at 3km from a receiver using 2m resolution and Bullington diffraction. All links would be tested in a single API call to our Points API.

The results

All times are in seconds and were taken from the API response, excluding network latency and presentation in an interface.

Test                        Server    Mini PC    Embedded PC
Area w/Diffraction (CPU)    26        13         38
Area w/LOS (CPU)            17        10.9       24
Area w/Diffraction (GPU)    6.7       13.1       116
Area w/LOS (GPU)            2.5       3.9        29
Path                        0.14      0.05       0.08
Links                       0.09      0.05       0.08

Table of results

The times didn’t disappoint and threw up more than a few surprises. Unsurprisingly, cores matter when processing coverage and the fastest compute went to the largest GPU on the server.

When processing links, the CPU is critical and here the Intel i9 on the Mini PC excelled with a 50 millisecond compute time for multiple 2m LiDAR links. This faster-than-human-reaction-time speed makes it suitable for dynamic planning with moving vehicles. The enterprise server disappointed on these quick link calculations due to latency from the network data share where the LiDAR GeoTIFF tiles were stored. This latency was only noticeable with very quick calculations, however.

The embedded PC performed admirably considering it was seriously under-powered compared to the others at only 30W. It was able to model LiDAR links in 80ms and was only about 46% slower than the enterprise server at CPU calculations. Where it was noticeably slower was the GPU area calculation. Increasing the power on the device to the 60W maximum doubles the available CUDA cores, which from our testing we expect would roughly halve the GPU time.

Recommendations

  • For MANET link planning: an Intel i9 CPU with an SSD is extremely fast
  • For high resolution area coverage: an enterprise grade GPU is unbeatable
  • For a small form factor host: the HP z2 G9 mini with an A2000 GPU is powerful
  • For value for money: the HP z2 G9 mini with an A2000 GPU is excellent
  • For low SWaP: the Nvidia AGX Orin 64GB delivers great economy

More information

For more information on self hosted RF planning, see our SOOTHSAYER page.

No load balancers with arrays of RTX gaming GPUs were used in this testing. We don’t need to do that!


SOOTHSAYER 1.7 released

SOOTHSAYER 1.7

The latest major release of our private server, SOOTHSAYER, is ready. It includes six months of features, updates and bug fixes from CloudRF, and features several customer sponsored capabilities including RADAR and Trilateration.

By popular demand, we now have a Docker enterprise solution so you can build your own containers or use our pre-built AWS template.

Docker support!

Thank you to all who gave feedback and feature sponsorship to help make this feature release. As you can see from the substantial new features and enhancements we continue to model the future of scalable APIs for multiple technologies and verticals such as Aviation and Counter UAS.

New in 1.7

RADAR model

The RADAR propagation model has a RADAR cross section parameter (m²) so you can model the effective detection range for different sized objects with a RADAR, up to 90GHz and a 500km radius – horizon permitting!

It’s implemented as model #8 in the API, in both the CPU and GPU engines, and in the user interface.

RADAR documentation: https://cloudrf.com/documentation/02_web_interface_intro.html#radar

Airport RADAR

Noise API

The noise API was developed from user feedback about the problem of varying local noise figures. A universal guessed value, eg. -110dBm, is not representative of the real world, and especially not of the difference between a quiet rural area and a loud urban one. Now you can push in your own noise readings from radios or other sources, either before or during planning. When modelling, live noise can be used by setting the noise value to ‘database’.

Noise CREATE API schema: https://cloudrf.com/documentation/developer/#/Manage/noiseCreate

Noise GET API schema: https://cloudrf.com/documentation/developer/#/Manage/noiseGet

Trilateration API

The Trilateration API was developed by popular request to accelerate and enable the process of geo-location of an unknown emitter. It will challenge conventional thinking about the accuracy of power based geo techniques by using accurate modelling and clutter data instead of circles. Our modelling has been field tested to below 8dB RMSE.

It requires receivers to be pre-modelled to enable rapid RSSI lookups using live receiver measurements. Using this two step process, results are delivered in milliseconds, unless a receiver is moving in which case its coverage can be maintained using the fast GPU engine.

Trilateration API demo: https://cloud-rf.github.io/CloudRF-API-clients/slippy-maps/leaflet-trilateration.html

Height AMSL

Since our inception we’ve used height above ground, as most of our users are land based terrestrial planners. As we’ve gained more aviation and RADAR customers, altitudes above mean sea level are now supported by request. The altitude type is specified in the request “output.units” key as before, only now there are four possible values instead of two. The range is 1 to 120,000 m/f.

Value     Description
m         Meters above ground
m_amsl    Meters above sea level
f         Feet above ground
f_amsl    Feet above sea level

API height measurement units
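For example, a request fragment for an airborne platform in feet above mean sea level might look like the sketch below; the values are placeholders and the remaining keys follow the usual request schema:

# Requesting heights above mean sea level; values are placeholders.
request = {
    "transmitter": {"alt": 10000},       # interpreted per output.units below
    "output": {"units": "f_amsl"},       # m | m_amsl | f | f_amsl (1 to 120,000 m/f)
}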

HF NVIS model

By request we’ve added an HF Near Vertical Incidence Skywave (NVIS) model. This models the first bounce from the ionosphere out to 500km and has an option for three layers (D, E, F) at differing refractive heights. This capability is supported in both our CPU and GPU engines and is particularly valuable for teaching HF, as it gives students an interactive tool to learn dipole patterns, the difference between day and night, and critical frequency selection.

We have calibrated our NVIS model to align within 10dB of measurements taken from a 2012 research paper by Marcus Walden using a 5MHz NATO frequency in the UK. From this paper we selected one of the longer links at 210km where we used the median measurement value for August.

HF reliability animation

Bullington and Deygout diffraction models

Our single knife edge diffraction model has served us well for many years but cannot deliver the accuracy we aspire to once multiple obstacles are on the path. We have therefore invested substantial effort to add the much more complex Bullington and Deygout models to both our CPU and GPU engines. These greatly enhance simple propagation models as we proved during our LTE800 field test in the mountains earlier this year.

Deygout diffraction


Automatic CSV processing in UI

From user feedback we created a solution to a problem whereby customers using managed IT systems were not able to install or execute our Python API scripts but needed to batch process spreadsheets. We addressed this by adding a form within our web interface where CSV spreadsheets can be uploaded and automatically processed. It uses a much simpler format which is combined with the current form settings, like environment, to execute API calls.

Documentation: https://cloudrf.com/documentation/05_web_interface_import_data.html#automatic-processing-process-a-spreadsheet

ITU-R P.1546 VHF/UHF model

This is a logically more advanced path loss model than legacy curves, designed for terrestrial VHF and UHF planning. It’s conservative, so we recommend the optimistic context with Bullington diffraction.

Multisite support for mixed AGL / AMSL units

After we implemented height above sea level for aircraft, we received feedback from customers using our multisite API that they would like to model transmitters above ground level and receivers above sea level. This is a common scenario for a ground-to-air network for example. We extended the multisite API to allow for mixed units so this can all be modelled in a single API call.

Testing

Our testing cycle is six months long, and starts with CloudRF where thousands of users, on every device imaginable, will test our API and interfaces to destruction. By opening it to the public via our free plan, we encourage many concurrent users, with diverse client software, to test our service and in doing so receive much more comprehensive testing than legacy products or GOTS software which only the contractor has tested.

Field testing is essential to validate the accuracy of our software and calibrate radio templates. After we implemented our new diffraction models, we took them to Scotland where we mapped out 22km of mountain LTE800 measurements. This valuable data improved the models and clutter profiles for UHF and validated our investment in improving accuracy.

Our API is regression tested daily and our models have a custom test harness to validate the many permutations of path loss models, environment contexts, diffraction models and parameters. As the number of models and inputs has grown we are relying on automation to ensure outputs are consistent and within parameters for the model(s).

Our user interface on CloudRF is instrumented with third party error handling software which automatically triages bugs for us. Through this we are able to identify issues early before customers are aware. This works especially well with our crowd sourcing strategy since we see a greater variety of clients than legacy or GOTS competitors who do not have the confidence to do genuine crowd sourced testing.

For hardware and hypervisor compatibility we have invested in a wide variety of systems and GPUs ranging from low end consumer GTX cards to enterprise grade devices like the A5000 and A100. We test SOOTHSAYER virtual machines on Proxmox 8 and ESXi 8 with different CPU architectures, network profiles and resource profiles.

Custom clutter

More information

Get in touch for a demo and pricing today at support@cloudrf.com


Field testing diffraction

Spectrum analyser up mountain

Recently, we added advanced diffraction models to CloudRF to complement our existing models. To validate the performance of the new Bullington and Deygout models, we took a field trip to the Highlands of Scotland to collect UHF measurements over rugged mountain terrain and through forests.

With these measurements we have validated and optimised our new models for this environment. We already had single-knife-edge diffraction, based on Huygens’ principle, and the Irregular Terrain Model (ITM) which uses Vogler diffraction. The Vogler model is known to be good, but single knife edge has its limits, which we have pushed.

Summary

The testing validated our investment into the complex multi-obstacle models we have added.

Both new models offer a significant improvement in accuracy, with no loss in performance for Bullington. We were able to model diffraction with higher accuracy over multiple challenging obstacles such as gradual convex slopes, ridges and valleys. Modifications have been made to the CPU and GPU engines which will be updated on CloudRF and SOOTHSAYER in due course.

Our key findings include:

  • Single-knife-edge was optimistic
  • Deygout was the most accurate, but slower
  • Bullington provided the best overall performance
  • 7.6dB accuracy achieved, including receiver error
  • 2.4dB improvement on single knife edge model

Test environment

We selected a famously cold and remote valley in the Cairngorms national park for our test which has cell towers in the valley and a variety of local repeaters for TETRA, VHF and UHF PTT services. The challenging terrain is notoriously difficult for radio communications making it ideal for our purposes.

Using a test phone with 3dB of measurement error attached to the Vodafone 4G network and a portable Rohde and Schwarz spectrum analyser, we collected a variety of VHF and UHF measurements along a 22km circular mountain route covering a wide variety of terrain. From the data collected, the 800MHz LTE measurements proved the best examples of signal failure so we focused our post-analysis on these.

Throughout the LTE testing the phone attached to multiple local cells and experienced prolonged signal failure as expected in a remote mountain valley.

We filtered the results to isolate 634 RSRP readings from a single physical LTE cell, PCI 460, from which we would calibrate modelling. This cell was located at the start of our test route and was a high power LTE band 20 (800MHz) base station with 10MHz of bandwidth.

Trees and attenuation

The first, and last, few miles of the circular route passed through a mature Scots pine forest. Unlike dense Scandinavian pine forests, this was sparse with a relatively high tree canopy. A lighter tree clutter profile was used to represent the attenuation from these trees, which impacts UHF propagation.

Convex hill and a loss of signal

Beyond the forest, the route gained altitude into a mountain plateau where line of sight was lost. The shape of the hill meant any diffraction formula would have to model a gradual convex shape versus a simpler knife-edge obstacle.

The ascent and re-acquisition

As the route ascended a spur leading toward the ridge, the signal was reacquired beyond the snowline. This signal gain was gradual, starting as a diffracted signal from the lower convex hill which eventually became a direct signal at the summit, 7km away from the cell.

Summit switcheroo

The route traversed a high ridge which featured many gaps in our cell coverage in the test data. These gaps were because the LTE modem performed a handover to stronger cells which appeared as soon as they were “visible”. Depending upon the position along the ridge, it occasionally reverted to the original “460” cell at over 7km.

Descent into darkness

The steep descent from the ridge entered an obscured valley not visible from the cell.

This resulted in a prolonged loss of signal for several miles until the signal was reacquired toward the trees at the foot of the valley.

Results analysis

The LTE survey data was prepared as CSV and loaded into the CloudRF web interface for use with the coverage analysis tool. This provided live feedback on accuracy with user generated heatmap layers so the correct settings could be identified first visually using a fine colour schema and then numerically by the reported average error in decibels.

Whilst the site location and frequency were known, the power output was not, so the first task was to match line of sight positions, such as on the ridge line, to establish the power without any obstacles. From there, a tree clutter profile was created to match the tree measurements and finally the best model and context were selected. For this task, the generic Egli VHF/UHF model was chosen as a basic model on which to base the diffraction comparison.

As the settings matured, the reported Root Mean Square (RMS) error reduced accordingly until it was below 8dB (including 3dB of receiver error). This was slightly better than the 8dB we achieved on our previous LTE800 field test, and given the extreme context, spanning a diverse mountain range, this was an excellent improvement.

Subtracting receiver error gives modelling error in the range of 4.6 to 7dB; an excellent result for difficult terrain.

Diffraction model     Mean error (dB)   RMSE (dB)   Modelling error (dB)   Comment
Single knife edge     5.2               10          7                      Optimistic. May show false positive coverage.
Deygout               -1.7              7.6         4.6                    Good. Can be conservative and is 50% slower but gives high assurance.
Bullington            1.4               8.9         5.9                    Good. Can be optimistic but is as fast as KED and relatively accurate.

Calibration results from comparing area coverage with survey data

Coverage results

The scatter plot for the ascent to the ridgeline shows measured and simulated values. The steep drop at 2.5km and the gap in results after 3.3km match closely for the critical beyond line of sight region. The results start again once we ascended toward the ridge, where the new models were conservative by 10dB whilst the simple knife edge model tracked the path loss curve – which was to be expected. All models aligned once line of sight was achieved at 6.3km.

Recommendations

The outcome of this testing has improved the accuracy of our diffraction models, identified optimisations for our clutter profiles and proved a simple path loss model can be very accurate beyond line of sight with the right diffraction model.

The API settings we used for the LTE800 cell and RSRP output are here. Note the custom clutter profile and fine colour schema.

{
    "version": "CloudRF-API-v3.9.5",
    "reference": "https://cloudrf.com/documentation/developer/swagger-ui/",
    "template": {
        "name": "Lochnagar LTE800",
        "service": "CloudRF https://api.cloudrf.com",
        "created_at": "2024-01-16T13:15:02+00:00",
        "owner": 1,
        "bom_value": 0
    },
    "site": "Site",
    "network": "LOGNAGAR",
    "engine": 2,
    "coordinates": 1,
    "transmitter": {
        "lat": 57.003155,
        "lon": -3.327424,
        "alt": 15,
        "frq": 806,
        "txw": 15,
        "bwi": 10,
        "powerUnit": "W"
    },
    "receiver": {
        "lat": 0,
        "lon": 0,
        "alt": 2,
        "rxg": 0,
        "rxs": -129
    },
    "antenna": {
        "mode": "custom",
        "txg": 19,
        "txl": 0,
        "ant": 0,
        "azi": 180,
        "tlt": 0,
        "hbw": 120,
        "vbw": 20,
        "fbr": 19,
        "pol": "v"
    },
    "model": {
        "pm": 11,
        "pe": 2,
        "ked": 2,
        "rel": 60
    },
    "environment": {
        "obstacles": 0,
        "buildings": 0,
        "landcover": 1,
        "clt": "SCOT4.clt"
    },
    "output": {
        "units": "m",
        "col": "PLASMA130.dBm",
        "out": 6,
        "ber": 0,
        "mod": 0,
        "nf": -120,
        "res": 10,
        "rad": 8
    }
}

Disclaimer

Climbing mountains in winter to test radio networks is dangerous, hard work which requires fitness, experience, skill and dedication to RF engineering. Only do this if you are serious about improving accuracy!


HF Near Vertical Incidence Skywave (NVIS)

HF NVIS coverage

Today we launched a new model for ionospheric communication planning with High Frequency Near Vertical Incidence Skywave (NVIS).

It’s available in the interface and directly via the area, path, points or multisite API calls. The powerful GPU accelerated capability offers a modern way of visualising and teaching NVIS propagation. It does not, in its present form, do frequency selection, so this must be performed before using this tool to visualise the coverage.

Background

This form of basic ionospheric propagation is popular with Military, Maritime and rural customers. With a simple horizontally polarised antenna and the right frequency, an operator can establish a link of up to 500km making this a quick and economical method for communicating long distances.

HF is undergoing a renaissance driven by uncertainty of the availability of space systems and the need for secondary communications in emergency PACE planning. Despite the choice available now with consumer grade space based communications, HF is a low cost method which requires no third parties making it immune to business and geo-political changes.

As HF bandwidth is very limited, historically only CW and voice channels were viable although developments in compression, cognitive radio and now MIMO are changing this. Improvements in software especially mean that reliable data channels with improved throughput are possible which makes HF data links a popular low cost, low bandwidth, alternative to satellite communications.

Ionospheric propagation

The ionosphere describes layers of ionised gas between earth and space which vary in height between around 100 and 300km. These layers reflect (HF) radio waves and attenuate others. As the layers are stimulated by sunlight, propagation changes significantly between day and night. Seasons affect propagation also, so a frequency which is good in the day may become unworkable after sunset.

The D Layer is the lowest layer at around 100km and absorbs low frequencies (2-4MHz). This weakens at night so these frequencies become viable. This determines the Lowest Usable Frequency (LUF).

The F layer is the highest layer at around 300km and reflects higher frequencies between 4 and 8MHz. The critical frequency is the Maximum Usable Frequency (MUF) which changes throughout the day, determined by sunlight.

A useful analogy for the change in the layers is a car engine: it warms up quickly in the morning and cools gradually at the end of a day’s driving. HF layers change quickly at dawn and slowly after sunset.

Higher frequencies beyond 8MHz experience less refraction so pass through the layers out into space. Depending on conditions a higher frequency may be possible but the most reliable (for NVIS) are found between 2 and 8MHz.

Using the NVIS model

The HF NVIS model can be selected in the model menu or in the API as code 12. Like other models it has a configurable reliability (aka fade margin) and a “context”. The context here refers to the refraction altitude, not an environmental choice (eg. urban/rural) as with other terrestrial models.

  • Context 1 is the D layer at 100km – (Day)
  • Context 2 is the E layer at 200km
  • Context 3 is the F layer at 300km – (Night)

In the day you should use the D layer and your frequency should be between 4 and 8 MHz.

At night, you will use the F layer and need a lower frequency between 2 and 4MHz.

This HF model is only for use with a pre-determined frequency. It does not do forecasting or LUF/MUF frequency selection. This functionality will follow.

The reliability option provides a 10dB fade margin to tune modelling to match the real world. This was set with 50% reliability aligning to summer predictions with a 5MHz frequency.
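Assuming the same model keys used in our other JSON request examples (pm for the model, pe for the context and rel for reliability), a daytime NVIS model block might look like this:

# Daytime HF NVIS sketch; pair with a 2-8MHz frequency, 180m resolution
# and a radius of up to 500km as described below.
model = {
    "pm": 12,    # HF NVIS model
    "pe": 1,     # context: 1 = D layer 100km (day), 2 = E layer, 3 = F layer (night)
    "rel": 50,   # 50% reliability, aligning with summer predictions at 5MHz
}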

HF dipole antenna

The antenna pattern will be a special horizontal dipole. You may set the gain and azimuth only; you cannot change the pattern, as it has high angle nulls covering the skip distance before the reflection hits the earth. This manifests as a cold zone at either end of the dipole where the pattern gain is lowest.

This animation shows a dipole orientated north west. The angle of orientation is measured perpendicular (at a right angle) to the wire so the tips of the antenna will generate the worst coverage, in this case to the north east and south west.

HF coverage animation

Radius and resolution

The recommended resolution for NVIS is 180m due to the immense size of the problem. Land cover is irrelevant with this mode of propagation. The radius has been limited to 500km in line with API limits. You can go further with NVIS but would run a risk of straying into multi-hop HF Skywave and this capability is focused on one hop only.

Most NVIS communication takes place between 50 and 300km where groundwave ends and the signal fades into the noise floor.

Using the GPU engine we can model a 500km radius with NVIS and terrain in under 3s. Terrain is a small concern for NVIS unless it’s a large mountain several hundred km away. In that case you will experience shadows due to the low angle of incidence, but compared with shadows from terrestrial communications they will be small.

Environment layers such as land cover and buildings should be off. They will be ignored at 180m resolution.

The colour schema can be whatever you like, but if you want to align with the ‘S’ meter scale popular with HF, where a barely workable signal is S1 and the best is S9 (-73dBm), use a maximum value of -73dBm with 6dB bands from S9 down to S1.
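A quick way to derive those band thresholds, taking S9 as -73dBm and 6dB per S-unit so that S1 falls at -121dBm:

# S-meter thresholds for a custom colour schema (S9 = -73dBm, 6dB per S-unit)
S9_DBM = -73
thresholds = {f"S{s}": S9_DBM - 6 * (9 - s) for s in range(9, 0, -1)}
print(thresholds)   # {'S9': -73, 'S8': -79, ..., 'S1': -121}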

Accuracy verification

We have calibrated our NVIS model to align within 10dB of measurements taken from a 2012 research paper by Marcus Walden using a 5MHz NATO frequency in the UK. From this paper we selected one of the longer links at 210km where we used the median measurement value which for August 2009 was lower during the day than VOACAP, a popular open source application for HF forecasting. The median dBW measurement at noon was -120dBW (-90dBm).

Noting that the RMS error between the VOACAP predictions and the measured values was concluded to be 7 to 12dB at 12 noon (ref. table 7 on page 8), and more at night, we have tuned our model so an “optimistic” prediction is within 3dB of the noon measurement. The context and reliability options provide sufficient control to allow predictions to align with current and local ionospheric conditions.

The screenshot below shows both the path and the area coverage aligning with a 1dB calibration schema. The link has over 900m of curvature height gain which explains why a flat region of England appears as a mountain!

HF NVIS calibration to 3dB

Ionospheric modelling is less predictable than terrestrial modelling due to unpredictable solar radiation. Predictions generated with this model are useful for training, situational awareness and antenna alignment but cannot provide an accuracy greater than 10dB, assuming the inputs are correct.

Look forward: Space weather and long range HF

HF forecasting tools use lookup tables to set refractivity for different seasons and times of day. Using quality, current data improves accuracy, but like weather forecasting it cannot offer accurate predictions without live data, in this case space weather, which has seen a lot of renewed research recently. Our implementation does not use forecasting data presently, so users should not use it to pick their frequencies, but it will help visualise the coverage and align antennas – which at 500km is important.

For the next phase of HF, long range skywave, we will use a space weather feed to offer high resolution HF predictions. Long range HF uses multiple hops at lower angles so the space weather and time of day must be considered along the route which may be thousands of kilometers….


Planning for noise

The trouble with radio planning software

Radio planning software has a patchy reputation. Regardless of cost, the criticism, especially from novice users, is generally that results “do not match the real world”. The accuracy of modelling software can be improved with training, better data, tuned clutter etc but if you do not plan for the local spectrum noise, it will be inaccurate.

The reason modelling does not match the real world is that the real world is noisy, and noise is everything in digital communications. Spectrum noise will limit your network’s coverage and your equipment’s capabilities. A radio that should work over miles can be reduced to a range of a few feet when the noise floor is high enough.

Anyone expecting simulation software to produce an accurate result without offering an accurate noise figure will be forever disappointed as software cannot predict what the noise floor is in a given location at a given moment – you need hardware for that…

Spectrum sensing radios

Modern software defined radios are capable of sensing the noise figure for the local environment. This allows operators, and cognitive radios, to make better choices for bands, power levels and wave-forms, as narrow wave-forms perform better in noise than wider alternatives because channel noise increases with bandwidth.

For example, you can have a radio capable of 100Mb/s but it won’t deliver that speed at long range, at ground level, as it requires a generous signal-to-noise margin to function. This is why a speed demo is always at close range.

When spectrum data is exposed via an API, as in the Trellisware family of radios, it provides a rich source of spectrum intelligence which can be used for radio network management and dynamic RF planning with third party applications. When we integrated this radio API last year, we were focused on acquiring radio locations, not spectrum noise. At the time we could only consider a universal noise floor value in our software, so the same noise value was applied to all radios, which was vulnerable to error as radios in a network will report different noise values.

Interference: a growing issue

The single biggest communications problem we hear about, from across market sectors, is interference: deliberate, accidental, or just ambient like in a city. The number of RF devices active in the spectrum, especially in ISM and cellular bands, is increasing, even in markets which were relatively “quiet”, such as agriculture. Some markets have always been problematic, such as motorsport, where the noise floor increases significantly on race days.

Spectrum management is a huge problem which won’t be fixed with management consultants or artificial intelligence. Regulators can, and are, restructuring spectrum for dynamic use but to use this finite resource efficiently, hardware and software vendors need to publish APIs and competing vendors need to be incentivised to work to common information standards.

As noise increases, the delta between low-noise RF planning results and real world results has the potential to grow. There’s anecdotal evidence that some private 5G network operators are experiencing so much urban noise they’ve given up on RF planning altogether, and have opted to take their chances using a wet finger and local knowledge. Skipping RF planning is a managed risk when a company has experienced staff (or they get paid for failure), but it does not scale and is a significant risk when working in a new area and/or with inexperienced staff.

A solution: The noise API

To address this challenge, we’ve developed a noise API to eliminate the human error and guesswork around noise floor values which has undermined the reputation of “low-noise” radio planning software.

Manual entry can now be substituted for a feed of recent, or live, spectrum intelligence to enable faster and more accurate network planning. Combined with our real-time GPU modelling, the API can model coverage for moving vehicles, with real noise figures.

There are two new API requests in v3.9 of our API: /noise/create for adding noise, and /noise/get for sampling noise. The planning radius is used as a search area, so you can upload one or thousands of measurements, private to your account. The planning API will reference the data, if requested, and if recent (within 24 hours) local noise is available for the requested frequency it will sample it and compensate for the proximity to the transmitter(s).
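A minimal sketch of the two calls is below. The endpoint paths come from the text above, but the measurement field names are placeholders, so refer to the /noise/create and /noise/get schemas in the Swagger documentation for the real keys:

# Push a noise measurement, then sample it back. Field names are placeholders.
import requests

API = "https://api.cloudrf.com"
HEADERS = {"key": "YOUR-API-KEY"}

measurement = {"lat": 52.886, "lon": -0.083, "frq": 460, "bwi": 1, "nf": -95}
requests.post(f"{API}/noise/create", headers=HEADERS, json=measurement)

sample = requests.post(f"{API}/noise/get", headers=HEADERS,
                       json={"lat": 52.886, "lon": -0.083, "frq": 460}).json()
print(sample)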

When no noise is available within the search radius, an appropriate thermal noise floor will be used based on the channel bandwidth and the Johnson-Nyquist formula. The capability can be used by our create APIs (Area, Path, Points, Multisite) by substituting the noise figure in the request eg. “-99” for the trigger word “database”.

{
  "site": "2sites",
  "network": "MULTISITE",
  "transmitters": [
    {
      "lat": 52.886259202681785,
      "lon": -0.08311549136814698,
      "alt": 2,
      "frq": 460,
      "txw": 2,
      "bwi": 1,
      "nf": "database",
      "ant": 0,
      "antenna": {
        "txg": 2.15,
        "txl": 0,
        "ant": 39,
        "azi": 0,
        "tlt": 0,
        "hbw": 1,
        "vbw": 1,
        "fbr": 2.15,
        "pol": "v"
      }
    },
    {
      "lat": 52.879223835785716,
      "lon": -0.06069882048039804,
      "alt": 2,
      "frq": 460,
      "txw": 2,
      "bwi": 1,
      "nf": "database",
      "ant": 0,
      "antenna": {
        "txg": 2.15,
        "txl": 0,
        "ant": 39,
        "azi": 0,
        "tlt": 0,
        "hbw": 1,
        "vbw": 1,
        "fbr": 2.15,
        "pol": "v"
      }
    }
  ],
  "receiver": {
    "alt": 2,
    "rxg": 2,
    "rxs": 10
  },
  "model": {
    "pm": 11,
    "pe": 2,
    "ked": 1,
    "rel": 80
  },
  "environment": {
    "clm": 0,
    "cll": 2,
    "clt": "SILVER.clt"
  },
  "output": {
    "units": "m",
    "col": "SILVER.dB",
    "out": 4,
    "res": 4,
    "rad": 3
  }
}

In the example JSON request above, two adjacent UHF sites are in a single GPU accelerated multisite request. The sites both have a noise floor (nf) key with a value of “database”. Noise will be sampled separately for each site.

Demo 1: Motorsport radio network on race day

The local noise floor jumps ups significantly on race day compared with the rest of the time making planning tricky.

Demo 2: Importing survey data to model the “real” coverage across a county

By importing a spreadsheet of measurements into the API, we can generate results sensitive to each location.

A look forward to cognitive networks

Autonomous cognitive radio networks require lots of data to make decisions. Currently, they can use empirical measurements of values such as noise to inform channel selection and power limits at a single node. What they cannot do is hypothesise what the network might look like without actually reconfiguring. To do that requires a fast and mature RF planning API, integrated with live network data. Only then can you begin to ask expansive questions such as: which locations, antennas or channels are best for my network given the current noise, or, more interestingly, the anticipated future noise, where the state now is known and the state in the future is forecast?

As our GPU multisite API can model dozens of sites in a second, the future could be closer than you think…

References

API reference: https://cloudrf.com/documentation/developer/swagger-ui/

Hosted noise client: https://cloud-rf.github.io/CloudRF-API-clients/integrations/noise/noise_client.html

GPU multisite racetrack demo: https://cloud-rf.github.io/CloudRF-API-clients/slippy-maps/leaflet-multisite.html


Critical Coronation private 5G network planned with CloudRF

On Saturday 6th May the world’s media descended on London for the Coronation of King Charles, an event last planned before many people had television.

As the national broadcaster, the BBC managed the coverage and worked with Neutral Wireless to deploy an innovative private 5G network, with dedicated spectrum, along the procession route for exclusive use by the media and special cameras with 5G modems.

Using the CloudRF API with UK LiDAR data the team created accurate urban line of sight models for their N77 base stations along the tree lined route. Their model used RSRP units and a custom colour schema to map the 4GHz downlink coverage and key handover regions to ensure smooth subscriber transitions for the dynamic event. 

Antenna patterns

The area to be covered is a linear tree lined boulevard known as “The Mall” which leads to one of the most iconic buildings in the country, Buckingham Palace. For this task, high performance Alpha Wireless directional panels were mounted above the crowds at only 4m, much lower than a conventional city cellular network where masts sit on rooftops. The combination of low height and a 4GHz frequency limits the effective range, so the direction of the antennas needed to be carefully optimised to provide maximum coverage for broadcasters strategically positioned along the route for line of sight.

How do you model a parade of horses?

Given the low height of the masts and the significant number of tall horses on parade, this presented a challenge to critical line of sight which a LiDAR vendor cannot help with. The same problem applies to temporary structures such as the grandstand erected outside the Palace.

For a challenge such as this we can use custom clutter to simulate a parade of horses at a uniform distribution with a nominal density value. A “brick” clutter type can be customised to 2m height and 0.4dB/m attenuation to simulate loss through the parade, showing the low and high risk locations for maintaining a link through it.

Our current 2D engine regards all obstacles as extending to the ground, so a clutter model for a horse will be conservative: in reality a horse has a substantial gap between its legs, sufficient for some RF to travel underneath and reflect, and diffuse, off the rough ground to reach a camera beyond it. We’re already working on 3D 😉

In the screenshot below, the formation nearest the mast presents no challenge; the formation in the middle shows attenuation throughout, potentially making a link difficult depending upon the siting of the receiver; and the distant formation blocks the already attenuated signal.

Trees

The parade was held in May, when the trees along the Mall are coming into leaf, presenting a moderate obstacle to the 4GHz frequencies. The temporary masts were therefore erected forward of the trees for optimal coverage, but still technically under the canopy, which makes planning challenging with a 2D engine since these trees can appear as spikes in the LiDAR profile. To avoid accidentally siting a mast atop a tree/spike in the model, the path profile tool can be used to identify where there are tall trees in the underlying surface model. In the 2019 UK Environment Agency LiDAR, the nearest trees on the Mall are lightly represented with a 6m to 8m average canopy, compared to their significant neighbours one row back and beyond in St James’s Park.

The result

The high resolution coverage map was integrated with official event mapping, printed, and displayed in the event broadcasting operations room as a reference. You can also see it on the BBC.

Despite the challenges expected from such congested spectrum, dynamic obstructions and the surprise of unscheduled electronic counter-measures, they were able to deliver accurate coverage, on time and within budget, to broadcast a historic event in high definition, in real time.

Victoria Memorial

CloudRF referenced in award winning technical paper

The BBC Research and Development team published an award winning paper about the event at the 2023 International Broadcasting Convention, titled 5G Standalone Non-Public Networks: Modernising wireless production.

In this paper, the 4GHz coverage accuracy was validated using ground truth data. This quote about CloudRF’s accuracy stands out:

The agreement between the predictions and on-the-ground measurements is excellent

BBC Research and Development

You can download the paper at the BBC here.


Live noise floor modelling

“I’ve never seen modelling that matches the real world”

Anon

Noise in the RF spectrum is a growing issue. It undermines the performance of systems and requires careful spectrum management and deconfliction to mitigate its impact. Sources of noise can be natural, industrial or man made. Regardless of the source, or intent, the issue needs to be understood and dealt with to operate effectively in congested spectrum.

Radio users working in cities know this problem all too well. They can describe the symptoms but, through a lack of tooling, are unable to visualise the full impact and so identify a solution such as an adjustment. Many organisations have bought receivers for use at the edge, and back office software for planning, but the two rarely meet…

Noise floor and SNR

The noise floor describes the average minimum RF power in the spectrum. There are different ways to measure it, and depending on the requirement you may want the peak power, but for our purposes we are using the mean dBm value within the channel. A quiet noise floor value is -120dBm, subject to bandwidth. The higher this value, the shorter the range you can communicate over.

FFT

Signal to Noise Ratio (SNR), measured in dB, describes a signal’s power above the noise. A good signal might be 20dB above the noise and a weak signal only 3dB. Different signals have different SNR requirements; GPS for example uses a BPSK signal with a very low 2dB requirement so it’s barely visible in the noise, whereas a DVB QAM signal needs a prominent 20dB SNR to deliver an error free video signal.

In CloudRF, both noise and SNR can be defined to simulate different environments and different waveforms.  

A good guess – Johnson-Nyquist noise 

Without the presence of man made signals, the RF spectrum has a natural noise floor called the thermal noise floor. This can be calculated with temperature (noise increases with temperature) and bandwidth (More bandwidth equals more noise).

Noise (dBm) = -173.8 + 10 log10(Bandwidth in Hz)

Calculators exist to compute this value based on the Johnson-Nyquist formula. We use this in our interface so when you change bandwidth the noise floor is set. This is a good start – consider it a 75% guess – but it is not the real noise. To get that you need to go to the location and measure it.
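As a worked example of the formula above, at roughly room temperature the noise floor for a given channel bandwidth can be computed like this:

# Thermal noise floor (Johnson-Nyquist) for a given channel bandwidth
import math

def thermal_noise_dbm(bandwidth_hz, noise_density_dbm_per_hz=-173.8):
    return noise_density_dbm_per_hz + 10 * math.log10(bandwidth_hz)

print(thermal_noise_dbm(10e6))    # 10MHz channel -> about -103.8 dBm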

Measuring noise with an RFeye Node

To measure the noise floor accurately you need a high quality receiver. Low quality SDR receivers are easy to come by but will not be able to give you a more accurate noise value than the previously mentioned bandwidth formula.

The CRFS RFeye Node is a high performance RF receiver with an excellent dynamic range, industry leading low noise figure and sensitivity. The API enabled receiver is in use worldwide for remote spectrum monitoring, making it an ideal candidate for integration, especially since it has open source client scripts!

Integration with the CloudRF API

We imported the NCP client library into our open source API client so we could query the noise for our target frequency and bandwidth. Every time our script processes a site, the noise is tested and the result spliced into our site request.

In return we get a model which uses a real noise floor value. Typically this is higher than the formula-derived value, resulting in reduced coverage.

The beauty of this integration is that the receiver can be in another county but the modelling can be conducted with high precision from home. With the scalability of the API it unlocks several possibilities:

  • Model a spreadsheet for a large network, and sample the noise floor from local receivers instead of using a generic best-guess value
  • Model a route for a drone with different noise values along the route. If you’ve ever lost communications with a distant 2.4GHz drone that had LOS, this was likely WiFi noise
  • Model a radio’s and/or a waveform’s performance in a remote location without visiting or deploying equipment, which is expensive and time consuming

Demo video

In this video we put it all together to incorporate live noise into our modelling. We’re executing one site at a time, but with a spreadsheet, the API client will automatically process a network of sites.

Dynamic noise floor modelling with a CRFS RFeye receiver

References

RFeye python library https://github.com/CRFS/python3-ncplib

CloudRF python script https://github.com/Cloud-RF/CloudRF-API-clients/tree/master/integrations/CRFS

CloudRF API reference https://cloudrf.com/documentation/developer/

Posted on

Calibrating beyond line of sight RF modelling with field testing

Summary

We field tested our software to improve it for beyond-line-of-sight planning. From analysis of the data we have improved diffraction accuracy and clutter profiles and, crucially, have proven that high resolution LiDAR is not the best choice for beyond-line-of-sight or sub-GHz modelling. A modelling RMSE of 5.2dB was achieved as a result.

Modelling can only be as accurate as the inputs.

Given accurate reference data and accurate RF parameters, modelling can be very accurate, but achieving both conditions requires careful and delicate calibration of dozens of variables. Thankfully this time-intensive process is only necessary when changing hardware, which for most organisations is a cycle measured in years.

The reference data used could be a digital terrain model like SRTM, a digital surface model like ALOS30, high-fidelity LiDAR, or landcover like ESA WorldCover. As we demonstrate, high resolution does not always translate to high accuracy in beyond-line-of-sight RF.

Calibrating modelling with LiDAR data to match field measurements

LiDAR is great, but it’s not a silver bullet

You can have the most expensive 50cm LiDAR money can buy and still not achieve real-world accuracy or a notable gain over 1m or 2m data (unless you’re planning for a model village). LiDAR on its own cannot model beyond line of sight, which is essential for sub-GHz planning, a risk that arises when planning tool design is focused on sales and marketing rather than actual RF engineering.

Controversially, you can get better BLOS modelling accuracy with basic terrain data enhanced with calibrated clutter profiles which we’ll demonstrate below.

The best data to use depends on the technology and requirements. LiDAR is unbeatable for line of sight planning, but won’t help you in the woods, or beyond line of sight without a proper physics propagation engine.

Unless your network is composed of static masts, e.g. fixed wireless access (FWA), chances are you are working non-line-of-sight between radios, so LiDAR should be used carefully.

Public 1m LiDAR data showing cars, trees and houses

“Line of sight” field testing Feb 2022

Last year we field tested LTE 800MHz in the Peak District and achieved excellent calibration figures for distant hilltop towers looking onto open moorland. This was predictable given that the legacy cellular models we used were developed from similar measurements. As that blog described, the harder calibration was inside a wood, where the LiDAR data proved unsuitable. Due to the simplistic nature of first-return LiDAR, a tree canopy appears as a solid, immutable obstacle. You can model the RF as it hits the tree canopy but not where it matters, on the ground beneath the trees. This key finding accelerated and matured our tooling to support calibration with survey data in CSV format and user-configurable environment profiles.

CSV import utility – developed for analysing field test data
Clutter manager

“Non Line of sight” field testing, Feb 2023

This year we field tested LTE 800MHz again, but this time in an old Gloucestershire village, Frampton on Severn, where the tower was deliberately obstructed and the solid stone buildings meant we were measuring diffraction coming from the rooftops of single, double and triple storey buildings. The test data was collected from two handheld LTE test devices using a combination of Network Signal Guru (NSG) and CellMapper for Android. These apps report signal values and log cell metadata with locations to a CSV file which we can analyse.

Some variables, such as RF power, were unknown, which required us to take measurements on the green in full line of sight. These “power readings” allowed us to reverse engineer the cell power as approximately 40dBm (10W), which would be appropriate for a cell serving a village.
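The arithmetic behind that reverse engineering is essentially free-space path loss applied in reverse: add the path loss over the known line-of-sight distance back onto the measured carrier power (converted from RSRP first, as discussed in the next section) to estimate the transmit power. A minimal sketch with illustrative numbers rather than our actual survey values:

```python
import math

def fspl_db(distance_km: float, frequency_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz)

def estimate_tx_power_dbm(measured_dbm: float, distance_km: float,
                          frequency_mhz: float, antenna_gains_dbi: float = 0.0) -> float:
    """Work backwards from a line-of-sight reading to the cell's transmit power.

    Antenna gains are ignored unless supplied.
    """
    return measured_dbm + fspl_db(distance_km, frequency_mhz) - antenna_gains_dbi

# Illustrative only: a -50dBm reading 300m from an 800MHz cell implies roughly 30dBm EIRP.
print(round(estimate_tx_power_dbm(-50, 0.3, 800), 1))
```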

Frampton on Severn. The cell tower is to the far right behind the pub.

Reference Signal Received Power (RSRP)

The measured power value is Reference Signal Received Power (RSRP), which is an LTE dBm value determined by the bandwidth, in this case 10MHz like most LTE Band 20 signals in Europe.

RSRP is lower than the carrier signal power (received power), which is agnostic to bandwidth but is also measured in dBm.

Be careful not to confuse the two measurements as they can differ by more than 27dB! A carrier signal of -80dBm might have an RSRP of -108dBm or lower depending on bandwidth. RSRP is usable down to -120dBm.

Received power (dBm)   Bandwidth (MHz)   RSRP (dBm)
-70                    10                -97.8
-80                    10                -107.8
-90                    10                -117.8

Relationship between received power, bandwidth and RSRP at 10MHz
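The 27.8dB offset in the table comes from the number of subcarriers the carrier power is spread across: a 10MHz LTE channel has 50 resource blocks of 12 subcarriers each. A minimal sketch of the conversion:

```python
import math

# Resource blocks per LTE channel bandwidth (MHz) -- standard LTE figures.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def rsrp_dbm(received_power_dbm: float, bandwidth_mhz: float) -> float:
    """Approximate RSRP: total received power spread across all subcarriers."""
    subcarriers = RESOURCE_BLOCKS[bandwidth_mhz] * 12  # 12 subcarriers per resource block
    return received_power_dbm - 10 * math.log10(subcarriers)

print(round(rsrp_dbm(-70, 10), 1))  # -97.8, matching the table above
```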

Diffraction

Diffraction is the effect that occurs when radiation hits an edge like a rooftop or a hilltop. The wavefront radiates from that edge, with the resulting power determined by several factors such as the obstacle height and the wavelength. Much like a game of pool, the geometry of the incident angle matters: a tall building will cast a long RF shadow before the diffracted signal is available again beyond it. A proper diffraction shadow has soft edges as the RF scatters in all directions; LiDAR data creates sharp shadows, even when trees have no leaves.

The CloudRF service has two diffraction-capable engines, CPU and GPU, which use a proprietary algorithm based upon Huygens’ principle that considers obstacle dimensions and wavelength.
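Our exact routine is proprietary, but the widely published single knife-edge approximation from ITU-R P.526 gives a feel for what the engines are computing: the loss grows with a dimensionless parameter v derived from the obstacle height above the path, the distances to each end, and the wavelength. A hedged sketch:

```python
import math

def fresnel_v(h: float, d1: float, d2: float, wavelength: float) -> float:
    """Diffraction parameter for an edge h metres above the direct path,
    d1/d2 metres from each end, at the given wavelength in metres."""
    return h * math.sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))

def knife_edge_loss_db(v: float) -> float:
    """Approximate single knife-edge diffraction loss (ITU-R P.526), valid for v > -0.78."""
    if v <= -0.78:
        return 0.0  # approximation not valid; loss is negligible
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# A rooftop 5m above the direct path, 100m from the transmitter and 400m from
# the receiver, at 800MHz (wavelength ~0.375m):
v = fresnel_v(5, 100, 400, 0.375)
print(round(knife_edge_loss_db(v), 1))  # roughly 16dB of diffraction loss
```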

Exaggerated diffraction caused by solid LiDAR

Which propagation model is best for 800MHz?

Most propagation model curves follow similar trajectories and differ by only a modest number of dB compared with the impact of an obstacle. The choice of model is therefore less important, in our experience, than getting the obstacle data right, so for a cellular base station you could choose to calibrate against any empirical or deterministic model which supports the frequency. Each model has a reliability margin to help align and tune it. For UHF, the advanced (and default) ITM model is preferable as it was designed for NLOS broadcasting with complex diffraction routines. For this test we picked the simpler Egli VHF/UHF model with basic knife-edge diffraction since this features in both our CPU and GPU engines, and we want to calibrate both.

Path loss curves for propagation models

What is “accurate”?

The cellular modem used to record power levels has a measurement error of ±3dB, so any reading cannot be more accurate than this. Therefore, if calibration against field measurements returns a Root Mean Square Error (RMSE) of 8dB, this can be considered to be composed of measurement error and (5dB of) modelling error.
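The RMSE itself is easy to reproduce from a survey export of measured versus predicted values; a minimal sketch, with illustrative column names rather than our exact CSV schema:

```python
import csv
import math

def rmse_db(pairs) -> float:
    """Root Mean Square Error between predicted and measured dB values."""
    errors = [(predicted - measured) ** 2 for predicted, measured in pairs]
    return math.sqrt(sum(errors) / len(errors))

def load_pairs(path: str):
    # Illustrative column names; adjust to match your survey export.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield float(row["predicted_dbm"]), float(row["measured_dbm"])

# print(round(rmse_db(list(load_pairs("frampton_survey.csv"))), 2))
```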

For line of sight, a modelling error of < 10dB is ok, < 5dB is good, and < 3dB is excellent. This is the easy part, which for some basic tools is enough.

Line of Sight coverage: Good for above UHF only

For non-line-of-sight (which covers much more complex scenarios), the thresholds double, so an error of < 20dB is ok, < 10dB is good and < 6dB is excellent.

For our field testing, we achieved a non-line-of-sight calibration with 5.2dB of modelling error, which we were content with. We are confident we can improve upon this with the richer clutter data we are currently developing.

Results

1m LiDAR – It isn’t as useful as it looks

Using 1m LiDAR for the village, we generated a sharp heatmap sensitive to chimney stacks and even parked vehicles. This made for a very crisp result visually, but the first-pass correlation with the field measurements showed it was conservative, which arguably is a safe default if you’re unsure.

The reason was a combination of trees and buildings. The village had trees on the green but, due to the season, none were in leaf, so signals would travel through them with relatively little attenuation. The LiDAR data, however, regards a tree as a solid obstacle and so produces an overly conservative prediction for measurements beyond the trees. Attenuation through buildings is another weakness of 2.5D RF modelling with this raster data.

You can show RF on the roof and, if diffraction is calibrated, beyond the diffraction shadow where the signal hits the ground, but not within the shadow itself where through-building signals reside.

LiDAR calibration showing a mean error of -1dB and a total RMSE error of 10dB.

The LiDAR result was improved with positive adjustments to the diffraction routine in SLEIPNIR, our CPU engine. As a result, diffraction is slightly more optimistic and the correlation with field measurements was improved.

The best LiDAR score, subtracting 3dB of receiver error, was a modelling RMSE of 7.28dB.

DTM and Landcover – Better than LiDAR?

Using 30m DTM layered with 10m landcover and 2m buildings, sampled at 5m resolution, better calibration was achieved despite the loss of resolution. The reason is that the landcover offers through-material attenuation which can be adjusted to match field measurements. In this case, the “trees” and “urban” height and attenuation values were adjusted until coverage matched the measurements with high accuracy.
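Conceptually, the clutter profile we tuned is just a small table of heights and attenuation rates per landcover class. The sketch below illustrates the idea only; the keys and values are placeholders, not the CloudRF clutter schema or the figures we settled on.

```python
# Illustrative clutter profile: height in metres, attenuation in dB per metre.
# The real values were iterated until predictions matched the field measurements.
clutter_profile = {
    "trees": {"height_m": 12.0, "attenuation_db_per_m": 0.1},  # bare winter canopy
    "urban": {"height_m": 8.0,  "attenuation_db_per_m": 0.3},  # solid stone buildings
    "open":  {"height_m": 0.0,  "attenuation_db_per_m": 0.0},
}

def obstruction_loss_db(clutter_class: str, path_length_m: float) -> float:
    """Loss accrued while a path passes through a block of this clutter class."""
    return clutter_profile[clutter_class]["attenuation_db_per_m"] * path_length_m

print(obstruction_loss_db("trees", 30))  # 3dB through 30m of leafless trees
```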

The best landcover score, subtracting 3dB of receiver error, was a modelling RMSE of 5.22dB.

Landcover calibration produced a better result – without breaking the bank

A / B comparison – LiDAR and Landcover

Using our calibrated settings, we extrapolated coverage out to a 3km radius to model the whole cell. Here you can clearly see differences in coverage between the two data sets. With LiDAR, coverage is bouncing off hard tree canopies and casting sharp shadows behind obstacles like hedgerows. With landcover, we still have diffraction but more attenuation from obstacles, which creates major nulls and softer diffraction shadows, set by our clutter profile.

A look forward

Findings from this field testing will be worked back into the CloudRF service in the coming days, followed by SOOTHSAYER in due course, as new releases of our SLEIPNIR CPU engine and GPU engine, together with better default clutter values. We are developing sharper, and economically viable, global clutter data to improve on these scores, but won’t say how just yet 😉