Introduction
Most real-world 4G/5G-capable device fleets spend time on both network types. Even as 5G coverage improves, many mobile devices pass through cells, buildings and load conditions that trigger a bearer change (5G↔4G).
That switch alters a device's performance profile. For example, when a device drops from 5G to 4G:
- Latency usually rises by a few to a few tens of milliseconds, and jitter (packet delay variation) can widen.
- Uplink headroom can drop from tens of Mbit/s on 5G to single or low double digits on 4G – felt most on cameras, telemetry bursts, and OTA updates.
- Setup and short transfers are typically faster on 5G – especially on Standalone (SA) deployments – but IoT devices won’t land on 5G every time.
For applications that require voice, video, visualisation, or control in near real time, these shifts show up as stutter, lag, or longer mouth-to-ear (voice) / glass-to-glass (video) delays, unless you plan your devices and applications for them.
The remedy is to:
- design for both 4G and 5G lanes where possible;
- use adaptive bitrate and elastic jitter buffers for media;
- prioritise critical flows over background jobs;
- keep safety-critical decisions local where appropriate;
- and measure latency/jitter so buffers and SLAs are evidence-based (a minimal probe sketch follows this list).
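For that last point, the sketch below is a minimal UDP round-trip probe that collects latency samples and derives a simple jitter figure. The echo host and port are placeholders (you would point it at a reflector you control), and the sample counts are illustrative; treat it as a starting point, not a production measurement tool.

```python
# Minimal latency/jitter probe sketch. "probe.example.com" and port 9000 are
# placeholders for a UDP echo reflector you control; figures are illustrative.
import socket
import statistics
import struct
import time

ECHO_HOST, ECHO_PORT = "probe.example.com", 9000   # placeholder reflector
SAMPLES, INTERVAL_S, TIMEOUT_S = 50, 0.2, 1.0

def measure():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    rtts_ms = []
    for seq in range(SAMPLES):
        sent = time.monotonic()
        sock.sendto(struct.pack("!Id", seq, sent), (ECHO_HOST, ECHO_PORT))
        try:
            sock.recvfrom(64)                        # reflector echoes the payload back
            rtts_ms.append((time.monotonic() - sent) * 1000.0)
        except socket.timeout:
            pass                                     # counted as loss, not latency
        time.sleep(INTERVAL_S)
    if len(rtts_ms) < 2:
        return None
    deltas = [abs(a - b) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return {
        "median_rtt_ms": round(statistics.median(rtts_ms), 1),
        "p95_rtt_ms": round(sorted(rtts_ms)[int(0.95 * (len(rtts_ms) - 1))], 1),
        "jitter_ms": round(statistics.mean(deltas), 1),   # mean delta of consecutive RTTs
        "loss_pct": round(100.0 * (1 - len(rtts_ms) / SAMPLES), 1),
    }

if __name__ == "__main__":
    print(measure())
```

Run it periodically on each bearer and at each site you care about; the resulting percentiles give you the evidence to size buffers and write SLAs.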
In short – dual-capable 4G/5G devices succeed when they are dual-prepared and have access to as many 5G networks as possible.
If you work on vehicle systems, hospital technology, building management, or critical infrastructure, this paper shows what 5G↔4G switching means for latency, jitter, and uplink, and how that translates to IoT devices in everyday use.
CSL’s suite of multi-network IoT SIMs and rSIMs makes these transitions more predictable by enabling devices to switch between locally available carrier networks, allowing 5G/4G-capable devices to select the best available RAN (preferably 5G), helping to preserve headroom for priority traffic, and giving you the data to right-size buffers accordingly.
Glossary
Quick definitions for the acronyms we use throughout this paper:
- PDV = packet delay variation (jitter)
- SA = Standalone 5G – a 5G radio access network (RAN) paired with a dedicated 5G core network.
- NSA = Non‑Standalone 5G – a 5G RAN anchored to a 4G (LTE) core network.
- UL = uplink (device → network)
- DL = downlink (network → device)
Cloud, Network Security, Services & Technologies
To analyse the effects of 5G↔4G radio access network (RAN) handovers we need to go ‘beyond the basics’ of theoretical limits. This means placing your devices in the wider cellular ecosystem: how network infrastructure and core network architecture interact with the radio technologies in the RAN. Real-world performance also hinges on how network providers and service providers operate across the mobile telecommunications landscape.
In this overview we reference deployment models used across the Internet of Things, explain how massive MIMO and spectrum usage improve network efficiency on terrestrial networks, and outline the development paths of each network generation that are shaping the future of the mobile industry.
The baseline (measured, not theoretical)
What UK 5G and 4G actually deliver today – latency, speeds, and how often devices switch:
In the UK today, public sub-6 GHz 5G typically delivers low-tens-of-milliseconds latency and materially higher download speeds than 4G/LTE, with a mixed picture on uploads amid still-evolving 5G availability. For example:
- In Ofcom’s latest Mobile Matters 2025, Standalone 5G (SA) showed ~15% lower latency than Non-Standalone 5G (NSA) and ~45% faster short file downloads; 70% of SA samples exceeded 100 Mbit/s downstream.
- 5G SA’s connection success rate, however, was slightly below NSA’s, and uploads were “mixed” (fewer very low results on SA, but NSA showed a slightly larger share of high-upload outliers).
- Also, only ~28% of connection attempts were on 5G during the study period, meaning most 5G devices will still spend meaningful time on 4G LTE.
Independent crowdsourced studies align with this general picture. For example:
- UK 5G median downloads were often in the 100–230 Mbit/s band, with median uploads in the teens to low-to-mid tens of Mbit/s (as reported in recent operator-agnostic round-ups).
- Across Europe in 2025, analysts also noted a mixed trend in 5G download speed growth and spectrum use, which corroborated these findings.
What that measurement data means in plain terms for IoT devices:
- It is reasonable to expect ~20–40 ms end-to-end latency on public sub-6 GHz 5G (lower in good SA pockets).
- Downloads are commonly >100 Mbit/s, and uploads often 10–20 Mbit/s.
- Time-on-5G, however, will be well below 100%, so your fleet will regularly pivot between 5G and 4G LTE (a back-of-the-envelope capacity check follows this list).
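To turn those figures into planning numbers, the sketch below checks how many camera streams an uplink can carry on each bearer. The throughputs, per-stream bitrates and headroom factor are illustrative assumptions drawn from the ranges above, not guarantees.

```python
# Back-of-the-envelope uplink budget check. All figures are illustrative
# planning assumptions drawn from the measured ranges above, not guarantees.
ASSUMED_UL_MBPS = {"5G": 15.0, "4G": 5.0}    # typical-case uplink per bearer
CAMERA_MBPS = 2.0                            # one 1080p stream (assumed encoder setting)
TELEMETRY_MBPS = 0.5                         # always-on telemetry (assumed)
HEADROOM = 0.7                               # plan to use only 70% of the link

def cameras_supported(bearer):
    usable = ASSUMED_UL_MBPS[bearer] * HEADROOM - TELEMETRY_MBPS
    return max(0, int(usable // CAMERA_MBPS))

for bearer in ("5G", "4G"):
    print(f"{bearer}: ~{cameras_supported(bearer)} camera stream(s) alongside telemetry")
# With these assumptions: 5G carries ~5 streams, 4G carries ~1 - the drop you
# must design for when a site falls back to LTE.
```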
How IoT SIMs move between 5G and 4G - and why it matters
How a device decides to use 5G or fall back to 4G, and what that switch does to performance:
CSL’s multi-network IoT SIMs allow devices to select the best available Public Land Mobile Network (PLMN) and Radio Access Technology (RAT) in the moment (although optimal performance requires 4G/5G compatible devices and robust authentication processes to be followed).
When 5G New Radio (NR) signal and load conditions are strong, devices can remain on 5G; when they aren’t, LTE 4G becomes the active radio bearer (the current connection).
To avoid ping-ponging between cells at coverage edges, modern devices use hysteresis and dwell timers, so reselection completes smoothly in seconds rather than flapping rapidly back and forth (a toy sketch of the idea follows the list below). The practical effect of a 5G→4G change is therefore:
- Latency typically rises by a few to a few-tens of milliseconds versus the same spot on 5G.
- Uplink (UL) headroom often drops into single- to low-double-digit Mbit/s, which you’ll notice first on camera/video, bulk telemetry, and Over-the-Air (OTA) updates.
- Jitter (packet delay variation) tends to increase under LTE load, which can disturb real-time streams unless buffered or prioritised. Ofcom’s 2025 measurements confirm the structural 5G advantages on response time and short-file performance, with SA strongest.
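The sketch below is a toy model of the hysteresis-plus-dwell-timer idea, not the 3GPP reselection procedure itself: a candidate radio is adopted only after it has beaten the serving radio by a margin for a sustained period. The thresholds and signal values are illustrative.

```python
# Toy model of hysteresis + dwell-timer radio selection (illustrative thresholds,
# not the 3GPP reselection procedure).
import time

class RatSelector:
    def __init__(self, hysteresis_db=4.0, dwell_s=5.0):
        self.hysteresis_db = hysteresis_db    # candidate must be this much better...
        self.dwell_s = dwell_s                # ...for at least this long
        self.serving = "LTE"
        self._better_since = None

    def update(self, candidate, candidate_dbm, serving_dbm, now=None):
        now = time.monotonic() if now is None else now
        if candidate_dbm >= serving_dbm + self.hysteresis_db:
            if self._better_since is None:
                self._better_since = now              # start the dwell timer
            elif now - self._better_since >= self.dwell_s:
                self.serving = candidate              # switch only after the dwell period
                self._better_since = None
        else:
            self._better_since = None                 # any dip resets the timer
        return self.serving

# Usage with made-up signal reports: a brief NR dip at t=2 resets the timer,
# so the device switches only once NR has stayed better for the dwell period.
sel = RatSelector()
for t, nr_dbm, lte_dbm in [(0, -88, -95), (2, -96, -95), (4, -88, -95), (10, -87, -95)]:
    print(t, sel.update("NR", nr_dbm, lte_dbm, now=t))   # LTE, LTE, LTE, NR
```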
Jitter defined – and why engineers should care
In simple terms, jitter is delay variation; for real-time/interactive applications it is best either to keep it small or to add a buffer and accept extra delay.
Jitter is formally packet delay variation (PDV): the variability of one-way packet delay across a flow.
The Internet Engineering Task Force (IETF) and the International Telecommunication Union (ITU-T) define and quantify PDV in RFC 5481 and Y.1541, and tie its effects to application behaviour (e.g., the need for de-jitter buffers).
For voice, the ITU-T G.107 E-model and G.114 show how delay and loss (often exacerbated by jitter) degrade conversational quality.
In short: stable latency matters as much as low latency for real-time control, voice, and video.
What jitter looks like in the field: bursts of queuing or radio scheduling variance produce frame-time variation. For example:
- If your de-jitter buffer is small, you’ll see choppy audio, video stutter, or control-loop spikes;
- if it’s large, you’ll add delay (hurting interactivity).
- The right buffer is application-specific, but the governing principle (per ITU/IETF) is universal: either bound the variation – keep jitter (PDV) within a tight window so your app sees almost the same delay for every packet – or budget for it – accept the variation and add a jitter buffer in the app or device so playback stays smooth, at the cost of extra latency. A minimal buffer-sizing sketch follows this list.
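The buffer-sizing sketch below follows that second option: it computes PDV in the spirit of RFC 5481 (delay relative to the minimum observed delay) from a set of one-way delay samples and sizes a de-jitter buffer at a high percentile plus a margin. The sample data, percentile target and margin are illustrative assumptions, not a standardised procedure.

```python
# Minimal de-jitter buffer sizing sketch. PDV is measured against the minimum
# observed one-way delay (in the spirit of RFC 5481); the 99th-percentile target
# and the 5 ms margin are illustrative assumptions.

def size_jitter_buffer(one_way_delays_ms, percentile=0.99, margin_ms=5.0):
    base = min(one_way_delays_ms)                        # best-case delay on the path
    pdv = sorted(d - base for d in one_way_delays_ms)    # per-packet delay variation
    idx = min(len(pdv) - 1, int(percentile * (len(pdv) - 1)))
    buffer_ms = pdv[idx] + margin_ms                     # absorbs almost all variation
    return {
        "base_delay_ms": base,
        "p99_pdv_ms": pdv[idx],
        "buffer_ms": buffer_ms,          # this is also the latency you add for smoothness
    }

# Usage with made-up one-way delay samples (ms), e.g. gathered by a probe:
samples = [32, 34, 33, 38, 51, 35, 33, 60, 36, 34, 33, 45]
print(size_jitter_buffer(samples))
# Wider swings in delay -> larger buffer -> more added latency: the trade-off
# the ITU/IETF guidance asks you to make explicitly.
```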
Sector-by-sector: what 4G↔5G switching does to outcomes.
Some of the practical effects of 5G↔4G switching for vehicles, hospitals, buildings, and CNI.
1) Vehicle system providers (telematics, ADAS offload, V2X support)
What a 5G→4G change does to in‑vehicle uploads, video, and remote assistance.
- The reality today: In cities, 5G commonly sustains UL ~10–20 Mbit/s for 1080p streams and rich telemetry; on corridors with coverage gaps your device will drop to LTE, where UL can halve and latency/jitter rise. That shifts your effective camera count/bitrate and the stability of driver-assist teleoperation or remote monitoring. Ofcom’s UK measurements substantiate the 5G vs LTE latency gap and the geography- and load-driven variability of 5G exposure.
- Why jitter matters here: Cooperative or tele-assisted manoeuvres are latency- and variation-sensitive; automotive groups (e.g., 5GAA) document tens-of-milliseconds service-level targets for many Cellular Vehicle-to-Everything (C-V2X) use cases, with stricter bounds for coordinated actions. Fluctuating delay from RAT changes pushes you toward adaptive bitrate, buffering, or local autonomy for safety-critical decisions.
2) Hospital systems (clinical mobility, imaging carts, in-hospital communications)
How bearer changes affect consults and imaging across hospital campuses.
- The reality today: On hospital campuses, public 5G latency is routinely in the tens of ms; short file downloads benefit materially from SA where available. When the bearer flips to LTE (or when you traverse dense buildings), transfer times and UL throughput can degrade, affecting high-bitrate imaging pushes while voice/telepresence remain workable with appropriate buffering. Ofcom’s UK dataset shows SA’s material advantage on short downloads and response time – exactly the metrics you feel during session setup and transfer bursts.
- Why jitter matters here: For voice/video in remote clinical collaboration, jitter forces larger de-jitter buffers or yields stutter; the ITU E-model explicitly treats delay, jitter (via buffering) and loss as additive impairments to conversational quality (a minimal E-model sketch follows below). In imaging workflows, jitter mainly affects throughput smoothness, where sustained, steady-rate delivery matters more than peak bitrate.
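To put numbers on that "additive impairments" point, below is a minimal sketch of the widely used simplified E-model, in the spirit of ITU-T G.107 with G.711-style defaults. The coefficients are the commonly quoted approximations and the full standard has more terms, so treat it as an illustration rather than a compliance tool.

```python
# Simplified E-model sketch (in the spirit of ITU-T G.107). Coefficients are the
# commonly quoted approximations for a G.711-style codec with default parameters;
# the full standard computation has more terms.

def r_factor(one_way_delay_ms, packet_loss_pct, ie=0.0, bpl=4.3):
    d = one_way_delay_ms
    # Delay impairment Id: penalty grows sharply past ~177 ms one-way delay.
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Effective equipment impairment Ie,eff for random packet loss.
    ie_eff = ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
    return 93.2 - i_d - ie_eff          # 93.2 ~ default R0 - Is with standard settings

def mos(r):
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Usage: the same 1% loss, but mouth-to-ear delay grows because a bearer change
# forced a bigger jitter buffer (delay = network + buffer + codec, all assumed).
for delay_ms in (120, 220, 320):
    r = r_factor(delay_ms, packet_loss_pct=1.0)
    print(f"{delay_ms} ms mouth-to-ear -> R ~ {r:.0f}, MOS ~ {mos(r):.2f}")
```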
3) Building Management Systems (BMS) – HVAC, lifts, metering, access
What switching means for lifts/HVAC telemetry, access video, and firmware updates.
- The reality today: Most BMS control/telemetry tolerates latency in the tens to hundreds of ms. Falling back from 5G to LTE rarely breaks the application, but you will notice bearer changes on video intercom, mobile access, or bulk firmware updates (lower UL, more jitter). UK-wide statistics show 5G time-share is still limited, so this mixed 5G/4G LTE operation is normal.
- Why jitter matters here: For control loops with seconds-level update intervals, jitter is usually absorbed. But for event-driven access control and video, jitter drives how big your buffers must be to keep interactions responsive.
Device issues and constraints
- For vehicles, 5G-capable cellular modules with LTE fallback preserve functionality across coverage changes; you will also feel antenna quality, MIMO (Multiple-Input Multiple-Output), and uplink features (e.g., UL carrier aggregation) far more than theoretical maximums.
- For BMS controllers, LTE Cat-1/Cat-4 remains adequate unless you run video or heavy remote access; for long-life battery sensors, consider LPWA (Low-Power Wide-Area) profiles such as LTE-M (LTE for Machines) or NB-IoT (Narrowband IoT) for multi-year life (provided coverage for them exists).
These are industry-standard views, reflected in 3GPP specifications and industrial alliance guidance.
What 4G↔5G switching means for devices (and budgets)
The device traits that matter and the two biggest cost drivers:
For device budgets, two line items dominate:
- Data volume, especially uplink: 5G makes video and high-rate telemetry feasible; costs track volume, not the “5G” label itself. Many UK datasets show median UL on public 5G in the low tens of Mbit/s, which encourages richer feeds – unless you compress, filter, or schedule them (a quick volume calculation follows this list).
- Hardware capability: 5G-capable modules/routers cost more than LTE-only, and dual-modem designs add cost again. Whether that’s justified depends on your measurable gains: lower latency (e.g., faster file transfers, tighter control), higher UL stability, or better timing determinism.
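On the first line item, a quick volume calculation like the sketch below turns an uplink bitrate and duty cycle into GB per month. The bitrates and duty cycles are illustrative assumptions; your tariff structure and protocol overheads will differ.

```python
# Quick uplink data-volume estimate (illustrative bitrates and duty cycles;
# not a tariff calculation).
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_gb(bitrate_mbps, duty_cycle):
    megabits = bitrate_mbps * duty_cycle * SECONDS_PER_MONTH
    return megabits / 8 / 1000            # Mbit -> MB -> GB

workloads = {
    "1080p camera, streamed 10% of the time": (2.0, 0.10),
    "Telemetry bursts, 1% duty cycle":        (0.5, 0.01),
    "Continuous 720p stream":                 (1.0, 1.00),
}
for name, (mbps, duty) in workloads.items():
    print(f"{name}: ~{monthly_gb(mbps, duty):.0f} GB/month uplink")
# Doubling the camera bitrate "because 5G can" doubles the data bill,
# regardless of which RAT the device happens to be on.
```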
A short-form engineer’s primer on mapping jitter/latency to success criteria
How to set success criteria by linking latency/jitter to voice, video, and control quality.
- Voice/Push-to-Talk: Voice on mobile networks typically uses VoLTE (or VoNR where available), and you may see different latency/jitter when a VoLTE or VoNR device shifts between 5G and 4G during a call.
- Use G.114 as your baseline check for one-way delay and G.107 (E-model) for the combined impairment from delay, jitter (via buffering), and loss → expected conversational quality. To illustrate the practical effects:
- A bearer change means your call flips radio bearer (e.g., 5G → 4G) because coverage or quality changed. That flip often raises jitter (widens packet-to-packet timing variation, PDV). To stop choppy audio, the app needs to increase the jitter buffer (it holds audio a little longer before playing it). A bigger buffer means more delay added to the path. The result is higher mouth-to-ear (conversational) delay: people start talking over each other, pauses feel awkward, and you get “sorry, you go ahead” moments.
- For reference, telephony guidelines say <150 ms one-way is generally fine; 150–400 ms starts to feel laggy; >400 ms is unpleasant. Bigger buffers keep audio smooth, but you pay in conversational responsiveness. Translation: if bearer changes push you to bigger buffers, you pay in mouth-to-ear delay.
- Live video & remote viewing: Smoothness depends on bounded PDV (jitter). If 5G→LTE switching increases jitter, either raise the buffer (increasing glass-to-glass, camera-to-screen delay) or drop bitrate/frame rate on the fly (a decision sketch follows below). Industry measurements and operator-agnostic analyses show why: upload capacity and response time are the bottlenecks in cellular networks.
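The decision sketch below illustrates that trade-off under made-up numbers: given a glass-to-glass budget and a measured PDV percentile, it grows the de-jitter buffer while the budget allows and otherwise steps down a bitrate ladder. The budget and ladder values are assumptions; real pipelines (WebRTC, SRT and similar) make the equivalent trade-off internally.

```python
# Decision sketch for live video when jitter rises after a 5G -> 4G change:
# grow the de-jitter buffer while the glass-to-glass budget allows, otherwise
# step down the bitrate ladder. Budget and ladder values are illustrative.
BITRATE_LADDER_MBPS = [4.0, 2.5, 1.5, 0.8]

def adapt(p99_pdv_ms, fixed_path_delay_ms, glass_to_glass_budget_ms,
          current_rung=0, margin_ms=10.0):
    needed_buffer_ms = p99_pdv_ms + margin_ms
    if fixed_path_delay_ms + needed_buffer_ms <= glass_to_glass_budget_ms:
        # The budget absorbs the extra variation: just enlarge the buffer.
        return {"buffer_ms": needed_buffer_ms,
                "bitrate_mbps": BITRATE_LADDER_MBPS[current_rung]}
    # Budget exceeded: cap the buffer at what the budget allows and shed bitrate
    # (a lower bitrate usually reduces queuing, and hence PDV, on a constrained UL).
    next_rung = min(current_rung + 1, len(BITRATE_LADDER_MBPS) - 1)
    return {"buffer_ms": max(0.0, glass_to_glass_budget_ms - fixed_path_delay_ms),
            "bitrate_mbps": BITRATE_LADDER_MBPS[next_rung]}

# Usage: the same stream before and after a bearer change (made-up measurements).
print(adapt(p99_pdv_ms=20,  fixed_path_delay_ms=150, glass_to_glass_budget_ms=300))
print(adapt(p99_pdv_ms=140, fixed_path_delay_ms=170, glass_to_glass_budget_ms=300))
```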
Why CSL (and why now)
Why CSL’s multi-network IoT SIMs keep 4G↔5G transitions predictable:
- Single network SIMs tie uptime to one operator’s weakest cell. CSL multi-network roaming selects the best available radio per site, per moment, cutting outage minutes and smoothing 5G↔4G transitions. The result is steadier uplinks, fewer session resets, and higher video/telemetry uptime.
- Operator-agnostic performance reality: The UK’s measured data (Ofcom) shows SA’s technical edge, limited but rising time-on-5G, and low-tens-of-ms latency on both 5G and LTE – with 5G better. Your 5G-capable fleets will experience both RATs daily; how switching is managed determines jitter budgets, upload feasibility, and session reliability.
- Sector depth: For vehicles, reputable automotive bodies (5GAA) document service-level needs in the 10–100 ms range for many cooperative functions; for CNI, industrial and power standards (3GPP/5G-ACIA/IEC) set tight latency/jitter bounds for applications that require smooth PDV.
The bottom line
With CSL’s multi-network IoT SIMs and network policy expertise, 4G↔5G transitions become predictable events you can design around. If your workload is uplink-heavy or jitter-sensitive, we can help you understand and quantify the expected latency/PDV envelope on public 5G vs LTE for your routes and sites, so control stays steady even while video is streaming.
Note: This report is anchored in surveys and measurements from Ofcom, 3GPP/ETSI, 5G-ACIA, 5GAA, ITU-T, and independent measurement labs covering 5G and 4G LTE performance and geography- or performance-driven switching.