Your dedicated server has a 10 Gbps uplink — but what are you actually getting? Speed test websites lie. Hosting dashboards show port capacity, not real throughput. iPerf3 is the industry-standard tool that tells you the truth — and in this guide, you'll learn how to use it properly, interpret the results, and benchmark your server like a network engineer.
iPerf3 is a free, open-source command-line tool designed to measure maximum achievable network bandwidth between two endpoints. Unlike browser-based speed tests that route traffic through a single CDN node, iPerf3 gives you direct, protocol-level control over how traffic flows — making it the go-to benchmarking tool for network engineers, data center operators, and dedicated server administrators worldwide.
When you provision a dedicated server — whether it's a bare-metal machine in a tier-3 data center or a colocated box — your provider advertises a port speed, not a guaranteed throughput. A "1 Gbps uplink" means the physical interface runs at 1 Gbps. What you actually get depends on routing paths, NIC configuration, kernel network stack tuning, and shared infrastructure overhead. iPerf3 is how you find out the real number.
Key Insight: iPerf3 is the successor to iPerf2 and was rewritten from scratch. The two are not interoperable — both your server and client must run iPerf3 for a test to work. Never mix versions.
Why Dedicated Server Users Specifically Need iPerf3
Cloud VMs and shared hosting environments have virtualized networking that adds unpredictable latency layers. On a dedicated server or colocated machine, you have direct hardware access — which means your test results are reliable, repeatable, and actually meaningful. iPerf3 lets you:
Verify that your provider is delivering the bandwidth you're paying for
Detect network bottlenecks before they affect production workloads
Validate cross-datacenter connectivity for multi-region setups
Benchmark NIC configuration changes (jumbo frames, interrupt coalescing)
Measure real throughput between your server and your CDN origin or backup node
Test latency and jitter for real-time applications like game servers or VoIP
iPerf3 is available in the default package repositories of all major Linux distributions. Installation takes under 30 seconds.
Ubuntu / Debian
sudo apt update
sudo apt install iperf3 -y
iperf3 --version # confirm installation
CentOS / RHEL 8+ / Rocky Linux / AlmaLinux
sudo dnf install iperf3 -y
iperf3 --version
Arch Linux / Manjaro
sudo pacman -S iperf3
Windows Server (via Chocolatey or direct binary)
# Option 1: Chocolatey
choco install iperf3
# Option 2: Download the win64 binary from iperf.fr
# Then run from PowerShell:
.\iperf3.exe --version
Firewall Note: iPerf3 uses TCP/UDP port 5201 by default. You must open this port in both your OS firewall (iptables / firewalld / ufw / Windows Firewall) and any upstream hardware firewall or ACL at your data center. Forgetting this is the #1 reason iPerf3 tests fail.
Opening Port 5201 on Linux
# UFW (Ubuntu/Debian)
sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp
# firewalld (CentOS/Rocky)
sudo firewall-cmd --add-port=5201/tcp --permanent
sudo firewall-cmd --add-port=5201/udp --permanent
sudo firewall-cmd --reload
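If you manage rules with raw iptables instead (mentioned in the note above), the equivalent is a sketch like the following — keep in mind these rules do not persist across reboots without iptables-persistent or a similar mechanism:
# iptables — allow iPerf3's default port on the INPUT chain
sudo iptables -A INPUT -p tcp --dport 5201 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 5201 -j ACCEPT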
iPerf3 operates in a client–server model. One machine listens (server mode) and the other initiates the test (client mode). You need two machines — or a second machine and a public iPerf3 server — to run a meaningful test.
Step 1: Start iPerf3 in Server Mode
On your dedicated server (or any machine that will receive traffic):
iperf3 -s
# Output:
# -----------------------------------------------------------
# Server listening on 5201 (test #1)
# -----------------------------------------------------------
Add -D to run iPerf3 as a background daemon, or wrap it in a systemd service for persistent listening on production hosts.
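A minimal unit file sketch for that systemd approach — the path, unit name, and binary location are illustrative, so adjust for your distro:
# /etc/systemd/system/iperf3-server.service (illustrative path)
[Unit]
Description=iPerf3 listener
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/iperf3 -s
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then enable it with sudo systemctl daemon-reload && sudo systemctl enable --now iperf3-server.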
Step 2: Run the Test from the Client
From another machine (your laptop, another server, or a VPS):
iperf3 -c YOUR_SERVER_IP
# Output:
# Connecting to host YOUR_SERVER_IP, port 5201
# [ ID] Interval Transfer Bitrate
# [ 5] 0.00-1.00 sec 113 MBytes 948 Mbits/sec
# [ 5] 1.00-2.00 sec 114 MBytes 954 Mbits/sec
# ...
# [ 5] 0.00-10.01 sec 1.10 GBytes 944 Mbits/sec sender
# [ 5] 0.00-10.01 sec 1.09 GBytes 941 Mbits/sec receiver
The sender line shows what was transmitted; the receiver line shows what arrived. The gap between them reflects packet loss and retransmissions. On a healthy 1G link inside the same data center, these numbers should be nearly identical.
Choosing between TCP and UDP tests isn't just a flag difference — it's about what workload you're actually trying to simulate and what failure modes matter to your infrastructure.
| Protocol | What It Measures | Best For | Key Flag |
|---|---|---|---|
| TCP | Sustained throughput, window scaling, retransmit behavior | File transfers, backups, web servers, databases | (default) |
| UDP | Packet loss, jitter, one-way delay | Game servers, VoIP, video streaming, real-time apps | -u |
TCP Bandwidth Test (Default)
iperf3 -c YOUR_SERVER_IP -t 30
# -t 30 → run for 30 seconds instead of the default 10
# Longer tests reveal sustained throughput vs burst behavior
UDP Packet Loss and Jitter Test
iperf3 -c YOUR_SERVER_IP -u -b 500M -t 30
# Output:
# [ ID] Interval Transfer Bitrate Jitter Lost/Total
# [ 5] 0.00-30.00 sec 1.74 GBytes 499 Mbits/sec 0.124 ms 12/1260020 (0.00095%)
Interpreting UDP Results: For a dedicated server running game server or VoIP workloads, target jitter below 1 ms and packet loss below 0.1%. Anything above these thresholds and you need to investigate routing paths or NIC interrupt settings.
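To approximate a real-time stream rather than a flood, you can pin both the bitrate and the datagram size with -l. The values here are illustrative, roughly mimicking a handful of concurrent VoIP calls:
# ~5 Mbps of 200-byte datagrams for 30 seconds (-l sets the UDP payload size)
iperf3 -c YOUR_SERVER_IP -u -b 5M -l 200 -t 30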
On high-capacity links (10 Gbps, 20 Gbps, or 100 Gbps), a single TCP stream will not saturate the full link. TCP's congestion control algorithm limits how fast a single connection can grow. This is why a single iPerf3 run on a 10G server might show only 3–5 Gbps — the link isn't the bottleneck, the single stream is.
Use the -P flag to run multiple parallel streams:
iperf3 -c YOUR_SERVER_IP -P 8 -t 30
# Output:
# [SUM] 0.00-30.00 sec 34.3 GBytes 9.81 Gbits/sec sender
# [SUM] 0.00-30.00 sec 34.1 GBytes 9.76 Gbits/sec receiver
| Link Speed | Recommended -P Value | Expected Result (healthy link) |
|---|---|---|
| 1 Gbps | 1–2 | 900–970 Mbps |
| 10 Gbps | 4–8 | 9.2–9.8 Gbps |
| 100 Gbps | 16–32 | Requires DPDK / kernel bypass for accurate measurement |
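If you're unsure which -P value your link needs, a quick sketch like this steps the stream count upward until the [SUM] line stops growing:
# Step parallel streams upward; the summary lines show where throughput plateaus
for streams in 1 2 4 8 16; do
  echo "=== -P ${streams} ==="
  iperf3 -c YOUR_SERVER_IP -P "$streams" -t 15 | tail -n 4
done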
-R — Reverse Mode: By default, the client sends and the server receives. -R flips this — the server sends, the client receives. Critical for testing both directions of an asymmetric link.
-w 4M — TCP Window Size: Sets the socket buffer size. For high-latency links (cross-continent), increase the window to allow more data in flight. The default is OS-determined; try 4M–16M for WAN tests.
--get-server-output — Server-Side Stats: Pulls the server's perspective of the test into your client terminal. Useful for comparing sender vs receiver stats in a single output.
-J — JSON Output: Outputs all results as machine-readable JSON. Feed this into Grafana, Prometheus, or your own monitoring pipeline for automated bandwidth tracking over time.
--bidir — Bidirectional Test (iPerf3 3.7+): Runs simultaneous upload and download tests, simulating real-world full-duplex traffic on links that handle concurrent inbound and outbound flows.
-i — Interval Reporting: Controls how often intermediate results print; one second is already the default, and coarser or finer intervals (e.g., -i 0.5) reveal short-term burst behavior and traffic spikes that the final average would hide.
-p 5202 — Custom Port: Useful when running multiple simultaneous iPerf3 server instances or when port 5201 is blocked upstream.
--tos 0x10 — DSCP / QoS Marking: Sets the IP Type of Service field to test how your network treats QoS-marked traffic — important for providers with traffic shaping policies.
A Comprehensive Real-World Test Command
# 8 parallel streams, 60-second duration, report every 5 seconds,
# 4 MB TCP window, pull server-side stats, save JSON for logging
iperf3 -c YOUR_SERVER_IP \
  -P 8 -t 60 -i 5 -w 4M \
  --get-server-output \
  -J > result.json
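To pull the headline numbers back out of result.json, a jq one-liner works. The field names below follow iPerf3's JSON schema for TCP tests, but verify them against your version's output:
# End-to-end receiver bitrate, in bits per second
jq '.end.sum_received.bits_per_second' result.json
# Total TCP retransmits across the run
jq '.end.sum_sent.retransmits' result.json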
Raw numbers mean nothing without context. Here's how to interpret what iPerf3 is actually telling you about your dedicated server's network health.
Healthy Result — 1 Gbps Dedicated Server

| Metric | Value |
|---|---|
| TCP Throughput | 944 Mbps ✅ |
| UDP Jitter | 0.082 ms ✅ |
| Packet Loss | 0.001% ✅ |

Degraded Result — Investigate Further

| Metric | Value |
|---|---|
| TCP Throughput | 412 Mbps ⚠️ |
| UDP Jitter | 4.7 ms ⚠️ |
| Packet Loss | 1.8% ⚠️ |
| Metric | Healthy | Investigate | Critical |
|---|---|---|---|
| TCP Throughput vs Link Speed | > 90% | 70–90% | < 70% |
| UDP Jitter (same DC) | < 0.5 ms | 0.5–2 ms | > 2 ms |
| UDP Jitter (cross-continent) | < 5 ms | 5–15 ms | > 15 ms |
| Packet Loss | < 0.1% | 0.1–1% | > 1% |
| Sender vs Receiver gap | < 1% | 1–3% | > 3% |
Always Test in Both Directions. Run once normally, then with -R (reverse). Asymmetric results — where upload and download speeds differ dramatically — usually point to uplink congestion, half-duplex negotiation issues, or QoS policies at the switch level.
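In practice that means running the same test twice:
iperf3 -c YOUR_SERVER_IP -P 4 -t 30       # client sends, server receives
iperf3 -c YOUR_SERVER_IP -P 4 -t 30 -R    # server sends, client receives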
"iPerf3: error — unable to connect to server"
This almost always means a firewall is blocking port 5201. Check your OS firewall first (ufw status / firewall-cmd --list-all), then check if your data center has a hardware firewall filtering the port. Verify the server is listening with ss -tlnp | grep 5201.
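A quick diagnostic sequence on the server side:
sudo ufw status | grep 5201    # or: sudo firewall-cmd --list-all
ss -tlnp | grep 5201           # confirm iperf3 is actually listening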
Results are much lower than expected on a 10G link
Single-stream TCP limitation. Add -P 8 to push more data simultaneously. Also check that your NIC supports RSS (Receive Side Scaling) and that multiple CPU cores are being utilized — run htop during the test to see if you're hitting a single-core bottleneck.
UDP packet loss even on a local network
You're likely sending faster than the server can receive. Start at a lower target bitrate (e.g., -b 100M) and work upward. UDP doesn't have flow control — iPerf3 will send at whatever rate you specify, even if the receiver is dropping packets.
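A sketch of that ramp-up, stepping the target bitrate until the Lost/Total column starts climbing:
# Step the UDP target bitrate upward to find where loss begins
for rate in 100M 250M 500M 750M 1G; do
  echo "=== ${rate} ==="
  iperf3 -c YOUR_SERVER_IP -u -b "$rate" -t 10 | tail -n 4
done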
Results vary wildly between runs
This suggests network congestion on a shared segment, CPU interference from other processes, or interrupt coalescing issues on your NIC. Run tests at multiple times of day and use -i 1 to identify when drops occur. On Linux, check ethtool -S eth0 for NIC-level drop counters.
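A quick way to scan those counters for anything suspicious (the interface name eth0 is illustrative):
# Filter NIC statistics down to drop, discard, and error counters
ethtool -S eth0 | grep -iE 'drop|discard|err'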
"The server is busy running a test"
By default, iPerf3 server mode accepts only one connection at a time. Run multiple server instances on different ports with iperf3 -s -p 5202, -p 5203, etc., or restart the server between tests.
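A sketch that launches three daemonized listeners so concurrent clients don't collide:
# One backgrounded iPerf3 server per port
for port in 5201 5202 5203; do
  iperf3 -s -D -p "$port"
done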
Don't have a second machine? You can use publicly available iPerf3 servers to test your dedicated server's outbound connectivity to specific regions. These are maintained by ISPs, universities, and data center operators globally.
Important Caveat: Public iPerf3 servers are shared resources with limited bandwidth. They will not saturate a 10G link, and results reflect the path to that specific server — not your server's maximum capacity. Use them for routing verification and latency checks, not raw throughput benchmarks.
# Test to a US-based public server
iperf3 -c iperf.he.net -t 10
# Test to a European server
iperf3 -c bouygues.iperf.fr -t 10
# Test to a US West Coast server
iperf3 -c la.speedtest.clouvider.net -t 10
A curated, up-to-date list of public iPerf3 servers is maintained at iperf.fr/iperf-servers.php. Availability changes frequently — always verify connectivity before relying on a specific host.
Is iPerf3 the same as iPerf2? Can they work together?
No. iPerf3 was a full rewrite and is not backward-compatible with iPerf2. Both endpoints must run the same major version. In most cases, stick with iPerf3 — it's actively maintained, supports JSON output, and has bidirectional testing built in.
Why does my iPerf3 result differ from what my hosting provider advertises?
Providers advertise port capacity, not guaranteed throughput. Real-world bandwidth is shaped by CPU overhead, NIC driver efficiency, TCP stack tuning, network congestion on peering links, and the route between you and the test endpoint. iPerf3 gives you the real achievable rate — which is the number that actually matters.
Can I run iPerf3 tests continuously for ongoing monitoring?
Yes — and it's a good practice. Use -J to output JSON, then pipe results into a time-series database like InfluxDB or push metrics to a Grafana dashboard. Many teams run hourly iPerf3 checks between data center nodes as part of their network observability stack.
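A minimal sketch of such a check, suitable for an hourly cron entry — the peer hostname and log directory are placeholders:
#!/bin/sh
# Hypothetical hourly bandwidth probe; adjust host, duration, and paths
TS=$(date +%Y%m%dT%H%M%S)
iperf3 -c peer.example.com -t 15 -J > "/var/log/iperf3/${TS}.json"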
Does iPerf3 affect my production traffic?
Yes. iPerf3 deliberately saturates your link to measure maximum capacity. Always run bandwidth tests during low-traffic windows on production servers. On heavily loaded servers, schedule tests during maintenance windows or use a dedicated test interface if your server has multiple NICs.
What TCP buffer size should I use for cross-continental tests?
The optimal buffer size depends on your bandwidth-delay product (BDP): BDP = Bandwidth × Round-Trip Time. For a 1 Gbps link with 150 ms RTT: 1,000 Mbps × 0.150 s = 150 Mb ≈ 18.75 MB (dividing by 8 to convert bits to bytes). Use -w 16M as a starting point for transatlantic or transpacific links.
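The same arithmetic as a shell sketch, using the values from the example above:
# Mbps × ms gives kilobits in flight; /8 → kilobytes; /1000 → megabytes
BW_MBPS=1000; RTT_MS=150
echo "$((BW_MBPS * RTT_MS / 8 / 1000)) MB"   # prints: 18 MB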
COLO BIRD provides bare-metal dedicated servers with transparent 1G, 10G, and 20G uplinks. If your iPerf3 results don't match what you're paying for, that's a conversation worth having.