You need a 40 GB database dump at the off-site backup before midnight. Your link is rated 500 Mbps — napkin math says about 11 minutes. Two hours later the bar sits at 62%. The rated speed assumed zero overhead and a dedicated pipe; neither was true. Estimating file transfer time correctly means accounting for protocol headers, real efficiency, and the unit mix-up that trips even experienced admins.
Enter file size and connection speed to get estimated duration at your actual throughput — not the number on the ISP bill.
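The napkin math from the opening scenario is a single division. A minimal sketch (function name is illustrative) of the idealised, zero-overhead estimate:

```python
def naive_transfer_seconds(size_gb: float, rate_mbps: float) -> float:
    """Idealised duration: decimal gigabytes over rated Mbps, zero overhead."""
    bits = size_gb * 1e9 * 8            # decimal GB to bits
    return bits / (rate_mbps * 1e6)     # line rate in bits per second

# The 40 GB dump at a rated 500 Mbps:
print(round(naive_transfer_seconds(40, 500) / 60, 1))  # ~10.7 minutes
```

This is the number that turned out to be two hours off — the rest of the article is about why.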
Why Rated Bandwidth Never Matches Real Transfer Speed
ISPs quote line rate — the physical ceiling before anything useful happens on the wire. Every byte rides inside TCP segments wrapped in IP packets wrapped in Ethernet frames, each layer adding headers. Roughly 5–8% of bits on the link are overhead, not payload. Add TLS handshakes, TCP slow-start, and ACK waits and the gap widens. The Cloudflare Learning Center calls the shared-pipe factor the contention ratio — most residential plans never disclose theirs.
Protocol Overhead: TCP, TLS, and the Bytes You Never See
A 1,500-byte Ethernet frame carries at most 1,460 bytes of TCP payload after headers. TLS 1.3 adds framing plus crypto expansion per record. On small files the handshake alone — one to two round trips — takes longer than the data. For large files handshake cost fades but per-packet overhead stays constant.
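The per-frame arithmetic is easy to verify. A sketch assuming a standard 1,500-byte MTU, IPv4 and TCP without options, and full Ethernet wire overhead (header, FCS, preamble, inter-frame gap):

```python
MTU = 1500                  # Ethernet payload (bytes)
IP_HDR, TCP_HDR = 20, 20    # IPv4 and TCP headers, no options
ETH_OVERHEAD = 38           # 14 header + 4 FCS + 8 preamble + 12 inter-frame gap

payload = MTU - IP_HDR - TCP_HDR    # TCP payload bytes per frame
wire = MTU + ETH_OVERHEAD           # bytes actually consumed on the wire
print(payload, round(payload / wire, 3))  # 1460, ~0.949
```

Roughly 5% of the wire is headers before TLS framing or retransmissions add anything on top — consistent with the 5–8% figure above.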
High-latency links hurt more. A satellite hop with a 600 ms round trip needs TCP window scaling; without it, throughput caps far below the link rate regardless of bandwidth.
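The cap follows from TCP keeping at most one receive window in flight per round trip. A rough sketch (function name is illustrative) using the classic 64 KiB window without scaling:

```python
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """TCP can have at most one receive window in flight per round trip."""
    return window_bytes * 8 / rtt_s

# Classic 64 KiB window on a 600 ms satellite hop:
print(round(max_throughput_bps(65536, 0.6) / 1e6, 2))  # ~0.87 Mbps
```

That ceiling holds whether the link is rated 10 Mbps or 10 Gbps, which is why window scaling matters on long paths.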
MiB vs MB and Mbps vs MBps — The Unit Trap That Doubles Estimates
ISPs sell in megabits per second (Mbps). Download managers show megabytes per second (MB/s). Confuse the two and your estimate is off by 8×. A 100 Mbps link tops out at 12.5 MB/s.
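The conversion is a single factor of eight, since both units use the same decimal mega prefix:

```python
def mbps_to_mb_per_s(mbps: float) -> float:
    """Megabits per second to megabytes per second: 8 bits per byte."""
    return mbps / 8

print(mbps_to_mb_per_s(100))  # 12.5 MB/s on a 100 Mbps link
```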
Storage adds a binary wrinkle: manufacturers use decimal gigabytes (1 GB = 10⁹ bytes) while OSes show gibibytes (1 GiB = 2³⁰ bytes). That 7.4% gap compounds on multi-terabyte moves.
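The gap and its effect at terabyte scale, computed directly:

```python
GB = 10**9    # decimal gigabyte, as printed on the drive label
GIB = 2**30   # binary gibibyte, as shown by most operating systems

print(round((GIB / GB - 1) * 100, 1))  # ~7.4 — a GiB is 7.4% larger than a GB
print(round(10**12 / 2**40, 2))        # ~0.91 — a "1 TB" drive is ~0.91 TiB
```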
Fast Readout: Interpreting Your Transfer Time Result
The output shows duration at effective throughput, not the advertised line rate. If the number looks wildly optimistic, check whether you mixed bits and bytes. If impossibly slow, confirm you did not enter upload speed for a download on an asymmetric link. Apply a 70–90% efficiency factor before trusting the estimate for scheduling.
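Putting the pieces together, a minimal sketch of the adjusted estimate (function name and the 0.8 default are illustrative assumptions; the article recommends 0.70–0.90):

```python
def estimate_transfer_seconds(size_gb: float, rate_mbps: float,
                              efficiency: float = 0.8) -> float:
    """Duration at effective throughput: rated Mbps scaled by an
    efficiency factor in the 0.70-0.90 range."""
    bits = size_gb * 1e9 * 8
    return bits / (rate_mbps * 1e6 * efficiency)

# The 40 GB dump at 500 Mbps, assuming 80% real-world efficiency:
print(round(estimate_transfer_seconds(40, 500) / 60, 1))  # ~13.3 minutes
```

Still far from two hours — which is the point: if the real transfer takes that much longer, the bottleneck is something on the checklist below, not the line rate.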
Edge-Case Checks for Large and Long-Distance Transfers
- TCP window scaling. Satellite and cross-continent paths need window scaling enabled; without it, throughput caps at a fraction of the available bandwidth.
- Disk I/O as the real bottleneck. A gigabit link feeding a spinning HDD maxes around 120 MB/s. NVMe removes that ceiling, but heavy writes can throttle even fast storage mid-transfer.
- Shared bandwidth. Residential ISP links share capacity with neighbours. Peak-hour throughput can drop 30–50% below the off-peak test at 2 AM.
- MTU mismatch. Enabling 9,000-byte jumbo frames on one side of a path clamped to 1,500 causes fragmentation or silent drops.
Mistakes that wreck transfer estimates: using download speed for an upload on an asymmetric link, ignoring TCP slow-start on short-lived connections, and quoting MB/s to a client who budgeted in Mbps.
Related tools: API Rate Limit Planner for sizing request throughput on the same pipe, CIDR Subnet Calculator for the network segment carrying the transfer, SLA Uptime Calculator for availability targets that affect transfer windows, and Password Entropy Estimator for credentials on the endpoints involved.
Transfer time estimates assume sustained throughput at the rate you provide — actual transfers vary with congestion, protocol behaviour, and hardware limits. Not a replacement for speed tests or professional network planning.