Before testing, record a checksum so you can verify integrity after each transfer:

```bash
# On Linux (SHA-256 is often faster than MD5 on modern CPUs)
time sha256sum 50GB_test.file
```

```powershell
# On Windows (PowerShell)
Get-FileHash D:\50GB_test.file -Algorithm SHA256
```
```bash
scp 50GB_test.file user@server:/destination/
```

Look for the "sawtooth" pattern: if the transfer speed drops after roughly 10GB, your router's buffer is filling up (bufferbloat).

## Scenario 2: Cloud Upload Speed (AWS S3 / Google Drive)

Cloud providers advertise "unlimited" speed, but they often throttle long-lived connections. Upload your 50GB file to an S3 bucket using the AWS CLI.
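A minimal sketch of that timed upload. The bucket name `my-test-bucket` is a placeholder, and the guard simply skips the upload gracefully when the CLI or the test file is missing; the throughput arithmetic at the end is independent of AWS:

```shell
# Placeholder bucket name; requires configured AWS credentials.
# The CLI automatically switches to parallel multipart uploads for
# files above its multipart threshold (8 MB by default).
if command -v aws >/dev/null 2>&1 && [ -f 50GB_test.file ]; then
    time aws s3 cp 50GB_test.file s3://my-test-bucket/
fi

# Back-of-envelope check: moving 50 GB in 10 minutes needs
# 50,000 MB / 600 s ≈ 83 MB/s (~667 Mbit/s) of sustained throughput.
awk 'BEGIN { printf "%d\n", 50000 / 600 }'   # prints 83
```

The `time` wrapper gives you the elapsed seconds to plug into that same throughput calculation for your own run.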
Why 50GB? It is the "Goldilocks" size for synthetic test data: too large for RAM caching (making it a true disk/network test), small enough to generate quickly on modern SSDs, and large enough to expose thermal throttling in NVMe drives or bufferbloat in routers.
To generate the file:

```bash
# Creates a 50GB file filled with zeros (fastest option)
dd if=/dev/zero of=~/50GB_test.file bs=1M count=51200

# Or fill it with random data, so compression or deduplication
# along the path can't inflate your results
dd if=/dev/urandom of=~/50GB_random.file bs=1M count=51200 status=progress
```
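The sizing arithmetic (`bs` × `count`) can be sanity-checked at small scale before committing to a 50GB write; the 16 MiB size and the `/tmp` path here are arbitrary choices for illustration:

```shell
# bs=1M count=16 -> 16 MiB = 16 * 1024 * 1024 = 16,777,216 bytes
dd if=/dev/zero of=/tmp/tiny_test.file bs=1M count=16 2>/dev/null

# Confirm the size on disk matches the arithmetic (GNU stat)
stat -c %s /tmp/tiny_test.file   # prints 16777216
```

The same math gives the full-size file: `bs=1M count=51200` is 51,200 MiB, i.e. 50 GiB.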