I would guess that the speed of the old disks is more of a bottleneck in my case, but it's time for some science!
My former-Hackintosh-turned-NAS has gigabit (1 Gbps) ethernet on its motherboard. Testing just raw throughput with
iPerf3 (not touching any disks) over that connection:
Code:
❯ iperf3 -c 192.168.2.4 -p 9201
Connecting to host 192.168.2.4, port 9201
[ 5] local 192.168.2.1 port 53788 connected to 192.168.2.4 port 9201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 112 MBytes 940 Mbits/sec
[ 5] 1.00-2.00 sec 112 MBytes 935 Mbits/sec
[ 5] 2.00-3.00 sec 112 MBytes 938 Mbits/sec
[ 5] 3.00-4.00 sec 112 MBytes 940 Mbits/sec
[ 5] 4.00-5.00 sec 110 MBytes 926 Mbits/sec
[ 5] 5.00-6.00 sec 112 MBytes 943 Mbits/sec
[ 5] 6.00-7.00 sec 111 MBytes 933 Mbits/sec
[ 5] 7.00-8.00 sec 112 MBytes 936 Mbits/sec
[ 5] 8.00-9.00 sec 112 MBytes 938 Mbits/sec
[ 5] 9.00-10.00 sec 112 MBytes 942 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 1.09 GBytes 937 Mbits/sec sender
[ 5] 0.00-10.00 sec 1.09 GBytes 937 Mbits/sec receiver
iperf Done.
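(For reference, the other end of this test is just iperf3 running in server mode on the NAS, listening on the same non-default port:)
Code:
❯ iperf3 -s -p 9201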
That tracks the theoretical maximum throughput pretty closely.
Sending a large (~470 MB) file over rsync from my local system (which has a very fast NVMe SSD) to the NAS:
Code:
❯ rsync -avPh ~/Downloads/OpenStep-Install-4.2-Dev.iso bsmith@192.168.2.4:~/
building file list ...
1 file to consider
OpenStep-Install-4.2-Dev.iso
470.54M 100% 85.21MB/s 0:00:05 (xfer#1, to-check=0/1)
sent 470.59M bytes received 42 bytes 85.56M bytes/sec
For comparison, copying the same file locally, just to verify that my local SSD isn't the slow link:
Code:
❯ rsync -avPh ~/Downloads/OpenStep-Install-4.2-Dev.iso /tmp/
building file list ...
1 file to consider
OpenStep-Install-4.2-Dev.iso
470.54M 100% 225.59MB/s 0:00:01 (xfer#1, to-check=0/1)
sent 470.59M bytes received 42 bytes 188.24M bytes/sec
total size is 470.54M speedup is 1.00
(Yes, my local SSD is plenty fast by comparison.)
And finally, downloading the same file from the NAS to my local system:
Code:
❯ rm /tmp/OpenStep-Install-4.2-Dev.iso
❯ rsync -avPh bsmith@192.168.2.4:~/OpenStep-Install-4.2-Dev.iso /tmp/
receiving file list ...
1 file to consider
OpenStep-Install-4.2-Dev.iso
470.54M 100% 111.49MB/s 0:00:04 (xfer#1, to-check=0/1)
sent 38 bytes received 470.65M bytes 104.59M bytes/sec
total size is 470.54M speedup is 1.00
Huh! Maybe my speculation about the old drives being a bottleneck was wrong, because that download nearly saturates the 1 Gbps ethernet connection: 111.49 MB/s × 8 bits/byte = 891.92 Mbps, which is pretty dang close to the theoretical 1 Gbps maximum. The remaining difference might just be the overhead of rsync and ssh.
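If I wanted to chase down that last few percent, one thing worth trying would be forcing a cheaper ssh cipher for the transfer. This is just a sketch; whether it helps at all depends on the CPUs on both ends, and aes128-gcm@openssh.com is merely one commonly fast choice:
Code:
❯ rsync -avPh -e "ssh -c aes128-gcm@openssh.com" bsmith@192.168.2.4:~/OpenStep-Install-4.2-Dev.iso /tmp/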
Since my two 1 TB drives are pooled as a mirror of each other, reading from the NAS should generally be faster than writing to it: reads can be served by either disk, while every write has to land on both. Even so, I was surprised to see such an immediately obvious jump from uploading (85.21 MB/s, or about 682 Mbps) to downloading (111.49 MB/s). I repeated the commands above a few times, deleting the destination files between runs, just to make sure it wasn't a fluke, and they all came out to roughly the same numbers.
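If I wanted to confirm that the disks themselves explain the gap, I could benchmark the pool directly on the NAS and take the network out of the picture entirely. A rough sketch with dd, assuming the mirror is mounted at /mnt/pool (a hypothetical path; depending on the filesystem, direct I/O may not be supported, in which case testing with a file much larger than RAM is the fallback):
Code:
❯ dd if=/dev/zero of=/mnt/pool/ddtest bs=1M count=1024 oflag=direct  # raw write speed
❯ dd if=/mnt/pool/ddtest of=/dev/null bs=1M iflag=direct             # raw read speed
❯ rm /mnt/pool/ddtest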
Regardless, the rest of the computers on my network are all on wifi, and that is a whole other bottleneck. Performing the earlier throughput test from a wireless device upstairs to the NAS wired into the router downstairs:
Code:
❯ iperf3 -c 192.168.1.120 -p 9201
Connecting to host 192.168.1.120, port 9201
[ 5] local 192.168.1.247 port 58521 connected to 192.168.1.120 port 9201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 23.5 MBytes 197 Mbits/sec
[ 5] 1.00-2.00 sec 23.1 MBytes 194 Mbits/sec
[ 5] 2.00-3.00 sec 22.2 MBytes 186 Mbits/sec
[ 5] 3.00-4.00 sec 22.5 MBytes 188 Mbits/sec
[ 5] 4.00-5.00 sec 22.1 MBytes 185 Mbits/sec
[ 5] 5.00-6.00 sec 21.5 MBytes 180 Mbits/sec
[ 5] 6.00-7.00 sec 21.9 MBytes 183 Mbits/sec
[ 5] 7.00-8.00 sec 22.4 MBytes 188 Mbits/sec
[ 5] 8.00-9.00 sec 22.2 MBytes 187 Mbits/sec
[ 5] 9.00-10.00 sec 22.4 MBytes 188 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 224 MBytes 188 Mbits/sec sender
[ 5] 0.00-10.01 sec 224 MBytes 187 Mbits/sec receiver
iperf Done.
Oof. 187 Mbits/sec is still pretty good for general web surfing and streaming, and it'll be fine for occasional file transfers and routine backups, but that's a far cry from the potential 937 Mbits/sec. This is why I still want to run ethernet cables around my house someday.
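To make that concrete, a quick back-of-the-envelope for that same ~470 MB ISO at each speed (ignoring protocol overhead, so real transfers would be a bit slower):
Code:
❯ python3 -c 'print(470.54 / (187 / 8))'  # over wifi: ~20 seconds
❯ python3 -c 'print(470.54 / (937 / 8))'  # over wired gigabit: ~4 seconds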