Yeah, I didn't dig in on the commands fully either but used them as a "general test" based on
this blog post I found.
I should try with random too just for comparison...
Code:
[root@knowsmore ~]# sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.297927 s, 3.6 GB/s
[root@knowsmore ~]# sync; dd if=/dev/random of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.4137 s, 445 MB/s
[root@knowsmore ~]# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.114965 s, 9.3 GB/s
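One caveat worth flagging (my assumption, worth verifying on your own box): without direct I/O or a sync folded into the timing, dd numbers like these mostly measure the page cache rather than the drive itself. Something along these lines should get closer to raw device speed:

```shell
# Write test that bypasses the page cache (oflag=direct), so the
# number reflects the drive rather than RAM:
dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct

# Or keep the cache but make dd include the final flush in its timing:
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync

# Read test against the device instead of the cache:
dd if=tempfile of=/dev/null bs=1M iflag=direct

rm -f tempfile
```

Note that oflag=direct needs a filesystem that supports O_DIRECT (ext4/xfs are fine), and an external sync before/after the command isn't counted in dd's own throughput figure, which is why conv=fdatasync tends to give a more honest write number.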
There is a drastic difference between zero and random on this DB server (on an NVMe drive). Makes me think it has more to do with the random number generator introducing overhead of some sort. I could be wrong, but that is WAY slower than I would expect. The DB server is a basic CentOS 9 minimal install with only a few applications actually running on it. Granted, DB activity is relatively high, and it is already in production, so there will be some actual use during the test... but close to a tenth of the original speed? Something is fishy there.
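If the generator really is the bottleneck (my guess, not verified), taking the disk out of the path should show it. On recent kernels /dev/random and /dev/urandom draw from the same CPU-bound CSPRNG, and a few hundred MB/s is a typical ceiling for it. So something like:

```shell
# Pure RNG throughput -- no disk involved. If this lands near the
# 445 MB/s seen in the write test, the generator (CPU), not the
# NVMe, is the limiting factor.
dd if=/dev/random of=/dev/null bs=1M count=1024

# To benchmark writes with incompressible data without paying the
# RNG cost inside the timed run, generate a file once and copy it:
dd if=/dev/random of=randfile bs=1M count=1024
dd if=randfile of=tempfile bs=1M oflag=direct
rm -f randfile tempfile
```

would separate "the drive is slow with this data" from "the data source is slow".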