Brad
Selfish Heathen
 
Join Date: May 2004
Location: Zone of Pain
 
2022-08-23, 00:41

To follow your example, here's what shoving zeroes into a file on my two mirrored 14TB drives in an unencrypted dataset looks like:

Code:
# sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.687322 s, 1.6 GB/s
versus the encrypted dataset
Code:
# sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.687974 s, 1.6 GB/s
Hmm. Methinks ZFS is being clever with zeros. I also have LZ4 compression enabled, so it's probably crunching them all down to nearly nothing before they ever hit the disk.
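A quick way to sanity-check that hypothesis without touching the pool at all: compress a stream of zeros and a stream of random bytes and compare the sizes. (gzip here is just a stand-in for LZ4 to illustrate the ratio, not what ZFS actually runs.)

```shell
# Zeros compress to almost nothing, so a compressed dataset barely
# touches the disk; random bytes don't compress at all.
dd if=/dev/zero bs=1M count=16 2>/dev/null | gzip -c | wc -c
dd if=/dev/urandom bs=1M count=16 2>/dev/null | gzip -c | wc -c
```

If the first number comes out at a few kilobytes and the second at roughly the full 16 MiB, then the 1.6 GB/s "write" speeds above were mostly dd feeding the compressor, not the drives.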

Let's try from random to the unencrypted dataset…

Code:
# sync; dd if=/dev/random of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.14828 s, 259 MB/s
Ahh, that looks more realistic. And to the encrypted dataset…

Code:
# sync; dd if=/dev/random of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.62426 s, 232 MB/s
Well, that's a bit slower, but not nearly as slow as I'd expect based on the real-world patterns I saw over the weekend. The `sync` following the `dd` does take noticeably longer on the encrypted dataset, though. So there's probably some lower-level magic at play that I don't fully grok.
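One way to take the trailing `sync` out of the picture is to let dd do the flushing itself. GNU dd's `conv=fdatasync` (assuming your dd is the GNU one) flushes the output file before dd reports its timing, so the flush cost lands in the MB/s figure instead of in a separate command you can't easily time:

```shell
# conv=fdatasync makes dd call fdatasync() on tempfile before it
# prints the elapsed time, so a dataset whose flushes are slow
# (e.g. the encrypted one) shows that in the reported throughput.
dd if=/dev/random of=tempfile bs=1M count=1024 conv=fdatasync
```

Rerunning both datasets this way would show whether the gap really lives in the flush path rather than in the buffered writes.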

The quality of this board depends on the quality of the posts. The only way to guarantee thoughtful, informative discussion is to write thoughtful, informative posts. AppleNova is not a real-time chat forum. You have time to compose messages and edit them before and after posting.