Return to RAID: The Ars readers “What If?” edition

I get nervous if I can't see the blinking lights in the big terminal window in the background while the tests are running.

In a previous article pitting ZFS against Linux kernel RAID, some readers were concerned that we had missed some tuning tricks. In particular, Louwrentius wanted us to retest mdadm with bitmaps disabled, and Targetnovember thought XFS might well outperform ext4.

Write-intent bitmaps are an mdraid feature that allows disks which have dropped out of and rejoined the array to be resynchronized rather than rebuilt from scratch. The "age" of the bitmap on the returning disk is used to determine which data was written in its absence, so the disk only needs to be updated with the new data instead of being rebuilt entirely.
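
For illustration, toggling a write-intent bitmap on an existing mdadm array looks something like the following; /dev/md0 is just a placeholder device name, not necessarily what we used.

    # Remove the internal write-intent bitmap from an existing array:
    mdadm --grow /dev/md0 --bitmap=none
    # Put an internal bitmap back:
    mdadm --grow /dev/md0 --bitmap=internal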

XFS and ext4 are simply two different filesystems. Ext4 is the default root filesystem on most distributions, while XFS is the filesystem most often found in very large arrays with hundreds or even thousands of tebibytes. We tested both filesystems this time, with bitmap support disabled.
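
Formatting the array with either filesystem is a one-liner; these commands are illustrative rather than the exact invocations from our test runs.

    # Format the md array with ext4 (the -F / -f flags skip the confirmation prompt):
    mkfs.ext4 -F /dev/md0
    # ...or with XFS:
    mkfs.xfs -f /dev/md0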

Running the entire range of tests we used in previous articles is no small task; the full suite, which covers a variety of topologies, block sizes, process counts, and I/O types, takes roughly 18 hours. But we did find the time to do some testing with the heavyweight topologies: the ones with all eight disks in play.

A note on today’s results

The framework we used for the ZFS tests automatically destroys, creates, formats, and mounts arrays, then performs the actual tests. Our original mdadm tests were done individually and by hand, so to make sure we got the best possible apples-to-apples comparison, we adapted the framework to work with mdadm as well.
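
In rough terms, one iteration of that loop looks like the sketch below. The device names, mount point, and ordering are assumptions for illustration, not the framework's actual code.

    # A rough sketch of one test iteration (not the actual framework code).
    # Assumes the eight test disks are /dev/sdb through /dev/sdi.
    mdadm --stop /dev/md0 2>/dev/null        # tear down any previous array
    mdadm --zero-superblock /dev/sd[b-i]     # wipe old RAID metadata from the disks
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    mkfs.ext4 -F /dev/md0                    # format (mkfs.xfs -f for the XFS runs)
    mount /dev/md0 /mnt/md0
    # ...run the fio workloads against /mnt/md0 here...
    umount /mnt/md0
    mdadm --stop /dev/md0                    # leave a clean slate for the next topology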

During this adaptation we discovered a problem with our asynchronous 4KiB write test. For ZFS, we used --numjobs=8 --iodepth=8 --size=512M. This creates eight separate 512MiB files, one for each of the eight separate fio processes to work on. Unfortunately, that file size is just small enough that mdraid could absorb the entire test as a single near-sequential batch, rather than performing genuinely random 4KiB writes.

To get mdraid to behave, we had to scale the file size up until we reached --size=2G. At that point, mdraid's write throughput dropped to less than 20 percent of the "burst" throughput it showed with the smaller files. Unfortunately, this also stretches the duration of the asynchronous 4KiB write test enormously, and even fio's time_based option doesn't help, since mdraid has already swallowed the entire workload into its write buffer within the first few hundred milliseconds.
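
Put together, the adjusted 4KiB asynchronous write test looks roughly like this. Only --numjobs, --iodepth, and --size come from our description above; the remaining options (ioengine, directory, and so on) are illustrative guesses rather than our exact configuration.

    # Approximate 4KiB random-write workload; options other than numjobs,
    # iodepth, and size are assumptions for illustration.
    fio --name=rand-write-4k --directory=/mnt/md0 \
        --rw=randwrite --bs=4k --ioengine=libaio \
        --numjobs=8 --iodepth=8 --size=2G \
        --end_fsync=1 --group_reporting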

Since our results would otherwise come from slightly different fio configurations, we re-ran the ZFS and mdraid tests with the default bitmaps enabled, in addition to the new --bitmap=none and XFS tests.

RAIDz2 vs mdraid6

Although we're only testing eight-disk-wide configurations today, we're covering both striped parity and striped mirrors. First, let's compare the parity options: ZFS RAIDz2 and Linux mdraid6.

Block size 1MiB

Removing bitmap support accelerated mdraid6's asynchronous writes. Setting --bitmap=none did not help synchronous writes. 1MiB reads are unaffected by bitmaps either way.

When we created a new eight-disk mdraid6 array with bitmap support disabled, our asynchronous writes sped up considerably, but the additional 27.9 percent still didn't bring mdraid6 within reach of ZFS at its default settings, let alone the recordsize=1M result.

Neither reads nor synchronous writes were affected by the presence or absence of a bitmap. RAIDz2 writes are more than twice as fast as mdraid6 writes even with the bitmap enabled, while mdraid6 reads are slightly less than twice as fast as RAIDz2 reads.

Although XFS was only tested without a bitmap, it trailed ext4 in every 1MiB test.

Block size 4KiB

Removing bitmaps had no significant impact on 4KiB writes, but for the first time, XFS edges out ext4. Removing bitmaps didn't help 4KiB sync writes any more than it helped 1MiB sync writes. On reads, XFS and ext4 run neck and neck at slightly more than twice the speed of ZFS.

Small random operations are a traditional RAID6's nightmare. They're not an ideal scenario for RAIDz2 either, but RAIDz2's ability to avoid being caught in a read-modify-rewrite cycle gives it a 6:1 write performance advantage over mdraid6. Mdraid6 does much better on random reads, with a 2:1 read advantage.

In these small-block tests, XFS held its own against ext4, and even slightly surpassed it on asynchronous 4KiB writes. None of these changes (filesystem or bitmap support) had a major impact on mdraid6's overall 4KiB performance.

ZFS Mirrors vs mdraid10

Administrators who need maximum performance should leave the parity arrays behind and move to mirrors. On the mdraid side, mdraid10 outperforms mdraid6 in every metric we tested, and a ZFS mirror pool in turn outperforms mdraid10 in every one of them.

Block size 1MiB

Disabling bitmaps speeds up mdraid10's 1MiB writes as well, but only by about 5 percent. Disabling bitmaps also helps mdraid10 a bit on synchronous writes. Read speed is unaffected by a bitmap (or its absence).

As with the parity arrays, mdraid10 gets a 1MiB write boost, but a much smaller one than mdraid6 did, and that little boost doesn't significantly change mdraid10's relationship to the faster ZFS mirrors.

Disabling bitmaps has no effect on read performance, and unlike RAIDz2, the ZFS mirrors come out ahead on 1MiB reads as well.

XFS again trails ext4 on every metric tested.

Block size 4KiB

Bitmaps have no effect on mdraid10's 4KiB write performance, but XFS posts lower numbers than ext4. Mdraid10's 4KiB sync writes come out the same whether it's XFS or ext4, internal bitmaps or no bitmaps. Bitmaps still have no effect on read speed, and you shouldn't expect them to.

At the 4KiB block size, mdraid10 has one moderate advantage over ZFS mirrors: uncached reads are roughly 35 percent faster. But it concedes a 4:1 advantage to the ZFS mirrors on asynchronous writes and a 12:1 advantage on synchronous writes.

The presence or absence of bitmaps makes no visible difference in any 4KiB operation. XFS performance matches ext4's on sync writes and on reads, but it is somewhat slower on asynchronous writes.

Conclusions

Disabling bitmap support does have some effect on the write performance of mdraid6 and mdraid10, but the difference isn't night and day in our tests, and it doesn't significantly change how either topology stacks up against its nearest ZFS equivalent.

We recommend leaving bitmaps enabled, whether or not you care about the performance comparison with ZFS. Safety features matter, and without bitmaps, mdraid is a bit more fragile. There is an option for "external" bitmaps that can be stored on a fast SSD, but we don't recommend that either; we've seen some complaints about corrupted external bitmaps.
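
For reference, an external bitmap is requested by passing a file path instead of the keyword internal; the path below is purely illustrative, and as noted, we don't recommend this setup.

    # Store the write-intent bitmap in a file on a separate, fast device.
    # The bitmap file must not live on the array it protects.
    mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap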

If performance is your main criterion, we can't recommend XFS over ext4 either. XFS trailed ext4 in almost every test, sometimes significantly. Administrators with massive arrays, hundreds of tebibytes or more, may have other stability- and testing-related reasons to choose XFS. But hobbyists with a few hard drives are well served by either and can apparently squeeze a little more performance out of ext4.
