computing:btrfsreminders, revised 2026/02/08 15:47 to 2026/02/08 16:06 (current) by oemb1905
</code>
If you use the script above, you will also need to ...
<code bash>
root@net:~# free -h
               total        used        free      shared  buff/cache   available
Mem:
Swap:
root@net:~# /
UUID:
Scrub started:
Status:
Duration:
Total to scrub:
Rate:
Error summary:
</code>
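As a side note, a scrub status report like the one above can be produced by hand with ''btrfs scrub'' (this is a sketch, and the mountpoint ''/mnt/pool'' is a placeholder for your own pool):
<code bash>
# /mnt/pool is a placeholder; substitute the mountpoint of your btrfs pool
sudo btrfs scrub start /mnt/pool     # kicks off a scrub in the background
sudo btrfs scrub status /mnt/pool    # shows progress, rate, and error summary
</code>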
To test your new pool's speed against your prior setup, or just to obtain some benchmarks, I recommend using ''fio'':
<code bash>
sudo apt install fio
sudo fio --name=seqread --rw=read --bs=128k --iodepth=32 --ioengine=libaio --direct=1 --size=4g --numjobs=8 --runtime=60 --group_reporting --filename=/
sudo fio --name=seqwrite --rw=write --bs=128k --iodepth=32 --ioengine=libaio --direct=1 --size=4g --numjobs=8 --runtime=60 --group_reporting --filename=/
sudo fio --name=seqread --rw=read --bs=128k --iodepth=32 --ioengine=libaio --direct=1 --size=4g --numjobs=8 --runtime=60 --group_reporting --filename=/
sudo fio --name=seqwrite --rw=write --bs=128k --iodepth=32 --ioengine=libaio --direct=1 --size=4g --numjobs=8 --runtime=60 --group_reporting --filename=/
</code>
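Those long command lines can also be kept as a fio job file, which is easier to tweak between runs. A sketch of the read job follows; the ''filename'' path is a placeholder, so point it at a test file on the pool you are benchmarking:
<code bash>
# write an equivalent job file for the sequential read test;
# filename=/mnt/pool/fio.test is a placeholder path, not from the original page
cat > seqread.fio <<'EOF'
[seqread]
rw=read
bs=128k
iodepth=32
ioengine=libaio
direct=1
size=4g
numjobs=8
runtime=60
group_reporting
filename=/mnt/pool/fio.test
EOF
# run it with: fio seqread.fio
</code>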
With zfs on my production server, I found I was still getting the read speed of a single drive, despite the presumed parallelization benefits of having 8 enterprise SAS SSDs in a RAID10 pool. Since migrating to btrfs, speeds are near the hardware caps. Here's the read test:
<code bash>
seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.39
Starting 8 processes
seqread: Laying out IO file (1 file / 4096MiB)
Jobs: 8 (f=8): [R(8)][100.0%][r=5797MiB/s]
seqread: (groupid=0, jobs=8): err= 0: pid=2279596:
  read: IOPS=42.1k, BW=5264MiB/s (5520MB/s)
    slat (usec): min=11, max=28981, avg=106.92, stdev=402.06
    clat (usec): min=43, max=53886, avg=5831.38
     lat (usec): min=183, max=53910, avg=5938.30
    clat percentiles (usec):
     | 30.00th=[ 3064], 40.00th=[ 4113], 50.00th=[ 5080], 60.00th=[ 6063],
     | 70.00th=[ 7242], 80.00th=[ 8717], 90.00th=[11469],
     | 99.00th=[21365],
     | 99.99th=[46924]
   bw (  MiB/s): min= 4508, max= 6109, per=100.00%
   lat (usec)
   lat (msec)
   lat (msec)
  cpu          : usr=2.22%, sys=28.68%, ctx=252056, majf=0, minf=8262
  IO depths

Run status group 0 (all jobs):
   READ: bw=5264MiB/s (5520MB/s)
</code>
| + | |||
| + | Here's the write test: | ||
<code bash>
seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.39
Starting 8 processes
seqwrite: Laying out IO file (1 file / 4096MiB)
Jobs: 6 (f=6): [W(6),
seqwrite: (groupid=0, jobs=8): err= 0: pid=2279720:
  write: IOPS=12.2k, BW=1529MiB/s (1603MB/s)
    slat (usec): min=38, max=33255, avg=595.61, stdev=1120.04
    clat (usec): min=176, max=96135, avg=18562.40
     lat (usec): min=264, max=96296, avg=19158.01
    clat percentiles (usec):
     | 30.00th=[14222],
     | 70.00th=[19792],
     | 99.00th=[53216],
     | 99.99th=[79168]
   bw (  MiB/s): min= 1074, max= 2563, per=100.00%
   lat (usec)
   lat (msec)
   lat (msec)
  cpu          : usr=2.07%, sys=64.47%, ctx=142306, majf=0, minf=20562
  IO depths

Run status group 0 (all jobs):
  WRITE: bw=1529MiB/s (1603MB/s)
</code>
| + | |||
| + | In lay terms, these reports confirm that read speed is 5,520 MB/s, or 5.5 GB/s, and write speed is 1,603 MB/s, or 1.6 GB/s. This is a 4x improvement for reads and 2x improvement for writes compared to zfs. For whatever reason, zfs was not benefitting from the parallelization. It's possible that I could get zfs to perform better with tinkering, but why? Every major upgrade I have to re-compile it with dkms against the new kernel headers, which takes forever. Additionally, | ||
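
As a sanity check on those conversions: fio reports bandwidth in MiB/s, and 1 MiB is 1,048,576 bytes, so multiplying by 1048576 and dividing by 10^6 yields MB/s:
<code bash>
# convert fio's MiB/s figures to MB/s (1 MiB = 1048576 bytes)
awk 'BEGIN { printf "%.0f\n", 5264 * 1048576 / 1000000 }'   # read:  prints 5520
awk 'BEGIN { printf "%.0f\n", 1529 * 1048576 / 1000000 }'   # write: prints 1603
</code>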
| + | |||
| + | --- // | ||