Quote Originally Posted by strahil-nikolov-dxc View Post
What mount options are you using?
Have you tested with 4M block size ?

Great idea. From some quick testing, the mount options "-o localflocks,coherency=buffered,noatime" produced good results, albeit from a limited number of test runs.


The application that I'm working with writes with 1MB block sizes, but if larger blocks get us there then I can make a case for it. Below are the tests for 1M through 16M block sizes with the above mount options. While the 4M test case got throughput up to 9235MiB/s, latency really picks up at 4M (~10ms). Block sizes 1M (1.1ms) and 2M (5.7ms) are more in line with what the application can handle (although 5.7ms is still far worse than 1.1ms).
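For anyone wanting to reproduce this, the runs below can be driven with a fio sweep along these lines. This is a sketch, not the exact job I ran: the runtime (60s) and target directory match the output below, but ioengine, iodepth, numjobs, and file size are assumptions you'd tune for your own array.

```shell
# Hypothetical fio block-size sweep; only bs, runtime, and the mount
# point are taken from the results below -- the rest are assumptions.
for bs in 1M 2M 4M 8M 16M; do
    fio --name=seqwrite-$bs --directory=/mnt/ocfs2-R0 \
        --rw=write --bs=$bs --ioengine=libaio --direct=1 \
        --iodepth=16 --numjobs=8 --size=8G \
        --time_based --runtime=60 --group_reporting
done
```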

I already have the cluster and block sizes in the mkfs command set as large as they can go (-C 1M, -b 4096).
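Concretely, the filesystem was created with something like the following (the label is a placeholder; only the -C and -b values are from my actual setup):

```shell
# Sketch of the mkfs.ocfs2 invocation implied above:
# -C 1M  -> 1MiB cluster size (the maximum)
# -b 4096 -> 4KiB block size (the maximum)
# The label is hypothetical.
mkfs.ocfs2 -C 1M -b 4096 -L ocfs2-R0 /dev/md117
```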


1M writes
mount -o localflocks,coherency=buffered,noatime /dev/md117 /mnt/ocfs2-R0
write: IOPS=7076, BW=7076MiB/s (7420MB/s)(415GiB/60002msec)
slat (usec): min=152, max=1124.6k, avg=1125.54, stdev=17354.68
clat (usec): min=47, max=1125.2k, avg=2263.23, stdev=24523.08
lat (usec): min=458, max=1125.7k, avg=3389.35, stdev=30009.56


2M writes
mount -o localflocks,coherency=buffered,noatime /dev/md117 /mnt/ocfs2-R0
write: IOPS=4192, BW=8386MiB/s (8793MB/s)(491GiB/60003msec)
slat (usec): min=321, max=1000.7k, avg=1905.87, stdev=20998.16
clat (usec): min=103, max=1001.4k, avg=3816.29, stdev=29642.73
lat (usec): min=578, max=1002.1k, avg=5722.46, stdev=36239.94


4M writes
mount -o localflocks,coherency=buffered,noatime /dev/md117 /mnt/ocfs2-R0
write: IOPS=2308, BW=9235MiB/s (9683MB/s)(541GiB/60005msec)
slat (usec): min=484, max=805147, avg=3462.84, stdev=29867.22
clat (usec): min=234, max=806260, avg=6930.31, stdev=42104.41
lat (usec): min=1464, max=807378, avg=10393.46, stdev=51402.95


8M writes
mount -o localflocks,coherency=buffered,noatime /dev/md117 /mnt/ocfs2-R0
write: IOPS=988, BW=7907MiB/s (8292MB/s)(463GiB/60009msec)
slat (usec): min=1544, max=2473.8k, avg=8090.49, stdev=73644.51
clat (usec): min=193, max=2475.7k, avg=16183.57, stdev=103988.25
lat (msec): min=2, max=3234, avg=24.27, stdev=129.85


16M writes
mount -o localflocks,coherency=buffered,noatime /dev/md117 /mnt/ocfs2-R0
write: IOPS=490, BW=7843MiB/s (8224MB/s)(460GiB/60018msec)
slat (msec): min=3, max=6150, avg=16.32, stdev=136.79
clat (usec): min=137, max=8301.4k, avg=32592.63, stdev=205172.65
lat (msec): min=4, max=8305, avg=48.91, stdev=256.05


(My counterpart in the lab is trying the same test with GPFS and cLVM, so it's a race to 12.5GB/s.)

Has anyone else tried to benchmark OCFS2 on workloads with high write throughput rates?