
It is recommended to maintain at least 20% free space on the ZFS pool. Storage usage above 80% will lead to poor I/O performance, and to longer resilver times on a raidz or mirrored pool if a disk has to be replaced.

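To check how full a pool currently is, the CAP column of zpool list shows the percentage of space in use (PoolName is the placeholder pool name used throughout this post):

zpool list PoolName
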
By default, the ZIL (ZFS intent log) lives on the same ZFS vdevs as the pool data. Not only are writes to the ZIL slow on rotating disks, but those reads and writes compete with other disk activity; it also means double writes to the same pool vdevs.

These double writes can be prevented by disabling sync on the pool, e.g.:

zfs set sync=disabled PoolName

This configuration poses the risk of losing a few seconds of data if there is a sudden power loss on the server.

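To check the current setting, or to return to the default behaviour later (sync=standard is the ZFS default):

zfs get sync PoolName
zfs set sync=standard PoolName
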
Set xattr=sa so that extended attributes are stored as system attributes in the dnode instead of in hidden directories, avoiding extra I/O:

zfs set xattr=sa PoolName

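To confirm the property is set (it is inherited by child datasets); note that the new format only applies to extended attributes written after the change:

zfs get xattr PoolName
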
A dedicated ZIL log device (SLOG) can improve synchronous write performance. To add a log device:

zpool add PoolName log /dev/disk/by-id/<id of ssd log disk>

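If the pool handles important synchronous writes, the log device can also be mirrored so a single SSD failure does not lose in-flight ZIL data; zpool status then shows the new log vdev (the two by-id paths are placeholders):

zpool add PoolName log mirror /dev/disk/by-id/<id of ssd 1> /dev/disk/by-id/<id of ssd 2>
zpool status PoolName
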
The values below control the size of the ZFS ARC in bytes; this example caps the ARC at 9 GiB and sets a 1 GiB minimum:

echo "9663676416" > /sys/module/zfs/parameters/zfs_arc_maxecho "1073741824" > /sys/module/zfs/parameters/zfs_arc_min

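Values written under /sys/module/zfs/parameters take effect immediately but do not survive a reboot. On most distributions they can be made persistent with a modprobe options file (the file name zfs.conf is just a convention; regenerate the initramfs afterwards if the root filesystem is on ZFS):

echo "options zfs zfs_arc_max=9663676416 zfs_arc_min=1073741824" > /etc/modprobe.d/zfs.conf
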
On slower storage servers the default dirty data limit may be too high and can lead to kernel hung-task timeouts. On systems with large amounts of memory, e.g. 100 GB or more, lowering the dirty data value is usually worthwhile.

The default value of zfs_dirty_data_max is 10% of physical RAM, capped at zfs_dirty_data_max_max; the default value of zfs_dirty_data_max_max is 25% of physical RAM.

To set the dirty data maximum to 128 MB (128 × 1024 × 1024 = 134217728 bytes):

echo "134217728" > /sys/module/zfs/parameters/zfs_dirty_data_max

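To verify the value currently in effect:

cat /sys/module/zfs/parameters/zfs_dirty_data_max
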
Other tunables are documented in the ZFS module parameters man page (zfs(4), formerly zfs-module-parameters(5)).
