Hello,

I put up a bug report for this issue at:
https://www.illumos.org/issues/5339


I also wanted to get some general feedback from other potentially interested parties. The gist of the issue is as follows (more detail in the bug report):

OS:
OmniOS v11 r151012 (omnios-10b9c79)

Hardware:
CPUs: dual Intel Xeon E5-2620v2 (hexa-core 2.1GHz)
RAM: 64GiB 1600MHz ECC Reg DDR3
SSDs: Samsung 845DC Pro 400GB connected to Intel C602J SATA3 (6Gbps)
HDDs: HGST HUS724040ALS640 (4TB SAS 7200 RPM) connected to LSI SAS 2308 via LSI SAS2 (6Gbps) expander backplane with 4x SAS2 controller links
network: not in the path for this testing, but Intel I350

The system is completely empty and unused besides this testing.

steps to reproduce:

1. create the simplest test pool with one vdev and one slog (for demonstration purposes only):
zpool create -f testpool c1t5000CCA05C68D505d0 log c5t0d0
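To sanity-check the layout, 'zpool status' should show the log device under its own "logs" section:

zpool status testpool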

2. using your freshly created, empty, and inactive pool, create a randomized file for testing:
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd iflag=fullblock of=/testpool/randomfile.deleteme bs=1M count=4096
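The AES-CTR keystream makes the file effectively incompressible, so compression can't shrink what later hits the slog. To double-check that the file landed at the full 4GiB and that compression is still at its default of off:

ls -lh /testpool/randomfile.deleteme
zfs get compression testpool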

3. warm up the ARC for fast reads on your file:
dd if=/testpool/randomfile.deleteme of=/dev/null bs=1M
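To confirm the warm-up took, ARC size as reported by kstat should grow by roughly the file size; run this before and after the dd to see the delta (illumos kstat syntax):

kstat -p zfs:0:arcstats:size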

4. write 100MiB synchronously and observe slog "alloc" with 'zpool iostat -v':
dd if=/testpool/randomfile.deleteme of=/testpool/newrandom.deleteme bs=1M oflag=sync count=100 ; zpool iostat -v testpool 1 7
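If you'd rather watch the counter climb live from a single terminal, the same commands can be reordered to background the write:

dd if=/testpool/randomfile.deleteme of=/testpool/newrandom.deleteme bs=1M oflag=sync count=100 &
zpool iostat -v testpool 1 7
wait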

before running any additional tests, run:
rm -f /testpool/newrandom*

expected results:
slog 'alloc' peaks at 100MiB

actual results:
slog 'alloc' peaks at 200MiB


To observe this, your slog has to be sufficiently fast to accept the data within the time span of a single txg (5 seconds by default).
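You can check the interval on a live illumos kernel by reading the zfs_txg_timeout tunable with mdb (prints the value in seconds as a decimal):

echo zfs_txg_timeout/D | mdb -k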

I've also observed the same behavior on the same hardware under FreeNAS (essentially FreeBSD 9 with some backports from 10 and a NAS-specific web GUI). The test had to differ slightly (the openssl and dd invocations vary a little there, and I used the sync=always zfs property to force sync writes through the slog), but the outcome was the same, aside from roughly 20% better sync write performance under FreeNAS.
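For reference, the sync=always approach there looked roughly like this (no oflag=sync needed, since the property forces every write through the ZIL/slog):

zfs set sync=always testpool
dd if=/testpool/randomfile.deleteme of=/testpool/newrandom.deleteme bs=1M count=100
zfs inherit sync testpool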

This leads me to believe the issue is in the common ZFS code rather than an OS peculiarity.

Is there a known reason why I'm seeing double writes to the slog? Am I alone or are others also seeing the same data amplification for sync writes with a slog?

Sincerely,
Andrew Kinney


