Hi Jim,

The setup is not there anymore, but I will share as much detail as I
have documented. Could you please post the commands you used and note
any differences you think might be important? Did you ever test with
2008.11 instead of SXCE?

I will probably be testing again soon. Any tips or obvious errors are welcome :)

------------->8-------------
The Setup
* A 100G zvol has been set up on each node of an AVS replicating pair
  (the full command sequence is sketched just after this list)
* A "ramdisk" has been set up on each node using
  ramdiskadm -a ram1 10m
* The replication relationship has been set up using
  sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 \
    sec /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async
* The AVS driver was configured not to log the disk bitmap to disk, but
to keep it in kernel memory and write it out only at machine shutdown.
This is configured as follows:
  grep bitmap_mode /usr/kernel/drv/rdc.conf
  rdc_bitmap_mode=2;
* The replication was configured to be in logging mode
  sndradm -P
  /dev/zvol/rdsk/gold/myzvol      <-      pri:/dev/zvol/rdsk/gold/myzvol
  autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
2, mode: async, state: logging
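
For completeness, here is a minimal sketch of the command sequence used
on each node to build the setup above. The pool name 'gold' and the zvol
name 'myzvol' match the sndradm output; the exact 'zfs create' invocation
is reconstructed from memory rather than from my notes, so treat it as an
assumption:

  # create the 100G zvol (reconstructed; the pool 'gold' already existed)
  zfs create -V 100G gold/myzvol
  # create the 10 MB ramdisk used as the SNDR bitmap volume
  ramdiskadm -a ram1 10m
  # set rdc_bitmap_mode=2 in /usr/kernel/drv/rdc.conf (keep the bitmap in
  # kernel memory, write it to disk only at shutdown) before enabling the set
  # enable the SNDR set; -E marks the volumes as already identical and
  # leaves the set in logging mode
  sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 \
    sec /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async
  # confirm the set and its state
  sndradm -P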

Testing was done with:

 dd if=/dev/zero of=/dev/zvol/rdsk/gold/xxVolNamexx oflag=dsync bs=256M count=10

* Option 'dsync' was chosen to try to avoid ZFS's aggressive caching.
In addition, a couple of runs were usually launched first to prime the
ZFS cache and force real writes to disk
* Option 'bs=256M' was used in order to avoid the overhead of copying
many small blocks into kernel memory before disk writes. A larger block
size ensures maximum throughput; smaller values were tried as well
without much difference
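
For reference, each throughput figure is simply the 10 x 256M written
per run divided by the elapsed wall-clock time; a hypothetical wrapper
such as the one below (not taken from my notes) gives the same number:

  # time one run; throughput = bytes written / elapsed ("real") time
  ptime dd if=/dev/zero of=/dev/zvol/rdsk/gold/xxVolNamexx oflag=dsync bs=256M count=10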

The results over multiple runs:

Non-replicated vol throughput: 42.2, 52.8, 50.9 MB/s
Replicated vol throughput:      4.9,  5.5,  4.6 MB/s
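
In other words, the replicated volume delivers roughly a tenth of the
non-replicated throughput (4.9/42.2 ~ 12%, 5.5/52.8 ~ 10%, 4.6/50.9 ~ 9%),
i.e. the ~90% degradation discussed below.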

---------->8---------------------

Regards

On Mon, Jan 26, 2009 at 1:22 AM, Jim Dunham <james.dun...@sun.com> wrote:
> Ahmed,
>
>> Thanks for your informative reply. I am involved with kristof
>> (original poster) in the setup, please allow me to reply below
>>
>>> Was the follow 'test' run during resynchronization mode or replication
>>> mode?
>>>
>>
>> Neither, testing was done while in logging mode. This was chosen to
>> simply avoid any network "issues" and to get the setup working as fast
>> as possible. The setup was created with:
>>
>> sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
>> /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async
>>
>> Note that the logging (bitmap) disks are ramdisks, again to try to
>> avoid disk contention and get the fastest performance (reliability is
>> not a concern in this test). Before running the tests, this was the state:
>>
>> #sndradm -P
>> /dev/zvol/rdsk/gold/myzvol      <-      pri:/dev/zvol/rdsk/gold/myzvol
>> autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
>> 2, mode: async, state: logging
>>
>> While we should hopefully be getting a minimal performance hit, we got
>> a big one: disk throughput was reduced to almost 10% of the normal
>> rate.
>
> Is it possible to share information on your ZFS storage pool configuration,
> your testing tool, testing types and resulting data?
>
> I just downloaded Solaris Express CE (b105)
> http://opensolaris.org/os/downloads/sol_ex_dvd_1/, configured ZFS in
> various storage pool types, SNDR with and without RAM disks, and I do not
> see that disk throughput was reduced to almost 10% of the normal rate. Yes,
> there is some performance impact, but nowhere near the amount reported.
>
> There are various factors which could come into play here, but the most
> obvious reason that someone may see a serious performance degradation as
> reported is that, prior to SNDR being configured, the existing system
> under test was already maxed out on some system limitation, such as CPU
> or memory. I/O impact should not be a factor, given that a RAM disk is
> used. On such a system, the addition of both SNDR and a RAM disk in the
> data path, regardless of how small their system cost is, will have a
> profound impact on disk throughput.
>
> Jim
>
>>
>> Please feel free to ask for any details, thanks for the help
>>
>> Regards