Re: ZFS on top of GELI

2010-01-12 Thread Rafał Jackiewicz
Thanks, could you do the same, but using 2 .eli vdevs mirrored
together in a zfs mirror?

- Sincerely,
Dan Naumov

Hi,

Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
Chipset: Intel 82945G
Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
empty file: /boot/loader.conf
Hdd: 
   ad4: 953869MB Seagate ST31000533CS SC15 at ata2-master SATA150
   ad6: 953869MB Seagate ST31000533CS SC15 at ata3-master SATA150
Geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
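For reference, the pools tested below would have been assembled roughly like
this (a sketch; only the geli init lines above are from the original setup,
the attach and zpool commands are assumed):

   # attach the encrypted providers (creates /dev/ad4s2.eli and /dev/ad6s2.eli)
   geli attach -k /etc/keys/ad4s2.key /dev/ad4s2
   geli attach -k /etc/keys/ad6s2.key /dev/ad6s2

   # "eli.zfs" mirror case: ZFS mirror on top of the two .eli providers
   zpool create data01 mirror ad4s2.eli ad6s2.eli

   # "zfs" (no eli) mirror case, for comparison
   zpool create data01 mirror ad4s2 ad6s2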


Results:


*** single drive                write MB/s      read MB/s
eli.journal.ufs2                23              14
eli.zfs                         19              36


*** mirror                      write MB/s      read MB/s
mirror.eli.journal.ufs2         23              16
eli.zfs                         31              40
zfs                             83              79


*** degraded mirror             write MB/s      read MB/s
mirror.eli.journal.ufs2         16              9
eli.zfs                         56              40
zfs                             86              71





* Single drive: **
Mount:
  data01 on /data01 (zfs, local)
  /dev/ad6s2.eli.journal on /data02 (ufs, local, gjournal)

*** (single hdd) UFS2, gjournal, eli.
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 92.451346 secs (22683845 bytes/sec)
0.068u 10.386s 1:32.46 11.2%  26+1497k 63+16066io 0pf+0w
** umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 147.219379 secs (14245081 bytes/sec)
0.008u 4.456s 2:27.22 3.0%  23+1324k 16066+0io 0pf+0w

*** (single hdd) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 113.049629 secs (18550720 bytes/sec)
0.014u 5.480s 1:53.05 4.8%  26+1516k 0+0io 0pf+0w
** umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 59.012860 secs (35537203 bytes/sec)
0.000u 3.135s 0:59.01 5.3%  24+1397k 0+0io 0pf+0w


* Mirror: *

*** (mirror hdd) UFS2, gjournal, eli.
srebrny# gmirror list
Geom name: data02
State: COMPLETE  
Components: 2
Balance: round-robin 
Slice: 4096  
Flags: NONE  
GenID: 0 
SyncID: 1  
**
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 89.441874 secs (23447094 bytes/sec)
0.022u 11.110s 1:29.45 12.4%  26+1515k 64+16066io 0pf+0w
**  umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 134.567914 secs (15584339 bytes/sec)
0.007u 4.333s 2:14.62 3.2%  26+1498k 16067+0io 0pf+0w


*** (mirror hdd, eli) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 67.255574 secs (31181832 bytes/sec)
0.029u 6.422s 1:07.25 9.5%  26+1531k 0+0io 0pf+0w
** (eli) umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 52.307404 secs (40092833 bytes/sec)
0.036u 3.405s 0:52.31 6.5%  27+1543k 0+0io 0pf+0w

*** (mirror hdd, no eli!) zfs:
srebrny# time dd if=/dev/zero of=/data01/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.185164 secs (83269341 bytes/sec)
0.000u 5.506s 0:25.18 21.8% 26+1513k 0+0io 0pf+0w
** (no eli!) umount / mount, and:
srebrny# time dd if=/data01/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 26.457374 secs (79265312 bytes/sec)
0.000u 3.011s 0:26.45 11.3% 24+1396k 0+0io 0pf+0w

*

*** (mirror !!!degraded!!!, single drive ad4s2) UFS2, gjournal, eli.
df -h
/dev/mirror/data02.eli.journal    857G    8.0K    788G     0%    /data02
**
srebrny# time dd if=/dev/zero of=/data02/test01 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 131.554958 secs (15941262 bytes/sec)
0.029u 10.057s 2:11.58 7.6% 26+1528k 64+16066io 0pf+0w
**  (mirror !!!degraded!!!, single drive ad4s2) umount / mount, and:
srebrny# time dd if=/data02/test01 of=/dev/null bs=1m
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 226.056204 secs (9277127 bytes/sec)
0.029u 4.226s 3:46.08 1.8%  26+1529k 16066+0io 0pf+0w


*** (mirror !!!degraded!!!, single drive ad4s2; eli)  zfs:
srebrny# time dd if=/dev/zero of=/data01/test011 bs=1m count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 

Re: ZFS on top of GELI

2010-01-12 Thread Dan Naumov
2010/1/12 Rafał Jackiewicz free...@o2.pl:
Thanks, could you do the same, but using 2 .eli vdevs mirrored
together in a zfs mirror?

- Sincerely,
Dan Naumov

 Hi,

 Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
 Chipset: Intel 82945G
 Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
 empty file: /boot/loader.conf
 Hdd:
   ad4: 953869MB Seagate ST31000533CS SC15 at ata2-master SATA150
   ad6: 953869MB Seagate ST31000533CS SC15 at ata3-master SATA150
 Geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2


 Results:
 

 *** single drive                        write MB/s      read MB/s
 eli.journal.ufs2                        23              14
 eli.zfs                                 19              36


 *** mirror                              write MB/s      read MB/s
 mirror.eli.journal.ufs2                 23              16
 eli.zfs                                 31              40
 zfs                                     83              79


 *** degraded mirror                     write MB/s      read MB/s
 mirror.eli.journal.ufs2                 16              9
 eli.zfs                                 56              40
 zfs                                     86              71

 

Thanks a lot for your numbers, the relevant part for me was this:

*** mirror                      write MB/s      read MB/s
eli.zfs                         31              40
zfs                             83              79

*** degraded mirror             write MB/s      read MB/s
eli.zfs                         56              40
zfs                             86              71

31 MB/s writes and 40 MB/s reads is something I guess I could
potentially live with. I am guessing the main problem of stacking ZFS
on top of GELI like this is that writing to a mirror requires double
the CPU use: we have to encrypt all written data twice (once for each
disk) instead of encrypting first and then writing the encrypted data
to the 2 disks, as would be the case if the crypto sat on top of ZFS
instead of ZFS sitting on top of the crypto.

I now have to reevaluate my planned use of an SSD though. I was
planning to use a 40GB partition on an Intel 80GB X25-M G2 as a
dedicated L2ARC device for a ZFS mirror of 2 x 2TB disks. However,
these numbers make it quite obvious that I would already be
CPU-starved at 40-50 MB/s throughput on the encrypted ZFS mirror, so
adding an L2ARC SSD, while improving latency, would do essentially
nothing for actual disk read speeds, considering the L2ARC itself
would also have to sit on top of a GELI device.
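For what it's worth, a GELI-backed L2ARC would be added roughly like this
(a sketch with placeholder names; da0p2 stands for the 40GB SSD partition
and "tank" for the 2 x 2TB mirror pool):

   geli init -s 4096 -K /etc/keys/cache.key /dev/da0p2
   geli attach -k /etc/keys/cache.key /dev/da0p2
   zpool add tank cache da0p2.eli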

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-11 Thread Pete French
 GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
 made any benchmarks regarding how much of a performance hit is caused
 by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a

I haven't done it directly on the same boxes, but I have two systems
with identical drives, each with a ZFS mirror pool, one with GELI and
one without. A simple read test shows no overhead from using GELI at all.

I would recommend using the new AHCI driver though - it greatly
improves throughput.
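For anyone wanting to try it, enabling the driver is a single loader.conf
line (a sketch; note that disks then show up as adaN instead of adN, so fstab
and pool device names may need adjusting):

   # /boot/loader.conf
   ahci_load="YES"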

-pete.


Re: ZFS on top of GELI

2010-01-11 Thread Dan Naumov
On Mon, Jan 11, 2010 at 7:30 PM, Pete French
petefre...@ticketswitch.com wrote:
 GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
 made any benchmarks regarding how much of a performance hit is caused
 by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a

 I haven't done it directly on the same boxes, but I have two systems
 with identical drives, each with a ZFS mirror pool, one with GELI and
 one without. A simple read test shows no overhead from using GELI at all.

 I would recommend using the new AHCI driver though - greatly
 improves throughput.

How fast is the CPU in the system showing no overhead? Having no
noticeable overhead whatsoever sounds extremely unlikely unless you are
actually using it on something like a very modern dual-core or better.

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-11 Thread Pete French
 How fast is the CPU in the system showing no overhead? Having no
 noticeable overhead whatsoever sounds extremely unlikely unless you are
 actually using it on something like a very modern dual-core or better.

It's a very modern dual core :-) Phenom 550 - the other machine is an old
Opteron 252.

-pete.


Re: ZFS on top of GELI

2010-01-11 Thread Ben Schumacher
On Mon, Jan 11, 2010 at 11:39 AM, Dan Naumov dan.nau...@gmail.com wrote:
 On Mon, Jan 11, 2010 at 7:30 PM, Pete French
 How fast is the CPU in the system showing no overhead? Having no
 noticeable overhead whatsoever sounds extremely unlikely unless you are
 actually using it on something like a very modern dual-core or better.

IIRC, GELI can take advantage of hardware acceleration for encryption,
so I'd bet that a slower CPU with hardware crypto (VIA Nano, for
example) would probably be fast enough too. I've actually got one of
these at home, I might have to check this out and see how it runs.
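For reference, checking that is roughly the following (a sketch, assuming a
VIA CPU; GELI goes through crypto(9), so it should pick the engine up
automatically once the driver is loaded):

   kldload padlock            # VIA PadLock driver, see padlock(4)
   dmesg | grep -i padlock    # confirm the engine attached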

Cheers,
Ben


Re: ZFS on top of GELI

2010-01-11 Thread K. Macy

 If performance is an issue, you may want to consider carving off a partition
 on that SSD, geli-fying it, and using it as a ZIL device.  You'll probably
 see a marked performance improvement with such a setup.

 That is true, but using a single device for a dedicated ZIL is a huge
 no-no, considering it's an intent log, it's used to reconstruct the
 pool in case of a power failure for example, should such an event
 occur at the same time as a ZIL provider dies, you lose the entire
 pool because there is no way to recover it, so if ZIL gets put
 elsewhere, that elsewhere really should be a mirror and sadly I
 don't see myself affording to use 2 SSDs for my setup :)


This is  false. The ZIL is used for journalling synchronous writes. If
your ZIL is lost you will lose the data that was written to the ZIL,
but not yet written to the file system proper. Barring disk
corruption, the file system is always consistent.

-Kip


Re: ZFS on top of GELI

2010-01-11 Thread Dan Naumov
2010/1/12 Rafał Jackiewicz free...@o2.pl:
 Two hdd Seagate ES2,Intel Atom 330 (2x1.6GHz), 2GB RAM:

 geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2

 zfs:
   zpool create data01 ad4s2.eli

 df -h:
   dev/ad6s2.eli.journal    857G    8.0K    788G     0%    /data02
   data01                           850G    128K    850G     0%    /data01

 srebrny# dd if=/dev/zero of=/data01/test bs=1M count=500
 500+0 records in
 500+0 records out
 524288000 bytes transferred in 8.802691 secs (59559969 bytes/sec)
 srebrny# dd if=/dev/zero of=/data02/test bs=1M count=500
 500+0 records in
 500+0 records out
 524288000 bytes transferred in 20.090274 secs (26096608 bytes/sec)

 Rafal Jackiewicz

Thanks, could you do the same, but using 2 .eli vdevs mirrorred
together in a zfs mirror?

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-11 Thread Dan Naumov
On Tue, Jan 12, 2010 at 1:29 AM, K. Macy km...@freebsd.org wrote:

 If performance is an issue, you may want to consider carving off a partition
 on that SSD, geli-fying it, and using it as a ZIL device.  You'll probably
 see a marked performance improvement with such a setup.

 That is true, but using a single device for a dedicated ZIL is a huge
 no-no, considering it's an intent log, it's used to reconstruct the
 pool in case of a power failure for example, should such an event
 occur at the same time as a ZIL provider dies, you lose the entire
 pool because there is no way to recover it, so if ZIL gets put
 elsewhere, that elsewhere really should be a mirror and sadly I
 don't see myself affording to use 2 SSDs for my setup :)


 This is  false. The ZIL is used for journalling synchronous writes. If
 your ZIL is lost you will lose the data that was written to the ZIL,
 but not yet written to the file system proper. Barring disk
 corruption, the file system is always consistent.

 -Kip

OK, let's assume we have a dedicated ZIL on a single non-redundant
disk. This disk dies. How do you remove the dedicated ZIL from the
pool or replace it with a new one? Solaris ZFS documentation indicates
that this is possible for a dedicated L2ARC - you can remove a dedicated
L2ARC from a pool at any time you wish, and should some IO fail on the
L2ARC, the system will gracefully continue to run, reverting said IO
to be processed by the actual default built-in ZIL on the disks of the
pool. However, the capability to remove a dedicated ZIL or gracefully
handle the death of a non-redundant dedicated ZIL vdev does not
currently exist in Solaris/OpenSolaris at all.

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-11 Thread Rafał Jackiewicz
Two hdd Seagate ES2,Intel Atom 330 (2x1.6GHz), 2GB RAM:

geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2

zfs:
   zpool create data01 ad4s2.eli

df -h:
   /dev/ad6s2.eli.journal    857G    8.0K    788G     0%    /data02
   data01                    850G    128K    850G     0%    /data01

srebrny# dd if=/dev/zero of=/data01/test bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 8.802691 secs (59559969 bytes/sec)
srebrny# dd if=/dev/zero of=/data02/test bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 20.090274 secs (26096608 bytes/sec)

Rafal Jackiewicz


Re: ZFS on top of GELI

2010-01-11 Thread K. Macy
 Ok, lets assume we have a dedicated ZIL on a single non-redundant
 disk. This disk dies. How do you remove the dedicated ZIL from the
 pool or replace it with a new one? Solaris ZFS documentation indicates
 that this is possible for dedicated L2ARC - you can remove a dedicated
 l2arc from a pool at any time you wish and should some IO fail on the
 l2arc, the system will gracefully continue to run, reverting said IO
 to be processed by the actual default built-in ZIL on the disks of the
 pool. However the capability to remove dedicated ZIL or gracefully
 handle the death of a non-redundant dedicated ZIL vdev does not
 currently exist in Solaris/OpenSolaris at all.

Ahh - you're describing an implementation flaw as opposed to a design
flaw. Your initial statement could be interpreted as meaning that the
ZIL is required for file system consistency.

I hope they fix that.

-Kip


Re: ZFS on top of GELI

2010-01-11 Thread Freddie Cash
On Mon, Jan 11, 2010 at 4:24 PM, Dan Naumov dan.nau...@gmail.com wrote:

 On Tue, Jan 12, 2010 at 1:29 AM, K. Macy km...@freebsd.org wrote:
 
  If performance is an issue, you may want to consider carving off a
 partition
  on that SSD, geli-fying it, and using it as a ZIL device.  You'll
 probably
  see a marked performance improvement with such a setup.
 
  That is true, but using a single device for a dedicated ZIL is a huge
  no-no, considering it's an intent log, it's used to reconstruct the
  pool in case of a power failure for example, should such an event
  occur at the same time as a ZIL provider dies, you lose the entire
  pool because there is no way to recover it, so if ZIL gets put
  elsewhere, that elsewhere really should be a mirror and sadly I
  don't see myself affording to use 2 SSDs for my setup :)
 
 
  This is  false. The ZIL is used for journalling synchronous writes. If
  your ZIL is lost you will lose the data that was written to the ZIL,
  but not yet written to the file system proper. Barring disk
  corruption, the file system is always consistent.
 
  -Kip

 Ok, lets assume we have a dedicated ZIL on a single non-redundant
 disk. This disk dies. How do you remove the dedicated ZIL from the
 pool or replace it with a new one? Solaris ZFS documentation indicates
 that this is possible for dedicated L2ARC - you can remove a dedicated
 l2arc from a pool at any time you wish and should some IO fail on the
 l2arc, the system will gracefully continue to run, reverting said IO
 to be processed by the actual default built-in ZIL on the disks of the
 pool. However the capability to remove dedicated ZIL or gracefully
 handle the death of a non-redundant dedicated ZIL vdev does not
 currently exist in Solaris/OpenSolaris at all.

 That has been implemented in OpenSolaris; do a search for "slog removal".
 It's in a much newer zpool version than 13, though.
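On a pool version that supports it, the removal itself is a one-liner (a
sketch with placeholder names; FreeBSD 8.0's zpool v13 cannot do this yet):

   zpool remove tank gpt/slog0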

-- 
Freddie Cash
fjwc...@gmail.com


Re: ZFS on top of GELI

2010-01-11 Thread jhell




On Mon, 11 Jan 2010 19:45, fjwcash@ wrote:

On Mon, Jan 11, 2010 at 4:24 PM, Dan Naumov dan.nau...@gmail.com wrote:


On Tue, Jan 12, 2010 at 1:29 AM, K. Macy km...@freebsd.org wrote:


If performance is an issue, you may want to consider carving off a

partition

on that SSD, geli-fying it, and using it as a ZIL device.  You'll

probably

see a marked performance improvement with such a setup.


That is true, but using a single device for a dedicated ZIL is a huge
no-no, considering it's an intent log, it's used to reconstruct the
pool in case of a power failure for example, should such an event
occur at the same time as a ZIL provider dies, you lose the entire
pool because there is no way to recover it, so if ZIL gets put
elsewhere, that elsewhere really should be a mirror and sadly I
don't see myself affording to use 2 SSDs for my setup :)



This is  false. The ZIL is used for journalling synchronous writes. If
your ZIL is lost you will lose the data that was written to the ZIL,
but not yet written to the file system proper. Barring disk
corruption, the file system is always consistent.

-Kip


Ok, lets assume we have a dedicated ZIL on a single non-redundant
disk. This disk dies. How do you remove the dedicated ZIL from the
pool or replace it with a new one? Solaris ZFS documentation indicates
that this is possible for dedicated L2ARC - you can remove a dedicated
l2arc from a pool at any time you wish and should some IO fail on the
l2arc, the system will gracefully continue to run, reverting said IO
to be processed by the actual default built-in ZIL on the disks of the
pool. However the capability to remove dedicated ZIL or gracefully
handle the death of a non-redundant dedicated ZIL vdev does not
currently exist in Solaris/OpenSolaris at all.

That has been implemented in OpenSolaris, do a search for slog removal.

It's in a much newer zpool version than 13, though.




What I have seen more often is users getting the usage of the slog/ZIL
wrong, for instance dedicating a whole SSD or another HDD as the slog.
Your slog/ZIL only has to be big enough to handle 10 seconds of synchronous
writes before it flushes. The ZIL size recommended by Sun Microsystems is
128MB, but you may not even see that fully used for general-purpose cases.


An approach I had was to dedicate a partition on the same disk that the pool
is on and to add another ZIL vdev from another disk in the system. Results of
this imply that if the off-disk ZIL dies for some stupid reason, it falls back
to the one that rests on the same disk as the pool and lets you replace the
off-disk ZIL with something else.
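Expressed as a command, that kind of setup is roughly the following (a sketch
with placeholder device names, not the actual commands used; a mirrored log
vdev gives the same "one half can die and the pool keeps going" property):

   # small partition on a pool disk plus one on another disk, as a mirrored log
   zpool add data01 log mirror ad4s3 da0s1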


PS: Save your disk space and use 256MB thumb drives. You can easily get 16
of those at your local Walmart and have a priceless light show for a
romantic dinner with the wife.


 :)

-- 
 jhell
 Mon Jan 11 22:31:17 2010



ZFS on top of GELI

2010-01-10 Thread Dan Naumov
Hello list.

I am evaluating options for my new upcoming storage system, where for
various reasons the data will be stored on 2 x 2tb SATA disk in a
mirror and has to be encrypted (a 40gb Intel SSD will be used for the
system disk). Right now I am considering the options of FreeBSD with
GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
made any benchmarks regarding how much of a performance hit is caused
by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a
similar configuration is described here:
http://blog.experimentalworks.net/2008/03/setting-up-an-encrypted-zfs-with-freebsd/)?
Some direct comparisons using bonnie++ or similar, showing the number
differences of this is read/write/IOPS on top of a ZFS mirror and
this is read/write/IOPS on top of a ZFS mirror using GELI would be
nice.

I am mostly interested in benchmarks on lower end hardware, the system
is an Atom 330 which is currently using Windows 2008 server with
TrueCrypt in a non-raid configuration and with that setup, I am
getting roughly 55mb/s reads and writes when using TrueCrypt
(nonencrypted it's around 115mb/s).

Thanks.

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-10 Thread Roland Smith
On Sun, Jan 10, 2010 at 05:08:29PM +0200, Dan Naumov wrote:
 Hello list.
 
 I am evaluating options for my new upcoming storage system, where for
 various reasons the data will be stored on 2 x 2tb SATA disk in a
 mirror and has to be encrypted (a 40gb Intel SSD will be used for the
 system disk). Right now I am considering the options of FreeBSD with
 GELI+ZFS and Debian Linux with MDRAID and cryptofs. Has anyone here
 made any benchmarks regarding how much of a performance hit is caused
 by using 2 geli devices as vdevs for a ZFS mirror pool in FreeBSD (a
 similar configuration is described here:
 http://blog.experimentalworks.net/2008/03/setting-up-an-encrypted-zfs-with-freebsd/)?
 Some direct comparisons using bonnie++ or similar, showing the number
 differences of this is read/write/IOPS on top of a ZFS mirror and
 this is read/write/IOPS on top of a ZFS mirror using GELI would be
 nice.
 
 I am mostly interested in benchmarks on lower end hardware, the system
 is an Atom 330 which is currently using Windows 2008 server with
 TrueCrypt in a non-raid configuration and with that setup, I am
 getting roughly 55mb/s reads and writes when using TrueCrypt
 (nonencrypted it's around 115mb/s).

Although I cannot comment on ZFS, my $HOME partition is UFS2+geli. Reads (with
dd) of uncached big[1] files are ~70MB/s. Reading an uncached big file from a
non-encrypted UFS2 partition is ~120MB/s. Note that the vfs cache has a huge
influence here; repeating the same read will be 4 – 7 times faster!

The sysctls for ZFS caching will probably have a big impact too.

Roland

[1] several 100s of MiB.
-- 
R.F.Smith   http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)




Re: ZFS on top of GELI

2010-01-10 Thread Dan Naumov
On Sun, Jan 10, 2010 at 6:12 PM, Damian Gerow dge...@afflictions.org wrote:
 Dan Naumov wrote:
 : I am mostly interested in benchmarks on lower end hardware, the system
 : is an Atom 330 which is currently using Windows 2008 server with
 : TrueCrypt in a non-raid configuration and with that setup, I am
 : getting roughly 55mb/s reads and writes when using TrueCrypt
 : (nonencrypted it's around 115mb/s).

 I've been using GELI-backed vdevs for some time now -- since 7.2-ish
 timeframes.  I've never benchmarked it, but I was running on relatively
 low-end hardware.  A few things to take into consideration:

 1) Make sure the individual drives are encrypted -- especially if they're
   >=1TB.  This is less a performance thing and more a "make sure your
   encryption actually encrypts properly" thing.
 2) Seriously consider using the new AHCI driver.  I've been using it in a
   few places, and it's quite stable, and there is a marked performance
   improvement - 10-15% on the hardware I've got.
 3) Take a look at the VIA platform, as a replacement for the Atom.  I was
   running on an EPIA-SN 1800 (1.8GHz), and didn't have any real troubles
   with the encryption aspect of the rig (4x1TB drives).  Actually, if you
   get performance numbers privately comparing the Atom to a VIA (Nano or
   otherwise), can you post them to the list?  I'm curious to see if the
   on-chip encryption actually makes a difference.
 4) Since you're asking for benchmarks, probably best if you post the
   specific bonnie command you want run -- that way, it's tailored to your
   use-case, and you'll get consistent, comparable results.

Yes, this is what I was basically considering:

new AHCI driver => 40GB Intel SSD => UFS2 with Softupdates for the
system installation
new AHCI driver => 2 x 2TB disks, each fully encrypted with geli => 2
geli vdevs for a ZFS mirror for important data

The reason I am considering the new AHCI driver is to get NCQ support
now and TRIM support for the SSD later when it gets implemented,
although if the performance difference right now is already 10-15%,
that's a good enough reason on its own. On a semi-related note, is it
still recommended to use softupdates, or is GJournal a better choice
today?
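For completeness, the two UFS options look like this on a partition (a sketch
with a placeholder device name):

   newfs -U /dev/ada0p2                 # UFS2 with soft updates
   # or, journaled via gjournal:
   gjournal load
   gjournal label /dev/ada0p2
   newfs -J /dev/ada0p2.journal
   mount -o async /dev/ada0p2.journal /mnt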

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-10 Thread Dan Naumov
On Sun, Jan 10, 2010 at 8:46 PM, Damian Gerow dge...@afflictions.org wrote:
 Dan Naumov wrote:
 : Yes, this is what I was basically considering:
 :
  : new AHCI driver => 40GB Intel SSD => UFS2 with Softupdates for the
  : system installation
  : new AHCI driver => 2 x 2TB disks, each fully encrypted with geli => 2
  : geli vdevs for a ZFS mirror for important data

 If performance is an issue, you may want to consider carving off a partition
 on that SSD, geli-fying it, and using it as a ZIL device.  You'll probably
 see a marked performance improvement with such a setup.

That is true, but using a single device for a dedicated ZIL is a huge
no-no. Considering it's an intent log, used to reconstruct the pool in
the case of a power failure for example, should such an event occur at
the same time as the ZIL provider dies, you lose the entire pool
because there is no way to recover it. So if the ZIL gets put
elsewhere, that elsewhere really should be a mirror, and sadly I don't
see myself affording to use 2 SSDs for my setup :)

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ronald Klop
On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov dan.nau...@gmail.com  
wrote:



Now that I have evaluated the numbers and my needs a bit, I am really
confused about what appropriate course of action for me would be.

1) Use ZFS without GELI and hope that zfs-crypto get implemented in
Solaris and ported to FreeBSD soon and that when it does, it won't
come with such a dramatic performance decrease as GELI/ZFS seems to
result in.
2) Go ahead with the original plan of using GELI/ZFS and grind my
teeth at the 24 MB/s read speed off a single disk.


3) Add extra disks. It will speed up reading. One extra disk will about
double the read speed.


Ronald.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Dan Naumov
I am pretty sure that adding more disks wouldn't solve anything in
this case, only either using a faster CPU or a faster crypto system.
When you are capable of 70 MB/s reads on a single unencrypted disk, but
only 24 MB/s reads off the same disk while encrypted, your disk speed
isn't the problem.

- Dan Naumov



On Sun, May 31, 2009 at 5:29 PM, Ronald Klop
ronald-freeb...@klop.yi.org wrote:
 On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov dan.nau...@gmail.com wrote:

 Now that I have evaluated the numbers and my needs a bit, I am really
 confused about what appropriate course of action for me would be.

 1) Use ZFS without GELI and hope that zfs-crypto get implemented in
 Solaris and ported to FreeBSD soon and that when it does, it won't
 come with such a dramatic performance decrease as GELI/ZFS seems to
 result in.
 2) Go ahead with the original plan of using GELI/ZFS and grind my
 teeth at the 24 MB/s read speed off a single disk.

 3) Add extra disks. It will speed up reading. One disk extra will about
 double the read speed.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Fri, 29.05.2009 at 12:47:38 +0200, Morgan Wesström wrote:
 You can benchmark the encryption subsystem only, like this:
 
 # kldload geom_zero
 # geli onetime -s 4096 -l 256 gzero
 # sysctl kern.geom.zero.clear=0
 # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
 
 512+0 records in
 512+0 records out
 536870912 bytes transferred in 11.861871 secs (45260222 bytes/sec)
 
 The benchmark will use 256-bit AES and the numbers are from my Core2 Duo
 Celeron E1200 1,6GHz. My old trusty Pentium III 933MHz performs at
 13MB/s on that test. Both machines are recompiled with CPUTYPE=core2 and
 CPUTYPE=pentium3 respectively but unfortunately I have no benchmarks on
 how they perform without the CPU optimizations.

Hi Morgan,

thanks for the nice benchmarking trick. I tried this on two ~7.2
systems:

CPU: Intel Pentium III (996.77-MHz 686-class CPU)
- 14.3MB/s

CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
- 47.5MB/s

Reading a big file from the pool of this P4 results in a 27.6MB/s net
transfer rate (single 7200 rpm SATA disk).

I would be *very* interested in numbers from the dual core Atom, both
with 2 CPUs and with 1 active core only. I think that having dual core
is a must for this setup, so you can use 2 GELI threads and have the ZFS
threads on top of that to spread the load.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Morgan Wesström
 Hi Morgan,
 
 thanks for the nice benchmarking trick. I tried this on two ~7.2
 systems:
 
 CPU: Intel Pentium III (996.77-MHz 686-class CPU)
 - 14.3MB/s
 
 CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
 - 47.5MB/s
 
 Reading a big file from the pool of this P4 results in 27.6MB/s netto
 transfer rate (single 7200 rpm SATA disk).
 
 I would be *very* interested in numbers from the dual core Atom, both
 with 2 CPUs and with 1 active core only. I think that having dual core
 is a must for this setup, so you can use 2 GELI threads and have the ZFS
 threads on top of that to spread the load.
 
 Cheers,
 Ulrich Spörlein

Credit to pjd@ actually. Picked up the trick myself from freebsd-geom
some time ago :-)
http://lists.freebsd.org/pipermail/freebsd-geom/2007-July/002498.html

My Eee PC with a single core N270 is being repaired atm, it suffered a
bad BIOS flash so I can't help you with benchmarks until it's back. I
don't have access to another Atom CPU unfortunately.

/Morgan


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Fri, 29.05.2009 at 11:19:44 +0300, Dan Naumov wrote:
 Also, feel free to criticize my planned filesystem layout for the
 first disk of this system; the idea behind /mnt/sysbackup is to take a
 snapshot of the FreeBSD installation and its settings before doing
 potentially hazardous things like upgrading to a new -RELEASE:
 
 ad1s1 (freebsd system slice)
   ad1s1a =  128bit Blowfish ad1s1a.eli 4GB swap
   ad1s1b 128GB ufs2+s /
   ad1s1c 128GB ufs2+s noauto /mnt/sysbackup
 
 ad1s2 =  128bit Blowfish ad1s2.eli
   zpool
   /home
   /mnt/data1

Hi Dan,

everybody has different needs, but what exactly are you doing with 128GB
of / ? What I did is the following:

2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

Filesystem             1024-blocks      Used    Avail Capacity  Mounted on
/dev/ad0a                   507630    139740   327280    30%    /
/dev/ad0d                  1453102   1292296    44558    97%    /usr
/dev/md0                    253678        16   233368     0%    /tmp

/usr is quite crowded, but I just need to clean up some ports again.
/var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
pool. If /usr turns out to be too small, I can also move /usr/local
there. That way booting and single user involves trusty old UFS only.

I also do regular dumps from the UFS filesystems to the ZFS tank, but
there's really no sacred data under / or /usr that I would miss if the
system crashed (all configuration changes are tracked using mercurial).
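Those dumps are plain dump(8) piped onto the pool, something along the lines
of the following (a sketch; the target paths are placeholders):

   dump -0Lauf - / | gzip > /tank/dumps/root.dump.gz
   dump -0Lauf - /usr | gzip > /tank/dumps/usr.dump.gz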

Anyway, my point is to use the full disks for GELI+ZFS whenever
possible. This makes it easier to replace faulty disks or grow ZFS
pools. The FreeBSD base system, I would put somewhere else.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Dan Naumov
Hi

Since you are suggesting 2 x 8GB USB for a root partition, what is
your experience with read/write speed and lifetime expectation of
modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

- Dan Naumov



 Hi Dan,

 everybody has different needs, but what exactly are you doing with 128GB
 of / ? What I did is the following:

 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
 CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

 Filesystem             1024-blocks      Used    Avail Capacity  Mounted on
 /dev/ad0a                   507630    139740   327280    30%    /
 /dev/ad0d                  1453102   1292296    44558    97%    /usr
 /dev/md0                    253678        16   233368     0%    /tmp

 /usr is quite crowded, but I just need to clean up some ports again.
 /var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
 pool. If /usr turns out to be to small, I can also move /usr/local
 there. That way booting and single user involves trusty old UFS only.

 I also do regular dumps from the UFS filesystems to the ZFS tank, but
 there's really no sacred data under / or /usr that I would miss if the
 system crashed (all configuration changes are tracked using mercurial).

 Anyway, my point is to use the full disks for GELI+ZFS whenever
 possible. This makes it more easy to replace faulty disks or grow ZFS
 pools. The FreeBSD base system, I would put somewhere else.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Pertti Kosunen

Ulrich Spörlein wrote:

2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)


Many have an internal USB header.

http://www.logicsupply.com/products/afap_082usb


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Freddie Cash
On Sun, May 31, 2009 at 9:05 AM, Ulrich Spörlein u...@spoerlein.net wrote:
 everybody has different needs, but what exactly are you doing with 128GB
 of / ? What I did is the following:

 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
 CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

You can get CF-to-SATA adapters.  We've used CF-to-IDE quite
successfully in a pair of storage servers.  We have a couple of the
SATA adapters on order to test with as our new motherboards only have
1 IDE controller, and doing mirroring across master/slave of the same
channel sucks.

 /usr is quite crowded, but I just need to clean up some ports again.
 /var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
 pool. If /usr turns out to be to small, I can also move /usr/local
 there. That way booting and single user involves trusty old UFS only.

That's what we do as well, but with /usr/local on ZFS, leaving just /
and /usr on UFS.

-- 
Freddie Cash
fjwc...@gmail.com


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Sun, 31.05.2009 at 19:28:51 +0300, Dan Naumov wrote:
 Hi
 
 Since you are suggesting 2 x 8GB USB for a root partition, what is
 your experience with read/write speed and lifetime expectation of
 modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

Well, my current setup is using an old 2GB CF card, so read/write speeds
suck (14 and 7 MB/s, respectively, IIRC), but then again, there are not
many actual read/writes on / or /usr for my setup anyway.

The 2x 8GB USB sticks I would of course use to gmirror the setup,
although I have been told that this is rather excessive. Modern flash
media should cope with enough write cycles to get you through a decade.
With /var being on GELI+ZFS this point is even more moot, IMHO.
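A sketch of such a gmirror setup, with placeholder device names for the two
sticks:

   gmirror load
   gmirror label -v -b round-robin usbroot da0 da1
   newfs -U /dev/mirror/usbroot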

A recent 8GB Sandisk U3 stick of mine manages to read/write ~25MB/s
(working from memory here), so this is pretty much the maximum USB 2.0
is giving you.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/


ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Dan Naumov
Is there anyone here using ZFS on top of a GELI-encrypted provider on
hardware which could be considered slow by today's standards? What
are the performance implications of doing this? The reason I am asking
is that I am in the process of building a small home NAS/webserver,
starting with a single disk (intending to expand as the need arises)
on the following hardware:
http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This
is essentially: an Intel Atom 330 1.6 GHz dual-core on an Intel
D945GCLF2-based board with 2GB RAM; the first disk I am going to use
is a 1.5TB Western Digital Caviar Green.

I had someone run a few openssl crypto benchmarks (to unscientifically
assess the maximum possible GELI performance) on a machine running
FreeBSD on nearly the same hardware and it seems the CPU would become
the bottleneck at roughly 200 MB/s throughput when using 128 bit
Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256.
This, on its own, is definitely enough for my needs (especially in
the case of using Blowfish), but what are the performance implications
of using ZFS on top of a GELI-encrypted provider?
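(Those openssl numbers are the kind you get from OpenSSL's built-in benchmark,
e.g. the commands below; being userland-only they are just a rough ceiling
estimate for in-kernel GELI:)

   openssl speed bf-cbc              # Blowfish-CBC
   openssl speed -evp aes-128-cbc    # AES-128
   openssl speed -evp aes-256-cbc    # AES-256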

Also, feel free to criticize my planned filesystem layout for the
first disk of this system; the idea behind /mnt/sysbackup is to take a
snapshot of the FreeBSD installation and its settings before doing
potentially hazardous things like upgrading to a new -RELEASE:

ad1s1 (freebsd system slice)
ad1s1a =  128bit Blowfish ad1s1a.eli 4GB swap
ad1s1b 128GB ufs2+s /
ad1s1c 128GB ufs2+s noauto /mnt/sysbackup

ad1s2 =  128bit Blowfish ad1s2.eli
zpool
/home
/mnt/data1
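Spelled out as commands, the encrypted data slice would be created roughly
like this (a sketch; the geli flags are my guess at matching the 128-bit
Blowfish plan, and "tank" is a placeholder pool name):

   geli init -e blowfish -l 128 -s 4096 /dev/ad1s2
   geli attach /dev/ad1s2
   zpool create tank ad1s2.eli
   zfs create tank/home

   # encrypted swap can simply be listed as /dev/ad1s1a.eli in /etc/fstab,
   # with geli_swap_flags in rc.conf selecting the cipher (see geli(8))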


Thanks for your input.

- Dan Naumov


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Pete French
 Is there anyone here using ZFS on top of a GELI-encrypted provider on
 hardware which could be considered slow by today's standards? What

I run a mirrored zpool on top of a pair of 1TB SATA drives - they are
only 7200 rpm so pretty dog slow as far as I'm concerned. The
CPU is a dual core Athlon 6400, and I am running amd64. The performance
is not brilliant - about 25 meg/second writing a file, and about
53 meg/second reading it.

It's a bit disappointing really - that's a lot slower than I expected
when I built it, especially the write speed.

-pete.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Dan Naumov
Ouch, that does indeed sound quite slow, especially considering that
a dual core Athlon 6400 is a pretty fast CPU. Have you done any
comparison benchmarks between UFS2 with Softupdates and ZFS on the
same system? What are the read/write numbers like? Have you done any
investigating regarding possible causes of ZFS working so slowly on your
system? Just wondering if it's an ATA chipset problem, a drive problem,
a ZFS problem or what...

- Dan Naumov




On Fri, May 29, 2009 at 12:10 PM, Pete French
petefre...@ticketswitch.com wrote:
 Is there anyone here using ZFS on top of a GELI-encrypted provider on
 hardware which could be considered slow by today's standards? What

 I run a mirrored zpool on top of a pair of 1TB SATA drives - they are
 only 7200 rpm so pretty dog slow as far as I'm concerned. The
 CPOU is a dual core Athlon 6400, and I am running amd64. The performance
 is not brilliant - about 25 meg/second writing a file, and about
 53 meg/second reading it.

 It's a bit dissapointing really - thats a lot slower that I expected
 when I built it, especially the write speed.

 -pete.



Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Philipp Wuensche
Dan Naumov wrote:
 Is there anyone here using ZFS on top of a GELI-encrypted provider on
 hardware which could be considered slow by today's standards? What
 are the performance implications of doing this? The reason I am asking
 is that I am in the process of building a small home NAS/webserver,
 starting with a single disk (intending to expand as the need arises)
 on the following hardware:
 http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html This
 is essentially: Intel Arom 330 1.6 Ghz dualcore on an Intel
 D945GCLF2-based board with 2GB Ram, the first disk I am going to use
 is a 1.5TB Western Digital Caviar Green.
 
 I had someone run a few openssl crypto benchmarks (to unscientifically
 assess the maximum possible GELI performance) on a machine running
 FreeBSD on nearly the same hardware and it seems the CPU would become
 the bottleneck at roughly 200 MB/s throughput when using 128 bit
 Blowfish, 70 MB/s when using AES128 and 55 MB/s when using AES256.
 This, on it's own is definately enough for my neeeds (especially in
 the case of using Blowfish), but what are the performance implications
 of using ZFS on top of a GELI-encrypted provider?

I have a zpool mirror on top of two 128bit GELI blowfish devices with
sector size 4096; my system is a D945GCLF2 with 2GB RAM and an Intel Atom
330 1.6 GHz dual-core. The two disks are a WDC WD10EADS and a WD10EACS
(5400rpm). The system is running 8.0-CURRENT amd64. I have set
kern.geom.eli.threads=3.

This is far from a real benchmark, but:

Using dd with bs=4m I get 35 MByte/s writing to the mirror (writing 35
MByte/s to each disk) and 48 MByte/s reading from the mirror (reading
with 24 MByte/s from each disk).
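For anyone reproducing this: kern.geom.eli.threads is a loader tunable, and
the dd runs were presumably of this shape (a sketch, file names are
placeholders):

   # /boot/loader.conf
   kern.geom.eli.threads=3

   dd if=/dev/zero of=/tank/testfile bs=4m count=2000     # sequential write
   dd if=/tank/testfile of=/dev/null bs=4m                # sequential read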

My experience is that ZFS is not much of an overhead and will not
degrade the performance as much as the encryption, so GELI is the
limiting factor. Using ZFS without GELI on this system gives way higher
read and write numbers, like reading with 70 MByte/s per disk etc.

greetings,
philipp




Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Pete French
 Ouch, that does indeed sounds quite slow, especially considering that
 a dual core Athlon 6400 is pretty fast CPU. Have you done any
 comparison benchmarks between UFS2 with Softupdates and ZFS on the

Not at all - but, now you have got me curious, I just went to
a completely different system (four-core Opteron box, no encryption,
four 15k SCSI drives and a zpool of 2 mirrored pairs), and that
also gave me about 25 meg/second!

I am using the wildly unscientific "how long to copy a file"
method to benchmark here, with the file residing on a different
drive, which can provide it at 80 meg/second.

 same system? What are the read/write numbers like? Have you done any
 investigating regarding possible causes of ZFS working so slow on your
 system? Just wondering if its an ATA chipset problem, a drive problem,
 a ZFS problem or what...

I have no idea, and now I think I need to look into it! Certainly
I should be getting better than 25 meg/sec out of the 15K SCSIs.

-pete.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Dan Naumov
Thank you for your numbers, now I know what to expect when I get my
new machine, since our system specs look identical.

So basically on this system:

unencrypted ZFS read: ~70 MB/s per disk

128bit Blowfish GELI/ZFS write: 35 MB/s per disk
128bit Blowfish GELI/ZFS read: 24 MB/s per disk

I am curious what part of GELI is so inefficient that it causes such a
dramatic slowdown. In comparison, my home desktop is a

C2D E6600 2,4 Ghz, 4GB RAM, Intel DP35DP, 1 x 1,5TB Seagate Barracuda
- Windows Vista x64 SP1

Read/Write on an unencrypted NTFS partition: ~85 MB/s
Read/Write on a Truecrypt AES-encrypted NTFS partition: ~65 MB/s

As you can see, the performance drop is noticeable, but not anywhere
nearly as dramatic.


- Dan Naumov


 I have a zpool mirror on top of two 128bit GELI blowfish devices with
 Sectorsize 4096, my system is a D945GCLF2 with 2GB RAM and a Intel Arom
 330 1.6 Ghz dualcore. The two disks are a WDC WD10EADS and a WD10EACS
 (5400rpm). The system is running 8.0-CURRENT amd64. I have set
 kern.geom.eli.threads=3.

 This is far from a real benchmarks but:

 Using dd with bs=4m I get 35 MByte/s writing to the mirror (writing 35
 MByte/s to each disk) and 48 MByte/s reading from the mirror (reading
 with 24 MByte/s from each disk).

 My experience is that ZFS is not much of an overhead and will not
 degrade the performance as much as the encryption, so GELI is the
 limiting factor. Using ZFS without GELI on this system gives way higher
 read and write numbers, like reading with 70 MByte/s per disk etc.

 greetings,
 philipp


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Morgan Wesström


Dan Naumov wrote:
 Thank you for your numbers, now I know what to expect when I get my
 new machine, since our system specs look identical.
 
 So basically on this system:
 
 unencrypted ZFS read: ~70 MB/s per disk
 
 128bit Blowfish GELI/ZFS write: 35 MB/s per disk
 128bit Blowfish GELI/ZFS read: 24 MB/s per disk
 
 I am curious what part of GELI is so inefficient to cause such a
 dramatic slowdown. In comparison, my home desktop is a
 


You can benchmark the encryption subsystem only, like this:

# kldload geom_zero
# geli onetime -s 4096 -l 256 gzero
# sysctl kern.geom.zero.clear=0
# dd if=/dev/gzero.eli of=/dev/null bs=1M count=512

512+0 records in
512+0 records out
536870912 bytes transferred in 11.861871 secs (45260222 bytes/sec)

The benchmark will use 256-bit AES and the numbers are from my Core2 Duo
Celeron E1200 1.6GHz. My old trusty Pentium III 933MHz performs at
13MB/s on that test. Both machines are recompiled with CPUTYPE=core2 and
CPUTYPE=pentium3 respectively, but unfortunately I have no benchmarks on
how they perform without the CPU optimizations.
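The CPUTYPE optimization mentioned is just a make.conf setting followed by a
world/kernel rebuild (a sketch):

   # /etc/make.conf
   CPUTYPE?=core2

   cd /usr/src && make buildworld buildkernel KERNCONF=GENERIC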

I'm in the same spot as you, planning to build a home NAS. I have
settled for graid5/geli but haven't yet decided if I would benefit most
from a dual core CPU at 3+ GHz or a quad core at 2.6. Budget is a concern...

Regards
Morgan


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Dan Naumov
Now that I have evaluated the numbers and my needs a bit, I am really
confused about what appropriate course of action for me would be.

1) Use ZFS without GELI and hope that zfs-crypto get implemented in
Solaris and ported to FreeBSD soon and that when it does, it won't
come with such a dramatic performance decrease as GELI/ZFS seems to
result in.
2) Go ahead with the original plan of using GELI/ZFS and grind my
teeth at the 24 MB/s read speed off a single disk.


 So basically on this system:

 unencrypted ZFS read: ~70 MB/s per disk

 128bit Blowfish GELI/ZFS write: 35 MB/s per disk
 128bit Blowfish GELI/ZFS read: 24 MB/s per disk


 I'm in the same spot as you, planning to build a home NAS. I have
 settled for graid5/geli but haven't yet decided if I would benefit most
 from a dual core CPU at 3+ GHz or a quad core at 2.6. Budget is a concern...

Our difference is that my hardware is already ordered and Intel Atom
330 + D945GCLF2 + 2GB ram is what it's going to have :)


- Dan Naumov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Emil Mikulic
On Fri, May 29, 2009 at 12:47:38PM +0200, Morgan Wesström wrote:
 You can benchmark the encryption subsytem only, like this:
 
 # kldload geom_zero
 # geli onetime -s 4096 -l 256 gzero
 # sysctl kern.geom.zero.clear=0
 # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512

I don't mean to take this off-topic wrt -stable but just
for fun, I built a -current kernel with dtrace and did:

geli onetime gzero
./hotkernel 
dd if=/dev/zero of=/dev/gzero.eli bs=1m count=1024
killall dtrace
geli detach gzero

The hot spots:
[snip stuff under 0.3%]
kernel`g_eli_crypto_run              50   0.3%
kernel`_mtx_assert                   56   0.3%
kernel`SHA256_Final                  58   0.3%
kernel`rijndael_encrypt              72   0.4%
kernel`_mtx_unlock_flags             74   0.4%
kernel`rijndael128_encrypt           74   0.4%
kernel`copyout                       92   0.5%
kernel`_mtx_lock_flags               93   0.5%
kernel`bzero                        114   0.6%
kernel`spinlock_exit                240   1.3%
kernel`bcopy                        325   1.7%
kernel`sched_idletd                 810   4.3%
kernel`swcr_process                1126   6.0%
kernel`SHA256_Transform            1178   6.3%
kernel`rijndaelEncrypt             5574  29.7%
kernel`acpi_cpu_c1                 8383  44.6%

I had to build crypto and geom_eli into the kernel to get proper
symbols.

References:
  http://wiki.freebsd.org/DTrace
  http://www.brendangregg.com/DTrace/hotkernel

--Emil


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Ivan Voras
Emil Mikulic wrote:
 On Fri, May 29, 2009 at 12:47:38PM +0200, Morgan Wesström wrote:
 You can benchmark the encryption subsytem only, like this:

 # kldload geom_zero
 # geli onetime -s 4096 -l 256 gzero
 # sysctl kern.geom.zero.clear=0
 # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
 
 I don't mean to take this off-topic wrt -stable but just
 for fun, I built a -current kernel with dtrace and did:
 
   geli onetime gzero
   ./hotkernel 
   dd if=/dev/zero of=/dev/gzero.eli bs=1m count=1024
   killall dtrace
   geli detach gzero
 
 The hot spots:
 [snip stuff under 0.3%]
 kernel`g_eli_crypto_run              50   0.3%
 kernel`_mtx_assert                   56   0.3%
 kernel`SHA256_Final                  58   0.3%
 kernel`rijndael_encrypt              72   0.4%
 kernel`_mtx_unlock_flags             74   0.4%
 kernel`rijndael128_encrypt           74   0.4%
 kernel`copyout                       92   0.5%
 kernel`_mtx_lock_flags               93   0.5%
 kernel`bzero                        114   0.6%
 kernel`spinlock_exit                240   1.3%
 kernel`bcopy                        325   1.7%
 kernel`sched_idletd                 810   4.3%
 kernel`swcr_process                1126   6.0%
 kernel`SHA256_Transform            1178   6.3%
 kernel`rijndaelEncrypt             5574  29.7%
 kernel`acpi_cpu_c1                 8383  44.6%

Hi,

What is the meaning of counts? Number of calls made or time?





Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Vlad Galu
On Fri, May 29, 2009 at 2:49 PM, Ivan Voras ivo...@freebsd.org wrote:

 Hi,

 What is the meaning of counts? Number of calls made or time?



The former.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Emil Mikulic
On Fri, May 29, 2009 at 01:49:54PM +0200, Ivan Voras wrote:
 Emil Mikulic wrote:
[...]
  kernel`SHA256_Transform  1178   6.3%
  kernel`rijndaelEncrypt   5574  29.7%
  kernel`acpi_cpu_c1   8383  44.6%
 
 Hi,
 
 What is the meaning of counts? Number of calls made or time?

Time.

Sorry, I inadvertently cut off the headings: function, count, percent

As I understand it, hotkernel uses statistical sampling at 1001 Hz, so
the percentage is an approximation of how much time is spent in each
function, based on how many profiler samples ended up in each function.

--Emil


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Dan Naumov
Pardon my ignorance, but what do these numbers mean and what
information is deducible from them?

- Dan Naumov



 I don't mean to take this off-topic wrt -stable but just
 for fun, I built a -current kernel with dtrace and did:

        geli onetime gzero
        ./hotkernel 
        dd if=/dev/zero of=/dev/gzero.eli bs=1m count=1024
        killall dtrace
        geli detach gzero

 The hot spots:
 [snip stuff under 0.3%]
 kernel`g_eli_crypto_run                                    50   0.3%
 kernel`_mtx_assert                                         56   0.3%
 kernel`SHA256_Final                                        58   0.3%
 kernel`rijndael_encrypt                                    72   0.4%
 kernel`_mtx_unlock_flags                                   74   0.4%
 kernel`rijndael128_encrypt                                 74   0.4%
 kernel`copyout                                             92   0.5%
 kernel`_mtx_lock_flags                                     93   0.5%
 kernel`bzero                                              114   0.6%
 kernel`spinlock_exit                                      240   1.3%
 kernel`bcopy                                              325   1.7%
 kernel`sched_idletd                                       810   4.3%
 kernel`swcr_process                                      1126   6.0%
 kernel`SHA256_Transform                                  1178   6.3%
 kernel`rijndaelEncrypt                                   5574  29.7%
 kernel`acpi_cpu_c1                                       8383  44.6%

 I had to build crypto and geom_eli into the kernel to get proper
 symbols.

 References:
  http://wiki.freebsd.org/DTrace
  http://www.brendangregg.com/DTrace/hotkernel

 --Emil

Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-29 Thread Chris Dillon

Quoting Dan Naumov dan.nau...@gmail.com:


Ouch, that does indeed sounds quite slow, especially considering that
a dual core Athlon 6400 is pretty fast CPU. Have you done any
comparison benchmarks between UFS2 with Softupdates and ZFS on the
same system? What are the read/write numbers like? Have you done any
investigating regarding possible causes of ZFS working so slow on your
system? Just wondering if its an ATA chipset problem, a drive problem,
a ZFS problem or what...


I recently built a home NAS box on an Intel Atom 330 system (MSI Wind  
Nettop 100) with 2GB RAM and two WD Green 1TB (WD10EADS) drives in a  
mirrored ZFS pool using a FreeNAS 0.7 64-bit daily build.  I only see  
25-50MB/sec via Samba from my XP64 client, but in my experience SMB  
always seems to have horrible performance no matter what kind of  
servers and clients are used.  However, dd shows a different set of  
figures:


nas:/mnt/tank/scratch# dd if=/dev/zero of=zero.file bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 61.532492 secs (68164052 bytes/sec)

nas:/mnt/tank/scratch# dd if=zero.file of=/dev/null bs=1M
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 33.347020 secs (125777476 bytes/sec)

68MB/sec writes and 125MB/sec reads... very impressive for such a  
low-powered box, I think, and yes the drives are mirrored, not striped!


