Re: [CentOS] ZFS @ centOS

2011-04-07 Thread Ross Walker
On Apr 5, 2011, at 4:19 PM, rai...@ultra-secure.de wrote:

>> But ...
>> I've been reading about some of the issues with ZFS performance and have
>> discovered that it needs a *lot* of RAM to support decent caching ...
>> the recommendation is for a GByte of RAM per TByte of storage just for
>> the metadata, which can add up.  Maybe cache memory starvation is one
>> reason why so many disappointing test results are showing up.
>
> Yes, it uses most of any available RAM as cache.
> Newer implementations can use SSDs as a kind of second-level cache (L2ARC).
> Also, certain on-disk logs can be written out to NVRAM devices directly,
> speeding things up even more.
> Compared with the cache RAM in RAID controllers, RAM for servers is dirt cheap.
>
> The philosophy is: why put tiny, expensive amounts of RAM into the
> RAID controller and have it try to guess what should be cached and what
> not - when we can add RAM to the server directly at a fraction of the
> cost and let the OS handle _everything_ short of moving the disk heads
> over the platters?

The problem is the volatility of system RAM.

Those very expensive RAID write-back caches are usually battery-backed (if 
they aren't, don't buy them). It's the battery and the technology to recover 
the write cache after a failure that make them so expensive.

Look at the cost of capacitor-backed SSDs vs. non-capacitor-backed SSDs.

When I ran a ZFS NFS server I used a RAID controller with battery-backed 
write-back cache and the ZIL spread across the pool. Even with each disk set 
up as a single-disk RAID0 unit, it performed just as well as a similar pool 
on a standard SAS controller with a good SSD log device - but the SSD setup 
cost twice as much, and unlike the SSD, the controller's NVRAM doesn't burn 
out or need trimming.

> IMO, it's a brilliant concept.
>
> Do you know if there is much of a performance penalty with KVM/VBox,
> compared to Solaris Zones?

Solaris Zones isn't hardware virtualization; it's a container technology, 
like FreeBSD's jail or OpenVZ, but with resource management.
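
For a flavor of the workflow (the zone name and zonepath below are made up, 
just to illustrate - check zonecfg(1M) for the exact syntax on your release):

  # Define a minimal zone:
  zonecfg -z webzone "create; set zonepath=/zones/webzone"

  # Install it, boot it, and log in:
  zoneadm -z webzone install
  zoneadm -z webzone boot
  zlogin webzone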

-Ross



Re: [CentOS] ZFS @ centOS

2011-04-06 Thread Tracy Reed
On Sat, Apr 02, 2011 at 02:13:01PM -0700, John R Pierce spake thusly:
> ZFS isn't GPL, therefore it can't be integrated into the kernel where a
> file system belongs, and so it's pretty much relegated to user space

It can be patched in by the end user. So if someone were to distribute a 
patch that could be dropped into the current kernel SRPM, plus a small change 
to the SPEC file to apply the patch during the RPM build, the user would be 
all set.
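
Roughly, the SPEC changes would look like this (the patch name and number 
here are hypothetical - the real kernel spec has its own patch-numbering and 
patch-application conventions):

  # In SPECS/kernel.spec, declare the patch alongside the existing ones:
  Patch90000: zfs-linux.patch

  # ...and apply it in the %prep section:
  %patch90000 -p1

  # Then rebuild the packages from the modified SRPM:
  rpmbuild -ba SPECS/kernel.spec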

-- 
Tracy Reed
http://tracyreed.org




Re: [CentOS] ZFS @ centOS

2011-04-05 Thread rainer
> On 4/2/2011 2:54 PM, Dawid Horacio Golebiewski wrote:
> You might be asking why I didn't choose to make a ~19 TB RAID-5 volume
> for the native 3ware RAID test

That is really a no-brainer.
In the time it takes to rebuild such a RAID, another disk might just
fail, and the R in RAID goes down the toilet. Your ~19 TB RAID5 just
got turned into 25 kg of scrap metal.

As for ZFS - we're using it on FreeBSD, with mixed results.
The truth is, you've got to follow the development very closely and work
with the developers (via the mailing lists), potentially testing
patches/backports from -CURRENT - or tracking -CURRENT from the start.
It works much better on Solaris.
Frankly, I don't know why people want to do this ZFS-on-Linux thing.
It works perfectly well on Solaris, which runs most stuff that runs on
Linux just as well.
I wouldn't try to run Linux binaries on Solaris with lxrun, either.



Re: [CentOS] ZFS @ centOS

2011-04-05 Thread Chuck Munro

On 04/05/2011 09:00 AM, rai...@ultra-secure.de wrote:

> That is really a no-brainer.
> In the time it takes to rebuild such a RAID, another disk might just
> fail, and the R in RAID goes down the toilet. Your ~19 TB RAID5 just
> got turned into 25 kg of scrap metal.
>
> As for ZFS - we're using it on FreeBSD, with mixed results.
> The truth is, you've got to follow the development very closely and work
> with the developers (via the mailing lists), potentially testing
> patches/backports from -CURRENT - or tracking -CURRENT from the start.
> It works much better on Solaris.
> Frankly, I don't know why people want to do this ZFS-on-Linux thing.
> It works perfectly well on Solaris, which runs most stuff that runs on
> Linux just as well.
> I wouldn't try to run Linux binaries on Solaris with lxrun, either.


During my current work building a RAID-6 VM host system (currently 
testing with SL-6, later CentOS-6), I've had a question rolling around in 
the back of my mind: should I consider building the host with OpenSolaris 
(or the OpenIndiana fork) and ZFS RAID-Z2, which I had heard performs 
somewhat better on Solaris?  I'd then run CentOS guest OS instances 
under VirtualBox.

But ...
I've been reading about some of the issues with ZFS performance and have 
discovered that it needs a *lot* of RAM to support decent caching ... 
the recommendation is for a GByte of RAM per TByte of storage just for 
the metadata, which can add up.  Maybe cache memory starvation is one 
reason why so many disappointing test results are showing up.

Chuck


Re: [CentOS] ZFS @ centOS

2011-04-05 Thread rainer


> But ...
> I've been reading about some of the issues with ZFS performance and have
> discovered that it needs a *lot* of RAM to support decent caching ...
> the recommendation is for a GByte of RAM per TByte of storage just for
> the metadata, which can add up.  Maybe cache memory starvation is one
> reason why so many disappointing test results are showing up.

Yes, it uses most of any available RAM as cache.
Newer implementations can use SSDs as a kind of second-level cache (L2ARC).
Also, certain on-disk logs can be written out to NVRAM devices directly,
speeding things up even more.
Compared with the cache RAM in RAID controllers, RAM for servers is dirt cheap.

The philosophy is: why put tiny, expensive amounts of RAM into the
RAID controller and have it try to guess what should be cached and what
not - when we can add RAM to the server directly at a fraction of the
cost and let the OS handle _everything_ short of moving the disk heads
over the platters?
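
In practice, bolting those extra tiers onto an existing pool is a one-liner 
each - something like the following, with hypothetical pool and device names:

  # Add an SSD as a second-level read cache (L2ARC):
  zpool add tank cache ada2

  # Add a fast NVRAM/SSD device as a dedicated log for the ZIL:
  zpool add tank log ada3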

IMO, it's a brilliant concept.

Do you know if there is much of a performance penalty with KVM/VBox,
compared to Solaris Zones?




Re: [CentOS] ZFS @ centOS

2011-04-04 Thread Ross Walker
On Apr 2, 2011, at 5:28 PM, Dawid Horacy Golebiewski 
dawid.golebiew...@tu-harburg.de wrote:

> I pondered Solaris for some time, but as I do not intend to build the OS
> from scratch and Nexenta was too GUIfied for me, I started researching SME.
> What puzzled me is the theory versus the practice: RAIDz is the best
> solution from a theoretical standpoint (maximum features available), but
> still RAID 5, 6, 5+0, etc. are used. Why?

Raidz/2/3 isn't RAID5/6 - similar, but not the same.

In ZFS you create vdevs, which can be individual disks, mirrors, or raidz 
groups. Then you create a pool out of multiple vdevs. The vdevs should be of 
the same size and type for performance and capacity-planning reasons, but 
it's not a requirement. Currently you cannot add drives to a vdev, but you 
can add vdevs to a pool. IO written to a pool is striped round-robin across 
the vdevs, giving RAID0-type performance.
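
A quick sketch of that model (pool and device names are made up for 
illustration):

  # A pool striped across two 3-disk raidz1 vdevs:
  zpool create tank raidz1 sdb sdc sdd raidz1 sde sdf sdg

  # You can't grow an existing raidz vdev, but you can add
  # another whole vdev to the pool:
  zpool add tank raidz1 sdh sdi sdj

  # Check the resulting layout:
  zpool status tank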

-Ross
 


Re: [CentOS] ZFS @ centOS

2011-04-04 Thread Warren Young
On 4/2/2011 2:54 PM, Dawid Horacio Golebiewski wrote:
> I do want to
> use ZFS, and thus far I have only found information about the ZFS-Fuse
> implementation and unclear hints that there is another way.

Here are some benchmark numbers I came up with just a week or two ago. 
(View with fixed-width font.)


Test                             ZFS raidz1            Hardware RAID-6
-------------------------------  --------------------  ---------------------
Sequential write, per character   11.5 MB/s (15% CPU)   71.1 MB/s (97% CPU)
Sequential write, block           12.3 MB/s (1%)       297.9 MB/s (50%)
Sequential write, rewrite         11.8 MB/s (2%)       137.4 MB/s (27%)
Sequential read, per character    48.8 MB/s (63%)       72.5 MB/s (95%)
Sequential read, block           148.3 MB/s (5%)       344.3 MB/s (31%)
Random seeks                     103.0 seeks/s         279.6 seeks/s


The fact that the write speeds on the ZFS-FUSE test seem capped at ~12 
MB/s strikes me as odd.  It doesn't seem to be a FUSE bottleneck, since 
the read speeds are so much faster, but I can't think where else the 
problem could be since the hardware was identical for both tests. 
Nevertheless, it means ZFS-FUSE performed about as well as a Best Buy 
bus-powered USB drive on this hardware.  On only one test did it even 
exceed the performance of a single one of the drives in the array, and 
then not by very much.  Pitiful.

I did this test with Bonnie++ on a 3ware/LSI 9750-8i controller, with 
eight WD 3 TB disks attached.  Both tests were done with XFS on CentOS 
5.5, 32-bit.  (Yes, 32-bit.  Hard requirement for this application.) 
The base machine was a low-end server with a Core 2 Duo E7500 in it.  I 
interpret several of the results above as suggesting that the 3ware 
numbers could have been higher if the array were in a faster box.

For the ZFS configuration, I exported each disk from the 3ware BIOS as a 
separate single-disk volume, then collected them together into a single 
~19 TB raidz1 pool.  (This controller doesn't have a JBOD mode.)  I 
broke that up into three ~7 TB slices, each formatted with XFS.  I did 
the test on only one of the slices, figuring that they'd all perform 
about equally.

For the RAID-6 configuration, I used the 3ware card's hardware RAID, 
creating a single ~16 TB volume, formatted XFS.

You might be asking why I didn't choose to make a ~19 TB RAID-5 volume 
for the native 3ware RAID test to minimize the number of unnecessary 
differences.  I did that because after testing the ZFS-based system for 
about a week, we decided we'd rather have the extra redundancy than the 
capacity.  Dropping to 16.37 TB on the RAID configuration by switching 
to RAID-6 let us put almost the entire array under a single 16 TB XFS 
filesystem.  (16 TB is also the most a single filesystem can reach on a 
32-bit kernel anyway.)

Realize that this switch from single redundancy to dual is a handicap 
for the native RAID test, yet it still performed better across the board.  
Given that handicap, in-kernel ZFS might have beaten the hardware RAID 
on at least a few of the tests.

(Please don't ask me to test one of the in-kernel ZFS patches for Linux. 
  We can't delay putting this box into production any longer, and in any 
case, we're building this server for another organization, so we 
couldn't send the patched box out without violating the GPL.)

Oh, and in case anyone is thinking I somehow threw the test, realize 
that I was rooting for ZFS from the start.  I only did the benchmark 
when it so completely failed to perform under load.  ZFS is beautiful 
tech.  Too bad it doesn't play well with others.

> Phoronix
> reported that http://kqinfotech.com/ would release some form of ZFS for the
> kernel, but I have found nothing.

What a total cock-up that was.

Here we had this random company no one had ever heard of before, 
putting out a press release that they *will be releasing* something in a 
few months.

Maybe it's easy to say this six months later, while we're all sitting 
here listening to the crickets, but I called it at the time: Phoronix 
should have tossed that press release into the trash, or at least held 
off on saying anything about it until something actually shipped.  
Reporting a clearly BS press release, seriously?  Are they *trying* to 
destroy their credibility?


Re: [CentOS] ZFS @ centOS

2011-04-04 Thread John R Pierce
On 04/04/11 8:09 PM, Warren Young wrote:
> On 4/2/2011 2:54 PM, Dawid Horacio Golebiewski wrote:
>> I do want to
>> use ZFS, and thus far I have only found information about the ZFS-Fuse
>> implementation and unclear hints that there is another way.
> Here are some benchmark numbers I came up with just a week or two ago.
> (View with fixed-width font.)

try iozone and bonnie++






Re: [CentOS] ZFS @ centOS

2011-04-04 Thread Warren Young
On 4/4/2011 9:17 PM, John R Pierce wrote:

> try iozone

Maybe on the next server.  This one can't be reformatted yet again.

> bonnie++

That's what I used.  I just reformatted its results for readability.


[CentOS] ZFS @ centOS

2011-04-02 Thread Dawid Horacio Golebiewski
I have trouble finding definitive information about this. I am considering
the use of SME 7.5.1 (CentOS-based) for my server needs, but I do want to
use ZFS, and thus far I have only found information about the ZFS-Fuse
implementation and unclear hints that there is another way. Phoronix
reported that http://kqinfotech.com/ would release some form of ZFS for the
kernel, but I have found nothing.

Can someone tell me if fuse-ZFS is more trouble than it's worth?

Thanks in advance.

Dawide



Re: [CentOS] ZFS @ centOS

2011-04-02 Thread compdoc
> Can someone tell me if fuse-ZFS is more trouble than it's worth?

I've tried both fuse-ZFS and ZFS installed from the RPMs on
zfsonlinux.org, both on CentOS 5.5.

fuse-ZFS is more polished, but it cut write speeds in half on my RAID 5.

I ended up going with ext4.

SME Server is great by the way - been using it for years.





Re: [CentOS] ZFS @ centOS

2011-04-02 Thread John R Pierce
On 04/02/11 1:54 PM, Dawid Horacio Golebiewski wrote:
> I have trouble finding definitive information about this. I am considering
> the use of SME 7.5.1 (CentOS-based) for my server needs, but I do want to
> use ZFS, and thus far I have only found information about the ZFS-Fuse
> implementation and unclear hints that there is another way. Phoronix
> reported that http://kqinfotech.com/ would release some form of ZFS for the
> kernel, but I have found nothing.
>
> Can someone tell me if fuse-ZFS is more trouble than it's worth?

ZFS isn't GPL, therefore it can't be integrated into the kernel where a 
file system belongs, and so it's pretty much relegated to user space 
(FUSE) - and it's just not very well supported on Linux.

If you really want to use ZFS, I'd suggest using Solaris or one of its 
derivatives (OpenIndiana, etc.), where it's native.






Re: [CentOS] ZFS @ centOS

2011-04-02 Thread Dawid Horacy Golebiewski
I pondered Solaris for some time, but as I do not intend to build the OS 
from scratch and Nexenta was too GUIfied for me, I started researching SME.
What puzzled me is the theory versus the practice: RAIDz is the best 
solution from a theoretical standpoint (maximum features available), but 
still RAID 5, 6, 5+0, etc. are used. Why?

OpenSolaris supposedly stopped last February.


John R Pierce wrote:
> On 04/02/11 1:54 PM, Dawid Horacio Golebiewski wrote:
>> I have trouble finding definitive information about this. I am considering
>> the use of SME 7.5.1 (CentOS-based) for my server needs, but I do want to
>> use ZFS, and thus far I have only found information about the ZFS-Fuse
>> implementation and unclear hints that there is another way. Phoronix
>> reported that http://kqinfotech.com/ would release some form of ZFS for the
>> kernel, but I have found nothing.
>>
>> Can someone tell me if fuse-ZFS is more trouble than it's worth?
>
> ZFS isn't GPL, therefore it can't be integrated into the kernel where a
> file system belongs, and so it's pretty much relegated to user space
> (FUSE) - and it's just not very well supported on Linux.
>
> If you really want to use ZFS, I'd suggest using Solaris or one of its
> derivatives (OpenIndiana, etc.), where it's native.



Re: [CentOS] ZFS @ centOS

2011-04-02 Thread John R Pierce
On 04/02/11 2:28 PM, Dawid Horacy Golebiewski wrote:
> OpenSolaris supposedly stopped last February.

OpenSolaris has been superseded by OpenIndiana (a full distribution) and 
illumos (a community-developed and -supported kernel derived from 
OpenSolaris).



