Jeff Bonwick wrote:
| That said, I suspect I know the reason for the particular problem
| you're seeing: we currently do a bit too much vdev-level caching.
| Each vdev can have up to 10MB of cache. With 132 pools, even if
| each pool is just a single iSCSI device, that's 1.32GB of cache.
|
| We need
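The footprint described above scales linearly with pool count. As a quick back-of-envelope sketch (assuming, per the message, 10MB of vdev-level cache and a single vdev per pool):

```shell
# Back-of-envelope for the vdev cache footprint described above.
# Assumes one vdev per pool and the 10MB-per-vdev figure from the message.
pools=132
mb_per_vdev=10
echo "$((pools * mb_per_vdev)) MB of vdev-level cache"   # 1320 MB, i.e. ~1.32GB
```

With multi-vdev pools the total grows with the vdev count, not the pool count, so the same pressure can appear in a single pool with many vdevs.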
Chris Siebenmann wrote:
| Still, I'm curious -- why lots of pools? Administration would be
| simpler with a single pool containing many filesystems.
The short answer is that it is politically and administratively easier
to use (at least) one pool per storage-buying group in our environment.
Darren J Moffat [EMAIL PROTECTED] wrote:
| I think the root cause of the issue is that multiple groups are buying
| physical rather than virtual storage yet it is all being attached to a
| single system.
They're actually buying constant-sized chunks of virtual storage, which
is provided through a pool of SAN-based disk space. This
David Collier-Brown wrote:
| There are two issues here. One is the number of pools, but the other
| is the small amount of RAM in the server. To be honest, most laptops
| today come with 2 GBytes, and most servers are in the 8-16 GByte range
| (hmmm... I suppose I could look up the average size we sell...)
Chris Siebenmann [EMAIL PROTECTED] wrote:
| Speaking as a sysadmin (and a Sun customer), why on earth would I have
| to provision 8 GB+ of RAM on my NFS fileservers? I would much rather
| have that memory in the NFS client machines, where it can actually be
| put to work by user programs.
|
| (If
I have a test system with 132 (small) ZFS pools[*], as part of our
work to validate a new ZFS-based fileserver environment. In testing,
it appears that we can produce situations that will run the kernel out
of memory, or at least out of some resource such that things start
complaining 'bash:
A silly question: Why are you using 132 ZFS pools as opposed to a
single ZFS pool with 132 ZFS filesystems?
--Bill
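A sketch of the two layouts under discussion (the pool, device, group, and size names here are invented placeholders, not from the thread; the commands themselves are standard zpool/zfs administration):

```shell
# Layout 1: one small pool per storage-buying group, repeated per group.
# ("group01" and "c2t0d0" are hypothetical names for illustration.)
zpool create group01 c2t0d0

# Layout 2: a single pool with one filesystem per group, where a matching
# quota and reservation give each group a fixed-size chunk of the pool.
zpool create tank c2t0d0 c2t1d0
zfs create -o quota=50G -o reservation=50G tank/group01
```

The reservation guarantees each group its space while the quota caps it, which approximates the "constant-sized chunk" model without a pool per group.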
On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote:
Indeed, things should be simpler with fewer (generally one) pool.