Dick Davies [EMAIL PROTECTED] wrote on 01/10/2007 05:26:45 AM:
On 08/01/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I think that in addition to lzjb compression, squishing blocks that contain
the same data would buy a lot of space for administrators working in many
common
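As a rough illustration of the idea (this is not how ZFS implements anything;
the file path and the 128K chunk size are made up for the example), you can
estimate what squishing duplicate blocks would save by checksumming
fixed-size chunks of a file and counting repeats:

    # Split a file into 128K chunks and count duplicate chunks by hash:
    mkdir /tmp/blks
    split -a 3 -b 131072 /tmp/data.dat /tmp/blks/b.
    digest -a sha256 /tmp/blks/b.* | sort | uniq -c | sort -rn | head

Any count above 1 in the first column is a chunk that block-level squishing
could store once instead of N times.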
I think the original poster was thinking that non-enterprise users
would be most interested in only having to *purchase* one drive at a time.
Enterprise users aren't likely to balk at purchasing 6-10 drives at a
time, so for them adding an additional *new* RaidZ to stripe across is
easier.
Hello Kyle,
Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
KM Remember though that it's been mathematically figured that the
KM disadvantages to RaidZ start to show up after 9 or 10 drives. (That's
Well, nothing like this has been proven, and definitely not mathematically.
It's just a common
Hi Guys,
After reading through the discussion regarding ZFS memory
fragmentation on snv_53 (and forward) and going through our
::kmastat output, it looks like ZFS is consuming about 544 MB of RAM in its
various caches. About 360 MB of that is in the zio_buf_65536 cache.
Next most notable is
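For anyone who wants to reproduce that kind of breakdown: the numbers above
come from the kernel memory allocator statistics, and a quick filtered view
(the grep pattern is just one choice) looks like:

    # Show the ZFS I/O buffer caches from the kernel memory allocator:
    echo ::kmastat | mdb -k | grep zio_buf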
Robert Milkowski wrote:
Hello Kyle,
Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
KM Remember though that it's been mathematically figured that the
KM disadvantages to RaidZ start to show up after 9 or 10 drives. (That's
Well, nothing like this has been proven, and definitely not
Hi Kyle,
I think there was a lot of talk about this behavior on the RAIDZ2 vs.
RAID-10 thread. My understanding from that discussion was that every
write stripes the block across all disks in a RAIDZ/Z2 group, thereby
making writes to the group no faster than writes to a single disk.
However
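For intuition, here is the rough arithmetic behind that claim (the per-disk
IOPS figure is purely illustrative):

    12 disks, each capable of ~150 random-read IOPS:
      Striped mirrors (RAID-10): independent reads can be spread over
        all spindles, up to ~12 x 150 = 1800 IOPS
      One 12-disk RAID-Z group: every block spans the whole group, so
        the group delivers roughly 1 x 150 = 150 IOPS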
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted
huh?
ok - no problem if special characters aren't allowed, but why _this_ weird
looking limitation?
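For what it's worth, the complaint is about the leading digit, not the file
vdev; renaming the pool so it starts with a letter works (path as in the
original post):

    # Pool names must begin with a letter, so this succeeds:
    zpool create megpool500 /home/roland/tmp/500meg.dat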
Hello Jason,
Wednesday, January 10, 2007, 10:54:29 PM, you wrote:
JJWW Hi Kyle,
JJWW I think there was a lot of talk about this behavior on the RAIDZ2 vs.
JJWW RAID-10 thread. My understanding from that discussion was that every
JJWW write stripes the block across all disks on a RAIDZ/Z2 group,
Hello Jason,
Wednesday, January 10, 2007, 9:45:05 PM, you wrote:
JJWW Sanjeev, Robert,
JJWW Thanks guys. We put that in place last night and it seems to be doing
JJWW a much better job of keeping RAM consumption down. We set it to 4GB
JJWW and each of our 2 MySQL instances on the box to a max of 4GB. So
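For reference, a sketch of how an ARC cap like that was commonly applied
around these builds (this is an assumption about their setup, not a recipe;
tunable availability and addresses vary by build):

    # On builds with the tunable, in /etc/system (takes effect at boot):
    set zfs:zfs_arc_max = 0x100000000    # 4 GB

    # On earlier builds, c_max was patched live with mdb:
    #   echo "arc::print -a c_max" | mdb -k     # find the address
    #   echo "<addr>/Z 0x100000000" | mdb -kw   # write the new cap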
[EMAIL PROTECTED] wrote on 01/10/2007 05:16:33 PM:
Hello Jason,
Wednesday, January 10, 2007, 10:54:29 PM, you wrote:
JJWW Hi Kyle,
JJWW I think there was a lot of talk about this behavior on the RAIDZ2 vs.
JJWW RAID-10 thread. My understanding from that discussion was that every
Hi Robert,
I read the following section from
http://blogs.sun.com/roch/entry/when_to_and_not_to as indicating
random writes to a RAID-Z had the performance of a single disk
regardless of the group size:
Effectively, as a first approximation, an N-disk RAID-Z group will
behave as a single device in terms of delivered random input IOPS.
Hello Jason,
Thursday, January 11, 2007, 12:46:32 AM, you wrote:
JJWW Hi Robert,
JJWW I read the following section from
JJWW http://blogs.sun.com/roch/entry/when_to_and_not_to as indicating
JJWW random writes to a RAID-Z had the performance of a single disk
JJWW regardless of the group size:
Hello Wade,
Thursday, January 11, 2007, 12:30:40 AM, you wrote:
WSfc Long story short, I wiped and reinstalled with U3 and raidz2 with
WSfc hot spares like it should have had in the first place.
The same here.
Besides, I always install my own systems and don't use preinstalled
ones - except
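For readers following along, a minimal sketch of a raidz2-plus-hot-spares
layout like the one described above (disk names are hypothetical):

    zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
      spare c0t6d0 c0t7d0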
It's just common-sense advice - for many users, keeping raidz groups
below 9 disks should give good enough performance. However, if someone
creates a raidz group of 48 disks, he/she probably also expects
performance, and in general raid-z won't deliver it.
There is at least one reason for
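To illustrate the usual alternative: splitting those 48 disks into several
smaller top-level groups multiplies the random-I/O capacity, since ZFS
stripes across top-level vdevs (device names hypothetical; only the first
two groups shown):

    # 6 x 8-disk raidz2 groups instead of one 48-disk raidz:
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
    # ...plus four more raidz2 groups for the remaining 32 disks.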
Hi Robert,
We've got the default ncsize. I didn't see any advantage to increasing
it outside of NFS serving...which this server is not. For speed, the
X4500 is proving to be a killer MySQL platform. Between the blazing
fast procs and the sheer number of spindles, its performance is
tremendous. If
On 10-Jan-07, at 5:29 PM, roland wrote:
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted
huh?
ok - no problem if special characters aren't allowed, but why
_this_ weird looking limitation?
Hello Peter,
Thursday, January 11, 2007, 1:08:38 AM, you wrote:
It's just common-sense advice - for many users, keeping raidz groups
below 9 disks should give good enough performance. However, if someone
creates a raidz group of 48 disks, he/she probably also expects
performance, and in general
Hello Jason,
Thursday, January 11, 2007, 1:10:10 AM, you wrote:
JJWW Hi Robert,
JJWW We've got the default ncsize. I didn't see any advantage to increasing
JJWW it outside of NFS serving...which this server is not. For speed the
JJWW X4500 is showing to be a killer MySQL platform. Between the
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed some light
on the calculation?
In a worst-case scenario,
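For anyone wanting to see where the ARC actually sits relative to its
targets, mdb has a summary dcmd on builds that ship it (output fields vary
by build):

    # Current ARC size, c, c_max, etc. from the kernel debugger:
    echo ::arc | mdb -k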
Hey guys,
Due to long URL lookups, the DNLC was pushed to variable-sized
entries. The hit rate was dropping because of name-too-long misses.
This was done long ago, while I was at Sun, under a bug reported by
me. I don't know your usage, but you should
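A quick way to check whether the DNLC is even a factor on a given box
(standard Solaris tooling; what counts as a good hit rate is a judgment
call):

    # DNLC hit-rate summary; the cache-hits percentage is the figure to watch:
    vmstat -s | grep 'name lookups'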
On Wed, 10 Jan 2007, Mark Maybee wrote:
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed some light
Hi,
Why would I ever need to specify ZFS mount(s) in /etc/vfstab at all? I see
in some documents that ZFS can be defined in /etc/vfstab with fstype zfs.
Thanks.
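In case it helps: vfstab entries are only needed for datasets given a legacy
mountpoint; normally ZFS mounts its datasets itself from the mountpoint
property. A sketch (dataset name and mount point hypothetical):

    # Hand control of this dataset's mount over to vfstab:
    zfs set mountpoint=legacy tank/home

    # Then in /etc/vfstab (fields: device, fsck device, mount point,
    # type, fsck pass, mount-at-boot, options):
    tank/home  -  /export/home  zfs  -  yes  -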