It seems we are hitting a boundary with zfs send/receive over a 10Gb/s
network link. We see peak values of up to 150MB/s, but on average only
about 40-50MB/s are replicated. This is far below the bandwidth that
a 10Gb link can offer.
Is it possible that ZFS is giving replication too
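The post doesn't say how the stream is transported; if it goes through ssh, the cipher can easily be the bottleneck on a 10Gb link. A common workaround (a sketch, with hypothetical host, pool, and snapshot names) is to buffer the stream with mbuffer over a raw TCP connection on both ends:

```shell
# Receiver side: listen on TCP port 9090 with a 1GB in-memory buffer,
# feeding the stream into zfs receive.
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# Sender side: stream the snapshot through mbuffer to the receiver.
# (-s: block size, -m: buffer memory, -O: target host:port)
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090
```

The buffer smooths out the bursty nature of zfs send, which otherwise stalls the pipe while it walks metadata between data runs.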
Bob Friesenhahn wrote:
Striping across two large raidz2s is not ideal for multi-user use. You
are getting the equivalent of two disks worth of IOPS, which does not
go very far. More smaller raidz vdevs or mirror vdevs would be
better. Also, make sure that you have plenty of RAM installed.
Henri Meddox wrote:
Hi Folks,
call me a learner ;-)
I've got a crazy problem with zpool list and the size of my pool:
I created it with zpool create raidz2 hdd1 hdd2 hdd3 - each hdd is about 1GB.
zpool list shows me a size of 2.95GB - shouldn't this be only 1GB?
After creating a file about 500MB
Mika Borner wrote:
You're lucky. Ben just wrote about it :-)
http://www.cuddletech.com/blog/pivot/entry.php?id=1013
Oops, should have read your message completely :-) Anyway, you can
learn something from it...
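The short answer from that thread: zpool list reports the raw size of all disks including parity, while zfs list reports usable space. For a raidz2 vdev, two disks' worth goes to parity, so the arithmetic for the example above is:

```shell
# raidz2 capacity arithmetic for the pool above (3 x 1GB disks).
# zpool list shows raw capacity; usable space excludes two parity disks.
disks=3; disk_gb=1
raw=$((disks * disk_gb))          # ~3GB, roughly what zpool list reports
usable=$(((disks - 2) * disk_gb)) # ~1GB, roughly what zfs list reports
echo "raw=${raw}GB usable=${usable}GB"
```

This is also why writing a 500MB file appears to consume about 1.5GB at the zpool list level: each data block is stored together with its parity.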
___
zfs-discuss mailing list
zfs
Hi
Updated from snv 101 to snv 105 today. I wanted to do zfs send/receive to a
new zpool, forgetting that the new pool was a newer version.
zfs send timed out after a while, but it was impossible to kill the receive
process.
Shouldn't the zfs receive command just fail with a wrong
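Version mismatches like this can be checked before replicating. A sketch (pool names hypothetical): compare the on-disk versions of both pools, and upgrade the older one if it needs to receive a stream produced on a newer version:

```shell
# Compare on-disk format versions of source and target pools
# (pool names are hypothetical).
zpool get version sourcepool
zpool get version targetpool

# A stream generated on a newer pool version cannot be received
# into an older pool; the older pool can be upgraded instead:
zpool upgrade targetpool
```

That doesn't excuse the hang, of course; an unkillable receive process sounds like a bug worth filing regardless.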
Ulrich Graef wrote:
You need not wade through your paper...
ECC theory tells us that you need a minimum distance of 3
to correct one error in a codeword; ergo, neither RAID-5 nor RAID-6
is enough: you would need RAID-2 (which nobody uses today).
RAID controllers today take advantage of the fact
Adam Leventhal wrote:
Yes. The Sun Storage 7000 Series uses the same ZFS that's in OpenSolaris
today. A pool created on the appliance could potentially be imported on an
OpenSolaris system; that is, of course, not explicitly supported in the
service contract.
Would be interesting to hear
Leave the recordsize at its default. With a 128K recordsize,
files smaller than
If I turn zfs compression on, does the recordsize influence the compressratio
in anyway?
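It can: ZFS compresses per record, so the recordsize bounds the window the compressor sees at once. A quick way to observe the result on a dataset (dataset name hypothetical) is to enable compression and read back the reported ratio:

```shell
# Compression in ZFS is applied per record, so recordsize sets the
# unit of data the compressor works on (dataset name is hypothetical).
zfs set compression=on tank/data
zfs get recordsize,compression,compressratio tank/data
```

Note that compression only applies to data written after it is enabled; existing blocks stay uncompressed until rewritten.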
--
This message posted from opensolaris.org
I've read the same log entry, and was also thinking about ZFS...
Pillar Data Systems is also answering to the call
http://blog.pillardata.com/pillar_data_blog/2008/08/blog-i-love-a-p.html
BTW: Would transparent compression be considered as cheating? :-)
Unfortunately, the T1000 only has a
single drive bay (!) which makes it impossible to
follow our normal practice of mirroring the root file
You can replace the existing 3.5" disk with two 2.5" disks (quite cheap)
//Mika
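With a second drive in place, mirroring the root pool is a matter of attaching the new disk. A sketch (pool and device names are hypothetical; on SPARC hardware like the T1000 the boot block must also be installed on the new disk):

```shell
# Attach a second disk to the root pool to form a mirror
# (pool and device names are hypothetical).
zpool attach rpool c0t0d0s0 c0t1d0s0

# Watch the resilver complete before relying on the mirror:
zpool status rpool

# On SPARC, also install the ZFS boot block on the new disk
# so the machine can boot from either side of the mirror.
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0
```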
Here's an interesting read about forthcoming Oracle 11g file system
performance. Sadly, there is no information about how it works.
It will be interesting to compare it with ZFS performance, as soon as ZFS is
tuned for databases.
Speed and performance will be the hallmark of the 11g,
Hi
We have the following scenario/problem:
Our zpool resides on a single LUN on a Hitachi Storage Array. We are
thinking about making a physical clone of the zpool with the ShadowImage
functionality.
ShadowImage takes a snapshot of the LUN, and copies all the blocks to a
new LUN (physical copy). In
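One wrinkle with an array-level physical copy: the clone LUN contains a pool with the same name and GUID as the original, so it cannot simply be imported next to it. A sketch of handling this (pool names hypothetical; safest is to import the clone on a different host, or after exporting the original):

```shell
# List pools visible for import; the clone should show up here
# with the same name/GUID as the original.
zpool import

# Import the clone under a new name to avoid the clash with the
# live pool; -f may be needed since the clone was never exported.
zpool import -f tank tank_clone
```

Also make sure the ShadowImage split happens on a consistent image, e.g. with the pool exported or the array snapshot taken atomically, or the clone may look like a pool that died mid-write.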
The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
may need to be zeroed and reapplied if you set up the initial vdev on a slice.
If you introduced the entire disk to the pool you should be fine, but I believe
you'll still need to offline/online the pool.
Fine, at
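On builds that support it, the offline/online cycle mentioned above can be done per device, and newer bits can expand a vdev in place. A sketch (pool and device names hypothetical; `zpool online -e` only exists on more recent builds):

```shell
# After the underlying LUN has grown, ask ZFS to expand the vdev
# to use the new capacity (names hypothetical; needs a build with -e).
zpool online -e tank c2t0d0

# SIZE should now reflect the grown LUN:
zpool list tank
```

On older builds, an export/import of the pool after relabeling is the usual fallback.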
I'm a little confused by the first poster's message as well, but you
lose some benefits of ZFS if you don't create your pools as either
mirrors or RAID-Z, such as self-healing of corrupted data. ZFS can still
detect corruption via checksums, but without its own redundancy it cannot
repair it, and the array won't catch it because all it knows about are blocks.
That's the dilemma, the array
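This is exactly what a scrub exercises: with ZFS-level redundancy, checksum errors found while reading are repaired from the intact copy. A sketch (pool name hypothetical):

```shell
# With a ZFS-level mirror or RAID-Z, a scrub verifies every block's
# checksum and repairs bad copies from the good side
# (pool name is hypothetical).
zpool scrub tank

# Per-device read/write/checksum error counts show what was found:
zpool status -v tank
```

On a pool built from a single array LUN, the scrub can still report the corruption, but it has no second copy to repair from.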
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility
perhaps?
Or if you want to be politically cagey about naming choice, perhaps,
zfs-seq-read-optimize-file ? :-)
For Datawarehouse and streaming applications a
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware