Did anyone share a script to send/recv zfs filesystems tree in
parallel, especially if a cap on concurrency can be specified?
Richard, how fast were you taking those snapshots, and how fast were the
syncs over the network? For example, assuming a snapshot every 10 minutes,
is it reasonable to expect to
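A rough sketch of what such a script might look like (not from the thread): tank/home, @today, backup and backuphost are placeholder names, the @today snapshot is assumed to already exist on every child filesystem, and concurrency is capped crudely by waiting on each batch of MAXJOBS background jobs.
#!/bin/sh
MAXJOBS=4
n=0
for fs in `zfs list -H -r -o name tank/home`; do
        # one send/recv stream per filesystem, run in the background
        zfs send "$fs@today" | ssh backuphost zfs recv -Fd backup &
        n=`expr $n + 1`
        if [ $n -ge $MAXJOBS ]; then
                wait    # crude cap: let the whole batch finish first
                n=0
        fi
done
wait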
Greg Mason writes:
We're running into a performance problem with ZFS over NFS. When working
with many small files (e.g. unpacking a tar file of source code), a
Thor (over NFS) is about 4 times slower than our aging existing storage
solution, which isn't exactly speedy to begin with
Nicholas Lee writes:
Another option to look at is:
set zfs:zfs_nocacheflush=1
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
The best option is to get a fast ZIL log device.
Depends on your pool as well. NFS+ZFS means ZFS will wait for writes to
complete
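For reference, a sketch of the two approaches mentioned above (c1t5d0 stands in for an SSD; the /etc/system setting only takes effect after a reboot and is generally considered safe only when the write cache is nonvolatile):
  # echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system    (reboot afterwards)
  # zpool add tank log c1t5d0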
Eric D. Mudama writes:
On Mon, Jan 19 at 23:14, Greg Mason wrote:
So, what we're looking for is a way to improve performance, without
disabling the ZIL, as it's my understanding that disabling the ZIL
isn't exactly a safe thing to do.
We're looking for the best way to improve
Eric D. Mudama writes:
On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
On Tue, Jan 20 at 9:04, Richard Elling wrote:
Yes. And I think there are many more use cases which are not
yet characterized. What we do know is that using an SSD for
the separate ZIL log works very well for
Hello all...
We are getting this error: E2BIG - Arg list too long, when trying to send
incremental backups (b89 - b101). Do you know about any bugs related to that?
I had a look at the archives and Google but could not find anything.
What I did find was something related to wrong
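For context, an incremental recursive send of the kind being described would look something like this (pool, snapshot and host names are placeholders):
  # zfs send -R -i tank@snap1 tank@snap2 | ssh backuphost zfs recv -d backup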
Ian Collins wrote:
Richard Elling wrote:
Recently, I've been working on a project which had aggressive backup
requirements. I believe we solved the problem with parallelism. You
might consider doing the same. If you get time to do your own experiments,
please share your observations with
Ahmed Kamal wrote:
Did anyone share a script to send/recv zfs filesystems tree in
parallel, especially if a cap on concurrency can be specified?
Richard, how fast were you taking those snapshots, and how fast were the
syncs over the network? For example, assuming a snapshot every 10 minutes,
is it
Hi Jim,
The setup is not there anymore; however, I will share as much detail
as I have documented. Could you please post the commands you have used
and any differences you think might be important? Did you ever test
with 2008.11 instead of SXCE?
I will probably be testing again soon. Any
Richard Elling wrote:
Ian Collins wrote:
One thing I have yet to do is find the optimum number of parallel
transfers when there are 100s of filesystems. I'm looking into making
this dynamic, based on throughput.
I'm not convinced that a throughput throttle or metric will be
Hi guys!
I'm doing a series of tests on ZFS before putting it into production on several
machines, and I've come to a dead end. I have two disks in a mirror (rpool).
Intentionally, I corrupt data on second disk:
# dd if=/dev/urandom of=/dev/rdsk/c0d1t0 bs=512 count=20480 seek=10240
So, I've
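For context, the usual sequence to check and repair this kind of damage on a mirrored pool (pool name from the post):
  # zpool scrub rpool
  # zpool status -v rpool    (repeat until the scrub reports completion)
  # zpool clear rpool        (then reset the error counters)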
Jakov Sosic wrote:
Hi guys!
I'm doing a series of tests on ZFS before putting it into production on several
machines, and I've come to a dead end. I have two disks in a mirror (rpool).
Intentionally, I corrupt data on second disk:
# dd if=/dev/urandom of=/dev/rdsk/c0d1t0 bs=512 count=20480
So I wonder now, how to fix this up? Why doesn't
scrub overwrite the bad data with good data from the first disk?
ZFS doesn't know why the errors occurred; the most likely scenario
would be a bad disk -- in which case you'd need to replace it.
I know and understand that... But, what is then a
Looks like your scrub was not finished yet. Did you check it later? You should not
have had to replace the disk. You might have to reinstall the bootblock.
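Reinstalling the bootblock mentioned above would look roughly like this (device name from the post, slice 0 assumed; the first form is for SPARC, the second for x86):
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0d1t0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d1t0s0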
Jakov Sosic wrote:
Hi guys!
I'm doing a series of tests on ZFS before putting it into production on several
machines, and I've come to a dead end. I have two disks in a mirror (rpool).
Intentionally, I corrupt data on second disk:
# dd if=/dev/urandom of=/dev/rdsk/c0d1t0 bs=512 count=20480
On 26-Jan-09, at 6:21 PM, Jakov Sosic wrote:
So I wonder now, how to fix this up? Why doesn't
scrub overwrite the bad data with good data from the first disk?
ZFS doesn't know why the errors occurred; the most likely scenario
would be a bad disk -- in which case you'd need to replace it.
I know
That sounds like a great idea if I can get it to work--
I get how to add a drive to a zfs mirror, but for the life of me I can't find
out how to safely remove a drive from a mirror.
Also, if I do remove the drive from the mirror, then pop it back up in some
unsuspecting (and unrelated) Solaris
BJ Quinn wrote:
That sounds like a great idea if I can get it to work--
What does?
I get how to add a drive to a zfs mirror, but for the life of me I can't find
out how to safely remove a drive from a mirror.
Have you tried man zpool? See the entry for detach.
Also, if I do
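For example, detaching one side of a mirror and later re-attaching it (hypothetical pool and device names):
  # zpool detach tank c1t1d0
  # zpool attach tank c1t0d0 c1t1d0    (resilvers c1t1d0 back in as a mirror of c1t0d0)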
js == Jakov Sosic jso...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
js Yes but that will do the complete resilvering, and I just want
js to fix the corrupted blocks... :)
tt What you are asking for is impossible, since ZFS cannot know
tt which blocks
Ahmed,
The setup is not there anymore; however, I will share as much detail
as I have documented. Could you please post the commands you have used
and any differences you think might be important? Did you ever test
with 2008.11 instead of SXCE?
Specific to the following:
While we
Jim Dunham wrote:
Ahmed,
The setup is not there anymore; however, I will share as much detail
as I have documented. Could you please post the commands you have used
and any differences you think might be important? Did you ever test
with 2008.11 instead of SXCE?
Specific to
Richard Elling wrote:
Jim Dunham wrote:
Ahmed,
The setup is not there anymore; however, I will share as much detail
as I have documented. Could you please post the commands you have used
and any differences you think might be important? Did you ever test
with 2008.11 instead of
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it
was constantly busy, and since our X4500 has always died miserably in
the past when an HDD died, they wanted to replace this one before it
actually failed.
The usual was done: HDD replaced, resilvering started and ran for
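Once the new disk is physically in place, the replacement step itself is a single command (pool and device names are placeholders):
  # zpool replace tank c1t3d0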
While doing some performance testing on a pair of X4540's running
snv_105, I noticed some odd behavior while using CIFS.
I am copying a 6TB database file (yes, a single file) over our GigE
network to the X4540, then snapshotting that data to the secondary
X4540.
Writing said 6TB file can peak our
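The replication step described above would typically be something like (dataset, snapshot and host names are placeholders):
  # zfs snapshot tank/db@copy1
  # zfs send tank/db@copy1 | ssh second-x4540 zfs recv -F tank/db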
Dear support,
When I connect my external USB DVD-ROM to the SPARC machine, which has
Solaris 10u6 installed on a ZFS file system, it returns this error:
bash-3.00# mount /dev/dsk/c1t0d0s0 /dvd/
Jan 27 11:08:41 global ufs: NOTICE: mount: not a UFS magic number (0x0)
mount: /dev/dsk/c1t0d0s0 is not
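The error just means mount defaulted to UFS; a DVD needs the hsfs file system type, for example (device name from the post):
  # mount -F hsfs -o ro /dev/dsk/c1t0d0s0 /dvd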