Hi, this reminds me of the dedup bug: don't use the -d switch in zfs send, it
produces a broken stream that you won't be able to receive.
Hi Brandon,
I'm not the right person to evaluate your zstreamdump output, but I
can't reproduce this error on my b152 system, which is as close as I
could get to b151a. See below.
Are the rpool and radar pool versions reasonably equivalent?
In your follow-up, I think you are saying that
On Wed, Jan 5, 2011 at 9:44 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
In your follow-up, I think you are saying that rp...@copy is a recursive
snapshot and you are able to receive the individual rpool snapshots. You
just can't receive the recursive snapshot. Is this correct?
Okay. We are trying again to reproduce this on b151a.
In the meantime, you could rule out a problem with zfs send/recv on your
system if you could create another non-BE dataset with descendent
datasets, create a recursive snapshot, and retry the recursive send/recv
operation.
Thanks,
Cindy
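A minimal sketch of the test Cindy suggests; the dataset names here
(rpool/testfs and its children, radar/testcopy) are hypothetical, not taken
from the thread:
Create a small non-BE tree with descendants:
# zfs create rpool/testfs
# zfs create rpool/testfs/child1
# zfs create rpool/testfs/child2
Take a recursive snapshot and retry the recursive send/recv:
# zfs snapshot -r rpool/testfs@copytest
# zfs create radar/testcopy
# zfs send -R rpool/testfs@copytest | zfs recv -vduF radar/testcopy
If this succeeds, the problem is likely specific to the root pool datasets
rather than to send/recv in general.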
On Wed, Jan 5, 2011 at 11:57 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
In the meantime, you could rule out a problem with zfs send/recv on your
system if you could create another non-BE dataset with descendent
datasets, create a recursive snapshot, and retry the recursive
We installed b151a and couldn't reproduce a failed receive of a
recursive root pool snapshot; we also tested on b152 and b155.
The original error message isn't very helpful, but your test below
points to a problem in your root pool environment.
You might review your zpool history -il rpool
On Wed, Jan 5, 2011 at 1:16 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
You might review your zpool history -il rpool output for clues.
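A couple of ways to skim that log in case it is long (the egrep pattern is just
an example of what to look for):
# zpool history -il rpool | tail -100
# zpool history -il rpool | egrep -i 'destroy|recv|rollback'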
This isn't a critical problem, it's just a point of annoyance since it
seems like something that shouldn't happen. It's also just a test host
that's
On an snv_151a system I'm trying to do a send of rpool; it works when
using -n, but when I actually try to receive, it fails.
Scrubs pass without issue; it's just the recv that fails.
# zfs send -R rp...@copy | zfs recv -n -vduF radar/foo
would receive full stream of rp...@copy into
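For reference, the non-dry-run form that then fails would presumably be the
same pipeline without -n (the snapshot name is expanded here as an assumption,
since the archive masks it):
# zfs send -R rpool@copy | zfs recv -vduF radar/foo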
Why are you sending from s1? If you've already sent that, the logical thing to
do is send from s3 the next time.
If you really do need to send from the start every time, you can do that with
the -F option on zfs receive, to force it to overwrite newer changes, but you
are going to be sending
Ross wrote:
[context, please!]
Why are you sending from s1? If you've already sent that, the logical thing to
do is send from s3 the next time.
If you really do need to send from the start every time, you can do that with
the -F option on zfs receive, to force it to overwrite newer changes,
Hi there,
While receiving incremental streams, zfs recv ignores the existing
snapshots and stops without processing the rest of the streams.
Here is the scenario.
# zfs snapshot sp...@s1
# zfs send sp...@s1 | zfs recv dpath
# zfs snapshot sp...@s2
# zfs snapshot sp...@s3
# zfs send -I sp...@s1
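The last send command above is cut off; a sketch of the full round trip it is
presumably describing, with the masked dataset name expanded to a hypothetical
spath and using recv -F (the capital-F force flag) as suggested in the replies:
# zfs snapshot spath@s1
# zfs send spath@s1 | zfs recv dpath
# zfs snapshot spath@s2
# zfs snapshot spath@s3
# zfs send -I spath@s1 spath@s3 | zfs recv -F dpath
With -I, the stream carries every intermediate snapshot (s2 and s3); -F makes
the receive roll dpath back to its most recent snapshot before applying them.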
Raghav Ilavarasu wrote:
Hi there,
While receiving incremental streams, zfs recv ignores the existing
snapshots and stops without processing the rest of the streams.
Here is the scenario.
# zfs snapshot sp...@s1
# zfs send sp...@s1 | zfs recv dpath
# zfs snapshot sp...@s2
# zfs snapshot
I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249
--
Robert Milkowski
http://milek.blogspot.com
Hi,
snv_123, x64
zfs recv -F complains it can't open a snapshot it has just destroyed itself,
because that snapshot was destroyed on the sending side. Other than complaining
about it, it finishes successfully.
Below is an example where I created a filesystem fs1 with three snapshots of it
called snap1, snap2,
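The example is cut off above; a guess at the kind of sequence being described,
with hypothetical pool names (tank on the sender, backup on the receiver):
# zfs create tank/fs1
# zfs snapshot tank/fs1@snap1
# zfs snapshot tank/fs1@snap2
# zfs snapshot tank/fs1@snap3
# zfs send -R tank/fs1@snap3 | zfs recv backup/fs1
Destroy snap1 on the sending side only, then send an incremental update:
# zfs destroy tank/fs1@snap1
# zfs snapshot tank/fs1@snap4
# zfs send -R -I @snap3 tank/fs1@snap4 | zfs recv -F backup/fs1
With -F, the receive destroys backup/fs1@snap1 to match the sender, then
complains it cannot open the snapshot it just destroyed, but still completes.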
OK, Thanks for that.
From reading the RFE, it sounds like having a faster machine on the
receive side will be enough to alleviate the problem in the short term?
The hardware I'm using at the moment is quite old, and not particularly
fast - although this is the first out-and-out performance
Tristan Ball wrote:
OK, Thanks for that.
From reading the RFE, it sounds like having a faster machine on the
receive side will be enough to alleviate the problem in the short term?
That's correct.
--matt
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running OpenSolaris b118, one of which sends
hourly snapshots to the other. This has been working well; however, as of
today, the receiving zfs process has started running extremely slowly
and is running at 100% CPU on one core,
Hi Everyone,
I have a couple of systems running OpenSolaris b118, one of which sends
hourly snapshots to the other. This has been working well; however, as
of today, the receiving zfs process has started running extremely
slowly and is running at 100% CPU on one core, completely in kernel
I'm running OpenSolaris 2009.06, and when I attempt to restore a ZFS snapshot,
the machine hangs in an odd fashion.
I create a backup of fs1 (roughly 15GB):
zfs send -R tank/f...@1 | gzip > /backups/test_1.gz
I create a new zpool to accept the backup:
zpool create testdev1 testpool
Then I
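The last step is cut off; presumably it pipes the compressed stream back into
the new pool, along these lines (gzcat and the exact target are assumptions;
the target name follows the zpool create line above):
# gzcat /backups/test_1.gz | zfs recv -vdF testdev1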
Hello Brent,
Friday, February 13, 2009, 8:15:55 AM, you wrote:
BJ Sad to report that I am seeing the slow zfs recv issue cropping up
BJ again while running b105 :(
BJ Not sure what has triggered the change, but I am seeing the same
BJ behavior again: massive amounts of reads on the receiving
On Mon, Feb 2, 2009 at 6:55 AM, Robert Milkowski mi...@task.gda.pl wrote:
It definitely does. I made some tests today comparing b101 with b105 while
doing 'zfs send -R -I A B > /dev/null' with several dozen snapshots between A
and B. Well, b105 is almost 5x faster in my case - that's pretty
It definitely does. I made some tests today comparing b101 with b105 while
doing 'zfs send -R -I A B > /dev/null' with several dozen snapshots between A
and B. Well, b105 is almost 5x faster in my case - that's pretty good.
--
Robert Milkowski
http://milek.blogspot.com
Brent Jones:
My results are much improved, on the order of 5-100 times faster
(either over Mbuffer or SSH).
this is good news - although not quite soon enough for my current 5TB zfs send
;-)
have you tested if this also improves the performance
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types. The
Brent Jones wrote:
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types. The slowest to
replicate is mail, and my guess is it's the changes to the index files that take
the time. Similar-sized filesystems with similar
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types. The slowest to
replicate is mail, and my guess is it's the changes to the index
Hello
Yah, the incrementals are from a 30TB volume, with about 1TB used.
Watching iostat on each side during the incremental sends, the sender
side is hardly doing anything, maybe 50 IOPS of reads, and that could be
from other machines accessing it - a really light load.
The receiving side, however,
On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulb...@aei.mpg.de sent:
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but
this seems like a hack without addressing a real problem with zfs
send/recv. Trying to send any meaningfully sized snapshots
from, say, an X4540 takes
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed with
regard to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this, but my search
kung-fu may be weak.
I raised:
CR 6729347 Poor zfs receive
On Wed, Jan 7, 2009 at 12:36 AM, Andrew Gabriel andrew.gabr...@sun.com wrote:
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed with
regard to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this,
On Thu 08/01/09 08:08 , Brent Jones br...@servuhome.net sent:
I have yet to devise a script that starts mbuffer and zfs recv on the
receiving side with the proper parameters, then starts mbuffer and zfs send
on the sending side, but I may work on one later this week.
I'd like the snapshots to be sent
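For what it is worth, a rough sketch of the two halves such a script would run;
the host name, port, and buffer sizes are arbitrary, and mbuffer option syntax
can differ between versions:
On the receiving host (start this side first):
# mbuffer -q -s 128k -m 1G -I 9090 | zfs recv -vduF tank/backup
On the sending host:
# zfs send -R -I @yesterday tank/data@today | mbuffer -q -s 128k -m 1G -O recvhost:9090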
On Sat, Dec 6, 2008 at 11:40 AM, Ian Collins i...@ianshome.com wrote:
Richard Elling wrote:
Ian Collins wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the
Hi,
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but this seems like a hack
without addressing a real problem with zfs send/recv.
Trying to send any meaningfully sized snapshots from, say, an X4540 takes
up to 24 hours, for as little as a 300GB change rate.
I have not found a
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K) buffer on the sending
side to ensure the
Richard Elling wrote:
Ian Collins wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K)
Andrew Gabriel wrote:
Ian Collins wrote:
I don't see the 5 second bursty behaviour described in the bug
report. It's more like 5 second interval gaps in the network traffic
while the
data is written to disk.
That is exactly the issue. When the zfs recv data has been written,
zfs recv
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K) buffer on the sending
side to ensure the minimum number of sent packets
The times I get
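A rough equivalent of that socket coupling with stock tools, if nc is available
on both ends (flag syntax varies between nc versions; names are placeholders):
On the receiver:
# nc -l 9091 | zfs recv -F tank/replica
On the sender:
# zfs send -i tank/fs@a tank/fs@b | nc recvhost 9091
This removes the ssh overhead but, unlike the socket application or mbuffer,
adds no buffering beyond the TCP socket buffers themselves.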
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K) buffer on the sending
side to ensure the minimum number of sent
[EMAIL PROTECTED] wrote:
BTW: a lot of numbers in Solaris have not grown in a long time and
thus create problems now. Just think about the maxphys values:
63 kB on x86 does not even allow writing a single Blu-ray disc sector
in a single transfer.
Any fixed value will soon be too
Seems like there's a strong case to have such a program bundled in Solaris.
I think the idea of having a separate configurable buffer program with a rich
feature set fits the UNIX philosophy of having small programs that can be used
as building blocks to solve larger problems.
mbuffer
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave more data across. In
Joerg Schilling schrieb:
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave
Joerg Schilling wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong -- should read 250MB buffer!
Still some orders of
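For reference, the Solaris TCP buffer limits in question can be inspected and
raised with ndd; the values below are only illustrative, and even a 250MB socket
buffer is well beyond what these tunables normally allow:
# ndd -get /dev/tcp tcp_max_buf
# ndd -get /dev/tcp tcp_recv_hiwat
# ndd -set /dev/tcp tcp_max_buf 16777216
# ndd -set /dev/tcp tcp_recv_hiwat 4194304
(tcp_max_buf is the ceiling; tcp_recv_hiwat is the default receive buffer
applied to new connections.)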
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong --
On Fri, Nov 14, 2008 at 10:04 AM, Joerg Schilling
[EMAIL PROTECTED] wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer size to 63 kB. 63kB : 1 MB is the same ratio
as 256 MB : 4 GB.
BTW: a lot of numbers in Solaris have not grown in a long time and
thus create problems
Joerg Schilling wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No --
----- Original message -----
Subject: Re: [zfs-discuss] 'zfs recv' is very slow
Sent: Fri, 14 Nov 2008
From: Bob Friesenhahn [EMAIL PROTECTED]
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer
[EMAIL PROTECTED] wrote:
But zfs could certainly use bigger buffers; just like mbuffer, I also
wrote my own pipebuffer which does pretty much the same.
You too? (My buffer program which I used to diagnose the problem is
attached to the bugid ;-)
I know Chris Gerhard wrote one too.
Seems like
On Fri, 14 Nov 2008, Joerg Schilling wrote:
Disk RPM: 3,600 then, 10,000 now (x3)
The best rate I did see in 1985 was 800 kB/s (w. linear reads);
now I see 120 MB/s, which is more than x100 ;-)
Yes. And now that SSDs are
Bob Friesenhahn [EMAIL PROTECTED] wrote:
On Fri, 14 Nov 2008, Joerg Schilling wrote:
Disk RPM: 3,600 then, 10,000 now (x3)
The best rate I did see in 1985 was 800 kB/s (w. linear reads);
now I see 120 MB/s, which is more
I have an open ticket to have these putback into Solaris 10.
On Fri, Nov 7, 2008 at 3:24 PM, Ian Collins [EMAIL PROTECTED] wrote:
Brent Jones wrote:
There have been a couple of threads about this now, tracking some bug
IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are
River Tarnell wrote:
Andrew Gabriel:
This is quite easily worked around by putting a buffering program
between the network and the zfs receive.
i tested inserting mbuffer with a 250MB buffer between the zfs send and zfs
recv.
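Roughly what that test looks like as a single ssh pipeline; the dataset names
are placeholders and the 250MB figure comes from the message above:
# zfs send -i tank/fs@prev tank/fs@curr | ssh hostB 'mbuffer -q -s 128k -m 250M | zfs recv -F tank/fs'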
If anyone out there has a support contract with Sun that covers Solaris 10
support, feel free to email me and/or Sun and have them add you to my
support case.
The Sun Case is 66104157 and I am seeking to have 6333409 and 6418042
putback into Solaris 10.
CR 6712788 was closed as a duplicate of CR
River Tarnell wrote:
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). i'm
using 'zfs send -i' to replicate changes on A to B. however, the 'zfs recv'
on
B is running extremely slowly.
I'm sorry, I didn't notice
River Tarnell wrote:
Ian Collins:
That's very slow. What's the nature of your data?
mainly two sets of mid-sized files; one of 200KB-2MB in size and the other under
50KB. they are organised into subdirectories, A/B/C/file. each
Brent Jones wrote:
There have been a couple of threads about this now, tracking some bug IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris 10 via a patch?
If not, is it worth escalating the issue with support to get a patch?
--
Ian.
Andrew Gabriel wrote:
Ian Collins wrote:
Brent Jones wrote:
There have been a couple of threads about this now, tracking some bug IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris 10 via a patch?
If not, is it worth
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
Brent Jones wrote:
There have been a couple of threads about this now, tracking some bug IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris 10 via a patch?
If
Andrew Gabriel wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
Brent Jones wrote:
There have been a couple of threads about this now, tracking some bug
IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Given the issue described is slow zfs recv over network, I suspect
this is:
6729347 Poor zfs receive performance across networks
This is quite easily worked around by putting a buffering program
between the
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). i'm
using 'zfs send -i' to replicate changes on A to B. however, the 'zfs recv' on
B is running extremely slowly. if i run the zfs send on A and redirect output
to a
On Fri 07/11/08 12:09 , River Tarnell [EMAIL PROTECTED] sent:
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6).
i'm using 'zfs send -i' to replicate changes on A to B. however, the 'zfs
recv' on B is running extremely slowly. if i run the zfs send on A and
Ian Collins:
That's very slow. What's the nature of your data?
mainly two sets of mid-sized files; one of 200KB-2MB in size and the other under
50KB. they are organised into subdirectories, A/B/C/file. each directory
has 18,000-25,000 files. total
On Thu, Nov 6, 2008 at 4:19 PM, River Tarnell
[EMAIL PROTECTED] wrote:
Ian Collins:
That's very slow. What's the nature of your data?
mainly two sets of mid-sized files; one of 200KB-2MB in size and the other under
50KB. they are organised into
I'm trying to figure out how to restore a filesystem using zfs recv.
Obviously there's some important concept I don't understand. I'm
using my zfsdump script to create the dumps that I'm going to restore.
Here's what I tried:
Save a level 0 dump in d.0:
datsun# zfsdump 0 home/tckuser d.0
zfs
Bill Shannon wrote:
datsun# zfs recv -d test < d.0
cannot open 'test/tckuser': dataset does not exist
Despite the error message, the recv does seem to work.
Is it a bug that it prints the error message, or is it a bug that it
restores the data?
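Assuming zfsdump is a thin wrapper around zfs send, the equivalent plain-zfs
round trip would look something like this (the snapshot name is made up); with
-d, the receive recreates the path under the target, i.e. test/tckuser:
# zfs snapshot home/tckuser@level0
# zfs send home/tckuser@level0 > d.0
# zfs recv -d test < d.0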
Is there a zfs recv-like command that will list a table of contents
for what's in a stream?
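Two ways to peek at a stream without actually receiving it: a dry-run receive
(as in the -n example earlier in this digest), or zstreamdump in later builds,
whose BEGIN records name each snapshot in the stream; for example (dataset
names are placeholders):
# zfs send -R tank/fs@today | zfs recv -nv dummy/target
# zfs send -R tank/fs@today | zstreamdump | less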
Matthew Ahrens wrote:
Robert Milkowski wrote:
Hello zfs-discuss,
zfs recv -v at the end reported:
received 928Mb stream in 6346 seconds (150Kb/sec)
I'm not sure, but shouldn't it be 928MB and 150KB?
Or perhaps we're counting bits?
That's correct, it is in bytes and should use capital B
Hello zfs-discuss,
zfs recv -v at the end reported:
received 928Mb stream in 6346 seconds (150Kb/sec)
I'm not sure, but shouldn't it be 928MB and 150KB?
Or perhaps we're counting bits?
--
Best regards,
Robert