Hello Brent,
Friday, February 13, 2009, 8:15:55 AM, you wrote:
BJ Sad to report that I am seeing the slow zfs recv issue cropping up
BJ again while running b105 :(
BJ Not sure what has triggered the change, but I am seeing the same
BJ behavior again: massive amounts of reads on the receiving
On Mon, Feb 2, 2009 at 6:55 AM, Robert Milkowski mi...@task.gda.pl wrote:
It definitely does. I made some tests today comparing b101 with b105 while
doing 'zfs send -R -I A B > /dev/null' with several dozen snapshots between A
and B. Well, b105 is almost 5x faster in my case - that's pretty good.
--
Robert Milkowski
http://milek.blogspot.com
Brent Jones:
My results are much improved, on the order of 5-100 times faster
(either over Mbuffer or SSH).
this is good news - although not quite soon enough for my current 5TB zfs send
;-)
have you tested if this also improves the performance
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types. The
Ian Collins wrote:
Send/receive speeds appear to be very data dependent. I have several
different filesystems containing differing data types. The slowest to
replicate is mail, and my guess is it's the changes to the index files that
take the time. Similar sized filesystems with similar
Hello
Yah, the incrementals are from a 30TB volume, with about 1TB used.
Watching iostat on each side during the incremental sends, the sender
side is hardly doing anything, maybe 50iops read, and that could be
from other machines accessing it, really light load.
The receiving side however,
On Wed 07/01/09 20:31 , Carsten Aulbert carsten.aulb...@aei.mpg.de sent:
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but
this seems like a hack without addressing a real problem with zfs
send/recv. Trying to send any meaningful sized snapshots
from say an X4540 takes
Brent Jones wrote:
Reviving an old discussion, but has the core issue been addressed in
regards to zfs send/recv performance issues? I'm not able to find any
new bug reports on bugs.opensolaris.org related to this, but my search
kung-fu may be weak.
I raised:
CR 6729347 Poor zfs receive
On Thu 08/01/09 08:08 , Brent Jones br...@servuhome.net sent:
I have yet to devise a script that starts Mbuffer zfs recv on the
receiving side with proper parameters, then start an Mbuffer ZFS send
on the sending side, but I may work on one later this week.
I'd like the snapshots to be sent
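A wrapper of the kind Brent describes might be sketched as below: the receiver-side "mbuffer | zfs recv" must be listening before the sender-side "zfs send | mbuffer" connects. Every name here (hosts, datasets, port, buffer size) is a hypothetical placeholder, and the mbuffer flags used are its documented network mode (-I to listen on a port, -O to connect to host:port, -m for the in-memory buffer). Since zfs can't run here, the sketch is a dry run that only prints the two pipelines.

```shell
#!/bin/sh
# Sketch of a wrapper pairing "mbuffer | zfs recv" (receiver) with
# "zfs send | mbuffer" (sender). All names below are placeholders.
RHOST=backuphost            # hypothetical receiving machine
RFS=backup/pool             # hypothetical destination dataset
SNAP1=tank/fs@yesterday     # hypothetical older snapshot
SNAP2=tank/fs@today         # hypothetical newer snapshot
PORT=9090
BUFSIZE=250M                # the buffer size discussed in this thread

# mbuffer -I listens on a TCP port; -O connects to host:port; -m sets the
# memory buffer that rides out zfs recv's pauses while it flushes to disk.
RECV_PIPE="mbuffer -q -I ${PORT} -m ${BUFSIZE} | zfs recv -F ${RFS}"
SEND_PIPE="zfs send -i ${SNAP1} ${SNAP2} | mbuffer -q -m ${BUFSIZE} -O ${RHOST}:${PORT}"

# Dry run: print the pipelines. A real wrapper would do something like:
#   ssh "$RHOST" "$RECV_PIPE" &   sleep 2;   eval "$SEND_PIPE";   wait
echo "on ${RHOST}: ${RECV_PIPE}"
echo "locally:   ${SEND_PIPE}"
```

Starting the receiver first (e.g. over ssh, backgrounded) avoids the sender's mbuffer failing to connect; the `wait` at the end lets the wrapper report when both sides finish.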
On Sat, Dec 6, 2008 at 11:40 AM, Ian Collins i...@ianshome.com wrote:
Richard Elling wrote:
Ian Collins wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the
Hi,
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but this seems like a hack
without addressing a real problem with zfs send/recv.
Trying to send any meaningful sized snapshots from say an X4540 takes
up to 24 hours, for as little as 300GB changerate.
I have not found a
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K) buffer on the sending
side to ensure the
Andrew Gabriel wrote:
Ian Collins wrote:
I don't see the 5 second bursty behaviour described in the bug
report. It's more like 5 second interval gaps in the network traffic
while the
data is written to disk.
That is exactly the issue. When the zfs recv data has been written,
zfs recv
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the equation and the
speed up is better than 2x. I have a small (140K) buffer on the sending
side to ensure the minimum number of sent packets
The times I get
[EMAIL PROTECTED] wrote:
BTW: a lot of numbers in Solaris have not grown in a long time and thus
create problems now. Just think about the maxphys value:
63 kB on x86 does not even allow writing a single BluRay disc sector
in a single transfer.
Any fixed value will soon be too
Seems like there's a strong case to have such a program bundled in Solaris.
I think the idea of having a separate, configurable buffer program with a rich
feature set fits the UNIX philosophy of small programs that can be used
as building blocks to solve larger problems.
mbuffer
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave more data across. In
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong -- should read 250MB buffer!
Still some orders of
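Taking the corrected 250MB figure, the sizing arithmetic works out roughly as follows (assumed round numbers, not measurements from the thread): a 1 Gbit/s link delivers at most about 125 MB/s, so a 250 MB buffer absorbs about two seconds of the link running at full speed while zfs recv is not reading; several seconds of headroom would need proportionally more.

```shell
# Back-of-envelope buffer sizing (assumed round numbers):
# a 1 Gbit/s link delivers at most 1000 / 8 = 125 MB/s.
LINK_MB_PER_S=125
STALL_SECONDS=2              # assumed pause while zfs recv flushes to disk
BUF_MB=$((LINK_MB_PER_S * STALL_SECONDS))
echo "${BUF_MB} MB"          # prints "250 MB"
```

The same arithmetic shows why SO_RCVBUF is too small for this job: kernel socket buffers top out orders of magnitude below the hundreds of megabytes needed here, which is what makes a user-space buffer program attractive.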
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer size to 63 kB. 63kB : 1 MB is the same ratio
as 256 MB : 4 GB.
BTW: a lot of numbers in Solaris did not grow since a long time and
thus create problems
- Original message -
Subject: Re: [zfs-discuss] 'zfs recv' is very slow
Sent: Fri, 14 Nov 2008
From: Bob Friesenhahn [EMAIL PROTECTED]
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer
[EMAIL PROTECTED] wrote:
But zfs could certainly use bigger buffers; just like mbuffer, I also
wrote my own pipebuffer which does pretty much the same.
You too? (My buffer program which I used to diagnose the problem is
attached to the bugid ;-)
I know Chris Gerhard wrote one too.
Seems like
On Fri, 14 Nov 2008, Joerg Schilling wrote:
Disk RPM: 3,600 then vs. 10,000 now (x3)
The best rate I did see in 1985 was 800 kB/s (w. linear reads);
now I see 120 MB/s - this is more than x100 ;-)
Yes. And now that SSDs are
I have an open ticket to have these putback into Solaris 10.
On Fri, Nov 7, 2008 at 3:24 PM, Ian Collins [EMAIL PROTECTED] wrote:
Brent Jones wrote:
There have been a couple of threads about this now, tracked in these bug
IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are
River Tarnell wrote:
Andrew Gabriel:
This is quite easily worked around by putting a buffering program
between the network and the zfs receive.
i tested inserting mbuffer with a 250MB buffer between the zfs send and zfs
recv.
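Over an ssh transport, the workaround River tested might look like the sketch below: a large mbuffer on each side of the link, so the sender keeps streaming while the receiver-side buffer feeds zfs recv through its flush pauses. Host and dataset names are hypothetical; only the 250MB figure comes from the thread. Printed as a dry run, since zfs itself can't execute here.

```shell
# Buffering on both ends of an ssh pipe (all names hypothetical).
# The sender-side mbuffer absorbs zfs send bursts; the receiver-side one
# keeps zfs recv fed while it pauses to write out what it has received.
PIPELINE="zfs send -i tank/fs@a tank/fs@b \
 | mbuffer -q -m 250M \
 | ssh B 'mbuffer -q -m 250M | zfs recv -F backup/fs'"
echo "$PIPELINE"
```

Compared with a raw TCP transfer, this keeps ssh's authentication and encryption at the cost of some CPU; the buffers address only the stop-and-go reading, not cipher overhead.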
If anyone out there has a support contract with Sun that covers Solaris 10
support, feel free to email me and/or Sun and have them add you to my
support case.
The Sun Case is 66104157 and I am seeking to have 6333409 and 6418042
putback into Solaris 10.
CR 6712788 was closed as a duplicate of CR
River Tarnell wrote:
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). i'm
using 'zfs send -i' to replicate changes on A to B. however, the 'zfs recv' on
B is running extremely slowly.
I'm sorry, I didn't notice
Brent Jones wrote:
There have been a couple of threads about this now, tracked in these bug IDs/tickets:
6333409
6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris 10 via a patch?
If not, is it worth escalating the issue with support to get a patch?
--
Ian.
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Given the issue described is slow zfs recv over network, I suspect
this is:
6729347 Poor zfs receive performance across networks
This is quite easily worked around by putting a buffering program
between the
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). i'm
using 'zfs send -i' to replicate changes on A to B. however, the 'zfs recv' on
B is running extremely slowly. if i run the zfs send on A and redirect output
to a
On Fri 07/11/08 12:09 , River Tarnell [EMAIL PROTECTED] sent:
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6).
i'm using 'zfs send -i' to replicate changes on A to B. however, the 'zfs
recv' on B is running extremely slowly. if i run the zfs send on A and
Ian Collins:
That's very slow. What's the nature of your data?
mainly two sets of mid-sized files; one of 200KB-2MB in size and other under
50KB. they are organised into subdirectories, A/B/C/file. each directory
has 18,000-25,000 files. total