On Mon, Oct 04, 2010 at 07:39:16PM -0400, Dan Langille wrote:
On 10/4/2010 2:10 PM, Jeremy Chadwick wrote:
On Mon, Oct 04, 2010 at 01:31:07PM -0400, Dan Langille wrote:
On Mon, October 4, 2010 3:27 am, Martin Matuska wrote:
Try using zfs receive with the -v flag (gives you some stats at the end):
# zfs send storage/bac...@transfer | zfs receive -v storage/compressed/bacula
And use the following sysctl (you may set that in /boot/loader.conf, too):
# sysctl vfs.zfs.txg.write_limit_override=805306368
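A note for anyone following along: 805306368 bytes is 768 MiB. To make the setting survive a reboot, the equivalent /boot/loader.conf entry would look something like this (same tunable name as the sysctl above, value as suggested):

# /boot/loader.conf -- cap each transaction group's write limit at 768 MiB
vfs.zfs.txg.write_limit_override="805306368"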
I have good
I've just tested on my box and the loopback interface does not seem to be
the bottleneck. I can easily push ~400MB/s through two instances of mbuffer.
--Artem
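For anyone who wants to reproduce a loopback measurement like that, a minimal sketch with two mbuffer instances might look like the following. The port and buffer sizes are the ones used elsewhere in this thread; using dd with /dev/zero as a throwaway data source is an assumption, not Artem's exact test:

# session 1: listen on TCP port 9090 and discard whatever arrives
mbuffer -s 128k -m 1G -I 9090 > /dev/null

# session 2: push ~10 GB of zeroes across the loopback interface
dd if=/dev/zero bs=128k count=81920 | mbuffer -s 128k -m 1G -O localhost:9090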
On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille d...@langille.org wrote:
I'm rerunning my test after I had a drive go offline[1]. But I'm not
getting anything like the previous test:
time zfs send storage/bac...@transfer | mbuffer | zfs receive storage/compressed/bacula-buffer
$ zpool iostat
it with a buffer large enough to accommodate a few seconds' worth of write throughput for your target disks.
Here's an example:
http://blogs.everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
I'm failing. In one session:
# mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula-mbuffer
Assertion failed
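For context, a receiver started with -I only listens; nothing moves until a sender connects to that port from a second session. With everything on one box, the counterpart command would presumably look something like this (a sketch of the missing half, not a command posted in the thread; dataset name as it appears earlier):

# second session: send the snapshot to the mbuffer listener on the same host
zfs send storage/bac...@transfer | mbuffer -s 128k -m 1G -O localhost:9090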
On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
$ zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     7.67T  5.02T    358     38  43.1M  1.96M
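If it helps narrow down where the time goes, zpool iostat can also break the numbers out per vdev at a repeat interval; these are standard flags, nothing specific to this thread:

# per-vdev statistics for the 'storage' pool, refreshed every 10 seconds
zpool iostat -v storage 10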
Hmm. It did help me a lot when I was replicating ~2TB worth of data
over GigE. Without mbuffer things were roughly in the ballpark of your
numbers. With mbuffer I've got around 100MB/s.
Assuming that you have two boxes connected via ethernet, it would be
good to check that nobody generates PAUSE frames.
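For reference, the two-box arrangement described above is usually wired up along these lines (the hostname is a placeholder; port, buffer sizes, and dataset names follow the ones used elsewhere in this thread):

# on the receiving host
mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula

# on the sending host ("recv-host" stands in for the receiver's name)
zfs send storage/bac...@transfer | mbuffer -s 128k -m 1G -O recv-host:9090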
On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille d...@langille.org wrote:
FYI: this is all on the same box.
In one of the previous emails you've used this command line:
# mbuffer -s 128k -m 1G -I 9090 | zfs receive
You've used mbuffer in network client mode. I assumed that you did the transfer over the network.
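Since everything is on one box, the network mode is unnecessary; mbuffer can simply sit in the middle of the pipe as a large RAM buffer, which is the form the later messages in this thread settle on:

# single-host pipeline with a 1 GB buffer between send and receive
zfs send storage/bac...@transfer | mbuffer -s 128k -m 1G | zfs receive storage/compressed/bacula-mbuffer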
FYI: this is all on the same box.
--
Dan Langille
http://langille.org/
As soon as I opened this email I knew what it would say.
# time zfs send storage/bac...@transfer | mbuffer | zfs receive storage/compressed/bacula-mbuffer
in @ 197 MB/s, out @ 205 MB/s, 1749 MB total, buffer 0% full
...
Big difference. :)
I'm glad it helped.
Does anyone know why
It's taken about 15 hours to copy 800GB. I'm sure there's some tuning I
can do.
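A quick aside on the numbers: 800 GB in 15 hours works out to roughly 15 MB/s sustained (800 * 1024 MB / 54,000 s ≈ 15 MB/s), a useful baseline against the ~200 MB/s the mbuffer runs reach later in this thread.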
The system is now running:
# zfs send storage/bac...@transfer | zfs receive storage/compressed/bacula
All the drives are Hitachi HDS722020ALA330 JKAOA28A ATA-8 SATA 2.x devices.
from systat:
Thanks. I just installed it. I'll use it next time. I don't want to
interrupt this one. I'd like to see how long it takes. Then compare.