Joseph L. Casale wrote:
I have my own application that uses large circular buffers and a socket
connection between hosts. The buffers keep data flowing during ZFS
writes and the direct connection cuts out ssh.
Application, as in not script (something you can share)?
Not yet!
--
With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
socket connection rather than ssh for full sends and 7-12MB/sec for
incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
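The exact invocation shows up later in the thread; a minimal sketch of the socket-based approach, with hypothetical pool, snapshot, and host names:

```shell
# Receiver: listen on a TCP port and feed the raw stream into zfs recv
nc -l -p 3001 | zfs recv -v tank/backup

# Sender: full send of a snapshot over the socket (no ssh encryption overhead)
zfs send tank/data@snap1 | nc receiver-host 3001

# Incremental send: ship only the delta between two snapshots
zfs send -i tank/data@snap1 tank/data@snap2 | nc receiver-host 3001
```

Note that some nc builds do not exit when the stream ends, so the sender side may need to be killed or run with a timeout.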
Joseph L. Casale wrote:
With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
socket connection rather than ssh for full sends and 7-12MB/sec for
incrementals, depending on the data set.
Ian,
What's the syntax you use for this procedure?
I have my own application that uses large circular buffers and a socket
connection between hosts. The buffers keep data flowing during ZFS
writes and the direct connection cuts out ssh.
Application, as in not script (something you can share)?
:)
jlc
Thank you for all your replies, I'm collecting my responses in one
message below:
On Tue, Aug 18, 2009 at 7:43 PM, Nicolas Williams nicolas.willi...@sun.com wrote:
On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
We have a system with some large datasets (3.3 TB and about 35
On Aug 18, 2009, at 1:16 PM, Paul Kraus wrote:
Is the speed of a 'zfs send' dependent on file size / number of files?
Not directly. It is dependent on the amount of changes per unit time.
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional
Posted from the wrong address the first time, sorry.
Is the speed of a 'zfs send' dependent on file size / number of files?
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between
Is the speed of a 'zfs send' dependent on file size / number of files?
I am going to say no; I am running a backup rig on *far* inferior iron, and
doing a send/recv over ssh through GigE, last night's replication
gave the following: received 40.2GB stream in 3498 seconds
On Tue, Aug 18, 2009 at 22:22, Paul Kraus pk1...@gmail.com wrote:
Posted from the wrong address the first time, sorry.
Is the speed of a 'zfs send' dependent on file size / number of files?
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional
I changed to try zfs send of UFS on a zvol as well:
received 92.9GB stream in 2354 seconds (40.4MB/sec)
Still fast enough to use. I have yet to get around to trying something
considerably larger in size.
Lund
Jorgen Lundman wrote:
So you recommend I also do speed test on larger
Did the ZFS send speed improvements make it into Solaris 10 update 7?
If not, are they targeted for a Solaris 10 update?
Thanks,
--
Ian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Au contraire...
From what I have seen, larger file systems and large numbers of files
seem to slow down zfs send/receive, worsening the problem. So it may be
a good idea to partition your file system, subdividing it into smaller
ones, replicating each one separately.
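The partitioning idea above can be sketched as follows; the dataset and host names are hypothetical:

```shell
# Instead of one huge filesystem, split the data across child filesystems
zfs create tank/data/proj1
zfs create tank/data/proj2

# Snapshot and replicate each child separately; smaller streams finish
# sooner and can be run in parallel
for fs in proj1 proj2; do
  zfs snapshot tank/data/$fs@today
  zfs send tank/data/$fs@today | ssh backup-host zfs recv -d backup
done
```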
Dirk
On Tue, the
Jorgen,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs too, not only on the size of the
snapshot being sent.
Regards
Dirk
--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman lund...@gmo.jp wrote:
Sorry, yes. It is straight;
# time
So you recommend I also do speed test on larger volumes? The test data I
had on the b114 server was only 90GB. Previous tests included 500G ufs
on zvol etc. It is just it will take 4 days to send it to the b114
server to start with ;) (From Sol10 servers).
Lund
Dirk Wriedt wrote:
btw: buffering data for zfs send and zfs recv on the other side could make it
even faster. You could use something like mbuffer with buffers of 1-2GB,
for example.
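A sketch of the mbuffer approach; the port, buffer sizes, and dataset names are assumptions:

```shell
# Receiver: mbuffer listens on a TCP port with a 1GB in-memory buffer,
# smoothing out bursts so zfs recv is never starved for data
mbuffer -s 128k -m 1G -I 9090 | zfs recv -v tank/backup

# Sender: zfs send fills the local buffer, which streams to the receiver;
# the buffers on both ends decouple disk I/O stalls from the network
zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver-host:9090
```

This serves the same role as the circular-buffer application mentioned earlier in the thread: the buffer keeps the pipe full while zfs send pauses between transaction groups.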
On Fri, 22 May 2009, Jorgen Lundman wrote:
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB
Brent Jones wrote:
On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman lund...@gmo.jp wrote:
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Can you give any details about your data set, what you piped zfs
Sorry, yes. It is straight;
# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s
# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Sending is osol-b114.
Receiver is Solaris 10 10/08
On Fri, May 22 at 11:05, Robert Milkowski wrote:
btw: buffering data for zfs send and zfs recv on the other side could make it
even faster. You could use something like mbuffer with buffers of 1-2GB,
for example.
As another datapoint, the 111a opensolaris preview got me ~29MB/s
through an SSH
On Fri, May 22, 2009 at 04:40:43PM -0600, Eric D. Mudama wrote:
As another datapoint, the 111a opensolaris preview got me ~29MB/s
through an SSH tunnel with no tuning on a 40GB dataset.
Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
E5520 writing to a few mirrored
To finally close my quest. I tested zfs send in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support
contract to allow us to run b114 and we're set! :)
Thanks,
Lund
Jorgen Lundman wrote:
We
Jorgen Lundman wrote:
We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make zfs send
usable.
Exactly how does build 105 translate to Solaris 10 10/08? My current
There is no easy/obvious mapping of Solaris
We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make zfs send
usable.
Exactly how does build 105 translate to Solaris 10 10/08? My current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the
next
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR firing up the secondary.
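The every-N-minutes scheme described above could be sketched roughly like this (snapshot naming, hosts, and the assumption that an initial full send has already been received are all hypothetical):

```shell
#!/bin/sh
# Periodic incremental replication for DR, run from cron every N minutes.
# Assumes the remote side already holds an earlier snapshot of $FS.
FS=tank/data
REMOTE=dr-host

# Most recent local snapshot becomes the incremental base
PREV=$(zfs list -H -t snapshot -o name -s creation -r $FS | tail -1)

# Take a new snapshot stamped with the current time
NOW=$FS@$(date +%Y%m%d%H%M)
zfs snapshot $NOW

# Ship only the delta between the previous snapshot and the new one;
# -F on the receiver rolls it back to the base if it has diverged
zfs send -i $PREV $NOW | ssh $REMOTE zfs recv -F $FS
```

Old snapshots would still need to be pruned on both sides, and the interval has to be long enough for each incremental to finish before the next one starts.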
Torrey McMahon wrote:
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR
Matthew Ahrens wrote:
Torrey McMahon wrote:
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a
large data store where they will be taking snapshots every N minutes
or so, sending the difference of the snapshot and previous snapshot
with zfs send -i to a remote
Torrey McMahon wrote:
Matthew Ahrens wrote:
I'm only doing an initial investigation now so I have no test data at
this point. The reason I asked, and I should have tacked this on at the
end of the last email, was a blog entry that stated zfs send was slow