Re: [OpenIndiana-discuss] ZFS remote receive

2012-11-01 Thread Jim Klimov

On 2012-11-01 01:47, Richard Elling wrote:

Finally, a data point: using MTU of 1500 with ixgbe you can hit wire speed on a
modern CPU.



There is no CSMA/CD on gigabit and faster available from any vendor today.
Everything today is switched.


Ok then, I'll stand corrected by practice, although my networking
education speaks against this statement. We were taught that since GbE
retains (at least formally) compatibility with older Ethernet, and with
half-duplex in particular, it is capable of hubbing, CSMA/CD, etc. These
provisions are of course suboptimal (useless, and harmful to performance)
on point-to-point full-duplex links like host-to-host and host-to-switch.

It may, however, be a matter of modern OS defaults disabling those
obsolete mechanisms. But they should still be present, and when Jumbo
frames were introduced (for GbE, and no other speed), these mechanisms
were known to cause exactly this sort of problem.

Likely, interrupt coalescing, NIC offloads and CPU horsepower played
their roles as well. And buffering - I had rather forgotten about that.
(Note also that buffers blindly thrown at any problem, at all layers
of the stack, may only hide performance degradation and defeat the
protocol's ability to adapt to reduced network capacity, as often
happens with WiFi under interference: draining several hundred buffered
packets over a 1 Mbit link before even noticing that a problem exists
can take the better part of a second, if not more.)

My 2c,
//Jim

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS remote receive

2012-11-01 Thread Richard Elling
On Nov 1, 2012, at 1:24 AM, Jim Klimov jimkli...@cos.ru wrote:

 On 2012-11-01 01:47, Richard Elling wrote:
 Finally, a data point: using MTU of 1500 with ixgbe you can hit wire speed 
 on a
 modern CPU.
 
 There is no CSMA/CD on gigabit and faster available from any vendor today.
 Everything today is switched.
 
 Ok then, I'll stand corrected by the practice, although my networking
 education speaks against this statement. We were taught that since GbE
 retains (at least formally) compatibility with older ethernet, and with
 halfduplex in particular, it is capable of hubbing, csma/cd, etc. These
 provisions are of course suboptimal (useless and harmful to performance)
 on point-to-point fullduplex links like host-to-host and host-to-switch.

It does retain half-duplex compatibility. But when it is FDX, it does not
use the HDX timing. I recall seeing an HDX GbE hub for sale once... but
have never seen one in real life.

 It may be a matter of particular modernized OS defaults however - to
 disable those obsolete beasts. But they should be there, and when Jumbo
 was introduced (for GbE, none other) - these beasts were known to cause
 these sorts of problems.

Jumbo preceded GbE, but late in the deployment of FastEthernet, so there
weren't many FastEthernet switches that supported Jumbo frames, thus
limiting their deployment. Everything converged with GbE --  inexpensive
switches and Jumbo frame support.

 Likely, interrupt coalescing, NIC offloads and CPU horsepower played
 their roles as well. And buffering, I've rather forgot about this.
 (Also note that buffers blindly thrown at any problem at all layers
 of the stack might only hide the performance degradations and disable
 protocol adaptability to reduced network abilities, as often happens
 with WiFi during interference, for example - draining the several
 hundred packets of the buffer over 1Mbit before even noticing that
 a problem exists, can take a good part of the second, if not more
 than one).

Indeed :-)
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422





Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Sebastian Gabler



Message: 7
Date: Tue, 30 Oct 2012 22:03:13 +0400
From: Jim Klimov <jimkli...@cos.ru>
To: Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>

2012-10-30 19:21, Sebastian Gabler wrote:

Whereas that's relative: performance is still at a quite miserable 62
MB/s through a gigabit link. Apparently, my environment has room for
improvement.

Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
depending on your NICs, switches and other networking gear) for
unrouted (L2) storage links? It is said that traditional MTU=1500
has too many overheads with packet size and preamble delays between
packets that effectively limit a gigabit to 700-800Mbps...

//Jim



--
The MTU is 1500 on both the source and target systems, and no
fragmentation is happening. On the target system I am seeing writes up to
160 MB/s with frequent zpool iostat probes. When iostat intervals are 5s
or more, there is a steady stream of 62 MB/s. At this time I am not sure
that this is indeed a networking issue. I am also not sure how jumbo
frames could provide an interesting benefit here. The usually alleged 15%
(which is already on the high side) is not in the scope of making or
breaking the use case.


BR

Sebastian



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Jim Klimov

2012-10-31 13:58, Sebastian Gabler wrote:

2012-10-30 19:21, Sebastian Gabler wrote:

Whereas that's relative: performance is still at a quite miserable 62
MB/s through a gigabit link. Apparently, my environment has room for
improvement.

Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
depending on your NICs, switches and other networking gear) for
unrouted (L2) storage links? It is said that traditional MTU=1500
has too many overheads with packet size and preamble delays between
packets that effectively limit a gigabit to 700-800Mbps...




The MTU is 1500 on source and target system, and no fragmentation is
happening.


The point of Jumbo frames (in unrouted L2 ethernet segments) is to
remove many overheads - CSMA/CD delays being a large contributor -
and send unfragmented chunks of 9-16 KB in size, increasing local
network efficiency.

 On the target system I am seeing writes up to

160 MB/s with frequent zpool iostat probes. When iostat probes are up to
5s+, there is a steady stream of 62 MB/s.


I believe this *may* mean that your network receives data
into memory (ZFS cache) at 62 MB/s, and then every 5s the dirty cache
is sent to disks during a TXG commit at whatever speed it can burst
(160 MB/s in your case).

 At this time I am not sure if

that is indeed a networking issue. I am also not sure how jumbo frames
could provide an interesting benefit here. The usually alleged 15% (which
are already on the high side) are not in the scope of making or breaking
the use case.


Mostly elaborated above.

Other ways to reduce networking lags were discussed by other
responders, including use of netcat to pipe the stream quickly,
ssh without encryption/with cheap encryption/with HPC patches.

Based on some experience with NFS and OpenVPN I might also
suggest trying UDP vs. TCP (i.e. with netcat), though this
would probably play on the unsafe side: UDP-based programs
implement retries (like NFS) or accept the loss of data (like VoIP)
as they deem necessary, and zfs send probably does neither;
the stream is rather fragile already.

//Jim




Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Roy Sigurd Karlsbakk
 2012-10-30 19:21, Sebastian Gabler wrote:
  Whereas that's relative: performance is still at a quite miserable
  62
  MB/s through a gigabit link. Apparently, my environment has room for
  improvement.
 
 Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
 depending on your NICs, switches and other networking gear) for
 unrouted (L2) storage links? It is said that traditional MTU=1500
 has too many overheads with packet size and preamble delays between
 packets that effectively limit a gigabit to 700-800Mbps...

Erm… That's not true. The IPv4 header is 20 bytes, and the TCP header the
same, meaning 40 bytes in total out of a 1500-byte packet, leaving 1460
bytes for real payload, or 97.3%. An overhead of 20-30% is *not* correct.
The ~3% overhead matches well what I see in practice on my networks. You
will get a gain with jumbo frames, but mostly lower CPU use for handling
the packets, especially in iSCSI environments, and not much from the
lower packet-size overhead…
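
As a side note, the arithmetic above counts only the IP and TCP headers; a quick back-of-envelope that also includes Ethernet framing and the inter-frame gap (a sketch, assuming standard untagged Ethernet framing constants) looks like:

```shell
# Per-frame efficiency at MTU 1500, counting Ethernet framing overhead:
# 7B preamble + 1B SFD + 14B header + 4B FCS + 12B inter-frame gap = 38B
awk 'BEGIN {
  payload = 1500 - 20 - 20          # MTU minus IPv4 and TCP headers
  wire    = 1500 + 14 + 4 + 8 + 12  # bytes on the wire per frame
  printf "payload %d bytes, wire efficiency %.1f%%\n", payload, 100*payload/wire
}'
# prints: payload 1460 bytes, wire efficiency 94.9%
```

Either way the framing cost is a few percent, not 20-30%.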

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to avoid
excessive use of idioms of xenotypic etymology. In most cases adequate
and relevant synonyms exist in Norwegian.



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Richard Elling
On Oct 31, 2012, at 5:53 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:

 2012-10-30 19:21, Sebastian Gabler wrote:
 Whereas that's relative: performance is still at a quite miserable
 62
 MB/s through a gigabit link. Apparently, my environment has room for
 improvement.
 
 Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
 depending on your NICs, switches and other networking gear) for
 unrouted (L2) storage links? It is said that traditional MTU=1500
 has too many overheads with packet size and preamble delays between
 packets that effectively limit a gigabit to 700-800Mbps...
 
 Erm… That's not true. IPv4 header is 20 bytes. TCP header the same, meaning 
 40 bytes in total out of 1500 bytes payload, leaving 1460 bytes left for 
 real payload, or 97.3%. An overhead of 20-30% is *not* correct. The ~3% 
 overhead matches well what I see in practice on my networks. You will get a 
 gain with jumboframes, but mostly lower CPU use for handling the packets, and 
 especially in iSCSI environments, but not much for the lower packet size 
 overhead…

In the bad old days, you could get one interrupt per packet, but those days
are long gone for high-speed NICs. Today, interrupt coalescing is quite common,
further reducing the benefits of jumbo frames.

NB, setting the MTU to 9000 can actually work against performance when
combined with interrupt coalescing on some popular OSes. Ideally, the MTU
should be set to what your workload needs and not more.

Finally, a data point: using MTU of 1500 with ixgbe you can hit wire speed on a
modern CPU.
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422





Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Richard Elling
On Oct 31, 2012, at 3:37 AM, Jim Klimov jimkli...@cos.ru wrote:

 2012-10-31 13:58, Sebastian Gabler wrote:
 2012-10-30 19:21, Sebastian Gabler wrote:
 Whereas that's relative: performance is still at a quite miserable 62
 MB/s through a gigabit link. Apparently, my environment has room for
 improvement.
 Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
 depending on your NICs, switches and other networking gear) for
 unrouted (L2) storage links? It is said that traditional MTU=1500
 has too many overheads with packet size and preamble delays between
 packets that effectively limit a gigabit to 700-800Mbps...
 
 
 The MTU is on 1500 on source and target system, and there are no
 fragmentations happening.
 
 The point of Jumbo frames (in unrouted L2 ethernet segments) is to
 remove many overheads - CSMA/CD delays being a large contributor -
 and send unfragmented chunks of 9-16Kb in size, increasing the local
 network efficiency.

There is no CSMA/CD on gigabit and faster available from any vendor today.
Everything today is switched.
 -- richard

 
  On the target system I am seeing writes up to
 160 MB/s with frequent zpool iostat probes. When iostat probes are up to
 5s+, there is a steady stream of 62 MB/s.
 
 I believe this *may* mean that your networking buffer receives data
 into memory (ZFS cache) at 62Mb/s, then every 5s the dirty cache
 is sent to disks during TXG commit at whatever speed it can burst
 (160Mb/s in your case).

More likely: straight pipe send | receive is a blocking configuration. This
is why most people who go for high speed send | receive use a buffer,
such as mbuffer, to smooth out the performance. Check the archives,
this has been rehashed hundreds of times on these aliases.
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422





Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-31 Thread Timothy Coalson
On Wed, Oct 31, 2012 at 8:44 PM, Richard Elling 
richard.ell...@richardelling.com wrote:

   On the target system I am seeing writes up to
  160 MB/s with frequent zpool iostat probes. When iostat probes are up to
  5s+, there is a steady stream of 62 MB/s.
 
  I believe this *may* mean that your networking buffer receives data
  into memory (ZFS cache) at 62Mb/s, then every 5s the dirty cache
  is sent to disks during TXG commit at whatever speed it can burst
  (160Mb/s in your case).

 More likely: straight pipe send | receive is a blocking configuration. This
 is why most people who go for high speed send | receive use a buffer,
 such as mbuffer, to smooth out the performance. Check the archives,
 this has been rehashed hundreds of times on these aliases.


Thank you very much for rehashing it again. I stuck | mbuffer -b 8192 -m
256M -q 2>/dev/null | (some preliminary testing seemed to indicate it
wanted an 8192 blocksize for pipes, and when run from cron it produces an
odd warning message, hence the redirect) in the middle of my send | ssh |
recv pipe and was rewarded with this over gigabit ethernet:

received 8.63GB stream in 89 seconds (99.3MB/sec)

Previously I was getting 70MB/s or less, even after switching to arcfour128
for ssh cipher.  My only gripe is that mbuffer doesn't have a manpage on
OpenIndiana.
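
The full buffered pipeline described above can be wrapped in a small helper (a sketch; the snapshot, host and dataset names are placeholders, and it assumes mbuffer and passwordless ssh are already set up on both ends):

```shell
# Sketch: buffered zfs send over ssh. All names below are hypothetical.
# mbuffer decouples the bursty sender from the blocking receiver, so the
# pipe is not stalled by TXG commits on the receiving side.
send_buffered() {
  snap="$1"; dest_host="$2"; dest_fs="$3"
  zfs send -v "$snap" \
    | mbuffer -q -b 8192 -m 256M 2>/dev/null \
    | ssh "$dest_host" "pfexec zfs receive -vF $dest_fs"
}
# usage: send_buffered tank/fs@today backup@host backuppool/fs
```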

Tim


Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-30 Thread Jim Klimov

2012-10-30 19:21, Sebastian Gabler wrote:

Whereas that's relative: performance is still at a quite miserable 62
MB/s through a gigabit link. Apparently, my environment has room for
improvement.


Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
depending on your NICs, switches and other networking gear) for
unrouted (L2) storage links? It is said that traditional MTU=1500
has too many overheads with packet size and preamble delays between
packets that effectively limit a gigabit to 700-800Mbps...

//Jim



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-30 Thread Mike La Spina
Hi Sebastian,

Some examples using RBAC in my blog entry
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zf
s
could help.

Regards,
Mike

-Original Message-
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net] 
Sent: Tuesday, October 23, 2012 6:53 AM
To: openindiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] ZFS remote receive

Hi,

I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (a
rights problem); and pfexec is disabled on the target host (as I
understand, this is nowadays the default for OI 151a...)

What are the suggestions to solve this? I tried several approaches with
sudo and su, to no avail. I had also tried to enable pfexec on the target
system, and couldn't do it.

Thanks for your help.

BR

Sebastian



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-30 Thread Mike La Spina
Some additional example in this one
http://blog.laspina.ca/ubiquitous/encapsulating-vt-d-accelerated-zfs-sto
rage-within-esxi


-Original Message-
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net] 
Sent: Tuesday, October 23, 2012 6:53 AM
To: openindiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] ZFS remote receive

Hi,

I am facing a problem with zfs receive through ssh. As usually, root
can't log on ssh; the log on users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand it
is nowadays default for OI151_a...)

What are the suggestions to solve this? I tried several approaches with
sudo, and su to no avail. I had tried to enable pfexec on the target
system, too and couldn't do it.

Thanks for your help.

BR

Sebastian



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Jonathan Adams
you could try zfs send'ing to a local file and chmod/chown'ing the file
so that a known local user can access it on the sending server;

then, on the receiving server, you could rsync/ssh into the sending
server, grab the file, and zfs receive it as root.

Jon

On 23 October 2012 12:52, Sebastian Gabler sequoiamo...@gmx.net wrote:
 Hi,

 I am facing a problem with zfs receive through ssh. As usually, root can't
 log on ssh; the log on users can't receive a zfs stream (rights problem),
 and pfexec is disabled on the target host (as I understand it is nowadays
 default for OI151_a...)

 What are the suggestions to solve this? I tried several approaches with
 sudo, and su to no avail. I had tried to enable pfexec on the target system,
 too and couldn't do it.

 Thanks for your help.

 BR

 Sebastian



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Doug Hughes

On 10/23/2012 7:52 AM, Sebastian Gabler wrote:

Hi,

I am facing a problem with zfs receive through ssh. As usually, root 
can't log on ssh; the log on users can't receive a zfs stream (rights 
problem), and pfexec is disabled on the target host (as I understand 
it is nowadays default for OI151_a...)


What are the suggestions to solve this? I tried several approaches 
with sudo, and su to no avail. I had tried to enable pfexec on the 
target system, too and couldn't do it.


you can run it over any TCP transport. Do-it-yourself options include
using ttcp or netcat as a transport, but almost anything will do. It
requires a little more synchronization to do it this way, since the
receiver must be running (piped into zfs recv) before the transmitter is
started, or the transmitter will abort.
In the end, though, you need to run the zfs recv as root somehow. If
that's the crux of the problem, and not the inability to ssh as root,
you'll have to figure out a way to get root at least for the zfs recv
process.
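
A minimal sketch of the netcat transport just described, with the start-order constraint made explicit (hypothetical host, port and dataset names; nc flag spelling varies between netcat implementations):

```shell
# The receiver must be listening BEFORE the sender starts, or the
# sender's connection is refused and zfs send aborts.
# Step 1, on the receiving host (as root):
receiver_cmd='nc -l -p 31337 | zfs receive -F backuppool/fs'
# Step 2, only afterwards, on the sending host:
sender_cmd='zfs send tank/fs@snap | nc receiver-host 31337'
```

Storing the commands as strings here only shows the pairing; in practice each one is run on its own host, receiver first.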




Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Gary Gendel

On 10/23/12 8:23 AM, Doug Hughes wrote:

On 10/23/2012 7:52 AM, Sebastian Gabler wrote:

Hi,

I am facing a problem with zfs receive through ssh. As usually, root 
can't log on ssh; the log on users can't receive a zfs stream (rights 
problem), and pfexec is disabled on the target host (as I understand 
it is nowadays default for OI151_a...)


What are the suggestions to solve this? I tried several approaches 
with sudo, and su to no avail. I had tried to enable pfexec on the 
target system, too and couldn't do it.


you can run it over any tcp transport. Do it yourself options include 
using ttcp or netcat as a transport, but almost anything will do. It 
requires a little bit more synchronization to do it this way since the 
receiver must be running (piped into zfs recv) before the transmitter 
is started or the transmitter will abort.
In the end, though, you need to run the zfs recv as root somehow. If 
that's the crux of the problem and not the inability to ssh as root, 
you'll have to figure out a fix to get root at least for the zfs recv 
process.
I had the same issue with rsync over ssh.  I finally decided to make 
root a user and restrict login access.






Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Jonathan Adams
you could always set up an rsync server (not ssh):

man rsyncd.conf

this allows very controlled access, including read-only/specific IP
configurations.

Jon

On 23 October 2012 13:32, Gary Gendel g...@genashor.com wrote:
 On 10/23/12 8:23 AM, Doug Hughes wrote:

 On 10/23/2012 7:52 AM, Sebastian Gabler wrote:

 Hi,

 I am facing a problem with zfs receive through ssh. As usually, root
 can't log on ssh; the log on users can't receive a zfs stream (rights
 problem), and pfexec is disabled on the target host (as I understand it is
 nowadays default for OI151_a...)

 What are the suggestions to solve this? I tried several approaches with
 sudo, and su to no avail. I had tried to enable pfexec on the target system,
 too and couldn't do it.

 you can run it over any tcp transport. Do it yourself options include
 using ttcp or netcat as a transport, but almost anything will do. It
 requires a little bit more synchronization to do it this way since the
 receiver must be running (piped into zfs recv) before the transmitter is
 started or the transmitter will abort.
 In the end, though, you need to run the zfs recv as root somehow. If
 that's the crux of the problem and not the inability to ssh as root, you'll
 have to figure out a fix to get root at least for the zfs recv process.

 I had the same issue with rsync over ssh.  I finally decided to make root a
 user and restrict login access.






Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Chris Gerhard


Or send to a named pipe on the remote server that root is receiving from.




On 10/23/12 13:03, Jonathan Adams wrote:

you could try zfs send'ing to a local file and chmod/chown the file so
that a known local user can access it on the sending server

then on the receiving server you could rsync/ssh into the sending
server grab the file and then zfs receive as root.

Jon

On 23 October 2012 12:52, Sebastian Gablersequoiamo...@gmx.net  wrote:

Hi,

I am facing a problem with zfs receive through ssh. As usually, root can't
log on ssh; the log on users can't receive a zfs stream (rights problem),
and pfexec is disabled on the target host (as I understand it is nowadays
default for OI151_a...)

What are the suggestions to solve this? I tried several approaches with
sudo, and su to no avail. I had tried to enable pfexec on the target system,
too and couldn't do it.

Thanks for your help.

BR

Sebastian



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Geoff Nordli

On 12-10-23 04:52 AM, Sebastian Gabler wrote:

Hi,

I am facing a problem with zfs receive through ssh. As usually, root 
can't log on ssh; the log on users can't receive a zfs stream (rights 
problem), and pfexec is disabled on the target host (as I understand 
it is nowadays default for OI151_a...)


What are the suggestions to solve this? I tried several approaches 
with sudo, and su to no avail. I had tried to enable pfexec on the 
target system, too and couldn't do it.


Thanks for your help.

BR

Sebastian



Hi Sebastian.

I use the sudo method, and I also assign the user zfs rights for that
pool. Here is my sudoers file:

bkuser ALL = NOPASSWD: /usr/sbin/zfs

and here is the rights assignment:

zfs allow -s @adminrole clone,create,destroy,mount,promote,quota,receive,rename,reservation,rollback,send,snapshot,userprop backup

zfs allow bkuser @adminrole backup


I am sure it could be a lot tighter for security, but it works.
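
With that sudoers entry and delegation in place, the sending side would look roughly like this (a sketch; pool, dataset and host names are placeholders):

```shell
# Hypothetical end-to-end use of the setup above: bkuser may run
# /usr/sbin/zfs via sudo without a password on the receiving host,
# and the @adminrole delegation covers the receive itself.
zfs_push() {
  snap="$1"; dest_fs="$2"
  zfs send "$snap" | ssh bkuser@backup-host sudo /usr/sbin/zfs receive -vF "$dest_fs"
}
# usage: zfs_push tank/data@nightly backup/data
```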

Have a great day!!

Geoff







Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Timothy Coalson
I set this up with pfexec, I think on 151a4, and it has survived updates
without change so far (currently on a7). All I had to do was add the
ZFS File System Management profile to the backup user. I did this via
the users-admin GUI; I think usermod -P does the same thing. Here is the
relevant line from /etc/user_attr:

backup::::type=normal;profiles=ZFS File System Management;roles=netadm,netcfg,root,zfssnap

I didn't need to mess with the properties on the filesystem.  I set up ssh
keys for passwordless ssh, and my incremental zfs send/receive command
looks like this (with variables replaced and logging redirection removed):

zfs send -vI oldsnap fs@newsnap | ssh backup@host pfexec zfs
receive -vF \"destfs\"

Works pretty well, though I get ~70 MB/s on gigabit ethernet instead of
the theoretically possible 120 MB/s, and I'm not sure why (NFS gets
pretty close to 120 MB/s on the same network).

Tim

On Tue, Oct 23, 2012 at 12:59 PM, Geoff Nordli geo...@gnaa.net wrote:

 On 12-10-23 04:52 AM, Sebastian Gabler wrote:

 Hi,

 I am facing a problem with zfs receive through ssh. As usually, root
 can't log on ssh; the log on users can't receive a zfs stream (rights
 problem), and pfexec is disabled on the target host (as I understand it is
 nowadays default for OI151_a...)

 What are the suggestions to solve this? I tried several approaches with
 sudo, and su to no avail. I had tried to enable pfexec on the target
 system, too and couldn't do it.

 Thanks for your help.

 BR

 Sebastian



 Hi Sebastian.

 I use the sudo method and I also assign the user zfs rights for that pool.
 here is my sudoers file:

 bkuserALL = NOPASSWD: /usr/sbin/zfs

 and here is the rights assignment:

  zfs allow -s @adminrole clone,create,destroy,mount,promote,quota,receive,rename,reservation,rollback,send,snapshot,userprop backup
  zfs allow bkuser @adminrole backup


 I am sure it could be a lot tighter for security, but it works.

 Have a great day!!

 Geoff








Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Doug Hughes

On 10/23/2012 4:13 PM, Timothy Coalson wrote:


Works pretty well, though I get ~70MB/s on gigabit ethernet instead of the
theoretically possible 120MB/s, and I'm not sure why (NFS gets pretty close
to 120MB/s on the same network).



There's a fair bit of overhead to ssh and to zfs send/receive, so you're
doing very well.





Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Michael Stapleton
You could try setting the crypto algorithm to none if you do not need
encryption.

ssh -c none 

It might also be worth checking whether it is ssh that is slowing you down.


Mike

On Tue, 2012-10-23 at 17:03 -0400, Doug Hughes wrote:

 On 10/23/2012 4:13 PM, Timothy Coalson wrote:
 
  Works pretty well, though I get ~70MB/s on gigabit ethernet instead of the
  theoretically possible 120MB/s, and I'm not sure why (NFS gets pretty close
  to 120MB/s on the same network).
 
 
  There's a fair bit of overhead to ssh and to zfs send/receive, so you're 
 doing very well.
 
 


Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Jason Matthews





From: Michael Stapleton [mailto:michael.staple...@techsologic.com] 

 You could try to set the crypto algorithm to none if you do not 
 need encryption.

 ssh -c none 

That won't work with the shipped ssh. You could use netcat:

target# nc -l -p 31337 | zfs recv data/path/etc

source# zfs send data/path/etc@snap | nc target 31337

that will do it without encryption.

If you want to use the shipped ssh with the least overhead, consider
using arcfour: ssh -c arcfour

Also, if you are going over the WAN, I strongly recommend using HPN ssh.
I have a tutorial
(https://broken.net/zfs/using-hpn-openssh-to-flee-the-roach-motel-aws/) on
how to use it. I routinely get 80 megabits/sec transferring data out of
east-coast availability zones at AWS to San Francisco.

In my view the HPN patches should be incorporated not only into ssh,
but into all network application stacks, as a configuration option.

j.



Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Timothy Coalson
 You could try to set the crypto algorithm to none if you do not need
 encryption.

 ssh -c none 


If I really needed the extra speed, it would probably be better to spawn
a netcat over ssh, so I don't have to modify the target's sshd_config. I
played with the ciphers, and arcfour128 seemed to give a marginal
increase in speed (though that could be measurement error).
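
The "netcat over ssh" idea could look roughly like this (a sketch under several assumptions: the port, user, host and dataset names are made up, and the sleep is a crude stand-in for proper synchronization):

```shell
# Use ssh only to launch the remote listener; the bulk data then
# streams in cleartext over nc, bypassing ssh's cipher overhead.
nc_over_ssh() {
  snap="$1"; host="$2"; dest="$3"; port=31337
  ssh "backup@$host" "nc -l -p $port | pfexec zfs receive -vF $dest" &
  sleep 2                      # crude: wait for the listener to come up
  zfs send "$snap" | nc "$host" "$port"
  wait                         # reap the backgrounded ssh
}
```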


 Might also be worth trying to see if it is ssh that is slowing you down.


It probably is ssh, considering the speeds I have seen on local
send/receives. However, I am happy with 70 MB/s for now, since zfs send
effectively eliminates the rsync filesystem-walk overhead (rsync is what
I was using previously, and its walk had grown to something like an hour).

Anyway, enough with me hijacking the topic, back to Sebastian's problem.

Tim


Re: [OpenIndiana-discuss] ZFS remote receive

2012-10-23 Thread Roy Sigurd Karlsbakk
 I use the sudo method and I also assign the user zfs rights for that
 pool.
 here is my sudoers file:
 
 bkuser ALL = NOPASSWD: /usr/sbin/zfs
 
 and here is the rights assignment:
 
 zfs allow -s @adminrole
 clone,create,destroy,mount,promote,quota,receive,rename,reservation,rollback,send,snapshot,userprop
 backup
 zfs allow bkuser @adminrole backup
 
 I am sure it could be a lot tighter for security, but it works.

No point in using zfs allow if you run zfs receive with sudo…

Btw, I tried allowing all sorts of things to a similar user for zfs
receive, but never got it to work, and ended up setting up sudo as above
instead. These things may have been fixed by now, though, since this was
some time ago (and I don't work there anymore).

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to avoid
excessive use of idioms of xenotypic etymology. In most cases adequate
and relevant synonyms exist in Norwegian.
