Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand it
is nowadays the default for OI151_a...)
What are the suggestions to solve this?
you could try zfs send'ing to a local file and chmod/chown'ing the file so
that a known local user can access it on the sending server;
then on the receiving server you could rsync/ssh into the sending
server, grab the file, and then zfs receive as root.
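A rough sketch of that approach (pool, snapshot, path, and user names
are all made up here):

```shell
# On the sending server, as root: dump the stream to a file,
# then hand it to a known local user
zfs send tank/data@snap1 > /export/tmp/data.zfs
chown bkuser /export/tmp/data.zfs

# On the receiving server: pull the file over ssh, then receive as root
rsync -e ssh bkuser@sender:/export/tmp/data.zfs /var/tmp/
zfs receive backup/data < /var/tmp/data.zfs
```

The obvious cost is the intermediate file: you need scratch space for
the full stream on both ends.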
Jon
On 23 October 2012 12:52, Sebastian Gabler
On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand
it is nowadays the default for
Hi,
I've been using zfs for a while, but still there are some questions that
have remained unanswered even after reading the documentation, so I
thought I would ask them here.
I have learned that zfs datasets can be expanded by adding vdevs. Say
that you have created a raidz3 pool named
On 10/23/12 8:23 AM, Doug Hughes wrote:
On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I
you could always set up an rsync server (not ssh):
man rsyncd.conf
this allows very controlled access, including read-only/specific IP
configurations.
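A minimal module along those lines might look like this (module name,
path, and IP are invented; see man rsyncd.conf for the full option list):

```
# /etc/rsyncd.conf -- hypothetical read-only module for stream dumps
[zfsdumps]
    path = /export/tmp/dumps
    read only = yes
    hosts allow = 192.168.1.10
```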
Jon
On 23 October 2012 13:32, Gary Gendel g...@genashor.com wrote:
On 10/23/12 8:23 AM, Doug Hughes wrote:
On 10/23/2012 7:52 AM, Sebastian
Or send to a named pipe on the remote server that root is receiving from.
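A sketch of the named-pipe variant (host, user, and dataset names
are made up):

```shell
# On the target, as root: receive from a fifo the ssh user can write to
mkfifo /var/tmp/recv.fifo
chown bkuser /var/tmp/recv.fifo
zfs receive backup/data < /var/tmp/recv.fifo &

# On the source: stream into the fifo over ssh as the unprivileged user
zfs send tank/data@snap1 | ssh bkuser@target 'cat > /var/tmp/recv.fifo'
```

Unlike the file-based approach, nothing is staged on disk, but the
receive has to be running before the send starts.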
On 10/23/12 13:03, Jonathan Adams wrote:
you could try zfs send'ing to a local file and chmod/chown the file so
that a known local user can access it on the sending server
then on the receiving server you could
On 10/23/2012 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation so
I thought I would ask them here.
I have learned that zfs datasets can be expanded by adding vdevs. Say
Comments inline...
On 10/23/12 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation so
I thought I would ask them here.
I have learned that zfs datasets can be expanded by
Do you happen to know how that's done in OI? Otherwise I would have to
move each file one by one to a disk location outside the dataset and
then move it back or zfs send the dataset to another pool of at least
equal size and then zfs receive it back to the expanded pool.
Unless something was
Wouldn't walking the filesystem, making a copy, deleting the original
and renaming the copy balance things?
e.g.
#!/bin/sh
LIST=`find /foo -type d`
for I in ${LIST}
do
cp ${I} ${I}.tmp
rm ${I}
mv ${I}.tmp ${I}
done
or perhaps
# === rewrite.sh ===
#!/bin/bash
fn=$1
On 10/23/2012 11:08 AM, Robin Axelsson wrote:
On 2012-10-23 15:41, Doug Hughes wrote:
On 10/23/2012 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation
so I thought I would
Probably should use find -type f to limit it to files, and also cp -a to maintain
permissions and ownership. Not sure if that will maintain ACLs.
For the truly paranoid, don't delete the original file so early: rename it, move
the temp file back as the original filename, then compare md5 or sha
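A self-contained sketch of that verified rewrite (rewriting each file
makes zfs allocate fresh blocks, which is the point of the exercise).
It runs on a scratch directory here; in practice DIR would point at the
dataset, and cksum stands in for the md5/sha comparison suggested above.
The original is only replaced after the checksums match:

```shell
# Demo on a scratch directory; point DIR at the real tree in practice.
DIR=$(mktemp -d)
printf 'hello' > "$DIR/a.txt"

# Snapshot the file list BEFORE rewriting, so the .tmp copies
# created below are never picked up by the loop itself.
LIST=$(find "$DIR" -type f)

for f in $LIST; do
    cp -p "$f" "$f.tmp"           # copy first, delete later
    old=$(cksum < "$f")
    new=$(cksum < "$f.tmp")
    if [ "$old" = "$new" ]; then
        mv "$f.tmp" "$f"          # replace only after checksums match
    else
        rm "$f.tmp"               # bad copy: keep the original
        echo "checksum mismatch on $f" >&2
    fi
done

cat "$DIR/a.txt"                  # content survives the rewrite
```

Note this still shares the hardlink problem discussed below: every link
to a rewritten file ends up pointing at a separate copy.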
On 23/10/2012 17:18, Roy Sigurd Karlsbakk wrote:
Wouldn't walking the filesystem, making a copy, deleting the original
and renaming the copy balance things?
e.g.
#!/bin/sh
LIST=`find /foo -type d`
for I in ${LIST}
do
cp ${I} ${I}.tmp
rm ${I}
mv ${I}.tmp ${I}
done
or perhaps
And
On 2012-10-23 16:22, George Wilson wrote:
Comments inline...
On 10/23/12 8:29 AM, Robin Axelsson wrote:
Hi,
I've been using zfs for a while but still there are some questions
that have remained unanswered even after reading the documentation so
I thought I would ask them here.
I have
On 2012-10-23 17:32, Udo Grabowski (IMK) wrote:
On 23/10/2012 17:18, Roy Sigurd Karlsbakk wrote:
Wouldn't walking the filesystem, making a copy, deleting the original
and renaming the copy balance things?
e.g.
#!/bin/sh
LIST=`find /foo -type d`
for I in ${LIST}
do
cp ${I} ${I}.tmp
rm ${I}
And hardlinks ?
For hardlinks, this is bad, indeed, so depending on whether you use them or not,
this may or may not be a good idea
This is a perfect way to completely trash your
system. There's no need to 'balance' zfs; over time, filesystem
writes will balance roughly over the vdevs, only files
Today I updated OI 151a4 to OI 151a7 on an HP ML110 G6.
KVM works fine, but I want to use virt-manager to manage VMs.
So let me tell you the status of virt-manager and libvirt.
Best Regards.
ryo
___
OpenIndiana-discuss mailing list
On 12-10-23 04:52 AM, Sebastian Gabler wrote:
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand
it is nowadays the default for
2012-10-23 19:53, Robin Axelsson wrote:
That sounds like a good point, unless you first scan for hard links and
avoid touching the files and their hard links in the shell script, I guess.
The idea about reading into memory and writing back into the
same file (or cat $SRC
I set this up with pfexec, I think on 151a4, and it has survived updates
without change so far (currently working on a7), and all I had to do was
add the ZFS File System Management profile to the backup user. I did
this via the users-admin gui, I think usermod -P does the same thing, but
here is
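A guess at the command-line equivalent of that profile assignment
(the user name is invented; run as root on the target):

```shell
# Grant the backup user the ZFS File System Management profile
usermod -P 'ZFS File System Management' bkuser

# Verify the assignment
profiles bkuser

# The user can then receive the stream via pfexec:
#   ssh bkuser@target 'pfexec zfs receive backup/data'
```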
On 10/23/2012 4:13 PM, Timothy Coalson wrote:
Works pretty well, though I get ~70MB/s on gigabit ethernet instead of the
theoretically possible 120MB/s, and I'm not sure why (NFS gets pretty close
to 120MB/s on the same network).
There's a fair bit of overhead to ssh and to zfs send/receive,
You could try to set the crypto algorithm to none if you do not need
encryption.
ssh -c none
Might also be worth trying to see if it is ssh that is slowing you down.
Mike
On Tue, 2012-10-23 at 17:03 -0400, Doug Hughes wrote:
On 10/23/2012 4:13 PM, Timothy Coalson wrote:
Works pretty
From: Michael Stapleton [mailto:michael.staple...@techsologic.com]
You could try to set the crypto algorithm to none if you do not
need encryption.
ssh -c none
That won't work with the shipped ssh. You could use netcat
target# nc -l -p 31337 | zfs recv data/path/etc
source# zfs
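The full netcat pair probably looks like this (the port and target
dataset are from the example above; the source snapshot name is
invented, and note the stream crosses the wire unencrypted):

```shell
# target, as root: listen and receive
nc -l -p 31337 | zfs recv data/path/etc

# source: stream the snapshot to the listener
zfs send tank/data@snap1 | nc target 31337
```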
You could try to set the crypto algorithm to none if you do not need
encryption.
ssh -c none
If I really needed the extra speed, it would probably be better to spawn a
netcat over ssh so I don't have to modify the target's sshd_config. I
played with the ciphers and arcfour128 seemed to
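For the cipher-tuning variant, a hypothetical one-liner (user, host,
and dataset names invented, and assuming the sudo setup described
below so the receive can run privileged):

```shell
zfs send tank/data@snap1 | \
    ssh -c arcfour128 bkuser@target 'sudo zfs receive backup/data'
```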
I use the sudo method and I also assign the user zfs rights for that
pool.
here is my sudoers file:
bkuser ALL = NOPASSWD: /usr/sbin/zfs
and here is the rights assignment:
zfs allow -s @adminrole
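The quoted zfs allow line is cut off by the archive; one possible shape
for it, with an invented permission list and pool name (the @adminrole
set name and bkuser are from the message above):

```shell
# Define a permission set and grant it to the backup user on the pool
zfs allow -s @adminrole receive,create,mount tank
zfs allow bkuser @adminrole tank
```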