Hello
Has anyone else come across a bug moving files between two zfs file systems?
I used mv /my/zfs/filesystem/files /my/zfs/otherfilesystem and got the error
"too many open files".
This is on Solaris 11
Hi
Several observations with zfs cifs/smb shares in the new Solaris 11.
1) It seems that the previously documented way to set the smb share name
no longer works:
    zfs set sharesmb=name=my_share_name
You have to use the long-winded:
    zfs set share=name=my_share_name,path=/my/share/path,prot=smb
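For reference, a minimal sketch of the new syntax (the dataset tank/data and
the share name are placeholders, not from this thread; note that zfs set also
needs the dataset name as a final argument, which the command above omits):

    # Define the share (name, path, protocol) on the dataset.
    zfs set share=name=my_share_name,path=/tank/data,prot=smb tank/data
    # Publish it over SMB.
    zfs set sharesmb=on tank/data
    # Verify what got set.
    zfs get share tank/data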
I think the too many open files is a generic error message about
running out of file descriptors. You should check your shell ulimit
information.
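A quick way to check from the shell (256 is the usual Solaris default soft
limit, as sol confirms later in the thread):

    # Show the current per-process file descriptor limits.
    ulimit -n     # soft limit, commonly 256 by default on Solaris
    ulimit -Hn    # hard limit
    # Raise the soft limit for this shell before retrying the mv.
    ulimit -n 1024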
> I think the too many open files is a generic error message about
> running out of file descriptors. You should check your shell ulimit
> information.
Yeah, but mv shouldn't run out of file descriptors, or should be able
to deal with that.
Are we moving a tree of files?
Casper
Yep, that's not a filesystem issue, it's at the kernel VFS level.
On Tue, Nov 29, 2011 at 12:17 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
> You should check your shell ulimit information.
Also, see how many open files you have: echo /proc/self/fd/*
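A sketch of the same check aimed at the shell itself ($$ expands to the
current shell's pid; pfiles is the native Solaris proc tool):

    # Count the shell's open file descriptors via /proc.
    ls /proc/$$/fd | wc -l
    # Or get per-descriptor detail.
    pfiles $$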
Hi Sol,
For 1) and several others, review the ZFS Admin Guide for
a detailed description of the share changes, here:
http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html
For 2)-4), you can't rename a share. You would have to remove it
and recreate it with the new name.
For 6), I think
Yes, it's moving a tree of files, and the shell ulimit is the default (which I
think is 256).
It happened twice recently in normal use but not when I tried to replicate it
(standard test response ;-))
Anyway, it only happened when moving between zfs filesystems in Solaris 11; I've
never seen it
On Tue, 29 Nov 2011, sol wrote:
> Yes, it's moving a tree of files, and the shell ulimit is the default
> (which I think is 256).
Is it possible that 'mv' is multi-threaded in Solaris 11?
I've hit an interesting (not) problem. I need to remove a problematic
ld.config file (due to an improper crle...) to boot my laptop. This is
OI 151a, but fundamentally this is zfs, so I'm asking here.
What I did after booting the live CD and su:
    mkdir /tmp/disk
    zpool import -R /tmp/disk -f rpool
It is on OpenIndiana 151a, no separate /var as far as I can tell. But I'll
have to test this on Solaris 11 too when I get a chance.
The problem is that if I
    zfs mount -o mountpoint=/tmp/rescue (or whatever) rpool/ROOT/openindiana
I get a "cannot mount '/mnt/rpool': directory is not empty".
The reason for that
On Tue, Nov 29, 2011 at 4:40 PM, Francois Dion francois.d...@gmail.com wrote:
It is on openindiana 151a, no separate /var as far as But I'll have to
test this on solaris11 too when I get a chance.
The problem is that if I
zfs mount -o mountpoint=/tmp/rescue (or whatever)
In the end what I needed to do was to set the mountpoint with:
    zfs set mountpoint=/tmp/rescue rpool/ROOT/openindiana
It ended up mounting it in /mnt/rpool/tmp/rescue, but still, it gave me
access to var/ld/..., and after removing the ld.config, doing a
zpool export and reboot, my desktop is
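For the archives, a condensed sketch of the whole sequence described above
(the BE name rpool/ROOT/openindiana and the paths come from this thread;
restoring the mountpoint before export is an added precaution, not something
Francois mentions):

    # From the live CD, import the root pool under an alternate root.
    mkdir /tmp/disk
    zpool import -R /tmp/disk -f rpool
    # Give the boot environment a scratch mountpoint and mount it; the
    # altroot prefixes the path, so it lands under /tmp/disk/tmp/rescue.
    zfs set mountpoint=/tmp/rescue rpool/ROOT/openindiana
    zfs mount rpool/ROOT/openindiana
    # Remove the bad runtime linker configuration.
    rm /tmp/disk/tmp/rescue/var/ld/ld.config
    # Unmount, put the mountpoint back (/ for a root BE), and clean up.
    zfs umount rpool/ROOT/openindiana
    zfs set mountpoint=/ rpool/ROOT/openindiana
    zpool export rpool
    reboot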
I haven't been able to get this working. To keep it simpler, next I am
going to try usbcopy of the live USB image in the VM, and see if I can boot
real hardware from the resultant live USB stick.
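A sketch of that step (the image filename is a placeholder; usbcopy is the
OI/OpenSolaris distribution tool that writes a live USB image and prompts
for the target device):

    # Write the live image to a USB stick; needs elevated privileges.
    pfexec usbcopy oi-dev-151a-live-x86.usb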
On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack fr...@linetwo.net wrote:
> I haven't been able to get this working.
To be clear, I'm talking
On Wed, Nov 30, 2011 at 2:35 PM, Frank Cusack fr...@linetwo.net wrote:
The second one works on both real hardware and VM, BUT with a
prerequisite that you have to export-import rpool first on that
particular system. Unless you already have solaris installed, this
usually means you need to boot