Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Fajar A. Nugraha
On Wed, Nov 30, 2011 at 2:35 PM, Frank Cusack  wrote:
>> The second one works on both real hardware and VM, BUT with a
>> prerequisite that you have to export-import rpool first on that
>> particular system. Unless you already have solaris installed, this
>> usually means you need to boot with a live cd/usb first.
>
>
> yup.  I didn't quite do that, what I did is exit to shell after installing
> (from install CD) onto the USB.  Then in the shell from the install CD I did
> the zpool export.  The resultant USB is still unbootable for me on real
> hardware.

It won't work unless you did the export-import on the real hardware.
Blame Oracle for that. Even zfsonlinux works without this hassle.

... then again your kind of use case is probably not the supported
configuration anyway, and there's no incentive for Oracle to "fix" it
:)

>
> Anyway, the point of that story is that I tried to install onto it as a USB
> device, instead of as a SATA device, in case something special happens to
> make USB bootable that doesn't happen when the S11 installer thinks it's a
> SATA device.  But I was unable to complete that test.

Not sure about solaris, but in linux a grub1 installation would fail if
the BIOS does not list the disk as bootable. Virtualbox definitely
does not support booting from passthru-usb, so that may be the source
of your problem.

Mapping it as a SATA disk should work as expected.

> I don't use live cd on real hardware because that doesn't meet my objective
> of being able to create a removable boot drive, created in a VM, that I can
> boot on real hardware if I wanted to.  I mean, I could do it that way, but I
> want to be able to do this in a 100% VM environment.

I use ubuntu for that, which works fine :D
It also supports zfs (via zfsonlinux), albeit limited to pool version
28 (same as openindiana)

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Frank Cusack
On Tue, Nov 29, 2011 at 10:39 PM, Fajar A. Nugraha  wrote:

> On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack  wrote:
> > I haven't been able to get this working.  To keep it simpler, next I am
> > going to try usbcopy of the live USB image in the VM, and see if I can
> boot
> > real hardware from the resultant live USB stick.
>
> To be clear, I'm talking about two things:
> (1) live USB, created from Live CD
> (2) solaris installed on USB
>

yup


>
> The first one works on real hardware, but not on a VM. The cause is
> simple: it seems some boot code searches ONLY removable media for a
> live solaris image. Since you need to map the USB disk as a regular
> disk (SATA/IDE/SCSI) in a VM to be able to boot from it, you won't be
> able to boot live usb on a VM.
>

yup


>
> The second one works on both real hardware and VM, BUT with a
> prerequisite that you have to export-import rpool first on that
> particular system. Unless you already have solaris installed, this
> usually means you need to boot with a live cd/usb first.
>

yup.  I didn't quite do that, what I did is exit to shell after installing
(from install CD) onto the USB.  Then in the shell from the install CD I
did the zpool export.  The resultant USB is still unbootable for me on real
hardware.

During this install, the USB is seen as a SATA disk.  I tried to install
onto it as a pass through USB device, but a python script in the installer
that tries to label the disk fails.  This is likely because it has to
invoke 'format -e' instead of 'format' in order to see the USB disk in the
first place.  When you invoke the 'label' command, if you have invoked
'format' as 'format -e' you get prompted whether you want an SMI or EFI
label.  The python script doesn't know about this and wants to just do 'y'
or 'n'.

In S10, I have no problem installing on real hardware onto a USB stick
(seen as USB), so I imagine this is just a deficiency of the new S11
installer.

Anyway, the point of that story is that I tried to install onto it as a
USB device, instead of as a SATA device, in case something special happens
to make USB bootable that doesn't happen when the S11 installer thinks it's
a SATA device.  But I was unable to complete that test.


> I'm not sure what you mean by "usbcopy of the live USB image in the
> VM, and see if I can boot real hardware from the resultant live USB
> stick.". If you're trying to create (1), it'd be simpler to just use
> live cd on real hardware, and if necessary create live usb there (MUCH
> faster than on a VM). If you mean (2), then it won't work unless you
> boot with live cd/usb first.
>

I meant (1), because I think this is an easier case to try out than (2).
(1) should DEFINITELY work, IMHO.

I don't use live cd on real hardware because that doesn't meet my objective
of being able to create a removable boot drive, created in a VM, that I can
boot on real hardware if I wanted to.  I mean, I *could* do it that way,
but I want to be able to do this in a 100% VM environment.


>
> Oh and for reference, instead of usbcopy, I prefer using this method:
> http://blogs.oracle.com/jim/entry/how_to_create_a_usb
>

Thanks, I'll check it out.


>
> --
> Fajar
>
> >
> > On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha 
> wrote:
> >>
> >> On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov  wrote:
> >> >> Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
> >> >> seems to have gone away.
> >> >
> >> > I haven't used sol11 yet, so I can't say for certain.
> >> > But it is possible that the default boot (without findroot)
> >> > would use the bootfs property of your root pool.
> >>
> >> Nope.
> >>
> >> S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
> >> property is no longer used.
> >>
> >> Anyway, after some testing, I found out you CAN use vbox-installed s11
> >> usb stick on real notebook (enough hardware difference there). The
> >> trick is you have to import-export the pool on the system you're going
> >> to boot the stick on. Meaning, you need to have S11 live cd/usb handy
> >> and boot with that first before booting using your disk.
> >>
> >> --
> >> Fajar


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Fajar A. Nugraha
On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack  wrote:
> I haven't been able to get this working.  To keep it simpler, next I am
> going to try usbcopy of the live USB image in the VM, and see if I can boot
> real hardware from the resultant live USB stick.

To be clear, I'm talking about two things:
(1) live USB, created from Live CD
(2) solaris installed on USB

The first one works on real hardware, but not on a VM. The cause is
simple: it seems some boot code searches ONLY removable media for a
live solaris image. Since you need to map the USB disk as a regular
disk (SATA/IDE/SCSI) in a VM to be able to boot from it, you won't be
able to boot live usb on a VM.

The second one works on both real hardware and VM, BUT with a
prerequisite that you have to export-import rpool first on that
particular system. Unless you already have solaris installed, this
usually means you need to boot with a live cd/usb first.
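The export-import step boils down to a short procedure run from the live environment. A sketch, assuming the stick's root pool is named rpool (device paths will differ per machine):

```shell
# Boot the S11 live CD/USB on the machine you want to boot the stick on,
# open a root shell, then:

# Import the stick's root pool under an alternate root so it doesn't
# collide with the live environment's own filesystems.
zpool import -f -R /mnt rpool

# Export it again; this presumably rewrites the pool's device/path
# information so the root pool can be found at boot on this machine.
zpool export rpool

# Now reboot from the USB stick itself.
```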

I'm not sure what you mean by "usbcopy of the live USB image in the
VM, and see if I can boot real hardware from the resultant live USB
stick.". If you're trying to create (1), it'd be simpler to just use
live cd on real hardware, and if necessary create live usb there (MUCH
faster than on a VM). If you mean (2), then it won't work unless you
boot with live cd/usb first.

Oh and for reference, instead of usbcopy, I prefer using this method:
http://blogs.oracle.com/jim/entry/how_to_create_a_usb

-- 
Fajar

>
> On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha  wrote:
>>
>> On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov  wrote:
>> >> Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
>> >> seems to have gone away.
>> >
>> > I haven't used sol11 yet, so I can't say for certain.
>> > But it is possible that the default boot (without findroot)
>> > would use the bootfs property of your root pool.
>>
>> Nope.
>>
>> S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
>> property is no longer used.
>>
>> Anyway, after some testing, I found out you CAN use vbox-installed s11
>> usb stick on real notebook (enough hardware difference there). The
>> trick is you have to import-export the pool on the system you're going
>> to boot the stick on. Meaning, you need to have S11 live cd/usb handy
>> and boot with that first before booting using your disk.
>>
>> --
>> Fajar


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Frank Cusack
I haven't been able to get this working.  To keep it simpler, next I am
going to try usbcopy of the live USB image in the VM, and see if I can boot
real hardware from the resultant live USB stick.

On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha  wrote:

> On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov  wrote:
> >> Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
> >> seems to have gone away.
> >
> > I haven't used sol11 yet, so I can't say for certain.
> > But it is possible that the default boot (without findroot)
> > would use the bootfs property of your root pool.
>
> Nope.
>
> S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
> property is no longer used.
>
> Anyway, after some testing, I found out you CAN use vbox-installed s11
> usb stick on real notebook (enough hardware difference there). The
> trick is you have to import-export the pool on the system you're going
> to boot the stick on. Meaning, you need to have S11 live cd/usb handy
> and boot with that first before booting using your disk.
>
> --
> Fajar


Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Francois Dion
In the end what I needed to do was to set the mountpoint with:

zfs set mountpoint=/tmp/rescue rpool/ROOT/openindiana

It ended up mounting in /mnt/rpool/tmp/rescue, but still, it gave me
access to var/ld/... and after removing the ld.config, doing a
zpool export, and rebooting, my desktop is back.

Thanks for the pointers. "man zfs" does mention mountpoint as a valid
option; not sure why it didn't work. As for mount -F zfs, it only
works on legacy mountpoints.
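Putting the thread together, the whole live-CD rescue sequence looks roughly like this (a sketch using the dataset names from this thread; the mountpoint reset before rebooting is an assumption on my part, not something stated above):

```shell
# From the live CD, in a root shell:

# Import the laptop's root pool under an alternate root.
zpool import -f -R /mnt/rpool rpool

# Point the BE's root dataset at a scratch mountpoint; with the
# alternate root in effect it ends up under /mnt/rpool/tmp/rescue.
zfs set mountpoint=/tmp/rescue rpool/ROOT/openindiana

# Remove the offending file.
rm /mnt/rpool/tmp/rescue/var/ld/ld.config

# Restore the property (root BEs normally use /), then export and reboot.
zfs set mountpoint=/ rpool/ROOT/openindiana
zpool export rpool
reboot
```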

On 11/29/11, Mike Gerdts  wrote:
> On Tue, Nov 29, 2011 at 4:40 PM, Francois Dion 
> wrote:
>> It is on openindiana 151a, no separate /var as far as I can tell. But I'll have to
>> test this on solaris11 too when I get a chance.
>>
>> The problem is that if I
>>
>> zfs mount -o mountpoint=/tmp/rescue (or whatever) rpool/ROOT/openindiana
>>
>> I get "cannot mount /mnt/rpool: directory is not empty".
>>
>> The reason for that is that I had to do a zpool import -R /mnt/rpool
>> rpool (or wherever I mount it, it doesn't matter) before I could do a
>> zfs mount, else I don't have access to the rpool zpool for zfs to do
>> its thing.
>>
>> chicken / egg situation? I miss the old fail safe boot menu...
>
> You can mount it pretty much anywhere:
>
> mkdir /tmp/foo
> zfs mount -o mountpoint=/tmp/foo ...
>
> I'm not sure when the temporary mountpoint option (-o mountpoint=...)
> came in. If it's not valid syntax then:
>
> mount -F zfs rpool/ROOT/solaris /tmp/foo
>
> --
> Mike Gerdts
> http://mgerdts.blogspot.com/
>


Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
On Tue, Nov 29, 2011 at 4:40 PM, Francois Dion  wrote:
> It is on openindiana 151a, no separate /var as far as I can tell. But I'll have to
> test this on solaris11 too when I get a chance.
>
> The problem is that if I
>
> zfs mount -o mountpoint=/tmp/rescue (or whatever) rpool/ROOT/openindiana
>
> I get "cannot mount /mnt/rpool: directory is not empty".
>
> The reason for that is that I had to do a zpool import -R /mnt/rpool
> rpool (or wherever I mount it, it doesn't matter) before I could do a
> zfs mount, else I don't have access to the rpool zpool for zfs to do
> its thing.
>
> chicken / egg situation? I miss the old fail safe boot menu...

You can mount it pretty much anywhere:

mkdir /tmp/foo
zfs mount -o mountpoint=/tmp/foo ...

I'm not sure when the temporary mountpoint option (-o mountpoint=...)
came in. If it's not valid syntax then:

mount -F zfs rpool/ROOT/solaris /tmp/foo

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Francois Dion
It is on openindiana 151a, no separate /var as far as I can tell. But I'll have to
test this on solaris11 too when I get a chance.

The problem is that if I

zfs mount -o mountpoint=/tmp/rescue (or whatever) rpool/ROOT/openindiana

I get "cannot mount /mnt/rpool: directory is not empty".

The reason for that is that I had to do a zpool import -R /mnt/rpool
rpool (or wherever I mount it, it doesn't matter) before I could do a
zfs mount, else I don't have access to the rpool zpool for zfs to do
its thing.

chicken / egg situation? I miss the old fail safe boot menu...

On 11/29/11, Mike Gerdts  wrote:
> On Tue, Nov 29, 2011 at 3:01 PM, Francois Dion 
> wrote:
>> I've hit an interesting (not) problem. I need to remove a problematic
>> ld.config file (due to an improper crle...) to boot my laptop. This is
>> OI 151a, but fundamentally this is zfs, so i'm asking here.
>>
>> what I did after booting the live cd and su:
>> mkdir /tmp/disk
>> zpool import -R /tmp/disk -f rpool
>>
>> export shows up in there and rpool also, but in rpool there is only
>> boot and etc.
>>
>> zfs list shows rpool/ROOT/openindiana as mounted on /tmp/disk and I
>> see dump and swap, but no var. rpool/ROOT shows as legacy, so I
>> figured, maybe mount that.
>>
>> mount -F zfs rpool/ROOT /mnt/rpool
>
> That dataset (rpool/ROOT) should never have any files in it.  It is
> just a "container" for boot environments.  You can see which boot
> environments exist with:
>
> zfs list -r rpool/ROOT
>
> If you are running Solaris 11, the boot environment's root dataset
> will show a mountpoint property value of /.  Assuming it is called
> "solaris" you can mount it with:
>
> zfs mount -o mountpoint=/mnt/rpool rpool/ROOT/solaris
>
> If the system is running Solaris 11 (and was not updated from Solaris
> 11 Express), it will have a separate /var dataset.
>
> zfs mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var
>
> --
> Mike Gerdts
> http://mgerdts.blogspot.com/
>


Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
On Tue, Nov 29, 2011 at 3:01 PM, Francois Dion  wrote:
> I've hit an interesting (not) problem. I need to remove a problematic
> ld.config file (due to an improper crle...) to boot my laptop. This is
> OI 151a, but fundamentally this is zfs, so i'm asking here.
>
> what I did after booting the live cd and su:
> mkdir /tmp/disk
> zpool import -R /tmp/disk -f rpool
>
> export shows up in there and rpool also, but in rpool there is only
> boot and etc.
>
> zfs list shows rpool/ROOT/openindiana as mounted on /tmp/disk and I
> see dump and swap, but no var. rpool/ROOT shows as legacy, so I
> figured, maybe mount that.
>
> mount -F zfs rpool/ROOT /mnt/rpool

That dataset (rpool/ROOT) should never have any files in it.  It is
just a "container" for boot environments.  You can see which boot
environments exist with:

zfs list -r rpool/ROOT

If you are running Solaris 11, the boot environment's root dataset
will show a mountpoint property value of /.  Assuming it is called
"solaris" you can mount it with:

zfs mount -o mountpoint=/mnt/rpool rpool/ROOT/solaris

If the system is running Solaris 11 (and was not updated from Solaris
11 Express), it will have a separate /var dataset.

zfs mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


[zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Francois Dion
I've hit an interesting (not) problem. I need to remove a problematic
ld.config file (due to an improper crle...) to boot my laptop. This is
OI 151a, but fundamentally this is zfs, so i'm asking here.

what I did after booting the live cd and su:
mkdir /tmp/disk
zpool import -R /tmp/disk -f rpool

export shows up in there and rpool also, but in rpool there is only
boot and etc.

zfs list shows rpool/ROOT/openindiana as mounted on /tmp/disk and I
see dump and swap, but no var. rpool/ROOT shows as legacy, so I
figured, maybe mount that.

mount -F zfs rpool/ROOT /mnt/rpool

ls -alR /mnt/rpool shows nothing.


How do I access /var on my laptop's drive? Or as a matter of fact,
everything that is in / beside export and boot?

thanks,
Francois


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Bob Friesenhahn

On Tue, 29 Nov 2011, sol wrote:


Yes, it's moving a tree of files, and the shell ulimit is the default (which I 
think is 256).

It happened twice recently in normal use but not when I tried to replicate it 
(standard test response ;-))


Is it possible that 'mv' is multi-threaded in Solaris 11?

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread sol
Yes, it's moving a tree of files, and the shell ulimit is the default (which I 
think is 256).


It happened twice recently in normal use but not when I tried to replicate it 
(standard test response ;-))


Anyway it only happened moving between zfs filesystems in Solaris 11, I've 
never seen it before, which is why I posted here first. But if it's a problem 
elsewhere in Solaris I should move the discussion... although any ideas are 
welcome!




>
> From: "casper@oracle.com" 
>>I think the "too many open files" is a generic error message about 
>>running out of file descriptors. You should check your shell ulimit
>>information.
>
>Yeah, but mv shouldn't run out of file descriptors, or should be
>able to deal with that.
>
>Are we moving a tree of files?


Re: [zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread Cindy Swearingen

Hi Sol,

For 1) and several others, review the ZFS Admin Guide for
a detailed description of the share changes, here:

http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html

For 2-4), You can't rename a share. You would have to remove it
and recreate it with the new name.

For 6), I think you need to upgrade your file systems.

Thanks,

Cindy
On 11/29/11 09:46, sol wrote:

Hi

Several observations with zfs cifs/smb shares in the new Solaris 11.

1) It seems that the previously documented way to set the smb share name
no longer works
zfs set sharesmb=name=my_share_name
You have to use the long-winded
zfs set share=name=my_share_name,path=/my/share/path,prot=smb
This is fine but not really obvious if moving scripts from Solaris10 to
Solaris11.

2) If you use "zfs rename" to rename a zfs filesystem it doesn't rename
the smb share name.

3) Also you might end up with two shares having the same name.

4) So how do you rename the smb share? There doesn't appear to be a "zfs
unset" and if you issue the command twice with different names then both
are listed when you use "zfs get share".

5) The "share" value acts like a property but does not show up if you use
"zfs get", so that's not really consistent.

6) zfs filesystems created with Solaris 10 and shared with smb cannot be
mounted from Windows when the server is upgraded to Solaris 11.
The client just gets "permission denied" but in the server log you might
see "access denied: share ACL".
If you create a brand new zfs filesystem then it works fine. So what is
the difference?
The ACLs have never been set or changed so it's not that, and the two
filesystems appear to have identical ACLs.
But if you look at the extended attributes the successful filesystem has
xattr {A--m} and the unsuccessful has {}.
However that xattr cannot be set on the share to see if it allows it to
be mounted.
"chmod S+cA share" gives "chmod: ERROR: extended system attributes not
supported for share" (even though it has the xattr=on property).
What is the problem here, why cannot a Solaris 10 filesystem be shared
via smb?
And how can extended attributes be set on a zfs filesystem?

Thanks folks





Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Nico Williams
On Tue, Nov 29, 2011 at 12:17 PM, Cindy Swearingen
 wrote:
> I think the "too many open files" is a generic error message about running
> out of file descriptors. You should check your shell ulimit
> information.

Also, see how many open files you have: echo /proc/self/fd/*

It'd be quite weird though to have a very low fd limit or a very large
number of file descriptors open in the shell.

That said, as Casper says, utilities like mv(1) should be able to cope
with reasonably small fd limits (i.e., not as small as 3, but perhaps
as small as 10).
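The "too many open files" condition itself is easy to provoke on purpose by shrinking the fd limit in a subshell; paste(1) holds every input file open at once, so it trips EMFILE quickly. A generic sketch, not mv-specific (the error text below assumes a GNU userland):

```shell
# Create 30 small input files in a scratch directory.
tmp=$(mktemp -d)
for i in $(seq 1 30); do echo x > "$tmp/f$i"; done

# Lower the per-process fd limit in a subshell, then make paste open
# all 30 files at once; it fails with EMFILE partway through.
( ulimit -n 16; paste "$tmp"/f* ) 2>&1 | grep -o 'Too many open files' | head -1

rm -rf "$tmp"
```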

Nico
--


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Alexander
Yep, that's not a filesystem issue; it's at the kernel VFS level.

Sent from my iPad

On Nov 29, 2011, at 10:17 PM, Cindy Swearingen  
wrote:

> I think the "too many open files" is a generic error message about running 
> out of file descriptors. You should check your shell ulimit
> information.
> 
> On 11/29/11 09:28, sol wrote:
>> Hello
>> 
>> Has anyone else come across a bug moving files between two zfs file systems?
>> 
>> I used "mv /my/zfs/filesystem/files /my/zfs/otherfilesystem" and got the
>> error "too many open files".
>> 
>> This is on Solaris 11
>> 
>> 
>> 


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Casper . Dik

>I think the "too many open files" is a generic error message about 
>running out of file descriptors. You should check your shell ulimit
>information.


Yeah, but mv shouldn't run out of file descriptors, or should be
able to deal with that.

Are we moving a tree of files?

Casper



Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Cindy Swearingen
I think the "too many open files" is a generic error message about 
running out of file descriptors. You should check your shell ulimit

information.

On 11/29/11 09:28, sol wrote:

Hello

Has anyone else come across a bug moving files between two zfs file systems?

I used "mv /my/zfs/filesystem/files /my/zfs/otherfilesystem" and got the
error "too many open files".

This is on Solaris 11





Re: [zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread Tomas Forsman
On 29 November, 2011 - sol sent me these 4,9K bytes:

> Hi
> 
> Several observations with zfs cifs/smb shares in the new Solaris 11.
> 
> 1) It seems that the previously documented way to set the smb share name no 
> longer works
>  zfs set sharesmb=name=my_share_name
> You have to use the long-winded
> zfs set share=name=my_share_name,path=/my/share/path,prot=smb
> This is fine but not really obvious if moving scripts from Solaris10 to 
> Solaris11.

Same with nfs, all changed.

> 2) If you use "zfs rename" to rename a zfs filesystem it doesn't rename the 
> smb share name.
> 
> 3) Also you might end up with two shares having the same name.
> 
> 4) So how do you rename the smb share? There doesn't appear to be a "zfs 
> unset" and if you issue the command twice with different names then both are 
> listed when you use "zfs get share".

man zfs_share

 zfs set -c share=name=sharename filesystem

 Removes a file system share. The -c option distinguishes
 this subcommand from the zfs set share command described
 above.
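So a rename would presumably be the two-step remove-and-recreate described there, along these lines (share and dataset names here are made up for illustration):

```shell
# Drop the existing share entry, identified by its current name...
zfs set -c share=name=old_share tank/data

# ...then create the share again under the new name.
zfs set share=name=new_share,path=/tank/data,prot=smb tank/data
```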

> 
> 5) The "share" value acts like a property but does not show up if you use "zfs
> get", so that's not really consistent.
> 
> 6) zfs filesystems created with Solaris 10 and shared with smb cannot be 
> mounted from Windows when the server is upgraded to Solaris 11.
> The client just gets "permission denied" but in the server log you might see 
> "access denied: share ACL".
> If you create a brand new zfs filesystem then it works fine. So what is the 
> difference?
> The ACLs have never been set or changed so it's not that, and the two 
> filesystems appear to have identical ACLs.
> But if you look at the extended attributes the successful filesystem has 
> xattr {A--m} and the unsuccessful has {}.
> However that xattr cannot be set on the share to see if it allows it to be 
> mounted.
> "chmod S+cA share" gives "chmod: ERROR: extended system attributes not 
> supported for share" (even though it has the xattr=on property).
> What is the problem here, why cannot a Solaris 10 filesystem be shared via 
> smb?
> And how can extended attributes be set on a zfs filesystem?
> 
> Thanks folks




/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread sol
Hi

Several observations with zfs cifs/smb shares in the new Solaris 11.

1) It seems that the previously documented way to set the smb share name no 
longer works
 zfs set sharesmb=name=my_share_name
You have to use the long-winded
zfs set share=name=my_share_name,path=/my/share/path,prot=smb
This is fine but not really obvious if moving scripts from Solaris10 to 
Solaris11.

2) If you use "zfs rename" to rename a zfs filesystem it doesn't rename the smb 
share name.

3) Also you might end up with two shares having the same name.

4) So how do you rename the smb share? There doesn't appear to be a "zfs unset" 
and if you issue the command twice with different names then both are listed 
when you use "zfs get share".

5) The "share" value acts like a property but does not show up if you use "zfs
get", so that's not really consistent.

6) zfs filesystems created with Solaris 10 and shared with smb cannot be 
mounted from Windows when the server is upgraded to Solaris 11.
The client just gets "permission denied" but in the server log you might see 
"access denied: share ACL".
If you create a brand new zfs filesystem then it works fine. So what is the 
difference?
The ACLs have never been set or changed so it's not that, and the two 
filesystems appear to have identical ACLs.
But if you look at the extended attributes the successful filesystem has xattr 
{A--m} and the unsuccessful has {}.
However that xattr cannot be set on the share to see if it allows it to be 
mounted.
"chmod S+cA share" gives "chmod: ERROR: extended system attributes not 
supported for share" (even though it has the xattr=on property).
What is the problem here, why cannot a Solaris 10 filesystem be shared via smb?
And how can extended attributes be set on a zfs filesystem?

Thanks folks


[zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread sol
Hello

Has anyone else come across a bug moving files between two zfs file systems?

I used "mv /my/zfs/filesystem/files /my/zfs/otherfilesystem" and got the error 
"too many open files".

This is on Solaris 11
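For context on where open files come into it at all: two zfs filesystems are distinct filesystems, so mv cannot use an atomic rename(2) between them and falls back to copy-then-unlink, opening source and destination files as it walks the tree. The fallback for a single file, done by hand (scratch paths only):

```shell
# What mv does per file when rename(2) would fail with EXDEV
# (source and destination on different filesystems):
src=$(mktemp)
dst=$(mktemp -u)
echo data > "$src"

cp -p "$src" "$dst" && rm "$src"   # copy preserving attributes, then unlink

cat "$dst"   # prints: data
rm -f "$dst"
```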