[zfs-discuss] Is oi_151 zpool version on par with sol11ex?
Hello, some time ago I saw that development ISOs of OpenIndiana dubbed build 151 exist. How close is it to sol11ex 151a, in particular regarding ZFS/zpool version and functionality?

Namely, some people on the list report having problems with their pools built at zpool v28, and we eagerly suggest that they try alternatives like FreeBSD or different flavors of OpenSolaris. When people have problems with zpool v31, we shake our heads and state that they are out of luck and in a vendor lock-in.

So, is (or will) oi_151 be a working and safe alternative for such cases? I guess the answer will be yes after Oracle releases Solaris 11 and the source code, as they seem to have promised. Will oi_151 match zpool v31 before that? Is that even a development target? ;)

Thanks,
//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Is oi_151 zpool version on par with sol11ex?
On 07/19/11 12:03, Jim Klimov wrote:
> Hello, some time ago I've seen the existence of development ISOs of
> OpenIndiana dubbed build 151. How close or far is it from the sol11ex
> 151a? In particular, regarding ZFS/ZPOOL version and functionality?

Solaris 11 Express (snv_151a) has the following pool versions beyond 28:

 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance

http://hub.opensolaris.org/bin/view/Community+Group+zfs/29
http://hub.opensolaris.org/bin/view/Community+Group+zfs/30
http://hub.opensolaris.org/bin/view/Community+Group+zfs/31

--
Darren J Moffat
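(You can see the version list a platform supports with `zpool upgrade -v`.) The compatibility rule behind the lock-in worry is simple: a pool can only be imported by an implementation whose highest supported pool version is at least the pool's on-disk version. A minimal sketch of that rule, using a hypothetical helper function (not a real zpool command):

```shell
# Hypothetical helper illustrating the import-compatibility rule:
# an implementation can import a pool only if the pool's on-disk
# version does not exceed the highest version the implementation knows.
can_import() {
    pool_ver=$1
    impl_max=$2
    if [ "$pool_ver" -le "$impl_max" ]; then
        echo "importable"
    else
        echo "not importable"
    fi
}

can_import 28 28   # v28 pool on an open platform -> importable
can_import 31 28   # Solaris 11 Express v31 pool elsewhere -> not importable
```

This is why a v28 pool stays portable across FreeBSD and OpenSolaris derivatives, while upgrading to v29-31 ties the pool to Solaris 11 Express until other distributions catch up.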
Re: [zfs-discuss] recover raidz from fried server ??
Ok, I went with the Windows and VirtualBox solution. I could see all 5 of my raidz disks in Windows. I encapsulated them as entire disks in vmdk files and subsequently offlined them in Windows. I then installed a sol11exp VirtualBox instance, attached the 5 virtualized disks, and can see them in sol11exp (they are disks #1-#5).

root@san:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c7t0d0 ATA-VBOX HARDDISK-1.0 cyl 26105 alt 2 hd 255 sec 63
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c7t2d0 ATA-VBOX HARDDISK-1.0 cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c7t3d0 ATA-VBOX HARDDISK-1.0 cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@3,0
       3. c7t4d0 ATA-VBOX HARDDISK-1.0 cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@4,0
       4. c7t5d0 ATA-VBOX HARDDISK-1.0 cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@5,0
       5. c7t6d0 ATA-VBOX HARDDISK-1.0 cyl 60798 alt 2 hd 255 sec 126
          /pci@0,0/pci8086,2829@d/disk@6,0
Specify disk (enter its number):

Great, I thought; all I need to do is import my raidz.

root@san:~# zpool import
root@san:~#

Damn, that would have been just too easy, I guess. Help!!! How do I recover my data? I know it's still hiding on those disks. Where do I go from here?

Thanks
Rep
--
This message posted from opensolaris.org
Re: [zfs-discuss] recover raidz from fried server ??
On Tue, Jul 19, 2011 at 4:29 PM, Brett repudi...@gmail.com wrote:
> Ok, I went with windows and virtualbox solution. I could see all 5 of
> my raid-z disks in windows. I encapsulated them as entire disks in
> vmdk files and subsequently offlined them to windows. I then installed
> a sol11exp vbox instance, attached the 5 virtualized disks and can see
> them in my sol11exp (they are disks #1-#5).
> [...]
> root@san:~# zpool import
> root@san:~#
>
> Damn, that would have been just too easy I guess. Help !!! How do i
> recover my data? I know its still hiding on those disks. Where do i go
> from here?

What does

    zdb -l /dev/dsk/c7t6d0s0

or

    zdb -l /dev/dsk/c7t6d0p1

show?

--
Fajar
Re: [zfs-discuss] recover raidz from fried server ??
root@san:~# zdb -l /dev/dsk/c7t6d0s0
cannot open '/dev/rdsk/c7t6d0s0': I/O error

root@san:~# zdb -l /dev/dsk/c7t6d0p1
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
LABEL 2
failed to unpack label 2
LABEL 3
failed to unpack label 3
root@san:~#
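When all four labels fail to unpack on one device node, it is worth probing the other nodes for each disk (s0, p0, p1, ...), since the labels may live under a different slice or partition than the one queried. A readable label dumped by `zdb -l` contains a `name:` line for the pool, so saved output can be checked mechanically. A small sketch, using a hypothetical temp file standing in for captured `zdb` output:

```shell
# Sketch: decide from captured `zdb -l` output whether any of the four
# labels unpacked. A readable ZFS label includes a "name: '<pool>'" line;
# an unreadable one prints only "failed to unpack label N".
has_zfs_label() {
    if grep -q "name:" "$1"; then
        echo "label present"
    else
        echo "no label"
    fi
}

# Simulate the output seen above (hypothetical file path):
cat > /tmp/zdb_c7t6d0p1.out <<'EOF'
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
EOF
has_zfs_label /tmp/zdb_c7t6d0p1.out   # -> no label
```

Running the same check over `zdb -l` output from every candidate device node quickly shows whether the labels survived anywhere on the virtualized disks.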
Re: [zfs-discuss] recover raidz from fried server ??
Could you try just booting FreeBSD or Linux on the box to see if ZFS (native or FUSE-based, respectively) can see the drives?

roy

----- Original Message -----
> root@san:~# zdb -l /dev/dsk/c7t6d0s0
> cannot open '/dev/rdsk/c7t6d0s0': I/O error
> root@san:~# zdb -l /dev/dsk/c7t6d0p1
> LABEL 0
> failed to unpack label 0
> LABEL 1
> failed to unpack label 1
> LABEL 2
> failed to unpack label 2
> LABEL 3
> failed to unpack label 3
> root@san:~#

--
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
[In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.]
Re: [zfs-discuss] What is .$EXTEND/$QUOTA ?
On Tue, Jul 19, 2011 at 2:39 PM, Orvar Korvar knatte_fnatte_tja...@yahoo.com wrote:
> I am using S11E, and have created a zpool on a single disk as storage.
> In several directories, I can see a directory called .$EXTEND/$QUOTA.
> What is it for? Can I delete it?

Perhaps this is of help:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/smbsrv/smb_pathname.c#752

752 /*
753  * smb_pathname_preprocess_quota
754  *
755  * There is a special file required by windows so that the quota
756  * tab will be displayed by windows clients. This is created in
757  * a special directory, $EXTEND, at the root of the shared file
758  * system. To hide this directory prepend a '.' (dot).
759  */

--
Mike Gerdts
http://mgerdts.blogspot.com/
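One practical wrinkle: because the directory name contains a literal `$`, the shell will try to expand `$EXTEND` and `$QUOTA` as variables unless the name is quoted, which trips people up when inspecting or removing it. A demonstration of the quoting only, using a hypothetical path under /tmp rather than a real SMB share root:

```shell
# Quoting demo only (hypothetical path; on a real share the directory
# sits at the share root). Single quotes stop the shell from expanding
# $EXTEND and $QUOTA as (empty) variables.
mkdir -p '/tmp/share/.$EXTEND/$QUOTA'

ls -d '/tmp/share/.$EXTEND'   # -> /tmp/share/.$EXTEND
ls '/tmp/share/.$EXTEND'      # -> $QUOTA
```

Unquoted, `ls /tmp/share/.$EXTEND` would expand `$EXTEND` to nothing and look for `/tmp/share/.` instead.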
Re: [zfs-discuss] revisiting aclmode options
On Mon, Jul 18, 2011 at 06:44:25PM -0700, Paul B. Henson wrote:
> It would be really nice if the aclmode could be specified on a per
> object level rather than a per file system level, but that would be
> considerably more difficult to achieve 8-/.

If there were an ACL permission for "set legacy permission bits", as distinct from write_acl, it could be set to deny at whatever granularity you needed...

--
Dan.
Re: [zfs-discuss] [illumos-Developer] revisiting aclmode options
On Mon, Jul 18, 2011 at 9:44 PM, Paul B. Henson hen...@acm.org wrote:
> Now that illumos has restored the aclmode option to zfs, I would like
> to revisit the topic of potentially expanding the suite of available
> modes.
[...]

At one point, I was experimenting with some code for smbfs that would invent the mode bits (remember, smbfs does not get mode bits from the remote server, only the ACL). I ended up discarding it there due to objections from reviewers, but the idea might be useful for people who really don't care about mode bits. I'll attempt a description below.

The idea: a new aclmode setting called "discard", meaning that the users don't care at all about the traditional mode bits. A dataset with aclmode=discard would have the chmod system call and NFS setattr do absolutely nothing to the mode bits. The getattr call would receive mode bits derived from the ACL (this derivation would actually happen when an ACL is stored, not during getattr).

The mode bits would be derived from the ACL such that the mode represents the greatest possible access that might be allowed by the ACL, without any consideration of deny entries or group memberships. In detail, that mode derivation might be:

 - The mode's owner part would be the union of access granted by any
   owner-type ACEs in the ACL and any ACEs where the ACE owner matches
   the file owner.
 - The mode's group part would be the union of access granted by any
   group ACEs and any ACEs whose type is unknown (all SIDs are of
   unknown type).
 - The mode's other part would be the access granted by an Everyone
   ACE, if present.

Would that be of any use?

Gordon
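The derivation described above is mechanical enough to sketch. This is a hypothetical illustration, not the discarded smbfs code: allow-ACEs are passed as `who:perms` tokens (perms drawn from r, w, x), deny entries are assumed already filtered out, and any named identity other than the file owner is lumped into the unknown/group bucket as described:

```shell
# Hypothetical sketch of the derivation above. Each mode digit is the
# union of the permission bits granted to that class; deny entries and
# group memberships are deliberately not considered.
derive_mode() {
    owner=$1; shift           # file owner, then one "who:perms" token per ACE
    u=0; g=0; o=0
    for ace in "$@"; do
        who=${ace%%:*}
        perms=${ace#*:}
        bits=0
        case $perms in *r*) bits=$((bits | 4)) ;; esac
        case $perms in *w*) bits=$((bits | 2)) ;; esac
        case $perms in *x*) bits=$((bits | 1)) ;; esac
        case $who in
            owner@|"$owner") u=$((u | bits)) ;;  # owner-type ACE, or ACE owner == file owner
            everyone@)       o=$((o | bits)) ;;  # Everyone ACE feeds the "other" part
            *)               g=$((g | bits)) ;;  # group ACEs and unknown-type SIDs
        esac
    done
    echo "$u$g$o"
}

derive_mode alice owner@:rwx group@:rx everyone@:r   # -> 754
```

Because only unions of allow entries are taken, the result is exactly the "greatest possible access" mode the proposal calls for: it can overstate, but never understate, what the ACL might grant.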