[zfs-discuss] EON ZFS Storage 0.60.0 based on snv 130, Sun-set release!

2010-04-05 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been 
released on Genunix! This release marks the end of SXCE releases and of Sun 
Microsystems as we know it, so it is dubbed the Sun-set release! Many thanks to Al 
at Genunix.org for download hosting and serving the OpenSolaris community.

EON Deduplication ZFS storage is available in 32 and 64-bit, CIFS and Samba 
versions:
EON 64-bit x86 CIFS ISO image version 0.60.0 based on snv_130
* eon-0.600-130-64-cifs.iso
* MD5: 55c5837985f282f9272f5275163f7d7b
* Size: ~93Mb
* Released: Monday 05-April-2010

EON 64-bit x86 Samba ISO image version 0.60.0 based on snv_130
* eon-0.600-130-64-smb.iso
* MD5: bf095f2187c29fb543285b72266c0295
* Size: ~106Mb
* Released: Monday 05-April-2010

EON 32-bit x86 CIFS ISO image version 0.60.0 based on snv_130
* eon-0.600-130-32-cifs.iso
* MD5: e2b312feefbfb14792c0d190e7ff69cf
* Size: ~59Mb
* Released: Monday 05-April-2010

EON 32-bit x86 Samba ISO image version 0.60.0 based on snv_130
* eon-0.600-130-32-smb.iso
* MD5: bcf6dc76bc9a22cff1431da20a5c56e2
* Size: ~73Mb
* Released: Monday 05-April-2010

EON 64-bit x86 CIFS ISO image version 0.60.0 based on snv_130 (NO HTTPD)
* eon-0.600-130-64-cifs-min.iso
* MD5: 78b0bb116c0e32a48c473ce1b94e604f
* Size: ~87Mb
* Released: Monday 05-April-2010

EON 64-bit x86 Samba ISO image version 0.60.0 based on snv_130 (NO HTTPD)
* eon-0.600-130-64-smb-min.iso
* MD5: e74732c41e4b3a9a06f52779bc9f8352
* Size: ~101Mb
* Released: Monday 05-April-2010

New/Changes/Fixes:
- Active Directory integration problem resolved
- Hotplug errors at boot are being worked on and are safe to ignore.
- Updated /mnt/eon0/.exec with new service configuration additions (light, 
nginx, afpd, and more ...).
- Updated ZFS, NFS v3 performance tuning in /etc/system
- Added megasys driver.
- EON reboots at GRUB (since snv_122) in ESXi, Fusion and various versions of 
VMware Workstation. This is related to bug 6820576. Workaround: at the GRUB menu 
press 'e' to edit the entry and append "-B disable-pcieb=true" to the end of the kernel line.
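For reference, a sketch of the edited entry (the path shown is the generic OpenSolaris form; EON's entry may differ, and everything before the appended option is whatever the line already contains):

  kernel$ /platform/i86pc/kernel/$ISADIR/unix <existing options> -B disable-pcieb=true

Press 'b' to boot the edited entry; appending the same option to the kernel line in the boot media's menu.lst makes the change permanent.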

http://eonstorage.blogspot.com/
http://sites.google.com/site/eonstorage/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON ZFS Storage 0.59.9 based on snv 129, Deduplication release!

2009-12-21 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been 
released on Genunix! This is the first EON release with inline deduplication 
features! Many thanks to Genunix.org for download hosting and serving the 
OpenSolaris community.

EON Deduplication ZFS storage is available in 32 and 64-bit, CIFS and Samba 
versions:
EON 64-bit x86 CIFS ISO image version 0.59.9 based on snv_129

* eon-0.599-129-64-cifs.iso
* MD5: 8e917a14dbf0c793ad2958bdf8feb24a
* Size: ~93Mb
* Released: Monday 21-December-2009

EON 64-bit x86 Samba ISO image version 0.59.9 based on snv_129

* eon-0.599-129-64-smb.iso
* MD5: 2c38a93036e4367e5cdf8a74605fcbaf
* Size: ~107Mb
* Released: Monday 21-December-2009

EON 32-bit x86 CIFS ISO image version 0.59.9 based on snv_129

* eon-0.599-129-32-cifs.iso
* MD5: 0dcdd754b937f1d6515eba34b6ed2607
* Size: ~59Mb
* Released: Monday 21-December-2009

EON 32-bit x86 Samba ISO image version 0.59.9 based on snv_129

* eon-0.599-129-32-smb.iso
* MD5: c24008516eb4584a64d9239015559ba4
* Size: ~73Mb
* Released: Monday 21-December-2009

EON 64-bit x86 CIFS ISO image version 0.59.9 based on snv_129 (NO HTTPD)

* eon-0.599-129-64-cifs-min.iso
* MD5: 78b0bb116c0e32a48c473ce1b94e604f
* Size: ~87Mb
* Released: Monday 21-December-2009

EON 64-bit x86 Samba ISO image version 0.59.9 based on snv_129 (NO HTTPD)

* eon-0.599-129-64-smb-min.iso
* MD5: 57d93eba9286c4bcc4c00c0154c684de
* Size: ~101Mb
* Released: Monday 21-December-2009

New/Changes/Fixes:
- Deduplication, Deduplication, Deduplication. (Only 1x the storage space was 
used.) A brief usage sketch follows after this list.
- The hotplug errors at boot are being worked on. They are safe to ignore.
- Cleaned up minor entries in /mnt/eon0/.exec. Added "rsync --daemon" to start 
by default.
- EON reboots at GRUB (since snv_122) in ESXi, Fusion and various versions of 
VMware Workstation. This is related to bug 6820576. Workaround: at the GRUB menu 
press 'e' to edit the entry and append "-B disable-pcieb=true" to the end of the kernel line.
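For anyone trying the new feature, a minimal sketch of turning deduplication on and checking the result (pool and dataset names are illustrative):

  zfs set dedup=on tank/backups     # dedup is a per-dataset property
  zfs get dedup tank/backups
  zpool list tank                   # the DEDUP column shows the pool-wide dedup ratio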

http://eonstorage.blogspot.com
http://sites.google.com/site/eonstorage/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Dedupe asynchronous mode?

2009-12-11 Thread Andre Lue
I'm a bit unclear on how to use/try de-duplication in asynchronous mode. Can 
someone kindly clarify?

Is it as simple as enabling then disabling after something completes?

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON ZFS Storage 0.59.5 based on snv 125 released!

2009-12-03 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been 
released on Genunix! Many thanks to Al Hopper and Genunix.org for download 
hosting and serving the OpenSolaris community.

EON ZFS storage is available in 32- and 64-bit CIFS and Samba versions:
EON 64-bit x86 CIFS ISO image version 0.59.5 based on snv_125

* eon-0.595-125-64-cifs.iso
* MD5: a21c0b6111803f95c29e421af96ee016
* Size: ~90Mb
* Released: Thursday 3-December-2009

EON 64-bit x86 Samba ISO image version 0.59.5 based on snv_125

* eon-0.595-125-64-smb.iso
* MD5: 4678298f0152439867d218987c3ec20e
* Size: ~103Mb
* Released: Thursday 3-December-2009

EON 32-bit x86 CIFS ISO image version 0.59.5 based on snv_125

* eon-0.595-125-32-cifs.iso
* MD5: 4b76893c3363d46fad34bf7d0c23548c
* Size: ~57Mb
* Released: Thursday 3-December-2009

EON 32-bit x86 Samba ISO image version 0.59.5 based on snv_125

* eon-0.595-125-32-smb.iso
* MD5: f478a8ea9228f16dc1bd93adae03d200
* Size: ~70Mb
* Released: Thursday 3-December-2009

EON 64-bit x86 CIFS ISO image version 0.59.5 based on snv_125 (NO HTTP)

* eon-0.595-125-64-cifs-min.iso
* MD5: c7b9ec5c487302c1aa97363eb440fe00
* Size: ~85Mb
* Released: Thursday 3-December-2009

EON 64-bit x86 Samba ISO image version 0.59.5 based on snv_125 (NO HTTP)

* eon-0.595-125-64-smb-min.iso
* MD5: a33f34506f05070ffc554de7beaafd4d
* Size: ~98Mb
* Released: Thursday 3-December-2009

New/Changes/Fixes:
- removed iscsitgd and replaced it with COMSTAR (iscsit, stmf); a short COMSTAR sketch follows after this list
- added SUNWhd to image vs being in the binary kit.
- added rsync to image vs being in the binary kit.
- added nge, yge and yukonx drivers.
- added (/etc/inet/hosts, /etc/default/init) to /mnt/eon0/.backup (TIMEZONE and 
hostname change fix)
- fixed a typo in /mnt/eon0/.exec: zpool -a corrected to zpool import -a
- EON reboots at GRUB (since snv_122) in ESXi, Fusion and various versions of 
VMware Workstation. This is related to bug 6820576. Workaround: at the GRUB menu 
press 'e' to edit the entry and append "-B disable-pcieb=true" to the end of the kernel line.
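For those moving from the old target daemon, a minimal COMSTAR sketch for exporting a zvol over iSCSI (pool and volume names are illustrative):

  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  zfs create -V 10g tank/iscsivol
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol   # prints the LU GUID
  stmfadm add-view <GUID-from-sbdadm>             # expose the LU to all initiators
  itadm create-target                             # create a default iSCSI target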
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Andre Boegelsack
Hi to all,

I have a short question regarding which filesystem I should use in Dom0/DomU. 
I've built my Dom0 on basis of ZFS.

For my first DomU I created a ZFS pool and installed the DomU (with OSOL 
inside). During the installation process you are asked whether you want to use 
UFS or ZFS - I chose ZFS. The installation process was incredibly slow. 

Hence, in the next DomU I used UFS instead of ZFS. And the installation process 
was pretty fast.

This leads me to the conclusion: ZFS on top of ZFS = don't; UFS on top of ZFS 
= OK.

Can anybody verify that performance issue?

Regards
André
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Performance of ZFS and UFS inside local/global zone

2009-10-20 Thread Andre Boegelsack
Dear all,

I was interested in the performance difference between filesystem operations 
inside a local and global zone. Therefore I utilized filebench and made several 
performance tests with the OLTP script for filebench. Here are some of my 
results:

-> In the global zone (filebench operating on UFS): 281,850.2 IOPS/sec
-> In the local zone (filebench operating on UFS): 181,716 IOPS/sec
So there is a huge difference between the local and global zone when operating 
on UFS.

After I saw the huge difference I wondered whether I would see such a huge difference 
when using ZFS. Here are the results:
-> In the global zone (filebench operating on ZFS): 1,710,268.1 IOPS/sec
-> In the local zone (filebench operating on ZFS): 449,332.6 IOPS/sec
I was a little bit surprised to see a big difference again. Besides: ZFS 
outperforms UFS - but this is already known.

So in a first analysis I suspected the loopback device driver (lofs) of causing the 
performance degradation. I repeated the tests without the loopback driver by 
mounting the UFS device directly inside the zone, and the performance 
degradation did not appear again.
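For reference, a rough sketch of the two ways a filesystem is typically presented to a zone in such a test (zone, path and device names are illustrative). The default loopback (lofs) mount of a global-zone directory:

  zonecfg -z testzone
  zonecfg:testzone> add fs
  zonecfg:testzone:fs> set dir=/data
  zonecfg:testzone:fs> set special=/export/data
  zonecfg:testzone:fs> set type=lofs
  zonecfg:testzone:fs> end

versus handing the UFS device straight to the zone, bypassing lofs:

  zonecfg:testzone> add fs
  zonecfg:testzone:fs> set dir=/data
  zonecfg:testzone:fs> set special=/dev/dsk/c1t1d0s6
  zonecfg:testzone:fs> set raw=/dev/rdsk/c1t1d0s6
  zonecfg:testzone:fs> set type=ufs
  zonecfg:testzone:fs> end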

This leads me to the assumption that the loopback driver causes the 
performance degradation - but how can I make sure it is the loopback driver 
and not anything else? Does anyone have an idea how to explore this phenomenon?

Regards
André
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON ZFS Storage 0.59.4 based on snv_124 released!

2009-10-19 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been 
released on Genunix! Many thanks to Genunix.org for download hosting and 
serving the OpenSolaris community.

EON ZFS storage is available in 32- and 64-bit CIFS and Samba versions:
EON 64-bit x86 CIFS ISO image version 0.59.4 based on snv_124

* eon-0.594-124-64-cifs.iso
* MD5: 4bda930d1abc08666bf2f576b5dd006c
* Size: ~89Mb
* Released: Monday 19-October-2009

EON 64-bit x86 Samba ISO image version 0.59.4 based on snv_124

* eon-0.594-124-64-smb.iso
* MD5: 80af8b288194377f13706572f7b174b3
* Size: ~102Mb
* Released: Monday 19-October-2009

EON 32-bit x86 CIFS ISO image version 0.59.4 based on snv_124

* eon-0.594-124-32-cifs.iso
* MD5: dcc6f8cb35719950a6d4320aa5925d22
* Size: ~56Mb
* Released: Monday 19-October-2009

EON 32-bit x86 Samba ISO image version 0.59.4 based on snv_124

* eon-0.594-124-32-smb.iso
* MD5: 3d6debd4595c1beb7ebbb68ca30b7391
* Size: ~69Mb
* Released: Monday 19-October-2009

New/Changes/Fixes:
- ntpd and nscd starting moved to /mnt/eon0/.exec
- added .disable and .purge feature
- install.sh bug fix for virtual disks, multiple run and improved error checking
- new transporter.sh CLI to automate upgrades, backups or downgrades to 
backed-up versions
- EON reboots at GRUB in ESXi, Fusion and various versions of VMware 
Workstation. This is related to bug 6820576. Workaround: at the GRUB menu press 'e' 
to edit the entry and append "-B disable-pcieb=true" to the end of the kernel line.
http://eonstorage.blogspot.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON 0.59.3 based on snv_122 released

2009-09-15 Thread Andre Lue
EON ZFS NAS 0.59.3 based on snv_122 released!
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been 
released on Genunix! Many thanks to Al at Genunix.org for download hosting and 
serving the OpenSolaris community.

It is available in CIFS and Samba flavors:
EON 64-bit x86 CIFS ISO image version 0.59.3 based on snv_122
* eon-0.593-122-64-cifs.iso
* MD5: 8be86fb315b5b4929a04e0346ed0168c
* Size: ~89Mb
* Released: Monday 14-September-2009

EON 64-bit x86 Samba ISO image version 0.59.3 based on snv_122
* eon-0.593-122-64-smb.iso
* MD5: f68fefdc525a517b9c4b66028ae4347e
* Size: ~101Mb
* Released: Monday 14-September-2009

EON 32-bit x86 CIFS ISO image version 0.59.3 based on snv_122
* eon-0.593-122-32-cifs.iso
* MD5: fa71f059aa1eeefbcda597b98006ba9f
* Size: ~56Mb
* Released: Monday 14-September-2009

EON 32-bit x86 Samba ISO image version 0.59.3 based on snv_122
* eon-0.593-122-32-smb.iso
* MD5: 1b9861a780dc01da36ca17d1b4450132
* Size: ~69Mb
* Released: Monday 14-September-2009

New/Fixes:
- added 32/64-bit drivers: bnx, igb
- Workaround fix for IP validation in setup.sh
- added /usr/local/sbin for bin kit to bashrc
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] De-duplication before SXCE EOL ?

2009-09-10 Thread Andre Lue
Can anyone answer whether we will get ZFS de-duplication before SXCE EOL? If 
possible, also answer the same for encryption.

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON ZFS NAS 0.59.2 based on snv_119 released

2009-08-07 Thread Andre Lue
EON 64-bit x86 CIFS ISO image version 0.59.2 based on snv_119
* eon-0.592-119-64-cifs.iso
* MD5: a8560cf9b407c9da846dfa773aeaf676
* Size: ~87Mb
* Released: Friday 07-August-2009

EON 64-bit x86 Samba ISO image version 0.59.2 based on snv_119
* eon-0.592-119-64-smb.iso
* MD5: c845255be1d3efec26fdc919963b15de
* Size: ~100Mb
* Released: Friday 07-August-2009

EON 32-bit x86 CIFS ISO image version 0.59.2 based on snv_119
* eon-0.592-119-32-cifs.iso
* MD5: 9c0c093969c931f9a4614663faea90db
* Size: ~55Mb
* Released: Friday 07-August-2009

EON 32-bit x86 Samba ISO image version 0.59.2 based on snv_119
* eon-0.591-114-32-smb.iso
* MD5: 0a82dda197ab8a55007ba83145c0a662
* Size: ~68Mb
* Released: Friday 07-August-2009

New/Fixes:
- xntpd retired (R.I.P.), replaced by ntpd v4
- fixed a curpsinfo, libz.so.1 DTrace bug
- added /usr/local path for symlinks to pool/bin, pool/lib for user's binaries
- added binary package containing: rsync, top, powertop, unzip, zip, less, wget
- added symlink preservation to updimg.sh, so users can add custom links.
- added drivers: si3124, sfe, rge, yukonx 

http://eonstorage.blogspot.com/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Andre van Eyssen

On Wed, 5 Aug 2009, Ketan wrote:


How can we remove disk from zfs pool, i want to remove disk c0d3


[snip]

Currently, you can't remove a vdev without destroying the pool.

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs deduplication

2009-08-03 Thread Andre van Eyssen

On Tue, 4 Aug 2009, James C. McPherson wrote:


If so, did anyone see the presentation?


Yes. Everybody who attended.


You know, I think we might even have some evidence of their attendance!

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2177.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2178.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2179.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2184.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2186.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2228.jpg.html

So they obviously attended, but it takes time to get video and 
documentation out the door.


You can already watch their participation in the ZFS panel online:

http://www.ustream.tv/recorded/1810931

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] ZFS and deduplication

2009-08-02 Thread Andre Lue
Was de-duplication slated for snv_119? If not, can anyone say in which snv_xxx and 
in which form we will see it (synchronous, asynchronous, or both)?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Mark J Musante wrote:


Yes, if it's local. Just use df -n $path and it'll spit out the filesystem 
type.  If it's mounted over NFS, it'll just say something like nfs or autofs, 
though.


$ df -n /opt
Filesystem            kbytes     used     avail  capacity  Mounted on
/dev/md/dsk/d24     33563061 11252547  21974884       34%  /opt
$ df -n /sata750
Filesystem            kbytes     used     avail  capacity  Mounted on
sata750           2873622528       77 322671575        1%  /sata750

Not giving the filesystem type. It's easy to spot the zfs with the lack of 
recognisable device path, though.
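For comparison, a sketch of what df -n itself is documented to print on Solaris (mount point and filesystem type only; paths taken from the example above):

$ df -n /opt
/opt                : ufs
$ df -n /sata750
/sata750            : zfs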


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:


Well, I specifically stated that this property should not be recursive, i.e. it
should work only in a root of a filesystem.
When setting this property on a filesystem an administrator should carefully set
permissions to make sure that only trusted entities can create directories 
there.


Even limited to the root of a filesystem, it still gives a user the 
ability to consume resources rapidly. While I appreciate the fact that it 
would be restricted by permissions, I can think of a number of usage cases 
where it could suddenly tank a host. One use that might pop up, for 
example, would be cache spools - which often contain *many* directories. 
One runaway and kaboom.


We generally use hosts now with plenty of RAM and the per-filesystem 
overhead for ZFS doesn't cause much concern. However, on a scratch box, 
try creating a big stack of filesystems - you can end up with a pool that 
consumes so much memory you can't import it!
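A throwaway sketch of that scratch-box experiment (pool and device names are illustrative; don't try it on a pool you care about):

# zpool create scratch c1t1d0
# i=0
# while [ $i -lt 10000 ]; do zfs create scratch/fs$i; i=$((i+1)); done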



'rmdir' question requires some thinking, my first reaction is it should do zfs
destroy...


.. which will fail if there's a snapshot, for example. The problem seems 
to be reasonably complex - compounded by the fact that many programs that 
create or remove directories do so directly - not by calling externals 
that would be ZFS aware.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, David Magda wrote:


Which makes me wonder: is there a programmatic way to determine if a path
is on ZFS?


statvfs(2)

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:


"Subdirectory is automatically a new filesystem" property - an administrator 
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems have
default/inherited properties except for the magic property which is off.

Right now I see this as being mostly useful for /home. Main benefit in this case
is that various user administration tools can work unmodified and do the right
thing when an administrator wants a policy of a separate fs per user
But I am sure that there could be other interesting uses for this.


It's a nice idea, but zfs filesystems consume memory and have overhead. 
This would make it trivial for a non-root user (assuming they have 
permissions) to crush the host under the weight of .. mkdir.


$ mkdir -p waste/resources/now/waste/resources/now/waste/resources/now

(now make that much longer and put it in a loop)

Also, will rmdir call zfs destroy? Snapshots interacting with that could 
be somewhat unpredictable. What about rm -rf?


It'd either require major surgery to userland tools, including every 
single program that might want to create a directory, or major surgery to 
the kernel. The former is unworkable, the latter .. scary.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-26 Thread Andre Lue
byleal,

Can you share how to recreate or test this?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Something wrong with zfs mount

2009-07-21 Thread Andre Lue
Hi Ian,

Thanks for the reply. I will check your recommendation when I get a chance. 
However this happens on any pool that has hierarchical ZFS filesystems. I 
noticed this problem started in snv_114. This same filesystem structure 
last worked fine with snv_110.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Something wrong with zfs mount

2009-07-20 Thread Andre Lue
I have noticed this in snv_114 now at 117.

I have the following filesystems.
fs was created using zfs create pool/fs
movies created using zfs create pool/fs/movies
pool/fs/movies
pool/fs/music
pool/fs/photos
pool/fs/archives

At boot, /lib/svc/method/fs-local fails where zfs mount -a is called: "failed to 
mount pool/fs/movies, directory not empty". This sends filesystem/local into 
maintenance.

zfs mount -a cannot successfully mount this zpool/zfs structure from the command 
line either. Does anyone know what's wrong or have any guidance?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Andre van Eyssen

On Sun, 19 Jul 2009, Richard Elling wrote:

I do, even though I have a small business.  Neither InDesign nor 
Illustrator will be ported to Linux or OpenSolaris in my lifetime... 
besides, iTunes rocks and it is the best iPhone developer's environment 
on the planet.


Richard,

I think the point that Gavin was trying to make is that a sensible 
business would commit their valuable data back to a fileserver running on 
solid hardware with a solid operating system rather than relying on their 
single-spindle laptops to store their valuable content - not making any 
statement on the actual desktop platform.


For example, I use a mixture of Windows, MacOS, Solaris and OpenBSD around 
here, but all the valuable data is stored on a zpool located on a SPARC 
server (obviously with ECC RAM) with UPS power. With Windows around, I 
like the fact that I don't need to think twice before reinstalling those 
machines.


Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deduplication

2009-07-12 Thread Andre van Eyssen

On Sun, 12 Jul 2009, Cyril Plisko wrote:


Open source is much more than throwing the code over the wall.
Heck, in the early pilot days I was told by a number of Sun engineers,
that the reason things are taking time is exactly that - we do not
want to just throw the code over the wall - we want to build a
community.


With respect, Sun is entitled to develop new features in whichever manner 
suits their ends. While the community may desire fresh, juicy source to 
dig through on a regular basis, it's not always going to land in your lap.


You can't always get what you want. In this case, however, you will get 
what you need - the finished product.


Finally, there is one rather simple way to pull development out into the 
open - write some relevant code and be part of the development process. If 
people delivered code as quickly as they deliver words, the development 
process would be wide out in the open.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deduplication

2009-07-12 Thread Andre van Eyssen

On Sun, 12 Jul 2009, Cyril Plisko wrote:


There is an ongoing speculations of what/when/how deduplication will
be in ZFS and I am curious: what is the reason to keep the thing
secret ? I always thought open source assumes open development
process. What exactly people behind deduplication effort trying to
prove by keeping their mouth shut ?

Something feels wrong...


The conference is less than a week away. It's hardly keeping things secret 
to announce at a public conference!


You should remember that the Amber Road product was announced at a 
conference, too - and was kept reasonably quiet until the announcement. 
I'm just glad I was at the conference where Amber Road was announced and 
will be attending KCA!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Andre van Eyssen

On Mon, 6 Jul 2009, Gary Mills wrote:


As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs.  If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys and convert to Microsoft products.


If you change platform every time you get two bugs in a product, you must 
cycle platforms on a pretty regular basis!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-07-01 Thread Andre van Eyssen

On Thu, 2 Jul 2009, Ian Collins wrote:


5+ is typical for telco use.


Aah, but we start getting into rooms full of giant 2V wet lead acid cells 
and giant busbars the size of railway tracks.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any news on deduplication?

2009-06-30 Thread Andre van Eyssen

On Tue, 30 Jun 2009, MC wrote:


Any news on the ZFS deduplication work being done?  I hear Jeff Bonwick might 
speak about it this month.


Yes, it is definitely on the agenda for Kernel Conference Australia 
(http://www.kernelconference.net) - you should come along!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Andre van Eyssen

On Tue, 30 Jun 2009, Monish Shah wrote:

The evil tuning guide says "The ZIL is an essential part of ZFS and should 
never be disabled."  However, if you have a UPS, what can go wrong that 
really requires ZIL?


Without addressing a single ZFS-specific issue:

* panics
* crashes
* hardware failures
- dead RAM
- dead CPU
- dead systemboard
- dead something else
* natural disasters
* UPS failure
* UPS failure (must be said twice)
* Human error (what does this button do?)
* Cabling problems (say, where did my disks go?)
* Malicious actions (Fired? Let me turn their power off!)

That's just a warm-up; I'm sure people can add both the ZFS-specific 
reasons and also the fallacy that a UPS does anything more than mitigate 
one particular single point of failure.


Don't forget to buy two UPSes and split your machine across both. And 
don't forget to actually maintain the UPS. And check the batteries. And 
schedule a load test.


The single best way to learn about the joys of UPS behaviour is to sit 
down and have a drink with a facilities manager who has been doing the job 
for at least ten years. At least you'll hear some funny stories about the 
day a loose screw on one floor took out a house UPS and 100+ hosts and NEs 
with it.


Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-23 Thread Andre van Eyssen

On Tue, 23 Jun 2009, Thomas Maier-Komor wrote:


1) Once the disks spin down due to idleness it can become impossible to
reactivate them without doing a full reboot (i.e. hot plugging won't help)


That's a good point - I don't think a second goes by without at least a 
little I/O on those disks, so they've probably spun down twice since 
installation - for two other hardware upgrades.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-22 Thread Andre van Eyssen

On Mon, 22 Jun 2009, Jacob Ritorto wrote:

Is there a card for OpenSolaris 2009.06 SPARC that will do SATA 
correctly yet?  Need it for a super cheapie, low expectations, SunBlade 
100 filer, so I think it has to be notched for 5v PCI slot, iirc. I'm OK 
with slow -- main goals here are power saving (sleep all 4 disks) and 
1TB+ space.  Oh, and I hate to be an old head, but I don't want a 
peecee.  They still scare me :)  Thinking root pool on 16GB ssd, 
perhaps, so the thing can spin down the main pool and idle *really* 
cheaply..


The LSI SAS controllers with SATA ports work nicely with SPARC. I have one 
in my V880. On a Blade-100, however, you might have some issues due to the 
craptitude of the PCI slots.


To be honest, the Grover was a fun machine at the time, but I think that 
time may have passed.


Oh, and if you do grab the LSI card, don't let James catch you using the 
itmpt driver or lsiutils ;-)


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Andre van Eyssen

On Sun, 21 Jun 2009, Carson Gaspar wrote:

I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works 
just fine. You need to get "lsiutil" from the LSI web site to fully access 
all the functionality, and they cleverly hide the download link only under 
their FC HBAs on their support site, even though it works for everything.


I'll add another vote for the LSI products. I have a four port PCI-X card 
in my V880, and the performance is good and the product is well behaved. 
The only caveats:


1. Make sure you upgrade the firmware ASAP
2. You may need to use lsiutil to fiddle the target mappings

Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Andre van Eyssen

On Sat, 20 Jun 2009, Cindy Swearingen wrote:


I wish we had a zpool destroy option like this:

# zpool destroy -really_dead tank2


Cindy,

The moment we implemented such a thing, there would be a rash of requests 
saying:


a) I just destroyed my pool with -really_dead - how can I get my data 
back??!
b) I was able to recover my data from -really_dead - can we have 
-ultra-nuke please?
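(For context, a plain destroy is already reversible in many cases, which is exactly why those requests would follow; a minimal sketch, pool name from the example above:

# zpool destroy tank2
# zpool import -D            # lists destroyed pools that are still intact on disk
# zpool import -D tank2      # brings it back)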


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server Cloning With ZFS?

2009-06-19 Thread Andre Wenas
The device tree for your 250 might be different, so you may need to  
hack the path_to_inst and /devices and /dev to make it boot successfully.




On Jun 20, 2009, at 10:18 AM, Dave Ringkor   
wrote:


Cindy, my question is about what "system specific info" is  
maintained that would need to be changed?  To take my example, my  
E450, "homer", has disks that are failing and it's a big clunky  
server anyway, and management wants to decommission it.  But we have  
an old 220R racked up doing nothing, and it's not scheduled for  
disposal.


What would be wrong with this:
1) Create a recursive snapshot of the root pool on homer.
2) zfs send this snapshot to a file on some NFS server.
3) Boot my 220R (same architecture as the E450) into single user  
mode from a DVD.

4) Create a zpool on the 220R's local disks.
5) zfs receive the snapshot created in step 2 to the new pool.
6) Set the bootfs property.
7) Reboot the 220R.

Now my 220R comes up as "homer", with its IP address, users, root  
pool filesystems, any software that was installed in the old homer's  
root pool, etc.


Since ZFS filesystems don't care about the underlying disk structure  
-- they only care about the pool, and I've already created a pool  
for them on the 220R using the disks it has, there shouldn't be any  
storage-type "system specific into" to change, right?  And sure, the  
220R might have a different number and speed of CPUs, and more or  
less RAM than the E450 had.  But when you upgrade a server in place  
you don't have to manually configure the CPUs or RAM, and how is  
this different?


The only thing I can think of that I might need to change, in order  
to bring up my 220R and have it "be" homer, is the network  
interfaces, from hme to bge or whatever.  And that's a simple config  
setting.


I don't care about Flash.  Actually, if you wanted to provision new  
servers based on a golden image like you can with Flash, couldn't  
you just take a recursive snapshot of a zpool as above, "receive" it  
in an empty zpool on another server, set your bootfs, and do a  
sys-unconfig?


So my big question is, with a server on ZFS root, what "system  
specific info" would still need to be changed?

--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-06-14 Thread Andre Lue
Hi Bogdan,

I'd recommend the following RAM minimums for a fair balance of performance:
700 MB for 32-bit
1 GB for 64-bit
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-06-14 Thread Andre Lue
Js.lists,

My needs are:
* Easy package management
There is no pkgadd or IPS included in EON. You can, however, add IPS and retrieve 
any of its available packages.

* Easy upgrades
EON is fairly easy to upgrade and the risk is low. All you have to do is 
preserve your previous image before upgrading.  If the upgrade is not suitable 
go back to your previous image/release. You can preview the each new release by 
simply burning the image and booting, your current install would remain 
untouched.

* Stability
EON is sxce minimized so it is as stable as the matching snv_xxx release. You 
can also roll your own appliance to include only the bits you need

* Ability to run Splunk
Splunk not included but if there is an ips package or pkgadd version included 
on the snv dvd you could always add or include it if you roll your own 
appliance.

Feel free to give EON a twirl. It will only cost you a CD and the time to burn 
and boot it. Or if you have a VM you can test it there. You'll know really 
fast whether it has enough of a framework for you to add the missing bits you need 
or not. Hope that helps.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What can I do to shorten the long awkward names of snapshots?

2009-04-15 Thread Andre van Eyssen

On Wed, 15 Apr 2009, Harry Putnam wrote:



Would become:
 a:freq-041509_1630


Can I suggest perhaps something inspired by the old convention for DNS 
serials, along the lines of fYYYYMMDDhhmm? Like:


a:f200904151630

This makes things easier to sort and lines up in a tidy manner.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Andre van Eyssen

On Fri, 10 Apr 2009, Rince wrote:


FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
"fault-tolerant" and "drive dropping away at any time" is a rather expected
scenario.


Ripping a SATA device out runs a goodly chance of confusing the 
controller. If you'd had this problem with fibre channel or even SCSI, I'd 
find it a far bigger concern. IME, IDE and SATA just don't hold up to the 
abuses we'd like to level at them. Of course, this boils down to 
controller and enclosure and a lot of other random chances for disaster.


In addition, where there is a procedure to gently remove the device, use 
it. We don't just yank disks from the FC-AL backplanes on V880s, because 
there is a procedure for handling this even for failed disks. The five 
minutes to do it properly is a good investment compared to much longer 
downtime from a fault condition arising from careless manhandling of 
hardware.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Bob Friesenhahn wrote:


> I am worried that Sun is primarily interested in new business and
> tends to not offer replacement/new drives for as long as the actual
> service-life of the array.  What is the poor customer to do when Sun
> is no longer willing to offer a service contract and Sun is no longer
> willing to sell drives (or even the carriers) for the array?

You can still procure replacement drives for real vintage kit, like the 
A1000/D1000 arrays. I doubt your argument is valid. As a side point, by 
the time these arrays are dead & buried, the sleds for them will no doubt 
be as common as spuds (and don't we all have at least 30 of those lying 
around?)

> Sometimes it is only a matter of weeks before Sun stops offering
> supportive components.  For example, my Ultra 40 was only discontinued
> a month or so ago but already Sun somehow no longer lists memory for
> it (huh?).

If your trusty Sun partner couldn't supply you with memory for an 
Ultra-40, I'd take that as a sign to find a new partner. EOSL products 
vanish from websites but parts can still be ordered for them.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Richard Elling wrote:


> The drives that Sun sells will come with the correct bracket.
> Ergo, there is no reason to sell the bracket as a separate
> item unless the customer wishes to place non-Sun disks in
> them.  That represents a service liability for Sun, so they are
> not inclined to do so.  It is really basic business.
> -- richard

This thread has been running for a little too long, considering the issues 
are pretty simple.

Sun sells a JBOD storage product, along with disks and accessories. The 
disks they provide are mounted in the correct carriers for the array. The 
pricing of the disks and accessories are part of the price calculation for 
the entire system - you could provide the array empty, with a full set of 
empty carriers but the price will go up.

No brand name storage vendor supports or encourages installation of third 
party disks. It's not the way the business works. If the customer wants 
the reassurance, quality, (etc, etc) associated with buying brand name 
storage, they purchase the disks from the same vendor. If price is more 
critical than these factors, there's a wide range of "white box" 
solutions on the market. Try approaching IBM, HP, EMC, HDS, NetApp or 
similar and ask to buy an empty JBOD and spare trays - it's not happening.

Yes, this is unfortunate for those who would like to purchase a Sun JBOD 
for home or for a microbusiness. However, these users are probably aware 
that if they want to buy their own spindles and run an unsupported 
configuration, their local metal shop will be happy to bang out some 
frames. Not to mention the fact that one could always run up a set of 
sleds in the shed without too much strife - in fact, in the past when 
"spuds" were less commonly available, I've seen a home user make sleds out 
of wood that did the job.

Now, I'm looking forward to seeing the first Sun JBOD loaded up with 
CNC-milled mahogany sleds. It'll look great.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS GUI

2009-01-21 Thread Andre Wenas

Hi Richard,

Currently there is a bug in Sol10u6, 6764133 causing zfs admin to crash.

*Bug ID:* 6764133  *Synopsis:* ZFS admin gui causing jvm SIGSEGV on s10u6

Not sure if there is a fix already.

Regards,
Andre W.

Richard Elling wrote:

Marius van Vuuren wrote:
  

Is there a GUI for 2008.11 for ZFS like in S10U6?



It doesn't appear to be in the dev repository, but you
should be able to install the packages from S10u6.  If
you try this, please let us know if it works.

Look for packages contributing to "webconsole":
SUNWasu   Sun Java System Application Server (usr)
SUNWemcon Spanish Sun Java(TM) Web Console 3.1 (Core)
SUNWemctg Spanish Sun Java(TM) Web Console 3.1 (Tags & Components)
SUNWezfsg Spanish localization for Sun Web Console ZFS administration
SUNWfmcon French Sun Java(TM) Web Console 3.1 (Core)
SUNWfmctg French Sun Java(TM) Web Console 3.1 (Tags & Components)
SUNWfzfsg French localization for Sun Web Console ZFS administration
SUNWmcon  Sun Java(TM) Web Console 3.1 (Core)
SUNWmconr Sun Java(TM) Web Console 3.1 (Root)
SUNWmcos  Implementation of Sun Java(TM) Web Console (3.1) services
SUNWmcosx Implementation of Sun Java(TM) Web Console (3.1) services
SUNWmctag Sun Java(TM) Web Console 3.1 (Tags & Components)
SUNWzfsgr ZFS Administration for Sun Java(TM) Web Console (Root)
SUNWzfsgu ZFS Administration for Sun Java(TM) Web Console (Usr)

  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
You can edit the /etc/user_attr file.
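(On OpenSolaris, root is a role by default, which is why the login screen complains about roles. A minimal sketch of the relevant entry; the exact line varies by build:

# grep '^root' /etc/user_attr
root::::type=role;auths=solaris.*,solaris.grant;profiles=...

Changing type=role to type=normal on that line allows direct root logins.)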

Sent from my iPhone

On Jan 9, 2009, at 11:13 AM, noz  wrote:

>> To do step no 4, you need to login as root, or create
>> new user which
>> home dir not at export.
>>
>> Sent from my iPhone
>>
>
> I tried to login as root at the login screen but it wouldn't let me,  
> some error about roles.  Is there another way to login as root?
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
To do step no. 4, you need to log in as root, or create a new user whose  
home directory is not under /export.
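(A side note on the quoted steps below: for large amounts of data, the intermediate file written in step 3 can be skipped by piping send straight into receive; a minimal sketch using the dataset names from the example:

# zfs snapshot -r rpool/export@now
# zfs send -R rpool/export@now | zfs receive -d epool

The receive may warn that it cannot mount over the still-mounted rpool/export; that sorts itself out once rpool/export is destroyed and zfs mount -a is run, as in steps 4 and 6.)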

Sent from my iPhone

On Jan 9, 2009, at 10:10 AM, noz  wrote:

> Kyle wrote:
>> So if preserving the home filesystem through
>> re-installs are really
>> important, putting the home filesystem in a separate
>> pool may be in
>> order.
>
> My problem similar to the original thread author, and this scenario  
> is exactly the one I had in mind.  I figured out a workable solution  
> from the zfs admin guide, but I've only tested this in virtualbox.   
> I have no idea how well this would work if I actually had hundreds  
> of gigabytes of data.  I also don't know if my solution is the  
> recommended way to do this, so please let me know if anyone has a  
> better method.
>
> Here's my solution:
> (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                     69K  15.6G    18K  /epool
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/export             632K  11.9G    19K  /export
> rpool/export/home        612K  11.9G    19K  /export/home
> rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
> (3) n...@holodeck:~# zfs send -R rpool/exp...@now > /tmp/export_now
> (4) n...@holodeck:~# zfs destroy -r -f rpool/export
> (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                    756K  15.6G    18K  /epool
> epool/export             630K  15.6G    19K  /export
> epool/export/home        612K  15.6G    19K  /export/home
> epool/export/home/noz    592K  15.6G   592K  /export/home/noz
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (6) n...@holodeck:~# zfs mount -a
>
> or
>
> (6) reboot
>
> The only part I'm uncomfortable with is when I have to destroy  
> rpool's export filesystem (step 4), because trying to destroy  
> without the -f switch results in a "filesystem is active" error.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Storage 7000

2008-11-19 Thread Andre Lue
Referring to the web GUI or BUI seen here:

http://blogs.sun.com/brendan/entry/status_dashboard
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Storage 7000

2008-11-18 Thread Andre Lue
Is the web interface on the appliance available for download or will it make it 
to opensolaris sometime in the near future?

thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-05 Thread Andre Wenas
Hi Evert,

Sun positions VirtualBox as desktop virtualization software. It only 
supports 32-bit guests with a single CPU. If this meets your requirements, it should 
run OK.

Regards,
Andre W.


Evert Meulie wrote:
> Hi all,
>
> I have been looking at various alternatives for a system that runs several 
> Linux & Windows guests. So far my favorite choice would be 
> OpenSolaris+ZFS+RAIDZ+VirtualBox. Is this combo ready to be a host for Linux 
> & Windows guests? Or is it not 100% stable (yet)?
>
>  
> Greetings,
>   Evert
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-08-05 Thread Andre Wenas
You can try to boot from the OpenSolaris CD, import rpool, mount the root 
filesystem and reinstall GRUB.
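A rough sketch of that sequence from the CD's shell (the root dataset name is illustrative; the slices are the two halves of the mirror described below):

# zpool import -f -R /a rpool
# zfs mount rpool/ROOT/<your-BE>        # <your-BE> is whatever the root dataset is called
# installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c6d0s0
# installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c7d0s0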

Regards,
Andre W.


Seymour Krebs wrote:
> Machine is running x86 snv_94 after a recent upgrade from OpenSolaris 2008.05.  
> ZFS and zpool reported no troubles except suggesting an upgrade from ver. 10 
> to ver. 11, which seemed like a good idea at the time.  The system was up for several 
> days after that point, then taken down for some unrelated maintenance.
>
> Now it will not boot OpenSolaris; it drops to the GRUB prompt, no menus.
>
> zfs was mirrored on two disks, c6d0s0 and c7d0.  I never noted the GRUB 
> commands for booting and am not really familiar with the nomenclature.  At this 
> point I am hoping that a burn of SXCE snv_94 will give me access to the zfs 
> pools so I can try "update-grub", but it will be about 9 hours 
> to download the .iso and I kinda need to work on data residing in that system.
>
> any suggestions 
>
> thanks, 
> sgk
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS/Install related question

2008-07-10 Thread Andre
Hi there,

I'm currently setting up a new system to my lab. 4 SATA drives would be turned 
into the main file system (ZFS?) running on a soft raid (raid-z?).

My main target is reliability; my experience with Linux SoftRaid was 
catastrophic and the array could not be restored after some testing simulating 
power failures (thank god I did the tests before relying on that...).

From what I've seen so far, Solaris cannot boot from a raid-z system. Is that 
correct?

In this case, what needs to be out of the array? For example, on a Linux system, I 
could set /boot to be on an old 256MB USB flash drive (as long as the boot loader and 
kernel were out of the array, the system would boot). What are the requirements 
for booting from the USB drive but loading a system on the array?

Second, how do I proceed during the Install process?

I know it's a little bit weird but I must confess I'm doing it on purpose. :-)

I thank you in advance
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot with gzip-9 compression

2008-05-16 Thread Andre Wenas
I have been using zfs boot with lzjb compression on since build 75; from 
time to time I have had a similar problem, not sure why.

As a best practice, I snapshot the root filesystem frequently, so that 
I can roll back to the last working snapshot.
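A minimal sketch of that practice (the dataset name is illustrative):

# zfs snapshot rpool/ROOT/myroot@known-good
# zfs rollback rpool/ROOT/myroot@known-good     # if a later change breaks the root filesystem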

Rgds,
Andre W.

Victor Latushkin wrote:
>>> this seems quite easy to me but I don't know how to "move around" to 
>>> actually implement/propose the required changes.
>>>
>>>
>>> To make grub aware of gzip (as it already is of lzjb) the steps should be:
>>>
>>>
>>> 1. create a new
>>>
>>> /onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
>>>
>>> starting from
>>>
>>> /onnv-gate/usr/src/uts/common/fs/zfs/gzip.c
>>>
>>> and removing the gzip_compress funcion
>>>   
>
> Yes, but it is a little bit more complicated than that. gzip support in 
> in-kernel ZFS leverages in-kernel zlib implementation. There is support 
> for gzip decompression algorithm in grub (see gunzip.c), so one need to 
> figure out how to leverage that and replace z_uncompress() with proper 
> call.
>
>   
>>> 2. add gzip_decompress
>>>
>>> at the end of
>>>
>>> /onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.h
>>>
>>>
>>> 3. update the decomp_table function to link "gzip" and all "gzip-N" 
>>> (with N=1...9) to the gzip_decompress function in
>>>
>>> /onnv-gate/usr/src/grub/grub-0.95/stage2/fsys_zfs.c
>>>   
>
> This sounds reasonable.
>
> Also one need to make sure that resulting binary does not exceed size 
> requirements (if any), thoroughly test it to verify that it works on all 
> HW architectures with all compression algorithms (and even mix of them).
>
> This may be not an exhaustive list of things to do.
>
>   
>>> What should I do to go on with this changes? Should I start a "community 
>>> project"?
>>>
>>>   
>
>   
>> These changes look simple enough so there is no point setting up 
>> community project imho.
>>
>> Just implement it, test it then ask for a sponsor and integrate it.
>> 
>
> As Robert points out, this indeed may be quite simple to bother setting 
> up community project, so it may be better to treat this just like 
> bite-size-rfe ;-)
>
> Wbr,
> Victor
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool create privileges

2008-04-18 Thread Andre Lue
do 

ppriv -e -D cmd 

and you will see the privs you need to add.
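A minimal sketch of that workflow (user, pool and device names are illustrative; the privilege to grant is whatever the ppriv output names):

$ ppriv -e -D zpool create testpool c1t1d0
  (each failed check prints a "missing privilege" line naming the required privilege)
# usermod -K defaultpriv=basic,<privilege-from-output> username
  (the user needs to log in again for the new default privilege set to apply)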
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting Solaris on ZFS file system

2008-02-02 Thread Andre Wenas
Hi,

Solaris 10 8/07 does not support ZFS boot. ZFS boot support is currently only 
available on Solaris Express (Nevada) on x86.

Rgds,
Andre W.

Jayakrishna wrote:
> Hi ,
>
> I have the following machine
> sun4v sparc SUNW,Sun-Fire-T1000
>
> and the Solaris 10 8/07 OS. Is it possible to create a ZFS file system and boot 
> the Solaris OS from it? Is this supported? If not, when will it be 
> supported? Is booting from a ZFS partition supported on any other platform, 
> e.g. x86?
>
> Could any body please clarify.
>
> Regards
> Jayakrishna.K
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't access my data

2008-01-18 Thread Andre Lue
I seem to remember hostid being added to the zpool to solve a bug for the poor 
man's storage cluster.

Try doing a zdb -v (you should see 4 copies; note whether hostid is a field 
and whether it differs from the current one).

You can also try a zpool import -f -a.

I've seen cases where zfs mount -v -a fails but a zpool import -f -a works. I 
don't yet know why, but I know zpool import searches the /dev/dsk/ tree, and my 
hunch is that zfs mount looks for a hostid match.
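
Putting that together (a minimal sketch of the comparison described above, run 
on the host that should own the pool):

hostid              # hostid of the running system
zdb -v              # look for a hostid field in the cached pool config, as noted above
zpool import -f -a  # force the import if the recorded hostid differs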
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS partitions

2008-01-09 Thread Andre Wenas
Yes, please check http://www.opensolaris.org/os/community/zfs/boot/

Jon Arikata wrote:
> can one boot ZFS off a ZFS partition?
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does block allocation for small writes work over iSCSI?

2008-01-08 Thread Andre Wenas

Although it looks possible, it is a very complex architecture.

If you can wait, please explore pNFS: 
http://opensolaris.org/os/project/nfsv41/


What is pNFS?

   * The pNFS protocol allows us to separate an NFS file system's data
     and metadata paths. With a separate data path we are free to lay
     file data out in interesting ways, like striping it across multiple
     different file servers. For more information, see the NFSv4.1
     specification.



Gilberto Mautner wrote:

Hello list,
 
 
I'm thinking about this topology:
 
NFS Client <---NFS---> zFS Host <---iSCSI---> zFS Node 1, 2, 3 etc.
 
The idea here is to create a scalable NFS server by plugging in more 
nodes as more space is needed, striping data across them.
 
A question is: we know from the docs that zFS optimizes random write 
speed by consolidating what would be many random writes into a single 
sequential operation.
 
I imagine that for zFS to be able to do that, it has to have some 
knowledge of the hard disk geometry. Now, if this geometry is 
being abstracted by iSCSI, is that optimization still valid?
 
 
Thanks
 
Gilberto
 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs pool does not remount automatically

2008-01-07 Thread Andre Lue
I have a slimmed-down build on 61 and 72. Neither of these systems 
automatically remounts the zpool on a reboot.

zfs list  returns "no datasets available"
zpool list returns "no pools available"

zfs mount -v -a  runs but doesn't mount the filesystem. I usually have to do a 
zpool import -f pool to get it back.

Does anyone know why this may be happening?
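
(For reference, the pieces involved in the automatic import are roughly these; 
a sketch assuming a stock snv layout, which a slimmed-down build may be missing:)

ls -l /etc/zfs/zpool.cache             # cached pool configs re-imported at boot
svcs -x svc:/system/filesystem/local   # SMF service whose method runs zfs mount -a
zpool import -f mypool                 # manual recovery, as above (pool name illustrative)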
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs mount -a intermittent

2007-11-15 Thread Andre Lue
I have a slimmed-down install on on_b61, and sometimes when the box is rebooted 
it fails to automatically remount the pool. In most cases, if I log in and run 
"zfs mount -a" it will mount. In some cases I have to reboot again. Can someone 
provide some insight as to what may be going on here?

truss captures the following when it fails 
412:brk(0x0808D000) = 0
412:brk(0x0809D000) = 0
412:brk(0x080AD000) = 0
412:brk(0x080BD000) = 0
412:open("/dev/zfs", O_RDWR)= 3
412:fstat64(3, 0x08047BA0)  = 0
412:d=0x0448 i=95420420 m=0020666 l=1  u=0 g=3 rdev=0x02D80000
412:at = Nov 15 06:17:13 PST 2007  [ 1195136233 ]
412:mt = Nov 15 06:17:13 PST 2007  [ 1195136233 ]
412:ct = Nov 15 06:17:13 PST 2007  [ 1195136233 ]
412:bsz=8192  blks=0 fs=devfs
412:stat64("/dev/pts/0", 0x08047CB0)= 0
412:d=0x044C i=447105886 m=0020620 l=1  u=0 g=0 rdev=0x00600000
412:at = Nov 15 06:17:32 PST 2007  [ 1195136252 ]
412:mt = Nov 15 06:17:32 PST 2007  [ 1195136252 ]
412:ct = Nov 15 06:17:32 PST 2007  [ 1195136252 ]
412:bsz=8192  blks=0 fs=dev
412:open("/etc/mnttab", O_RDONLY)   = 4
412:fstat64(4, 0x08047B60)  = 0
412:d=0x04580001 i=2 m=0100444 l=2  u=0 g=0 sz=651
412:at = Nov 15 06:17:38 PST 2007  [ 1195136258 ]
412:mt = Nov 15 06:17:38 PST 2007  [ 1195136258 ]
412:ct = Nov 15 06:17:04 PST 2007  [ 1195136224 ]
412:bsz=512   blks=2 fs=mntfs
412:open("/etc/dfs/sharetab", O_RDONLY) Err#2 ENOENT
412:open("/etc/mnttab", O_RDONLY)   = 5
412:fstat64(5, 0x08047B80)  = 0
412:d=0x04580001 i=2 m=0100444 l=3  u=0 g=0 sz=651
412:at = Nov 15 06:17:38 PST 2007  [ 1195136258 ]
412:mt = Nov 15 06:17:38 PST 2007  [ 1195136258 ]
412:ct = Nov 15 06:17:04 PST 2007  [ 1195136224 ]
412:bsz=512   blks=2 fs=mntfs
412:sysconfig(_CONFIG_PAGESIZE) = 4096
412:ioctl(3, ZFS_IOC_POOL_CONFIGS, 0x08046DA4)  = 0
412:llseek(5, 0, SEEK_CUR)  = 0
412:close(5)= 0
412:close(3)= 0
412:llseek(4, 0, SEEK_CUR)  = 0
412:close(4)= 0
412:_exit(0)

Looking at the ioctl call in libzfs_configs.c, I think "412:ioctl(3, 
ZFS_IOC_POOL_CONFIGS, 0x08046DA4)  = 0" matches the section of code 
below.
245         for (;;) {
246                 if (ioctl(zhp->zpool_hdl->libzfs_fd, ZFS_IOC_POOL_STATS,
247                     &zc) == 0) {
248                         /*
249                          * The real error is returned in the zc_cookie field.
250                          */
251                         error = zc.zc_cookie;
252                         break;
253                 }
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread Andre Wenas
ZFS boot is one of the best uses of ZFS for me. I can create more than 
10 boot environments, and roll back or destroy them if necessary. I am not 
afraid of bfu anymore, or of patching or any other software installation. 
If bfu breaks the OS, I just roll back, simple as that.

Rgds,
Andre W.

Kugutsumen wrote:
> Thanks, this is really strange.
> In your particular case you have /usr on the same pool as your rootfs 
> and I guess that's why it is working for you.
>
> All my attempts with b64, b70 and b73 failed if /usr is on a separate 
> pool.
>
>
> On 05/10/2007, at 4:10 PM, Andre Wenas wrote:
>
>> Hi Kugutsumen,
>>
>> Not sure abt the bugs, I follow instruction at 
>> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual
>> and create separate /usr, /opt and /var filesystem.
>>
>> Here is the vfstab:
>> #device device  mount   FS  fsck
>> mount   mount
>> #to mount   to fsck point   typepassat 
>> boot options
>> #
>> fd  -   /dev/fd fd  -   no  -
>> /proc   -   /proc   proc-   no  -
>> /dev/dsk/c0d0s1 -   -   swap-   no  -
>> /devices-   /devicesdevfs   -   no  -
>> sharefs -   /etc/dfs/sharetab   sharefs -   no  -
>> ctfs-   /system/contractctfs-   no  -
>> objfs   -   /system/object  objfs   -   no  -
>> swap-   /tmptmpfs   -   yes -
>> /dev/dsk/c0d0p0:1   /dev/rdsk/c0d0p0:1  /windows/C  
>> pcfs2 yes
>>   -
>> /dev/dsk/c0d0p0:2   /dev/rdsk/c0d0p0:2  /windows/D  
>> pcfs2 yes
>>   -
>> /dev/dsk/c0d0p0:3   /dev/rdsk/c0d0p0:3  /windows/E  
>> pcfs2 yes
>>   -
>> rootpool/rootfs - / zfs - no -
>> rootpool/rootfs/usr - /usr zfs - no -
>> rootpool/rootfs/var - /var zfs - no -
>> rootpool/rootfs/opt - /opt zfs - yes -
>>
>> The reason why I separate /usr, /opt, /var because I want to compress 
>> them:
>> bash-3.00$ zfs get compressratio rootpool/rootfs/usr
>> NAME PROPERTY VALUE SOURCE
>> rootpool/rootfs/usr compressratio 1.65x -
>> bash-3.00$ zfs get compressratio rootpool/rootfs/var
>> NAME PROPERTY VALUE SOURCE
>> rootpool/rootfs/var compressratio 2.10x -
>> bash-3.00$ zfs get compressratio rootpool/rootfs/opt
>> NAME PROPERTY VALUE SOURCE
>> rootpool/rootfs/opt compressratio 1.66x
>>
>> My entire bootdisk only need 2.5GB (entire distribution):
>> bash-3.00$ zfs list rootpool/rootfs
>> NAME USED AVAIL REFER MOUNTPOINT
>> rootpool/rootfs 2.58G 1.85G 351M legacy
>>
>> To be able to rollback you need to create another boot environment 
>> using snapshot and clone. I named the new zfs root filesystem as 
>> rootpool/tx (planned to install Solaris trusted extension on the new 
>> boot environment).
>>
>> bash-3.00$ zfs list -r rootpool/tx
>> NAME USED AVAIL REFER MOUNTPOINT
>> rootpool/tx 57.2M 1.85G 343M legacy
>> rootpool/tx/opt 30K 1.85G 230M legacy
>> rootpool/tx/usr 198K 1.85G 1.79G legacy
>> rootpool/tx/var 644K 1.85G 68.1M legacy
>>
>> If you want to rollback you need to boot to the clone BE then rollback.
>>
>> Rgds,
>> Andre W.
>>
>> Kugutsumen wrote:
>>> Please do share how you managed to have a separate ZFS /usr since 
>>> b64; there are dependencies to /usr and they are not documented. -kv 
>>> doesn't help too. I tried added /usr/lib/libdisk* to a /usr/lib dir 
>>> on the root partition and failed. Jurgen also pointed that there are 
>>> two related bugs already filed: Bug ID 6570056 Synopsis /sbin/zpool 
>>> should not link to files in /usr/lib 
>>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6570056 
>>> Bug ID 6494840 Synopsis libzfs should dlopen libiscsitgt rather than 
>>> linking to it 
>>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494840 I 
>>> can do a snapshot on bootroot too ... after I tried to do a rollback 
>>> from failsafe I couldn't boot anymore, probably because there was no 
>>> straightforward way to rebuild the boot archive. Regarding 
>>> compression, if I am not mistaken, grub cannot access files that are 
>>> compressed. Regards, K. On 05/10/2007, at 5:55 AM, Andre Wenas wrote:
>>>> Hi, Using bootroot I can do seperate /usr filesystem since b64. I 
>>>> can also do snapshot, clone and compression. Rgds, Andre W. 
>>>> Kugutsumen wrote:
>>>>> L

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread Andre Wenas

Hi Kugutsumen,

Not sure about the bugs; I followed the instructions at 
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual

and created separate /usr, /opt and /var filesystems.

Here is the vfstab:
#device device  mount   FS  fsckmount   
mount
#to mount   to fsck point   typepassat boot 
options

#
fd  -   /dev/fd fd  -   no  -
/proc   -   /proc   proc-   no  -
/dev/dsk/c0d0s1 -   -   swap-   no  -
/devices-   /devicesdevfs   -   no  -
sharefs -   /etc/dfs/sharetab   sharefs -   no  -
ctfs-   /system/contractctfs-   no  -
objfs   -   /system/object  objfs   -   no  -
swap-   /tmptmpfs   -   yes -
/dev/dsk/c0d0p0:1   /dev/rdsk/c0d0p0:1  /windows/C  pcfs
2 yes  
 -
/dev/dsk/c0d0p0:2   /dev/rdsk/c0d0p0:2  /windows/D  pcfs
2 yes  
 -
/dev/dsk/c0d0p0:3   /dev/rdsk/c0d0p0:3  /windows/E  pcfs
2 yes  
 -

rootpool/rootfs - / zfs - no -
rootpool/rootfs/usr - /usr zfs - no -
rootpool/rootfs/var - /var zfs - no -
rootpool/rootfs/opt - /opt zfs - yes -

The reason why I separate /usr, /opt and /var is that I want to compress them:

bash-3.00$ zfs get compressratio rootpool/rootfs/usr
NAME PROPERTY VALUE SOURCE
rootpool/rootfs/usr compressratio 1.65x -
bash-3.00$ zfs get compressratio rootpool/rootfs/var
NAME PROPERTY VALUE SOURCE
rootpool/rootfs/var compressratio 2.10x -
bash-3.00$ zfs get compressratio rootpool/rootfs/opt
NAME PROPERTY VALUE SOURCE
rootpool/rootfs/opt compressratio 1.66x
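
(For reference, compression on those datasets would have been enabled with 
something along these lines; the algorithm shown is an assumption, since only 
the ratios appear above:)

zfs set compression=on rootpool/rootfs/usr
zfs set compression=on rootpool/rootfs/var
zfs set compression=on rootpool/rootfs/opt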

My entire bootdisk only needs 2.5GB (entire distribution):
bash-3.00$ zfs list rootpool/rootfs
NAME USED AVAIL REFER MOUNTPOINT
rootpool/rootfs 2.58G 1.85G 351M legacy

To be able to roll back, you need to create another boot environment using 
snapshot and clone. I named the new zfs root filesystem rootpool/tx (I planned 
to install Solaris Trusted Extensions on the new boot environment).


bash-3.00$ zfs list -r rootpool/tx
NAME USED AVAIL REFER MOUNTPOINT
rootpool/tx 57.2M 1.85G 343M legacy
rootpool/tx/opt 30K 1.85G 230M legacy
rootpool/tx/usr 198K 1.85G 1.79G legacy
rootpool/tx/var 644K 1.85G 68.1M legacy

If you want to roll back, you need to boot into the clone BE and then roll back.
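
The snapshot-and-clone step looks roughly like this (a sketch using the 
rootpool/rootfs and rootpool/tx names above; the snapshot name is illustrative):

zfs snapshot -r rootpool/rootfs@be1
zfs clone rootpool/rootfs@be1 rootpool/tx
zfs clone rootpool/rootfs/usr@be1 rootpool/tx/usr
zfs clone rootpool/rootfs/var@be1 rootpool/tx/var
zfs clone rootpool/rootfs/opt@be1 rootpool/tx/opt
# set the clones to legacy mountpoints, add a grub entry whose bootfs points
# at rootpool/tx, then boot into the new BE before rolling back or destroying
# the old one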

Rgds,
Andre W.

Kugutsumen wrote:
Please do share how you managed to have a separate ZFS /usr since  
b64; there are dependencies to /usr and they are not documented.
-kv doesn't help either. I tried adding /usr/lib/libdisk* to a /usr/lib 
dir on the root partition and failed.


Jurgen also pointed that there are two related bugs already filed:

Bug ID   6570056
Synopsis/sbin/zpool should not link to files in /usr/lib
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6570056

Bug ID   6494840
Synopsislibzfs should dlopen libiscsitgt rather than linking to it
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494840

I can do a snapshot on bootroot too ... after I tried to do a  
rollback from failsafe I couldn't boot anymore, probably because  
there was no straightforward way to rebuild the boot archive.


Regarding compression, if I am not mistaken, grub cannot access files  
that are compressed.


Regards,
K.

On 05/10/2007, at 5:55 AM, Andre Wenas wrote:

  

Hi,

Using bootroot I can do a separate /usr filesystem since b64. I can 
also do snapshot, clone and compression.


Rgds,
Andre W.

Kugutsumen wrote:

Lori Alt told me that mountroot was a temporary hack until grub 
could boot zfs natively.
Since build 62, mountroot support was dropped and I am not  
convinced  that this is a mistake.


Let's compare the two:

Mountroot:

Pros:
   * can have root partition on raid-z: YES
   * can have root partition on zfs striped mirror: YES
   * can have usr partition on separate ZFS partition with build  
<  72 : YES

   * can snapshot and rollback root partition: YES
   * can use copies on root partition on a single root disk (e.g.  
a  laptop ): YES

   * can use compression on root partition: YES
Cons:
   * grub native support: NO (if you use raid-z or striped 
mirror,  you will need to have a small UFS partition
 to bootstrap the system, but you can use a small usb stick  
for  that purpose.)


New and "improved" *sigh* bootroot scheme:

Pros:
   * grub native support: YES
Cons:
   * can have root partition on raid-z: NO
   * can have root partition on zfs striped mirror: NO
   * can use copies on root partition on a single root disk (e.g.  
a  laptop ): NO
   * can have usr partition on separate ZFS partition with build  
<  72 : NO

   * can snapshot and rollback root partition: NO
   * can use compression on root partition: NO
   * No backward compatibility with zfs mountroot.

Why did we completely drop support for the old mountroot approach   
which is so m

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-04 Thread Andre Wenas
Hi,

Using bootroot I can do a separate /usr filesystem since b64. I can also 
do snapshot, clone and compression.

Rgds,
Andre W.

Kugutsumen wrote:
> Lori Alt told me that mountroot was a temporary hack until grub  
> could boot zfs natively.
> Since build 62, mountroot support was dropped and I am not convinced  
> that this is a mistake.
>
> Let's compare the two:
>
> Mountroot:
>
> Pros:
>* can have root partition on raid-z: YES
>* can have root partition on zfs striped mirror: YES
>* can have usr partition on separate ZFS partition with build <  
> 72 : YES
>* can snapshot and rollback root partition: YES
>* can use copies on root partition on a single root disk (e.g. a  
> laptop ): YES
>* can use compression on root partition: YES
> Cons:
>* grub native support: NO (if you use raid-z or striped mirror,  
> you will need to have a small UFS partition
>  to bootstrap the system, but you can use a small usb stick for  
> that purpose.)
>
> New and "improved" *sigh* bootroot scheme:
>
> Pros:
>* grub native support: YES
> Cons:
>* can have root partition on raid-z: NO
>* can have root partition on zfs striped mirror: NO
>* can use copies on root partition on a single root disk (e.g. a  
> laptop ): NO
>* can have usr partition on separate ZFS partition with build <  
> 72 : NO
>* can snapshot and rollback root partition: NO
>* can use compression on root partition: NO
>* No backward compatibility with zfs mountroot.
>
> Why did we completely drop support for the old mountroot approach  
> which is so much more flexible?
>
> Kugutsumen
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and thin provisioning

2007-09-19 Thread Andre Lue
Greets,

Is anyone here using ZFS as a thin-provisioned storage solution in production?

If so, could you kindly share your experiences and techniques (adding physical 
backing, etc.)?

thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Andre Wenas
I have tried TCP/IP over FC in the lab; the performance was no different 
compared to gigabit Ethernet.

-Original Message-
From: "Al Hopper" <[EMAIL PROTECTED]>
To: "Matt B" <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Sent: 8/26/2007 9:29 AM
Subject: Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

On Sat, 25 Aug 2007, Matt B wrote:

 snip 
> I still wonder if NFS could be used over the FC network in some way similar 
> to how NFS works over ethernet/tcp network

If you're running Qlogic FC HBAs, you can run a TCP/IP stack over the 
FC links.  That would allow NFS traffic over the FC connections.

I'm not necessarily recommending this as a solution - nor have I tried 
it myself.  Just letting you know that the possibility exists.

... snip ...

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot process - where from ZFS knows which pools/datasets should be mounted after OS reboot?

2007-07-26 Thread Andre Wenas
You need to specify your boot zfs pool in grub menu.lst:

# ZFS boot
title Solaris ZFS
root (hd0,3,d)
bootfs rootpool/rootfs
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

In this example, the bootfs is rootpool/rootfs. Grub will load the 
kernel & boot_archive from this pool.
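
(The pool itself can also record a default boot dataset via its bootfs 
property; a sketch using the same names, and if I recall correctly this is 
what $ZFS-BOOTFS falls back to when grub does not supply one:)

zpool set bootfs=rootpool/rootfs rootpool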

Rgds,
Andre W.

Robert Prus - Solution Architect, Systems Practice - Sun Poland wrote:
> Hi,
>
> I have a question concerning booting Solaris (SPARC/X64) with some ZFS 
> storage pools/datasets created and mounted.
>
> How does Solaris/ZFS know which storage pools/datasets should be 
> mounted after a reboot?
>
> ZFS is not using at all /etc/vfstab configuration file (I exclude here 
> case where we set mountpoint=legacy).
>
> My understanding is that the ZFS kernel modules write some information 
> about the mount state of storage pools/datasets to disk just before 
> they are unloaded. When Solaris boots, it loads the ZFS kernel modules. 
> These modules then check all disk devices in the /dev and /devices 
> directories for the possibility of performing a silent 'zpool import -a'.
>
> Could anybody confirm or deny the above statements?
>
> Greetings and thanks in advance,
>
> Robert
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any fix for zpool import kernel panic (reboot loop)?

2007-07-25 Thread Andre Wenas
Hi Rodney,

I have been using zfs root/boot for a few months without any problem. I can 
also import the pool from another environment.

Do you have a problem importing the zfs boot pool only, or can't you use zfs 
boot at all?

Rgds,
Andre W.

Rodney wrote:
> My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying 
> Opensolaris now has the zpool-related kernel panic reboot loop. 
>
> Booting into failsafe mode or another solaris installation and attempting:
>
> 'zpool import -F rootpool' results in a kernel panic and reboot.
>
> A search shows this type of kernel panic has been discussed on this forum 
> over the last year. Apparently zfs panicking on a failed write to a 
> non-redundant pool is a known issue to be addressed (although I'm not sure if 
> the panic I'm seeing is due to a failed write).
>
> Does anyone know about progress on this issue? I feel this should have more 
> prominence on the ZFS pages as a known bug. (Looks like Apple made a good 
> call in only going with read-only ZFS for Leopard; it's very, very promising 
> but not yet ready for prime time.)
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS webadmin

2007-07-02 Thread Andre Lue
Does anyone know the minimum packages needed to make the ZFS webadmin work?

So far it seems you need Java, SUNWjato, and Tomcat.
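
(For reference, once those are in place the web console itself is an SMF 
service; a sketch, assuming the standard Java Web Console setup on port 6789:)

svcadm enable svc:/system/webconsole:console
# then browse to https://<host>:6789/ and select the ZFS administration application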

thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] b64 zfs on boot ?

2007-05-28 Thread Andre Wenas

Yes.

Horvath wrote:

Is there zfs available in boot with b64 ?
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS and thin provisioning

2007-02-02 Thread Andre Lue
Thanks Darren! I got led down the wrong path by following newfs.

Now my other question is: how would you add raw storage to vtank (the virtual 
filesystem) as usage approaches the currently available underlying raw storage?

Going forward, would you just do it in the normal fashion (I will try this 
when I can add another disk):
zpool add vtank cXtXdX

Is there a performance hit for having what seems to be a ZFS filesystem on top 
of a zpool on top of a zpool?
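
(If, as this thread suggests, the article's approach is a pool built on sparse 
zvols, the setup plus the growth step would look roughly like this; pool, zvol, 
and device names are illustrative:)

zfs create -s -V 1T tank/vdisk0              # sparse (thin) volume, no space reserved
zpool create vtank /dev/zvol/dsk/tank/vdisk0
# later, as real usage approaches the physical backing:
zfs create -s -V 1T tank/vdisk1
zpool add vtank /dev/zvol/dsk/tank/vdisk1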
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS limits on zpool snapshots

2007-02-01 Thread Andre Lue
As far as I recall, the on-paper limit on the number of snapshots you can have 
in a filesystem is 2^48.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and thin provisioning

2007-02-01 Thread Andre Lue
I found this article (http://www.cuddletech.com/blog/pivot/entry.php?id=729) 
but I have 2 questions. I am trying the steps on OpenSolaris build 54.

Since you create the filesystem with newfs, isn't that really a UFS filesystem 
running on top of ZFS? Also, I haven't been able to do anything in the normal 
fashion (i.e. zfs and zpool commands) with the thin-provisioned filesystem. I 
can't even mount it or online it.

Was this just a demo of things to come, or is thin provisioning ready for 
testing in ZFS?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss