Re: [zfs-discuss] UFS over zvol major performance hit

2008-12-15 Thread Ahmed Kamal
Well, I checked and it is 8k
volblocksize  8K

Any other suggestions on how to begin debugging such an issue?
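
For reference, one way to dig further is to rebuild the test zvol with a
different volblocksize and rerun the benchmark. A rough sketch (the pool and
volume names are made-up examples, not from this setup):

zfs get volblocksize tank/ufsvol
zfs create -V 10G -o volblocksize=128k tank/ufsvol128k
newfs /dev/zvol/rdsk/tank/ufsvol128k
mount /dev/zvol/dsk/tank/ufsvol128k /mnt/ufstest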



On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Mon, 15 Dec 2008, Ahmed Kamal wrote:


 RandomWrite-8k:  0.9M/s
 SingleStreamWriteDirect1m: 5.8M/s   (??)
 MultiStreamWrite1m:  33M/s
 MultiStreamWriteDirect1m: 11M/s

 Obviously, there's a major hit. Can someone please shed some light as to why
 this is happening? If more info is required, I'd be happy to test some more.
 This is all running on the osol 2008.11 release.


 What blocksize did you specify when creating the zvol?  Perhaps UFS will
 perform best if the zvol blocksize is similar to the UFS blocksize.  For
 example, try testing with a zvol blocksize of 8k.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Recovering Zpool

2008-12-15 Thread Nathan Hand
I have moved the zpool image file to an OpenSolaris machine running 101b.

r...@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris

Here I am able to attempt an import of the pool and at least the OS does not 
panic.

r...@opensolaris:~# zpool import -d /mnt
  pool: zones
id: 17407806223688303760
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

zones   ONLINE
  /mnt/zpool.zones  ONLINE

But it hangs forever when I actually attempt the import.

r...@opensolaris:~# zpool import -d /mnt -f zones
never returns

The thread associated with the import is stuck on txg_wait_synced.

r...@opensolaris:~# echo "0t757::pid2proc|::walk thread|::findstack -v" | mdb -k
stack pointer for thread d6dcc800: d51bdc44
  d51bdc74 swtch+0x195()
  d51bdc84 cv_wait+0x53(d62ef1e6, d62ef1a8, d51bdcc4, fa15f9e1)
  d51bdcc4 txg_wait_synced+0x90(d62ef040, 0, 0, 2)
  d51bdd34 spa_load+0xd0b(d6c1f080, da5dccd8, 2, 1)
  d51bdd84 spa_import_common+0xbd()
  d51bddb4 spa_import+0x18(d6c8f000, da5dccd8, 0, fa187dac)
  d51bdde4 zfs_ioc_pool_import+0xcd(d6c8f000, 0, 0)
  d51bde14 zfsdev_ioctl+0xe0()
  d51bde44 cdev_ioctl+0x31(2d8, 5a02, 8042450, 13, da532b28, d51bdf00)
  d51bde74 spec_ioctl+0x6b(d6dbfc80, 5a02, 8042450, 13, da532b28, d51bdf00)
  d51bdec4 fop_ioctl+0x49(d6dbfc80, 5a02, 8042450, 13, da532b28, d51bdf00)
  d51bdf84 ioctl+0x171()
  d51bdfac sys_call+0x10c()
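
For reference, 0t757 is simply the decimal pid of the hung zpool process; a
sketch of how one would find it and repeat the same inspection:

pgrep -l zpool     # shows the pid of the stuck 'zpool import' (757 here)
echo "0t757::pid2proc | ::walk thread | ::findstack -v" | mdb -k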

There is a corresponding thread stuck on zio_wait.

d5e50de0 fec1dad80   0  60 d5cb76c8
  PC: _resume_from_idle+0xb1    THREAD: txg_sync_thread()
  stack pointer for thread d5e50de0: d5e50a58
swtch+0x195()
cv_wait+0x53()
zio_wait+0x55()
dbuf_read+0x1fd()
dbuf_will_dirty+0x30()
dmu_write+0xd7()
space_map_sync+0x304()
metaslab_sync+0x284()
vdev_sync+0xc6()
spa_sync+0x35c()
txg_sync_thread+0x295()
thread_start+8()

I see from another discussion on zfs-discuss that Victor Latushkin helped Erik 
Gulliksson recover from a similar situation by using a specially patched zfs 
module. Would it be possible for me to get that same module?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] WBEM Services monitoring of ZFS file systems

2008-12-15 Thread Adriana Weiss
I am currently developing an agent that will monitor Solaris machines. We are 
using wbem services to access the system information.
When I query for the Solaris_LocalFileSystem the only systems it displays are 
UFS and HSFS. (I am using cimworkshop to look at this information)
I need to know if there is a way to look at the ZFS file systems using wbem 
services. 
I was wondering if someone knows if the wbem services in Solaris were upgraded 
to support ZFS file systems.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Invalidating Uberblock

2008-12-15 Thread Nathan Hand
I don't know if this is relevant or merely a coincidence but the zdb command 
fails an assertion in the same txg_wait_synced function.

r...@opensolaris:~# zdb -p /mnt -e zones 
Assertion failed: tx->tx_threads == 2, file ../../../uts/common/fs/zfs/txg.c, 
line 423, function txg_wait_synced
Abort (core dumped)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Ross Smith
Forgive me for not understanding the details, but couldn't you also
work backwards through the blocks with ZFS and attempt to recreate the
uberblock?

So if you lost the uberblock, could you (memory and time allowing)
start scanning the disk, looking for orphan blocks that aren't
referenced anywhere else and piece together the top of the tree?

Or roll back to a previous uberblock (or a snapshot uberblock), and
then look to see what blocks are on the disk but not referenced
anywhere.  Is there any way to intelligently work out where those
blocks would be linked by looking at how they interact with the known
data?

Of course, rolling back to a previous uberblock would still be a
massive step forward, and something I think would do much to improve
the perception of ZFS as a tool to reliably store data.

You cannot overstate the difference to the end user between a file
system that on boot says:
"Sorry, can't read your data pool."

and one that says:
"Whoops, the uberblock and all the backups are borked.  Would you
like to roll back to a backup uberblock, or leave the filesystem
offline to repair manually?"

As much as anything else, a simple statement explaining *why* a pool
is inaccessible, and saying just how badly things have gone wrong
helps tons.  Being able to recover anything after that is just the
icing on the cake, especially if it can be done automatically.

Ross

PS.  Sorry for the duplicate Casper, I forgot to cc the list.



On Mon, Dec 15, 2008 at 10:30 AM,  casper@sun.com wrote:

I think the problem for me is not that there's a risk of data loss if
a pool becomes corrupt, but that there are no recovery tools
available.  With UFS, people expect that if the worst happens, fsck
will be able to recover their data in most cases.

 Except, of course, that fsck lies.  It fixes the meta data and the
 quality of the rest is unknown.

 Anyone using UFS knows that UFS file corruption is common; specifically,
 when using a UFS root and the system panics when trying to
 install a device driver, there's a good chance that some files in
 /etc are corrupt. Some were application problems (some code used
 fsync(fileno(fp)); fclose(fp); it doesn't guarantee anything)


With ZFS you have no such tools, yet Victor has on at least two occasions
shown that it's quite possible to recover pools that were completely unusable
(I believe by making use of old / backup copies of the uberblock).

 True; and certainly ZFS should be able to backtrack.  But it's
 much more likely to happen automatically than by using a recovery
 tool.

 See, fsck could only be written because specific corruptions are known,
 along with the patterns they take.   With ZFS, you can only back up to
 a certain uberblock and the pattern will be a surprise.

 Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Recovering Zpool

2008-12-15 Thread Kees Nuyt
On Mon, 15 Dec 2008 06:12:19 PST, Nathan Hand wrote:

 I have moved the zpool image file to an 
 OpenSolaris machine running 101b.

 r...@opensolaris:~# uname -a
 SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris

 Here I am able to attempt an import of the pool and at
 least the OS does not panic.

[snip]

But it hangs forever when I actually attempt the import.

The failmode is a property of the pool.
PROPERTY   EDIT  VALUES
failmode   YES   wait | continue | panic

In OpenSolaris 101b it defaults to wait, and that's what
seems to be happening now. Perhaps you can change it to
continue (and keep your fingers crossed)?
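
Concretely, and reusing the pool and path names from the earlier post, that
would be something along the lines of:

zpool import -d /mnt -o failmode=continue -f zones
# or, once a pool is already imported:
zpool set failmode=continue zones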
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS snapshot create/deletion event notification

2008-12-15 Thread Erwann Chenede
Hi All,

Is there a way to get event notification for zfs filesystems and 
snapshot creation/deletion ?
I looked at HAL and event ports but couldn't find anything.

   Does such a feature exist for zfs ?

  Thanks in advance,

Erwann

-- 
  Erwann Chénedé,
 Desktop Group, Sun Microsystems, Grenoble

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Casper . Dik

I think the problem for me is not that there's a risk of data loss if
a pool becomes corrupt, but that there are no recovery tools
available.  With UFS, people expect that if the worst happens, fsck
will be able to recover their data in most cases.

Except, of course, that fsck lies.  It fixes the meta data and the
quality of the rest is unknown.

Anyone using UFS knows that UFS file corruption is common; specifically,
when using a UFS root and the system panics when trying to
install a device driver, there's a good chance that some files in
/etc are corrupt. Some were application problems (some code used
fsync(fileno(fp)); fclose(fp); it doesn't guarantee anything)


With ZFS you have no such tools, yet Victor has on at least two occasions
shown that it's quite possible to recover pools that were completely unusable
(I believe by making use of old / backup copies of the uberblock).

True; and certainly ZFS should be able to backtrack.  But it's
much more likely to happen automatically than by using a recovery
tool.

See, fsck could only be written because specific corruptions are known,
along with the patterns they take.   With ZFS, you can only back up to
a certain uberblock and the pattern will be a surprise.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-15 Thread Niall Power
Hi all,

A  while back, I posted here about the issues ZFS has with USB hotplugging
of ZFS formatted media when we were trying to plan an external media backup
solution for time-slider:
http://www.opensolaris.org/jive/thread.jspa?messageID=299501

As well as the USB issues mentioned in the subject, we became aware of some
serious issues with the ZFS send/recv stream format's backwards and forwards
compatibility. This was enough incentive to make us think about other potential
solutions. One of my colleagues in Xdesign (Frank Ludolph) came up with the
suggestion of using a ZFS mirror of the root pool as the mechanism to back up
to an external device. This seemed like a very nice idea. It uses functionality
that already exists in ZFS and works reliably, and allows time-slider to expose
yet another great feature of ZFS to the desktop/laptop user:

   * It provides a full, reliable backup that is maintained in sync with the
     main storage device
   * No user interaction required. Just plug the disk in and pull it out when
     you like. ZFS will resynchronize it ASAP
   * Provides a bootable backup mechanism. The mirror is bootable and the main
     disk can be fully replaced and/or recovered from it.
   * Errors on the main disk can be corrected from the external device and
     vice versa.
   * Simplified UI - user doesn't have to configure backup schedules etc.
   * Resynchronisation is always optimal because zfs handles it directly rather
     than some external program that can't optimise as effectively
   * It would permit network backup configurations if the mirror device were
     an iSCSI target. And the recent putback of:
         PSARC/2008/427 iSCSI Boot
         PSARC/2008/640 iSCSI With DHCP
     should enable booting over the network
 
So I set off about testing this out and the initial results have been very 
promising. 
To configure the mirror I added a new device as described here:
http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
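
For anyone following along, that procedure boils down to roughly the following
(the disk names are examples only, and installgrub is the x86 step; sparc
systems would use installboot instead):

zpool attach rpool c5t0d0s0 c7t0d0s0      # attach the external disk as a mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t0d0s0
zpool status rpool                        # watch the resilver complete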

After a couple of weeks testing and trying to break the mirroring it has
continued to work reliably despite leaving my USB mirror device detached for
days on end, adding gigabytes of files and deleting hundreds of snapshots while
detached. ZFS took it all in its stride and resynced the mirror device when I
reattached it. It also withstood my attempts to break it by disconnecting the
USB mirror in the middle of a resilver.

There are a few minor issues however which I'd love to get some feedback on,
in addition to the overall direction of this proposal:

1. When the external device is disconnected, the zpool status output reports
   that the pool is in a degraded state and displays a status message that
   indicates that there was an unrecoverable error. While this is all
   technically correct, and is appropriate in the context of a setup where it
   is assumed that the mirrored device is always connected, it might lead a
   user to be unnecessarily alarmed when his backup mirror disk is not
   connected. We're trying to use a mirror configuration here in a manner that
   is a bit different from the conventional one, but not in any way that it's
   not designed to cope with.

2. When reattaching the external mirror device, hald and the removable media
   manager try to mount the mirror device and pop up an error message when it
   fails to mount the pool. This is an annoyance that needs a bug logged
   against hald, I believe. ZFS does the right thing and starts resilvering to
   bring the 2 mirror devices in sync.

3. The zpool status output is not very good about estimating resilver
   completion times. Resilvers that were estimated to take 23 hours by zpool
   status have completed in 8 hours. At least it wasn't the other way
   around :-)

So I'd like to ask if this is an appropriate use of ZFS mirror functionality?
It has many benefits that we really should take advantage of. It would seem a
real shame not to. Secondly, I'd like to address issue 1 above. There doesn't
appear to be any fundamental, underlying problem here. It's just that the
messages reported by zpool status might cause unnecessary alarm to the user.
In the context that we plan to use a mirror there is nothing actually wrong,
because we expect the mirror device to get detached and reattached frequently,
which is not the normal situation for a mirror pool.
How about defining a new subtype of mirror device, such as a backup-mirror or
removable-mirror? All the mirror functionality remains the same, but the
status messages are a bit less panicky when the mirror device is disconnected,
since it's expected to happen frequently.

Time-slider is getting a lot of coverage and positive feedback from reviewers 
and bloggers because it
allows ordinary users to take advantage of ZFS features in a very simple and 
beneficial way. I
think this proposal presents another opportunity to bring the benefits of ZFS 
to the desktop.
Remember that we're aiming at 

[zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Brad Hudson
Does anyone know of a way to specify the creation of ZFS file systems for a ZFS 
root pool during a JumpStart installation?  For example, creating the following 
during the install:

Filesystem   Mountpoint
rpool/var /var
rpool/var/tmp /var/tmp
rpool/home /home

The creation of separate filesystems allows the use of quotas/reservations via 
ZFS, whereas these are not created/protected during a JumpStart install with 
ZFS root.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: Re: [indiana-discuss] build 100 image-update: cannot boot to previous BEs]

2008-12-15 Thread Sebastien Roy
On Sat, 2008-12-13 at 12:18 -0500, Sebastien Roy wrote:
 I sent the following to indiana-disc...@opensolaris.org, but perhaps
 someone here can get to the bottom of this.  Why must zfs trash my
 system so often with this hostid nonsense?  How do I recover from this
 situation?  (I have no OpenSolaris boot CD with me at the moment, so
 zpool import while booted off of the CD isn't an option)

The problem turned out not to be related to the hostid; it was because I
had done a zpool upgrade while booted into ON build > 103 (bringing
the zpool version to 14).  The zpool version is newer than what is
supported in OpenSolaris, so none of my BEs can boot.

Perhaps it would be nice if zpool upgrade could say something like,
"Upgrading the pool will cause other boot environments using this pool
to no longer boot.  Are you sure you want to do this?"
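
Until something like that exists, a quick manual check before upgrading is
easy enough; a sketch (rpool is just an example pool name):

zpool get version rpool   # on-disk version of the pool
zpool upgrade -v          # versions supported by the running build
zpool upgrade             # lists pools below the latest version, changes nothing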

-Seb


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Ross
I think the problem for me is not that there's a risk of data loss if a pool 
becomes corrupt, but that there are no recovery tools available.  With UFS, 
people expect that if the worst happens, fsck will be able to recover their 
data in most cases.

With ZFS you have no such tools, yet Victor has on at least two occasions shown 
that it's quite possible to recover pools that were completely unusable (I 
believe by making use of old / backup copies of the uberblock).

My concern is that ZFS has all this information on disk, it has the ability to 
know exactly what is and isn't corrupted, and it should (at least for a system 
with snapshots) have many, many potential uberblocks to try.  It should be far, 
far better than UFS at recovering from these things, but for a certain class of 
faults, when it hits a problem it just stops dead.

That's what frustrates me - knowing that there's potential to have all my data 
there, stored safely away, but having it completely inaccessible due to a lack 
of recovery tools.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and aging

2008-12-15 Thread Thanos McAtos
Hello all.

I'm doing a course project to evaluate recovery time of RAID-Z.

One of my tests is to examine the impact of aging on recovery speed.

I've used PostMark to stress the file-system but I didn't observe any 
noticeable slowdown.

Is there a better way to age a ZFS file-system?

Does ZFS have aging issues at all?

Thanx in advance.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Peter Weil
On Mon, Dec 15, 2008 at 6:06 PM, Brad Hudson brad.hud...@wachovia.com wrote:
 Does anyone know of a way to specify the creation of ZFS file systems for a 
 ZFS root pool during a JumpStart installation?  For example, creating the 
 following during the install:

 Filesystem   Mountpoint
 rpool/var /var
 rpool/var/tmp /var/tmp
 rpool/home /home

 The creation of separate filesystems allows the use of quotas/reservations 
 via ZFS, whereas these are not created/protected during a JumpStart install 
 with ZFS root.

Hi!

For /var you could/should add

bootenv installbe bename your_name dataset /var

to the profile.
Other fs can be created through the finish script. (just add zfs
create rpool/home ...)
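
A rough profile fragment for illustration (the pool layout, sizes, and BE name
are placeholders, not a recommendation):

install_type  initial_install
pool          rpool auto auto auto c0t0d0s0
bootenv       installbe bename myBE dataset /var

# and in the finish script:
zfs create -o mountpoint=/export/home rpool/home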

Best regards

Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Bob Friesenhahn
On Mon, 15 Dec 2008, Ross wrote:

 My concern is that ZFS has all this information on disk, it has the 
 ability to know exactly what is and isn't corrupted, and it should 
 (at least for a system with snapshots) have many, many potential 
 uberblocks to try.  It should be far, far better than UFS at 
 recovering from these things, but for a certain class of faults, 
 when it hits a problem it just stops dead.

While ZFS knows if a data block is retrieved correctly from disk, a 
correctly retrieved data block does not indicate that the pool isn't 
corrupted.  A block written in the wrong order is a form of 
corruption.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Vincent Fox
Just put commands to create them in the finish script.

I create several and set options on them like so:

### Create ZFS additional filesystems
echo setting up additional filesystems.
zfs create -o compression=on -o mountpoint=/ucd rpool/ucd
zfs create -o compression=on -o mountpoint=/local/d01 rpool/d01
zfs create -o mountpoint=/var/cache/openafs rpool/afs-cache
zfs set compression=on rpool/afs-cache
zfs set quota=2g rpool/afs-cache
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Ross Smith
I'm not sure I follow how that can happen, I thought ZFS writes were
designed to be atomic?  They either commit properly on disk or they
don't?


On Mon, Dec 15, 2008 at 6:34 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Mon, 15 Dec 2008, Ross wrote:

 My concern is that ZFS has all this information on disk, it has the
 ability to know exactly what is and isn't corrupted, and it should (at least
 for a system with snapshots) have many, many potential uberblocks to try.
  It should be far, far better than UFS at recovering from these things, but
 for a certain class of faults, when it hits a problem it just stops dead.

 While ZFS knows if a data block is retrieved correctly from disk, a
 correctly retrieved data block does not indicate that the pool isn't
 corrupted.  A block written in the wrong order is a form of corruption.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Bob Friesenhahn
On Mon, 15 Dec 2008, Ross Smith wrote:

 I'm not sure I follow how that can happen, I thought ZFS writes were
 designed to be atomic?  They either commit properly on disk or they
 don't?

Yes, this is true.  One reason why people complain about corrupted ZFS 
pools is because they have hardware which writes data in a different 
order than what was requested.  Some hardware claims to have written 
the data but instead it has been secretly cached for later (or perhaps 
for never) and data blocks get written in some other order.  It seems 
that ZFS is capable of working reliably with cheap hardware but not 
with wrongly designed hardware.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Nicolas Williams
On Mon, Dec 15, 2008 at 01:36:46PM -0600, Bob Friesenhahn wrote:
 On Mon, 15 Dec 2008, Ross Smith wrote:
 
  I'm not sure I follow how that can happen, I thought ZFS writes were
  designed to be atomic?  They either commit properly on disk or they
  don't?
 
 Yes, this is true.  One reason why people complain about corrupted ZFS 
 pools is because they have hardware which writes data in a different 
 order than what was requested.  Some hardware claims to have written 
 the data but instead it has been secretly cached for later (or perhaps 
 for never) and data blocks get written in some other order.  It seems 
 that ZFS is capable of working reliably with cheap hardware but not 
 with wrongly designed hardware.

Order of writes matters between transactions, not inside transactions,
and at the boundary is a cache flush.  Thus what matters really isn't
write order as much as whether the devices lie about cache flushes.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Recovering Zpool

2008-12-15 Thread Nathan Hand
Thanks for the reply. I tried the following:

$ zpool import -o failmode=continue -d /mnt -f zones

But the situation did not improve. It still hangs on the import.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Brad Hudson
Thanks for the response Peter.  However, I'm not looking to create a different 
boot environment (bootenv).  I'm actually looking for a way within JumpStart to 
separate out the ZFS filesystems from a new installation to have better control 
over quotas and reservations for applications that usually run rampant later.  
In particular, I would like better control over the following (e.g. the ability 
to explicitly create them at install time):

rpool/opt - /opt
rpool/usr - /usr
rpool/var - /var
rpool/home - /home

Of the above /home can easily be created post-install, but the others need to 
have the flexibility of being explicitly called out in the JumpStart profile 
from the initial install to provide better ZFS accounting/controls.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import broke my system

2008-12-15 Thread Robert Buick
Over the weekend, I installed opensolaris 2008.11 onto a removable harddisk on 
my laptop (having previously removed the zfs bootable snv_103 imaged 
harddrive). - side note: opensolaris could do with a companion cd as it takes a 
long, long time to install the useful/essential software from the repository 
:-( .
 
Having installed opensolaris 2008.11 I thought I might try to mount the zfs 
bootable snv_103 drive and copy all the data over (i.e. in a similar way to 
when I moved from ufs to zfs boot, a while back).
 
I struggled to mount the old disk, and as I had to do some 'proper work', I 
removed the opensolaris 2008.11 harddisk and put back the snv_103 harddisk.

All worked fine, as before. 
 
When I later found some time, I tried to mount the opensolaris 2008.11 
harddrive (now in a usb caddy) from the booted snv_103 drive, by entering: 

zpool import -f rpool
 
It complained about stuff already being mounted, but sure enough I could still 
see the original pool 'rootpool' of snv_103, the pool 'rpool' of the opensolaris 
2008.11 image got some sort of mention as well when I did 'zfs list', and I 
could still see my familiar snv_103 home directory.
 
A reboot soon changed all of that :-(
 
On booting it comes up with:
.
.
.
Reading ZFS config: done
Mounting ZFS filesystems: (1/6)cannot mount '/' : directory is not empty
(6/6)
svc:/system/filesystem/local/:default: WARNING: /usr/sbin/zfs mount -a
failed exit status 1
 
I get a login prompt and can login ok.

Typing 'zpool status' returns
pool: rootpool
state: ONLINE
scrub: none requested
config:

NAME       STATE    READ  WRITE  CKSUM
rootpool   ONLINE      0      0      0
  c0d0s0   ONLINE      0      0      0

errors: No known data errors
 
There is no mention of opensolaris pool 'rpool' (attached on usb drive).

'zfs list' gives:
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rootpool               78.6G   191G    37K  /
rootpool/ROOT          18.7G   191G    18K  /ROOT
rootpool/ROOT/zfs_103  3.81G   191G  11.3G  /.alt.tmp.b-iR.mnt/
rootpool/ROOT/zfs_104  14.9G   191G  9.34G  /
rootpool/dump          3.25G   191G  3.25G  -
rootpool/home          50.3G   191G  41.9G  /export/home
rootpool/swap          3.43G   191G   105M  -
rootpool/u01           2.84G   191G  2.78G  /u01
 
Also swapping the opensolaris 2008.11 harddisk into the laptop drive bay no
longer boots up opensolaris 2008.11 either.
 
How do I get back to a bootable system, e.g. what zfs or
zpool commands will get me out of this hole - before I dig myself in
deeper?

Thanks for your time.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Richard Elling
These issues are discussed on the install-discuss forum.  You'll have
better luck getting to the right audience there.
http://www.opensolaris.org/jive/forum.jspa?forumID=107

Also see the various design docs in the install community.
http://www.opensolaris.org/os/community/install
 -- richard

Brad Hudson wrote:
 Thanks for the response Peter.  However, I'm not looking to create a 
 different boot environment (bootenv).  I'm actually looking for a way within 
 JumpStart to separate out the ZFS filesystems from a new installation to have 
 better control over quotas and reservations for applications that usually run 
 rampant later.  In particular, I would like better control over the following 
 (e.g. the ability to explicitly create them at install time):

 rpool/opt - /opt
 rpool/usr - /usr
 rpool/var - /var
 rpool/home - /home

 Of the above /home can easily be created post-install, but the others need to 
 have the flexibility of being explicitly called out in the JumpStart profile 
 from the initial install to provide better ZFS accounting/controls.
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Miles Nordin
 nw == Nicolas Williams nicolas.willi...@sun.com writes:

nw Your thesis is that all corruption problems observed with ZFS
nw on SANs are: a) phantom writes that never reached the rotating
nw rust, b) not bit rot, corruption in the I/O paths, ...
nw Correct?

yeah.  

by ``all'' I mean the several single-LUN pools that were recovered by
using an older set of ueberblocks.  Of course I don't mean ``all'' as
in all pools imaginable including this one 10 years ago on an unnamed
Major Vendor's RAID shelf that gave you a scar just above the ankle.

But it is really sounding so far like just one major problem with
single-LUN ZFS's on SAN's?  or am I wrong, there are lots of pools
which can't be recovered with old ueberblocks?

Remember the problem is losing pools.  It is not, ``for weeks I kept
losing files.  I would get errors reported in 'zpool status', and it
would tell me the filename 'blah' has uncorrectable errors.  This went
on for a while, then one day we lost the whole pool.''  I've heard
zero reports like that.

nw Some of the earlier problems of type (2) were triggered by
nw checksum verification failures on pools with no redundancy,

but checksum failures aren't caused just by bitrot in ZFS.  I get
hundreds of them after half of my iSCSI mirror bounces because of the
incomplete-resilvering bug.  

I don't know the on-disk format well, but maybe the checksum was wrong
because the label pointed to a block that wasn't an ueberblock.  Maybe
the checksum is functioning in leiu of a commit sector: maybe all four
ueberblocks were written incompletely because there is some bug or
missing-workaround in the way ZFS flushes and schedules the ueberblock
writes, so with some written sectors and some unwritten sectors the
overall block checksum is wrong.

Maybe this is a downside to the filesystem-level checksum.  For
integrity it's an upside, but the netapp block-level checksum, where
you checksum just the data plus the block-number at RAID layer, should
narrow down checksum failures to disk bit flips only and thus be
better for tracking down problems and building statistics comparable
with other systems.  We already know the 'zpool status' CKSUM column
isn't so selective, and can catch out-of-date data too.

The overall point, what I'd rather have as my ``thesis,'' is you can't
allow ZFS to exonerate itself with an error message.  Losing the
whole pool in a situation where UFS would (or _might_ have - it is not
even proven beyond doubt that it _would_) have corrupted a bit of data,
isn't an advantage just because ZFS can printf a warning that says
``loss of entire pool detected.  must be corruption outside ZFS!''


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-15 Thread Marion Hakanson
richard.ell...@sun.com said:
 L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
 not back-ported to Solaris 10u6.

You sure?  Here's output on a Solaris-10u6 machine:

cyclops 4959# uname -a
SunOS cyclops 5.10 Generic_137138-09 i86pc i386 i86pc
cyclops 4960# zpool upgrade -v
This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
For more information on a particular version, including supported releases, 
see:
http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
cyclops 4961#


Note, I haven't tried adding a cache device yet.
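
If someone does want to try it, the obvious test would be something like the
following (the device name is an example, and whether the command is accepted
on Solaris 10 is exactly the open question here):

zpool add tank cache c2t5d0
zpool iostat -v tank 5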

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-15 Thread Volker A. Brandt
 A  while back, I posted here about the issues ZFS has with USB hotplugging
 of ZFS formatted media when we were trying to plan an external media backup
 solution for time-slider:
 http://www.opensolaris.org/jive/thread.jspa?messageID=299501
[...]

 There are a few minor issues however which I'd love to get some feedback on 
 in addition
 to the overall direction of this proposal:

 1. When the external device is disconnected, the zpool status output reports 
 that the
 pool is in a degraded state and displays a status message that indicates 
 that there
 was an unrecoverable error. While this is all technically correct, and is 
 appropriate
 in the context of a setup where it is assumed that the mirrored device is 
 always
 connected, it might lead a user to be unnecessarily alarmed when his 
 backup mirror
 disk is not connected. We're trying to use a mirror configuration here in 
 a manner that
 is a bit different than the conventional manner, but not in any way that 
 it's not designed
 to cope with.
[...]

 So I'd like to ask if this is an appropriate use of ZFS mirror functionality? 
 It has many benefits
 that we really should take advantage of.

Yes, by all means.  I am doing something very similar on my T1000, but
I have two separate one-disk pools and copy to the backup pool using
rsync.  I would very much like to replace this with automatic resilvering.

One prerequisite for wide adoption would be to fix the issue #1 you
described above.  I would advise not to integrate this anywhere before
fixing that degraded display.

BTW is this USB-specific?  While it seems to imply that, you don't state
it anywhere explicitly.  I attach my backup disk via eSATA, power it up,
import the pool, etc.  Not really hotplugging...


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Miles Nordin
 bc == Bryan Cantrill b...@eng.sun.com writes:
 jz == Joseph Zhou j...@excelsioritsolutions.com writes:

bc most of the people I talk to are actually _using_ NetApp's
bc technology, a practice that tends to leave even the most
bc stalwart proponents realistic about the (many) limitations of
bc NetApp's

same applies to ZFS pundits!

As Tim said, the one-filesystem-per-user thing is not working out.
O(1) for number of filesystems would be great but isn't there.

Maybe the format allows unlimited O(1) snapshots, but it's at best
O(1) to take them.  All over the place it's probably O(n) or worse to
_have_ them.  to boot with them, to scrub with them.

I think the winning snapshot architecture is more like source code
revision control: take infinitely-granular snapshots, a continuous
line, and run a cron service to trim the line into a series of points.

The management can be delegated, but inspection commands are not safe
and can lock the whole filesystem, and 'zfs recv'ing certain streams
panics the whole box so backup cannot really be safely delegated either.
The panic-on-import problems are bad for delegation because you can't
safely let users mount things, which to my view is where delegated
administration begins.  It's too unstable to think of delegating
anything---it's all just UI baloney until the panics are fixed and
failures are contained within one pool.

The scalability to multiple cores goals are admirable, but only
certain things are parallelized.  You can only replace one device at a
time, which some day will not be enough to keep up with natural
failure rates.  I think 'zfs send' does not use multiple cores well,
right?  AIUI people are getting non-scaling performance in send/recv
while the ordinary filesystem performance does scale, and thus getting
painted into a corner.

Yeah there's compression, but as Tim said people are getting more
savings from dedup, which goes naturally with writeable clones too.
Also the NetApp dedup is a background thread while the ZFS compression
is synchronous with writing.  as well as not scaling to multiple cores
and seeming to have some bugs in the gzip version.

Yeah there is some hierarchical storage in it, but after half a year
still a slog cannot be removed?

In general I think ZFS pundits compliment the architecture and not the
implementation.

The big compliment I have for it is just that the ZFS piece is free
software, even though large chunks of OpenSolaris aren't.  That's a
gigantic advantage, especially over NetApp, which probably has about
as much long-term future as Lisp.

jz As a friend, and trusting your personal integrity, I ask you,
jz please, don't get mad, enjoy the open discussion.

Joseph, I don't see the problem and think it's fine to get excited so long
as actual information comes out.  There's nothing ad-hominem in the
discussion yet, and being ordered not to get mad will make any normal
person furious, especially if you make the order based on ``trust''
and ``personal integrity''---why bring up such things at all?  I
almost feel like you're baiting them!  I know it's normal for
sysadmins to be dry and menial, but it's still a technical discussion,
so I hope it doesn't upset anyone because it's not boring.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS filesystem creation during JumpStart

2008-12-15 Thread Kyle McDonald
Brad Hudson wrote:
 Thanks for the response Peter.  However, I'm not looking to create a 
 different boot environment (bootenv).  I'm actually looking for a way within 
 JumpStart to separate out the ZFS filesystems from a new installation to have 
 better control over quotas and reservations for applications that usually run 
 rampant later.  In particular, I would like better control over the following 
 (e.g. the ability to explicitly create them at install time):

   
Whether you want a bootenv or not, that command (and syntax) is the only 
way to specify to jumpstart to both use ZFS instead of UFS, and to 
customize how it's installed (its option to split out /var is, 
unfortunately, the only FS split available at the moment.)

You're not the first to lament over this fact, but I wouldn't hold your 
breath for any improvements, since JumpStart is not really being 
actively improved any longer. Sun is instead focusing on its replacement 
'AI', which is currently being developed and used on OpenSolaris, and I 
believe is intended to replace JS on Sun Solaris at some undefined time 
in the future.

At the moment I don't believe that AI has the features you're looking 
for either - it has quite a few other differences from JS too, so if you 
think you'll use it, you should keep tabs on the project pages and 
mailing lists.
 rpool/opt - /opt
 rpool/usr - /usr
 rpool/var - /var
 rpool/home - /home

 Of the above /home can easily be created post-install, but the others need to 
 have the flexibility of being explicitly called out in the JumpStart profile 
 from the initial install to provide better ZFS accounting/controls.
   
It's not hard to create /opt, or /var/xyz ZFS filesystems, and move 
files into them during post install, or first boot even, then move the 
originals aside and set the zfs mountpoints to where the originals were. This 
even gives you the advantage of enabling compression (since all the data 
will be rewritten and thus compressed.) /usr is harder. It might not be 
impossible in a finish script, but it is probably much harder in a first-boot 
script.
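
For illustration only, the first-boot variant for /opt might look roughly like
this (dataset and path names are invented, and the copy should be verified
before removing the originals):

zfs create -o compression=on -o mountpoint=/var/.opt.new rpool/opt
(cd /opt && find . -print | cpio -pdum /var/.opt.new)
mv /opt /opt.orig && mkdir /opt
zfs set mountpoint=/opt rpool/opt
# remove /opt.orig only after checking that the new /opt is complete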

All that said, if you're planning on using live upgrade (or snap upgrade 
on OS) after installation is done, I'm not sure if they'll just 'Do the 
right thing' (or even work at all) with these other filesystems as they 
clone and upgrade the new BE's. My bet would be no.


   -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Invalidating Uberblock

2008-12-15 Thread Nathan Hand
I've had some success.

I started with the ZFS on-disk format PDF.

http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf

The uberblocks all have magic value 0x00bab10c. Used od -x to find that value 
in the vdev.

r...@opensolaris:~# od -A x -x /mnt/zpool.zones | grep "b10c 00ba"
020000 b10c 00ba 0000 0000 0004 0000 0000 0000
020400 b10c 00ba 0000 0000 0004 0000 0000 0000
020800 b10c 00ba 0000 0000 0004 0000 0000 0000
020c00 b10c 00ba 0000 0000 0004 0000 0000 0000
021000 b10c 00ba 0000 0000 0004 0000 0000 0000
021400 b10c 00ba 0000 0000 0004 0000 0000 0000
021800 b10c 00ba 0000 0000 0004 0000 0000 0000
021c00 b10c 00ba 0000 0000 0004 0000 0000 0000
022000 b10c 00ba 0000 0000 0004 0000 0000 0000
022400 b10c 00ba 0000 0000 0004 0000 0000 0000
...

So the uberblock array begins 128kB into the vdev and there's an uberblock 
every 1kb.
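
As an aside, the four labels (and the uberblock arrays inside them) can also be
dumped directly from the image file, which is a quicker way to locate the same
structures:

zdb -l /mnt/zpool.zones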

To identify the active uberblock I used zdb.

r...@kestrel:/opt$ zdb -U -uuuv zones
Uberblock
magic = 00bab10c
version = 4
txg = 1504158 (= 0x16F39E) 
guid_sum = 10365405068077835008 = (0x8FD950FDBBD02300)
timestamp = 1229142108 UTC = Sat Dec 13 15:21:48 2008 = (0x4943385C)
rootbp = [L0 DMU objset] 400L/200P DVA[0]=0:52e3edc00:200 
DVA[1]=0:6f9c1d600:200 DVA[2]=0:16e280400:200 fletcher4 lzjb LE contiguous 
birth=1504158 fill=172 cksum=b0a5275f3:474e0ed6469:e993ed9bee4d:205661fa1d4016

I spy those hex values at the uberblock starting 027800.

027800 b10c 00ba 0000 0000 0004 0000 0000 0000
027810 f39e 0016 0000 0000 2300 bbd0 50fd 8fd9
027820 385c 4943 0000 0000 0001 0000 0000 0000
027830 1f6e 0297 0000 0000 0001 0000 0000 0000
027840 e0eb 037c 0000 0000 0001 0000 0000 0000
027850 1402 00b7 0000 0000 0001 0000 0703 800b
027860 0000 0000 0000 0000 0000 0000 0000 0000
027870 0000 0000 0000 0000 f39e 0016 0000 0000
027880 00ac 0000 0000 0000 75f3 0a52 000b 0000
027890 6469 e0ed 0474 0000 ee4d ed9b e993 0000
0278a0 4016 fa1d 5661 0020 0000 0000 0000 0000
0278b0 0000 0000 0000 0000 0000 0000 0000 0000

Breaking it down

* the first 8 bytes are the magic uberblock number (b10c 00ba 0000 0000)
* the second 8 bytes are the version number (0004 0000 0000 0000)
* the third 8 bytes are the transaction group a.k.a txg (f39e 0016 0000 0000)
* the fourth 8 bytes are the guid sum (2300 bbd0 50fd 8fd9)
* the fifth 8 bytes are the timestamp (385c 4943 0000 0000)

The remainder of the bytes are the blkptr structure and I'll ignore them.

Those values match the active uberblock exactly, so I know this is the on-disk 
location of the first active uberblock.

Scanning further I find an exact duplicate 256kB later in the device.

067800 b10c 00ba 0000 0000 0004 0000 0000 0000
067810 f39e 0016 0000 0000 2300 bbd0 50fd 8fd9
067820 385c 4943 0000 0000 0001 0000 0000 0000
067830 1f6e 0297 0000 0000 0001 0000 0000 0000
067840 e0eb 037c 0000 0000 0001 0000 0000 0000
067850 1402 00b7 0000 0000 0001 0000 0703 800b
067860 0000 0000 0000 0000 0000 0000 0000 0000
067870 0000 0000 0000 0000 f39e 0016 0000 0000
067880 00ac 0000 0000 0000 75f3 0a52 000b 0000
067890 6469 e0ed 0474 0000 ee4d ed9b e993 0000
0678a0 4016 fa1d 5661 0020 0000 0000 0000 0000
0678b0 0000 0000 0000 0000 0000 0000 0000 0000

I know ZPOOL keeps four copies of the label; two at the front and two at the 
back, each 256kB in size.

r...@opensolaris:~# ls -l /mnt/zpool.zones 
-rw-r--r-- 1 root root 42949672960 Dec 15 04:49 /mnt/zpool.zones

That's 0xA00000000 = 42949672960 = 41943040kB. If I subtract 512kB I should see 
the third and fourth labels.
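
The same arithmetic in shell, for the record:

echo $(( 42949672960 / 1024 - 512 ))    # 41942528, the skip= value used below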

r...@opensolaris:~# dd if=/mnt/zpool.zones bs=1k skip=41942528 | od -A x -x | 
grep "385c 4943 0000 0000"
027820 385c 4943 0000 0000 0001 0000 0000 0000
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 0.0577013 s, 9.1 MB/s
r...@opensolaris:~# 

Oddly enough I see the third uberblock at 0x27800 but the fourth uberblock at 
0x67800 is missing. Perhaps corrupted?

No matter. I now work out the exact offsets to the three valid uberblocks and 
confirm I'm looking at the right uberblocks.

r...@opensolaris:~# dd if=/mnt/zpool.zones bs=1k skip=158 | od -A x -x | head -3
000000 b10c 00ba 0000 0000 0004 0000 0000 0000
000010 f39e 0016 0000 0000 2300 bbd0 50fd 8fd9
000020 385c 4943 0000 0000 0001 0000 0000 0000
r...@opensolaris:~# dd if=/mnt/zpool.zones bs=1k skip=414 | od -A x -x | head -3
000000 b10c 00ba 0000 0000 0004 0000 0000 0000
000010 f39e 0016 0000 0000 2300 bbd0 50fd 8fd9
000020 385c 4943 0000 0000 0001 0000 0000 0000
r...@opensolaris:~# dd if=/mnt/zpool.zones bs=1k skip=41942686 | od -A x -x | 
head -3
000000 b10c 00ba 0000 0000 0004 0000 0000 0000
000010 f39e 0016 0000 0000 2300 bbd0 50fd 8fd9
000020 385c 4943 0000 0000 0001 0000 0000 0000

They all have the same timestamp. I'm looking at the correct uberblocks. Now I 
intentionally harm them.

r...@opensolaris:/mnt# dd if=/dev/zero of=/mnt/zpool.zones bs=1k seek=158 
count=1 conv=notrunc
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000315229 s, 3.2 MB/s

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Nicolas Williams
On Mon, Dec 15, 2008 at 05:04:03PM -0500, Miles Nordin wrote:
 As Tim said, the one-filesystem-per-user thing is not working out.

For NFSv3 clients that truncate MOUNT protocol answers (and v4 clients
that still rely on the MOUNT protocol), yes, one-filesystem-per-user is
a problem.  For NFSv4 clients that support mirror mounts it's not a
problem at all.  You're not required to go with one-filesystem-per-user
though!  That's only if you want to approximate quotas.

 O(1) for number of filesystems would be great but isn't there.

It is O(1) for filesystems (parts of the system could be parallelized
more, but the on-disk data format is O(1) for filesystem creation and
mounting, just like it is for snapshots and clones).

 Maybe the format allows unlimited O(1) snapshots, but it's at best
 O(1) to take them.  All over the place it's probably O(n) or worse to
 _have_ them.  to boot with them, to scrub with them.

It's NOT O(N) to boot because of snapshots, nor to scrub.  Scrub and
resilver are O(N) where N is the amount used (as opposed to O(N) where N
is the size of the volume, for HW RAID and the like).

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Invalidating Uberblock

2008-12-15 Thread Kees Nuyt
On Mon, 15 Dec 2008 14:23:37 PST, Nathan Hand wrote:

[snip]

 Initial inspection of the filesystems are promising.
 I can read from files, there are no panics, 
 everything seems to be intact.

Good work, congratulations, and thanks for the clear
description of the process. I hope I never need it.
Now one wonders why zfs doesn't have a rescue like that
built-in...
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-15 Thread Toby Thain

 Maybe the format allows unlimited O(1) snapshots, but it's at best
 O(1) to take them.  All over the place it's probably O(n) or worse to
 _have_ them.  to boot with them, to scrub with them.

Why would a scrub be O(n snapshots)?

The O(n filesystems) effects reported from time to time in  
OpenSolaris seem due to code that iterates over them. The new ability  
to create huge numbers of them puts stress on assumptions valid in  
more traditional UNIX configurations, right?

--Toby
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] future PCIe x4 SATA/eSATA card products

2008-12-15 Thread SV
I emailed one of the more popular low-cost PCI card vendors and asked them 
about [maybe 4/8 port???], PCIe x4 [or more] cards in their product roadmap. 
They replied positively that they were working on a PCIe x4 card, with both 
internal and eSATA options. They said it's cooking in the lab, and to wait until 
the R&D folks get it all together... but I feel it is fair to say we will have 
more options in 2009.

Since I have a mobo with PCIe, and couldn't justify the cost of [a PCI-X, 
Opteron based] motherboard, I'm really excited about this.

I told the vendor specifically to check out our postings here on 
opensolaris.org, and encouraged them to make sure the chipset works in JBOD mode 
on OpenSolaris x86/x64. We'll see.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-15 Thread Cyril Plisko

 The following versions are supported:

 VER  DESCRIPTION
 ---  
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history
  5   Compression using the gzip algorithm
  6   bootfs pool property
  7   Separate intent log devices
  8   Delegated administration
  9   refquota and refreservation properties
  10  Cache devices
 For more information on a particular version, including supported releases,
 see:
 http://www.opensolaris.org/os/community/zfs/version/N


Check http://www.opensolaris.org/os/community/zfs/version/10 - the
fact that L2ARC is absent from Solaris 10 is noted there.

-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-15 Thread James C. McPherson
On Tue, 16 Dec 2008 16:42:02 +1100
Nathan Kroenert nathan.kroen...@sun.com wrote:

 asking the question I know will be the next one on the list...
 
 So - will it be arriving in a patch? :)

no - we need a hook to get customers to use whatever
we package NV up as. Or buy fishworks kit :-)


James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-15 Thread James C. McPherson
On Tue, 16 Dec 2008 16:19:05 +1000
James C. McPherson james.mcpher...@sun.com wrote:

 On Tue, 16 Dec 2008 16:42:02 +1100
 Nathan Kroenert nathan.kroen...@sun.com wrote:
 
  asking the question I know will be the next one on the list...
  
  So - will it be arriving in a patch? :)
 
 no - we need a hook to get customers to use whatever
 we package NV up as. Or buy fishworks kit :-)

Ahem... 

Or you could install Solaris 10 Update 6, or apply
patch 137137-01 (sparc) or patch 137138-01.


The relevant CR number is 6536054, if you're wondering.


I'd still prefer to see people buy fishworks kit :-)


James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss