[zfs-discuss] zfsboot from USB stick

2008-04-28 Thread Bernd Schemmer
Hi,

Is it possible to boot Solaris from a USB stick via ZFS boot?

My efforts to boot MilaX via ZFS boot from a USB stick have not been successful 
so far (the kernel loads but panics with a message about not being able to 
mount root from /ramdisk:a), and before I continue I want to make sure that 
this can be done at all. Any pointers to docs about ZFS boot are welcome.
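
For context, the recipe I have pieced together so far is the usual ZFS boot 
setup; the pool and boot environment names below are placeholders, not my 
actual configuration:

  # zpool set bootfs=rpool/ROOT/snv_86 rpool
  # bootadm update-archive

with a GRUB menu.lst entry along the lines of:

  title ZFS boot
  bootfs rpool/ROOT/snv_86
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
  module$ /platform/i86pc/$ISADIR/boot_archive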

regards

Bernd
 
 


Re: [zfs-discuss] cp -r hanged copying a directory

2008-04-28 Thread Simon Breden
I don't like the sound of broken hardware :(

I did the 'cp -r dir1 dir2' again and when it hung I issued 'fmdump -e' as 
you suggested -- here is the output:

# fmdump -e
TIME CLASS
fmdump: /var/fm/fmd/errlog is empty
# 

I also checked /var/adm/messages and I didn't see anything in there either. So 
I don't know what else I can do to understand what's happening. Any ideas?
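
One thing I have not tried yet is grabbing the user-level stack of the hung cp 
to see what call it is blocked in; the PID below is illustrative:

  # pgrep -x cp
  1234
  # pstack 1234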
 
 


Re: [zfs-discuss] cp -r hanged copying a directory

2008-04-28 Thread Rob Logan
  I did the 'cp -r dir1 dir2' again and when it hung

When it's hung, can you run 'iostat -xce 1'
in another window -- is there a 100 in the %b column?
When you reset and try the cp again, and watch
'iostat -xce 1' on the second hang, is the same disk at 100 in %b?
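
As a rough illustration (numbers made up, cpu block from the -c flag omitted), 
a wedged disk tends to look like this: one command stuck active, %b pegged:

                    extended device statistics       ---- errors ---
  device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b s/w h/w trn tot
  sd3       0.0    0.0    0.0    0.0  0.0  1.0 60000.0   0 100   0   0   0   0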

If all your windows are hung, does your keyboard's Num Lock LED
track the Num Lock key?

Rob



Re: [zfs-discuss] zfs data corruption

2008-04-28 Thread eric kustarz

On Apr 27, 2008, at 4:39 PM, Carson Gaspar wrote:

 Ian Collins wrote:
 Carson Gaspar wrote:

 If this is possible, it's entirely undocumented... Actually, fmd's
 documentation is generally terrible. The sum total of configuration
 information is:

 FILES
  /etc/fm/fmd     Fault manager configuration directory

 Which is empty... It does look like I could write code to copy the
 output of fmdump -f somewhere useful if I had to.


 Have you tried man fmadm?

 http://onesearch.sun.com/search/docs/index.jsp?col=docs_en&locale=en&qt=fmadm&cs=false&st=11

 Brings up some useful information.

 man fmadm has:

 - nothing to do with configuration (the topic) (OK, it prints the
 config, whatever that means, but you can't _change_ anything)
 - no examples of usage

 I stand by my statement that the fault management docs need a lot of  
 help.

I found the fmadm manpage very unhelpful as well.  This CR is going to  
be fixed soon:
6679902 fmadm(1M) needs examples
http://bugs.opensolaris.org/view_bug.do?bug_id=6679902

If you have specifics, feel free to add to the CR.
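
In the meantime, the handful of invocations that cover most day-to-day use 
(all standard subcommands; the UUID is a placeholder):

  # fmadm config            (list loaded fmd modules and their versions)
  # fmadm faulty            (show resources fmd currently believes are faulty)
  # fmdump -v               (fault log, verbose)
  # fmdump -eV              (error log telemetry, full detail)
  # fmadm repair <uuid>     (tell fmd a faulty resource has been repaired)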

eric



[zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-28 Thread Dominic Kay
Hi

Firstly, apologies for the spam if you got this email via multiple aliases.

I'm trying to document a number of common scenarios where ZFS is used as
part of the solution (email server, home server, RDBMS, and so forth), but
taken from real implementations where things worked and, equally importantly,
which threw up things that needed to be avoided (even if that was the whole
of ZFS!).

I'm not looking to replace the Best Practices or Evil Tuning guides but to
take a slightly different slant.  If you have been involved in a ZFS
implementation small or large and would like to discuss it either in
confidence or as a referenceable case study that can be written up, I'd be
grateful if you'd make contact.

-- 
Dominic Kay
http://blogs.sun.com/dom


Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-28 Thread Vincent Fox
Cyrus mail stores for UC Davis are on ZFS.

It began as a failure and ended as a success.  We hit the FSYNC performance 
issue and our systems collapsed under user load.  We could not track it down, 
and neither could the Sun reps we contacted.  Eventually I found a reference 
to the FSYNC bug, and we tried out what was then an IDR patch for it.

Now I'd say it's a success: performance is good and very stable.
 
 


Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-28 Thread Bob Friesenhahn
On Mon, 28 Apr 2008, Dominic Kay wrote:

 I'm not looking to replace the Best Practices or Evil Tuning guides but to
 take a slightly different slant.  If you have been involved in a ZFS
 implementation small or large and would like to discuss it either in
 confidence or as a referenceable case study that can be written up, I'd be
 grateful if you'd make contact.

Back in February I set up ZFS on a 12-disk StorageTek 2540 array and 
documented my experience (at that time) in the white paper available at 
http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf

Since then I am still quite satisfied.  ZFS has yet to report a bad 
block or cause me any trouble at all.

The only complaint I would have is that 'cp -r' performance is less 
than would be expected given the raw bandwidth capacity.
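
For anyone who wants to reproduce the comparison, I simply time the copy and 
watch the pool while it runs; the pool name below is a placeholder:

  # ptime cp -r dir1 dir2
  # zpool iostat -v tank 5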

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] Metadata corrupted

2008-04-28 Thread Siegfried Nikolaivich
 Were you able to fix this problem in the end?

Unfortunately, no.  I believe Matthew Ahrens took a look at it and couldn't 
find the cause or how to fix it.  We had to destroy the pool and re-create it 
from scratch.

Fortunately, this was during our ZFS testing period, and no critically 
important data was lost, but I am still a bit shaken by the incident.  Since 
then we did eventually adopt ZFS, and it has been running well without further 
problems of this kind for over a year now.  This leads me to believe it was 
either a software bug, or a hardware failure that triggered a fatal condition 
in software that was not resilient to the error even in a redundant 
configuration.  I am sincerely hoping that this has been fixed, on purpose or 
by accident.

Cheers,
Siegfried
 
 


[zfs-discuss] recovering data from a detached mirrored vdev

2008-04-28 Thread Benjamin Brumaire
Hi,

my system (Solaris b77) was physically destroyed and I lost the data saved in 
a zpool mirror.  The only thing left is a detached vdev from the pool.  I'm 
aware that the uberblock is gone and that I can't import the pool, but I still 
hope there is a way, or a tool (like TCT, http://www.porcupine.org/forensics/), 
to recover at least some of the data.
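
For what it's worth, the only inspection I know how to do so far is dumping 
the vdev labels from the detached device with zdb, to see what metadata 
survives; the device path below is a placeholder:

  # zdb -l /dev/rdsk/c0t1d0s0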

thanks in advance for any hints.

bbr
 
 