Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Andrew Lange
This looks like a great config, but I'm trying to avoid having to run a 
separate SATA controller along with any BIOS issues it may have.  

Does anyone have experience with any of the GeForce 8200 boards mentioned 
earlier in this thread, such as the ASUS M3N78-EMH HDMI?  At about $90 with 6 SATA 
ports and decent onboard video, it would be a good option... assuming that 
it is supported.

Thanks!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panics solaris while switching a volume to read-only

2008-06-03 Thread Veltror
I am trying to get a commitment to get this fixed. If you have a server with a 
whole bunch of SAN-attached disks and then use the internal disks for some sort of 
temp space, the death of one of those el cheapo SAS disks takes down the whole 
lot. Not good. This problem is enough to prevent a rollout of ZFS in 
production. ZFS is great for local disks, not so great for SAN; sadly, all the great 
features such as snapshots and clones are being missed while this bug is still 
present. What is even more annoying is that this is being treated as an RFE.
 
 


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Thommy M.
Darryl wrote:
 This thread really messed me up, posts dont follow a chronological order...  
 so sorry for all the extra posts!

That's what you get when you don't use working tools like Usenet news.
NNTP forever!!!



Re: [zfs-discuss] ZFS Project Hardware

2008-06-03 Thread Volker A. Brandt
 Timely discussion. I too am trying to build a stable yet inexpensive storage
 server for my home lab
[...]

 Other options are that I build a whitebox or buy a new PowerEdge or Sun
 X2200 etc

If this is really just a lab storage server then an X2100M2 will be
enough.  Just get the minimum spec, buy two 3.5" SATA-II disks
(I guess the sweet spot is 750GB right now), and buy 8GB of third-party
memory to max out the box for ZFS.

Then set up a ZFS-rooted Nevada and you're in business.  Depending
on your requirements, you have slightly over 1.3TB capacity, or
about 690GB mirrored.
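For the two-disk layout described above, the pool would be created roughly like this (a sketch; the pool name and device names are hypothetical, so check yours with format first):

```shell
# Mirrored pool from the two 750GB disks: ~690GB usable, survives
# the loss of either disk.
zpool create tank mirror c1t0d0 c1t1d0

# Or striped for maximum space (~1.4TB, no redundancy):
# zpool create tank c1t0d0 c1t1d0
```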

I have just such a machine and am very happy.  I do run S10U5
on it since I need the box for other things, too.  So I don't have
ZFS root.


HTH -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH  WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt


[zfs-discuss] Cannot delete errored file

2008-06-03 Thread Ben Middleton
Hi,

I can't seem to delete a file in my zpool that has permanent errors:

zpool status -vx
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 2h10m with 1 errors on Tue Jun  3 11:36:49 2008
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

/export/duke/test/Acoustic/3466/88832/09 - Check.mp3


rm /export/duke/test/Acoustic/3466/88832/09 - Check.mp3

rm: cannot remove `/export/duke/test/Acoustic/3466/88832/09 - Check.mp3': I/O 
error

Each time I try to do anything to the file, the checksum error count goes up on 
the pool.

I also tried a mv and a cp over the top - but got the same I/O error.

I performed a 'zpool scrub rpool' followed by a 'zpool clear rpool' - but I still 
get the same error. Any ideas?

PS - I'm running snv_86, and use the sata driver on an Intel x86 box.

B
 
 


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Darryl
 Darryl wrote:
  This thread really messed me up, posts dont follow
 a chronological order...  so sorry for all the extra
 posts!
 
 That's what you get when you don't use working tools
 like usenet news.
 nntp for ever!!!
 

This is very true haha, and the page count on the forum page confuses me too; 
when I click the second page I still end up on the first page, haha..

I would prefer not to run a SATA controller either, but I figure I would run my 
boot drives off it; as I will not be reading/writing much from them, the speed 
should not be much of an issue (please correct me if I am wrong).

VAB: Did you have any issues with this board, or was everything detected? I 
believe I read in the HCL that the sound card was not detected...

Brandon: I like the board you advised (GIGABYTE GA-MA78GM-S2H), but it only 
has 5 SATA ports, so I would still need a controller (if I wanted to run my 
boot disk in RAID 1).  I got all excited too! I do believe the less hardware 
the better.  This one looked good 
(http://www.newegg.com/Product/Product.aspx?Item=N82E16813131273), but it wasn't 
on the supported list.  Being that I'm new to all this, I'm wary of taking 
chances on my troubleshooting abilities or my direction-following (and of 
bugging you guys).

Thanks to everyone again.
 
 


[zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Gordon Ross
I'm trying a new install, creating a solaris partition in free space
(p3) which follows Windows XP (p1) and Ubuntu Linux (p2).
I want ZFS root, but the interactive installer (choice 1) on the
snv_90 DVD does not offer me ZFS root. Does anyone know why,
or how to work around the issue?

When I did a netinstall using the same DVD (exported
from a server I had nearby) on a machine where I let
Solaris have the whole disk, it offered ZFS root.

I'd really like to know: What are the conditions
under which the installer will offer ZFS root?

Thanks,
Gordon
 
 


Re: [zfs-discuss] can anyone help me?

2008-06-03 Thread Hernan Freschi
No, it's a weird situation. I unplugged the disks from the controller (I have them 
labeled) before upgrading to snv_89. After the upgrade, the controller names 
changed.
 
 


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Mark J Musante
On Tue, 3 Jun 2008, Gordon Ross wrote:

 I'd really like to know: What are the conditions under which the 
 installer will offer ZFS root?

Only the text-based installer will offer it - not the GUI.


Regards,
markm


[zfs-discuss] Bad results from importing a pool on two machines at once

2008-06-03 Thread Chris Siebenmann
 As part of testing for our planned iSCSI + ZFS NFS server environment,
I wanted to see what would happen if I imported a ZFS pool on two
machines at once (as might happen someday in, for example, a failover
scenario gone horribly wrong).

 What I expected was something between a pool with damage and a pool
that was unrecoverable. What I appear to have got is a ZFS pool
that panics the system whenever you try to import it. The panic is a
'bad checksum (read on unknown off 0: ... [L0 packed nvlist]' error
from zfs:zfsctl_ops_root (I've put the whole thing at the end of this
message).

 I got this without doing very much to the dual-imported pool:
- import on both systems (-f'ing on one)
- read a large file a few times on both systems
- zpool export on one system
- zpool scrub on the other; system panics
- zpool import now panics either system

 One system was Solaris 10 U4 server with relatively current patches;
the other was Solaris 10 U5 with current patches.  (Both 64-bit x86.)

 What appears to be the same issue was reported back in April 2007 on
the mailing list, in the message
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/039238.html,
but I don't see any followups.

 Is this a known and filed bug? Is there any idea when it might be fixed
(or the fix appear in Solaris 10)?

 I have to say that I'm disappointed with ZFS's behavior here; I don't
expect a filesystem that claims to have all sorts of checksums and
survive all sorts of disk corruptions to *ever* panic because it doesn't
like the data on the disk. That is very definitely not 'surviving disk
corruption', especially since it seems to have happened to someone who
was not doing violence to their ZFS pools the way I was.

- cks
[The full panic:
Jun  3 11:05:14 sansol2 genunix: [ID 809409 kern.notice] ZFS: bad checksum 
(read on unknown off 0: zio 8e508340 [L0 packed nvlist] 4000L/600P 
DVA[0]=0:a8000c000:600 DVA[1]=0:1040003000:600 fletcher4 lzjb LE contiguous 
birth=119286 fill=1 
cksum=6e160f6970:632da4719324:3057ff16f69527:10e6e1af42eb9b10): error 50
Jun  3 11:05:14 sansol2 unix: [ID 10 kern.notice] 
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dac0 
zfs:zfsctl_ops_root+3003724c ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dad0 
zfs:zio_next_stage+65 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9db00 
zfs:zio_wait_for_children+49 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9db10 
zfs:zio_wait_children_done+15 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9db20 
zfs:zio_next_stage+65 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9db60 
zfs:zio_vdev_io_assess+84 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9db70 
zfs:zio_next_stage+65 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dbd0 
zfs:vdev_mirror_io_done+c1 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dbe0 
zfs:zio_vdev_io_done+14 ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dc60 
genunix:taskq_thread+bc ()
Jun  3 11:05:14 sansol2 genunix: [ID 655072 kern.notice] fe8000f9dc70 
unix:thread_start+8 ()
]


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Rich Teer
On Tue, 3 Jun 2008, Mark J Musante wrote:

 On Tue, 3 Jun 2008, Gordon Ross wrote:
 
  I'd really like to know: What are the conditions under which the 
  installer will offer ZFS root?
 
 Only the text-based installer will offer it - not the GUI.

Hmm, why's that?  Also, is there an ETA for when ZFS root will be
offered in the GUI installer?  (I recently installed Build 89 and
was surprised by the lack of ZFS pools after installation using
the GUI installer.)

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Volker A. Brandt
Hi Darryl!


 VAB: Did you have any  issues with this board, or was everything detected, i
 believe reading in the HCL that the sound card was not detected...

I did not have any issues.  Here's what prtconf says about the sound:

caiman:/home/volker,378 prtconf -vpc /dev/sound/0
pci1043,8249, instance #0
System software properties:
name='play-interrupts' type=int items=1
value=00af
name='record-interrupts' type=int items=1
value=00af
name='interrupt-priorities' type=int items=1
value=0009
Hardware properties:
name='acpi-namespace' type=string items=1
value='\_SB_.PCI0.AZAL'
name='assigned-addresses' type=int items=5
value=8300a210..fe02..4000
name='reg' type=int items=10

value=a200.....0300a210....4000
name='compatible' type=string items=7
value='pci1002,4383.1043.8249.0' + 'pci1002,4383.1043.8249' + 
'pci1043,8249' + 'pci1002,4383.0' + 'pci1002,4383' + 'pciclass,040300' + 
'pciclass,0403'
name='model' type=string items=1
value='Mixed Mode device'
name='power-cons


The Device Driver Utility says that audiohd is attached.  The audioplay
tool works out of the box with some ancient .au files I have.

However, I have not been able to convince either Rhythmbox or
Totem to play anything at all... they complain about missing codecs
etc.  Since in typical Gnome fashion they don't say anywhere how to
get codecs, I've given up on them.  :-(

For completeness, here's prtdiag:

System Configuration: System manufacturer System Product Name
BIOS Configuration: Phoenix Technologies, LTD ASUS M2A-VM ACPI BIOS Revision 
1705 03/28/2008

 Processor Sockets 

Version  Location Tag
 --
AMD Athlon(tm) 64 X2 Dual Core Processor 4400+ Socket AM2

 Memory Device Sockets 

TypeStatus Set Device Locator  Bank Locator
--- -- --- --- 
DDR2in use 0   A0  Bank0/1
DDR2in use 0   A1  Bank2/3
DDR2in use 0   A2  Bank4/5
DDR2in use 0   A3  Bank6/7

 On-Board Devices =

 Upgradeable Slots 

ID  StatusType Description
--- -  
1   available PCI  PCI1
2   available PCI  PCI2
3   available PCI Express  PCIEX16
4   available PCI Express  PCIEX1_1


HTH -- Volker


Re: [zfs-discuss] Shrinking a zpool?

2008-06-03 Thread Richard Elling
Nathan Galvin wrote:
 When on leased equipment and previously using VxVM we were able to migrate 
 even a lowly UFS filesystems from one storage array to another storage array 
 via the evacuate process.  I guess this makes us only the 3rd customer 
 waiting for this feature.
   

UFS cannot be shrunk, so clearly you were not using a shrink feature.
For ZFS, you can migrate from one storage device to another using the
'zpool replace' command.
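For example, migrating each disk in turn (a sketch; the pool and device names are hypothetical, and each replacement device must be at least as large as the one it replaces):

```shell
# Replace the old-array disk c1t0d0 with the new-array disk c2t0d0;
# ZFS resilvers the data onto the new device automatically.
zpool replace tank c1t0d0 c2t0d0

# Watch the resilver until it completes, then repeat for the next disk.
zpool status tank
```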
 -- richard



Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Brandon High
On Tue, Jun 3, 2008 at 9:38 AM, Rich Teer [EMAIL PROTECTED] wrote:
 Hmm, why's that?  Also, is there an ETA for when ZFS root will be
 offered in the GUI installer?  (I recently installed Build 89 and
 was surprised by the lack of ZFS pools after installation using
 the GUI installer.)

There was some discussion about it recently; I think the reason is
that the GUI installer for SXDE is not open-sourced, so it was more
difficult/political to add. The 2008.05 installer should be able to do
it when they sync up to b90 or beyond.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


[zfs-discuss] Not automatically importing ZFS pools at boot

2008-06-03 Thread Chris Siebenmann
 Is there any way to set ZFS on a system so that it will not
automatically import all of the ZFS pools it had active when it was last
running?

 The problem with automatic importation is preventing disasters in a
failover situation. Assume that you have a SAN environment with the same
disks visible to system A and system B. If system A loses power (or
otherwise goes down) with ZFS pools live, you 'zpool import -f' them on
system B to get them available again, and system A comes back up, system
A will happily import the pools too, despite them being in use on system
B.

(And then there are explosions. Bad explosions. You will probably lose
pools hard, per my previous email.)

- cks


Re: [zfs-discuss] Not automatically importing ZFS pools at boot

2008-06-03 Thread Eric Schrock
On Nevada, use the 'cachefile' property.  On S10 releases, use '-R /'
when creating/importing the pool.
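A sketch of both approaches (the pool and device names here are made up):

```shell
# Nevada: keep the pool out of the cache file so it is never
# auto-imported at boot; it must be imported explicitly instead.
zpool create -o cachefile=none sanpool c4t0d0
zpool import -o cachefile=none sanpool    # for an existing pool

# S10: a pool created/imported with an alternate root is likewise
# not cached, so it will not be imported automatically at boot.
zpool create -R / sanpool c4t0d0
```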

- Eric

On Tue, Jun 03, 2008 at 02:16:03PM -0400, Chris Siebenmann wrote:
  Is there any way to set ZFS on a system so that it will not
 automatically import all of the ZFS pools it had active when it was last
 running?
 
  The problem with automatic importation is preventing disasters in a
 failover situation. Assume that you have a SAN environment with the same
 disks visible to system A and system B. If system A loses power (or
 otherwise goes down) with ZFS pools live, you 'zpool import -f' them on
 system B to get them available again, and system A comes back up, system
 A will happily import the pools too, despite them being in use on system
 B.
 
 (And then there are explosions. Bad explosions. You will probably lose
 pools hard, per my previous email.)
 
   - cks

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock


Re: [zfs-discuss] Not automatically importing ZFS pools at boot

2008-06-03 Thread eric kustarz

On Jun 3, 2008, at 11:16 AM, Chris Siebenmann wrote:

 Is there any way to set ZFS on a system so that it will not
 automatically import all of the ZFS pools it had active when it was  
 last
 running?

 The problem with automatic importation is preventing disasters in a
 failover situation. Assume that you have a SAN environment with the  
 same
 disks visible to system A and system B. If system A loses power (or
 otherwise goes down) with ZFS pools live, you 'zpool import -f' them  
 on
 system B to get them available again, and system A comes back up,  
 system
 A will happily import the pools too, despite them being in use on  
 system
 B.

What you need is in Nevada and is coming to s10u6:
http://blogs.sun.com/erickustarz/en_US/entry/poor_man_s_cluster_end

eric


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Darryl
From what I can understand (prtconf is gibberish to me :)), the other system 
checks seem to recognize everything else, so all seems to be good with this 
board.

Thank you for confirming this!

PS. Not sure if this is OK, but I came across a deal I thought I should pass on.. 
 a 1 TB Samsung drive for $169.99 (1/household), ends today.

http://shop1.frys.com/category/Outpost/Hard+Drives++Memory/Hard+Drives/Internal+Bare+Drives?accipterAreaString=
 
 


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Richard Elling
Darryl wrote:
 From what i can understand (prtconf is gibberish to me :)), the other sys 
 checks seem to recognize everything else, so seems to be all good with this 
 board.

 Thank you for confirming this!

 ps.  Not sure if this ok, but i came across a deal, i though i should pass 
 on..  a 1 TB Samsung drive for 169.99 (1/household) ends today.

 http://shop1.frys.com/category/Outpost/Hard+Drives++Memory/Hard+Drives/Internal+Bare+Drives?accipterAreaString=
  
   
Last weekend's Fry's advertisement said limit two per customer, so
you might have better luck at the local store.

NB: this is the Samsung HD103UJ, which is a desktop model.  If you
can find an HE103UJ, then you get the enterprise-class model.  AFAICT,
the difference may just be warranty, bin, and firmware... but beware the 
bin!
http://www.samsung.com/global/business/hdd/productmodel.do?group=72type=61subtype=63model_cd=249ppmi=1155

 -- richard



Re: [zfs-discuss] Not automatically importing ZFS pools at boot

2008-06-03 Thread Chris Siebenmann
| On Nevada, use the 'cachefile' property.  On S10 releases, use '-R /'
| when creating/importing the pool.

 The drawback of '-R /' appears to be that it requires forcing the
import after a system reboot *all* the time (unless you explicitly
export the pool during reboot).

- cks


[zfs-discuss] disk names?

2008-06-03 Thread Benjamin Ellison
Sorry for the newbie-sounding post (well, I *am* a ZFS newb), but I've googled 
for a couple of hours now and have yet to find a clear or specific answer...

The ZFS documentation indicates that it is very easy to make storage pools, 
using commands such as:
zpool create tank raidz c1t0d0 c2t0d0 c3t0d0

My question:  Where/how in the heck does one get a list of which devices are 
valid targets?  I've got 4 SATA drives in a hot-swap backplane hooked to my 
motherboard...   Where do I look, or what command should I use, to see what I 
should put after the "zpool create [poolname] raidz" bit?   There are a ton of 
things listed under /dev -- no clue which ones are real and which are just 
there, if you get my drift.   

System details, if necessary:
Mobo: KINO 690S1 
System drive:  2.5" IDE with OpenSolaris 2008.05 installed.
SATA is SB600 (southbridge).  Case w/ hot-swap bays is Chenbro ES34690 (if I 
recall correctly).  BIOS is set to IDE-AHCI currently (options are IDE, 
Legacy, RAID, AHCI, and IDE-AHCI).

Thanks,
--Ben
 
 


Re: [zfs-discuss] disk names?

2008-06-03 Thread Bill McGonigle
On Jun 3, 2008, at 16:35, Benjamin Ellison wrote:

 I've got 4 SATA drives in a hot-swap backplane hooked to my  
 motherboard...   where do I look or what command should I use to  
 see what I should put after the zpool create [poolname] raidz bit?


See if 'cfgadm' gives you a good list.  You're about an hour behind  
me on the learning curve (building a Nexenta box here - ZFS found a  
problem disk that was causing trouble previously - sweet!).

-Bill


-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf



Re: [zfs-discuss] disk names?

2008-06-03 Thread Will Murnane
On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison [EMAIL PROTECTED] wrote:
 My question:  Where/how in the heck does one get a list of which devices are 
 valid targets?
Run 'format' and it'll list the devices that are available.  If you
hot-plug a drive, you may need to run 'devfsadm -c disk' first, to
make sure that entries in /dev are created for it.
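Putting those two steps together (run as root; a sketch of the discovery workflow):

```shell
# Create /dev entries for any newly hot-plugged disks.
devfsadm -c disk

# List the disks Solaris sees; the c#t#d# names printed here are
# exactly what 'zpool create' expects as vdev arguments.
format
```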

Will


Re: [zfs-discuss] disk names?

2008-06-03 Thread Benjamin Ellison
Thanks for the reply (Will's too - although I see it hasn't posted to the 
thread).

'format' gets me a list of just my system drive.

'devfsadm -c disk' just spins for a second, but doesn't change anything found 
under 'format'.

cfgadm shows the following:
Ap_Id  Type  Receptacle   Occupant Condition
sata0/0 sata-port   empty  unconfigured   ok
sata0/1 sata-port   empty  unconfigured   ok
sata0/2 sata-port   empty  unconfigured   ok
sata0/3 sata-port   empty  unconfigured   ok

Which looks somewhat promising -- from my untutored interpretation of that, it 
seems to know there should be 4 SATA devices attached.

Still -- nothing shows anything like c0t1d0 or similar.

I may open the case back up and see if hooking a drive or two directly to the 
SATA ports on the mobo (vs. the hot-swap backplane) makes any difference...

Thanks,
--Ben
 
 


Re: [zfs-discuss] disk names?

2008-06-03 Thread James C. McPherson
Will Murnane wrote:
 On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison [EMAIL PROTECTED] wrote:
 My question:  Where/how in the heck does one get a list of which devices are 
 valid targets?
 Run format and it'll list the devices that are available.  If you
 hot-plug a drive, you may need to run devfsadm -c disk first, to
 make sure that entries in /dev are created for it.

Some time ago I logged an RFE for this very issue

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6605832

6605832 zpool could tell me what disks are available for use


It's still unassigned, unfortunately.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-03 Thread Bill McGonigle
On Jun 3, 2008, at 16:50, Benjamin Ellison wrote:

 cfgadm shows the following:
 App_Id  Type  Receptacle   Occupant Condition
 sata0/0 sata-port   empty  unconfigured   ok
 sata0/1 sata-port   empty  unconfigured   ok
 sata0/2 sata-port   empty  unconfigured   ok
 sata0/3 sata-port   empty  unconfigured   ok

Just for the sake of comparison, mine looks like:

[EMAIL PROTECTED]:~$ cfgadm
Ap_Id  Type Receptacle   Occupant  
Condition
sata0/0::dsk/c1t0d0disk connectedconfigured   ok
sata0/1::dsk/c1t1d0disk connectedconfigured   ok
--snip---

-Bill


Re: [zfs-discuss] Pause Solaris with ZFS compression busy by doing a cp?

2008-06-03 Thread roland
Try running iostat in another ssh window, you'll see it can't even gather 
stats every 5 seconds (below is iostats every 5 seconds):
Tue May 27 09:26:41 2008
Tue May 27 09:26:57 2008
Tue May 27 09:27:34 2008

That should not happen!
I'd call that a bug!

How does vmstat behave with lzjb compression?
 
 


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-03 Thread Paulo Soeiro
Did the same test again and here is the result:

1)
zpool create myPool mirror c6t0d0p0 c7t0d0p0

2)
-bash-3.2# zfs create myPool/myfs
-bash-3.2# zpool status
  pool: myPool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        myPool        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c6t0d0p0  ONLINE       0     0     0
            c7t0d0p0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

3) Copy a file to /myPool/myfs:

ls -ltrh
total 369687
-rwxr-xr-x 1 root root 184M Jun 3 22:38 test.bin

4) Copy a second file:

cp test.bin test2.bin &

And shutdown. Startup.

5)
-bash-3.2# zpool status
  pool: myPool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        myPool        UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  insufficient replicas
            c6t0d0p0  UNAVAIL      0     0     0  cannot open
            c7t0d0p0  UNAVAIL      0     0     0  cannot open

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

6) Remove and re-attach the USB sticks:

zpool status
  pool: myPool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to
        continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        myPool        UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  insufficient replicas
            c6t0d0p0  FAULTED      0     0     0  corrupted data
            c7t0d0p0  FAULTED      0     0     0  corrupted data

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

errors: No known data errors

---

So it's not a hub problem; it seems to be a ZFS & USB storage problem. I
just hope ZFS works fine on hard disks, because it's not working on USB
sticks. It would be nice if somebody from Sun could fix this problem...



Thanks & Regards

Paulo


On Tue, Jun 3, 2008 at 8:19 PM, Paulo Soeiro [EMAIL PROTECTED] wrote:

 I'll try the same without the hub.

 Thanks & Regards
 Paulo




 On 6/2/08, Thommy M. [EMAIL PROTECTED] wrote:

 Paulo Soeiro wrote:
  Greetings,
 
  I was experimenting with zfs, and i made the following test, i shutdown
  the computer during a write operation
  in a mirrored usb storage filesystem.
 
  Here is my configuration
 
  NGS USB 2.0 Minihub 4
  3 USB Silicom Power Storage Pens 1 GB each
 
  These are the ports:
 
  hub devices
  /---\
  | port 2 | port  1  |
  | c10t0d0p0  | c9t0d0p0  |
  -
  | port 4 | port  4  |
  | c12t0d0p0  | c11t0d0p0|
  \/
 
  Here is the problem:
 
  1)First i create a mirror with port2 and port1 devices
 
  zpool create myPool mirror c10t0d0p0 c9t0d0p0
  -bash-3.2# zpool status
pool: myPool
   state: ONLINE
   scrub: none requested
  config:
 
  NAME   STATE READ WRITE CKSUM
  myPool ONLINE   0 0 0
mirror   ONLINE   0 0 0
  c10t0d0p0  ONLINE   0 0 0
  c9t0d0p0   ONLINE   0 0 0
 
  errors: No known data errors
 
pool: rpool
   state: ONLINE
   scrub: none requested
  config:
 
  NAMESTATE READ WRITE CKSUM
  rpool   ONLINE   0 0 0
c5t0d0s0  ONLINE   0 0 0
 
  errors: No known data errors
 
  2)zfs create myPool/myfs
 
  3)created a random file (file.txt - more or less 100MB size)
 
  digest -a md5 file.txt
  3f9d17531d6103ec75ba9762cb250b4c
 
  4)While making a second copy of the file:
 
  cp file.txt test &
 
  I've shutdown the computer while the file was being copied. And
  restarted the computer again. And here is the result:
 
 
  -bash-3.2# zpool status
pool: myPool
   state: UNAVAIL
  status: One or more devices could not be used because the label is
 missing
  or invalid.  There are insufficient replicas for the pool to
 continue
  functioning.
  action: Destroy and re-create the pool from a backup source.
 see: http://www.sun.com/msg/ZFS-8000-5E
   scrub: none requested
  config:
 
  NAME   STATE READ WRITE CKSUM
  myPool UNAVAIL  0 0 0  insufficient replicas
mirror   UNAVAIL  0 0 0  insufficient replicas
  c12t0d0p0  OFFLINE  0 0 0
  c9t0d0p0   FAULTED  0 0 0  corrupted data
 
pool: rpool
   state: ONLINE
   scrub: 

Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-03 Thread Paulo Soeiro
This test was done without the hub:

On Tue, Jun 3, 2008 at 11:33 PM, Paulo Soeiro [EMAIL PROTECTED] wrote:

 Did the same test again and here is the result:

 1)

 zpool create myPool mirror c6t0d0p0 c7t0d0p0

 2)

 -bash-3.2# zfs create myPool/myfs

 -bash-3.2# zpool status

 pool: myPool

 state: ONLINE

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 myPool ONLINE 0 0 0

 mirror ONLINE 0 0 0

 c6t0d0p0 ONLINE 0 0 0

 c7t0d0p0 ONLINE 0 0 0

 errors: No known data errors

 pool: rpool

 state: ONLINE

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 rpool ONLINE 0 0 0

 c5t0d0s0 ONLINE 0 0 0

 errors: No known data errors

 3)Copy a file to /myPool/myfs

 ls -ltrh

 total 369687

 -rwxr-xr-x 1 root root 184M Jun 3 22:38 test.bin

 4)Copy a second file

 cp test.bin test2.bin &

 And shutdown

 Startup

 5)

 -bash-3.2# zpool status

 pool: myPool

 state: UNAVAIL

 status: One or more devices could not be opened. There are insufficient

 replicas for the pool to continue functioning.

 action: Attach the missing device and online it using 'zpool online'.

 see: http://www.sun.com/msg/ZFS-8000-3C

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 myPool UNAVAIL 0 0 0 insufficient replicas

 mirror UNAVAIL 0 0 0 insufficient replicas

 c6t0d0p0 UNAVAIL 0 0 0 cannot open

 c7t0d0p0 UNAVAIL 0 0 0 cannot open

 pool: rpool

 state: ONLINE

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 rpool ONLINE 0 0 0

 c5t0d0s0 ONLINE 0 0 0

 errors: No known data errors

 6)Remove and attached the usb sticks:

 zpool status

 pool: myPool

 state: UNAVAIL

 status: One or more devices could not be used because the label is missing

 or invalid. There are insufficient replicas for the pool to continue

 functioning.

 action: Destroy and re-create the pool from a backup source.

 see: http://www.sun.com/msg/ZFS-8000-5E

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 myPool UNAVAIL 0 0 0 insufficient replicas

 mirror UNAVAIL 0 0 0 insufficient replicas

 c6t0d0p0 FAULTED 0 0 0 corrupted data

 c7t0d0p0 FAULTED 0 0 0 corrupted data

 pool: rpool

 state: ONLINE

 scrub: none requested

 config:

 NAME STATE READ WRITE CKSUM

 rpool ONLINE 0 0 0

 c5t0d0s0 ONLINE 0 0 0

 errors: No known data errors

 ---

 So it's not a hub problem, but it seems to be a ZFS & USB storage problem.
 I just hope ZFS works fine on hard disks, because it's not working on USB
 sticks. It would be nice if somebody from Sun could fix this problem...



 Thanks & Regards

 Paulo



Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-03 Thread Dave Miner
Richard Elling wrote:
 James C. McPherson wrote:
 Will Murnane wrote:
   
 On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison [EMAIL PROTECTED] wrote:
 
 My question:  Where/how in the heck does one get a list of which devices 
 are valid targets?
   
 Run format and it'll list the devices that are available.  If you
 hot-plug a drive, you may need to run devfsadm -c disk first, to
 make sure that entries in /dev are created for it.
 
 Some time ago I logged an RFE for this very issue

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6605832

 6605832 zpool could tell me what disks are available for use


 It's still unassigned, unfortunately.
   
 
 methinks this is a command which will be much more useful beyond
 the ZFS context.  In fact, it seems that the caiman project already has
 the code written to do this... just not externalized.  You might see if
 we could promote test_td.c into a useful sys-admin command.
 http://cvs.opensolaris.org/source/xref/caiman/snap_upgrade/usr/src/lib/libtd/test_td.c

Putting into the zpool command would feel odd to me, but I agree that 
there may be a useful utility here.

Dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-03 Thread Bill McGonigle
On Jun 3, 2008, at 18:34, Paulo Soeiro wrote:

 This test was done without the hub:

FWIW, I bought 9 microSDs and 9 USB controller units for them from
NewEgg to replicate the famous ZFS demo video, and I had problems
getting them working with OpenSolaris (on VMware on OS X, in this case).

After getting frustrated and thinking about it for a while, I decided
to test each MicroSD card and controller independently (using dd), and
one of the adapters turned out to be flaky at just writing zeros.
It also happened to be the #0 adapter, which threw me off for a
while, since that's where I started.  I was still having
problems (but I had tested the remaining units), so I went home for
the weekend and left them plugged into their hubs (i-rocks brand
seems OK so far), and came back to a system log full of a second
adapter dropping out several times over the weekend (though it
survived a quick dd).  Taking it off the hub, it did the same thing
for me if I waited long enough (10 minutes or so - I assume it was
getting warmed up).
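For anyone who wants to screen suspect sticks or adapters the same way, here is a minimal sketch of that kind of per-device dd check (every path below is an example, not from the thread; pointing DEV at a real device destroys its contents):

```shell
#!/bin/sh
# Write a known pattern to the device, read it back, and compare.
# DEV defaults to a scratch file so the script is harmless as-is; set it
# to a raw device node (e.g. /dev/rdsk/c6t0d0p0) whose contents are
# expendable. The raw (rdsk) node matters on Solaris: it keeps the
# read-back from being satisfied out of the page cache.
DEV=${DEV:-/tmp/fakedev.img}
dd if=/dev/urandom of=/tmp/pattern.bin bs=1048576 count=16 2>/dev/null
dd if=/tmp/pattern.bin of="$DEV" bs=1048576 2>/dev/null
dd if="$DEV" of=/tmp/readback.bin bs=1048576 count=16 2>/dev/null
if cmp -s /tmp/pattern.bin /tmp/readback.bin; then
    echo "device OK"
else
    echo "device FAULTY"
fi
```

Running it once per stick (and once per adapter, swapping cards between adapters) helps isolate whether the card or the controller is the flaky part.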

I've also had to replace a server mobo which had a faulty USB  
implementation (Compaq brand, one of the early USB2.0 chips).

Just food for thought - there's a lot to go wrong before ZFS sees it,
and USB gear isn't always well-made.

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] bug id 6462690, SYNC_NV issue

2008-06-03 Thread Bill Sprouse
I'm pretty sure that this bug is fixed in Solaris 10U5, patches 127127-11 and 
127128-11 (note: 6462690 sd driver should set SYNC_NV bit when issuing 
SYNCHRONIZE CACHE to SBC-2 devices). However, a test system with new 6140 
arrays still seems to be suffering from lots of cache flushes. This is verified 
by setting zfs_nocacheflush=1 and seeing a noticeable improvement in 
performance. Can someone verify that this fix is indeed in the above release 
and patch set?  If so, are there parameters that need to be set for this to be 
active, or are there firmware or other updates that need to be loaded into 
this new array?  Any help appreciated.
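For reference, the zfs_nocacheflush diagnostic mentioned above is normally set via /etc/system. It is a diagnostic aid only: it disables cache flushes globally and risks data loss on power failure, so it should not be left enabled in production.

```
* /etc/system fragment (diagnostic only; do not leave set in production)
set zfs:zfs_nocacheflush = 1
```

A reboot is needed for /etc/system changes to take effect; for a quick before/after comparison the variable can also be poked live with mdb -kw.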
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-03 Thread James C. McPherson
Richard Elling wrote:
 James C. McPherson wrote:
 Will Murnane wrote:
   
 On Tue, Jun 3, 2008 at 4:35 PM, Benjamin Ellison [EMAIL PROTECTED] wrote:
 
 My question:  Where/how in the heck does one get a list of which devices 
 are valid targets?
   
 Run format and it'll list the devices that are available.  If you
 hot-plug a drive, you may need to run devfsadm -c disk first, to
 make sure that entries in /dev are created for it.
 
 Some time ago I logged an RFE for this very issue

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6605832

 6605832 zpool could tell me what disks are available for use


 It's still unassigned, unfortunately.
   
 
 methinks this is a command which will be much more useful beyond
 the ZFS context.  In fact, it seems that the caiman project already has
 the code written to do this... just not externalized.  You might see if
 we could promote test_td.c into a useful sys-admin command.
 http://cvs.opensolaris.org/source/xref/caiman/snap_upgrade/usr/src/lib/libtd/test_td.c

Yes, that would be a very handy command to have around as a general
tool. Especially if it could say slice X is used by (mountpoint/...)
as well.
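Until such a command exists, the parsing half is easy to sketch in shell. The snippet below only extracts device names from captured `zpool status` text (the sample pool is illustrative); actually enumerating all disks would still need format/devfsadm or the libtd code referenced above.

```shell
#!/bin/sh
# Pull cXtYdZ (optionally sN/pN) device names out of `zpool status` text.
sample='  pool: rpool
 state: ONLINE
config:
        NAME        STATE READ WRITE CKSUM
        rpool       ONLINE   0 0 0
          c5t0d0s0  ONLINE   0 0 0'
echo "$sample" | awk '{for (i = 1; i <= NF; i++)
    if ($i ~ /^c[0-9]+t[0-9]+d[0-9]+([sp][0-9]+)?$/) print $i}' | sort -u
```

Diffing that list against what format reports would give a rough "disks not yet claimed by ZFS" report, though it wouldn't catch UFS or raw-database use of a slice.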


James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread Richard L. Hamilton
How about SPARC - can it do zfs install+root yet, or if not, when?
Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
have a mirrored pool where zfs owns the entire drives, if possible.
(I'd also eventually like to have multiple bootable zfs filesystems in
that pool, corresponding to multiple versions.)

Is/will all that be possible?  Would it be ok to pre-create the pool,
and if so, any particular requirements?

Currently running snv_81 on a Sun Blade 2000; the SAS/SATA controller
is an LSI Logic SAS 3800X 8-port, in the 66MHz slot.  I chose SAS drives
for the first two (of 8), trusting that SCSI support is probably more mature
and functional than SATA support, but the rest (as I'm willing to part with
the $$) will probably be SATA for price.  The current two SAS drives are Seagate
ST31000640SS (which I just used smartctl to confirm have SMART support,
including temperature reporting).  The enclosure is an Enhance E8-ML (no
enclosure services support).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-03 Thread A Darren Dunham
On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
 How about SPARC - can it do zfs install+root yet, or if not, when?
 Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
 have a mirrored pool where zfs owns the entire drives, if possible.
 (I'd also eventually like to have multiple bootable zfs filesystems in
 that pool, corresponding to multiple versions.)

Are they just under 1TB?  I don't believe there's any boot support in
Solaris for EFI labels, which would be required for 1TB+.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss