[zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Peter van Gemert
Hi There,

You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out 
which hardware is supported by Solaris 10.

Greetings,
Peter
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Erik Trimble
Generally, I've found the way to go is to get a 4-port SATA PCI 
controller (something based on the Silicon Image stuff seems to be 
cheap, common, and supported), and then plunk it into any old PC you can 
find (or get off of eBay).


The major caveat here is that I'd recommend trying to find a PC which 
has a 64-bit processor, something like an AMD Sempron64 or Intel Celeron 
D 331 (or similar).  Running Solaris in 64-bit mode makes things so much 
simpler (and usually faster) than 32-bit mode.
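A quick way to confirm which mode the kernel actually booted into (a hedged sketch: `isainfo` is the standard Solaris command for this, and the guard just keeps the snippet harmless on non-Solaris systems):

```shell
# Report the running kernel's address width (32 or 64) on Solaris.
if command -v isainfo >/dev/null 2>&1; then
    mode=$(isainfo -b)    # prints 32 or 64
else
    mode="unknown (isainfo not found; not a Solaris host)"
fi
echo "kernel address width: $mode"
```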


Avoid like the plague any of the on-board RAID solutions. At best, you 
can use the SATA ports as normal ports. In many cases, they're just useless.


-Erik




Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Al Hopper

Followup - if you also want to use the machine as a workstation:

Graphics card (PCI Express): Pick an Nvidia-based board to take advantage
of the excellent Solaris native driver [0].  The 7600GS has a great
price/performance ratio.  This ref [1] also mentions the 7600GT - although
I'm (almost) sure you won't be interested in volt-modding them.

[0] http://www.nvidia.com/object/solaris_display_1.0-8774.html
[1] 
http://www.xbitlabs.com/articles/video/display/geforce7600gs-voltmodding.html


Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 11, 2006, at 10:10 AM, [EMAIL PROTECTED] wrote:

So are there any PCI-E SATA cards that are supported? I was hoping
to go with a Sempron64. Using old PCI seems like a waste.


Yes.

I wrote up a little review of the SIIG SC-SAE412-S1 card, which is a
two-port PCIe card based on the Silicon Image 3132 chip:


http://elektronkind.org/2006/09/siig-esata-ii-pcie-card-and-opensolaris

The card is a two-port eSATA II card, but SIIG also sells a two-port
internal SATA card based on the same chip.


This card is running fine under SX:CR build 47 and would presumably  
also run fine under Solaris 10 Update 2 or later.


/dale



Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Al Hopper
On Wed, 11 Oct 2006, Dana H. Myers wrote:

 Al Hopper wrote:

  Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512MB
  sticks for a starter, cost-effective system.  4*512MB for a good long-term
  solution.

 Due to fan-out considerations, every BIOS I've seen will run DDR400
 memory at 333MHz when connected to more than 1 DIMM-per-channel (I
 believe at AMD's urging).

Really!?  That's surprising.  Is there a way to verify that on an Ultra20
running Solaris 06/06?

Now you've gone & done it Dana - you've aroused my curiosity!  :)

 In other words, you might save a few dollars using DDR333 for 4 x 512MB
 if you're not going to run 2 x 1GB (which is the preferred approach).

 Dana


Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dana H. Myers
Al Hopper wrote:
 On Wed, 11 Oct 2006, Dana H. Myers wrote:
 
 Al Hopper wrote:

 Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512MB
 sticks for a starter, cost-effective system.  4*512MB for a good long-term
 solution.
 Due to fan-out considerations, every BIOS I've seen will run DDR400
 memory at 333MHz when connected to more than 1 DIMM-per-channel (I
 believe at AMD's urging).
 
 Really!?  That's surprising.  Is there a way to verify that on an Ultra20
 running Solaris 06/06?

Have a look at the BIOS set-up screen; see what speed it's running
your DDR at.  It may make a difference whether you have single-sided
vs. double-sided DIMMs.  It's not an OS issue, it's a hardware issue
handled by the BIOS.

 Now you've gone & done it Dana - you've aroused my curiosity!  :)

My apologies ;-)

Dana


[zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

I have a zfs pool on a USB hard drive attached to my system.
I had unplugged it and when I reconnect it, zpool import does
not see the pool.

# cd /dev/dsk
# fstyp  c3t0d0s0
zfs

When I truss zpool import, it looks everywhere (seemingly) *but*
c3t0d0s0 for the pool...

The relevant portion...

stat64("/dev/dsk/c3t0d0s1", 0x08043150)  = 0
open64("/dev/dsk/c3t0d0s1", O_RDONLY)    Err#5 EIO
stat64("/dev/dsk/c1t0d0p3", 0x08043150)  = 0
open64("/dev/dsk/c1t0d0p3", O_RDONLY)    Err#16 EBUSY

This is Nevada B49, BFUed to B50 and then BFUed to
10/9/2006 nightly. I have been seeing this behavior for a while
so I don't think it is the result of a very recent change...

Thoughts?
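One thing that may be worth trying (a hedged sketch, not a confirmed fix: `-d` is a real `zpool import` option that points the scan at an explicit device directory, and the guard just keeps the snippet harmless on non-Solaris hosts):

```shell
# Ask zpool to scan a device directory explicitly for importable pools.
# 'zpool import' without a pool name is a read-only scan; it only lists
# what it finds and changes nothing.
if command -v zpool >/dev/null 2>&1; then
    result=$(zpool import -d /dev/dsk 2>&1)
else
    result="zpool not available on this system"
fi
echo "$result"
```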

Jim Litchfield



Re: [zfs-discuss] A versioning FS

2006-10-11 Thread Joerg Schilling
Nicolas Williams [EMAIL PROTECTED] wrote:

 On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
  Nicolas Williams [EMAIL PROTECTED] wrote:
  
   You're arguing for treating FV as extended/named attributes :)
  
   I think that'd be the right thing to do, since we have tools that are
   aware of those already.  Of course, we're talking about somewhat magical
   attributes, but I think that's fine (though, IIRC, NFSv4 [RFC3530] has
   some strange verbiage limiting attributes to applications).
  
  I thought NFSv4 supports extended attributes. What limitation are you 
  aware of?

 It does.  I meant this on pg. 12:

  [...]  Named attributes
are meant to be used by client applications as a method to associate
application specific data with a regular file or directory.

FreeBSD and Linux implement something different that is also called extended
attributes. There should be a way to map from the FreeBSD/Linux model to the
Solaris one.

 and this on pg. 36:

Named attributes are intended for data needed by applications rather
than by an NFS client implementation.  NFS implementors are strongly
encouraged to define their new attributes as recommended attributes
by bringing them to the IETF standards-track process.

See above... Since extended attributes appeared in Solaris (a Solaris 8 update???),
I have been looking for a way to map simple extended attribute implementations, such
as those on Mac OS, FreeBSD and Linux, to the more general implementation on
Solaris.

Before we start defining the first official functionality for this Sun feature, 
we should define a mapping for Mac OS, FreeBSD and Linux. It may make sense to 
define a subdirectory of the attribute directory for keeping old versions
of a file.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] Re: Re: Re: Metadata corrupted

2006-10-11 Thread Siegfried Nikolaivich
 On Mon, Oct 09, 2006 at 11:08:14PM -0700, Matthew Ahrens wrote:
 You may also want to try 'fmdump -eV' to get an idea of what those
 faults were.

I am not sure how to interpret the results, maybe you can help me.  It looks 
like the following with many more similar pages following:

% fmdump -eV
TIME   CLASS
Oct 07 2006 17:28:48.265102839 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0x933872163a1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0x46f50fe03a3fd818
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0x46f50fe03a3fd818
vdev_type = disk
vdev_path = /dev/dsk/c0t1d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x1c3644ae00
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0x78
zio_level = 0
zio_blkid = 0xafaf
__ttl = 0x1
__tod = 0x45284640 0xfcd25f7

Oct 07 2006 17:31:24.616729701 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xb7a0bad55900401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0xa543197df30d1460
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0xa543197df30d1460
vdev_type = disk
vdev_path = /dev/dsk/c0t2d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x30d218e00
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xea
zio_level = 0
zio_blkid = 0x7577
__ttl = 0x1
__tod = 0x452846dc 0x24c28c65

Oct 07 2006 17:31:24.903968466 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xb7b1da39251
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0x46f50fe03a3fd818
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0x46f50fe03a3fd818
vdev_type = disk
vdev_path = /dev/dsk/c0t1d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x30e558800
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xea
zio_level = 0
zio_blkid = 0x7724
__ttl = 0x1
__tod = 0x452846dc 0x35e176d2

Oct 07 2006 17:31:52.178481693 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xbe0bb6f3b11
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0xa543197df30d1460
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0xa543197df30d1460
vdev_type = disk
vdev_path = /dev/dsk/c0t2d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x375e12800
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xec
zio_level = 0
zio_blkid = 0x7788
__ttl = 0x1
__tod = 0x452846f8 0xaa36a1d
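One way to read output like this is to tally the ereports per device. A small sketch, using only the `vdev_path` lines from the four records above, shows both disks in the raidz vdev reporting checksum errors (two each), which points away from a single failing disk:

```shell
# Tally checksum ereports per vdev_path.
# The here-doc holds the vdev_path lines from the four fmdump records above.
counts=$(awk -F' = ' '/vdev_path/ { n[$2]++ } END { for (d in n) print d, n[d] }' <<'EOF'
vdev_path = /dev/dsk/c0t1d0s0
vdev_path = /dev/dsk/c0t2d0s0
vdev_path = /dev/dsk/c0t1d0s0
vdev_path = /dev/dsk/c0t2d0s0
EOF
)
echo "$counts"
```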

Cheers,
Albert
 
 


[zfs-discuss] best dual HD install-- RAID vs ZFS?

2006-10-11 Thread Patrick
i'm replacing the stock HD in my vaio notebook with 2 100GB 7200 RPM hitachis -- 
yes, it can hold 2 HDs. ;)  i was thinking about doing some sort of striping 
setup to get even more performance, but i am hardly a storage expert, so i'm 
not sure if it is better to set them up with software RAID or to install 
solaris on a normal UFS partition on one and then make a big zpool spanning 
both disks -- i'm sure i read in the ZFS docs that it would stripe to two disks 
just fine.

can anyone recommend the best course of action?
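For reference, the two layouts being weighed look roughly like this (a hypothetical sketch only: the device names are made up, and `zpool create` would need the real ones from `format`; a plain multi-device pool stripes dynamically, while `mirror` trades capacity for redundancy):

```shell
# Hypothetical device names -- substitute the real ones from 'format'.
# Striped pool (dynamic striping across both disks; capacity + speed, no redundancy):
#   zpool create tank c1d0 c2d0
# Mirrored pool (survives the loss of either disk; half the capacity):
#   zpool create tank mirror c1d0 c2d0
```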

thanks!
 
 


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread David Dyer-Bennet

On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html


Beware of this tool.  It reports Y for both 32-bit and 64-bit on the
nVidia MCP55 SATA controller -- but in the real world, it's supported
only in compatibility mode, and (fatal flaw for me) *it doesn't
support hot-swap with this controller*.  So apparently even a clean
result from this utility isn't a safe indication that the device is
fully supported.

Also, it says that the nVidia MCP55 ethernet is NOT supported in
either 32 or 64 bit, but actually nv_44 found the ethernet without any
trouble.  Maybe that's just that the support was extended recently;
the install tool is based on S10 6/06.

The more I learn about Solaris hardware support, the more I see it as
a minefield.
--
David Dyer-Bennet, mailto:[EMAIL PROTECTED], http://www.dd-b.net/dd-b/
RKBA: http://www.dd-b.net/carry/
Pics: http://www.dd-b.net/dd-b/SnapshotAlbum/
Dragaera/Steven Brust: http://dragaera.info/


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Darren . Reed

David Dyer-Bennet wrote:


On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html



Beware of this tool.  It reports Y for both 32-bit and 64-bit on the
nVidia MCP55 SATA controller -- but in the real world, it's supported
only in compatibility mode, and (fatal flaw for me) *it doesn't
support hot-swap with this controller*.  So apparently even a clean
result from this utility isn't a safe indication that the device is
fully supported.

Also, it says that the nVidia MCP55 ethernet is NOT supported in
either 32 or 64 bit, but actually nv_44 found the ethernet without any
trouble.  Maybe that's just that the support was extended recently;
the install tool is based on S10 6/06.



Driver support in Solaris Nevada is not the same as in Solaris 10 Update 2,
so it is not surprising to see these discrepancies.

In some cases, getting Solaris to support a piece of hardware is as simple
as running the update_drv command to tell it about a new PCI id (these
change often and are central to driver support on all x86 platforms.)
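As a sketch of that procedure (the PCI id below is made up for illustration; the real one comes from `prtconf -pv` or `scanpci` output on the affected machine, and the commands require root):

```shell
# Teach an existing driver about a new (hypothetical) PCI id, then
# re-attach it without a reboot:
#   update_drv -a -i '"pci10de,373"' nge
#   devfsadm -i nge
```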


The more I learn about Solaris hardware support, the more I see it as
a minefield.



I've found this to be true for almost all open source platforms where
you're trying to use something that hasn't been explicitly used and
tested by the developers.

Darren



Re: [zfs-discuss] zpool misses the obvious?

2006-10-11 Thread Artem Kachitchkine



# fstyp  c3t0d0s0
zfs


s0? How is this disk labeled? From what I saw, when you put an EFI label on a USB 
disk, the whole-disk device is going to be d0 (without a slice). What do these 
commands print:


# fstyp /dev/dsk/c3t0d0

# fdisk -W /dev/rdsk/c3t0d0

# fdisk -W /dev/rdsk/c3t0d0p0

-Artem.


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread David Dyer-Bennet

On 10/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


 The more I learn about Solaris hardware support, the more I see it as
 a minefield.


I've found this to be true for almost all open source platforms where
you're trying to use something that hasn't been explicitly used and
tested by the developers.


I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble.  Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me right now.  I've never had to
consider Linux issues in selecting hardware (in fact I haven't
selected hardware; my linux boxes have all been castoffs originally
purchased to run Windows), whereas I made considerable efforts to
find out what should work and how careful I had to be, including
asking for advice on this list, and I have still ended up getting
screwed.  Yeah, I'm a little bitter about this.
--
David Dyer-Bennet, mailto:[EMAIL PROTECTED], http://www.dd-b.net/dd-b/
RKBA: http://www.dd-b.net/carry/
Pics: http://www.dd-b.net/dd-b/SnapshotAlbum/
Dragaera/Steven Brust: http://dragaera.info/


Re: [zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

Artem Kachitchkine wrote:



# fstyp  c3t0d0s0
zfs


s0? How is this disk labeled? From what I saw, when you put EFI label 
on a USB disk, the whole disk device is going to be d0 (without 
slice). What do these commands print:


# fstyp /dev/dsk/c3t0d0


unknown_fstyp (no matches)


# fdisk -W - /dev/rdsk/c3t0d0



/dev/rdsk/c3t0d0 default fdisk table
Dimensions:
   512 bytes/sector
63 sectors/track
   255 tracks/cylinder
  36483 cylinders

[ eliding almost all the systid cruft ]

*  238: EFI_PMBR

* Id   Act  Bhead  Bsect  Bcyl   Ehead  Esect  Ecyl   Rsect  Numsect
  238  0    255    63     1023   255    63     1023   1      586114703


# fdisk -W /dev/rdsk/c3t0d0p0


Same dimension info as above...

* Id   Act  Bhead  Bsect  Bcyl   Ehead  Esect  Ecyl   Rsect  Numsect
  238  0    255    63     1023   255    63     1023   1      586114703


-Artem.




[zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread Steve Goldberg
Hi All,

Where is the ZFS configuration (zpools, mountpoints, filesystems, etc) data 
stored within Solaris?  Is there something akin to vfstab or perhaps a database?

Thanks,

Steve
 
 


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread James McPherson

On 10/12/06, Steve Goldberg [EMAIL PROTECTED] wrote:

Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris?  Is there something akin to vfstab
or perhaps a database?



Have a look at the contents of /etc/zfs for an in-filesystem artefact
of zfs. Apart from that, the information required is stored on the
disk itself.
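Concretely (a hedged sketch: on Solaris the artefact in question is the pool configuration cache under /etc/zfs, typically `zpool.cache`; the guard keeps the snippet harmless on hosts without it):

```shell
# Peek at the in-filesystem artefact; everything else (datasets,
# properties, mountpoints) lives in the pool itself on disk.
if [ -d /etc/zfs ]; then
    status="present: $(ls -A /etc/zfs)"   # typically zpool.cache
else
    status="absent (no ZFS pool cache on this host)"
fi
echo "/etc/zfs is $status"
```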

There is really good documentation on ZFS at the ZFS community
pages found via http://www.opensolaris.org/os/community/zfs.


cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 11, 2006, at 7:36 PM, David Dyer-Bennet wrote:


I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble.  Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me right now.


Yes, but remember back in the days of Linux 0.99, the amount of PC  
hardware was nowhere near as varied as it is today. Integrated  
chipsets? A pipe dream! Aside from video card chips and proprietary  
pre-ATAPI CDROM interfaces, you didn't have to reach far to find a  
driver which covered a given piece of hardware because when you got  
down to it, most hardware was the same. NE2000, anyone?


Today, in 2006 - much different story. I even had Linux AND Solaris  
problems with my machine's MCP51 chipset when it first came out. Both  
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're  
unfortunately on the bleeding edge of hardware AND software.


When in that situation, one can be patient, be helpful, or go back to  
where one came from.


/dale


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 12, 2006, at 12:23 AM, Frank Cusack wrote:

On October 11, 2006 11:14:59 PM -0400 Dale Ghent  
[EMAIL PROTECTED] wrote:

Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're
unfortunately on the bleeding edge of hardware AND software.


Yeah, Solaris x86 is so bleeding edge that it doesn't even support
Sun's own hardware!  (x2100 SATA, which is now already in its second
generation)


You know, I'm really perplexed over that, especially given that the  
Silicon Image chips (AFAIK) aren't in any Sun product and yet they  
have a SATA framework driver.


/dale