[zfs-discuss] ZFS Project Hardware

2008-05-23 Thread David Francis
Greetings all

I was looking at creating a little ZFS storage box at home using the following
SATA controllers (Adaptec Serial ATA II RAID 1420SA) on an OpenSolaris x86 build.

Just wanted to know if anyone out there is using these and can vouch for them.
If not, is there something else you can recommend or suggest?

Disks would be 6 x Seagate 500 GB drives.

Thanks 

David
 
 


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Aaron Blew
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far.  I'm
using it in an old PCI slot, so it's probably not as fast as it could
be, but it worked great right out of the box.

-Aaron


On Fri, May 23, 2008 at 12:09 AM, David Francis [EMAIL PROTECTED] wrote:
 Greetings all

 I was looking at creating a little ZFS storage box at home using the 
 following SATA controllers (Adaptec Serial ATA II RAID 1420SA) on Opensolaris 
 X86 build

 Just wanted to know if anyone out there is using these and can vouch for 
 them. If not if there's something else you can recommend or suggest.

 Disk's would be 6*Seagate 500GB drives.

 Thanks

 David




Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Ian Collins
David Francis wrote:
 Greetings all

 I was looking at creating a little ZFS storage box at home using the 
 following SATA controllers (Adaptec Serial ATA II RAID 1420SA) on Opensolaris 
 X86 build

 Just wanted to know if anyone out there is using these and can vouch for 
 them. If not if there's something else you can recommend or suggest.

 Disk's would be 6*Seagate 500GB drives.

   
6 or more SATA slots are quite common on current motherboards, so if you
shop around, you may not need an add-on card.

Ian



Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Pascal Vandeputte
That 1420SA will not work, period. Type "1420sa solaris" into Google and you'll
find a thread about the problems I had with it.

I sold it and took the cheap route again with a Silicon Image 3124-based 
adapter and had more problems which now probably would be solved with the 
latest Solaris updates.

Anyway, I finally settled for a motherboard with an Intel ICH9-R and couldn't 
be happier (Intel DG33TL/DG33TLM, 6 SATA ports). No hassles and very speedy.

That Supermicro card someone else is recommending should also work without any
issues, and it's really cheap for what you get (8 ports). Your maximum
throughput won't exceed 100MB/s, though, if you can't plug it into a PCI-X slot
and have to resort to a regular PCI slot instead.
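
(For some back-of-the-envelope context: plain 32-bit/33 MHz PCI tops out at
roughly 4 bytes x 33 MHz = ~133 MB/s of theoretical bandwidth, shared by every
device on the bus, so ~100 MB/s of sustained disk throughput is about the
practical ceiling.)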

Greetings,

Pascal
 
 


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Brian Hechinger
On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
 
 I sold it and took the cheap route again with a Silicon Image 3124-based 
 adapter and had more problems which now probably would be solved with the 
 latest Solaris updates.

I'm running a 3124 with snv81 and haven't had a single problem with it.
Whatever problems you ran into have likely been resolved.

Just my $0.02. ;)

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] ZFS panics solaris while switching a volume to read-only

2008-05-23 Thread Veltror
Why does update 6 have to be out before a patch can be produced for this? This
is a show-stopper for putting ZFS into production on anything other than local
disks; a production box that panics when a single disk goes offline is worse
than useless. I cannot see why this is not a high-priority bug.
 
 


[zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Hernan Freschi
Hello, I'm having a big problem here, disastrous maybe. 

I have a zpool consisting of 4 x 500GB SATA drives. This pool was born on S10U4
and was recently upgraded to snv85 because of iSCSI issues with some initiator.
Last night I was doing housekeeping, deleting old snapshots. One snapshot
failed to delete because it had a dependent clone, so I tried to destroy that
clone. Everything went wrong from there.

The deletion was taking an excessively long time (over 40 minutes). zpool
status hangs when I call it; zfs list too. zpool iostat showed disk activity.
Other services not dependent on the pool were running, but the iSCSI this
machine was serving was unbearably slow.

At one point, I lost all iSCSI, SSH, web, and all other services. Ping still 
worked. So I go to the server and notice that the fans are running at 100%. I 
try to get a console (local VGA+keyboard) but the monitor shows no signal. No 
disk activity seemed to be happening at the moment. So, I do the standard 
procedure (reboot). Solaris boots but stops at "hostname: blah". I see disk
activity from the pool disks, so I let it boot. 30 minutes later, it still hadn't
finished. I thought (correctly) that the system was waiting to mount the ZFS
filesystems before booting, but for some reason it doesn't. I call it a day and
let the machine do its thing.

8 hours later I return. CPU is cold, disks are idle and... Solaris is still stuck
at the same "hostname: blah". Time to reboot again, this time in failsafe mode.
zpool import shows that the devices are detected and online. I delete
/etc/zfs/zpool.cache and reboot. Solaris starts normally with all services
running, but of course no ZFS. zpool import shows the available pool, no
errors. I do zpool import -f pool... 20 minutes later I'm still waiting for the
pool to mount. zpool iostat shows activity:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tera        1.51T   312G    274      0  1.61M  2.91K
tera        1.51T   312G    308      0  1.82M      0
tera        1.51T   312G    392      0  2.31M      0
tera        1.51T   312G    468      0  2.75M      0

but the mountpoint /tera is still not populated (and zpool import still doesn't 
exit).

zpool status shows:

  pool: tera
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tera        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0

errors: No known data errors

What's going on? Why is it taking so long to import?

Thanks in advance,
Hernan
 
 


Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Hernan Freschi
I got more info. I can run zpool history and this is what I get:

2008-05-23.00:29:40 zfs destroy tera/[EMAIL PROTECTED]
2008-05-23.00:29:47 [internal destroy_begin_sync txg:3890809] dataset = 152
2008-05-23.01:28:38 [internal destroy_begin_sync txg:3891101] dataset = 152
2008-05-23.07:01:36 zpool import -f tera
2008-05-23.07:01:40 [internal destroy_begin_sync txg:3891106] dataset = 152
2008-05-23.10:52:56 zpool import -f tera
2008-05-23.10:52:58 [internal destroy_begin_sync txg:3891112] dataset = 152
2008-05-23.12:17:49 [internal destroy_begin_sync txg:3891114] dataset = 152
2008-05-23.12:27:48 zpool import -f tera
2008-05-23.12:27:50 [internal destroy_begin_sync txg:3891120] dataset = 152
2008-05-23.13:03:07 [internal destroy_begin_sync txg:3891122] dataset = 152
2008-05-23.13:56:52 zpool import -f tera
2008-05-23.13:56:54 [internal destroy_begin_sync txg:3891128] dataset = 152

Apparently, it starts destroying dataset #152, which is the parent snapshot of
the clone I issued the destroy command for. Not sure how it works, but I ordered
the deletion of the CLONE, not the snapshot (which I was going to destroy
anyway).

The question is still: why does it hang the machine? Why can't I access the
filesystems? Isn't it supposed to import the zpool, mount the ZFS filesystems
and then do the destroy in the background?
 
 


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Erik Trimble
Brian Hechinger wrote:
 On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
   
 I sold it and took the cheap route again with a Silicon Image 3124-based 
 adapter and had more problems which now probably would be solved with the 
 latest Solaris updates.
 

 I'm running a 3124 with snv81 and haven't had a single problem with it.
 Whatever problems you ran into have likely been resolved.

 Just my $0.02. ;)

 -brian
   
The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.  
It's dirt cheap (under $25), and you will probably need to re-flash the 
BIOS with one from Silicon Image's web site to remove the RAID software 
(Solaris doesn't understand it), but I've had nothing but success with 
this card (the re-flash is simple).


On a related note - does anyone know of a good Solaris-supported 4+ port 
SATA card for PCI-Express?  Preferably 1x or 4x slots...




-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Brian Hechinger
On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
 
 I'm running a 3124 with snv81 and haven't had a single problem with it.
 Whatever problems you ran into have likely been resolved.
 
 The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.  
 It's dirt cheap (under $25), and you will probably need to re-flash the 
 BIOS with one from Silicon Image's web site to remove the RAID software 
 (Solaris doesn't understand it), but I've had nothing but success with 
 this card (the re-flash is simple).

With the 3124 you don't even need to play the re-flash game; the 3124 is
completely supported.

 On a related note - does anyone know of a good Solaris-supported 4+ port 
 SATA card for PCI-Express?  Preferably 1x or 4x slots...

The Silicon Image 3134 is supported by Solaris.

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] 3510 JBOD with multipath

2008-05-23 Thread Charles Soto
The Solaris SAN Configuration and Multipathing Guide proved very helpful for me:

http://docs.sun.com/app/docs/doc/820-1931/

I, too, was surprised to see MPIO enabled by default on x86 (we're using Dell/EMC
CX3-40 with our X4500 & X6250 systems).
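
If it's useful, here's a rough sketch of the knobs involved (assuming Solaris 10
with the bundled stmsboot and mpathadm tools; the logical-unit name is just a
placeholder):

  # stmsboot -e                      # enable MPxIO for supported HBAs (prompts for a reboot)
  # stmsboot -L                      # show the mapping from non-MPxIO to MPxIO device names
  # mpathadm list lu                 # list multipathed logical units and their path counts
  # mpathadm show lu <logical-unit>  # per-LUN details, including the load-balance setting

With MPxIO enabled you see one device per LUN instead of one per path, and I/O
is spread across the available paths (round-robin by default, as far as I know).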

Charles

Quoting Krutibas Biswal [EMAIL PROTECTED]:

 Robert Milkowski wrote:
  Hello Krutibas,
 
  Wednesday, May 21, 2008, 10:43:03 AM, you wrote:
 
  KB On x64 Solaris 10, the default setting of mpxio was :
 
  KB mpxio-disable=no;
 
  KB I changed it to
 
  KB mpxio-disable=yes;
 
  KB and rebooted the machine and it detected 24 drives.
 
  Originally you wanted to get it multipathed which was the case by
  default. Now you have disabled it (well, you still have to paths but
  no automatic failover).
 
 Thanks. Can somebody point me to some documentation  on this ?
 I wanted to see 24 drives so that I can use load sharing between
 two controllers (C1Disk1, C2Disk2, C1Disk3, C2Disk4...) for
 performance.

 If I enable multipathing, would the drive do automatic load balancing
 (sharing) between the two controllers ?

 Thanks,
 Krutibas



[zfs-discuss] Indiana vs Nevada (for ZFS file server)

2008-05-23 Thread Christopher Gibbs
Pretty much what the subject says. I'm wondering which platform will
have the best stability/performance for a ZFS file server.

I've been using Solaris Express builds of Nevada for quite a while and
I'm currently on build 79b but I'm at a point where I want to upgrade.
So now I have to ask, should I go with the OpenSolaris (.com) release
instead? Also, is there one that has better/newer driver support?
(Mostly in relation to SATA controllers)

Not sure if this is the right place to post this, but since my main
goal is a ZFS file server, I figured I should get you guys' opinions.

-- 
Christopher Gibbs
Programmer / Analyst
Web Integration & Programming
Abilene Christian University


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Tim
On Fri, May 23, 2008 at 2:36 PM, Brian Hechinger [EMAIL PROTECTED] wrote:

 On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
  
  I'm running a 3124 with snv81 and haven't had a single problem with it.
  Whatever problems you ran into have likely been resolved.
  
  The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.
  It's dirt cheap (under $25), and you will probably need to re-flash the
  BIOS with one from Silicon Image's web site to remove the RAID software
  (Solaris doesn't understand it), but I've had nothing but success with
  this card (the re-flash is simple).

 With the 3124 you don't even need to do the flash game, the 3124 is
 comletely
 supported.

  On a related note - does anyone know of a good Solaris-supported 4+ port
  SATA card for PCI-Express?  Preferably 1x or 4x slots...

 The Silicon Image 3134 is supported by Solaris.



I'm looking on their site and don't even see any data on the 3134... Is this
*something new* that hasn't been released, or?  The only thing I see is the 3132.





 -brian
 --
 Coding in C is like sending a 3 year old to do groceries. You gotta
 tell them exactly what you want or you'll end up with a cupboard full of
 pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Brandon High
On Fri, May 23, 2008 at 12:43 PM, Tim [EMAIL PROTECTED] wrote:
 I'm looking on their site and don't even see any data on the 3134... this
 *something new* that hasn't been released or?  The only thing I see is 3132.

There isn't a 3134, but there is a 3124, which is a PCI-X based 4-port.

-B

-- 
Brandon High[EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] What is a vdev?

2008-05-23 Thread Orvar Korvar
Ok, so I make one vdev out of 8 disks. And I combine all vdevs into one large
zpool? Is that correct?

I have an 8-port SATA card. I have 4 drives in one zpool. That is one vdev,
right? Now I can add 4 new drives and make them into one zpool. And now I
combine both zpools into one zpool? That can't be right? I don't get vdevs. Can
someone explain?
 
 


Re: [zfs-discuss] What is a vdev?

2008-05-23 Thread Bill Sommerfeld

On Fri, 2008-05-23 at 13:45 -0700, Orvar Korvar wrote:
 Ok, so i make one vdev out of 8 discs. And I combine all vdevs into one large 
 zpool? Is it correct?
 
 I have 8 port SATA card. I have 4 drives into one zpool.

zpool create mypool raidz1 disk0 disk1 disk2 disk3

you have a pool consisting of one vdev made up of 4 disks.

  That is one vdev, right? Now I can add 4 new drives and make them
 into one zpool.

you could do that and keep the pool separate, or you could add them as a
single vdev to the existing pool:

zpool add mypool raidz1 disk4 disk5 disk6 disk7
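
After that, "zpool status mypool" should show something like this (a sketch;
disk0..disk7 are placeholder names):

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            disk0   ONLINE       0     0     0
            disk1   ONLINE       0     0     0
            disk2   ONLINE       0     0     0
            disk3   ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            disk4   ONLINE       0     0     0
            disk5   ONLINE       0     0     0
            disk6   ONLINE       0     0     0
            disk7   ONLINE       0     0     0

i.e. one pool made of two top-level raidz1 vdevs of four disks each, with writes
striped across the two vdevs.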

- Bill




Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Tim
On Fri, May 23, 2008 at 3:15 PM, Brandon High [EMAIL PROTECTED] wrote:

 On Fri, May 23, 2008 at 12:43 PM, Tim [EMAIL PROTECTED] wrote:
  I'm looking on their site and don't even see any data on the 3134... this
  *something new* that hasn't been released or?  The only thing I see is
 3132.

 There isn't a 3134, but there is a 3124, which is a PCI-X based 4-port.

 -B

 --
 Brandon High[EMAIL PROTECTED]
 The good is the enemy of the best. - Nietzsche


So we're still stuck in the same place we were a year ago: no high-port-count
PCI-E compatible non-RAID SATA cards.  You'd think with all the demand
SOMEONE would've stepped up to the plate by now.  Marvell, c'mon ;)

--Tim


Re: [zfs-discuss] What is a vdev?

2008-05-23 Thread Victor Latushkin
Orvar Korvar wrote:
 Ok, so i make one vdev out of 8 discs. And I combine all vdevs into
 one large zpool? Is it correct?

I think it is easier to provide a couple of examples:

zpool create pool c1t0d0 mirror c1t1d0 c1t2d0

This command would create a storage pool named 'pool' consisting of 2
top-level vdevs:

the first is c1t0d0
the second is a mirror of c1t1d0 and c1t2d0

Though it is not recommended to combine top-level vdevs with different
replication settings in one pool.

ZFS will distribute blocks of data between these two top-level vdevs 
automatically, so blocks of data which end up on mirror will be 
protected, and blocks of data which end up on single disk will not be 
protected.


zpool add pool c2t0d0

would add another single-disk top-level vdev to the pool 'pool'.


zpool add pool raidz c3t0d0 c4t0d0 c5t0d0

would add another raidz top-level vdev to our pool.

Inside ZFS, the disks that make up a mirror or raidz are also called vdevs, but
they are not top-level vdevs.
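
In "zpool status" output the top-level vdevs are the entries at the first level
of indentation under the pool name. For the examples above, that would look
roughly like this (a sketch):

        pool        ONLINE
          c1t0d0    ONLINE
          mirror    ONLINE
            c1t1d0  ONLINE
            c1t2d0  ONLINE
          c2t0d0    ONLINE
          raidz1    ONLINE
            c3t0d0  ONLINE
            c4t0d0  ONLINE
            c5t0d0  ONLINE

Here c1t0d0, the mirror, c2t0d0 and the raidz1 are top-level vdevs; the disks
indented under the mirror and the raidz1 are the leaf vdevs.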

Hth,
victor


Re: [zfs-discuss] Indiana vs Nevada (for ZFS file server)

2008-05-23 Thread Tim
Depends on what your end goal is, really.  The opensolaris.com version is
releasing every 6 months, and I don't believe there's currently any patching
between releases.  If you feel comfortable sitting on it that long, with
potential bugs for 6 months, great.  If not... it should be an easy choice.
Personally I chose option 3 and loaded Nexenta.  You get regular updates,
but it's still *stable* (or has been for me to date).

--Tim

On Fri, May 23, 2008 at 2:43 PM, Christopher Gibbs [EMAIL PROTECTED]
wrote:

 Pretty much what the subject says. I'm wondering which platform will
 have the best stability/performance for a ZFS file server.

 I've been using Solaris Express builds of Nevada for quite a while and
 I'm currently on build 79b but I'm at a point where I want to upgrade.
 So now I have to ask, should I go with the OpenSolaris (.com) release
 instead? Also, is there one that has better/newer driver support?
 (Mostly in relation to SATA controllers)

 Not sure if this is the right place to post this but since my main
 goal is a ZFS server then I should get your guys opinions.

 --
 Christopher Gibbs
 Programmer / Analyst
 Web Integration  Programming
 Abilene Christian University


Re: [zfs-discuss] Indiana vs Nevada (for ZFS file server)

2008-05-23 Thread Christopher Gibbs
One other thing I noticed is that OpenSolaris (.com) will
automatically install ZFS root for you. Will Nexenta do that?

On Fri, May 23, 2008 at 4:31 PM, Tim [EMAIL PROTECTED] wrote:
 Depends on what your end goal is really.  The opensolaris.com version is
 releasing every 6 months, and I don't believe there's currently any patching
 between releases.  If you feel comfortable sitting on it that long, with
 potential bugs for 6 months, great.  If not... it should be an easy choice.
 Personally I chose option 3 and loaded nexenta.  You get regular updates,
 but it still *stable* (or has been for me to date).

 --Tim

 On Fri, May 23, 2008 at 2:43 PM, Christopher Gibbs [EMAIL PROTECTED]
 wrote:

 Pretty much what the subject says. I'm wondering which platform will
 have the best stability/performance for a ZFS file server.

 I've been using Solaris Express builds of Nevada for quite a while and
 I'm currently on build 79b but I'm at a point where I want to upgrade.
 So now I have to ask, should I go with the OpenSolaris (.com) release
 instead? Also, is there one that has better/newer driver support?
 (Mostly in relation to SATA controllers)

 Not sure if this is the right place to post this but since my main
 goal is a ZFS server then I should get your guys opinions.

 --
 Christopher Gibbs
 Programmer / Analyst
 Web Integration  Programming
 Abilene Christian University





-- 
Christopher Gibbs
Programmer / Analyst
Web Integration & Programming
Abilene Christian University


Re: [zfs-discuss] Indiana vs Nevada (for ZFS file server)

2008-05-23 Thread Tim
Yup.  They were the first to do so (as far as I know).

--Tim


On Fri, May 23, 2008 at 4:47 PM, Christopher Gibbs [EMAIL PROTECTED]
wrote:

 One other thing I noticed is that OpenSolaris (.com) will
 automatically install ZFS root for you. Will Nexenta do that?

 On Fri, May 23, 2008 at 4:31 PM, Tim [EMAIL PROTECTED] wrote:
  Depends on what your end goal is really.  The opensolaris.com version is
  releasing every 6 months, and I don't believe there's currently any
 patching
  between releases.  If you feel comfortable sitting on it that long, with
  potential bugs for 6 months, great.  If not... it should be an easy
 choice.
  Personally I chose option 3 and loaded nexenta.  You get regular updates,
  but it still *stable* (or has been for me to date).
 
  --Tim
 
  On Fri, May 23, 2008 at 2:43 PM, Christopher Gibbs [EMAIL PROTECTED]
  wrote:
 
  Pretty much what the subject says. I'm wondering which platform will
  have the best stability/performance for a ZFS file server.
 
  I've been using Solaris Express builds of Nevada for quite a while and
  I'm currently on build 79b but I'm at a point where I want to upgrade.
  So now I have to ask, should I go with the OpenSolaris (.com) release
  instead? Also, is there one that has better/newer driver support?
  (Mostly in relation to SATA controllers)
 
  Not sure if this is the right place to post this but since my main
  goal is a ZFS server then I should get your guys opinions.
 
  --
  Christopher Gibbs
  Programmer / Analyst
  Web Integration  Programming
  Abilene Christian University
 
 



 --
 Christopher Gibbs
 Programmer / Analyst
 Web Integration  Programming
 Abilene Christian University



Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-23 Thread Bill McGonigle
On May 22, 2008, at 19:54, Richard Elling wrote:

 The Adaptive Replacement Cache
 (ARC) uses main memory as a read cache.  But sometimes
 people want high performance, but don't want to spend money
 on main memory. So, the Level-2 ARC can be placed on a
 block device, such as a fast [solid state] disk which may even
 be volatile.

The remote-disk cache makes perfect sense.  I'm curious if there are  
measurable benefits for caching local disks as well?  NAND-flash SSD  
drives have good 'seek' and slow  transfer, IIRC, but that might  
still be useful for lots of small reads where seek is everything.

I'm not quite understanding the argument for it being read-only so it
can be used on volatile SDRAM-based SSDs, though.  Those tend to be
much, much more expensive than main memory, right?  So why would
anybody buy one for cache - is it so they can front a really massive
pool of disks that would exhaust market-available maximum main memory
sizes?

-Bill

-
Bill McGonigle, Owner   Work: 603.448.4440
BFC Computing, LLC  Home: 603.448.1668
[EMAIL PROTECTED]   Cell: 603.252.2606
http://www.bfccomputing.com/Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf



Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-23 Thread Bob Friesenhahn
On Fri, 23 May 2008, Bill McGonigle wrote:
 The remote-disk cache makes perfect sense.  I'm curious if there are
 measurable benefits for caching local disks as well?  NAND-flash SSD
 drives have good 'seek' and slow  transfer, IIRC, but that might
 still be useful for lots of small reads where seek is everything.

NAND-flash SSD drives also wear out.  They are not very useful as a
cache device which is written to repetitively.  A busy server could
likely wear one out in just a day or two unless the drive contains
aggressive hardware-based wear leveling, in which case it might survive a
few more days, depending on how large the device is.

Cache devices are usually much smaller and run a lot hotter than a 
normal filesystem.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Hernan Freschi
I let it run for about 4 hours. When I returned, still the same: I can ping the
machine but I can't SSH to it or use the console. Please, I need urgent help
with this issue!
 
 


Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Hernan Freschi
I let it run while watching top, and this is what I got just before it hung.
Look at the free mem. Is this memory allocated to the kernel? Can I allow the
kernel to swap?

last pid:  7126;  load avg:  3.36,  1.78,  1.11;  up 0+01:01:11        21:16:49
88 processes: 78 sleeping, 9 running, 1 on cpu
CPU states: 22.4% idle,  0.4% user, 77.2% kernel,  0.0% iowait,  0.0% swap
Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  7126 root       9  58    0   45M 4188K run      0:00  0.71% java
  4821 root       1  59    0 5124K 1724K run      0:03  0.46% zfs
  5096 root       1  59    0 5124K 1724K run      0:03  0.45% zfs
  2470 root       1  59    0 4956K 1660K sleep    0:06  0.45% zfs
 
 


Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Rob
  Memory: 3072M phys mem, 31M free mem, 2055M swap, 1993M free swap

perhaps this might help..
mkfile -n 4g /usr/swap
swap -a /usr/swap

http://blogs.sun.com/realneel/entry/zfs_arc_statistics

Rob



Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-23 Thread Rob

  measurable benefits for caching local disks as well?  NAND-flash SSD

I'm confused; the only reason I can think of for making a

  To create a pool with cache devices, specify a cache  vdev
  with any number of devices. For example:

# zpool create pool c0d0 c1d0 cache c2d0 c3d0

  Cache devices cannot be mirrored or part of a  raidz  confi-
  guration.  If a read error is encountered on a cache device,
  that read I/O is reissued to the original storage pool  dev-
  ice,   which   might   be   part  of  a  mirrored  or  raidz
  configuration.

  The content of the cache devices is considered volatile,  as
  is the case with other system caches.

device non-volatile was to fill the ARC after a reboot, and the in-RAM
ARC pointers for the cache device will take quite a bit of RAM too, so
perhaps spending the $$ on more system RAM rather than an SSD cache
device would be better? Unless you have really slow iSCSI vdevs :-)

Rob


Re: [zfs-discuss] help with a BIG problem,

2008-05-23 Thread Hernan Freschi
I forgot to post arcstat.pl's output:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
22:32:37  556K  525K     94  515K   94    9K   98  515K   97     1G    1G
22:32:38    63    63    100    63  100     0    0    63  100     1G    1G
22:32:39    74    74    100    74  100     0    0    74  100     1G    1G
22:32:40    76    76    100    76  100     0    0    76  100     1G    1G
State Changed
22:32:41    75    75    100    75  100     0    0    75  100     1G    1G
22:32:42    77    77    100    77  100     0    0    77  100     1G    1G
22:32:43    72    72    100    72  100     0    0    72  100     1G    1G
22:32:44    80    80    100    80  100     0    0    80  100     1G    1G
State Changed
22:32:45    98    98    100    98  100     0    0    98  100     1G    1G

sometimes c is 2G.

I tried the mkfile and swap, but I get:
[EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
[EMAIL PROTECTED]:/]# swap -a /export/swap
/export/swap may contain holes - can't swap on it.

/export is the only place where I have enough free space. I could add another 
drive if needed.
 
 


Re: [zfs-discuss] help with a BIG problem,

2008-05-23 Thread Robin Guo
Hi, Hernan,

  You may not use '-n' with mkfile; that will make swap complain.

Hernan Freschi wrote:
 I forgot to post arcstat.pl's output:

     Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
 22:32:37  556K  525K     94  515K   94    9K   98  515K   97     1G    1G
 22:32:38    63    63    100    63  100     0    0    63  100     1G    1G
 22:32:39    74    74    100    74  100     0    0    74  100     1G    1G
 22:32:40    76    76    100    76  100     0    0    76  100     1G    1G
 State Changed
 22:32:41    75    75    100    75  100     0    0    75  100     1G    1G
 22:32:42    77    77    100    77  100     0    0    77  100     1G    1G
 22:32:43    72    72    100    72  100     0    0    72  100     1G    1G
 22:32:44    80    80    100    80  100     0    0    80  100     1G    1G
 State Changed
 22:32:45    98    98    100    98  100     0    0    98  100     1G    1G

 sometimes c is 2G.

 I tried the mkfile and swap, but I get:
 [EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
 [EMAIL PROTECTED]:/]# swap -a /export/swap
 /export/swap may contain holes - can't swap on it.

 /export is the only place where I have enough free space. I could add another 
 drive if needed.
  
  


Re: [zfs-discuss] help with a BIG problem,

2008-05-23 Thread Victor Latushkin
Hernan Freschi wrote:
 I tried the mkfile and swap, but I get:
 [EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
 [EMAIL PROTECTED]:/]# swap -a /export/swap
 /export/swap may contain holes - can't swap on it.

You should not use -n when creating files for additional swap. This is
mentioned in the mkfile man page.
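
For example (with -n, mkfile records the size but does not allocate the disk
blocks, which is why swap complains about holes; without -n the blocks are
written out up front):

# mkfile 4g /export/swap
# swap -a /export/swap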


Wbr,
Victor


Re: [zfs-discuss] help with a BIG problem,

2008-05-23 Thread Hernan Freschi
Oops, replied too fast.
I ran it without -n, and the swap space was added successfully... but it didn't
work; it died out of memory again.
 
 


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-23 Thread Richard Elling
[EMAIL PROTECTED] wrote:
   measurable benefits for caching local disks as well?  NAND-flash SSD

 I'm confused, the only reason I can think of making a

   To create a pool with cache devices, specify a cache  vdev
   with any number of devices. For example:

 # zpool create pool c0d0 c1d0 cache c2d0 c3d0

   Cache devices cannot be mirrored or part of a  raidz  confi-
   guration.  If a read error is encountered on a cache device,
   that read I/O is reissued to the original storage pool  dev-
   ice,   which   might   be   part  of  a  mirrored  or  raidz
   configuration.

   The content of the cache devices is considered volatile,  as
   is the case with other system caches.

 device non-volatile was to fill the ARC after reboot, and the in ram
 ARC pointers for the cache device will take quite abit of ram too, so
 perhaps spending the $$ on more system ram rather than a SSD cache
 device would be better? unless you have really slow iscsi vdevs :-)
   

Consider a case where you might use large, slow SATA drives (1 TByte, 
7,200 rpm)
for the main storage, and a single small, fast (36 GByte, 15krpm) drive 
for the
L2ARC.  This might provide a reasonable cost/performance trade-off.
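
As a sketch (the device names are placeholders), that layout would be created
with something like:

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 cache c2t0d0

where the c1 disks are the large SATA drives and c2t0d0 is the small 15k rpm
drive acting as the L2ARC cache device.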
 -- richard



Re: [zfs-discuss] help with a BIG problem, can't import my zpool anymore

2008-05-23 Thread Mattias Pantzare
2008/5/24 Hernan Freschi [EMAIL PROTECTED]:
 I let it run while watching TOP, and this is what I got just before it hung. 
 Look at free mem. Is this memory allocated to the kernel? can I allow the 
 kernel to swap?

No, the kernel will not use swap for this.

But most of the memory used by the kernel is probably in caches that
should release memory when needed.

Is this a 32 or 64 bit system?

ZFS will sometimes use all kernel address space on a 32-bit system.

You can give the kernel more address space with this command (only on
32-bit system):
eeprom kernelbase=0x5000
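
(To check whether the kernel is running 32-bit or 64-bit, something like
"isainfo -kv" will tell you.)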