Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Laurent Blume
 You could offline the disk if [b]this[/b] disk (not
 the pool) had a replica. Nothing wrong with the
 documentation. Hmm, maybe it is a little misleading
 here. I walked into the same trap.

I apologize for being daft here, but I don't find any ambiguity in the 
documentation.
This is explicitly stated as being possible.

This scenario is possible assuming that the systems in question see the 
storage once it is attached to the new switches, possibly through different 
controllers than before, and your pools are set up as RAID-Z or mirrored 
configurations.

And further down, it even says that it's not possible to take two devices in a 
RAID-Z offline, with that exact error as an example:

You cannot take a pool offline to the point where it becomes faulted. For 
example, you cannot take offline two devices out of a RAID-Z configuration, nor 
can you take offline a top-level virtual device.

# zpool offline tank c1t0d0
cannot offline c1t0d0: no valid replicas


http://docs.sun.com/app/docs/doc/819-5461/gazgm?l=en&a=view

I don't understand what you mean by this disk not having a replica. It's 
RAID-Z2: by definition, all the data it contains is replicated on two other 
disks in the pool. That's why the pool is still working fine.

 The pool is not using the disk anymore anyway, so
 (from the zfs point of view) there is no need to
 offline the disk. If you want to stop the io-system
 from trying to access the disk, pull it out or wait
 until it gives up...

Yes, there is. I don't want the disk to come back online if the system reboots, 
because what actually happens is that it *never* gives up (well, at least not 
in more than 24 hours), and all I/O to the zpool stops as long as those errors 
persist. Yes, I know it should continue working. In practice, it does not 
(though it used to be much worse in previous versions of S10, with all I/O 
stopping on all disks and volumes, both ZFS and UFS, and usually ending in a 
panic).
And the zpool command hangs and never finishes. The only way to get out of it 
is to use cfgadm to send multiple hardware resets to the SATA device, then 
disconnect it. At this point, zpool completes and shows the disk as having 
faulted.


Laurent
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
Hello,
I'm a newbie on OpenSolaris, and I'm very interested in the ZFS functionality 
for setting up a disk-based replicated backup system for my company.
I'm trying to benchmark it using 2 virtual machines.
ZFS snapshot commands work well on my main server, as I have the root role, 
but I planned to use zfs send and receive across SSH as described in the 
Sun documentation, and there I ran into a problem I can't solve.

As I plan to run the replication from a crontab script, I need it to work 
without any human intervention (no login password prompt).

I first tried to use the root account to log in over SSH on the 2nd server, but 
it seems you can't do that under OpenSolaris (even when modifying sshd_config 
to authorize it).

So I created a dedicated user, repli, and tried this command:
[b]zfs send rpool/sauvegardes_windows@mardi-15-07-09 | ssh 
repli@opensolaris_bck /usr/sbin/zfs recv -F rpool/bck_sauvegardes_windows[/b]
but I got this message:
[b]cannot receive new filesystem stream: permission denied[/b]

It seems that the repli account does not have enough rights to do the zfs 
receive (as a matter of fact, when I try to set up a ZFS hierarchy on the 2nd 
server using it, that doesn't work either).

Rights management under Solaris seems to be very different from the Linux one, 
so I'm at a loss: I don't know how to give the account enough rights to be 
able to run the zfs receive command.
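[Editor's note] One approach worth testing here is ZFS delegated administration via zfs allow, assuming the OpenSolaris build in use supports it. This is a sketch, not the thread's confirmed fix; the dataset and user names are taken from the post above, and the permission list is an assumption to check against the zfs(1M) man page. The script only prints the command, since the real thing needs root on the receiving side:

```shell
# Sketch: delegate just enough ZFS rights to the replication user on
# the receiving side, instead of using root. User and dataset names
# follow the thread; "create,mount,receive" is an assumption to verify
# on your build.
USER=repli
DATASET=rpool/bck_sauvegardes_windows
ALLOW_CMD="zfs allow $USER create,mount,receive $DATASET"
# Printed rather than executed, since it needs root and a real pool:
echo "$ALLOW_CMD"
```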

I also tried another way, using
zfs send rpool/sauvegardes_windows@mardi-15-07-09 | ssh repli@opensolaris_bck 
su - root -c '/usr/sbin/zfs recv -F rpool/bck_sauvegardes_windows'

but then the root password is required (even if it is set to blank) and the 
command fails with:
su: désolé  ("sorry")

I'm in deep trouble, so does anyone here know how to handle this situation?
Or is there another way to do this ZFS replication across the network (using 
something other than SSH)?

B.R. from France.


Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Thomas Liesner
You're right; according to the documentation it definitely should work. Still, 
it doesn't, at least not in Solaris 10. But I am not a ZFS developer, so this 
should probably be answered by them. I will give it a try with a recent 
OpenSolaris VM and check whether this works in newer implementations of ZFS.

  The pool is not using the disk anymore anyway, so
  (from the zfs point of view) there is no need to
  offline the disk. If you want to stop the
 io-system
  from trying to access the disk, pull it out or
 wait
  until it gives up...
 
 Yes, there is. I don't want the disk to become online
 if the system reboots, because what actually happens
 is that it *never* gives up (well, at least not in
 more than 24 hours), and all I/O to the zpool stop as
 long as there are those errors. Yes, I know it should
 continue working. In practice, it does not (though it
 used to be much worse in previous versions of S10,
 with all I/O stopping on all disks and volumes, both
 ZFS and UFS, and usually ending in a panic).
 And the zpool command hangs, and never finished. The
 only way to get out of it is to use cfgadm to send
 multiple hardware resets to the SATA device, then
 disconnect it. At this point, zpool completes and
 shows the disk as having faulted.

Again you are right that this is very annoying behaviour. The same thing 
happens with DiskSuite pools and UFS when a disk is failing, though.
For me it is not a ZFS problem but a Solaris problem. The kernel should stop 
trying to access failing disks a LOT earlier instead of blocking the complete 
I/O for the whole system.
I have always understood ZFS as a concept for hot-pluggable disks. This is the 
way I use it, and that is why I never really had this problem. Whenever I run 
into this behaviour, I simply pull the disk in question and replace it. The 
hiccups have never affected the performance of our production environment for 
longer than a couple of minutes.

Tom


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
I just found the solution!

I use pfexec to execute the zfs receive command with the needed rights, 
without being asked for a password.

Moreover, I added on-the-fly compression using gzip.

The solution looks like this:

zfs send rpool/sauvegardes_windows@mercredi-16-07-09 | gzip | ssh 
repli@opensolaris_bck 'gunzip | pfexec /usr/sbin/zfs recv 
rpool/bck_sauvegardes_windows'
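[Editor's note] For an unattended cron job, the pipeline above can be wrapped in a small script. This is a sketch only: the date-based snapshot naming is an assumption (the thread uses weekday names), and the script prints the pipeline instead of executing it, since it needs real pools on both ends:

```shell
#!/bin/sh
# Sketch of a nightly replication step: snapshot, then send the stream
# compressed over ssh to the backup host. Dataset and host names follow
# the thread; the date-based snapshot name is an assumption.
SRC=rpool/sauvegardes_windows
DST=rpool/bck_sauvegardes_windows
REMOTE=repli@opensolaris_bck
SNAP="$SRC@$(date +%Y-%m-%d)"
PIPELINE="zfs snapshot $SNAP && zfs send $SNAP | gzip | ssh $REMOTE 'gunzip | pfexec /usr/sbin/zfs recv -F $DST'"
# Printed rather than executed (remove the echo to go live):
echo "$PIPELINE"
```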

B.R


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Alexander Skwar
Hi!


On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq no-re...@opensolaris.org wrote:


 moreover i added an on the fly compression using gzip


You can drop the gzip|gunzip if you use SSH's on-the-fly compression:

  ssh -C

ssh also uses gzip compression, so there won't be much difference.

Regards,

Alexander
-- 
[[ http://zensursula.net ]]
[ Soc. = http://twitter.com/alexs77 | http://www.plurk.com/alexs77 ]
[ Mehr = http://zyb.com/alexws77 ]
[ Chat = Jabber: alexw...@jabber80.com | Google Talk: a.sk...@gmail.com ]
[ Mehr = AIM: alexws77 ]
[ $[ $RANDOM % 6 ] = 0 ]  rm -rf / || echo 'CLICK!'


Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Thomas Liesner
FYI:

In b117 it works as expected and stated in the documentation.

Tom


Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Cyril Ducrocq
Thanks for the tip.

In the meantime I had trouble with a cannot receive incremental stream: 
destination rpool/bck_sauvegardes_windows has been modified since most recent 
snapshot

...which I resolved using the -F option of the zfs recv command.

(It was only a modification of the atime property of the destination 
filesystem during my checks.)
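[Editor's note] An alternative to forcing a rollback with -F is to keep the destination from being modified in the first place. The sketch below shows that idea, plus an incremental send between the two snapshots mentioned in this thread. The readonly property and "zfs send -i" are standard ZFS, but treat the exact pipeline as an assumption; the script only prints the commands:

```shell
#!/bin/sh
# Sketch: make the received copy read-only (which also stops atime
# updates), then send only the delta between consecutive snapshots.
# Dataset, host, and snapshot names follow the thread.
DST=rpool/bck_sauvegardes_windows
RO_CMD="zfs set readonly=on $DST"
INC_CMD="zfs send -i @mardi-15-07-09 rpool/sauvegardes_windows@mercredi-16-07-09 | ssh repli@opensolaris_bck pfexec /usr/sbin/zfs recv $DST"
# Printed rather than executed:
echo "$RO_CMD"
echo "$INC_CMD"
```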

I'm going to try this ZFS solution (probably coupled with a tool like unison) 
on my real servers, with a real amount of data and, unfortunately, real 
bandwidth limitations due to SDSL, all of this after my holidays.

B.R.



Re: [zfs-discuss] Can't offline a RAID-Z2 device: no valid replica

2009-07-16 Thread Ross
Great news, thanks Tom!


[zfs-discuss] ZFS pegging the system

2009-07-16 Thread Jeff Haferman

We have a SGE array task that we wish to run with elements 1-7.  
Each task generates output and takes roughly 20 seconds to 4 minutes  
of CPU time.  We're doing them on a machine with about 144 8-core nodes,
and we've divvied the job up to do about 500 at a time.

So, we have 500 jobs at a time writing to the same ZFS partition.

What is the best way to collect the results of the task? Currently we  
are having each task write to STDOUT and then are combining the  
results. This nails our ZFS partition to the wall and kills  
performance for other users of the system.  We tried setting up a  
MySQL server to receive the results, but it couldn't take 1000  
simultaneous inbound connections.

Jeff



Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Ian Collins

Alexander Skwar wrote:

Hi!


On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq no-re...@opensolaris.org wrote:


moreover i added an on the fly compression using gzip


You can dump the gzip|gunzip, if you use SSH on-the-fly compression, using

  ssh -C

But test first; using compression is likely to slow down the transfer 
unless you have a very slow connection.
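[Editor's note] A quick way to follow that advice is to time a throwaway transfer of the same snapshot with and without -C. This is a sketch with names from the thread; the remote 'cat > /dev/null' sink discards the stream so the run measures only send and transport. The script prints the two commands to run under time(1) rather than executing them:

```shell
#!/bin/sh
# Sketch: compare a send with and without ssh compression by timing
# each against a /dev/null sink on the far side. Names follow the
# thread; run each a few times and compare wall-clock times.
SNAP=rpool/sauvegardes_windows@mardi-15-07-09
REMOTE=repli@opensolaris_bck
PLAIN="zfs send $SNAP | ssh $REMOTE 'cat > /dev/null'"
COMPRESSED="zfs send $SNAP | ssh -C $REMOTE 'cat > /dev/null'"
echo "Run each of these under time(1) and compare:"
echo "  $PLAIN"
echo "  $COMPRESSED"
```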


--
Ian.



[zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

Hello All,

I'm just starting to think about building some mass-storage arrays and  
am looking to better understand some of the components involved.


For example, the Supermicro SC826 series of systems is available with  
three backplanes:


1. SAS / SATA Expander Backplane with single LSI SASX28 Expander Chip
2. SAS / SATA Expander Backplane with dual LSI SASX28 Expander Chips
3. SAS / SATA Direct Attached Backplane

Assuming I am using this as an external array, connected to a server via 
SAS, how do these fit into my topology? Expander, dual expanders, and no 
expander? Huh?


Thanks for pointing to relevant documentation.

A.


--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 17:02, Adam Sherman asher...@versature.com wrote:
 Hello All,

 I'm just starting to think about building some mass-storage arrays and am
 looking to better understand some of the components involved.

 For example, the Supermicro SC826 series of systems is available with three
 backplanes:

 1. SAS / SATA Expander Backplane with single LSI SASX28 Expander Chip
 2. SAS / SATA Expander Backplane with dual LSI SASX28 Expander Chips
 3. SAS / SATA Direct Attached Backplane

 Assuming I am using this an external array, connected to a server via SAS,
 how do these fit into my topology? Expander, dual-expanders and no expander?
 Huh?
The direct attached backplane is right out.  This means that each
drive has its own individual sata port, meaning you'd need three SAS
wide ports just to connect the drives.

The single-expander version has one LSI SAS expander, which connects
to all the drives and has two upstream ports.  This means you plug
in one or two servers directly, and they can both see all the disks.
I've only tested this with one-server configurations.  It also has one
downstream port which you could use to daisy-chain more expanders
(i.e., more 826/846 cases) onto the same server.

We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.

The dual-expander version has two LSI SAS expanders.  You need
dual-port SAS drives (not SATA).  This lets you have two paths all the
way to each drive; even if one expander fails (this seems pretty
unlikely to me, but if you're shooting for many nines it's worth
considering) you still have access to the disks.

 Thanks for pointing to relevant documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options.  See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.

Will

[1]: http://supermicro.com/manuals/chassis/2U/SC826.pdf
[2]: http://supermicro.com/manuals/chassis/tower/SC846.pdf


[zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Matt Weatherford


Hi,

I borked a libc.so library file on my Solaris 10 server (ZFS root) and was 
wondering if there is a good live CD that will be able to mount my ZFS root 
filesystem, so that I can make this quick repair on the system boot drive and 
get back up and running again. Are all ZFS roots created equal? It's an x86 
Solaris 10 box. If I boot a BeleniX live CD, will it be able to mount this 
ZFS root?

Thanks,

Matt



Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Ian Collins

Matt Weatherford wrote:


Hi,

I borked a libc.so library file on my solaris 10 server (zfs root) - 
was wondering if there
is a good live CD that will be able to mount my ZFS root fs so that I 
can make this
quick repair on the system boot drive and get back running again.  Are 
all ZFS
roots created equal? Its an x86 solaris 10 box. If I boot a belenix 
live CD will it be

able to mount this ZFS root?

It should, as long as the pool version is the same as or older than the 
version supported by the live CD.  If you want to be cautious, mount 
your pool read-only first.
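[Editor's note] Assuming the live CD's ZFS version is recent enough, the repair session would look roughly like the commands below. This is a sketch: the /a altroot and the libc source path are assumptions, and copying a library from a different OS build onto Solaris 10 is only safe if the versions match, so verify before going live. The script prints the commands rather than executing them:

```shell
#!/bin/sh
# Sketch of the recovery steps from a live CD: import the damaged root
# pool under an alternate root, restore the file, export again.
# /a and the source libc path are placeholders; verify on your system.
IMPORT_CMD="zpool import -f -R /a rpool"
COPY_CMD="cp /path/to/known-good/libc.so.1 /a/lib/libc.so.1"
EXPORT_CMD="zpool export rpool"
echo "$IMPORT_CMD"
echo "$COPY_CMD"
echo "$EXPORT_CMD"
```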


--
Ian.



Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Jorgen Lundman
We used the OpenSolaris preview 2010.02 DVD from genunix.org to fix our 
broken ZFS boot after an attempted clone. It had the zpool and zfs tools 
needed to import, re-mount, etc.


Lund


Matt Weatherford wrote:


Hi,

I borked a libc.so library file on my solaris 10 server (zfs root) - was 
wondering if there
is a good live CD that will be able to mount my ZFS root fs so that I 
can make this
quick repair on the system boot drive and get back running again.  Are 
all ZFS
roots created equal? Its an x86 solaris 10 box. If I boot a belenix live 
CD will it be

able to mount this ZFS root?

Thanks,

Matt




--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Peter Pickford
Will

  boot -F failsafe

work?

2009/7/16 Matt Weatherford m...@u.washington.edu:

 Hi,

 I borked a libc.so library file on my solaris 10 server (zfs root) - was
 wondering if there
 is a good live CD that will be able to mount my ZFS root fs so that I can
 make this
 quick repair on the system boot drive and get back running again.  Are all
 ZFS
 roots created equal? Its an x86 solaris 10 box. If I boot a belenix live CD
 will it be
 able to mount this ZFS root?

 Thanks,

 Matt



Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

The direct attached backplane is right out.  This means that each
drive has its own individual sata port, meaning you'd need three SAS
wide ports just to connect the drives.

The single-expander version has one LSI SAS expander, which connects
to all the drives and has two upstream ports.  This means you plug
in one or two servers directly, and they can both see all the disks.
I've only tested this with one-server configurations.  It also has one
downstream port which you could use to daisy-chain more expanders
(i.e., more 826/846 cases) onto the same server.


That makes things a heck of a lot clearer, thank you very much for  
taking the time to explain!


Have you ever seen/read about anyone using this kind of setup for HA 
clustering? I'm getting ideas about Open HA / Solaris Cluster on top of this 
setup with two systems connected; that would rock!



We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.


Good to hear. What HBA(s) are you using against it?


Thanks for pointing to relevant documentation.

The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options.  See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.



I'll read though that, thanks for the detailed pointers.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
Another thought in the same vein: I notice many of these systems support 
SES-2 for management. Does this do anything useful under Solaris?


Sorry for these questions, I seem to be having a tough time locating  
relevant information on the web.


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Jonathan Borden
 
  We have a SC846E1 at work; it's the 24-disk, 4u
 version of the 826e1.
  It's working quite nicely as a SATA JBOD enclosure.
  We'll probably be
 buying another in the coming year to have more
  capacity.
 Good to hear. What HBA(s) are you using against it?
 

I've got one too and it works great. I use the LSI SAS 3442E, which also gives 
you an external SAS port. You don't need a fancy HBA with onboard RAID; 
configure it to IT mode.


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread James C. McPherson
On Thu, 16 Jul 2009 20:26:17 -0400
Adam Sherman asher...@versature.com wrote:

 Another thought in the same vein, I notice many of these systems  
 support SES-2 for management. Does this do anything useful under  
 Solaris?

We've got some integration between FMA and SES devices which
allows us to do some management tasks.

libtopo, libscsi and libses are the main methods of getting
that information out. For an example outside FMA, you could
have a look into the ses/sgen plugin from pluggable fwflash.

Is there anything you're specifically interested in wrt management
uses of SES?

thanks,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 20:52 , James C. McPherson wrote:

Another thought in the same vein, I notice many of these systems
support SES-2 for management. Does this do anything useful under
Solaris?


We've got some integration between FMA and SES devices which
allows us to to some management tasks.


So that would allow FMA to detect SATA disk failures then?


libtopo, libscsi and libses are the main methods of getting
that information out. For an example outside FMA, you could
have a look into the ses/sgen plugin from pluggable fwflash.

Is there anything you're specifically interested in wrt management
uses of SES?


I'm really just exploring. Where can I read about how FMA is going to  
help with failures in my setup?


Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 18:01 , Will Murnane wrote:

We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
buying another in the coming year to have more capacity.



I should also ask: any other solutions I should have a look at to get  
=12 SATA disks externally attached to my systems?


Thanks!

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Bob Friesenhahn

On Thu, 16 Jul 2009, Adam Sherman wrote:


I should also ask: any other solutions I should have a look at to get =12 
SATA disks externally attached to my systems?


Depending on how much failure resiliency you want and how you plan to 
configure your pool, you may be better off with two independent disk 
trays with 12 disks each.  For example, if you were to use mirrors, 
you could split the mirrors across the disk trays.  If one tray fails, 
your system still works.


If you are planning to use raidz1 or raidz2, then there is likely no 
benefit to going with two smaller trays.
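[Editor's note] The tray-splitting idea above looks like this in practice; a sketch with hypothetical device names, where the c1* disks sit in tray 1 and the c2* disks in tray 2. The script prints the zpool command instead of running it:

```shell
#!/bin/sh
# Sketch: each mirror pairs one disk from tray 1 (c1*) with one from
# tray 2 (c2*), so a whole-tray failure leaves one side of every
# mirror intact. Device names are hypothetical.
POOL_CMD="zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0"
# Printed rather than executed:
echo "$POOL_CMD"
```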


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 20:20, Adam Sherman asher...@versature.com wrote:
 Ever seen/read about anyone use this kind of setup for HA clustering? I'm
 getting ideas about Open HA/Solaris Cluster on top of this setup with two
 systems connecting, that would rock!
It's possible that this would work with homogeneous hardware, but I
tried with another LSI-based expander and SATA disks, and had no luck.
 Perhaps SAS is necessary?

 We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
 It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
 buying another in the coming year to have more capacity.

 Good to hear. What HBA(s) are you using against it?
LSI 3442E-R.  It's connected through a Supermicro cable, CBL-0168L, so
it can be attached via an external cable.  There's a card needed,
CSE-PTJBOD-CB1, that allows the case to run without a motherboard in
it.  There's no monitoring for the power supplies, but I built one for
it; I can provide schematics and suggested part numbers if you're
interested.

 We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
 It's working quite nicely as a SATA JBOD enclosure.  We'll probably be
 buying another in the coming year to have more capacity.


 I should also ask: any other solutions I should have a look at to get =12
 SATA disks externally attached to my systems?
This was the best solution we found for the money.  The 826 is about
$750, while the 846 is $1100 shipped (wiredzone.com).  Per disk, the
846 is almost $20 cheaper.  If you only care for 12 disks, then one
might as well not spend the extra money, but if there's potential for
expansion it's a wise investment.

Will


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 21:16, Rob Logan r...@logan.com wrote:
 I'm confused, I thought expanders only worked with SAS disks, and SATA disks
 took an entire SAS port. Could someone post output showing more than 4 SATA
 drives across one InfiniBand cable (4 SAS ports)?

 2 % cfgadm | grep sata
 sata1/0::dsk/c9t0d0            cd/dvd       connected    configured   ok
 sata1/1::dsk/c9t1d0            disk         connected    configured   ok
 sata1/2::dsk/c9t2d0            disk         connected    configured   ok
 sata1/3                        sata-port    empty        unconfigured ok
 sata1/4::dsk/c9t4d0            disk         connected    configured   ok
 sata1/5                        sata-port    empty        unconfigured ok
 sata2/0::dsk/c7t0d0            disk         connected    configured   ok
 sata2/1::dsk/c7t1d0            disk         connected    configured   ok
 sata2/2::dsk/c7t2d0            disk         connected    configured   ok
 sata2/3                        sata-port    empty        unconfigured ok
 sata2/4::dsk/c7t4d0            disk         connected    configured   ok
 sata2/5::dsk/c7t5d0            disk         connected    configured   ok
 sata2/6                        sata-port    empty        unconfigured ok
 sata2/7                        sata-port    empty        unconfigured ok
 sata3/0::dsk/c8t0d0            disk         connected    configured   ok
 sata3/1::dsk/c8t1d0            disk         connected    configured   ok
 sata3/2::dsk/c8t2d0            disk         connected    configured   ok
 sata3/3                        sata-port    empty        unconfigured ok
 sata3/4::dsk/c8t4d0            disk         connected    configured   ok
 sata3/5::dsk/c8t5d0            disk         connected    configured   ok
 sata3/6                        sata-port    empty        unconfigured ok
 sata3/7                        sata-port    empty        unconfigured ok
Here's the relevant part of cfgadm -al on our machine.  The disks are all SATA.

c4                             scsi-bus     connected    configured   unknown
c4::dsk/c4t15d0                disk         connected    configured   unknown
c4::dsk/c4t17d0                disk         connected    configured   unknown
c4::dsk/c4t18d0                disk         connected    configured   unknown
c4::dsk/c4t19d0                disk         connected    configured   unknown
c4::dsk/c4t20d0                disk         connected    configured   unknown
c4::dsk/c4t21d0                disk         connected    configured   unknown
c4::dsk/c4t22d0                disk         connected    configured   unknown
c4::dsk/c4t23d0                disk         connected    configured   unknown
c4::dsk/c4t24d0                disk         connected    configured   unknown
c4::dsk/c4t25d0                disk         connected    configured   unknown
c4::dsk/c4t26d0                disk         connected    configured   unknown
c4::dsk/c4t27d0                disk         connected    configured   unknown
c4::dsk/c4t28d0                disk         connected    configured   unknown
c4::dsk/c4t29d0                disk         connected    configured   unknown
c4::dsk/c4t30d0                disk         connected    configured   unknown
c4::dsk/c4t31d0                disk         connected    configured   unknown
c4::dsk/c4t32d0                disk         connected    configured   unknown
c4::dsk/c4t33d0                disk         connected    configured   unknown
c4::es/ses0                    ESI          connected    configured   unknown

Will


Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Rob Logan

 c4                             scsi-bus     connected    configured   unknown
 c4::dsk/c4t15d0                disk         connected    configured   unknown
 :
 c4::dsk/c4t33d0                disk         connected    configured   unknown
 c4::es/ses0                    ESI          connected    configured   unknown

Thanks! So SATA disks show up as JBOD in IT mode. Is there some magic that
load-balances the 4 SAS ports, since this shows up as one scsi-bus?



Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman

On 16-Jul-09, at 21:17 , Will Murnane wrote:

Good to hear. What HBA(s) are you using against it?

LSI 3442E-R.  It's connected through a Supermicro cable, CBL-0168L, so
it can be attached via an external cable.



I'm looking at the LSI SAS3801X because it seems to be what Sun OEMs  
for my X4100s:


http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA

$280 or so, looks like. Might be overkill for me though.

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113





Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Will Murnane
On Thu, Jul 16, 2009 at 21:35, Adam Sherman asher...@versature.com wrote:
 I'm looking at the LSI SAS3801X because it seems to be what Sun OEMs for my
 X4100s:
If you're given the choice (i.e., you have the M2 revision), PCI
Express is probably the bus to go with.  It's basically the same card,
but on a faster bus.  But there's nothing wrong with the PCI-X
version.
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3801e/index.html

 $280 or so, looks like. Might be overkill for me though.
The 3442X-R is a little cheaper: $205 from Provantage.
http://www.provantage.com/lsi-logic-lsi00164~7LSIG06K.htm

Will