Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-25 Thread Erik Trimble

Miles Nordin wrote:

"et" == Erik Trimble  writes:



et> I'd still get the 7310 hardware.
et> Worst case scenario is that you can blow away the AmberRoad

okay but, AIUI he was saying pricing is 6% more for half as much
physical disk.  This is also why it ``uses less energy'' while
supposedly filling the same role: fishworks clustering is based on SAS
multi-initiator, on SAS fan...uh,...fan-in?  switches, while OP's
home-rolled cluster plan was based on copying the data to another
zpool.  remember pricing is based on ``market forces'': it's not dumb,
is the opposite of dumb, but...under ``market forces'' pricing if you
are paying for clever-schemes you can't use, YHL.
  

No, 6% LESS for the 7310 solution, vs the dual x4540 solution.

The key here is usable disk space.  Yes, the X4540 comes with 2x the 
disk space, but since you have to cluster them via non-shared storage, you 
effectively eliminate that advantage.  Not to mention that expanding a 
clustered X4540 pair either means you buy 2x the required storage 
(i.e. attach another array to each X4540), or you do the exact same 
thing as with a 7310 (i.e. dual-attach an array to both).
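To put rough numbers on the usable-space point, here is a hypothetical back-of-envelope sketch using the 24 TB and 22 TB raw figures quoted elsewhere in this thread (it deliberately ignores RAID-Z/mirror overhead inside each pool and only illustrates the cross-duplication argument):

```shell
# Back-of-envelope usable capacity, using the raw figures from this thread.
# Two X4540s (24 TB each) cross-backing each other: every half of the data
# exists on both boxes, so 48 TB raw yields roughly 24 TB usable.
X4540_RAW_TB=24
PAIR_RAW_TB=$(( X4540_RAW_TB * 2 ))
PAIR_USABLE_TB=$(( PAIR_RAW_TB / 2 ))
# 7310 cluster: both heads dual-attach the *same* 22 TB tray, so the raw
# space is not duplicated across boxes.
CLUSTER_USABLE_TB=22
echo "x4540 pair:   ${PAIR_RAW_TB} TB raw, ~${PAIR_USABLE_TB} TB usable"
echo "7310 cluster: ${CLUSTER_USABLE_TB} TB raw, ~${CLUSTER_USABLE_TB} TB usable"
```

So on these numbers the two approaches end up within a couple of TB of usable space of each other, despite the x4540 pair having over twice the raw disk.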



You certainly are paying some premium for the A-R software; however, I 
was stating the worst-case scenario where he finds he can't make use of 
the A-R software. He's still left with a hardware solution that is 
superior to the dual X4540 (in my opinion).  That is, software aside, 
my opinion is that a clustered X4140 with a shared J4400 chassis is a 
better idea than a "redundant" X4540 setup, with or without the AR 
software.  The AR software just makes the configuration of the 7310 
extremely simple, which is no small win in and of itself.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-25 Thread Miles Nordin
> "et" == Erik Trimble  writes:

et> I'd still get the 7310 hardware.
et> Worst case scenario is that you can blow away the AmberRoad

okay but, AIUI he was saying pricing is 6% more for half as much
physical disk.  This is also why it ``uses less energy'' while
supposedly filling the same role: fishworks clustering is based on SAS
multi-initiator, on SAS fan...uh,...fan-in?  switches, while OP's
home-rolled cluster plan was based on copying the data to another
zpool.  remember pricing is based on ``market forces'': it's not dumb,
is the opposite of dumb, but...under ``market forces'' pricing if you
are paying for clever-schemes you can't use, YHL.




Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-24 Thread Erik Trimble

Erik Trimble wrote:

Miles Nordin wrote:

"lz" == Len Zaifman  writes:



lz> So I now have 2 disk paths and two network paths as opposed to
lz> only one in the 7310 cluster.



You're configuring all your failover on the client, so the HA stuff is
stateless wrt the server?  sounds like the smart way since you control
both ends.  The entirely-server-side HA makes more sense when you
cannot control the clients because they are end users.

lz> Under these circumstances what advantage would a 7310 cluster have
lz> over 2 X4540s backing each other up and splitting the load?

remember fishworks does not include any source code nor run the
standard opensolaris builds, both of which could be a big problem if
you are working with IB, RDMA, and eventually Lustre.  You may get
access to certain fancy/flakey stuff around those protocols a year
sooner by sticking to the trunk builds.  Furthermore by not letting
them take the source away you get the ability to backport your own
fixes, if you get really aggressive.

Having a cluster where one can give up the redundancy for a moment and
turn one of the boxes into ``development'' is exactly what I'd want if
I were betting my future on bleeding edge stuff like HOL-blocking
fabrics, RDMA, and zpool-backed Lustre. 
It also lets you copy your whole dataset with rsync so you don't get
painted into a corner, e.g. trying to downgrade a zpool version.


I'd still get the 7310 hardware.

Worst case scenario is that you can blow away the AmberRoad software 
load, and install OpenSolaris/Solaris.  The hardware is a standard 
X4140 and J4200.


Note, that if you do that, well, you can't re-load A-R without a 
support contract.




Oops, I mean a J4400. 


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-24 Thread Erik Trimble

Miles Nordin wrote:

"lz" == Len Zaifman  writes:



lz> So I now have 2 disk paths and two network paths as opposed to
lz> only one in the 7310 cluster.



You're configuring all your failover on the client, so the HA stuff is
stateless wrt the server?  sounds like the smart way since you control
both ends.  The entirely-server-side HA makes more sense when you
cannot control the clients because they are end users.

lz> Under these circumstances what advantage would a 7310 cluster have
lz> over 2 X4540s backing each other up and splitting the load?

remember fishworks does not include any source code nor run the
standard opensolaris builds, both of which could be a big problem if
you are working with IB, RDMA, and eventually Lustre.  You may get
access to certain fancy/flakey stuff around those protocols a year
sooner by sticking to the trunk builds.  Furthermore by not letting
them take the source away you get the ability to backport your own
fixes, if you get really aggressive.

Having a cluster where one can give up the redundancy for a moment and
turn one of the boxes into ``development'' is exactly what I'd want if
I were betting my future on bleeding edge stuff like HOL-blocking
fabrics, RDMA, and zpool-backed Lustre.  


It also lets you copy your whole dataset with rsync so you don't get
painted into a corner, e.g. trying to downgrade a zpool version.
  


I'd still get the 7310 hardware.

Worst case scenario is that you can blow away the AmberRoad software 
load, and install OpenSolaris/Solaris.  The hardware is a standard X4140 
and J4200.


Note, that if you do that, well, you can't re-load A-R without a support 
contract.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Miles Nordin
> "lz" == Len Zaifman  writes:

lz> So I now have 2 disk paths and two network paths as opposed to
lz> only one in the 7310 cluster.



You're configuring all your failover on the client, so the HA stuff is
stateless wrt the server?  sounds like the smart way since you control
both ends.  The entirely-server-side HA makes more sense when you
cannot control the clients because they are end users.
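For reference, the client-side approach described here is typically done with Solaris NFS client-side failover: the client mounts a list of servers carrying identical read-only copies and switches between them itself, with no server-side cluster state. This is a sketch only; server and path names are invented, and this mechanism applies to read-only data.

```shell
# Solaris NFS client-side failover: list several replica servers in one
# mount; the client picks one and fails over on its own if it dies.
# (Placeholder names; read-only replicas only.)
mount -F nfs -o ro serverA,serverB:/export/data /data

# Or as a permanent /etc/vfstab entry:
# serverA,serverB:/export/data  -  /data  nfs  -  yes  ro
```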

lz> Under these circumstances what advantage would a 7310 cluster have
lz> over 2 X4540s backing each other up and splitting the load?

remember fishworks does not include any source code nor run the
standard opensolaris builds, both of which could be a big problem if
you are working with IB, RDMA, and eventually Lustre.  You may get
access to certain fancy/flakey stuff around those protocols a year
sooner by sticking to the trunk builds.  Furthermore by not letting
them take the source away you get the ability to backport your own
fixes, if you get really aggressive.

Having a cluster where one can give up the redundancy for a moment and
turn one of the boxes into ``development'' is exactly what I'd want if
I were betting my future on bleeding edge stuff like HOL-blocking
fabrics, RDMA, and zpool-backed Lustre.  

It also lets you copy your whole dataset with rsync so you don't get
painted into a corner, e.g. trying to downgrade a zpool version.
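A minimal sketch of that rsync escape hatch, copying from a snapshot so the source is consistent while you copy (pool, dataset, and host names here are made up for the example):

```shell
# Copy a dataset out at the file level, so the data is no longer tied to
# the source pool's on-disk zpool version (which cannot be downgraded).
zfs snapshot tank/data@escape
rsync -aHX /tank/data/.zfs/snapshot/escape/ devbox:/newpool/data/
zfs destroy tank/data@escape
```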




Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Erik Trimble
Get the 7310 setup.  Vs. the X4540 it is:

(1) less configuration on your clients
(2) instant failover with no intervention on your part
(3) less expensive
(4) expandable to 3x your current disk space
(5) lower power draw & less rack space
(6) So Simple, A Caveman Could Do It (tm)

-Erik


On Mon, 2009-11-23 at 14:46 -0500, Len Zaifman wrote:
> I asked this question a week ago but now I have what I feel are reasonable 
> pricing numbers :
> 
> For 2 X4540s (24 TB each) I pay 6% more than for one 7310 redundant cluster 
> (2 7310s in a cluster configuration) with 22 TB of disk and 2 x 18 GB SSDs.
> 
> I lose live redundancy, but can switch the fileserver serving nodes with a 
> short downtime if I have 2 X4540s.
> 
> I have a live copy of 1/2 the disk on one file server and a backup copy of 
> the other half of the disk on the same fileserver. The second server then has 
> the other half live, and the first half mirrored.
> 
> So I now have 2 disk paths and two network paths as opposed to only one in 
> the 7310 cluster.
> 
> Under these circumstances what advantage would a 7310 cluster have over 2 X4540s 
> backing each other up and splitting the load?
> Len Zaifman
> Systems Manager, High Performance Systems
> The Centre for Computational Biology
> The Hospital for Sick Children
> 555 University Ave.
> Toronto, Ont M5G 1X8
> 
> tel: 416-813-5513
> email: leona...@sickkids.ca
> 

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread David Magda

On Nov 23, 2009, at 14:46, Len Zaifman wrote:

Under these circumstances what advantage would a 7310 cluster have over 2  
X4540s backing each other up and splitting the load?


Do you want to worry about your storage system at 3 AM?

That's what all these appliances (regardless of vendor) get you for the 
extra cash: the manufacturers have tested things out so they Just 
Work. Of course, if you're comfortable with creating your own fail-over 
and redundancy system (or don't have the cash for the appliance), then 
there's no reason why you can't roll your own.


It's the usual trade-off: time or money.



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Trevor Pretty

Len Zaifman wrote:

  Under these circumstances what advantage would a 7310 cluster have over 2 X4540s backing each other up and splitting the load?

FISH!  My wife could drive a 7310 :-)

www.eagle.co.nz
This email is confidential and may be legally privileged. If received in error please destroy and immediately notify us.




Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Scott Meilicke
If the 7310s can meet your performance expectations, they sound much better 
than a pair of x4540s. Auto-failover, SSD performance (although these can be 
added to the 4540s), ease of management, and a great front end. 

I haven't seen if you can use your backup software with the 7310s, but from 
what I have read in this thread, that may be the only downside (a big one). 
Everything else points to the 7310s.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Len Zaifman
I asked this question a week ago but now I have what I feel are reasonable 
pricing numbers :

For 2 X4540s (24 TB each) I pay 6% more than for one 7310 redundant cluster 
(2 7310s in a cluster configuration) with 22 TB of disk and 2 x 18 GB SSDs.

I lose live redundancy, but can switch the fileserver serving nodes with a 
short downtime if I have 2 X4540s.

I have a live copy of 1/2 the disk on one file server and a backup copy of the 
other half of the disk on the same fileserver. The second server then has the 
other half live, and the first half mirrored.

So I now have 2 disk paths and two network paths as opposed to only one in the 
7310 cluster.
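The cross-backup scheme described above is typically scripted with incremental zfs send/receive between the two boxes; a sketch under invented names (hostnames, pools, and snapshots are all hypothetical, and the @prev snapshot is assumed to exist on both sides from the previous run):

```shell
# On x4540-a: replicate the live half to its backup copy on x4540-b.
zfs snapshot pool-a/half1@now
zfs send -i pool-a/half1@prev pool-a/half1@now | \
    ssh x4540-b zfs recv -F pool-b/half1-backup
```

Run symmetrically on the second box for the other half; failover is then a manual re-export of the backup copies, which is the "short downtime" trade-off mentioned above.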

Under these circumstances what advantage would a 7310 cluster have over 2 X4540s 
backing each other up and splitting the load?
Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Biology
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8

tel: 416-813-5513
email: leona...@sickkids.ca

From: Len Zaifman
Sent: November 17, 2009 12:20 PM
To: storage-disc...@opensolaris.org; zfs-discuss@opensolaris.org
Subject: X45xx storage vs 7xxx Unified storage

We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we are 
looking for high reliability, good performance (up to at least 350 MBytes 
/second over 10 GigE connection) and large capacity.

For the X45xx (aka thumper) capacity and performance seem to be there (we 
have 3 now)
However, for system upgrades , maintenance and failures, we have an 
availability problem.

For the 7xxx in a cluster configuration, we seem to be able to solve the 
availability issue, and perhaps get performance benefits from the SSD.

however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in a 
cluster configuration, we would be interested in hearing:

1) Does the 7xxx perform as well or better than thumpers?
2) Does the 7xxx failover work as expected (in test and real life)?
3) Does the SSD really help?
4) Do the analytics help prevent and solve real problems, or are they frivolous 
pretty pictures?
5) Is the 7xxx really a black box to be managed only by the GUI?
6) If you are in or near Toronto, would you be willing to let us see your setup?


Thanks for any insights you may have and are willing to share.

Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Biology
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8

tel: 416-813-5513
email: leona...@sickkids.ca

This e-mail may contain confidential, personal and/or health 
information(information which may be subject to legal restrictions on use, 
retention and/or disclosure) for the sole use of the intended recipient. Any 
review or distribution by anyone other than the person for whom it was 
originally intended is strictly prohibited. If you have received this e-mail in 
error, please contact the sender and delete all copies.


Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-20 Thread Len Zaifman
Thanks for your note:

Re type of 7xxx system: It is most likely a 7310 with one tray, two if we can 
squeeze it in.

Each tray will, we hope, be 22 x 1 TB disks and 2 x 18 GB SSDs. In a private 
response to this I got:
>> With SSD it performs better than the Thumper. My feeling would be two
>> trays plus two heads...4 read SSDs and 1 write SSD per tray should
>> outperform 3-4 Thumpers.

Thanks for the failover reference.


And thanks for mentioning backups: we use Networker, and if we cannot install 
the Networker client, we need to enable NDMP on the backup server.
I will investigate.


Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Biology
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8

tel: 416-813-5513
email: leona...@sickkids.ca

From: darren.mof...@sun.com [darren.mof...@sun.com] On Behalf Of Darren J 
Moffat [darr...@opensolaris.org]
Sent: November 18, 2009 12:10 PM
To: Len Zaifman
Cc: storage-disc...@opensolaris.org; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

Len Zaifman wrote:
> We are looking at adding to our storage. We would like ~20TB-30 TB.
>
> we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we are 
> looking for high reliability, good performance (up to at least 350 MBytes 
> /second over 10 GigE connection) and large capacity.
>
> For the X45xx (aka thumper) capacity and performance seem to be there (we 
> have 3 now)
> However, for system upgrades , maintenance and failures, we have an 
> availability problem.
>
> For the 7xxx in a cluster configuration, we seem to be able to solve the 
> availability issue, and perhaps get performance benefits from the SSD.
>
> however, the costs constrain the capacity we could afford.
>
> If anyone has experience with both systems, or with the 7xxx system in a 
> cluster configuration, we would be interested in hearing:
>
> 1) Does the 7xxx perform as well or better than thumpers?

Depends on which 7xxx you pick.
> 2) Does the 7xxx failover work as expected (in test and real life)?

Depends what your expectations are! The time to failover depends on how
you configure the cluster and how many filesystems you have and how many
disks etc etc.

Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback

> 3) Does the SSD really help?

For NFS yes the WriteZilla (slog) really helps because of how the NFS
protocol works.  For ReadZilla (l2arc) it depends on your workload.

> 4) Do the analytics help prevent and solve real problems, or are they 
> frivolous pretty pictures?

Yes they do, at a level of detail no other storage vendor can currently
provide.

> 5) is the 7xxx really a black box to be managed only by the GUI?

GUI or CLI but the CLI is *NOT* a Solaris shell it is a CLI version of
the GUI.  The 7xxx is a true appliance, it happens to be built from
OpenSolaris code but it is not a Solaris/OpenSolaris install.  So you
can't run your own applications on it.  Backups are via NDMP for example.


I highly recommend downloading the simulator and trying it in
VirtualBox/VMware:

http://www.sun.com/storage/disk_systems/unified_storage/

--
Darren J Moffat



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-18 Thread Erik Trimble

Darren J Moffat wrote:

Len Zaifman wrote:

We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we 
are looking for high reliability, good performance (up to at least 
350 MBytes /second over 10 GigE connection) and large capacity.


For the X45xx (aka thumper) capacity and performance seem to be 
there (we have 3 now)
However, for system upgrades , maintenance and failures, we have an 
availability problem.


For the 7xxx in a cluster configuration, we seem to be able to solve 
the availability issue, and perhaps get performance benefits from the SSD.


however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system 
in a cluster configuration, we would be interested in hearing:


1) Does the 7xxx perform as well or better than thumpers?


Depends on which 7xxx you pick.
The 7210 (the thumper/thor-based AmberRoad) does not support clustering. 
Neither does the 7110.  The 7310/7410 are the clusterable solutions.


They are much more flexible in configuration than the Thumper stuff, as 
they provide disk attach via a J4000-series JBOD, which can be populated 
with SAS or SATA drives, and different SSD configurations. 

Frankly, you might want a 7310/7410 in any case, over a thumper. Even 
with SSDs, certain workloads are far better served with SAS drives than 
SATA drives, and with a 7310/7410, you can easily mix both types in the 
same clustered setup.   In my case, I'm going with SAS to serve xVM 
images, as they demand a very high level of random I/O which is not well 
served by even SSD/SATA configs.


I'd really concentrate on the 7310 - it's in your capacity band, and 
provides clustering and SSD support.


A note here:  officially, you can't add anything other than a J4400 with 
SATA/SSDs to these things. HOWEVER, there's no technical reason not to 
add any J4xxx into one, and populate it with any combination of SAS or 
SSDs. The software certainly has no problem with it.  I'm still waiting 
for Official Support of a SAS-populated J4xxx into an A-R system.



2) Does the 7xxx failover work as expected (in test and real life)?


Depends what your expectations are! The time to failover depends on 
how you configure the cluster and how many filesystems you have and 
how many disks etc etc.


Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback


3) Does the SSD really help?


For NFS yes the WriteZilla (slog) really helps because of how the NFS 
protocol works.  For ReadZilla (l2arc) it depends on your workload.


I'm testing SLOG performance right now with iSCSI-shared xVM images.  
The L2ARC definitely makes a big difference here, as my VMs have a huge 
amount of common data which is read-mostly.



4) Do the analytics help prevent and solve real problems, or are they 
frivolous pretty pictures?


Yes they do, at a level of detail no other storage vendor can 
currently provide.


I have to agree here.  The A-R custom software is definitely nicer than 
the roll-my-own OpenSolaris-based setup I pitted the A-R against.



5) is the 7xxx really a black box to be managed only by the GUI?


GUI or CLI but the CLI is *NOT* a Solaris shell it is a CLI version of 
the GUI.  The 7xxx is a true appliance, it happens to be built from 
OpenSolaris code but it is not a Solaris/OpenSolaris install.  So you 
can't run your own applications on it.  Backups are via NDMP for example.



I highly recommend downloading the simulator and trying it in 
VirtualBox/VMware:


http://www.sun.com/storage/disk_systems/unified_storage/

My biggest bitch with the A-R systems is that I can't add common 
X4x40-series upgrades to them (and, attaching any combo of a J4xxx to 
one is still not Officially Supported). That is, I'd love to be able to 
add a FC HBA into one and make it act like a FC target, but so far, I'm 
not getting that it's a supported option. Also, while you can add a 
second CPU or more RAM to some of the configs, it's not really 
"encouraged".   A-R is an appliance, and frankly, you have to live with 
the limited configurations it's sold in.




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-18 Thread Jacob Ritorto
I don't wish to hijack, but along these same lines, is anyone able to 
compare the 7200 to the HP LeftHand series?  I'll start another thread if 
this goes too far astray.


thx
jake


Darren J Moffat wrote:

Len Zaifman wrote:

We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we 
are looking for high reliability, good performance (up to at least 350 
MBytes /second over 10 GigE connection) and large capacity.


For the X45xx (aka thumper) capacity and performance seem to be 
there (we have 3 now)
However, for system upgrades , maintenance and failures, we have an 
availability problem.


For the 7xxx in a cluster configuration, we seem to be able to solve 
the availability issue, and perhaps get performance benefits from the SSD.


however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in 
a cluster configuration, we would be interested in hearing:


1) Does the 7xxx perform as well or better than thumpers?


Depends on which 7xxx you pick.

2) Does the 7xxx failover work as expected (in test and real life)?


Depends what your expectations are! The time to failover depends on how 
you configure the cluster and how many filesystems you have and how many 
disks etc etc.


Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback


3) Does the SSD really help?


For NFS yes the WriteZilla (slog) really helps because of how the NFS 
protocol works.  For ReadZilla (l2arc) it depends on your workload.


4) Do the analytics help prevent and solve real problems, or are they 
frivolous pretty pictures?


Yes they do, at a level of detail no other storage vendor can currently 
provide.



5) is the 7xxx really a black box to be managed only by the GUI?


GUI or CLI but the CLI is *NOT* a Solaris shell it is a CLI version of 
the GUI.  The 7xxx is a true appliance, it happens to be built from 
OpenSolaris code but it is not a Solaris/OpenSolaris install.  So you 
can't run your own applications on it.  Backups are via NDMP for example.



I highly recommend downloading the simulator and trying it in 
VirtualBox/VMware:


http://www.sun.com/storage/disk_systems/unified_storage/





Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-18 Thread Darren J Moffat

Len Zaifman wrote:

We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we are 
looking for high reliability, good performance (up to at least 350 MBytes 
/second over 10 GigE connection) and large capacity.

For the X45xx (aka thumper) capacity and performance seem to be there (we 
have 3 now)
However, for system upgrades , maintenance and failures, we have an 
availability problem.

For the 7xxx in a cluster configuration, we seem to be able to solve the 
availability issue, and perhaps get performance benefits from the SSD.

however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in a 
cluster configuration, we would be interested in hearing:

1) Does the 7xxx perform as well or better than thumpers?


Depends on which 7xxx you pick.

2) Does the 7xxx failover work as expected (in test and real life)?


Depends what your expectations are! The time to failover depends on how 
you configure the cluster and how many filesystems you have and how many 
disks etc etc.


Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback


3) Does the SSD really help?


For NFS yes the WriteZilla (slog) really helps because of how the NFS 
protocol works.  For ReadZilla (l2arc) it depends on your workload.
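On a self-built (non-appliance) system, the rough equivalent of WriteZilla and ReadZilla is attaching slog and cache devices to the pool. A sketch with placeholder pool and device names:

```shell
# Dedicated ZIL (slog): absorbs the synchronous writes NFS generates;
# mirrored, since an unflushed slog that dies can lose committed writes.
zpool add tank log mirror c4t0d0 c4t1d0

# L2ARC (cache): extends the read cache onto SSD; safe to run unmirrored,
# as it only holds copies of data already stored in the pool.
zpool add tank cache c4t2d0
```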



4) Do the analytics help prevent and solve real problems, or are they 
frivolous pretty pictures?


Yes they do, at a level of detail no other storage vendor can currently 
provide.



5) is the 7xxx really a black box to be managed only by the GUI?


GUI or CLI but the CLI is *NOT* a Solaris shell it is a CLI version of 
the GUI.  The 7xxx is a true appliance, it happens to be built from 
OpenSolaris code but it is not a Solaris/OpenSolaris install.  So you 
can't run your own applications on it.  Backups are via NDMP for example.



I highly recommend downloading the simulator and trying it in 
VirtualBox/VMware:


http://www.sun.com/storage/disk_systems/unified_storage/

--
Darren J Moffat


[zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-18 Thread Len Zaifman
We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~ 200 nodes (1100 cores)   to feed data to using nfs, and we are 
looking for high reliability, good performance (up to at least 350 MBytes 
/second over 10 GigE connection) and large capacity.

For the X45xx (aka thumper) capacity and performance seem to be there (we 
have 3 now)
However, for system upgrades , maintenance and failures, we have an 
availability problem.

For the 7xxx in a cluster configuration, we seem to be able to solve the 
availability issue, and perhaps get performance benefits from the SSD.

however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in a 
cluster configuration, we would be interested in hearing:

1) Does the 7xxx perform as well or better than thumpers?
2) Does the 7xxx failover work as expected (in test and real life)?
3) Does the SSD really help?
4) Do the analytics help prevent and solve real problems, or are they frivolous 
pretty pictures?
5) Is the 7xxx really a black box to be managed only by the GUI?
6) If you are in or near Toronto, would you be willing to let us see your setup?


Thanks for any insights you may have and are willing to share.

Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Biology
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8

tel: 416-813-5513
email: leona...@sickkids.ca
