Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-29 Thread Pasi Kärkkäinen
On Tue, Nov 27, 2012 at 08:52:06AM +0100, Grégory Giannoni wrote:
 
 The LSI 9240-4I was not able to connect to the 25-drive bay; I have not
 tested the LSI 9260-16I or LSI 9280-24i.
 

What was the problem connecting the LSI 9240-4i to the 25-drive bay?

-- Pasi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-29 Thread Grégory Giannoni

Le 29 nov. 2012 à 09:27, Pasi Kärkkäinen a écrit :
 The LSI 9240-4I was not able to connect to the 25-drive bay; I have not
 tested the LSI 9260-16I or LSI 9280-24i.
 
 
 What was the problem connecting the LSI 9240-4i to the 25-drive bay?
 

The 25-drive backplane needs two SFF-8087 (multi-lane) cables to work 
correctly. The LSI 9240-4i has just one SFF-8087 port.

Using two LSI 9240-4i cards didn't work either.

-- 
Grégory Giannoni
http://www.wmaker.net


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-29 Thread Pasi Kärkkäinen
On Thu, Nov 29, 2012 at 09:42:21AM +0100, Grégory Giannoni wrote:
 
 Le 29 nov. 2012 à 09:27, Pasi Kärkkäinen a écrit :
  The LSI 9240-4I was not able to connect to the 25-drive bay; I have not
  tested the LSI 9260-16I or LSI 9280-24i.
  
  
  What was the problem connecting the LSI 9240-4i to the 25-drive bay?
  
 
 The 25-drives backplane needs two SFF-8087 (multilane cables) to work 
 correctly. The LSI 9240-4i has just one SFF-8087 port.


Yeah, that explains it. :)

-- Pasi
 
 Using two LSI 9240-4i cards didn't work either.
 
 -- 
 Grégory Giannoni
 http://www.wmaker.net


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-27 Thread Edmund White
On 11/27/12 1:52 AM, Grégory Giannoni s...@narguile.org wrote:



Le 27 nov. 2012 à 01:17, Erik Trimble a écrit :

 On 11/26/2012 12:54 PM, Grégory Giannoni wrote:
 [snip]
 I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are
now DL 180 G6. I got better performance using LSI 9240-8I rather than HP
SmartArray (tried P410 & P812). I'm using only 600GB SSD drives.
 That LSI controller supports SATA III, i.e. 6Gb/s SATA.   The Px1x
controllers do 6Gb/s SAS, but only 3Gb/s SATA, so that's your likely perf
difference.  The SmartArray Px2x series should do both SATA and SAS at
6Gb/s.

The SSD drives I'm using (Intel 320 600GB) are limited to 270MB/sec, so
I don't think SATA II is the limiting factor.

 
 That said, I do think you're right that the LSI controller is probably
a better fit for connections requiring a SATA SSD.  The only exception
is having to give up the 1GB of NVRAM on the HP controller. :-(

I don't think this is a real issue when using a bunch of SSDs. I even
wonder whether the NVRAM isn't slowing down writes. My tests were done
with the ZIL enabled, so a power loss shouldn't damage the data.

HP recommends disabling the write accelerator on SSD-only volumes:
http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us
&taskId=120&prodSeriesId=3802118&prodTypeId=329290&objectID=c02963968



 [...]
 Is the bottleneck the LSI controller, or the SAS/SATA bus, or the PCI-E
bus itself?  That is, have you tested with the LSI 9240-4i (one per 8-drive
cage, which I *believe* can use the HP multi-lane cable), and with an LSI
9260-16i or LSI 9280-24i?   My instinct would be to say it's the PCI-E
bus, and you could probably get away with the 4-channel cards, i.e.
4 channels @ 6Gbit/s = 3 GBytes/s > 4x PCI-E 2.0 at 2GB/s.


The first bottleneck we reached (DL 180 / standard 25-drive bay) was the
HP controller (both the P410 and P812 reached the same performance:
800MB/sec writing, 1.3GB/sec reading).

With the LSI 9240-8I, we reached 1.2GB/s writing, 1.3GB/s reading.

The LSI 9240-4I was not able to connect to the 25-drive bay; I have not
tested the LSI 9260-16I or LSI 9280-24i.

The results were the same with 10 or 25 drives, so I suspected either the
PCI bus or the expander in the 25-drive bay (HP 530946-001).
Plugging the disks directly into the LSI card gained a few MB/s: the
expander was limiting a bit, but worse, it prevented using more than
one disk controller!

By replacing the 25-drive bay with three 8-drive bays (507803-B21), the
system was able to use three LSI 9240-8I cards, reaching the 4.4GB/sec
read rate.


It's true that you've run into the limitation of the expander on the
25-disk drive backplane. However, I'm curious about the 8-drive cage you
mention. I use that cage in the ML/DL370 G6 servers. I didn't think it
would fit into a DL180 G6. How is this arranged in your unit? What does
the resulting setup look like? Since the DL180 drive cages are part of the
bezel, do you just have three loose cages connected to the controllers?

Also, with three controllers, didn't you max the number of available PCIe
slots? 

Anyway, the new HP SL4540 server is the next product worth testing in this
realm… 60 x LFF disks.
http://h18004.www1.hp.com/products/quickspecs/14406_na/14406_na.html



-- 
Edmund White
ewwh...@mac.com



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-27 Thread Grégory Giannoni

 [...]
 The results were the same with 10 or 25 drives, so I suspected either the
 PCI bus or the expander in the 25-drive bay (HP 530946-001).
 Plugging the disks directly into the LSI card gained a few MB/s: the
 expander was limiting a bit, but worse, it prevented using more than
 one disk controller!
 [...]
 
 It's true that you've run into the limitation of the expander on the
 25-disk drive backplane. However, I'm curious about the 8-drive cage you
 mention. I use that cage in the ML/DL370 G6 servers. I didn't think it
 would fit into a DL180 G6. How is this arranged in your unit? What does
 the resulting setup look like? Since the DL180 drive cages are part of the
 bezel, do you just have three loose cages connected to the controllers?

It was not as easy as just unplugging the 25-drive bay and plugging in three 
8-drive bays. A few rivets to drill, a backplane power cable to rework (the 
pin and wire colors are not the same!), a mini-Molex-to-Molex cable for the 
drives' power, and some screws to fix the cages in place. The result is really 
clean. Here are a few pictures:

http://www.flickr.com/photos/webzinemaker/6964036523/in/photostream/




 Also, with three controllers, didn't you max the number of available PCIe
 slots? 

Four slots are available on the DL180: three were used for the LSI 
controllers, and one for a NIC.

 
 Anyway, the new HP SL4540 server is the next product worth testing in this
 realm… 60 x LFF disks.
 http://h18004.www1.hp.com/products/quickspecs/14406_na/14406_na.html

It might be a very good alternative to the X4540... But I wonder how many 
controllers are connected, and what their performance is.

-- 
Grégory Giannoni
http://www.wmaker.net


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Robert Milkowski

 I am in the market for something newer than that, though. Anyone know
 what HP's using as a replacement for the DL320s?

I have no idea... but they have the DL380 Gen8 with a backplane supporting
25x 2.5" disks (all in front), and it is Sandy Bridge based.

Oracle/Sun have the X3-2L - 24x 2.5" disks in front, another 2x 2.5" in the
rear, Sandy Bridge as well.

-- 
Robert Milkowski
http://milek.blogspot.com




Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Grégory Giannoni

Le 24 nov. 2012 à 03:51, Erik Trimble a écrit :
 This is what we decided to do at work, and this is the reason why.
 But we didn't buy the appliance-branded boxes; we just bought normal servers 
 running solaris.
 
 
 I gave up and am now buying HP-branded hardware for running Solaris on it. 
 Particularly if you get off-lease used hardware (for which, HP is still very 
 happy to let you buy a HW support contract), it's cheap, and HP has a lot of 
 Solaris drivers for their branded stuff. Their whole SmartArray line of 
 adapters has much better Solaris driver coverage than the generic stuff or 
 the equivalent IBM or Dell items.
 
 For instance, I just got a couple of DL380 G5 systems with dual Harpertown
 CPUs, fully loaded with 8 2.5" SAS drives and 32GB of RAM, for about $800
 total.  You can attach their MSA30/50/70-series (or DS2700-series, if you
 want new) as dumb JBODs via SAS, and the nice SmartArray controllers have 1GB
 of NVRAM, which is sufficient for many purposes, so you don't even have to
 cough up the dough for a nice ZIL SSD.
 
 HP even made a sweet little appliance thing that was designed for Windows,
 but happens to run Solaris really, really well: the DL320s (the "s" is part
 of the model designation).   14x 3.5" SAS/SATA hot-swap bays, a Xeon 3070
 dual-core CPU, SmartArray controller, 2x GbE NICs, LOM, and a free 1x PCI-E
 expansion slot. The only drawback is that it only takes up to 8GB of RAM.
 It makes a *fabulous* little backup system for logs and stuff, and it's under
 about $2000 even after you splurge for 1TB drives and an SSD for the thing.
 
 I am in the market for something newer than that, though. Anyone know what 
 HP's using as a replacement for the DL320s?


I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are 
now DL 180 G6. I got better performance using LSI 9240-8I rather than HP 
SmartArray (tried P410 & P812). I'm using only 600GB SSD drives.

In one of the servers I replaced the 25-disk bay with three 8-disk bays, 
allowing me to connect three LSI 9240-8I cards rather than only one. This NAS 
achieved 4.4GBytes/sec reading and 4.1GBytes/sec writing with 48 io/s, running 
Solaris 11. Using raidz2, performance dropped to 3.1 / 3.0 GB/sec.


For mass storage, I'm still using my old X4500 and X4540 boxes with 2TB drives 
and a few SSDs. Lots of problems with Seagate 2TB drives in the X4540, none in 
the X4500. I have no replacement yet (I'll soon try some Supermicro hardware).

--
Grégory Giannoni
http://www.wmaker.net




Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Erik Trimble

On 11/26/2012 12:54 PM, Grégory Giannoni wrote:

[snip]
I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are now DL 180 
G6. I got better performance using LSI 9240-8I rather than HP SmartArray (tried P410 
& P812). I'm using only 600GB SSD drives.
That LSI controller supports SATA III, i.e. 6Gb/s SATA.   The Px1x 
controllers do 6Gb/s SAS, but only 3Gb/s SATA, so that's your likely perf 
difference.  The SmartArray Px2x series should do both SATA and SAS at 
6Gb/s.


That said, I do think you're right that the LSI controller is probably a 
better fit for connections requiring a SATA SSD.  The only exception is 
having to give up the 1GB of NVRAM on the HP controller. :-(



In one of the servers I replaced the 25-disk bay with three 8-disk bays, 
allowing me to connect three LSI 9240-8I cards rather than only one. This NAS 
achieved 4.4GBytes/sec reading and 4.1GBytes/sec writing with 48 io/s, running 
Solaris 11. Using raidz2, performance dropped to 3.1 / 3.0 GB/sec.

Is the bottleneck the LSI controller, or the SAS/SATA bus, or the PCI-E 
bus itself?  That is, have you tested with the LSI 9240-4i (one per 8-drive 
cage, which I *believe* can use the HP multi-lane cable), and with an LSI 
9260-16i or LSI 9280-24i?   My instinct would be to say it's the PCI-E 
bus, and you could probably get away with the 4-channel cards, i.e. 
4 channels @ 6Gbit/s = 3 GBytes/s > 4x PCI-E 2.0 at 2GB/s.


Also, the HP H220 is simply the OEM version of the LSI 9240-8i.


-Erik


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Grégory Giannoni

Le 27 nov. 2012 à 01:17, Erik Trimble a écrit :

 On 11/26/2012 12:54 PM, Grégory Giannoni wrote:
 [snip]
  I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are now
  DL 180 G6. I got better performance using LSI 9240-8I rather than HP SmartArray
  (tried P410 & P812). I'm using only 600GB SSD drives.
 That LSI controller supports SATA III, i.e. 6Gb/s SATA.   The Px1x controllers
 do 6Gb/s SAS, but only 3Gb/s SATA, so that's your likely perf difference.  The
 SmartArray Px2x series should do both SATA and SAS at 6Gb/s.

The SSD drives I'm using (Intel 320 600GB) are limited to 270MB/sec, so I 
don't think SATA II is the limiting factor.
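A quick check of that claim (a sketch; the 8b/10b encoding assumption is mine, not from the thread):

```python
# SATA uses 8b/10b encoding, so ~10 line bits are spent per payload byte.
def sata_payload_mbs(line_gbit):
    """Approximate usable SATA payload in MB/s for a given line rate in Gbit/s."""
    return line_gbit * 1000 / 10

sata2 = sata_payload_mbs(3)  # SATA II (3 Gbit/s) -> ~300 MB/s of payload
drive = 270                  # Intel 320 600GB rated sequential read
print(f"SATA II payload ~{sata2:.0f} MB/s vs drive limit {drive} MB/s")
# 300 > 270: the 3 Gbit/s link indeed isn't what caps these SSDs.
```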

 
 That said, I do think you're right that the LSI controller is probably a 
 better fit for connections requiring a SATA SSD.  The only exception is 
 having to give up the 1GB of NVRAM on the HP controller. :-(

I don't think this is a real issue when using a bunch of SSDs. I even wonder 
whether the NVRAM isn't slowing down writes. My tests were done with the ZIL 
enabled, so a power loss shouldn't damage the data.


 [...]
 Is the bottleneck the LSI controller, or the SAS/SATA bus, or the PCI-E bus
 itself?  That is, have you tested with the LSI 9240-4i (one per 8-drive cage,
 which I *believe* can use the HP multi-lane cable), and with an LSI 9260-16i
 or LSI 9280-24i?   My instinct would be to say it's the PCI-E bus, and you
 could probably get away with the 4-channel cards, i.e. 4 channels @ 6Gbit/s
 = 3 GBytes/s > 4x PCI-E 2.0 at 2GB/s.


The first bottleneck we reached (DL 180 / standard 25-drive bay) was the HP 
controller (both the P410 and P812 reached the same performance: 800MB/sec 
writing, 1.3GB/sec reading).

With the LSI 9240-8I, we reached 1.2GB/s writing, 1.3GB/s reading.

The LSI 9240-4I was not able to connect to the 25-drive bay; I have not tested 
the LSI 9260-16I or LSI 9280-24i.

The results were the same with 10 or 25 drives, so I suspected either the PCI 
bus or the expander in the 25-drive bay (HP 530946-001).
Plugging the disks directly into the LSI card gained a few MB/s: the 
expander was limiting a bit, but worse, it prevented using more than one 
disk controller!

By replacing the 25-drive bay with three 8-drive bays (507803-B21), the system 
was able to use three LSI 9240-8I cards, reaching the 4.4GB/sec read rate.
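As a sanity check on those figures (the numbers come from this thread; the interpretation as a drive-side ceiling is only my sketch):

```python
# Compare the measured three-controller result against the raw SSD aggregate.
DRIVE_MBS = 270  # Intel 320 600GB sequential read limit per drive, quoted earlier

def aggregate_ceiling_mbs(n_drives, per_drive_mbs=DRIVE_MBS):
    """Upper bound if every SSD streamed at its rated sequential speed."""
    return n_drives * per_drive_mbs

ceiling = aggregate_ceiling_mbs(3 * 8)  # three 8-drive bays
measured = 4400                         # MB/s read, as reported above
print(f"SSD aggregate ceiling: {ceiling} MB/s")
print(f"measured: {measured} MB/s ({100 * measured / ceiling:.0f}% of ceiling)")
# ~68% of the raw drive aggregate: the SSDs themselves were not the limit,
# which is consistent with the controllers/PCI-E being the next bottleneck.
```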



-- 
Grégory Giannoni
http://www.wmaker.net


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Edmund White
Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
ProLiants are a better deal for RAM capacity and CPU core count…

Either way, I also use HP systems as the basis for my ZFS/Nexenta storage
systems. Typically DL380's, since I have expansion room for either 16
drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.

The right replacement for the old DL320s storage server is the DL180 G6.
This model was available in a number of configurations, but the best
solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay 2.5"
model. Both models have a SAS expander on the backplane, but with a nice
controller (LSI 9211-4i) they make good ZFS storage servers.

-- 
Edmund White




On 11/23/12 8:51 PM, Erik Trimble tr...@netdemons.com wrote:

On 11/23/2012 5:50 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov
   
 I wonder if it would make weird sense to get the boxes, forfeit the
 cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
 get the most flexibility and bang for a buck from the owned hardware...
 This is what we decided to do at work, and this is the reason why.
 But we didn't buy the appliance-branded boxes; we just bought normal
servers running solaris.



I gave up and am now buying HP-branded hardware for running Solaris on
it. Particularly if you get off-lease used hardware (for which, HP is
still very happy to let you buy a HW support contract), it's cheap, and
HP has a lot of Solaris drivers for their branded stuff. Their whole
SmartArray line of adapters has much better Solaris driver coverage than
the generic stuff or the equivalent IBM or Dell items.

For instance, I just got a couple of DL380 G5 systems with dual
Harpertown CPUs, fully loaded with 8 2.5" SAS drives and 32GB of RAM,
for about $800 total.  You can attach their MSA30/50/70-series (or
DS2700-series, if you want new) as dumb JBODs via SAS, and the nice
SmartArray controllers have 1GB of NVRAM, which is sufficient for many
purposes, so you don't even have to cough up the dough for a nice ZIL SSD.

HP even made a sweet little appliance thing that was designed for
Windows, but happens to run Solaris really, really well: the DL320s
(the "s" is part of the model designation).   14x 3.5" SAS/SATA hot-swap
bays, a Xeon 3070 dual-core CPU, SmartArray controller, 2x GbE NICs, LOM,
and a free 1x PCI-E expansion slot. The only drawback is that it only
takes up to 8GB of RAM.   It makes a *fabulous* little backup system for
logs and stuff, and it's under about $2000 even after you splurge for
1TB drives and an SSD for the thing.

I am in the market for something newer than that, though. Anyone know
what HP's using as a replacement for the DL320s?

-Erik




Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Erik Trimble

On 11/24/2012 5:17 AM, Edmund White wrote:

Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
ProLiants are a better deal for RAM capacity and CPU core count…

Either way, I also use HP systems as the basis for my ZFS/Nexenta storage
systems. Typically DL380's, since I have expansion room for either 16
drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.

The right replacement for the old DL320s storage server is the DL180 G6.
This model was available in a number of configurations, but the best
solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay 2.5"
model. Both models have a SAS expander on the backplane, but with a nice
controller (LSI 9211-4i) they make good ZFS storage servers.



Really?  I mean, sure, the G6 is beefier, but I can still get 8 cores of 
decently-fast CPU and 64GB of RAM in a G5, which, unless I'm doing Dedup 
and need a *stupid* amount of RAM, is more than sufficient for anything 
I've ever seen as a ZFS appliance.   I'd agree that the 64GB of RAM 
limit can be annoying if you really want to run a Super App Server + ZFS 
server on them, but they're so much more powerful than the X4500/X4540 
that I'd think they make an excellent drop-in replacement when paired 
with an MSA70, particularly on cost. The G6 is over double the cost of 
the G5.


One thing that I do know about the G6 is that it has Nehalem CPUs 
(X5500-series), which support VT-d, the virtualization I/O acceleration 
technology from Intel, while the G5's X5400-series Harpertowns don't.   
If you're running zones on the system, it won't matter, but VirtualBox 
will care.


---

Thanks for the DL180 link.  Once again, I think I'd go for the G5 rather 
than the G6 - it's roughly half the cost (or less, as the 2.5"-enabled 
G6s seem to be expensive), and these boxes make nice log servers, not 
app servers. The DL180 G5 seems to be pretty much a DL380 G5 with a 
different hard drive layout (12x 2.5" rather than 8x 2.5").


---

One word here for everyone getting HP equipment: you want the Px1x or 
Px2x (e.g. P812) series of SmartArray controllers if you plan on running 
SATA drives attached to them.  The older Px0x series only supports 
SATA I (1.5Gb/s) and SAS at 3Gb/s, which is a serious handicap if you want 
to do SSDs on that channel. The newer series do SATA II (3Gb/s) and SAS 
at 6Gb/s.


http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html


-Erik



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Edmund White
On 11/24/12 5:51 PM, Erik Trimble tr...@netdemons.com wrote:


On 11/24/2012 5:17 AM, Edmund White wrote:
 Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
 ProLiants are a better deal for RAM capacity and CPU core count…

 Either way, I also use HP systems as the basis for my ZFS/Nexenta
storage
 systems. Typically DL380's, since I have expansion room for either 16
 drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.

 The right replacement for the old DL320s storage server is the DL180 G6.
 This model was available in a number of configurations, but the best
 solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay
 2.5" model. Both models have a SAS expander on the backplane, but with a
 nice controller (LSI 9211-4i) they make good ZFS storage servers.


Really?  I mean, sure, the G6 is beefier, but I can still get 8 cores of
decently-fast CPU and 64GB of RAM in a G5, which, unless I'm doing Dedup
and need a *stupid* amount of RAM, is more than sufficient for anything
I've ever seen as a ZFS appliance.   I'd agree that the 64GB of RAM
limit can be annoying if you really want to run a Super App Server + ZFS
server on them, but they're so much more powerful than the X4500/X4540
that I'd think they make an excellent drop-in replacement when paired
with an MSA70, particularly on cost. The G6 is over double the cost of
the G5.

My X4540 wasn't lacking in power... Just the annoyance of SATA drive
timeouts. Regardless, recommending a G5 ProLiant nowadays is a bad deal.
I've nearly replaced all of the G5 units I installed between 2006 and
2009. You're limited to 3G SAS and the constrained (super $$$) RAM supply
is an issue.

One thing that I do know about the G6 is that it has Nehalem CPUs
(X5500-series), which support VT-d, the virtualization I/O acceleration
technology from Intel, while the G5's X5400-series Harpertowns don't.
If you're running zones on the system, it won't matter, but VirtualBox
will care.

VT-D can be handy. As can HyperThreading, *moar* RAM, DirectPath, etc.


Thanks for the DL180 link.  Once again, I think I'd go for the G5 rather
than the G6 - it's roughly half the cost (or less, as the 2.5"-enabled
G6s seem to be expensive), and these boxes make nice log servers, not
app servers. The DL180 G5 seems to be pretty much a DL380 G5 with a
different hard drive layout (12x 2.5" rather than 8x 2.5").

While there is a DL180 G5, the DL180 G6 is the right recommendation
because it fixes a lot of the ugly issues present in the G5. The 180 G5
platform is nothing like the DL380 G5. Different system boards, backplane,
management.

Not sure where you're looking, but eBay and a couple of the HP liquidators
are good sources for these systems. E.g. http://r.ebay.com/p0xrLu

-- 
Edmund White
ewwh...@mac.com



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov
  
 I wonder if it would make weird sense to get the boxes, forfeit the
 cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
 get the most flexibility and bang for a buck from the owned hardware...

This is what we decided to do at work, and this is the reason why.
But we didn't buy the appliance-branded boxes; we just bought normal servers 
running solaris.



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-23 Thread Erik Trimble
On 11/23/2012 5:50 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
  
I wonder if it would make weird sense to get the boxes, forfeit the

cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...

This is what we decided to do at work, and this is the reason why.
But we didn't buy the appliance-branded boxes; we just bought normal servers 
running solaris.




I gave up and am now buying HP-branded hardware for running Solaris on 
it. Particularly if you get off-lease used hardware (for which, HP is 
still very happy to let you buy a HW support contract), it's cheap, and 
HP has a lot of Solaris drivers for their branded stuff. Their whole 
SmartArray line of adapters has much better Solaris driver coverage than 
the generic stuff or the equivalent IBM or Dell items.


For instance, I just got a couple of DL380 G5 systems with dual 
Harpertown CPUs, fully loaded with 8 2.5" SAS drives and 32GB of RAM, 
for about $800 total.  You can attach their MSA30/50/70-series (or 
DS2700-series, if you want new) as dumb JBODs via SAS, and the nice 
SmartArray controllers have 1GB of NVRAM, which is sufficient for many 
purposes, so you don't even have to cough up the dough for a nice ZIL SSD.


HP even made a sweet little appliance thing that was designed for 
Windows, but happens to run Solaris really, really well: the DL320s  
(the "s" is part of the model designation).   14x 3.5" SAS/SATA hot-swap 
bays, a Xeon 3070 dual-core CPU, SmartArray controller, 2x GbE NICs, LOM, 
and a free 1x PCI-E expansion slot. The only drawback is that it only 
takes up to 8GB of RAM.   It makes a *fabulous* little backup system for 
logs and stuff, and it's under about $2000 even after you splurge for 
1TB drives and an SSD for the thing.


I am in the market for something newer than that, though. Anyone know 
what HP's using as a replacement for the DL320s?


-Erik




[zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Jim Klimov

A customer is looking to replace or augment their Sun Thumper
with a ZFS appliance like 7320. However, the Thumper was used
not only as a protocol storage server (home dirs, files, backups
over NFS/CIFS/Rsync), but also as a general-purpose server with
unpredictably-big-data programs running directly on it (such as
corporate databases, Alfresco for intellectual document storage,
etc.) in order to avoid the networking transfer of such data
between pure-storage and compute nodes - this networking was
seen as both a bottleneck and a possible point of failure.

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?

Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)

Thanks,
//Jim Klimov



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Darren J Moffat



On 11/22/12 16:24, Jim Klimov wrote:

A customer is looking to replace or augment their Sun Thumper
with a ZFS appliance like 7320. However, the Thumper was used
not only as a protocol storage server (home dirs, files, backups
over NFS/CIFS/Rsync), but also as a general-purpose server with
unpredictably-big-data programs running directly on it (such as
corporate databases, Alfresco for intellectual document storage,
etc.) in order to avoid the networking transfer of such data
between pure-storage and compute nodes - this networking was
seen as both a bottleneck and a possible point of failure.

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?


No, it is a true appliance; it might look like it has Solaris underneath, 
but it is just based on Solaris.


You can script administrative tasks, but not using bash/ksh-style 
scripting; you use the ZFSSA's own scripting language.



Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)


No.

--
Darren J Moffat


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Jim Klimov

On 2012-11-22 17:31, Darren J Moffat wrote:

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?


No, it is a true appliance; it might look like it has Solaris underneath,
but it is just based on Solaris.

You can script administrative tasks, but not using bash/ksh-style
scripting; you use the ZFSSA's own scripting language.


So, the only supported (or even possible) way is indeed to use it
as a NAS for file or block IO from another head running the database
or application servers?..

In the Datasheet I read that Cloning and Remote replication are
separately licensed features; does this mean that the capability
for zfs send|zfs recv backups from remote Solaris systems should
be purchased separately? :(

I wonder if it would make weird sense to get the boxes, forfeit the
cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...
Or, rather, shop for the equivalent non-appliance servers...

//Jim


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Tim Cook
On Thu, Nov 22, 2012 at 10:50 AM, Jim Klimov jimkli...@cos.ru wrote:

 On 2012-11-22 17:31, Darren J Moffat wrote:

 Is it possible to use the ZFS Storage appliances in a similar
 way, and fire up a Solaris zone (or a few) directly on the box
 for general-purpose software; or to shell-script administrative
 tasks such as the backup archive management in the global zone
 (if that concept still applies) as is done on their current
 Solaris-based box?


  No, it is a true appliance; it might look like it has Solaris underneath,
  but it is just based on Solaris.

  You can script administrative tasks, but not using bash/ksh-style
  scripting; you use the ZFSSA's own scripting language.


 So, the only supported (or even possible) way is indeed to use it
 as a NAS for file or block IO from another head running the database
 or application servers?..

 In the Datasheet I read that Cloning and Remote replication are
 separately licensed features; does this mean that the capability
 for zfs send|zfs recv backups from remote Solaris systems should
 be purchased separately? :(

 I wonder if it would make weird sense to get the boxes, forfeit the
 cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
 get the most flexibility and bang for a buck from the owned hardware...
 Or, rather, shop for the equivalent non-appliance servers...

 //Jim




You'd be paying a massive premium to buy them and then install some other
OS on them.  You'd be far better off buying equivalent servers.

--Tim


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Ian Collins

On 11/23/12 05:50, Jim Klimov wrote:

On 2012-11-22 17:31, Darren J Moffat wrote:

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?

No, it is a true appliance; it might look like it has Solaris underneath,
but it is just based on Solaris.

You can script administrative tasks, but not using bash/ksh-style
scripting; you use the ZFSSA's own scripting language.

So, the only supported (or even possible) way is indeed to use it
as a NAS for file or block IO from another head running the database
or application servers?..


Yes.


I wonder if it would make weird sense to get the boxes, forfeit the
cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...
Or, rather, shop for the equivalent non-appliance servers...


As Tim Cook says, that would be a very expensive option.

I'm sure Oracle dropped the Thumper line because they competed head on 
with the appliances and gave way more flexibility.


If you are experienced with Solaris and ZFS, you will find using the 
appliances very frustrating! You can't use the OS as you would like, and 
you have to go through support when you would otherwise fix things 
yourself.  In my part of the world, that isn't much fun.


Buy an equivalent JBOD and head unit and pretend you have a new Thumper.

--
Ian.



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Robert Milkowski

So, the only supported (or even possible) way is indeed to use it
as a NAS for file or block IO from another head running the database
or application servers?..

Technically speaking, you can get access to a standard shell and do whatever
you want - this would essentially void the support contract, though.

-- 
Robert Milkowski
http://milek.blogspot.com

