Re: Extending your zfs pool with multiple devices

2010-09-04 Thread perryh
Michal mic...@sharescope.co.uk wrote:

 What is really odd is I see your replies but not my original post,
 how very strange??

One of your subscription options is whether you get your own posts
back.


Re: Extending your zfs pool with multiple devices

2010-09-03 Thread Don Lewis
On  2 Sep, Jeremy Chadwick wrote:
 On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:
 [regarding getting more disks in a machine]

 An inexpensive option is a SATA port replicator.  Think SATA switch or
 hub; 1:4 is common and cheap.
 
 I have a motherboard with an Intel ICH10 chipset, which commonly
 provides 6 ports.  This chipset is happy to drive port replicators,
 meaning you can put 24 drives on this motherboard.

 ...

 With 1.5T disks, I find that the 4-to-1 multipliers have only a small
 effect on speed.  The 4 drives I have on the multiplier sit saturated
 at 100% a little more often than the drives directly connected.
 Essentially you have 3 gigabits for 4 drives instead of 3 gigabits
 per drive.
 
 1:4 SATA replicators impose a bottleneck on the overall bandwidth
 available between the replicator and the disks attached, as you stated.
 Diagram:
 
 ICH10
   |||___ (SATA300) Port 0, Disk 0
   ||____ (SATA300) Port 1, Disk 1
   |_____ (SATA300) Port 2, eSATA Replicator
                               ||||___ (SATA300) Port 0, Disk 2
                               |||____ (SATA300) Port 1, Disk 3
                               ||_____ (SATA300) Port 2, Disk 4
                               |______ (SATA300) Port 3, Disk 5
 
 If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
 you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
 300MB/sec link.  That's assuming the disks attached are magnetic and
 not SSDs, and not taking protocol overhead into consideration.
 
 Given the evolutionary rate of hard disks and SSDs, replicators are (in
 my opinion) not a viable solution mid or long-term.

 A better choice is a SATA multilane HBA.  These are usually PCIe-based,
 with a single connector on the back of the HBA that splits out to
 multiple disks (usually 4, but sometimes more).
 
 An ideal choice is one of the Areca ARC-1300 series SAS-based PCIe x4
 multilane adapters, which provide SATA300 to each individual disk and
 use PCIe x4 (which can handle about 1GByte/sec in each direction, so
 2GByte/sec total)...
 
 http://www.areca.com.tw/products/sasnoneraid.htm
 
 ...but there doesn't appear to be FreeBSD driver support for this
 series of controller (arcmsr(4) doesn't mention the ARC-1300 series).
 I also don't know what Areca means on their site when they say
 "BSD/FreeBSD (will be available with 6Gb/s Host Adapter)", given that
 none of the ARC-1300 series cards are SATA600.
 
 If people are more focused on the total number of devices (disks)
 available, then they should probably look at dropping a pretty penny
 on a low-end filer.  Otherwise, consider replacing the actual hard
 disks themselves with drives of a higher capacity.

[raises hand]

Here's what I've got on my mythtv box (running Fedora ... sorry):

Filesystem   Size
/dev/sda4    439G
/dev/sdb1    1.9T
/dev/sdc1    1.9T
/dev/sdd1    1.9T
/dev/sde1    1.9T
/dev/sdf1    1.4T
/dev/sdg1    1.4T
/dev/sdh1    932G
/dev/sdi1    932G
/dev/sdj1    1.4T
/dev/sdk1    1.9T
/dev/sdl1    932G
/dev/sdm1    1.9T
/dev/sdn1    932G
/dev/sdo1    699G
/dev/sdp1    1.4T

I'm currently upgrading the older drives as I run out of space, and I'm
really hoping that >2TB drives arrive soon.  The motherboard is
full-size ATX with six onboard SATA ports, all of which are in use.  The
only x16 PCIe slot is occupied by a graphics card, and all but one of
the x1 PCIe slots are in use.  One of the x1 PCIe slots has a Silicon
Image two-port eSATA controller, which connects to two external
enclosures with 1:4 and 1:5 port replicators.  At the moment there are
also three external USB drives.  This weekend's project is to install a
new 2TB drive and do some consolidation.

Fortunately the bandwidth requirements aren't too high ...



Re: Extending your zfs pool with multiple devices

2010-09-03 Thread Michal
What is really odd is I see your replies but not my original post, how 
very strange??


Thank you for all of your assistance. I would like to move towards
building a cheap SAN-like storage area for a DB. I don't know how well
it would work, but I'd like to try it anyway, since things like HP MSAs
are hugely expensive.


I like these suggestions of filling a second box and connecting it to
the 1st box using these expanders and port replicators. I don't really
need it as fast as I can get it, since this is not a high-use DB
backend or a many-user file server. A few users here and there, but
nothing that worries me about the bottleneck caused by these
replicators. This way is a lot better than my idea of trying to export
iSCSI disks or something like that. This way I can create a second box
and then run a cable into an expander or replicator on the 1st box; a
3rd box could then be added to the expander/replicator at a later date.
There is a limit on how far this could go realistically, but I like
this approach. I could go further by adding SSDs for the L2ARC and ZIL
if I wanted to. I found zfsbuild.com to be quite a nice site/blog.
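
For reference, adding L2ARC and ZIL devices to an existing pool later
is a one-liner each. A minimal sketch; the pool and device names here
are made up, so substitute your own:

    # SSD as L2ARC (read cache)
    zpool add tank cache ada6
    # mirrored SSDs as a separate ZIL (log) vdev
    zpool add tank log mirror ada7 ada8
    # confirm the new layout
    zpool status tank

An unmirrored log device that dies can cost you the last few seconds of
synchronous writes (and on older pool versions it can be painful to
remove), which is why the log is usually mirrored while the cache is
not.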


Thanks for all your help


Re: Extending your zfs pool with multiple devices

2010-09-03 Thread jhell
On 09/03/2010 04:25, Michal wrote:
 What is really odd is I see your replies but not my original post, how
 very strange??
 
 Thank you for all of your assistance. I would like to move towards
 building a cheap SAN-like storage area for a DB. I don't know how well
 it would work, but I'd like to try it anyway, since things like HP
 MSAs are hugely expensive.

 I like these suggestions of filling a second box and connecting it to
 the 1st box using these expanders and port replicators. I don't really
 need it as fast as I can get it, since this is not a high-use DB
 backend or a many-user file server. A few users here and there, but
 nothing that worries me about the bottleneck caused by these
 replicators. This way is a lot better than my idea of trying to export
 iSCSI disks or something like that. This way I can create a second box
 and then run a cable into an expander or replicator on the 1st box; a
 3rd box could then be added to the expander/replicator at a later
 date. There is a limit on how far this could go realistically, but I
 like this approach. I could go further by adding SSDs for the L2ARC
 and ZIL if I wanted to. I found zfsbuild.com to be quite a nice
 site/blog.
 

Thanks for the link to zfsbuild.com; I'm going to check that out.

Anyway... not that this is a great solution, but if it's Windows
clients connecting to this that you're worried about and you would like
to split storage off to separate machines, you can use DFS with Samba.
Imagine building two more machines and having them be completely
transparent to the clients that connect to the main server.

Using a Samba DFS root would allow you to distribute the filesystems
across different shares on different machines without the client ever
having to know that the directory actually lives on another machine,
and it allows you to easily migrate new servers into the network
without the client ever seeing the change.
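
A rough sketch of what that looks like in smb.conf; the share and host
names here are invented, and the details may vary with your Samba
version:

    [global]
       host msdfs = yes

    [storage]
       path = /dfsroot
       msdfs root = yes

    # inside /dfsroot, each entry is just a symlink pointing at the
    # share that really holds the data:
    #   ln -s 'msdfs:nas2\media'   /dfsroot/media
    #   ln -s 'msdfs:nas3\archive' /dfsroot/archive

Clients browse \\nas1\storage and get redirected without ever knowing
that /dfsroot/media actually lives on nas2.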

Add iSCSI, ZFS & HAST into the mix and you have yourself one hell of a
network.
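
For the HAST piece, a minimal /etc/hast.conf sketch; hostnames,
addresses and device names are placeholders, and this assumes FreeBSD
8.1 or newer, where hastd(8) first appeared:

    resource shared0 {
            on nas1 {
                    local /dev/ada4
                    remote 10.0.0.2
            }
            on nas2 {
                    local /dev/ada4
                    remote 10.0.0.1
            }
    }

Run "hastctl create shared0" on both hosts, start hastd, and the node
you make primary with "hastctl role primary shared0" gets a
/dev/hast/shared0 device you can build a pool (or vdev) on; the
secondary just mirrors it over the network.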

Just an idea, Regards,

-- 

 jhell,v


Extending your zfs pool with multiple devices

2010-09-02 Thread Michal
I have a small problem that I am trying to work out a solution for.
Imagine you create a NAS box: fill a small server with 5 hard drives,
ZFS them with raidz or whatever, create your pools and then share this
to the network using Samba. A simple NAS box for your network to put
their files on; they just connect to \\nas1.


This box is now full. My problem is that I could create a 2nd NAS box
and have people use \\nas1 and \\nas2, but that's not very user
friendly. Can I somehow build a 2nd box which is identical, but extend
my pools into nas2? I was thinking of something like exporting the nas2
drives via iSCSI and then having nas1 add the drives to the pool... or
something similar. I find that with any NAS, whether it's home built or
shop bought, you will eventually run out of space, and sure, you can
replace the HDDs with bigger ones, but you will still run out of space
eventually, and having multiple locations, in my mind, is not very
elegant. I cannot simply add more HDDs to the box either, as it's at
full capacity.


Thanks


Re: Extending your zfs pool with multiple devices

2010-09-02 Thread Joshua Boyd
You need an HP SAS expander card in the new box, and an HBA in your primary
box with external ports to hook it into.

Then the drives in the other box will show up as local drives on your
primary box.

You don't even need an operating system on the second box; it just
needs enough hardware in it to supply power to the SAS expander.
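
Once the expander-attached drives show up (they appear as ordinary daN
devices), growing the pool is just a matter of adding another vdev. A
rough sketch, assuming a pool called tank and six new disks; match the
redundancy of your existing vdevs:

    camcontrol devlist      # the new disks look like any local drives
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11
    zpool status tank

ZFS stripes new writes across the old and new vdevs; existing data
stays where it was written.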

On Thu, Sep 2, 2010 at 9:22 AM, Michal mic...@sharescope.co.uk wrote:

 I have a small problem that I am trying to work out a solution for.
 Imagine you create a NAS box: fill a small server with 5 hard drives,
 ZFS them with raidz or whatever, create your pools and then share this
 to the network using Samba. A simple NAS box for your network to put
 their files on; they just connect to \\nas1.

 This box is now full. My problem is that I could create a 2nd NAS box
 and have people use \\nas1 and \\nas2, but that's not very user
 friendly. Can I somehow build a 2nd box which is identical, but extend
 my pools into nas2? I was thinking of something like exporting the
 nas2 drives via iSCSI and then having nas1 add the drives to the
 pool... or something similar. I find that with any NAS, whether it's
 home built or shop bought, you will eventually run out of space, and
 sure, you can replace the HDDs with bigger ones, but you will still
 run out of space eventually, and having multiple locations, in my
 mind, is not very elegant. I cannot simply add more HDDs to the box
 either, as it's at full capacity.

 Thanks




-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net


Re: Extending your zfs pool with multiple devices

2010-09-02 Thread Zaphod Beeblebrox
[regarding getting more disks in a machine]

On Thu, Sep 2, 2010 at 12:47 PM, Joshua Boyd boy...@jbip.net wrote:
 You need an HP SAS expander card in the new box, and an HBA in your primary
 box with external ports to hook it into.

 Then the drives in the other box will show up as local drives on your
 primary box.

 You don't even need an operating system on the second box; it just
 needs enough hardware in it to supply power to the SAS expander.

An inexpensive option is a SATA port replicator.  Think SATA switch or
hub; 1:4 is common and cheap.

I have a motherboard with an Intel ICH10 chipset, which commonly
provides 6 ports.  This chipset is happy to drive port replicators,
meaning you can put 24 drives on this motherboard.

Be warned that many SATA chipsets (even plugin cards) will not work
with port replicators...

But the ICH10 does, and it makes a wonderfully cheap ZFS server.

With 1.5T disks, I find that the 4-to-1 multipliers have only a small
effect on speed.  The 4 drives I have on the multiplier sit saturated
at 100% a little more often than the drives directly connected.
Essentially you have 3 gigabits for 4 drives instead of 3 gigabits
per drive.

The ICH10 motherboard ports can be connected to the back of your
system by cables and faceplates that deliver eSATA connectors.  I have
my drives in a case that delivers eSATA (or USB).  One USB connection
for 4 drives was a dog, but the eSATA connection works well.


Re: Extending your zfs pool with multiple devices

2010-09-02 Thread Jeremy Chadwick
On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:
 [regarding getting more disks in a machine]
 
 On Thu, Sep 2, 2010 at 12:47 PM, Joshua Boyd boy...@jbip.net wrote:
  You need an HP SAS expander card in the new box, and an HBA in your primary
  box with external ports to hook it into.
 
  Then the drives in the other box will show up as local drives on your
  primary box.
 
  You don't even need an operating system on the second box; it just
  needs enough hardware in it to supply power to the SAS expander.
 
 An inexpensive option is a SATA port replicator.  Think SATA switch or
 hub; 1:4 is common and cheap.
 
 I have a motherboard with an Intel ICH10 chipset, which commonly
 provides 6 ports.  This chipset is happy to drive port replicators,
 meaning you can put 24 drives on this motherboard.

 ...

 With 1.5T disks, I find that the 4-to-1 multipliers have only a small
 effect on speed.  The 4 drives I have on the multiplier sit saturated
 at 100% a little more often than the drives directly connected.
 Essentially you have 3 gigabits for 4 drives instead of 3 gigabits
 per drive.

1:4 SATA replicators impose a bottleneck on the overall bandwidth
available between the replicator and the disks attached, as you stated.
Diagram:

ICH10
  |||___ (SATA300) Port 0, Disk 0
  ||____ (SATA300) Port 1, Disk 1
  |_____ (SATA300) Port 2, eSATA Replicator
                              ||||___ (SATA300) Port 0, Disk 2
                              |||____ (SATA300) Port 1, Disk 3
                              ||_____ (SATA300) Port 2, Disk 4
                              |______ (SATA300) Port 3, Disk 5

If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
300MB/sec link.  That's assuming the disks attached are magnetic and
not SSDs, and not taking protocol overhead into consideration.

Given the evolutionary rate of hard disks and SSDs, replicators are (in
my opinion) not a viable solution mid or long-term.

Even Silicon Image's products at this point are starting to force a 1:2
ratio on the replicators, probably to address the bottleneck issue:

http://www.siliconimage.com/products/product.aspx?pid=154
http://www.siliconimage.com/products/product.aspx?pid=155

A better choice is a SATA multilane HBA.  These are usually PCIe-based,
with a single connector on the back of the HBA that splits out to
multiple disks (usually 4, but sometimes more).

An ideal choice is one of the Areca ARC-1300 series SAS-based PCIe x4
multilane adapters, which provide SATA300 to each individual disk and
use PCIe x4 (which can handle about 1GByte/sec in each direction, so
2GByte/sec total)...

http://www.areca.com.tw/products/sasnoneraid.htm

...but there doesn't appear to be FreeBSD driver support for this
series of controller (arcmsr(4) doesn't mention the ARC-1300 series).
I also don't know what Areca means on their site when they say
"BSD/FreeBSD (will be available with 6Gb/s Host Adapter)", given that
none of the ARC-1300 series cards are SATA600.

If people are more focused on the total number of devices (disks)
available, then they should probably look at dropping a pretty penny
on a low-end filer.  Otherwise, consider replacing the actual hard
disks themselves with drives of a higher capacity.
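
If you go the higher-capacity route, the usual pattern is to swap the
drives one at a time and let the pool resilver between swaps. A sketch
with made-up device names (on older ZFS versions you may need an
export/import, or the autoexpand property where it exists, before the
extra space becomes visible):

    zpool replace tank da2 da6   # da6 is the new, larger disk
    zpool status tank            # wait for the resilver to complete
    # ...repeat for each disk in the vdev, then the pool can grow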

/soapbox

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Extending your zfs pool with multiple devices

2010-09-02 Thread Zaphod Beeblebrox
On Fri, Sep 3, 2010 at 12:08 AM, Jeremy Chadwick
free...@jdc.parodius.com wrote:
 On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:

 With 1.5T disks, I find that the 4-to-1 multipliers have only a small
 effect on speed.  The 4 drives I have on the multiplier sit saturated
 at 100% a little more often than the drives directly connected.
 Essentially you have 3 gigabits for 4 drives instead of 3 gigabits
 per drive.

 1:4 SATA replicators impose a bottleneck on the overall bandwidth
 available between the replicator and the disks attached, as you stated.
 Diagram:

 ICH10
   |||___ (SATA300) Port 0, Disk 0
   ||____ (SATA300) Port 1, Disk 1
   |_____ (SATA300) Port 2, eSATA Replicator
                               ||||___ (SATA300) Port 0, Disk 2
                               |||____ (SATA300) Port 1, Disk 3
                               ||_____ (SATA300) Port 2, Disk 4
                               |______ (SATA300) Port 3, Disk 5

 If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
 you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
 300MB/sec link.  That's assuming the disks attached are magnetic and
 not SSDs, and not taking protocol overhead into consideration.

 A better choice is a SATA multilane HBA.  These are usually
 PCIe-based, with a single connector on the back of the HBA that splits
 out to multiple disks (usually 4, but sometimes more).

That's just connector-foo.  The cards are still very expensive.

Many ZFS loads don't saturate disks ... or don't saturate them
consistently.  I just built several systems with one port per disk, and
those cards tended towards $100/port.  1:4 replicators are less than
$10/port, and the six-port motherboards don't seem to add any cost
(4 or 6 SATA ports seem standard now).

My point is: if you're building a database server and speed is all you
care about, then one port per disk makes sense.  If you are building a
pile of disks and you're on a budget, port replicators are a good
solution.


Re: Extending your zfs pool with multiple devices

2010-09-02 Thread Edho P Arief
geom_gate (ggated(8)) would also do the job, though it's not going to
be fast.
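
For the record, a rough sketch of the ggate route; addresses and device
names are placeholders:

    # on the box exporting a disk (nas2):
    echo "192.168.1.10 RW /dev/ada2" > /etc/gg.exports
    ggated

    # on the box that owns the pool (nas1, at 192.168.1.10):
    ggatec create -o rw 192.168.1.20 /dev/ada2   # attaches as /dev/ggate0
    zpool add tank /dev/ggate0

As noted, it won't be fast, and a dropped network link looks to ZFS
like a dead disk, so it only really makes sense inside a mirror or
raidz vdev.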

(sorry for bad reply, mobile gmail sucks)

-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org