Re: [ceph-users] Ceph and RAID

2013-10-03 Thread James Harper
> If following the "raid mirror" approach, would you then skip redundancy
> at the ceph layer to keep your total overhead the same?  It seems that
> would be risky in the event you lose your storage server with the
> raid-1'd drives.  No Ceph-level redundancy would then be fatal.  But if
> you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
> each 1 real TB?
> 

Depends on your replication settings. Maybe if you originally wanted 3
replicas, you might decide that, because you are now using RAID1, 2 replicas
are sufficient, so you have gone from 3x to 4x in terms of raw storage vs
usable storage. Disks fail more often than entire nodes, so depending on your
requirements, a 33% increase in raw storage may be a reasonable tradeoff.
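
To put rough numbers on that (a quick Python sketch; the RAID and replica
counts are just the ones discussed above, not a recommendation):

# Raw TB consumed per usable TB under "RAID copies x Ceph replicas".
def raw_per_usable(ceph_replicas, raid_copies=1):
    return ceph_replicas * raid_copies

print(raw_per_usable(3))                 # 3x: plain disks, 3 Ceph replicas
print(raw_per_usable(2, raid_copies=2))  # 4x: RAID1 pairs, 2 Ceph replicas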

James
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Dimitri Maziuk
On 10/03/2013 12:40 PM, Andy Paluch wrote:

> Don't you have to take down a ceph node to replace a defective drive? If I
> have a ceph node with 12 disks and one goes bad, would I not have to take
> the entire node down to replace it and then reformat?
> 
> If I have a hotswap chassis but am using just an hba to connect my drives,
> will the os (say latest Ubuntu) support hot-swapping the drive, or do I have
> to shut it down to replace the drive and then bring it up and format, etc.?

Linux supports hotswap. You'll have to restart an OSD, but not reboot
the node.

The issue with cluster rebalancing is bandwidth: basically, the SATA/SAS
backplane on one node vs (potentially) the slowest network link in your
cluster, which also carries data traffic for everybody. There are too many
variables involved; you have to figure out the balance between ceph
replication and raid replication for your cluster & budget.
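
As a back-of-envelope illustration of that bandwidth point (every number
below is an assumption picked for illustration, not a figure from this
thread):

# How long re-replicating one lost 3 TB disk might take when recovery
# traffic has to share the slowest link with client I/O.
failed_disk_tb = 3.0        # data that has to be re-replicated
slowest_link_gbps = 1.0     # weakest link in the cluster network
recovery_share = 0.5        # fraction of that link recovery actually gets
hours = failed_disk_tb * 8000 / (slowest_link_gbps * recovery_share) / 3600
print(f"~{hours:.0f} hours of rebalancing")   # roughly 13 hours here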

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Mike Dawson
Currently Ceph uses replication. Each pool is set with a replication 
factor. A replication factor of 1 obviously offers no redundancy. 
Replication factors of 2 or 3 are common. So, Ceph currently halves or 
thirds your usable storage, accordingly. Also, note you can co-mingle 
pools of various replication factors, so the actual math can get more 
complicated.
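
For instance, with pools of different replication factors sharing the same
raw space (the pool names and fractions below are made up for illustration;
the per-pool factor is what "ceph osd pool set {pool} size {n}" controls):

# Usable capacity when pools with different replication factors share
# one cluster's raw space.
raw_tb = 100.0
pools = [("volumes", 3, 0.6),   # (name, replication factor, share of raw space)
         ("backups", 2, 0.4)]
usable = sum(raw_tb * share / size for _, size, share in pools)
print(f"{usable:.0f} TB usable out of {raw_tb:.0f} TB raw")   # 40 TB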


There is a team of developers building an Erasure Coding backend for 
Ceph that will allow for more options.


http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend

http://wiki.ceph.com/01Planning/02Blueprints/Emperor/Erasure_coded_storage_backend_%28step_2%29

Initial release is scheduled for Ceph's Firefly release in February 2014.
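
For a sense of why that matters: erasure coding stores k data chunks plus m
coding chunks, so the raw-to-usable ratio drops well below replication's
2x-3x. The k/m values below are illustrative only; nothing in this thread
fixes what the Firefly defaults will be.

# Raw TB consumed per usable TB for k data chunks + m coding chunks.
def ec_raw_per_usable(k, m):
    return (k + m) / k

print(ec_raw_per_usable(10, 2))   # 1.2x, i.e. ~20% overhead
print(ec_raw_per_usable(4, 2))    # 1.5x
# compare with 2.0x / 3.0x for replication factors 2 and 3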


Thanks,

Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC

On 10/3/2013 2:44 PM, Aronesty, Erik wrote:

Does Ceph really halve your storage like that?

If you specify N+1, does it really store two copies, or just compute 
checksums across MxN stripes?  I guess RAID5+Ceph with a large array (12 disks, 
say) would be not too bad (2.2TB for each 1).

But it would be nicer, if I had 12 storage units in a single rack on a single 
network, for me to tell Ceph to stripe across them in a RAIDZ fashion, so that 
I'm only losing 10% of my storage to redundancy... not 50%.

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John-Paul Robinson
Sent: Thursday, October 03, 2013 12:08 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and RAID

What is this take on such a configuration?

Is it worth the effort of tracking "rebalancing" at two layers, RAID
mirror and possibly Ceph if the pool has a redundancy policy.  Or is it
better to just let ceph rebalance itself when you lose a non-mirrored disk?

If following the "raid mirror" approach, would you then skip redundency
at the ceph layer to keep your total overhead the same?  It seems that
would be risky in the even you loose your storage server with the
raid-1'd drives.  No Ceph level redunancy would then be fatal.  But if
you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
each 1 real TB?

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:

I would consider (mdadm) raid-1, dep. on the hardware & budget,
because this way a single disk failure will not trigger a cluster-wide
rebalance.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Aronesty, Erik
Does Ceph really halve your storage like that?  

If you specify N+1, does it really store two copies, or just compute 
checksums across MxN stripes?  I guess RAID5+Ceph with a large array (12 disks, 
say) would be not too bad (2.2TB for each 1).

But it would be nicer, if I had 12 storage units in a single rack on a single 
network, for me to tell Ceph to stripe across them in a RAIDZ fashion, so that 
I'm only losing 10% of my storage to redundancy... not 50%.
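
The "2.2TB for each 1" figure checks out as a rough sketch (counting parity
only, ignoring hot spares and filesystem overhead):

# 12-disk RAID5 under 2x Ceph replication: raw TB consumed per usable TB.
disks, parity_disks = 12, 1
raid5_factor = disks / (disks - parity_disks)   # ~1.09
ceph_replicas = 2
print(round(raid5_factor * ceph_replicas, 2))   # ~2.18, i.e. "2.2 for each 1"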

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John-Paul Robinson
Sent: Thursday, October 03, 2013 12:08 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and RAID

What is the take on such a configuration?

Is it worth the effort of tracking "rebalancing" at two layers, RAID
mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
better to just let ceph rebalance itself when you lose a non-mirrored disk?

If following the "raid mirror" approach, would you then skip redundancy
at the ceph layer to keep your total overhead the same?  It seems that
would be risky in the event you lose your storage server with the
raid-1'd drives.  No Ceph-level redundancy would then be fatal.  But if
you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
each 1 real TB?

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> I would consider (mdadm) raid-1, dep. on the hardware & budget,
> because this way a single disk failure will not trigger a cluster-wide
> rebalance.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Andy Paluch

  
  
One good thing about a raid controller is you have hot swap capability
that you don't have with a ceph node that just has a disk hba.

Don't you have to take down a ceph node to replace a defective drive?
If I have a ceph node with 12 disks and one goes bad, would I not
have to take the entire node down to replace it and then reformat?

If I have a hotswap chassis but am using just an hba to connect my
drives, will the os (say latest Ubuntu) support hot-swapping the
drive, or do I have to shut it down to replace the drive and then
bring it up and format, etc.?

Not a linux guy so if I'm mistaken let me know.

Thanks!


On 10/3/2013 12:13 PM, Scott Devoid wrote:
> An additional side to the RAID question: when you have a box with more
> drives than you can front with OSDs due to memory or CPU constraints, is
> some form of RAID advisable? At the moment "one OSD per drive" is the
> recommendation, but from my perspective this does not scale at high drive
> densities (e.g. 10+ drives per U).
>
> On Thu, Oct 3, 2013 at 11:08 AM, John-Paul Robinson wrote:
> > What is the take on such a configuration?
> >
> > Is it worth the effort of tracking "rebalancing" at two layers, RAID
> > mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
> > better to just let ceph rebalance itself when you lose a non-mirrored disk?
> >
> > If following the "raid mirror" approach, would you then skip redundancy
> > at the ceph layer to keep your total overhead the same?  It seems that
> > would be risky in the event you lose your storage server with the
> > raid-1'd drives.  No Ceph-level redundancy would then be fatal.  But if
> > you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
> > each 1 real TB?
> >
> > ~jpr
> >
> > On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> > > I would consider (mdadm) raid-1, dep. on the hardware & budget,
> > > because this way a single disk failure will not trigger a cluster-wide
> > > rebalance.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Scott Devoid
An additional side to the RAID question: when you have a box with more
drives than you can front with OSDs due to memory or CPU constraints, is
some form of RAID advisable? At the moment "one OSD per drive" is the
recommendation, but from my perspective this does not scale at high drive
densities (e.g. 10+ drives per U).
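
As a rough illustration of where "one OSD per drive" starts to hurt (the
per-OSD figures below are the commonly quoted rules of thumb of roughly 1 GB
of RAM and ~1 GHz of CPU per daemon, more during recovery; treat them as
assumptions, not numbers from this thread):

# Per-node resources needed if every drive gets its own OSD daemon.
drives_per_node = 24        # a dense 2U chassis, for example
ram_per_osd_gb = 1.0        # rule of thumb; recovery can push this higher
ghz_per_osd = 1.0           # rough CPU guideline per daemon
print(drives_per_node * ram_per_osd_gb, "GB RAM and",
      drives_per_node * ghz_per_osd, "GHz of CPU just for OSDs")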



On Thu, Oct 3, 2013 at 11:08 AM, John-Paul Robinson  wrote:

> What is the take on such a configuration?
>
> Is it worth the effort of tracking "rebalancing" at two layers, RAID
> mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
> better to just let ceph rebalance itself when you lose a non-mirrored disk?
>
> If following the "raid mirror" approach, would you then skip redundancy
> at the ceph layer to keep your total overhead the same?  It seems that
> would be risky in the event you lose your storage server with the
> raid-1'd drives.  No Ceph-level redundancy would then be fatal.  But if
> you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
> each 1 real TB?
>
> ~jpr
>
> On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> > I would consider (mdadm) raid-1, dep. on the hardware & budget,
> > because this way a single disk failure will not trigger a cluster-wide
> > rebalance.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-03 Thread John-Paul Robinson
What is the take on such a configuration?

Is it worth the effort of tracking "rebalancing" at two layers, RAID
mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
better to just let ceph rebalance itself when you lose a non-mirrored disk?

If following the "raid mirror" approach, would you then skip redundancy
at the ceph layer to keep your total overhead the same?  It seems that
would be risky in the event you lose your storage server with the
raid-1'd drives.  No Ceph-level redundancy would then be fatal.  But if
you do raid-1 plus ceph redundancy, doesn't that mean it takes 4TB for
each 1 real TB?

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> I would consider (mdadm) raid-1, dep. on the hardware & budget,
> because this way a single disk failure will not trigger a cluster-wide
> rebalance.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Dimitri Maziuk

On 2013-10-02 07:35, Loic Dachary wrote:

Hi,

I would not use RAID5 since it would be redundant with what Ceph provides.


I would not use raid-5 (or 6) because its safety on modern drives is 
questionable and because I haven't seen anyone comment on Ceph's 
performance on top of it -- e.g. OpenStack docs explicitly say don't use 
raid-5 because Swift's access patterns are the worst case for raid.


I would consider (mdadm) raid-1, dep. on the hardware & budget, because 
this way a single disk failure will not trigger a cluster-wide rebalance.


Dima


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Loic Dachary
I successfully installed a new cluster recently following the instructions
here: http://ceph.com/docs/master/rados/deployment/

Cheers

On 02/10/2013 16:32, shacky wrote:
> Thank you very much for your answer!
> So I can avoid using hardware RAID controllers on the storage servers. Good 
> news.
> I see in the Ceph documentation that I will have to manually configure the 
> datastore to be efficient, reliable and fully fault tolerant.
> Is there a particular way to configure it, or is it just a datastore on every 
> hard drive (12 datastores), with Ceph then automatically using them to be 
> reliable and fully fault tolerant?
> Can you advise me on how best to configure it?
> 
> 
> 2013/10/2 Loic Dachary <l...@dachary.org>:
> 
> Hi,
> 
> I would not use RAID5 since it would be redundant with what Ceph provides.
> 
> My 2cts ;-)
> 
> On 02/10/2013 13:50, shacky wrote:
> > Hi.
> >
> > I am going to create my first Ceph cluster using 3 physical servers and
> > the Ubuntu distribution.
> > Each server will have three 3 TB hard drives, connected with or without
> > a physical RAID controller.
> > I need to be protected against the failure of one of these three servers,
> > with as much space as possible, but without losing the failover security.
> >
> > Shall I configure the hard drives on each server using RAID (5?) or not?
> >
> > Can you advise me on the correct answer, or tell me the pros/cons of
> > using or not using RAID on physical servers in a Ceph cluster?
> >
> > Thank you very much!
> > Bye.
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> 
> --
> Loïc Dachary, Artisan Logiciel Libre
> All that is necessary for the triumph of evil is that good people do 
> nothing.
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
All that is necessary for the triumph of evil is that good people do nothing.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-02 Thread shacky
Thank you very much for your answer!
So I can avoid using hardware RAID controllers on the storage servers.
Good news.
I see in the Ceph documentation that I will have to manually configure the
datastore to be efficient, reliable and fully fault tolerant.
Is there a particular way to configure it, or is it just a datastore on every
hard drive (12 datastores), with Ceph then automatically using them to be
reliable and fully fault tolerant?
Can you advise me on how best to configure it?


2013/10/2 Loic Dachary 

> Hi,
>
> I would not use RAID5 since it would be redundant with what Ceph provides.
>
> My 2cts ;-)
>
> On 02/10/2013 13:50, shacky wrote:
> > Hi.
> >
> > I am going to create my first Ceph cluster using 3 physical servers and
> > the Ubuntu distribution.
> > Each server will have three 3 TB hard drives, connected with or without
> > a physical RAID controller.
> > I need to be protected against the failure of one of these three servers,
> > with as much space as possible, but without losing the failover security.
> >
> > Shall I configure the hard drives on each server using RAID (5?) or not?
> >
> > Can you advise me on the correct answer, or tell me the pros/cons of
> > using or not using RAID on physical servers in a Ceph cluster?
> >
> > Thank you very much!
> > Bye.
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
> All that is necessary for the triumph of evil is that good people do
> nothing.
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and RAID

2013-10-02 Thread Loic Dachary
Hi,

I would not use RAID5 since it would be redundant with what Ceph provides.

My 2cts ;-)

On 02/10/2013 13:50, shacky wrote:
> Hi.
> 
> I am going to create my first Ceph cluster using 3 physical servers and
> the Ubuntu distribution.
> Each server will have three 3 TB hard drives, connected with or without
> a physical RAID controller.
> I need to be protected against the failure of one of these three servers,
> with as much space as possible, but without losing the failover security.
> 
> Shall I configure the hard drives on each server using RAID (5?) or not?
> 
> Can you advise me on the correct answer, or tell me the pros/cons of
> using or not using RAID on physical servers in a Ceph cluster?
> 
> Thank you very much!
> Bye.
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
All that is necessary for the triumph of evil is that good people do nothing.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph and RAID

2013-10-02 Thread shacky
Hi.

I am going to create my first Ceph cluster using 3 physical servers and
the Ubuntu distribution.
Each server will have three 3 TB hard drives, connected with or without a
physical RAID controller.
I need to be protected against the failure of one of these three servers,
with as much space as possible, but without losing the failover security.

Shall I configure the hard drives on each server using RAID (5?) or not?

Can you advise me on the correct answer, or tell me the pros/cons of
using or not using RAID on physical servers in a Ceph cluster?

Thank you very much!
Bye.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com