Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-18 Thread Cary

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-18 Thread James Okken
James, You can set these values in

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Cary
Quoted `ceph osd df` excerpt:
      1.0       836G  49612M   788G  5.79  1.16   56
  5   3.7       1.0   3723G    192G  3531G  5.17  1.04  282
  2   0.81689   1.0    836G  33639M   803G  3.93  0.79   58
  3   3.7       1.0   3723G    202G  3521G  5.43  1.09  291
  TOTAL              13680G    682G  12998G  4.99
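In the `ceph osd df` excerpt above, VAR is each OSD's %USE divided by the cluster-wide average %USE from the TOTAL row (4.99 here). A quick sketch using only the values visible in the quote (the archive truncates some OSD rows, so a STDDEV computed from these four values alone will not match Ceph's reported 0.67 exactly):

```python
# %USE values for the OSDs visible in the quoted excerpt,
# plus the cluster-wide average %USE from the TOTAL row.
pct_use = [5.79, 5.17, 3.93, 5.43]
avg = 4.99

# VAR, as printed by `ceph osd df`, is %USE / average %USE.
var = [round(u / avg, 2) for u in pct_use]
print(var)                 # [1.16, 1.04, 0.79, 1.09]
print(min(var), max(var))  # 0.79 1.16, matching MIN/MAX VAR: 0.79/1.16
```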

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread David Turner
> 3  3.7  1.0  3723G  202G  3521G  5.43  1.09  291
> TOTAL  13680G  682G  12998G  4.99
> MIN/MAX VAR: 0.79/1.16  STDDEV: 0.67
> Thanks!

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Ronny Aasen
13680G 682G 12998G 4.99 MIN/MAX VAR: 0.79/1.16 STDDEV: 0.67 Thanks!

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread James Okken
17:28:22.893662 7fd2f9e928c0 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring -- thanks

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Cary

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread James Okken
James, Usually once the misplaced data has balanced out, the cluster should reach a healthy state. If you run "ceph health detail", Ceph will sho
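Besides the human-readable form suggested above, `ceph health detail` can emit JSON (`-f json`), which is easier to act on from scripts. A minimal parsing sketch; the sample payload is invented and only assumes the Luminous-era health-check schema (`status` plus a `checks` map with per-check `summary.message`):

```python
import json

# Illustrative sample of `ceph health detail -f json` output.
# The structure follows the Luminous-era schema; the values are made up.
sample = json.dumps({
    "status": "HEALTH_WARN",
    "checks": {
        "PG_DEGRADED": {
            "severity": "HEALTH_WARN",
            "summary": {"message": "Degraded data redundancy: 27 pgs degraded"},
        }
    },
})

def summarize(health_json):
    """Return (overall status, list of per-check messages) from health JSON."""
    doc = json.loads(health_json)
    msgs = [c["summary"]["message"] for c in doc.get("checks", {}).values()]
    return doc["status"], msgs

status, messages = summarize(sample)
print(status)       # HEALTH_WARN
print(messages[0])  # Degraded data redundancy: 27 pgs degraded
```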

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Cary
B/s, 10 objects/s recovering 2017-12-14 22:46:02.482228 mon.0 [INF] pgmap v3936177: 512 pgs: 1 active+recovering+degraded, 26 active+recovery_wait+degraded, 1 active+remapped+backfilling, 308 active+clean, 176 active+remapped+wait_backfill; 333 GB data, 370 GB used,
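As a sanity check while watching recovery, the per-state PG counts in a pgmap line like the one quoted above always add up to the total PG count (512 here):

```python
# PG state counts copied from the quoted pgmap line (pgmap v3936177: 512 pgs).
pg_states = {
    "active+recovering+degraded":      1,
    "active+recovery_wait+degraded":  26,
    "active+remapped+backfilling":     1,
    "active+clean":                  308,
    "active+remapped+wait_backfill": 176,
}
total = sum(pg_states.values())
print(total)  # 512, matching the "512 pgs" at the start of the line
```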

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread James Okken
Jim, I am not an expert, but I believe I can assist. Normally you will only have 1 OSD per drive. I have heard discussions about using multiple OSDs per d

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Ronny Aasen
On 14.12.2017 18:34, James Okken wrote: Hi all, Please let me know if I am missing steps or using the wrong steps. I'm hoping to expand my small CEPH cluster by adding 4TB hard drives to each of the 3 servers in the cluster. I also need to change my replication factor from 1 to 3. This is part

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Cary
Jim, I am not an expert, but I believe I can assist. Normally you will only have 1 OSD per drive. I have heard discussions about using multiple OSDs per disk when using SSDs, though. Once your drives have been installed, you will have to format them unless you are using Bluestore. My steps for
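The steps the reply goes on to describe correspond to the Luminous-era manual (Filestore) OSD addition; the "created new key in keyring" log line quoted elsewhere in this thread comes from the `--mkfs --mkkey` step. A rough sketch only — the device name `/dev/sdX1` and the CRUSH weight are placeholders to adapt per node:

```shell
# Manual Filestore OSD addition (Luminous-era sketch; adapt device and weight).
UUID=$(uuidgen)
OSD_ID=$(ceph osd create "$UUID")          # allocate the next free OSD id

mkfs.xfs /dev/sdX1                         # format the new drive's partition
mkdir -p /var/lib/ceph/osd/ceph-"$OSD_ID"
mount /dev/sdX1 /var/lib/ceph/osd/ceph-"$OSD_ID"

# Initialize the OSD data directory and generate its auth key.
ceph-osd -i "$OSD_ID" --mkfs --mkkey --osd-uuid "$UUID"
ceph auth add osd."$OSD_ID" osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-"$OSD_ID"/keyring

# Place the OSD in the CRUSH map under this host, weighted roughly by TiB.
ceph osd crush add osd."$OSD_ID" 3.7 host="$(hostname -s)"
systemctl start ceph-osd@"$OSD_ID"
```

These commands must run on the storage node of a live cluster, so treat the sequence as an outline to check against your own deployment rather than a copy-paste recipe.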

[ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread James Okken
Hi all, Please let me know if I am missing steps or using the wrong steps. I'm hoping to expand my small CEPH cluster by adding 4TB hard drives to each of the 3 servers in the cluster. I also need to change my replication factor from 1 to 3. This is part of an Openstack environment deployed by F
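The replication-factor change asked about above is set per pool, not cluster-wide. A minimal sketch, where "mypool" is a placeholder pool name; raising `min_size` alongside `size` makes the pool pause I/O rather than keep writing when too few replicas are up:

```shell
# List pools, then raise the replica count on each one.
ceph osd lspools
ceph osd pool set mypool size 3       # keep 3 copies of every object
ceph osd pool set mypool min_size 2   # serve I/O only with >= 2 copies up
ceph osd pool get mypool size         # verify the change
```

Expect a burst of misplaced/degraded PGs afterwards while the extra copies backfill, as seen in the recovery output quoted later in this thread.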