Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
referencing non-existent OSDs. On Sun, Jun 7, 2015 at 2:00 PM, Marek Dohojda mdoho...@altitudedigital.com wrote: Unfortunately nothing. It did its thing, re-balanced, and was left with the same thing in the end. BTW, thank you very much for the time and suggestion, I really appreciate it. ceph
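
A hedged sketch (not from the thread) of how one might check whether the cluster still references OSDs that no longer exist; the interpretation comments are generic:

  ceph osd tree                 # OSDs known to the CRUSH map, with their up/down and in/out state
  ceph osd dump | grep '^osd'   # OSDs known to the OSD map
  ceph health detail            # lists the specific stale/stuck PGs and what they last mapped to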

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
health OK. Running ceph health detail should list those OSDs. Do you have any? On 07/06/2015 16:16, Marek Dohojda mdoho...@altitudedigital.com wrote: Thank you. Unfortunately this won't work because 0.21 is already being created: ~# ceph pg force_create_pg 0.21 pg 0.21 already
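
For reference, a hedged sketch of the force-create path being discussed; pg 0.21 comes from the thread, the rest is generic. A PG can stay in "creating" indefinitely if it maps to no OSDs, which appears to be what the "already creating" message above reflects:

  ceph pg force_create_pg 0.21    # asks the monitors to recreate the PG; it sits in "creating" until an OSD picks it up
  ceph pg dump_stuck inactive     # PGs stuck in inactive/creating states
  ceph pg dump | grep creating    # another way to see which PGs are still marked creating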

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
, or in any display of OSDs. On Sun, Jun 7, 2015 at 8:41 AM, Marek Dohojda mdoho...@altitudedigital.com wrote: I think this is the issue. Look at ceph health detail and you will see that 0.21 and others are orphans: HEALTH_WARN 65 pgs stale; 22 pgs stuck inactive; 65 pgs stuck stale; 22 pgs stuck

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
and it keeps trying. On Sun, Jun 7, 2015 at 12:18 AM, Alex Muntada al...@alexm.org wrote: Marek Dohojda: One of the stuck inactive PGs is 0.21, and here is the output of ceph pg map: #ceph pg map 0.21 osdmap e579 pg 0.21 (0.21) -> up [] acting [] #ceph pg dump_stuck stale ok pg_stat state up
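
A short sketch of the inspection commands used in this exchange, plus one more that is often helpful; 0.21 is the PG id from the thread:

  ceph pg map 0.21            # which OSDs the PG maps to (here: up [] acting [], i.e. none)
  ceph pg dump_stuck stale    # PGs whose primary OSD has stopped reporting
  ceph pg 0.21 query          # detailed state and history for one PG (only answers if some OSD actually hosts it)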

[ceph-users] Orphan PG

2015-06-06 Thread Marek Dohojda
I recently started with Ceph, and overall have had very few issues. However, during the process of cluster creation I must have done something wrong which created orphan PGs. I suspect it broke when I removed an OSD right after initial creation, but I am guessing. Currently here is the Ceph
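
For context, a hedged sketch of the usual OSD removal sequence; skipping the CRUSH and auth steps is a common way to end up with PGs that still reference an OSD that no longer exists. The id 0 is a placeholder:

  ceph osd out 0                # stop placing data on the OSD and let the cluster rebalance away from it
  # stop the ceph-osd daemon on its host (init-system dependent)
  ceph osd crush remove osd.0   # remove it from the CRUSH map
  ceph auth del osd.0           # remove its authentication key
  ceph osd rm 0                 # remove it from the OSD map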

Re: [ceph-users] Fwd: Too many PGs

2015-06-16 Thread Marek Dohojda
Somnath. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: Monday, June 15, 2015 1:05 PM, To: ceph-users@lists.ceph.com, Subject: [ceph-users] Fwd: Too many PGs. I hate to bug, but I truly hope someone has an answer to the below. Thank you kindly

[ceph-users] Fwd: Too many PGs

2015-06-15 Thread Marek Dohojda
I hate to bug, but I truly hope someone has an answer to the below. Thank you kindly! -- Forwarded message -- From: Marek Dohojda mdoho...@altitudedigital.com Date: Wed, Jun 10, 2015 at 7:49 AM Subject: Too many PGs To: ceph-users-requ...@lists.ceph.com Hello, I am running “Hammer

Re: [ceph-users] Is it safe to increase pg number in a production environment

2015-08-05 Thread Marek Dohojda
. The reallocation in my case took over an hour to accomplish. On Aug 4, 2015, at 7:43 PM, Jevon Qiao qiaojianf...@unitedstack.com wrote: Thank you and Samuel for the prompt response. On 5/8/15 00:52, Marek Dohojda wrote: I did this not long ago. My original PG estimates were wrong

Re: [ceph-users] Is it safe to increase pg number in a production environment

2015-08-04 Thread Marek Dohojda
I did this not long ago. My original PG estimates were wrong and I had to increase them. After increasing the PG numbers, Ceph rebalanced, and that took a while. To be honest, in my case the slowdown wasn't really noticeable, but the rebalance took a while. My strong suggestion to you
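
A minimal sketch of the pg_num increase being described, assuming a hypothetical pool named rbd and a target of 512 PGs:

  ceph osd pool get rbd pg_num        # current value
  ceph osd pool set rbd pg_num 512    # splits existing PGs into more PGs
  ceph osd pool set rbd pgp_num 512   # lets the new PGs be placed, which triggers the rebalance
  ceph -s                             # watch backfill/recovery progress

Raising pgp_num is what actually moves data around, so that step is where the slowdown described above comes from.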

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: 01 December 2015 19:34, To: Wido den Hollander <w...@42on.com>, Cc: ceph-users@lists.ceph.co

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda

[ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
I am looking through Google, and I am not seeing a good guide on how to put an OSD on a partition (GPT) of a disk. I see lots of options for a file system or a single physical drive, but not a partition. http://dachary.org/?p=2548 This is the only thing I found, but
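
One hedged approach with Hammer-era tooling (not confirmed in the thread): ceph-disk accepts a partition path as well as a whole disk, so an OSD can be prepared directly on a GPT partition. /dev/sdb3 is a placeholder:

  ceph-disk prepare /dev/sdb3    # create the filesystem and OSD metadata on that partition (journal colocated)
  ceph-disk activate /dev/sdb3   # register the OSD with the cluster and start it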

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
. Down the road I will have more SSDs, but this won't happen until the new budget hits and I can get more servers. > On Dec 1, 2015, at 12:11 PM, Wido den Hollander <w...@42on.com> wrote: > > On 12/01/2015 07:29 PM, Marek Dohojda wrote: >> I am looking through Google, and I

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
; I think what Nick is suggesting is that you create N x 5GB partitions on the SSDs (where N is the number of OSDs you want to have fast journals for), and use the rest of the space for OSDs that would form the SSD pool. Bill. On Tue, Nov 24, 2015 at 10:56 AM, Marek Doh
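
A sketch of the layout Bill describes, assuming one SSD (/dev/sdc) carrying two spinner journals plus an OSD for the SSD pool; all device names and sizes are placeholders:

  sgdisk --new=1:0:+5G /dev/sdc          # journal partition for the first spinner OSD
  sgdisk --new=2:0:+5G /dev/sdc          # journal partition for the second spinner OSD
  sgdisk --new=3:0:0 /dev/sdc            # remainder of the SSD becomes the SSD-pool OSD
  ceph-disk prepare /dev/sda /dev/sdc1   # spinner data, journal on the first SSD partition
  ceph-disk prepare /dev/sdb /dev/sdc2   # second spinner, journal on the second SSD partition
  ceph-disk prepare /dev/sdc3            # the SSD-pool OSD itself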

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
does iostat look like whilst you are running rados bench; are the disks getting maxed out? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: 24 November 2015 16:27, To: Alan Johnson <al...@supermicro.com>

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
workload down on the spinners to 3X rather than 6X. From: Marek Dohojda [mailto:mdoho...@altitudedigital.com], Sent: Tuesday, November 24, 2015 1:24 PM, To: Nick Fisk, Cc: Alan Johnson; ceph-users@lists.ceph.com, Subject: Re: [ceph-users] Perfor

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
what your workload will be? There may be other things that can be done. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: 24 November 2015 18:32, To: Alan Johnson <al...@supermicro.com>, Cc: ceph-us

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
example of expected performance. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: 24 November 2015 18:47, To: Nick Fisk <n...@fisk.me.uk>, Cc: ceph-users@lists.ceph.com, Subject: Re: [ceph-users] Per

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
writes) you could try RADOS bench as a baseline; I would expect more performance with 7 x 10K spinners journaled to SSDs. The fact that the SSDs did not perform much better may point to a bottleneck elsewhere, network perhaps? From: Marek Dohojda [mailto:mdoho...@altitudedig
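
A hedged example of the RADOS bench baseline being suggested; the pool name and thread count are placeholders:

  rados bench -p rbd 60 write -t 16 --no-cleanup   # 60 seconds of 4 MB object writes with 16 concurrent ops
  rados bench -p rbd 60 seq -t 16                  # sequential reads of the objects just written
  rados -p rbd cleanup                             # remove the benchmark objects afterwards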

Re: [ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
Sorry, I should have specified that the 100 MB figure is for the SAS pool :), but to be honest the SSD pool isn't much faster. On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang <haomaiw...@gmail.com> wrote: > On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda > <mdoho...@altitudedigital.com> wrote: > > N

[ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
I have a Hammer Ceph cluster on 7 nodes with a total of 14 OSDs, 7 of which are SSD and 7 of which are SAS 10K drives. I typically get about 100MB IO rates on this cluster. I have a simple question: is 100MB what I should expect from my configuration, or should it be higher? I am not sure if I

Re: [ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
No, SSD and SAS are in two separate pools. On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang <haomaiw...@gmail.com> wrote: > On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda > <mdoho...@altitudedigital.com> wrote: > > I have a Hammer Ceph cluster on 7 nodes with a total of 14 OSDs.

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
Are the journals on the same device? It might be better to use the SSDs for journaling, since you are not getting better performance with the SSDs. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda, Sent: Monday, November 23,
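
A quick, hedged way to check whether journals share the data device, assuming the default ceph-disk layout where the journal is a symlink inside the OSD data directory:

  ceph-disk list                           # shows each data partition and the journal it is paired with
  ls -l /var/lib/ceph/osd/ceph-*/journal   # a symlink to another device means a separate journal; a plain file means it is colocated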

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
, I wonder how well it reflects the performance of the platform. With rados bench you can specify how many threads you want to use. Regards, Mart. On 11/24/2015 04:37 PM, Marek Dohojda wrote: Yeah they are, that is one thing I was planning on cha

[ceph-users] Migrating from one Ceph cluster to another

2016-06-08 Thread Marek Dohojda
I have a Ceph cluster (Hammer) and I just built a new cluster (Infernalis). This cluster contains VM boxes based on KVM. What I would like to do is move all the data from one Ceph cluster to the other. However, the only way I could find from my Google searches would be to move each image to local
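
One commonly used alternative to staging images locally is to stream each RBD image between the two clusters over a pipe; a hedged sketch, where the config file paths and the pool/image name are placeholders:

  rbd -c /etc/ceph/hammer.conf export rbd/vm-disk-1 - | \
    rbd -c /etc/ceph/infernalis.conf import - rbd/vm-disk-1

rbd export-diff / import-diff against snapshots can then carry over incremental changes if the VMs keep writing during the copy.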