Can someone elaborate on the error below?
[inline screenshot omitted]
From http://tracker.ceph.com/issues/38122
Which package exactly is missing?
And why is this happening? In Mimic, all dependencies were resolved by yum.
- Rado
___
Try filestore instead of bluestore?
- Rado
From: ceph-users On Behalf Of Steven Vacaroaia
Sent: Thursday, April 19, 2018 8:11 AM
To: ceph-users
Subject: [ceph-users] ceph luminous 12.2.4 - 2 servers better than 3 ?
Hi,
Any idea
Probably priorities have changed since Red Hat acquired Ceph/InkTank
(https://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph)?
Why support a competing hypervisor? Long term, switching to KVM seems to be the
solution.
- Rado
From: ceph-users
Performance as well - in my testing FileStore was much quicker than BlueStore.
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage Weil
Sent: Friday, December 29, 2017 3:51 PM
To: Travis Nielsen
Cc:
Is there a way to rebuild the contents of the .rgw.buckets.index pool removed by
accident?
Thanks in advance.
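[If the bucket data pool is still intact, the index for a bucket can sometimes
be rebuilt from the data objects; a possible starting point, with the bucket
name as a placeholder, and worth verifying on a test cluster first:

  # list buckets still known to RGW metadata
  radosgw-admin bucket list
  # rebuild the index for one bucket from its objects
  radosgw-admin bucket check --bucket=mybucket --check-objects --fix
]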
___
read : io=1282.4MB, bw=7295.3KB/s, iops=1823, runt=180003msec
read : io=1380.9MB, bw=7854.1KB/s, iops=1963, runt=180007msec
- Rado
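[For reference, numbers in that form come from an fio run along these lines;
the 4 KiB block size and 180 s runtime can be read off the output above, while
the file path, size, engine, and queue depth below are assumptions:

  # 4 KiB random reads against a file on a mounted RBD image (path assumed)
  fio --name=randread --filename=/mnt/rbd/fio.test --size=20G \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=16 --numjobs=1 --runtime=180 --time_based
]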
-----Original Message-----
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Thursday, November 16, 2017 2:04 PM
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>
No,
What test parameters (iodepth/file size/numjobs) would make sense for a
3-node/27-OSD@4TB cluster?
- Rado
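[One plausible starting point for a cluster that size is sketched below; every
value is an illustrative assumption, not a recommendation from this thread.
The file size is kept well above client RAM so the page cache doesn't dominate:

  [global]
  ioengine=libaio
  direct=1
  filename=/mnt/rbd/fio.test
  size=64G            ; larger than client RAM to defeat caching
  runtime=180
  time_based=1
  group_reporting=1

  [randread-4k]
  rw=randread
  bs=4k
  iodepth=32
  numjobs=4
]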
-----Original Message-----
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Thursday, November 16, 2017 10:56 AM
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>; David Turner
<drakonst...@gmail.com>
FYI
Having a 50GB block.db made no difference in performance.
- Rado
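[For context, the db size is fixed when the OSD is built, so testing a
different size means recreating the OSD; a minimal ceph-volume sketch for
Luminous, with device paths assumed:

  # HDD for data, a pre-created 50 GB SSD/NVMe partition for RocksDB
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
]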
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Tuesday, November 14, 2017 6:13 PM
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>
Cc: Mark Nelson <mnel...@redhat.com>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore
Thank you,
These are 4TB OSDs and they might become full someday, so I'll try a 60GB db
partition – sized for the maximum OSD capacity.
- Rado
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Tuesday, November 14, 2017 5:38 PM
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>
Cc: Mark Nelson <mnel...@redhat.com>
Cc: Milanov, Radoslav Nikiforov <rad...@bu.edu>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore performance 50% of filestore
How big was your block.db partition for each OSD, and what size are your HDDs?
Also, how full is your cluster? It's possib
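[Both of those questions can be answered from the cluster itself; standard
commands, with the OSD id as a placeholder:

  # overall and per-pool utilization
  ceph df
  # BlueFS / block.db usage for a single OSD (run on that OSD's host)
  ceph daemon osd.0 perf dump bluefs
]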
16 MB blocks, single thread, sequential writes – these are the results:
[inline results chart omitted]
- Rado
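[That workload corresponds roughly to the following fio invocation; the file
path and size are assumed:

  # 16 MiB sequential writes, single job, queue depth 1
  fio --name=seqwrite --filename=/mnt/rbd/fio.test --size=64G \
      --ioengine=libaio --direct=1 --rw=write --bs=16M \
      --iodepth=1 --numjobs=1 --runtime=180 --time_based
]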
-----Original Message-----
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Tuesday, November 14, 2017 4:36 PM
To: Milanov, Radoslav Nikiforov <rad...@bu.edu>; ceph-users@lists.ceph.com
and a much
bigger partition to do random writes over.
Mark
On 11/14/2017 01:54 PM, Milanov, Radoslav Nikiforov wrote:
> Hi
>
> We have a 3-node, 27-OSD cluster running Luminous 12.2.1
>
> In filestore configuration there are 3 SSDs used for journals of 9
> OSDs on each host (1 SSD has 3 journal partitions for 3 OSDs).
Hi
We have a 3-node, 27-OSD cluster running Luminous 12.2.1.
In the filestore configuration there are 3 SSDs used for journals of 9 OSDs on
each host (1 SSD has 3 journal partitions for 3 OSDs).
I've converted filestore to bluestore by wiping 1 host at a time and waiting for
recovery. The SSDs now contain
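[For reference, the per-OSD rebuild on Luminous follows roughly this shape;
the OSD id and device paths are placeholders, and purge/zap are destructive,
so run them only once recovery has brought the cluster back to HEALTH_OK:

  ceph osd out 0                      # drain the OSD first
  systemctl stop ceph-osd@0
  ceph osd purge 0 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdb --destroy
  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/nvme0n1p1       # SSD partition now holds block.db
]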