Try
ceph osd pool set rbd pgp_num 310
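To confirm the two values now line up, something like:
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
pgp_num needs to be raised to match pg_num before the new placement groups are actually rebalanced across the OSDs.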
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Dave Durkee
Sent: 19 June 2015 22:31
To: ceph-users@lists.ceph.com
Subject: [ceph-users] New cluster in unhealthy state
I just built a small lab cluster. 1 mon node, 3 osd nodes
Just configure '.rgw.buckets' as an EC pool and the rest of the rgw pools should be replicated.
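A rough sketch of that setup, assuming the pool is created before radosgw first starts (profile name and k/m values are only examples):
ceph osd erasure-code-profile set rgw-ec k=4 m=1 ruleset-failure-domain=host
ceph osd pool create .rgw.buckets 256 256 erasure rgw-ec
radosgw will then use the pre-created .rgw.buckets and create the remaining .rgw.* pools as ordinary replicated pools.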
Thanks & Regards
Somnath
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Deneau, Tom
Sent: Friday, June 19, 2015 2:31 PM
To: ceph-users@lists.ceph.com
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mark Nelson
Sent: 19 June 2015 13:44
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph EC pool performance benchmarking, high
latencies.
On 06/19/2015 07:28 AM, MATHIAS, Bryn
On 06/19/2015 11:16 AM, Daniel Schneller wrote:
On 2015-06-18 09:53:54 +, Joao Eduardo Luis said:
Setting 'mon debug = 0/5' should be okay. Unless you see that setting
'/5' impacts your performance and/or memory consumption, you should
leave that be. '0/5' means 'output only debug 0 or
I just built a small lab cluster: 1 mon node, 3 osd nodes (each with 3 ceph disks and 1 os/journal disk), an admin VM and 3 client VMs.
I followed the preflight and install instructions, and when I finished adding the OSDs I ran ceph status and got the following:
ceph status
cluster
What is the correct way to make radosgw create its pools as erasure-coded pools?
-- Tom Deneau, AMD
Hi, guys!
Do we have any procedure on how to build the latest KRBD module? I think it
will be helpful to many people here.
Regards, Vasily.
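There is no official procedure as far as I know; a common approach is to build a complete kernel from Ceph's ceph-client tree, which carries the latest krbd code. A rough sketch, assuming a stock build host (the branch name is an assumption; pick whichever branch has the changes you need):
git clone https://github.com/ceph/ceph-client.git
cd ceph-client
git checkout testing                    # krbd work is usually staged in 'testing'
cp /boot/config-$(uname -r) .config     # start from the running kernel's config
make olddefconfig
make -j$(nproc)
sudo make modules_install install
Reboot into the new kernel and the in-tree rbd module is the freshly built one.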
On 19 June 2015 at 13:46, Gregory Farnum g...@gregs42.com wrote:
On Thu, Jun 18, 2015 at 10:15 PM, Roland Giesler rol...@giesler.za.net
wrote:
On 15 June 2015 at 13:09, Gregory Farnum g...@gregs42.com wrote:
On Mon, Jun 15, 2015 at 4:03 AM, Roland Giesler rol...@giesler.za.net
wrote:
Hello everybody,
I'm doing some experiments and I am trying to re-add a removed osd. I removed it with the five commands below.
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
ceph osd out 5
/etc/init.d/ceph stop osd.5
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5
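Re-adding is roughly the reverse. A sketch, assuming the OSD's data directory and keyring under /var/lib/ceph/osd/ceph-5 are still intact (weight and hostname are placeholders):
ceph osd create                    # hands back the lowest free id, 5 in this case
ceph auth add osd.5 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-5/keyring
ceph osd crush add osd.5 1.0 host=myhost
/etc/init.d/ceph start osd.5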
On 19/06/15 16:07, Jelle de Jong wrote:
Hello everybody,
I'm doing some experiments and I am trying to re-add a removed osd. I removed it with the five commands below.
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
ceph osd out 5
/etc/init.d/ceph stop osd.5
ceph osd
On Thu, Jun 18, 2015 at 01:24:38PM +0200, Mateusz Skała wrote:
Hi,
After some hardware errors, one of the pgs on our backup server is 'incomplete'.
I exported the pg without problems, as described here:
https://ceph.com/community/incomplete-pgs-oh-my/
After removing the pg from all OSDs and importing the pg to one
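For reference, the export/import from that article boils down to ceph-objectstore-tool invocations along these lines, run with the OSDs stopped (paths and pgid are placeholders):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --journal-path /var/lib/ceph/osd/ceph-3/journal --pgid 2.1f --op export --file /tmp/pg.2.1f.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --journal-path /var/lib/ceph/osd/ceph-7/journal --op import --file /tmp/pg.2.1f.export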
On 06/19/15 13:42, Burkhard Linke wrote:
Forget the reply to the list...
Forwarded Message
Subject: Re: [ceph-users] Unexpected disk write activity with btrfs OSDs
Date: Fri, 19 Jun 2015 09:06:33 +0200
From: Burkhard Linke
Hi Jan,
On 06/18/2015 12:48 AM, Jan Schermer wrote:
1) Flags available in ceph osd set are
pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub|notieragent
I know or can guess most of them (the docs are a “bit” lacking)
But with "ceph osd set nodown" I have no idea what it
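For what it's worth, the usual maintenance pattern with these flags looks like this (noout stops OSDs being marked out and triggering rebalancing; nodown stops unresponsive OSDs being marked down):
ceph osd set noout
ceph osd set nodown
# ... perform the maintenance ...
ceph osd unset nodown
ceph osd unset noout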
Hi All,
I am currently benchmarking Ceph to work out the correct read/write model and get the optimal cluster throughput and latency.
For the moment I am writing 4 MB objects with randomised names to an EC 4+1 pool, using the rados python interface.
Load generation is happening on external
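The write loop described there is essentially the following sketch with the python-rados bindings (pool name and object count are assumptions):
import os
import uuid
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('ec41pool')        # the EC 4+1 pool
payload = os.urandom(4 * 1024 * 1024)         # one 4 MB buffer, reused
for _ in range(1000):
    ioctx.write_full(uuid.uuid4().hex, payload)   # randomised object name
ioctx.close()
cluster.shutdown()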
On Thu, Jun 18, 2015 at 10:15 PM, Roland Giesler rol...@giesler.za.net wrote:
On 15 June 2015 at 13:09, Gregory Farnum g...@gregs42.com wrote:
On Mon, Jun 15, 2015 at 4:03 AM, Roland Giesler rol...@giesler.za.net
wrote:
I have a small cluster of 4 machines and quite a few drives. After
On 2015-06-18 09:53:54 +, Joao Eduardo Luis said:
Setting 'mon debug = 0/5' should be okay. Unless you see that setting
'/5' impacts your performance and/or memory consumption, you should
leave that be. '0/5' means 'output only debug 0 or lower to the logs;
keep the last 1000 debug level
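In ceph.conf that setting (the option is spelled 'debug mon') looks like:
[mon]
debug mon = 0/5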
Forget the reply to the list...
Forwarded Message
Subject: Re: [ceph-users] Unexpected disk write activity with btrfs OSDs
Date: Fri, 19 Jun 2015 09:06:33 +0200
From: Burkhard Linke burkhard.li...@computational.bio.uni-giessen.de
To: Lionel Bouton
Hi,
I have sent a patch to the qemu-devel mailing list to add support for jemalloc linking:
http://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg05265.html
Help is welcome to get it upstream!
I'm trying to evaluate various object stores/distributed file systems for
use in our company and have a little experience of using Ceph in the past.
However I'm running into a few issues when running some benchmarks against
RadosGW.
Basically my script is pretty dumb, but it captures one of our
Hi guys,
I also use a combination of Intel 520 and 530 drives for my journals and have noticed that the latency and the speed of the 520s are better than the 530s.
Could someone please confirm that doing the following at startup will stop the dsync on the relevant drives?
# echo temporary write through
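The command being referred to is presumably the sysfs cache toggle, along the lines of (the SCSI address is a placeholder):
echo "temporary write through" > /sys/class/scsi_disk/0:0:0:0/cache_type
The "temporary" prefix changes only the kernel's view of the drive cache, without sending a MODE SELECT to the device, so the setting does not survive a reboot.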
I am looking to use Ceph with EC on a few leftover storage servers (36-disk Supermicro servers with dual Xeon sockets and around 256 GB of RAM).
I did a small test using one node and the ISA library and noticed that the CPU load was pretty spiky for just normal operation.
Does
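For reference, an ISA-backed profile is created like this (name and k/m values are only examples; failure domain osd suits a single-node test):
ceph osd erasure-code-profile set isa-test plugin=isa k=4 m=2 ruleset-failure-domain=osd
ceph osd erasure-code-profile get isa-test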
Hi,
I have been formatting my OSD drives with XFS (using mkfs.xfs) with default options. Is it recommended for Ceph to choose a bigger block size?
I'd like to understand the impact of block size. Any recommendations?
Thanks
Pankaj
Hi Sean,
We have ~1PB of EC storage using Dell R730xd servers with 6TB OSDs. We've got our erasure coding profile set up as k=10,m=3, which gives us a very reasonable chunk of the raw storage (k/(k+m) = 10/13, so roughly 77% usable) with nice resiliency.
I found that CPU usage was significantly higher with EC, but not so much as to
On 06/19/2015 11:19 AM, Andrei Mikhailovsky wrote:
Mark, thanks for putting it down this way. It does make sense.
Does it mean that having the Intel 520s, which bypass dsync, is a threat to the data stored on the journals?
I'm not sure if anyone has ever 100% conclusively shown that this
On 06/19/2015 10:29 AM, Andrei Mikhailovsky wrote:
Mark,
Thanks, I do understand that there is a risk of data loss by doing this. Having said this, Ceph is designed to be fault tolerant and self-repairing should something happen to individual journals, OSDs and server nodes. Isn't this a
I am following the quick start doc.
It was successful until 'Adding the initial monitor'.
So I made the osd folders (/var/local/osd0, osd10, osd20) on the nodes (csAnt, csBull, csCat), and ran the deploy step to prepare the OSDs.
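For illustration, the prepare step would be something like this (the host-to-directory mapping is assumed):
ceph-deploy osd prepare csAnt:/var/local/osd0 csBull:/var/local/osd10 csCat:/var/local/osd20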
But the error below occurred.
---
All - I have been following this thread for a bit, and am happy to see how involved, capable, and collaborative this ceph-users community seems to be. It appears there is a fairly strong amount of domain knowledge around the hardware used by many Ceph deployments, with a lot of thumbs up
Mark, thanks for putting it down this way. It does make sense.
Does it mean that having the Intel 520s, which bypass dsync, is a threat to the data stored on the journals?
I do have a few of these installed, alongside some 530s. I did not plan to replace them just yet. Would it make more
Thanks Lincoln! May I ask how many drives you have per storage node and how many threads you have available? I.e., are you using hyper-threading, and do you have more than 24 disks per node in your cluster? I noticed with our replicated cluster that more disks == more pgs == more cpu/ram and
with 24+
Pankaj,
I think Linux will not allow a block size bigger than the page size. If you want a block size bigger than 4K, you need to rebuild the kernel, I guess.
Now, I am not sure if there is any internal setting (or grub param) to tweak the page size at boot or not.
I think it is recommended (or
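To check the page size, and to see where the XFS block size would be set (standard commands; the device is a placeholder):
getconf PAGE_SIZE                  # typically 4096 on x86-64
mkfs.xfs -b size=4096 /dev/sdX     # -b size= sets the filesystem block size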
We're running 12 OSDs per node, with 32 hyper-threaded CPUs available. We
over-provisioned the CPUs because we would like to additionally run jobs from
our batch system and isolate them via cgroups (we're a high-throughput
computing facility). With a total of ~13000 pgs across a few pools,
Hi!
Recently, over a few hours, our 4 Ceph disk nodes showed unusually high and somewhat constant iowait times. The cluster runs 0.94.1 on Ubuntu 14.04.1.
It started on one node, then - with maybe a 15-minute delay each - on the next and the next one. The overall duration of the phenomenon was about 90