[ceph-users] Locating CephFS clients in warn message

2016-11-09 Thread Yutian Li
I get a HEALTH_WARN when I run `ceph status`. It says: health HEALTH_WARN mds0: Many clients (17) failing to respond to cache pressure. I have 50 OSDs, 3 MONs, and 1 MDS. I only use CephFS, mounted on 20~30 clients via the kernel client. I wonder how to locate those
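For reference, one way to map this warning back to individual clients is to list the MDS sessions over the admin socket on the MDS host. A minimal sketch, assuming the daemon is named mds.0 and you have admin-socket access there:

  # each session entry includes the client id, address, and mount metadata
  ceph daemon mds.0 session ls

The client address and hostname in the session output identify which machines are holding on to capabilities.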

Re: [ceph-users] Replication strategy, write throughput

2016-11-09 Thread Christian Balzer
Hello, On Wed, 9 Nov 2016 21:56:08 +0100 Andreas Gerstmayr wrote: > Hello, > > >> 2 parallel jobs with one job simulating the journal (sequential > >> writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the > >> other job simulating the datastore (random writes of 1MB)? > >> > >
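A fio invocation along those lines might look like this (a sketch; /dev/sdX is a placeholder, and writing to a raw device destroys its data). Options before the first --name are global; each --name starts a job, and fio runs the two jobs in parallel by default, which is what makes the journal/datastore contention on one HDD visible:

  # WARNING: overwrites /dev/sdX (placeholder device)
  fio --filename=/dev/sdX --direct=1 --runtime=60 --time_based \
      --name=journal-sim --rw=write --ioengine=libaio --sync=1 --iodepth=128 --bs=1M \
      --name=datastore-sim --rw=randwrite --ioengine=libaio --iodepth=1 --bs=1M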

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Andrei Mikhailovsky
Hi Yoann, I am running 10.2.3 on all nodes. Andrei - Original Message - > From: "Yoann Moulin" > To: "ceph-users" > Sent: Wednesday, 9 November, 2016 21:20:45 > Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Yoann Moulin
Hello, > many thanks for your help. I've tried setting the zone to master, followed by > the period update --commit command. This is what I've had: maybe it's related to this issue: http://tracker.ceph.com/issues/16839 (fixed in Jewel 10.2.3) or this one :

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Andrei Mikhailovsky
Hi Yehuda, many thanks for your help. I've tried setting the zone to master, followed by the period update --commit command. This is what I've had: root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get --rgw-zonegroup=default { "id": "default", "name": "default", "api_name": "",
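For anyone hitting the same 400 on a fresh single-site Jewel setup, the sequence that usually applies (a sketch, not verified against this exact cluster) is to point master_zone at the default zone and then commit a period:

  radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  # edit zonegroup.json so that "master_zone" reads "default"
  radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
  radosgw-admin period update --commit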

Re: [ceph-users] Replication strategy, write throughput

2016-11-09 Thread Andreas Gerstmayr
Hello, 2 parallel jobs with one job simulating the journal (sequential writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the other job simulating the datastore (random writes of 1MB)? To test against a single HDD? Yes, something like that, the first fio job would need to go

Re: [ceph-users] Bluestore + erasure coding memory usage

2016-11-09 Thread bobobo1...@gmail.com
Here it is after running overnight (~9h): http://ix.io/1DNi On Tue, Nov 8, 2016 at 11:00 PM, bobobo1...@gmail.com wrote: > Ah, I was actually mistaken. After running without Valgrind, it seems > I just estimated how slowed down it was. I'll leave it to run > overnight as
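If Valgrind proves too slow for this, a lighter-weight alternative on tcmalloc-linked builds is Ceph's built-in heap profiler (a sketch, assuming osd.0 and a tcmalloc build):

  ceph tell osd.0 heap start_profiler
  # ... let the workload run for a while, then:
  ceph tell osd.0 heap stats
  ceph tell osd.0 heap dump

The dump files land in the OSD's log directory and can be inspected with pprof.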

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Yehuda Sadeh-Weinraub
On Wed, Nov 9, 2016 at 1:30 AM, Andrei Mikhailovsky wrote: > Hi Yehuda, > > just tried to run the command to set the master_zone to default followed by > the bucket create without doing the restart and I still have the same error > on the client: > >

Re: [ceph-users] ceph-mon crash after update from hammer 0.94.7 to jewel 10.2.3

2016-11-09 Thread Ian Colle
That recommendation changed to upgrading OSDs first, then monitors, due to http://tracker.ceph.com/issues/17386#note-6 Ian On Wed, Nov 9, 2016 at 3:11 PM, Peter Maloney < peter.malo...@brockmann-consult.de> wrote: > On 11/09/16 15:06, Alexander Walker wrote: > > Hello, > > I have a cluster of three

[ceph-users] multiple openstacks on one ceph / namespaces

2016-11-09 Thread Matthew Vernon
Hi, I'm configuring ceph as the storage for our openstack install. One thing we might want to do in the future is have a second openstack instance (e.g. to test the next release of openstack); we might well want to have this talk to our existing ceph cluster. I could do this by giving each stack
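One common way to do this without namespaces (a sketch; the pool and client names below are made up) is a separate set of pools per stack, plus a cephx key confined to them:

  ceph osd pool create test-stack-volumes 128
  ceph osd pool create test-stack-images 128
  ceph auth get-or-create client.test-stack \
      mon 'allow r' \
      osd 'allow rwx pool=test-stack-volumes, allow rwx pool=test-stack-images'

Each OpenStack instance then only sees its own pools.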

[ceph-users] Radosgw pool creation (jewel / Ubuntu16.04)

2016-11-09 Thread Matthew Vernon
Hi, I have a jewel/Ubuntu 16.04 ceph cluster. I attempted to add some radosgws, having already made the pools I thought they would need per http://docs.ceph.com/docs/jewel/radosgw/config-ref/#pools i.e. .rgw and so on: .rgw .rgw.control .rgw.gc .log .intent-log .usage
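For reference, pre-creating that pool list can be scripted (a sketch; the pg counts here are placeholders and should be sized for the cluster):

  for pool in .rgw .rgw.control .rgw.gc .log .intent-log .usage; do
      ceph osd pool create "$pool" 8 8
  done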

Re: [ceph-users] ceph-mon crash after update from hammer 0.94.7 to jewel 10.2.3

2016-11-09 Thread Peter Maloney
On 11/09/16 15:06, Alexander Walker wrote: > > Hello, > > I have a cluster of three nodes (two OSDs on each node). First I > updated one node - the OSDs are OK and running, but ceph-mon crashed. > What does that mean... you updated a mon and an osd, and other mons and osds are not upgraded? I think you

[ceph-users] ceph-mon crash after update from hammer 0.94.7 to jewel 10.2.3

2016-11-09 Thread Alexander Walker
Hello, I have a cluster of three nodes (two OSDs on each node). First I updated one node - the OSDs are OK and running, but ceph-mon crashed. cephus@ceph3:~$ sudo /usr/bin/ceph-mon --cluster=ceph -i ceph3 -f --setuser ceph --setgroup ceph --debug_mon 20 starting mon.ceph3 rank 2 at

Re: [ceph-users] PGs stuck at creating forever

2016-11-09 Thread Vlad Blando
Hi Mehmet, It won't let me adjust the PGs because there are "creating" tasks not done yet. --- [root@avatar0-ceph0 ~]# !157 ceph osd pool set rbd pgp_num 300 Error EBUSY: currently creating pgs, wait [root@avatar0-ceph0 ~]# --- Regards, Vladimir FS Blando Cloud Operations Manager
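A few read-only checks that usually help narrow down why PGs sit in creating (a sketch, using the rbd pool from the thread):

  ceph health detail
  # which PGs are stuck creating, and which OSDs they map to
  ceph pg dump | grep creating
  # confirm pg_num actually reached the requested value
  ceph osd pool get rbd pg_num

PGs stuck in creating often point at OSDs that are down or at a CRUSH rule that cannot place the PG.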

Re: [ceph-users] MDS Problems - Solved but reporting for benefit of others

2016-11-09 Thread Nick Fisk
> -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 08 November 2016 22:55 > To: Nick Fisk > Cc: Ceph Users > Subject: Re: [ceph-users] MDS Problems - Solved but reporting for benefit of > others > > On Wed, Nov 2,

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Andrei Mikhailovsky
Hi Yehuda, just tried to run the command to set the master_zone to default followed by the bucket create without doing the restart and I still have the same error on the client: <Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx00010-005822ebbd-9951ad8-default</RequestId><HostId>9951ad8-default-default</HostId></Error> Andrei

Re: [ceph-users] radosgw - http status 400 while creating a bucket

2016-11-09 Thread Andrei Mikhailovsky
Hi Yehuda, I've tried that and afterwards performed: # radosgw-admin zonegroup get --rgw-zonegroup=default { "id": "default", "name": "default", "api_name": "", "is_master": "true", "endpoints": [], "hostnames": [], "hostnames_s3website": [], "master_zone":