Re: [ceph-users] ceph block - volume with RAID#0

2019-01-31 Thread Janne Johansson
On Fri, 1 Feb 2019 at 06:30, M Ranga Swami Reddy wrote: > Here the user requirement is less writes and more reads... so not much > worried about performance. So why go for RAID 0 at all? It is the least secure way to store data. -- May the most significant bit of your life be positive.

Re: [ceph-users] v12.2.11 Luminous released

2019-01-31 Thread Wido den Hollander
On 2/1/19 8:44 AM, Abhishek wrote: > We are glad to announce the eleventh bug fix release of the Luminous > v12.2.x long term stable release series. We recommend that all users > upgrade to this release. Please note the following precautions while > upgrading. > > Notable Changes >

[ceph-users] v12.2.11 Luminous released

2019-01-31 Thread Abhishek
We are glad to announce the eleventh bug fix release of the Luminous v12.2.x long term stable release series. We recommend that all users upgrade to this release. Please note the following precautions while upgrading. Notable Changes --- * This release fixes the pg log hard limit

Re: [ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Sinan Polat
Thanks for the clarification! Great that the next release will include the feature. We are running on Red Hat Ceph, so we might have to wait longer before having the feature available. Another related (simple) question: We are using /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok in
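For reference, a minimal sketch of how a per-client admin socket path like the one above is usually set in ceph.conf (the [client] section shown here is an assumption, the thread does not include the actual config):

    [client]
        # one socket per librbd/librados client process, using the
        # metavariables mentioned in the message above
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok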

Re: [ceph-users] Question regarding client-network

2019-01-31 Thread Buchberger, Carsten
Thank you - we were expecting that, but wanted to be sure. By the way - we are running our clusters on IPv6-BGP, to achieve massive scalability and load-balancing ;-) Kind regards Carsten Buchberger WiTCOM Wiesbadener Informations- und

Re: [ceph-users] ceph block - volume with RAID#0

2019-01-31 Thread M Ranga Swami Reddy
Here the user requirement is less writes and more reads... so not much worried about performance. Thanks Swami On Thu, Jan 31, 2019 at 1:55 PM Piotr Dałek wrote: > > On 2019-01-31 6:05 a.m., M Ranga Swami Reddy wrote: > > My thought was - Ceph block volume with raid#0 (means I mounted a ceph > >

[ceph-users] RGW multipart objects

2019-01-31 Thread Niels Maumenee
We have a public object storage cluster running Ceph RADOS Gateway Luminous 12.2.4, which we plan to update soon. My question concerns some multipart objects that appear to upload successfully, but when retrieving such an object the client can only get 4 MB. An example would be radosgw-admin object stat
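For context, a hedged sketch of the kind of inspection commands involved (bucket and object names are placeholders, not taken from the thread):

    # show the object's manifest, including its multipart parts
    radosgw-admin object stat --bucket=mybucket --object=big-upload.bin
    # overall bucket stats, useful to compare object counts and sizes
    radosgw-admin bucket stats --bucket=mybucket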

Re: [ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Jason Dillaman
On Thu, Jan 31, 2019 at 12:16 PM Paul Emmerich wrote: > > "perf schema" has a description field that may or may not contain > additional information. > > My best guess for these fields would be bytes read/written since > startup of this particular librbd instance. (Based on how these > counters

Re: [ceph-users] DockerSwarm and CephFS

2019-01-31 Thread Carlos Mogas da Silva
On 31/01/2019 18:51, Jacob DeGlopper wrote: Hi Carlos - just a guess, but you might need your credentials from /etc/ceph on the host mounted inside the container. -- jacob Hi Jacob! That's not the case, afaik. The Docker daemon itself mounts the target, so it's still the host doing the mount here, and

Re: [ceph-users] ceph-ansible - where to ask questions?

2019-01-31 Thread Martin Palma
Hi Will, there is a dedicated mailing list for ceph-ansible: http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com Best, Martin On Thu, Jan 31, 2019 at 5:07 PM Will Dennis wrote: > > Hi all, > > > > Trying to utilize the ‘ceph-ansible’ project > (https://github.com/ceph/ceph-ansible ) to

[ceph-users] Cephalocon Barcelona 2019 CFP ends tomorrow!

2019-01-31 Thread Mike Perez
Hey everyone, Just a last minute reminder if you're considering presenting at Cephalocon Barcelona 2019, the CFP will be ending tomorrow. Early bird ticket rate ends February 15. https://ceph.com/cephalocon/barcelona-2019/ -- Mike Perez (thingee)

Re: [ceph-users] DockerSwarm and CephFS

2019-01-31 Thread Jacob DeGlopper
Hi Carlos - just a guess, but you might need your credentials from /etc/ceph on the host mounted inside the container.     -- jacob Hey guys! First post to the list and new Ceph user so I might say/ask some stupid stuff ;) I've setup a Ceph Storage (and crashed it 2 days after), with 2

[ceph-users] DockerSwarm and CephFS

2019-01-31 Thread Carlos Mogas da Silva
Hey guys! First post to the list and new Ceph user so I might say/ask some stupid stuff ;) I've setup a Ceph Storage (and crashed it 2 days after), with 2 ceph-mon, 2 ceph-osd (same host), 2 ceph-mgr and 1 ceph-mds. Everything is up and running and works great. Now I'm trying to integrate the
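One possible approach, as a sketch only (monitor address, secret file and paths below are made up, not from the thread): mount CephFS on each Swarm node with the kernel client and hand subdirectories to services as bind mounts:

    # on every swarm node: mount CephFS with the kernel client
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # then expose a subdirectory to a service as a bind mount
    docker service create --name web \
        --mount type=bind,source=/mnt/cephfs/web-data,target=/data nginx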

Re: [ceph-users] Self serve / automated S3 key creation?

2019-01-31 Thread Jack
Hi, There is an admin API for RGW : http://docs.ceph.com/docs/master/radosgw/adminops/ You can check out rgwadmin¹ to see how to use it Best regards, [1] https://github.com/UMIACS/rgwadmin On 01/31/2019 06:11 PM, shubjero wrote: > Has anyone automated the ability to generate S3 keys for
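A short sketch of what using the admin ops API typically requires first (the uid and caps below are examples, not from the thread): create a dedicated RGW user, grant it admin capabilities, then sign requests against the /admin endpoints with its S3 keys:

    # create a dedicated user for the admin API
    radosgw-admin user create --uid=admin-api --display-name="Admin API"
    # allow it to manage users and keys via the REST admin ops API
    radosgw-admin caps add --uid=admin-api --caps="users=*;buckets=*"

The rgwadmin library linked above wraps these /admin endpoints so you do not have to sign the requests yourself.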

[ceph-users] Self serve / automated S3 key creation?

2019-01-31 Thread shubjero
Has anyone automated the ability to generate S3 keys for OpenStack users in Ceph? Right now we take in a users request manually (Hey we need an S3 API key for our OpenStack project 'X', can you help?). We as cloud/ceph admins just use radosgw-admin to create them an access/secret key pair for
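For reference, a minimal sketch of the manual steps being automated here (uid and display name are placeholders):

    # create (or look up) the RGW user tied to the OpenStack project
    radosgw-admin user create --uid=project-x --display-name="OpenStack project X"
    # generate an S3 access/secret key pair for that user
    radosgw-admin key create --uid=project-x --key-type=s3 --gen-access-key --gen-secret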

Re: [ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Paul Emmerich
"perf schema" has a description field that may or may not contain additional information. My best guess for these fields would be bytes read/written since startup of this particular librbd instance. (Based on how these counters usually work) Paul -- Paul Emmerich Looking for help with your

[ceph-users] pgs inactive after setting a new crush rule (Re: backfill_toofull after adding new OSDs)

2019-01-31 Thread Jan Kasprzak
Jan Kasprzak wrote: : Okay, now I changed the crush rule also on a pool with the real data, and it seems all the client i/o on that pool has stopped. The recovery continues, but things like qemu I/O, "rbd ls", and so on are just stuck doing nothing. Can I unstick it somehow
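A few hedged, read-only diagnostic commands for this situation (the pool name is a placeholder); they only show state and change nothing:

    # which PGs are inactive/stuck, and why
    ceph pg dump_stuck inactive
    ceph health detail
    # confirm which crush rule the pool is now using
    ceph osd pool get mypool crush_rule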

Re: [ceph-users] Spec for Ceph Mon+Mgr?

2019-01-31 Thread Jesper Krogh
> : We're currently co-locating our mons with the head node of our Hadoop > : installation. That may be giving us some problems, we don't know yet, but > : thus I'm speculating about moving them to dedicated hardware. Would it be OK to run them on KVM VMs - of course not backed by Ceph? Jesper

Re: [ceph-users] ceph-ansible - where to ask questions? [EXT]

2019-01-31 Thread Matthew Vernon
Hi, On 31/01/2019 16:06, Will Dennis wrote: Trying to utilize the ‘ceph-ansible’ project (https://github.com/ceph/ceph-ansible) to deploy some Ceph servers in a Vagrant testbed; hitting some issues with some of the plays – where is the right (best) venue to ask questions about this?

Re: [ceph-users] backfill_toofull after adding new OSDs

2019-01-31 Thread Jan Kasprzak
Okay, now I changed the crush rule also on a pool with the real data, and it seems all the client i/o on that pool has stopped. The recovery continues, but things like qemu I/O, "rbd ls", and so on are just stuck doing nothing. Can I unstick it somehow (faster than waiting for all

[ceph-users] ceph-ansible - where to ask questions?

2019-01-31 Thread Will Dennis
Hi all, Trying to utilize the 'ceph-ansible' project (https://github.com/ceph/ceph-ansible ) to deploy some Ceph servers in a Vagrant testbed; hitting some issues with some of the plays - where is the right (best) venue to ask questions about this? Thanks, Will

Re: [ceph-users] backfill_toofull after adding new OSDs

2019-01-31 Thread Jan Kasprzak
Fyodor Ustinov wrote: : Hi! I saw the same several times when I added a new OSD to the cluster. One or two PGs in "backfill_toofull" state. In all versions of mimic. Yep. In my case it is not (only) after adding the new OSDs. An hour or so ago my cluster reached the HEALTH_OK

Re: [ceph-users] backfill_toofull after adding new OSDs

2019-01-31 Thread Fyodor Ustinov
Hi! I saw the same several times when I added a new OSD to the cluster. One or two PGs in "backfill_toofull" state. In all versions of mimic.

[ceph-users] Explanation of perf dump of rbd

2019-01-31 Thread Sinan Polat
Hi, I finally figured out how to measure the statistics of a specific RBD volume; $ ceph --admin-daemon <path-to-asok> perf dump It outputs a lot, but I don't know what it means. Is there any documentation about the output? For now the most important values are: - bytes read - bytes written I think I
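A sketch of pulling just the read/write byte counters out of the dump (the socket path is an example, and the librbd section and counter names are assumptions based on typical librbd perf counters, not confirmed in this thread):

    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.140123456.asok perf dump \
        | jq 'to_entries[] | select(.key | startswith("librbd"))
              | {image: .key, rd_bytes: .value.rd_bytes, wr_bytes: .value.wr_bytes}'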

Re: [ceph-users] Simple API to have cluster healthcheck ?

2019-01-31 Thread Ben Kerr
"...Dashboard is a dashboard so could not get health thru curl..." If i didn't miss the question, IMHO "dashboard" does this job adequately: curl -s -XGET :7000/health_data | jq -C ".health.status" ceph version 12.2.10 Am Do., 31. Jan. 2019 um 11:02 Uhr schrieb PHARABOT Vincent <

Re: [ceph-users] backfill_toofull after adding new OSDs

2019-01-31 Thread Caspar Smit
Hi Jan, You might be hitting the same issue as Wido here: https://www.spinics.net/lists/ceph-users/msg50603.html Kind regards, Caspar Op do 31 jan. 2019 om 14:36 schreef Jan Kasprzak : > Hello, ceph users, > > I see the following HEALTH_ERR during cluster rebalance: > >

[ceph-users] backfill_toofull after adding new OSDs

2019-01-31 Thread Jan Kasprzak
Hello, ceph users, I see the following HEALTH_ERR during cluster rebalance: Degraded data redundancy (low space): 8 pgs backfill_toofull Detailed description: I have upgraded my cluster to mimic and added 16 new bluestore OSDs on 4 hosts. The hosts are in a separate region in my
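For reference, a few read-only commands that usually help narrow this down (a sketch, not taken from the thread):

    # which PGs report backfill_toofull and which OSDs they map to
    ceph health detail
    ceph pg dump pgs | grep backfill_toofull
    # per-OSD utilization, to see which target OSDs are near the backfillfull ratio
    ceph osd df tree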

Re: [ceph-users] Cluster Status:HEALTH_ERR for Full OSD

2019-01-31 Thread Fabio - NS3 srl
On 30/01/19 17:00, Paul Emmerich wrote: Quick and dirty solution: take the full OSD down to issue the deletion command ;) Better solutions: temporarily increase the full limit (ceph osd set-full-ratio) or reduce the OSD's reweight (ceph osd reweight) Paul Many thanks
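A hedged sketch of the two "better solutions" mentioned above (the values and OSD id are examples; remember to restore the default full ratio afterwards):

    # temporarily raise the full ratio so the OSD accepts the deletion I/O
    ceph osd set-full-ratio 0.97
    # ... remove the data, then put the default back
    ceph osd set-full-ratio 0.95

    # or push some PGs off the full OSD by lowering its reweight
    ceph osd reweight <osd-id> 0.9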

Re: [ceph-users] Cluster Status:HEALTH_ERR for Full OSD

2019-01-31 Thread Fabio - NS3 srl
On 30/01/19 17:04, Amit Ghadge wrote: A better way is to increase osd set-full-ratio slightly (.97) and then remove the buckets. Many thanks

Re: [ceph-users] Simple API to have cluster healthcheck ?

2019-01-31 Thread PHARABOT Vincent
I tried to start on the Monitor node itself. Yes, Dashboard is enabled # ceph mgr services { "dashboard": "https://ip-10-8-36-16.internal:8443/", "restful": "https://ip-10-8-36-16.internal:8003/" } # curl -k https://ip-10-8-36-16.eu-west-2.compute.internal:8443/api/health {"status": "404 Not
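If the dashboard endpoint keeps returning 404, a dashboard-independent alternative (a sketch; the jq usage is just an example) is to query cluster health directly on a node that has a client keyring:

    ceph health --format json
    ceph -s --format json | jq -r '.health.status'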

Re: [ceph-users] ceph block - volume with RAID#0

2019-01-31 Thread Piotr Dałek
On 2019-01-31 6:05 a.m., M Ranga Swami Reddy wrote: My thought was - Ceph block volume with RAID#0 (meaning I mount Ceph block volumes to an instance/VM, and there I would like to configure these volumes with RAID 0). Just to know, if anyone is doing the same as above, and if yes, what are the constraints?
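For illustration only, a sketch of what such a setup usually looks like inside the VM (device names and mount point are made up; the attached Ceph-backed volumes are assumed to show up as /dev/vdb and /dev/vdc):

    # stripe the two attached Ceph-backed volumes into one md RAID 0 device
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/data

Note that, as pointed out earlier in the thread, RAID 0 adds no redundancy: losing either volume loses the whole filesystem.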