[ceph-users] QEMU/KVM client compatibility

2019-05-27 Thread Kevin Olbrich
Hi! How can I determine which client compatibility level (luminous, mimic, nautilus, etc.) is supported in QEMU/KVM? Does it depend on the version of the Ceph packages on the system, or do I need a recent version of QEMU/KVM? Which component defines which client level will be supported? Thank you very much
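One way to check (a sketch only; binary paths and package names vary by distribution) is to look at the librbd/librados libraries QEMU links against, since those largely determine the client feature level, and to ask the cluster what it sees:
$ ldd /usr/bin/qemu-system-x86_64 | grep -E 'rbd|rados'
$ rpm -q librbd1 librados2                # RPM-based systems
$ dpkg -l | grep -E 'librbd1|librados2'   # Debian/Ubuntu
$ ceph features                           # run against the cluster; reports the feature level of connected clients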

Re: [ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Pritha Srivastava
Hello, What is the value of rgw sts key in your ceph.conf file? It has to be 16 bytes in length, e.g. abcdefghijklmnop. Thanks, Pritha On Tue, May 28, 2019 at 9:16 AM Yuan Minghui wrote: > The log file says: > > invalid secret key. > > > > The key I put is ‘tom’s accessKey and secret_Key’. > >
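A minimal ceph.conf sketch for the RGW section (the section name is illustrative and the key below is only a 16-character placeholder):
[client.rgw.gateway1]
rgw sts key = abcdefghijklmnop
rgw s3 auth use sts = true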

Re: [ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Yuan Minghui
The log file says: invalid secret key. The key I put is ‘tom’s accessKey and secret_Key’. And I am sure that ‘tom’s key’ is correct. From: Yuan Minghui Date: Tuesday, May 28, 2019, 11:35 AM To: Pritha Srivastava Cc: "ceph-users@lists.ceph.com" Subject: [ceph-users] assume_role() :http_code 400

[ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Yuan Minghui
Hello Pritha: I reinstalled the latest Ceph version, 14.2.1, and when I use ‘assume_role()’ something goes wrong with http_code = 400. Do you know the reason? Thanks a lot. yuan

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Yan, Zheng
On Mon, May 27, 2019 at 6:54 PM Oliver Freyermuth wrote: > > On 27.05.19 at 12:48, Oliver Freyermuth wrote: > > On 27.05.19 at 11:57, Dan van der Ster wrote: > >> On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth > >> wrote: > >>> > >>> Dear Dan, > >>> > >>> thanks for the quick reply! > >>>

Re: [ceph-users] Luminous OSD: replace block.db partition

2019-05-27 Thread Yury Shevchuk
Hi Swami, In Luminous you will have to delete and re-create the OSD with the desired size. Please follow this link for details: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034805.html -- Yury PS [cross-posting to ceph-devel removed] On Mon, May 27, 2019 at 05:37:02PM +0530,
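A rough outline of that delete-and-recreate procedure (OSD id and device names are purely illustrative; check the linked post for the authoritative steps):
$ ceph osd out 12                          # wait for rebalancing to finish
$ systemctl stop ceph-osd@12
$ ceph osd purge 12 --yes-i-really-mean-it
$ ceph-volume lvm zap /dev/sdc --destroy
$ ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2   # point at the new, larger db partition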

[ceph-users] Fwd: Luminous OSD: replace block.db partition

2019-05-27 Thread M Ranga Swami Reddy
-- Forwarded message - From: M Ranga Swami Reddy Date: Mon, May 27, 2019 at 5:37 PM Subject: Luminous OSD: replace block.db partition To: ceph-devel , ceph-users Hello - I have created an OSD with a 20G block.db; now I want to change the block.db to 100G. Please let us

Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
Thanks Casey. This helped me understand the purpose of this pool. I trimmed the usage logs, which reduced the number of keys stored in that index significantly, and I may even disable the usage log entirely as I don't believe we use it for anything. On Fri, May 24, 2019 at 3:51 PM Casey Bodley
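For reference, a sketch of the trim and disable steps mentioned above (the dates are illustrative):
$ radosgw-admin usage trim --start-date=2018-01-01 --end-date=2019-05-01
# to disable the usage log entirely, set this in the rgw section of ceph.conf and restart the gateways:
rgw enable usage log = false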

[ceph-users] Luminous OSD: replace block.db partition

2019-05-27 Thread M Ranga Swami Reddy
Hello - I have created an OSD with a 20G block.db; now I want to change the block.db to 100G. Please let us know if there is a process for this. PS: Ceph version 12.2.4 with BlueStore backend. Thanks Swami

Re: [ceph-users] MDS hangs in "heartbeat_map" deadlock

2019-05-27 Thread Stefan Kooman
Quoting Stefan Kooman (ste...@bit.nl): > Hi Patrick, > > Quoting Stefan Kooman (ste...@bit.nl): > > Quoting Stefan Kooman (ste...@bit.nl): > > > Quoting Patrick Donnelly (pdonn...@redhat.com): > > > > Thanks for the detailed notes. It looks like the MDS is stuck > > > > somewhere it's not even

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth
On 27.05.19 at 12:48, Oliver Freyermuth wrote: On 27.05.19 at 11:57, Dan van der Ster wrote: On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth wrote: Dear Dan, thanks for the quick reply! On 27.05.19 at 11:44, Dan van der Ster wrote: Hi Oliver, We saw the same issue after upgrading

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth
On 27.05.19 at 11:57, Dan van der Ster wrote: On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth wrote: Dear Dan, thanks for the quick reply! On 27.05.19 at 11:44, Dan van der Ster wrote: Hi Oliver, We saw the same issue after upgrading to mimic. IIRC we could make the max_bytes xattr

Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
Hi Peter, Thanks for verifying this. September 17 is the new date. We moved it in order to get a bigger room for the event, after the strong interest it received during Cephalocon. — Mike Perez (thingee) On May 27, 2019, 2:56 AM -0700, Peter Wienemann , wrote: > Hi Mike, > > there is a date

Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Dan van der Ster
Tuesday Sept 17 is indeed the correct day! We had to move it by one day to get a bigger room... sorry for the confusion. -- dan

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Dan van der Ster
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth wrote: > > Dear Dan, > > thanks for the quick reply! > > On 27.05.19 at 11:44, Dan van der Ster wrote: > > Hi Oliver, > > > > We saw the same issue after upgrading to mimic. > > > > IIRC we could make the max_bytes xattr visible by touching an

Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Peter Wienemann
Hi Mike, there is a date incompatibility between your announcement and Dan's initial announcement [0]. Which date is correct: September 16 or September 17? Peter [0] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-April/034259.html On 27.05.19 11:22, Mike Perez wrote: > Hey everyone,

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth
Dear Dan, thanks for the quick reply! On 27.05.19 at 11:44, Dan van der Ster wrote: Hi Oliver, We saw the same issue after upgrading to mimic. IIRC we could make the max_bytes xattr visible by touching an empty file in the dir (thereby updating the dir inode). e.g. touch

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Dan van der Ster
Hi Oliver, We saw the same issue after upgrading to mimic. IIRC we could make the max_bytes xattr visible by touching an empty file in the dir (thereby updating the dir inode). e.g. touch /cephfs/user/freyermu/.quota; rm /cephfs/user/freyermu/.quota Does that work? -- dan On Mon, May 27,
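For context, a sketch of that workaround plus reading the quota xattr afterwards (the path comes from the thread; the byte value is illustrative):
$ touch /cephfs/user/freyermu/.quota; rm /cephfs/user/freyermu/.quota
$ getfattr -n ceph.quota.max_bytes /cephfs/user/freyermu
$ setfattr -n ceph.quota.max_bytes -v 100000000000 /cephfs/user/freyermu   # (re)sets a ~100 GB quota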

[ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth
Dear Cephalopodians, in the process of migrating a cluster from Luminous (12.2.12) to Mimic (13.2.5), we have upgraded the FUSE clients first (we took the chance during a time of low activity), thinking that this should not cause any issues. All MDS+MON+OSDs are still on Luminous, 12.2.12.

[ceph-users] Multisite RGW

2019-05-27 Thread Matteo Dacrema
Hi all, I’m planning to replace a Swift multi-region deployment with Ceph. Right now Swift is deployed across 3 regions in Europe and the data is replicated across these 3 regions. Is it possible to configure Ceph to do the same? I think I need to go with multiple zonegroups with a single realm
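One common single-realm layout is one zonegroup with a zone per region, since RGW multisite replicates data between zones; a rough sketch with purely illustrative names and endpoints:
$ radosgw-admin realm create --rgw-realm=europe --default
$ radosgw-admin zonegroup create --rgw-zonegroup=eu --rgw-realm=europe --endpoints=http://rgw-site1:8080 --master --default
$ radosgw-admin zone create --rgw-zonegroup=eu --rgw-zone=eu-site1 --endpoints=http://rgw-site1:8080 --master --default
$ radosgw-admin period update --commit
# then create the secondary zones (without --master) on the other two sites and commit the period again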

[ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
Hey everyone, Ceph CERN Day will be a full-day event dedicated to fostering Ceph's research and non-profit user communities. The event is hosted by the Ceph team from the CERN IT department. We invite this community to meet and discuss the status of the Ceph project, recent improvements, and

Re: [ceph-users] performance in a small cluster

2019-05-27 Thread Stefan Kooman
Quoting Robert Sander (r.san...@heinlein-support.de): > Hi, > > we have a small cluster at a customer's site with three nodes and 4 SSD-OSDs > each. > Connected with 10G the system is supposed to perform well. > > rados bench shows ~450MB/s write and ~950MB/s read speeds with 4MB objects > but
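For reference, a sketch of rados bench invocations that produce numbers like these (the pool name is illustrative):
$ rados bench -p testpool 60 write --no-cleanup   # 4 MB objects by default
$ rados bench -p testpool 60 seq                  # sequential reads of the objects just written
$ rados bench -p testpool 60 rand
$ rados -p testpool cleanup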

Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-27 Thread Stefan Kooman
Quoting Robert Ruge (robert.r...@deakin.edu.au): > Ceph newbie question. > > I have a disparity between the free space that my cephfs file system > is showing and what ceph df is showing. As you can see below my > cephfs file system says there is 9.5TB free however ceph df says there > is 186TB
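A few commands that help compare the two views (the mount point is illustrative):
$ df -h /cephfs          # free space as reported by the CephFS mount
$ ceph df detail         # per-pool USED and MAX AVAIL
$ ceph osd df tree       # per-OSD utilisation; a full or imbalanced OSD lowers MAX AVAIL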

Re: [ceph-users] inconsistent number of pools

2019-05-27 Thread Lars Täuber
Fri, 24 May 2019 21:41:33 +0200 Michel Raabe ==> Lars Täuber , ceph-users@lists.ceph.com : > > You can also try > > $ rados lspools > $ ceph osd pool ls > > and verify that with the pgs > > $ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | uniq Yes, now I