[ceph-users] CephFS "authorize" on erasure-coded FS

2018-09-14 Thread Виталий Филиппов
Hi, I've recently tried to set up a user for CephFS running on a pair of replicated+erasure pools, but after I ran "ceph fs authorize ecfs client.samba / rw", the "client.samba" user could only see listings, but couldn't read or write any files. I've tried to look in logs and to raise the
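A likely cause (not confirmed in the truncated message above) is that "ceph fs authorize" only grants OSD caps on the filesystem's default data pool, so writes through a file layout pointing at the erasure-coded data pool are denied. A minimal sketch of the fix, assuming the pools are named cephfs_data and ecfs_data (both names are assumptions):

    # Inspect the caps that "fs authorize" generated
    ceph auth get client.samba
    # Extend the OSD caps to cover the EC data pool as well
    ceph auth caps client.samba \
        mds 'allow rw' \
        mon 'allow r' \
        osd 'allow rw pool=cephfs_data, allow rw pool=ecfs_data'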

Re: [ceph-users] AsyncConnection seems to keep buffers allocated for longer than necessary

2018-09-14 Thread Charles-François Natali
Thanks Greg! Will test and report back on Monday. Cheers, Charles On Fri, 14 Sep 2018, 20:32 Gregory Farnum, wrote: > [Adding ceph-devel] > > On Fri, Sep 14, 2018 at 5:22 AM, Charles-François Natali > wrote: > > See > > >

Re: [ceph-users] lost osd while migrating EC pool to device-class crush rules

2018-09-14 Thread Gregory Farnum
On Thu, Sep 13, 2018 at 3:05 PM, Graham Allan wrote: > I'm now following up to my earlier message regarding data migration from old > to new hardware in our ceph cluster. As part of this we wanted to move to > device-class-based crush rules. For the replicated pools the directions for > this were
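For erasure-coded pools the usual approach is a new EC profile that pins a device class, a crush rule derived from it, and then repointing the pool at that rule; a sketch, with profile, rule, pool names and k/m values as placeholders:

    # EC profile pinned to a device class
    ceph osd erasure-code-profile set ec-hdd k=4 m=2 \
        crush-failure-domain=host crush-device-class=hdd
    # Crush rule derived from the profile
    ceph osd crush rule create-erasure ec-hdd-rule ec-hdd
    # Repoint the existing pool (this triggers data movement)
    ceph osd pool set my-ec-pool crush_rule ec-hdd-rule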

Re: [ceph-users] AsyncConnection seems to keep buffers allocated for longer than necessary

2018-09-14 Thread Gregory Farnum
[Adding ceph-devel] On Fri, Sep 14, 2018 at 5:22 AM, Charles-François Natali wrote: > See > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029780.html > for the original thread. > > Here is a trivial reproducer not using any aio or dynamically allocated > memory to store the

Re: [ceph-users] Mimic upgrade failure

2018-09-14 Thread Sage Weil
One theory: When running mixed versions, sharing osdmaps is less efficient, because the sender must reencode the map in a compatible way for the old version to interpret. This is normally not a huge deal, but with a large cluster it probably presents a significant CPU overhead. My guess is
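A quick way to confirm whether daemons are still running mixed versions during such an upgrade (an assumption about the situation, since the message above is truncated):

    # Per-daemon-type version breakdown (Luminous and later)
    ceph versions
    # Feature bits advertised by connected daemons and clients
    ceph features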

[ceph-users] monitor metrics ceph subsystems

2018-09-14 Thread Bruno Carvalho
Hi Cephers, I'm creating a more advanced Grafana dashboard with many latency and sub-operation metrics to share with the community. Where can I read more about AsyncMessenger::Worker-(0-3), WBThrottle, throttle-*, mutex*? Is there any more detailed documentation about these metrics and their sub
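Those names are internal perf counters; each daemon exposes them through its admin socket, e.g.:

    # List every available counter with its type and description
    ceph daemon osd.0 perf schema
    # Dump current values (includes AsyncMessenger::Worker-*, WBThrottle and throttle-* sections)
    ceph daemon osd.0 perf dump
    # Dump a single section only
    ceph daemon osd.0 perf dump WBThrottle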

[ceph-users] Ceph Swift API with rgw_dns_name

2018-09-14 Thread Huseyin Cotuk
Hello Cephers, I am using my Ceph cluster as an object storage backend with both the S3 and Swift APIs. In order to benefit from S3 bucket subdomain access, I use the rgw_dns_name directive for the rados gw instances. When I define rgw_dns_name, the Swift API stops working for the OpenStack object storage backend.
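One commonly suggested workaround (not confirmed as the fix for this case) is to register the S3 hostname at the zonegroup level instead of relying on rgw_dns_name alone, so that requests addressed to other hostnames, such as the Swift endpoint, are still served; hostnames below are placeholders:

    radosgw-admin zonegroup get > zonegroup.json
    # edit zonegroup.json and set: "hostnames": ["s3.example.com"]
    radosgw-admin zonegroup set --infile zonegroup.json
    radosgw-admin period update --commit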

[ceph-users] dm-writecache

2018-09-14 Thread Dan van der Ster
Hi, Has anyone tried the new dm-writecache target that landed in 4.18 [1]? Might be super useful in the osd context... Cheers, Dan [1] https://www.phoronix.com/scan.php?page=news_item&px=Linux-4.18-DM-Writecache
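For reference, the target is driven through dmsetup; a minimal, untested sketch with device paths as placeholders ('s' selects an SSD cache device, 'p' persistent memory):

    # table format: <start> <len> writecache <p|s> <origin dev> <cache dev> <block size> <# optional args>
    dmsetup create osd-wc --table \
        "0 $(blockdev --getsz /dev/sdb) writecache s /dev/sdb /dev/nvme0n1p1 4096 0"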

Re: [ceph-users] Benchmark does not show gains with DB on SSD

2018-09-14 Thread Eugen Block
Hi, Between tests we destroyed the OSDs and created them from scratch. We used a Docker image to deploy Ceph on one machine. I've seen that there are WAL/DB partitions created on the disks. Should I also check somewhere in the Ceph config that it actually uses those? if you created them from
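A quick way to verify where BlueStore actually placed its DB/WAL, independent of how the OSDs were deployed:

    # Device/partition paths recorded at OSD creation time
    ceph osd metadata 0 | grep bluefs
    # BlueFS usage counters (db_used_bytes etc.) via the admin socket
    ceph daemon osd.0 perf dump bluefs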

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread kefu chai
On Fri, Sep 14, 2018 at 10:07 PM John Spray wrote: > > On Fri, Sep 14, 2018 at 2:26 PM David Turner wrote: > > > > Release dates > > RHEL 7.4 - July 2017 > > Luminous 12.2.0 - August 2017 > > CentOS 7.4 - September 2017 > > RHEL 7.5 - April 2018 > > CentOS 7.5 - May 2018 > > Mimic 13.2.0 - June

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread John Spray
On Fri, Sep 14, 2018 at 2:26 PM David Turner wrote: > > Release dates > RHEL 7.4 - July 2017 > Luminous 12.2.0 - August 2017 > CentOS 7.4 - September 2017 > RHEL 7.5 - April 2018 > CentOS 7.5 - May 2018 > Mimic 13.2.0 - June 2018 > > In the world of sysadmins it takes time to let new

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread Marc Roos
I agree. I was on centos7.4 and updated to I think luminous 12.2.7, and had something not working related to some python dependency. This was resolved by upgrading to centos7.5 -Original Message- From: David Turner [mailto:drakonst...@gmail.com] Sent: Friday, 14 September 2018

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread David Turner
It's odd to me because this feels like the opposite direction of the rest of Ceph, which is making the management and operation of Ceph simpler and easier. Requiring fast OS upgrades on dot releases of Ceph versions is not that direction at all. On Fri, Sep 14, 2018, 9:25 AM David Turner wrote: > Release dates

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread David Turner
Release dates RHEL 7.4 - July 2017 Luminous 12.2.0 - August 2017 CentOS 7.4 - September 2017 RHEL 7.5 - April 2018 CentOS 7.5 - May 2018 Mimic 13.2.0 - June 2018 In the world of sysadmins it takes time to let new releases/OS's simmer before beginning to test them let alone upgrading to them. It

[ceph-users] AsyncConnection seems to keep buffers allocated for longer than necessary

2018-09-14 Thread Charles-François Natali
See http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029780.html for the original thread. Here is a trivial reproducer not using any aio or dynamically allocated memory to store the objects read. It simply reads 20,000 1 MB objects sequentially: when run, instead of using a

Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-14 Thread Maged Mokhtar
On 14/09/18 12:13, Alex Lupsa wrote: Hi, Thank you for the answer Ronny. I did indeed try 2x RBD drives (rbd-cache was already active), striping them, and got double write/read speed instantly. So I am chalking this one up to KVM, which is single-threaded and not fully ceph-aware, it seems.

Re: [ceph-users] Standby mgr stopped sending beacons after upgrade to 12.2.8

2018-09-14 Thread John Spray
Thanks, it's clear from that backtrace what's going on. Opened http://tracker.ceph.com/issues/35985 John On Fri, Sep 14, 2018 at 11:33 AM Christian Albrecht wrote: > > 14. September 2018 11:31, "John Spray" schrieb: > > > On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote: > > > >> Hi

Re: [ceph-users] RADOS async client memory usage explodes when reading several objects in sequence

2018-09-14 Thread Daniel Goldbach
One of my colleagues believes he's tracked down the source of the missing deallocations in the librados code. I'll let him reply himself with his findings. For now, we've found a workaround: when an object is reread, the memory allocated for it seems to be freed and a new block is allocated for

Re: [ceph-users] Standby mgr stopped sending beacons after upgrade to 12.2.8

2018-09-14 Thread Christian Albrecht
14 September 2018 11:31, "John Spray" wrote: > On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote: > >> Hi all, >> ... >> Let me know if I have to provide more information on this. > > There was very little change in ceph-mgr between 12.2.7 and 12.2.8, so > this is strange. > > You

Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-14 Thread Alex Lupsa
Hi, Thank you for the answer Ronny. I did indeed try 2x RBD drives (rbd-cache was already active), striping them, and got double write/read speed instantly. So I am chalking this one up to KVM, which is single-threaded and not fully ceph-aware, it seems. Although I can see some threads talking about
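Striping can also be pushed down into RBD itself rather than RAID-0 inside the guest, so a single virtual disk spreads each request across more objects and OSDs; a sketch with illustrative pool, image and striping parameters:

    # 64K stripe units scattered round-robin over 4 objects at a time
    rbd create mypool/vm-disk --size 100G --stripe-unit 64K --stripe-count 4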

Re: [ceph-users] cephfs is growing up rapidly

2018-09-14 Thread Zhenshi Zhou
Hi, I use rsync to back up files. I'm not sure if it updates files by removing and retransferring them or by overwriting them in place. The options of my rsync command include '-artuz', and I'm trying to figure out how it works. The MDS logs show no errors; I think it's not the same bug (or it's not a bug at all).
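For reference, rsync's default is neither: it writes each changed file to a hidden temporary file in the destination directory and renames it over the old one, so every changed file gets a fresh inode. Overwriting in place must be requested explicitly:

    # Default behaviour: temporary file + rename
    rsync -artuz /src/ /dest/
    # Update destination files in place instead
    rsync -artuz --inplace /src/ /dest/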

Re: [ceph-users] Standby mgr stopped sending beacons after upgrade to 12.2.8

2018-09-14 Thread John Spray
On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote: > > Hi all, > > after upgrading from 12.2.7 to 12.2.8 the standby mgr instances in my cluster > stopped sending beacons. > The service starts and everything seems to work just fine, but after a period > of time the mgr disappears. > All
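To capture diagnostics like the backtrace referenced elsewhere in this thread, the usual first step is raising the mgr debug level on the affected standby via its admin socket (the daemon name below is an assumption):

    ceph daemon mgr.$(hostname -s) config set debug_mgr 20
    # then watch /var/log/ceph/ceph-mgr.*.log while the beacons stop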

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-14 Thread Stefan Kooman
Quoting John Spray (jsp...@redhat.com): > On Thu, Sep 13, 2018 at 11:01 AM Stefan Kooman wrote: > We implement locking, and it's correct that another client can't gain > the lock until the first client is evicted. Aside from speeding up > eviction by modifying the timeout, if you have another
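For reference, stale sessions can be listed and evicted by hand instead of waiting for the session timeout; the MDS rank and client id below are illustrative:

    # Find the stale client's session id
    ceph tell mds.0 client ls
    # Evict it explicitly
    ceph tell mds.0 client evict id=4305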

Re: [ceph-users] cephfs is growing up rapidly

2018-09-14 Thread John Spray
On Fri, Sep 14, 2018 at 7:25 AM Zhenshi Zhou wrote: > > Hi, > > I have a ceph cluster of version 12.2.5 on centos7. > > I created 3 pools, 'rbd' for rbd storage, as well as 'cephfs_data' > and 'cephfs_meta' for cephfs. Cephfs is used for backing up by > rsync and volumes mounting by docker. > >

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread John Spray
On Fri, Sep 14, 2018 at 3:48 AM kefu chai wrote: > > hi ceph-{maintainers,users,developers}, > > recently, i ran into an issue[0] which popped up when we build Ceph on > centos 7.5, but test it on centos 7.4. as we know, the gperftools-libs > package provides the tcmalloc allocator shared
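A quick way to check which allocator a given build actually links against, and which gperftools build is installed (relevant to the 7.4-vs-7.5 mismatch described above):

    ldd /usr/bin/ceph-osd | grep -i tcmalloc
    rpm -q gperftools-libs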

Re: [ceph-users] osx support and performance testing

2018-09-14 Thread kefu chai
On Wed, Sep 12, 2018 at 11:06 PM Marc Roos wrote: > > > Is this osxfuse, the only and best performing way to mount a ceph > filesystem on an osx client? > http://docs.ceph.com/docs/mimic/dev/macos/ yes. and probably you could reference
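Once a macOS build from those docs is in place, mounting goes through ceph-fuse much as on Linux; a sketch with a placeholder monitor address and mount point:

    sudo mkdir -p /Volumes/cephfs
    sudo ceph-fuse -m mon1.example.com:6789 /Volumes/cephfs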

[ceph-users] cephfs is growing up rapidly

2018-09-14 Thread Zhenshi Zhou
Hi, I have a ceph cluster of version 12.2.5 on centos7. I created 3 pools: 'rbd' for rbd storage, as well as 'cephfs_data' and 'cephfs_meta' for cephfs. CephFS is used for backups via rsync and for volume mounts by Docker. The size of the backup files is 3.5T. Besides, Docker uses less than 60G
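Comparing what the clients see with what the pools actually hold is the usual first step when CephFS usage grows unexpectedly:

    # Per-pool raw and stored usage, including cephfs_data and cephfs_meta
    ceph df detail
    # Object counts per pool
    rados df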