Hi,
I've recently tried to set up a user for CephFS running on a pair of
replicated+erasure pools, but after I ran
ceph fs authorize ecfs client.samba / rw
the "client.samba" user could only see directory listings, but couldn't read
or write any files. I've tried to look in the logs and to raise the
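For reference, a rough sketch of how one might inspect the caps that "ceph fs
authorize" generated and widen the OSD cap to name the data pools explicitly
(the pool names below are placeholders, not taken from the original thread):

  # show the caps currently granted to the client
  ceph auth get client.samba

  # explicitly grant rw on the data pools backing the filesystem
  # (pool names here are hypothetical)
  ceph auth caps client.samba \
    mon 'allow r' \
    mds 'allow rw' \
    osd 'allow rw pool=ecfs_data, allow rw pool=ecfs_ec_data'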
Thanks Greg!
Will test and report back on Monday.
Cheers,
Charles
On Fri, 14 Sep 2018, 20:32 Gregory Farnum, wrote:
> [Adding ceph-devel]
>
> On Fri, Sep 14, 2018 at 5:22 AM, Charles-François Natali
> wrote:
> > See
> >
>
On Thu, Sep 13, 2018 at 3:05 PM, Graham Allan wrote:
> I'm now following up on my earlier message regarding data migration from old
> to new hardware in our Ceph cluster. As part of this we wanted to move to
> device-class-based crush rules. For the replicated pools the directions for
> this were
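For context, the device-class-aware rules themselves are usually created with
commands along these lines (rule, profile, and class names are placeholders):

  # replicated rule restricted to a device class
  ceph osd crush rule create-replicated replicated_hdd default host hdd

  # erasure-coded rule: bake the device class into the EC profile first
  ceph osd erasure-code-profile set ec42_hdd k=4 m=2 crush-device-class=hdd
  ceph osd crush rule create-erasure ec42_hdd_rule ec42_hdd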
[Adding ceph-devel]
On Fri, Sep 14, 2018 at 5:22 AM, Charles-François Natali
wrote:
> See
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029780.html
> for the original thread.
>
> Here is a trivial reproducer not using any aio or dynamically allocated
> memory to store the
One theory:
When running mixed versions, sharing osdmaps is less efficient, because
the sender must re-encode the map in a compatible way for the old version
to interpret. This is normally not a huge deal, but with a large
cluster it probably presents a significant CPU overhead. My guess is
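On Luminous or newer, a quick way to confirm that a cluster really is running
mixed daemon versions is:

  # per-component breakdown of the versions the running daemons report
  ceph versions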
Hi Cephers, I'm building a more advanced Grafana dashboard with many
latency and sub-operation metrics to share with the community.
Where can I read more about AsyncMessenger::Worker-(0-3), WBThrottle,
throttle-*, mutex*? Is there more detailed documentation about these
metrics and their sub
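The counters in question come from the daemons' perf counter subsystem; a
hedged starting point for exploring them (osd.0 is just an example daemon):

  # list every perf counter the daemon exposes, with a short description
  ceph daemon osd.0 perf schema

  # dump current values; the AsyncMessenger::Worker-*, WBThrottle and
  # throttle-* sections show up in this output
  ceph daemon osd.0 perf dump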
Hello Cephers,
I am using my Ceph cluster as an object storage backend with both the S3 and
Swift APIs. In order to benefit from S3 bucket subdomain access, I use the
rgw_dns_name directive on the RGW instances. When I define rgw_dns_name, the
Swift API stops working for the OpenStack object storage backend.
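For context, a hedged sketch of the relevant ceph.conf section (the instance
name and values are placeholders): rgw_dns_name enables hostname-style bucket
access, and rgw_swift_url / rgw_swift_account_in_url are the knobs that most
often interact with the Swift endpoint:

  [client.rgw.gateway1]
  rgw_dns_name = objects.example.com
  rgw_swift_url = http://objects.example.com:8080
  rgw_swift_account_in_url = true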
Hi,
Has anyone tried the new dm-writecache target that landed in 4.18 [1]?
Might be super useful in the osd context...
Cheers, Dan
[1] https://www.phoronix.com/scan.php?page=news_item&px=Linux-4.18-DM-Writecache
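For anyone wanting to experiment, a heavily hedged sketch of building a
writecache device by hand with dmsetup (device paths are placeholders; the
table is: start, length, target, cache type p=pmem/s=ssd, origin device,
cache device, block size, number of optional args):

  # size of the origin device in 512-byte sectors
  SECTORS=$(blockdev --getsz /dev/sdb)

  # put an SSD-backed writecache in front of /dev/sdb
  dmsetup create osd-wc --table "0 $SECTORS writecache s /dev/sdb /dev/nvme0n1p1 4096 0"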
Hi,
Between tests we destroyed the OSDs and created them from scratch. We used a
Docker image to deploy Ceph on one machine.
I've seen that there are WAL/DB partitions created on the disks.
Should I also check somewhere in the Ceph config that it actually uses those?
if you created them from
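For what it's worth, a hedged way to check whether a running OSD has actually
picked up separate WAL/DB devices (the OSD id and data path are placeholders):

  # the OSD's metadata lists its bluefs db/wal partition paths, if any
  ceph osd metadata 0 | grep -E 'bluefs|path'

  # or read the BlueStore labels straight from the OSD directory
  ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0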
On Fri, Sep 14, 2018 at 10:07 PM John Spray wrote:
>
> On Fri, Sep 14, 2018 at 2:26 PM David Turner wrote:
> >
> > Release dates
> > RHEL 7.4 - July 2017
> > Luminous 12.2.0 - August 2017
> > CentOS 7.4 - September 2017
> > RHEL 7.5 - April 2018
> > CentOS 7.5 - May 2018
> > Mimic 13.2.0 - June
On Fri, Sep 14, 2018 at 2:26 PM David Turner wrote:
>
> Release dates
> RHEL 7.4 - July 2017
> Luminous 12.2.0 - August 2017
> CentOS 7.4 - September 2017
> RHEL 7.5 - April 2018
> CentOS 7.5 - May 2018
> Mimic 13.2.0 - June 2018
>
> In the world of sysadmins it takes time to let new
I agree. I was on CentOS 7.4 and updated to, I think, Luminous 12.2.7, and
had something not working related to a Python dependency. This was
resolved by upgrading to CentOS 7.5.
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday, 14 September 2018
It's odd to me because this feels like the opposite of the direction the rest
of Ceph is heading: making managing and operating Ceph simpler and easier.
Requiring fast OS upgrades on dot releases of Ceph versions is not that
direction at all.
On Fri, Sep 14, 2018, 9:25 AM David Turner wrote:
> Release dates
Release dates
RHEL 7.4 - July 2017
Luminous 12.2.0 - August 2017
CentOS 7.4 - September 2017
RHEL 7.5 - April 2018
CentOS 7.5 - May 2018
Mimic 13.2.0 - June 2018
In the world of sysadmins it takes time to let new releases/OSes simmer
before beginning to test them, let alone upgrading to them. It
See
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029780.html
for
the original thread.
Here is a trivial reproducer that doesn't use any AIO or dynamically allocated
memory to store the objects read.
It simply reads 20,000 1 MB objects sequentially: when run, instead of
using a
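For reference, only the shape of that workload (not the in-process librados
reproducer itself), sketched with the rados CLI; pool and object names are
made up:

  # sequentially read 20,000 1 MB objects
  for i in $(seq 0 19999); do
    rados -p testpool get "obj_$i" /dev/null
  done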
On 14/09/18 12:13, Alex Lupsa wrote:
Hi,
Thank you for the answer, Ronny. I did indeed try 2x RBD drives
(rbd cache was already active), striping them, and got double
write/read speed instantly. So I am chalking this one up to KVM, which is
single-threaded and not fully Ceph-aware, it seems.
Thanks, it's clear from that backtrace what's going on. Opened
http://tracker.ceph.com/issues/35985
John
On Fri, Sep 14, 2018 at 11:33 AM Christian Albrecht wrote:
>
> 14 September 2018 11:31, "John Spray" wrote:
>
> > On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote:
> >
> >> Hi
One of my colleagues believes he's tracked down the source of the missing
deallocations in the librados code. I'll let him reply himself with his
findings. For now, we've found a workaround: when an object is reread, the
memory allocated for it seems to be freed and a new block is allocated for
14 September 2018 11:31, "John Spray" wrote:
> On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote:
>
>> Hi all,
>> ...
>> Let me know I have to provide more information on this.
>
> There was very little change in ceph-mgr between 12.2.7 and 12.2.8, so
> this is strange.
>
> You
Hi,
Thank you for the answer, Ronny. I did indeed try 2x RBD drives (rbd cache
was already active), striping them, and got double write/read speed
instantly. So I am chalking this one up to KVM, which is single-threaded and
not fully Ceph-aware, it seems. Although I can see some threads talking about
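As a side note, RBD can also stripe a single image across more objects at
creation time, which can spread sequential I/O from one client over more
OSDs; a hedged sketch (pool/image names and sizes are placeholders):

  rbd create testpool/striped-img --size 102400 \
    --object-size 4M --stripe-unit 1M --stripe-count 4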
Hi,
I use rsync to back up files. I'm not sure whether it updates files by
removing and retransferring them or by overwriting them in place. The options
of my rsync command include '-artuz', and I'm trying to figure out how it works.
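For what it's worth, the two behaviours can be selected explicitly: by default
rsync writes a temporary copy and renames it over the destination (so the
filesystem sees a new file), while --inplace overwrites the existing file
directly:

  # default: delta-transfer into a temp file, then rename over the target
  rsync -artuz /src/ /dst/

  # overwrite destination files in place instead of recreating them
  rsync -artuz --inplace /src/ /dst/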
The MDS logs show no errors, so I think it's not the same bug (or it's not
a bug at all).
On Thu, Sep 13, 2018 at 7:55 PM Christian Albrecht wrote:
>
> Hi all,
>
> after upgrading from 12.2.7 to 12.2.8 the standby mgr instances in my cluster
> stopped sending beacons.
> The service starts and everything seems to work just fine, but after a period
> of time the mgr disappears.
> All
Quoting John Spray (jsp...@redhat.com):
> On Thu, Sep 13, 2018 at 11:01 AM Stefan Kooman wrote:
> We implement locking, and it's correct that another client can't gain
> the lock until the first client is evicted. Aside from speeding up
> eviction by modifying the timeout, if you have another
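For reference, a hedged sketch of the commands usually involved in listing
sessions, evicting a client by hand, and shortening the automatic eviction
timeout (MDS name, client id and filesystem name are placeholders):

  # list current client sessions on the active MDS
  ceph tell mds.a client ls

  # evict a stuck client explicitly
  ceph tell mds.a client evict id=4305

  # shorten the stale-session timeout (seconds); on older releases this is
  # the mds_session_autoclose config option instead
  ceph fs set cephfs session_autoclose 120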
On Fri, Sep 14, 2018 at 7:25 AM Zhenshi Zhou wrote:
>
> Hi,
>
> I have a ceph cluster of version 12.2.5 on centos7.
>
> I created 3 pools, 'rbd' for rbd storage, as well as 'cephfs_data'
> and 'cephfs_meta' for cephfs. Cephfs is used for backing up by
> rsync and volumes mounting by docker.
>
>
On Fri, Sep 14, 2018 at 3:48 AM kefu chai wrote:
>
> hi ceph-{maintainers,users,developers},
>
> Recently, I ran into an issue [0] which popped up when we build Ceph on
> CentOS 7.5 but test it on CentOS 7.4. As we know, the gperftools-libs
> package provides the tcmalloc allocator shared
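A hedged way to check, on a given box, which gperftools library the OSD
binary actually resolves tcmalloc from:

  ldd /usr/bin/ceph-osd | grep -i tcmalloc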
On Wed, Sep 12, 2018 at 11:06 PM Marc Roos wrote:
>
>
> Is osxfuse the only and best-performing way to mount a Ceph
> filesystem on an OS X client?
> http://docs.ceph.com/docs/mimic/dev/macos/
Yes, and probably you could reference
Hi,
I have a ceph cluster of version 12.2.5 on centos7.
I created 3 pools, 'rbd' for rbd storage, as well as 'cephfs_data'
and 'cephfs_meta' for cephfs. Cephfs is used for backing up by
rsync and volumes mounting by docker.
The size of the backup files is 3.5T. Besides, Docker uses less than
60G