[ceph-users] Re: looking for telegram group in English or Chinese

2020-05-26 Thread Konstantin Shalygin
On 5/26/20 1:13 PM, Zhenshi Zhou wrote: Is there any telegram group for communicating with ceph users? AFAIK there is only a Russian (CIS) group [1], but feel free to join and write in English! [1] https://t.me/ceph_ru k

[ceph-users] Re: Multisite RADOS Gateway replication factor in zonegroup

2020-05-26 Thread Konstantin Shalygin
On 5/25/20 9:50 PM, alexander.vysoc...@megafon.ru wrote: I didn't find any information about the replication factor in the zone group. Assume I have three ceph clusters with Rados Gateway in one zone group, each with replica size 3. How many replicas of an object will I get in total? Is it
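A back-of-the-envelope way to reason about it, assuming all three zones in the zonegroup sync the data (the usual multisite setup): each zone keeps its own full copy in its local pools, so the total number of physical copies is roughly zones x local replica size, e.g. 3 zones x size 3 = 9 copies of each object across the three clusters.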

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-26 Thread Paul Emmerich
Don't optimize stuff without benchmarking *before and after*, and don't apply random tuning tips from the Internet without benchmarking them. My experience with Jumbo frames: about 3% performance gain, on an NVMe-only setup with a 100 Gbit/s network. Paul -- Paul Emmerich Looking for help with your Ceph
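A minimal before/after check along those lines could look like the sketch below (testpool and the hostnames are placeholders; run the same commands with the old and the new MTU and compare):

    # raw network throughput between two cluster hosts
    iperf3 -s                    # on host A
    iperf3 -c hostA -t 30        # on host B

    # cluster-level baseline: 4 MiB writes, then sequential reads
    rados bench -p testpool 30 write -b 4194304 -t 16 --no-cleanup
    rados bench -p testpool 30 seq -t 16
    rados -p testpool cleanup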

[ceph-users] Re: Prometheus Python Errors

2020-05-26 Thread Ernesto Puerta
This has recently been fixed in master (I just submitted backport PRs for octopus and nautilus). BTW the fix is pretty trivial

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-26 Thread Marc Roos
Look what I have found!!! :) https://ceph.com/geen-categorie/ceph-loves-jumbo-frames/ -Original Message- From: Anthony D'Atri [mailto:anthony.da...@gmail.com] Sent: Monday, 25 May 2020 22:12 To: Marc Roos Cc: kdhall; martin.verges; sstkadu; amudhan83; ceph-users; doustar Subject:

[ceph-users] Re: move bluestore wal/db

2020-05-26 Thread Eneko Lacunza
Hi, Yes, it can be done (shutting down the OSD, but no rebuild required); we did it to resize the WAL partition to a bigger one. A simple Google search will help; I can paste the procedure we followed, but it's in Spanish :( Cheers On 26/5/20 at 17:20, Frank R wrote: Is there a safe
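For reference, recent releases (Nautilus and later) ship ceph-bluestore-tool with a bluefs-bdev-migrate command that can move the DB/WAL to another device while the OSD is stopped. A rough sketch, assuming osd.12 and a new partition /dev/nvme0n1p1 (both placeholders):

    systemctl stop ceph-osd@12
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-12 \
        --devs-source /var/lib/ceph/osd/ceph-12/block.db \
        --dev-target /dev/nvme0n1p1
    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-12
    systemctl start ceph-osd@12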

[ceph-users] move bluestore wal/db

2020-05-26 Thread Frank R
Is there a safe way to move the bluestore wal and db to a new device that doesn't involve rebuilding the entire OSD?

[ceph-users] Re: OSDs taking too much memory, for buffer_anon

2020-05-26 Thread Mark Nelson
Hi Harald, Yeah, I suspect your issue is definitely related to what Adam has been investigating. FWIW, we are talking about re-introducing a periodic trim in Adam's PR here: https://github.com/ceph/ceph/pull/35171 That should help on the memory growth side, but if we still have objects
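Until that lands, a couple of commands that can help keep an eye on the growth (osd.7 and the 4 GiB target are just examples, not from the thread):

    # per-pool memory accounting, buffer_anon included
    ceph daemon osd.7 dump_mempools

    # cap the memory autotuner's target for all OSDs (value in bytes)
    ceph config set osd osd_memory_target 4294967296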

[ceph-users] Ceph client on rhel6?

2020-05-26 Thread Simon Sutter
Hello again, I have a new question: We want to upgrade a server with an OS based on RHEL 6. The ceph cluster is currently on Octopus. How can I install the client packages to mount cephfs and do a backup of the server? Is it even possible? Are the client packages from hammer compatible with the
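If a workable client can be installed at all, the mount itself would look something like this sketch (mon1, backupuser and the secret file are placeholders; whether such an old kernel/client can talk to an Octopus cluster is exactly the open question here):

    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=backupuser,secretfile=/etc/ceph/backupuser.secret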

[ceph-users] Re: mds container dies during deployment

2020-05-26 Thread Simon Sutter
Hello, I didn't read the right one: https://docs.ceph.com/docs/master/cephadm/install/#deploy-mdss There it says how to do it right. The command I was using just adds an MDS daemon if you already have one. Hope it helps others. Cheers, Simon From: Simon
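For anyone else hitting this, the commands on that page are roughly along these lines (the filesystem name and placement count are just examples):

    # create the filesystem and let cephadm schedule the MDS daemons
    ceph fs volume create cephfs

    # or, with existing pools, schedule the daemons explicitly
    ceph orch apply mds cephfs --placement=3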

[ceph-users] dealing with spillovers

2020-05-26 Thread thoralf schulze
hi there, trying to get my head around rocksdb spillovers and how to deal with them … in particular, i have one osd which does not have any pools associated (as per ceph pg ls-by-osd $osd), yet it does show up in ceph health detail as: osd.$osd spilled over 2.9 MiB metadata from 'db'
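A couple of commands commonly used to inspect (and sometimes clear) a small spillover like this one — osd.$osd as in the message above; compaction only helps if the metadata actually fits back into the db partition:

    # how much bluefs data sits on the db vs. the slow device
    ceph daemon osd.$osd perf dump bluefs

    # trigger a manual rocksdb compaction
    ceph tell osd.$osd compact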

[ceph-users] Re: Nautilus: (Minority of) OSDs with huge buffer_anon usage - triggering OOMkiller in worst cases.

2020-05-26 Thread aoanla
Hi Mark, thanks for your efforts on this already. I had to wait for my account on tracker.ceph to be approved before I could submit the bug - which is here: https://tracker.ceph.com/issues/45706 Sam

[ceph-users] Performance issues in newly deployed Ceph cluster

2020-05-26 Thread Loschwitz,Martin Gerhard
Folks, I am running into a very strange issue with a brand new Ceph cluster during initial testing. The cluster consists of 12 nodes; 4 of them have SSDs only, the other eight have a mixture of SSDs and HDDs. The latter nodes are configured so that three or four HDDs use one SSD for their blockdb.
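Not from the original message, but for comparing the SSD-only and hybrid parts of a cluster like this, a quick per-pool baseline is often the first step (pool names are placeholders):

    rados bench -p ssd-pool 30 write --no-cleanup
    rados bench -p hdd-pool 30 write --no-cleanup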

[ceph-users] Cephadm Setup Query

2020-05-26 Thread Shivanshi .
Hi, I am facing an issue with a Cephadm cluster setup. Whenever I try to add remote devices as OSDs, the command just hangs. The steps I have followed: sudo ceph orch daemon add osd node1:device For the setup I have followed the steps mentioned in
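A couple of checks that often help narrow down a hang like this (not from the original thread, just common cephadm diagnostics; node1 as above):

    # does cephadm see the device as available on that host?
    ceph orch device ls node1

    # recent cephadm/orchestrator log messages
    ceph log last cephadm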

[ceph-users] looking for telegram group in English or Chinese

2020-05-26 Thread Zhenshi Zhou
Hi all, Is there any telegram group for communicating with ceph users?

[ceph-users] Re: RGW Multisite metadata sync

2020-05-26 Thread Zhenshi Zhou
I encountered the same issue. I found that I had missed the restart step, and after restarting the rgw I could commit the period. What's more, I renamed the default zone as well as the zonegroup. Sailaja Yedugundla wrote on Tue, May 26, 2020 at 11:06 AM: > Yes. I restarted the rgw service on master zone before committing
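For reference, the rename-restart-commit sequence being described is roughly the following sketch (the new zone/zonegroup names are examples; run on the master zone and restart all RGW instances before committing):

    radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name us
    radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup default
    systemctl restart ceph-radosgw@rgw.$(hostname -s)
    radosgw-admin period update --commit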