[ceph-users] issue with monitors

2020-08-27 Thread techno10
I'm running the following: ceph --version on node1 reports ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable), on Fedora 32, installed from the built-in repos. I'm running into a simple issue that's rather frustrating. Here is a set of commands I'm running and their output:

[ceph-users] Re: Is it possible to mount a cephfs within a container?

2020-08-27 Thread steven prothero
Hello, Octopus 15.2.4. Just as a test, I put each of my OSDs inside an LXD container, set up CephFS, mounted it inside an LXD container, and it works.

[ceph-users] Fwd: Ceph Upgrade Issue - Luminous to Nautilus (14.2.11 ) using ceph-ansible

2020-08-27 Thread Suresh Rama
Hi All, We encountered an issue while upgrading our Ceph cluster from Luminous 12.2.12 to Nautilus 14.2.11. We used https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous and ceph-ansible to upgrade the cluster. We use HDDs for data and NVMe for WAL and DB.
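For reference, the finalization steps at the end of the linked Nautilus upgrade notes look roughly like the following (a hedged sketch; the authoritative sequence is in the linked document):

  ceph osd require-osd-release nautilus   # once every daemon runs 14.2.x
  ceph mon enable-msgr2                   # enable the v2 wire protocol on the monitors
  ceph versions                           # confirm all daemons report 14.2.11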

[ceph-users] Re: [cephadm] Deploy Ceph in a closed environment

2020-08-27 Thread Tony Liu
Please discard this question, I figured it out. Tony > -Original Message- > From: Tony Liu > Sent: Thursday, August 27, 2020 1:55 PM > To: ceph-users@ceph.io > Subject: [ceph-users] [cephadm] Deploy Ceph in a closed environment > > Hi, > > I'd like to deploy Ceph in a closed environment

[ceph-users] [cephadm] Deploy Ceph in a closed environment

2020-08-27 Thread Tony Liu
Hi, I'd like to deploy Ceph in a closed environment (no connectivity to the public internet). I will build a repository and a registry to hold the required packages and container images. How do I specify the private registry when running "cephadm bootstrap"? The same question applies to adding OSDs. Thanks! Tony
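A minimal sketch of how this is usually done, assuming a hypothetical mirror name; cephadm takes a global --image flag pointing at the mirrored Ceph container image, and later daemons (including OSDs) are deployed from that same image:

  # registry.example.local is a placeholder for the private registry
  cephadm --image registry.example.local:5000/ceph/ceph:v15.2.4 \
      bootstrap --mon-ip 192.168.10.11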

[ceph-users] Re: ceph auth ls

2020-08-27 Thread Marc Roos
This is what I mean: this guy is just posting all his keys. https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg26140.html -Original Message- To: ceph-users Subject: [ceph-users] ceph auth ls Am I the only one that thinks it is not necessary to dump these keys with every

[ceph-users] Is it possible to mount a cephfs within a container?

2020-08-27 Thread Marc Roos
I am getting this; on an OSD node I am able to mount the path. adding ceph secret key to kernel failed: Operation not permitted
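For comparison, a kernel-client mount that works on an OSD node looks roughly like this (monitor address, client name and secret file are placeholders); inside an unprivileged container the same call can fail with "Operation not permitted" because the mount and keyring operations need capabilities the container typically does not have:

  # the secret file contains only the base64 key for client.cephfs
  mount -t ceph 192.168.10.11:6789:/ /mnt/cephfs \
      -o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret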

[ceph-users] ceph auth ls

2020-08-27 Thread Marc Roos
Am I the only one who thinks it is not necessary to dump these keys with every command (ls and get)? Either remove these keys from auth ls and auth get, or remove the commands "auth print_key", "auth print-key" and "auth get-key".
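For reference, the commands being compared; a key can already be fetched explicitly on demand, which is the argument for not printing it in the listing output:

  ceph auth ls                     # lists every entity, currently including its key
  ceph auth get client.admin       # shows one entity, also including the key
  ceph auth get-key client.admin   # prints only the key, when it is actually needed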

[ceph-users] Re: Fwd: Upgrade Path Advice Nautilus (CentOS 7) -> Octopus (new OS)

2020-08-27 Thread Cloud Guy
On Thu, 27 Aug 2020 at 13:21, Anthony D'Atri wrote: > > Looking for a bit of guidance / approach to upgrading from Nautilus to > > Octopus considering CentOS and Ceph-Ansible. > > > > We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as > > of this post). > -

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread DHilsbos
Dallas; It looks to me like you will need to wait until data movement naturally resolves the near-full issue. So long as you continue to have this: "io: recovery: 477 KiB/s, 330 keys/s, 29 objects/s", the cluster is working. That said, there are some things you can do. 1) The near-full
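The reply is cut off here; as a hedged sketch (not necessarily the steps the author had in mind), the usual knobs in a near-full situation are along these lines:

  ceph osd set-nearfull-ratio 0.90     # raise the warning threshold slightly
  ceph osd set-full-ratio 0.97         # last resort only, and only temporarily
  ceph osd reweight-by-utilization     # nudge data off the fullest OSDs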

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-27 Thread Tony Liu
How does the WAL utilize the disk when it shares the same device with the DB? Say the device size is 50G, 100G or 200G; it makes no difference to the DB because the DB will take 30G anyway. Does it make any difference to the WAL? Thanks! Tony > -Original Message- > From: Zhenshi Zhou > Sent: Wednesday, August 26, 2020

[ceph-users] Re: Fwd: Upgrade Path Advice Nautilus (CentOS 7) -> Octopus (new OS)

2020-08-27 Thread Anthony D'Atri
> > Looking for a bit of guidance / approach to upgrading from Nautilus to > Octopus considering CentOS and Ceph-Ansible. > > We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as > of this post). > - There are 4 monitor-hosts with mon, mgr, and dashboard functions >

[ceph-users] Re: Ceph Tech Talk: Secure Token Service in the Rados Gateway

2020-08-27 Thread Mike Perez
Hi everyone, In 30 minutes join us for this month's Ceph Tech Talk: Secure Token Service in RGW: https://ceph.io/ceph-tech-talks/ On Thu, Aug 13, 2020 at 1:11 PM Mike Perez wrote: > Hi everyone, > > Join us August 27th at 17:00 UTC to hear Pritha Srivastava present on this > month's Ceph Tech

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Anthony D'Atri
Is your MUA wrapping lines, or is it the list software? As predicted. Look at the VAR column and the STDDEV of 37.27 > On Aug 27, 2020, at 9:02 AM, Dallas Jones wrote: > > -1 122.79410 - 123 TiB 42 TiB 41 TiB 217 GiB 466 GiB 81 TiB 33.86 1.00 - root default > -3

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Dallas Jones
The new drives are larger in capacity than the first drives I added to the cluster, but they're all SAS HDDs. cephuser@ceph01:~$ ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 122.79410 - 123 TiB 42 TiB

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Anthony D'Atri
Doubling the capacity in one shot was a big topology change, hence the 53% misplaced. OSD fullness will naturally reflect a bell curve; there will be a tail of under-full and over-full OSDs. If you’d not said that your cluster was very full before expansion I would have predicted it from the

[ceph-users] Re: Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Eugen Block
Hi, are the new OSDs in the same root, and is it the same device class? Can you share the output of 'ceph osd df tree'? Quoting Dallas Jones: My 3-node Ceph cluster (14.2.4) has been running fine for months. However, my data pool became close to full a couple of weeks ago, so I added 12

[ceph-users] Cluster degraded after adding OSDs to increase capacity

2020-08-27 Thread Dallas Jones
My 3-node Ceph cluster (14.2.4) has been running fine for months. However, my data pool became close to full a couple of weeks ago, so I added 12 new OSDs, roughly doubling the capacity of the cluster. The pool size has not changed, though, and the health of the cluster has changed for the worse.

[ceph-users] Re: Recover pgs from failed osds

2020-08-27 Thread Eugen Block
What is the memory_target for your OSDs? Can you share more details about your setup? You write about high memory usage; are the OSD nodes affected by the OOM killer? You could try to reduce the osd_memory_target and see if that helps bring the OSDs back up. Splitting the PGs is a very heavy
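A minimal sketch of that suggestion, assuming the cluster uses centralized configuration (ceph config); the default osd_memory_target is 4 GiB, so halving it is a common first step:

  ceph config get osd osd_memory_target            # check the current value
  ceph config set osd osd_memory_target 2147483648 # 2 GiB for all OSDs
  systemctl restart ceph-osd@NN                    # restart the affected OSDs (NN is a placeholder)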

[ceph-users] Re: kernel: ceph: mdsmap_decode got incorrect state(up:standby-replay)

2020-08-27 Thread Stefan Kooman
Hi list (and cephfs devs :-)), On 2020-04-29 17:43, Jake Grimmett wrote: > ...the "mdsmap_decode" errors stopped suddenly on all our clients... > > Not exactly sure what the problem was, but restarting our standby mds > daemons seems to have been the fix. > > Here's the log on the standby mds

[ceph-users] Re: [Ceph Octopus 15.2.3 ] MDS crashed suddenly

2020-08-27 Thread carlimeunier
Hello, Same issue with another cluster. Here is the coredump tag: 41659448-bc1b-4f8a-b563-d1599e84c0ab Thanks, Carl

[ceph-users] Re: rados df with nautilus / bluestore

2020-08-27 Thread Igor Fedotov
Hi Manuel, this behavior was primarily updated in Nautilus by https://github.com/ceph/ceph/pull/19454 Per-pool stats under the "POOLS" section are now the most precise means to answer various questions about space utilization. The "STORED" column provides the net amount of data for a specific pool.
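A hedged illustration of where those figures show up on Nautilus:

  ceph df detail   # POOLS section: STORED is net data, USED is the raw (replicated) figure
  rados df         # the USED column on newer clusters likewise appears to be the raw figure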

[ceph-users] Re: radowsgw still needs dedicated clientid?

2020-08-27 Thread Wido den Hollander
On 27/08/2020 14:23, Marc Roos wrote: Can someone shed some light on this? Because it makes the difference between running multiple instances of one task and running multiple different tasks. As far as I know this is still required, because the clients talk to each other using RADOS notifies and

[ceph-users] Recover pgs from failed osds

2020-08-27 Thread Vahideh Alinouri
vahideh.alino...@gmail.com

[ceph-users] Recover pgs from failed osds

2020-08-27 Thread Vahideh Alinouri
The Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd nodes we have high I/O wait. After increasing one pool's pg_num from 64 to 128 in response to a warning message (more objects per pg), we saw high CPU load and RAM usage on the ceph-osd nodes, and finally the whole cluster crashed.
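For context, the change that triggered this was the PG split; a hedged sketch of the commands involved, with the pool name as a placeholder:

  ceph osd pool set <pool> pg_num 128
  # on Nautilus and later, pgp_num is raised automatically to follow pg_num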

[ceph-users] Re: radowsgw still needs dedicated clientid?

2020-08-27 Thread Marc Roos
Can someone shed some light on this? Because it makes the difference between running multiple instances of one task and running multiple different tasks. -Original Message- To: ceph-users Subject: [ceph-users] radowsgw still needs dedicated clientid? I think I can remember reading somewhere

[ceph-users] Re: RandomCrashes on OSDs Attached to Mon Hosts with Octopus

2020-08-27 Thread Denis Krienbühl
Hi Igor, Just to clarify: >> I grepped the logs for "checksum mismatch" and "_verify_csum". The only >> occurrences I could find were the ones that precede the crashes. > > Are you able to find multiple _verify_csum precisely? There are no "_verify_csum" entries whatsoever. I wrote that

[ceph-users] Re: slow "rados ls"

2020-08-27 Thread Marcel Kuiper
Sorry, that had to be Wido/Stefan. Another question: how do I use this ceph-kvstore-tool to compact the RocksDB? (I can't find a lot of examples.) The WAL and DB are on a separate NVMe. The directory structure for an OSD looks like: root@se-rc3-st8vfr2t2:/var/lib/ceph/osd# ls -l ceph-174 total
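On the ceph-kvstore-tool question, a hedged sketch assuming the OSD directory shown above; offline compaction requires the daemon to be stopped first:

  systemctl stop ceph-osd@174
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-174 compact
  systemctl start ceph-osd@174

  # online alternative, without stopping the OSD:
  ceph tell osd.174 compact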

[ceph-users] Re: slow "rados ls"

2020-08-27 Thread Marcel Kuiper
Hi Wido/Joost, pg_num is 64. It is not that we use 'rados ls' for operations; we just noticed that on this cluster it takes about 15 seconds to return on the pool .rgw.root or rc3-se.rgw.buckets.index, while our other clusters return almost instantaneously. Is there a way that I can

[ceph-users] Re: Infiniband support

2020-08-27 Thread Max Krasilnikov
Good day! Wed, Aug 26, 2020 at 10:08:57AM -0300, quaglio wrote: > Hi, > I could not see in the doc whether Ceph has InfiniBand support. Is there > someone using it? > Also, is there any RDMA support working natively? > > Can anyone point me to where to find more
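On the RDMA part of the question: the async messenger does have an RDMA backend, generally considered experimental; a hedged ceph.conf sketch, with the device name as a placeholder:

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0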

[ceph-users] export administration regulations issue for ceph community edition

2020-08-27 Thread Peter Parker
Does anyone know of any new statements from the Ceph community or foundation regarding the EAR? I read the legal page of ceph.com, which mentions some information: https://ceph.com/legal-page/terms-of-service/ But I am still not sure whether my clients and I are within the scope of the entity list,

[ceph-users] rados df with nautilus / bluestore

2020-08-27 Thread Manuel Lausch
Hi, we found a very ugly issue in rados df. I have several clusters, all running Ceph Nautilus (14.2.11). We have replicated pools there with replica size 4. On the older clusters, "rados df" shows the net used space in the used column. On our new cluster, rados df shows in the used column the

[ceph-users] Re: RandomCrashes on OSDs Attached to Mon Hosts with Octopus

2020-08-27 Thread Denis Krienbühl
Hi Igor, Thanks for your input. I tried to gather as much information as I could to answer your questions. Hopefully we can get to the bottom of this. > 0) What is the backing disk layout for the OSDs in question (main device type? > additional DB/WAL devices?) Everything is on a single Intel NVMe

[ceph-users] Re: Add OSD with primary on HDD, WAL and DB on SSD

2020-08-27 Thread Zhenshi Zhou
The official documentation says that you should allocate 4% of the slow device space for block.db. But the main problem is that BlueStore uses RocksDB, and RocksDB puts a file on the fast device only if it thinks that the whole level will fit there. As for RocksDB, L1 is about 300M, L2 is about 3G, L3 is
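The excerpt is cut off, but the point is that only whole RocksDB levels spill over to the fast device, so DB sizes between those steps are largely wasted. A hedged sketch of provisioning a fixed-size DB logical volume and attaching it, with device names as placeholders:

  vgcreate ceph-db /dev/nvme0n1
  lvcreate -L 60G -n db-sdb ceph-db
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-sdb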