[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-28 Thread Satish Patel
> > Removed 488729 objects > > Clean up completed and total clean up time: 8.84114 > > On Wed, Aug 23, 2023 at 1:25 PM Adam King wrote: > >> this should be possible by specifying a

[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-25 Thread Satish Patel
ut of "ceph > orch device ls --format json | jq" to see things like what cephadm > considers the model, size etc. for the devices to be for use in the > filtering. > > On Wed, Aug 23, 2023 at 1:13 PM Satish Patel wrote: > >> Folks, >> >> I have 3 nodes w

[ceph-users] cephadm to setup wal/db on nvme

2023-08-23 Thread Satish Patel
Folks, I have 3 nodes, each with 1x NVMe (1TB) and 3x 2.9TB SSDs. I am trying to build Ceph storage using cephadm on the Ubuntu 22.04 distro. If I want to use the NVMe for journaling (WAL/DB) for my SSD-based OSDs, how does cephadm handle it? I am trying to find a document where I can tell cephadm to
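
cephadm handles this through an OSD service spec, where data_devices and db_devices filters decide which disks hold data and which hold the WAL/DB. A minimal sketch for the layout described above (the size filters are assumptions; verify them against `ceph orch device ls` before applying):

    cat > osd_spec.yaml <<'EOF'
    service_type: osd
    service_id: osd_ssd_with_nvme_db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: '2TB:'      # the 2.9TB SSDs become data OSDs
      db_devices:
        size: ':1.5TB'    # the 1TB NVMe holds the WAL/DB
    EOF
    ceph orch apply -i osd_spec.yaml --dry-run   # preview placement before applying for real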

[ceph-users] cephadm docker registry

2023-05-09 Thread Satish Patel
Folks, I am trying to install Ceph on a 10-node cluster and planning to use cephadm. My question is: if I add new nodes to this cluster next year, what docker image version will cephadm use to add the new nodes? Is there a local registry I can create to copy the images locally? How does
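
One hedged approach (registry host and tag below are placeholders): mirror the Ceph image into a local registry and pin the cluster to that exact image, so hosts added later pull the same version rather than whatever the default tag currently resolves to:

    docker pull quay.io/ceph/ceph:v17.2.6
    docker tag quay.io/ceph/ceph:v17.2.6 registry.local:5000/ceph/ceph:v17.2.6
    docker push registry.local:5000/ceph/ceph:v17.2.6

    # cephadm deploys new daemons with the image recorded in the cluster config
    ceph config set global container_image registry.local:5000/ceph/ceph:v17.2.6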

[ceph-users] [Quincy] Module 'devicehealth' has failed: disk I/O error

2023-02-09 Thread Satish Patel
Folks, any idea what is going on? I am running a 3-node Quincy cluster (backing OpenStack) and today I suddenly noticed the following error. I found a reference link but am not sure whether that is my issue or not: https://tracker.ceph.com/issues/51974 root@ceph1:~# ceph -s cluster: id:
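
Not a fix, but a hedged first pass at narrowing this down is to check which module failed, look for recent crashes, and fail over the mgr to see whether the error persists:

    ceph health detail
    ceph mgr module ls
    ceph crash ls-new
    ceph mgr fail        # restart the active mgr, then re-check "ceph health detail"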

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-10-21 Thread Satish Patel
cephadm rm-daemon --name osd.3'. Or you try it with the > orchestrator: ceph orch daemon rm… > I don't have the exact command at the moment, you should check the docs. > > Zitat von Satish Patel : > > > Hi Eugen, > > > > I have deleted the osd.3 directory from datastorn4
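
For reference, a hedged sketch of both removal paths mentioned above (the fsid is a placeholder; confirm the daemon with `cephadm ls` on the affected node first):

    # on the node carrying the stale entry (e.g. datastorn4):
    cephadm ls | grep osd.3
    cephadm rm-daemon --name osd.3 --fsid <fsid> --force

    # or from an admin node via the orchestrator:
    ceph orch daemon rm osd.3 --force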

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-10-21 Thread Satish Patel
them. Do > you see the osd directory in /var/lib/ceph//osd.3 on datastorn4? > Just remove the osd.3 directory, then cephadm won't try to activate it. > > Zitat von Satish Patel : > > > Folks, > > > > I have deployed a 15-OSD-node cluster using cephadm and

[ceph-users] [cephadm] Found duplicate OSDs

2022-10-21 Thread Satish Patel
Folks, I have deployed a 15-OSD-node cluster using cephadm and encountered a duplicate OSD on one of the nodes, and I am not sure how to clean that up. root@datastorn1:~# ceph health HEALTH_WARN 1 failed cephadm daemon(s); 1 pool(s) have no replicas configured osd.3 is duplicated on two nodes; I would

[ceph-users] Re: strange osd error during add disk

2022-09-30 Thread Satish Patel
> What happens if you try to start a cephadm shell on that node? > > > > -Original message- > > From: Satish Patel > > Sent: Thursday, 29 September 2022 21:45 > > To: ceph-users > > Subject: [ceph-users] Re: strange osd error during add disk

[ceph-users] Re: strange osd error during add disk

2022-09-30 Thread Satish Patel
not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > -- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people.

[ceph-users] Re: strange osd error during add disk

2022-09-29 Thread Satish Patel
Bump! Any suggestions? On Wed, Sep 28, 2022 at 4:26 PM Satish Patel wrote: > Folks, > > I have 15 nodes for Ceph and each node has a 160TB disk attached. I am > using the cephadm Quincy release and 14 nodes have been added except one > node, which is giving a very strange error

[ceph-users] strange osd error during add disk

2022-09-28 Thread Satish Patel
Folks, I have 15 nodes for Ceph and each node has a 160TB disk attached. I am using the cephadm Quincy release and 14 nodes have been added, except one node which gives a very strange error while being added. I have put all the logs here: https://paste.opendev.org/show/bbSKwlSLyANMbrlhwzXL/ In
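
A hedged first-pass checklist for a host that fails during OSD add (the hostname is a placeholder; nothing here is specific to the pasted logs):

    # on the failing host:
    cephadm shell        # does the ceph container even start on this node?
    cephadm ls           # what daemons does cephadm think already exist here?

    # from an admin node:
    ceph log last cephadm
    ceph orch device ls <failing-host> --wide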

[ceph-users] Re: [cephadm] not detecting new disk

2022-09-03 Thread Satish Patel
wipe the disk properly first. > > Zitat von Satish Patel : >> Folks, >> >> I have created a new lab using cephadm and installed a new 1TB spinning >> disk which I am trying to add to the cluster, but somehow Ceph is not detecting >> it. >> >> $ part
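
cephadm only offers up devices that look empty (no partitions, LVM, or filesystem), so a hedged wipe sequence for the disk in the message below would be (double-check the device path first):

    wipefs --all /dev/sda                          # remove filesystem/partition signatures
    sgdisk --zap-all /dev/sda                      # clear GPT/MBR structures
    ceph orch device zap <host> /dev/sda --force   # clean LVM state and refresh the inventory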

[ceph-users] [cephadm] not detecting new disk

2022-09-02 Thread Satish Patel
Folks, I have created a new lab using cephadm and installed a new 1TB spinning disk which I am trying to add to the cluster, but somehow Ceph is not detecting it. $ parted /dev/sda print Model: ATA WDC WD10EZEX-00B (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/4096B Partition

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
mon service, and now I can't do anything because I don't have a mon. ceph -s and all other commands hang. Trying to find out how to get the mon back :) On Fri, Sep 2, 2022 at 3:34 PM Satish Patel wrote: > Yes, I have stopped the upgrade and those logs are from before the upgrade > > On Fri, Sep 2, 2022 at 3:2

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
re having stopped the upgrade? > > On Fri, Sep 2, 2022 at 2:53 PM Satish Patel wrote: > >> Do you think this is because I have only a single MON daemon running? I >> have only two nodes. >> >> On Fri, Sep 2, 2022 at 2:39 PM Satish Patel wrote: >> >>

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
Do you think this is because I have only a single MON daemon running? I have only two nodes. On Fri, Sep 2, 2022 at 2:39 PM Satish Patel wrote: > Adam, > > I have enabled debug and my logs are flooded with the following. I am going to > try some stuff from your provided mailing

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
we get more info on why it's not performing the > redeploy from those debug logs. Just remember to set the log level back > afterwards with 'ceph config set mgr mgr/cephadm/log_to_cluster_level info', as debug > logs are quite verbose. > > On Fri, Sep 2, 2022 at 11:39 AM Satish Patel wrote: >
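
The debug-log toggle referenced above, as a short sketch:

    ceph config set mgr mgr/cephadm/log_to_cluster_level debug
    ceph -W cephadm --watch-debug     # stream cephadm events while reproducing the problem
    ceph log last 100 debug cephadm   # or dump the most recent entries
    ceph config set mgr mgr/cephadm/log_to_cluster_level info   # set it back afterwards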

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
a "ceph orch redeploy mgr" and then "ceph orch upgrade start --image > quay.io/ceph/ceph:v16.2.10" and see if it goes better. > > On Fri, Sep 2, 2022 at 10:36 AM Satish Patel wrote: > >> Hi Adam, >> >> I run the following command to upgrade but i

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
progress: Upgrade to quay.io/ceph/ceph:v16.2.10 (0s) [........] On Fri, Sep 2, 2022 at 10:25 AM Satish Patel wrote: > It looks like I did it with the following command. > > $ ceph orch daemon add mgr ceph2:10.73.0.192 > > Now I can see two with

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
:c08064dde4bba4e72a1f55d90ca32df9ef5aafab82efe2e0a0722444a5aaacca 93146564743f 294fd6ab6c97 On Fri, Sep 2, 2022 at 10:19 AM Satish Patel wrote: > Let's come back to the original question: how to bring back the second mgr? > > root@ceph1:~# ceph orch apply mgr 2 > Scheduled mgr update... > > Nothing happened
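
If a bare count does not get scheduled, a hedged next step is to pin the mgr placement to explicit hosts (hostnames taken from this thread) and confirm what the orchestrator actually deployed:

    ceph orch apply mgr --placement="ceph1 ceph2"
    ceph orch ls mgr
    ceph orch ps --daemon-type mgr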

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
-02T14:18:21.127973+ mgr.ceph1.smfvfd (mgr.334626) 17010 : cephadm [INF] refreshing ceph2 facts On Fri, Sep 2, 2022 at 10:15 AM Satish Patel wrote: > Hi Adam, > > Wait..wait.. now it's working suddenly without doing anything.. very odd > > root@ceph1:~# ceph

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
e5a616e4b9cf osd.osd_spec_default 4/0 5s ago - quay.io/ceph/ceph:v15 93146564743f prometheus 1/1 5s ago 2w count:1 quay.io/prometheus/prometheus:v2.18.1 On Fri, Sep 2, 2022 at 10:13 AM Satish Patel wrote: > I can see that in the out

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
remove the files. > > On Fri, Sep 2, 2022 at 9:57 AM Satish Patel wrote: > >> Hi Adam, >> >> I have deleted the file located here - rm >> /var/lib/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
c37b0cfb90381522d521dde51fb0c5a2142314d663f63d (and any others like it) file would be > the way forward to get orch ls working again. > > On Fri, Sep 2, 2022 at 9:44 AM Satish Patel wrote: > >> Hi Adam, >> >> In cephadm ls I found the following service, but I believe it was there >> b

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-02 Thread Satish Patel
thinks is this "cephadm" service that it has deployed. > Lastly, you could try having the mgr you manually deploy be a 16.2.10 one > instead of 15.2.17 (I'm assuming here, but the line numbers in that > traceback suggest octopus). The 16.2.10 one is just much less likely to >

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-01 Thread Satish Patel
mgr/orchestrator/_interface.py", line 642, in raise_if_exception raise e AssertionError: cephadm On Fri, Sep 2, 2022 at 1:32 AM Satish Patel wrote: > Never mind, I found the doc related to that and I am able to get 1 mgr up - > https://docs.ceph.com/en/quincy/cephadm/troubleshooting/#ma

[ceph-users] Re: [cephadm] mgr: no daemons active

2022-09-01 Thread Satish Patel
Never mind, I found the doc related to that and I am able to get 1 mgr up - https://docs.ceph.com/en/quincy/cephadm/troubleshooting/#manually-deploying-a-mgr-daemon On Fri, Sep 2, 2022 at 1:21 AM Satish Patel wrote: > Folks, > > I am having little fun with cephadm and it's very annoying
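
For readers hitting the same wall, a rough outline of that manual-mgr procedure (names, fsid and image are placeholders; the linked doc has the exact config-json format and is the authoritative reference):

    ceph config-key set mgr/cephadm/pause true     # keep cephadm from removing the new mgr
    ceph auth get-or-create mgr.<host>.<id> mon "profile mgr" osd "allow *" mds "allow *"
    ceph config generate-minimal-conf              # goes into the "config" field of config-json.json
    cephadm --image quay.io/ceph/ceph:v16.2.10 deploy \
        --fsid <fsid> --name mgr.<host>.<id> --config-json config-json.json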

[ceph-users] [cephadm] mgr: no daemons active

2022-09-01 Thread Satish Patel
Folks, I am having little fun with cephadm and it's very annoying to deal with. I have deployed a Ceph cluster using cephadm on two nodes. When I tried to upgrade, I noticed hiccups where it upgraded a single mgr to 16.2.10 but not the other, so I started messing around and

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
a relatively > safe operation). In the end, the IP cephadm lists for each host in "ceph > orch host ls" must be an IP that allows correctly ssh-ing to the host. > > On Thu, Sep 1, 2022 at 9:17 PM Satish Patel wrote: > >> Hi Adam, >> >> You are correct, it looks like
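
When the listed address turns out to be wrong, a hedged fix is to point cephadm at the reachable IP and re-run its host check (hostname and IP below are the ones appearing elsewhere in this thread):

    ceph orch host ls
    ceph orch host set-addr ceph2 10.73.0.192
    ceph cephadm check-host ceph2     # verify cephadm can ssh in via the new address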

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
on both ceph1 and ceph2, and some sort of networking issue is > the only thing I'm aware of currently that causes something like that. > > On Thu, Sep 1, 2022 at 6:30 PM Satish Patel wrote: > >> Hi Adam, >> >> I have also noticed a very strange thing, which is duplicate

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
https://achchusnulchikam.medium.com/deploy-ceph-cluster-with-cephadm-on-centos-8-257b300e7b42 On Thu, Sep 1, 2022 at 6:20 PM Satish Patel wrote: > Hi Adam, > > Getting the following error, not sure why it's not able to find it. > > root@ceph1:~# ceph orch daemon redeploy mgr.ceph1

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
redeploy mgr.ceph1.xmbvsb`? > > On Thu, Sep 1, 2022 at 5:12 PM Satish Patel wrote: > >> Hi Adam, >> >> Here is the requested output: >> >> root@ceph1:~# ceph health detail >> HEALTH_WARN 4 stray daemon(s) not managed by cephadm >> [WRN] CEPHADM_STRAY_DAEMON:

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
is reported as > being on ceph1 in the orch ps output in your original message in this > thread. I'm interested in what `ceph health detail` is reporting now as > well, as it says there are 4 stray daemons. Also, the `ceph orch host ls` > output, just to get a better grasp of the topology

[ceph-users] Re: [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
"container_image_name": "quay.io/ceph/ceph:v15", "container_image_id": "93146564743febec815d6a764dad93fc07ce971e88315403ac508cb5da6d35f4", "version": "15.2.17", "started": "2022-08-31T03:2

[ceph-users] [cephadm] Found duplicate OSDs

2022-09-01 Thread Satish Patel
Folks, I am playing with cephadm and life was good until I started upgrading from Octopus to Pacific. My upgrade process got stuck after upgrading the mgr, and in the logs I can now see the following error: root@ceph1:~# ceph log last cephadm 2022-09-01T14:40:45.739804+ mgr.ceph2.hmbdla (mgr.265806) 8 :

[ceph-users] cephadm upgrade from octopus to pacific stuck

2022-08-31 Thread Satish Patel
Hi, I have a small cluster in the lab which has only two nodes. I have a single monitor and two OSD nodes. I ran the upgrade but somehow it got stuck after upgrading the mgr: ceph orch upgrade start --ceph-version 16.2.10 root@ceph1:~# ceph -s cluster: id: f270ad9e-1f6f-11ed-b6f8-a539d87379ea
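
A hedged sketch for poking at a stalled upgrade: check where it is, follow the orchestrator's events, and pause or stop it if the mgr needs attention first:

    ceph orch upgrade status
    ceph -W cephadm            # follow orchestrator progress live
    ceph orch upgrade pause    # or: ceph orch upgrade resume
    ceph orch upgrade stop     # abandon; restart later with "ceph orch upgrade start"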

[ceph-users] Re: Benefits of dockerized ceph?

2022-08-24 Thread Satish Patel
Hi, I believe the only advantage of running dockerized Ceph is that it isolates the binaries from the OS and, as you said, upgrades are easier. In my case I am running the OSD/MON roles on the same servers, so it provides greater isolation when I want to upgrade a component. cephadm uses containers to deploy ceph clusters in

[ceph-users] Re: Suggestion to build ceph storage

2022-06-20 Thread Satish Patel
nodes do you have for your cluster size? Do you have dedicated MDS, or MDS shared with OSDs? > I don't know if this is optimum, we are in the testing process... > > - Original mail - > > From: "Stefan Kooman" > > To: "Jake Grimmett" , "Christian Wuerdig"

[ceph-users] Re: Suggestion to build ceph storage

2022-06-20 Thread Satish Patel
NVMe drives. > > Can you share which company's NVMe drives you are using? > Best regards, > > Jake > > On 20/06/2022 08:22, Stefan Kooman wrote: > > On 6/19/22 23:23, Christian Wuerdig wrote: > >> On Sun, 19 Jun 2022 at 02:29, Satish Patel > wrote

[ceph-users] Suggestion to build ceph storage

2022-06-18 Thread Satish Patel
Greetings folks, We are planning to build Ceph storage, mostly CephFS for an HPC workload, and in the future we plan to expand to S3-style storage, but that is yet to be decided. Because we need mass storage, we bought the following HW: 15 total servers, each with 12x 18TB HDDs (spinning disks)