[ceph-users] ceph-deploy error

2018-10-19 Thread Vikas Rana
Hi there, While upgrading from Jewel to Luminous, all packages were upgraded, but adding the MGR with cluster name CEPHDR fails. It works with the default cluster name CEPH.

root@vtier-P-node1:~# sudo su - ceph-deploy
ceph-deploy@vtier-P-node1:~$ ceph-deploy --ceph-conf /etc/ceph/cephdr.conf mgr
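For reference, a minimal sketch of what the full command likely looked like (the original is truncated above; the "create" argument and the reuse of node name vtier-P-node1 are assumptions, not quoted from the thread):

    # sketch only: "mgr create vtier-P-node1" is assumed, the original line is cut off
    ceph-deploy --ceph-conf /etc/ceph/cephdr.conf --cluster cephdr mgr create vtier-P-node1

Note that custom cluster names (anything other than "ceph") were deprecated around the Luminous release, which may be why the default name works while CEPHDR does not.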

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-05 Thread Eugen Block
Hi Jones, Just to make things clear: are you telling me that it is completely impossible to have a Ceph "volume" on non-dedicated devices, sharing space with, for instance, the node's swap, boot, or main partition? And that the only possible way to have a functioning Ceph distributed filesystem

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-04 Thread Jones de Andrade
Hi Eugen. Just tried everything again here by removing the /sda4 partitions, so that either salt-run proposal-populate or salt-run state.orch ceph.stage.configure could try to find the free space to work with: unsuccessful again. :( Just to make things clear:
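For context, these are DeepSea orchestration runs on the Salt master; a sketch of the sequence being retried here (stage names as used by SES/DeepSea):

    # recompute the storage proposals after changing the partition layout
    salt-run state.orch ceph.stage.configure
    # then re-run the deploy stage, where this thread's failure occurs
    salt-run state.orch ceph.stage.3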

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block
Hi Jones, I still don't think creating an OSD on a partition will work. The reason is that SES creates an additional partition per OSD, resulting in something like this:

vdb      253:16   0   5G  0 disk
├─vdb1   253:17   0 100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2
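That listing is ordinary lsblk output; the same view is available on any node with:

    # default columns: NAME, MAJ:MIN, RM, SIZE, RO, TYPE, MOUNTPOINT
    lsblk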

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen. Entirely my misunderstanding; I thought there would be something at boot time (which would certainly not make any sense at all). Sorry. Before stage 3 I ran the commands you suggested on the nodes, and only one gave me the output below:

###
# grep -C5 sda4
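Completing that command as a sketch (the file argument is an assumption, since the quoted line is truncated; /var/log/messages is the log suggested elsewhere in the thread):

    # log file path assumed -- the original message is cut off here
    grep -C5 sda4 /var/log/messages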

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block
Hi, I'm not sure if there's a misunderstanding. You need to track the logs during the OSD deployment step (stage.3); that is where it fails, and that is where /var/log/messages could be useful. Since the deployment failed, you have no systemd units (ceph-osd@.service) to log anything.
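One way to do that (a sketch, assuming a second shell on the failing OSD node) is to follow the system log while stage 3 runs:

    # watch for ceph/osd/disk-related messages as the deployment proceeds
    tail -f /var/log/messages | grep -iE 'ceph|osd|sda'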

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Jones de Andrade
Hi Eugen. OK, I edited the file /etc/salt/minion, uncommented the "log_level_logfile" line, and set it to "debug" level. I turned off the computer, waited a few minutes so that the time frame would stand out in the /var/log/messages file, and restarted the computer. Using vi I "grepped out" (awful
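For anyone following along, that change boils down to (a sketch; the config key is quoted from the thread, the service restart is assumed as an alternative to the full reboot described):

    # /etc/salt/minion -- uncomment and set:
    log_level_logfile: debug

    # restart the minion so the setting takes effect:
    systemctl restart salt-minion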

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Eugen Block
Hi, So, it only contains logs concerning the node itself (is that correct? Since node01 is also the master, I was expecting it to have logs from the others too) and, moreover, no ceph-osd* files. Also, I'm looking at the logs I have available, and nothing "shines out" (sorry for my poor English) as

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-29 Thread Jones de Andrade
Hi Eugen. Sorry for the delay in answering. I just looked in the /var/log/ceph/ directory. It only contains the following files (for example on node01):

###
# ls -lart
total 3864
-rw--- 1 ceph ceph 904 Aug 24 13:11 ceph.audit.log-20180829.xz
drwxr-xr-x 1 root root 898 Aug 28 10:07

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-27 Thread Eugen Block
Hi Jones, all Ceph logs are in the directory /var/log/ceph/; each daemon has its own log file, e.g. OSD logs are named ceph-osd.*. I haven't tried it, but I don't think SUSE Enterprise Storage deploys OSDs on partitioned disks. Is there a way to attach a second disk to the OSD nodes,
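A quick way to check for those files on each node (a sketch; the OSD id in the file name is only an example, and no such file may exist if deployment never completed):

    ls -l /var/log/ceph/
    # e.g. for the OSD with id 0:
    tail -n 50 /var/log/ceph/ceph-osd.0.log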

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen. Thanks for the suggestion. I'll look for the logs (since it's our first attempt with Ceph, I'll have to discover where they are, but no problem). One thing in your response caught my attention, however: I may not have made myself clear, but one of the failures we encountered was that the

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block
Hi, take a look at the logs; they should point you in the right direction. Since the deployment stage fails at the OSD level, start with the OSD logs. Something's not right with the disks/partitions; did you wipe the partitions from previous attempts? Regards, Eugen Quote from Jones de
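Wiping leftovers from a previous attempt usually looks something like this (a sketch; sda4/sda are the device names from this thread, and both commands destroy data on their target):

    # remove filesystem/partition signatures from the old partition:
    wipefs --all /dev/sda4
    # or zap the whole disk's partition tables entirely:
    sgdisk --zap-all /dev/sda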

[ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-24 Thread Jones de Andrade
(Please forgive my previous email: I was reusing another message and completely forgot to update the subject.) Hi all. I'm new to Ceph, and after having serious problems in Ceph stages 0, 1 and 2 that I could solve myself, it now seems that I have hit a wall harder than my head. :) When I run

Re: [ceph-users] Ceph-deploy error

2015-10-08 Thread Ken Dreyer
This issue with the conflicts between Firefly and EPEL is tracked at http://tracker.ceph.com/issues/11104

On Sun, Aug 30, 2015 at 4:11 PM, pavana bhat wrote:
> In case someone else runs into the same issue in future:
>
> I came out of this issue by installing

Re: [ceph-users] Ceph-deploy error

2015-08-30 Thread pavana bhat
In case someone else runs into the same issue in the future: I got past it by installing epel-release before installing ceph-deploy. If the order of installation is ceph-deploy followed by epel-release, the issue is hit. Thanks, Pavana

On Sat, Aug 29, 2015 at 10:02 AM, pavana bhat
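In other words, on a fresh CentOS/RHEL host the working order is (a sketch based directly on this report):

    sudo yum install -y epel-release
    sudo yum install -y ceph-deploy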

[ceph-users] Ceph-deploy error

2015-08-29 Thread pavana bhat
Hi, I'm trying to install Ceph for the first time following the quick installation guide. I'm getting the error below; can someone please help?

ceph-deploy install --release=firefly ceph-vm-mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cloud-user/.cephdeploy.conf

Re: [ceph-users] ceph-deploy error

2014-08-18 Thread Alfredo Deza
Oh yes, we don't have ARM packages for wheezy.

On Mon, Aug 11, 2014 at 7:12 PM, joshua Kay scjo...@gmail.com wrote:
> Hi, I am running into an error when I attempt to use ceph-deploy install while creating my cluster. I am attempting to run Ceph on Debian 7.0 wheezy with an ARM