Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-05 Thread Eugen Block
Hi Jones, Just to make things clear: are you telling me that it is completely impossible to have a ceph "volume" on non-dedicated devices, sharing space with, for instance, the node's swap, boot or main partition? And that the only possible way to have a functioning ceph distributed filesystem
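
For comparison, this is roughly what creating an OSD on a dedicated whole device looks like when done by hand with ceph-volume; /dev/vdb here is a hypothetical spare disk, not one from this thread's nodes:

###
# ceph-volume consumes the entire device and sets up the LVM pieces itself
ceph-volume lvm create --data /dev/vdb
###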

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-04 Thread Jones de Andrade
Hi Eugen. Just tried everything again here by removing the /sda4 partitions and leaving things so that either salt-run proposal-populate or salt-run state.orch ceph.stage.configure could try to find the free space on the partitions to work with: unsuccessful again. :( Just to make things clear:
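
A minimal sketch of checking, on each node, what free space the runners would actually have to find; device names follow the thread's /dev/sda layout:

###
# show the current partition layout and any unpartitioned gap on the disk
lsblk /dev/sda
parted /dev/sda unit GB print free
###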

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block
Hi Jones, I still don't think creating an OSD on a partition will work. The reason is that SES creates an additional partition per OSD, resulting in something like this:

vdb      253:16   0    5G  0 disk
├─vdb1   253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen. Entirely my misunderstanding, I thought there would be something at boot time (which would certainly not make any sense at all). Sorry. Before stage 3 I ran the commands you suggested on the nodes, and only one gave me the output below: ### # grep -C5 sda4
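
A sketch of the kind of search being run here, assuming the default minion log location (/var/log/salt/minion; adjust if log_file is overridden in the minion config):

###
# show five lines of context around every mention of the sda4 partition
grep -C5 sda4 /var/log/salt/minion
###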

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block
Hi, I'm not sure if there's a misunderstanding. You need to track the logs during the OSD deployment step (stage.3), which is where it fails, and this is where /var/log/messages could be useful. Since the deployment failed, you have no systemd units (ceph-osd@.service) to log anything.
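
One way to track this, sketched under the thread's setup: watch the syslog on the OSD node while the failing stage is re-run from the salt master (ceph.stage.3 is the DeepSea deployment stage):

###
# on the OSD node, in one terminal:
tail -f /var/log/messages

# on the salt master, in another terminal:
salt-run state.orch ceph.stage.3
###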

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Jones de Andrade
Hi Eugen. Ok, I edited the file /etc/salt/minion, uncommented the "log_level_logfile" line and set it to "debug" level. Turned off the computer, waited a few minutes so that the time frame would stand out in the /var/log/messages file, and restarted the computer. Using vi I "grepped out" (awful
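
The change described above amounts to the following, assuming a stock /etc/salt/minion; the minion must be restarted for the setting to take effect:

###
# in /etc/salt/minion, uncomment and set:
log_level_logfile: debug

# then restart the minion so the new log level applies:
systemctl restart salt-minion
###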

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Eugen Block
Hi, So, it only contains logs concerning the node itself (is that correct? Since node01 is also the master, I was expecting it to have logs from the others too) and, moreover, no ceph-osd* files. Also, I'm looking at the logs I have available, and nothing "shines out" (sorry for my poor English) as

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-29 Thread Jones de Andrade
Hi Eugen. Sorry for the delay in answering. Just looked in the /var/log/ceph/ directory. It only contains the following files (for example on node01):

###
# ls -lart
total 3864
-rw------- 1 ceph ceph  904 ago 24 13:11 ceph.audit.log-20180829.xz
drwxr-xr-x 1 root root  898 ago 28 10:07
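
To check specifically for OSD logs on a node, something like the following; no matches here is consistent with a deployment that never got an OSD daemon running:

###
# list any OSD log files present on this node
ls -l /var/log/ceph/ceph-osd*
###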

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-27 Thread Eugen Block
Hi Jones, all ceph logs are in the directory /var/log/ceph/; each daemon has its own log file, e.g. OSD logs are named ceph-osd.*. I haven't tried it, but I don't think SUSE Enterprise Storage deploys OSDs on partitioned disks. Is there a way to attach a second disk to the OSD nodes,
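
A sketch of following one of those logs live; the OSD id 0 is hypothetical, the actual ids depend on the deployment:

###
tail -f /var/log/ceph/ceph-osd.0.log
###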

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen. Thanks for the suggestion. I'll look for the logs (since it's our first attempt with ceph, I'll have to discover where they are, but no problem). One thing in your response caught my attention, however: I haven't made myself clear, but one of the failures we encountered was that the

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block
Hi, take a look at the logs, they should point you in the right direction. Since the deployment stage fails at the OSD level, start with the OSD logs. Something's not right with the disks/partitions; did you wipe the partitions from previous attempts? Regards, Eugen Quoting Jones de
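
Wiping leftovers from a previous attempt might look like this; both commands are destructive, and zapping a whole disk is only safe when nothing else (boot, swap) lives on it:

###
# remove filesystem/partition signatures from the partition used before
wipefs -a /dev/sda4

# only for a disk fully dedicated to ceph - destroys the partition table
sgdisk --zap-all /dev/sdX
###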

[ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-24 Thread Jones de Andrade
(Please forgive my previous email: I was using another message and completely forgot to update the subject) Hi all. I'm new to ceph, and after having serious problems in ceph stages 0, 1 and 2 that I could solve myself, now it seems that I have hit a wall harder than my head. :) When I run
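
For context, the stages referred to throughout this thread are the DeepSea orchestration stages, run from the salt master roughly like this:

###
salt-run state.orch ceph.stage.0   # prep
salt-run state.orch ceph.stage.1   # discovery
salt-run state.orch ceph.stage.2   # configure
salt-run state.orch ceph.stage.3   # deploy (where this thread fails)
###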