Re: [ceph-users] ceph-deploy can't generate the client.admin keyring

2019-12-19 Thread Jean-Philippe Méthot
Alright, so I figured it out. It was essentially because the monitor’s main IP wasn’t on the public network in the ceph.conf file. Hence, ceph was trying to connect on an IP where the monitor wasn’t listening. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack
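For reference, a minimal ceph.conf sketch of the fix described here; the subnet, monitor name and address below are assumptions, not the poster's actual values:

    [global]
    public network = 10.0.0.0/24      # the monitor's address must fall inside this range

    [mon.mon1]
    host = mon1
    mon addr = 10.0.0.11:6789         # on the public network, so clients can actually reach it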

[ceph-users] ceph-deploy can't generate the client.admin keyring

2019-12-19 Thread Jean-Philippe Méthot
Hi, We’re currently running Ceph mimic in production and that works fine. However, I am currently deploying another Ceph mimic setup for testing purposes and ceph-deploy is running into issues I’ve never seen before. Essentially, the initial monitor setup starts the service, but the process

Re: [ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
Please disregard the earlier message. I found the culprit: `osd_crush_update_on_start` was set to false. *Mami Hayashida* *Research Computing Associate* Univ. of Kentucky ITS Research Computing Infrastructure On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami wrote: > I am trying to build a
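For anyone hitting the same symptom, a hedged sketch of the setting involved plus a manual fallback; the OSD id, weight and host name are placeholders:

    # ceph.conf on the OSD hosts -- the default is true, so it only bites if something overrode it
    [osd]
    osd crush update on start = true

    # or place an already-created OSD into the CRUSH map by hand
    ceph osd crush add osd.0 1.0 host=osd0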

[ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
I am trying to build a Ceph cluster using ceph-deploy. To add OSDs, I used the following command (which I had successfully used before to build another cluster): ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh osd0 ceph-deploy osd create --block-db=ssd0/db1 --data=/dev/sdi osd0
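The --block-db=ssd0/db0 form refers to an existing volume group and logical volume; a hedged sketch of preparing them on the shared SSD first (the SSD device path and DB sizes are assumptions):

    vgcreate ssd0 /dev/sdg            # assumed SSD device
    lvcreate -L 30G -n db0 ssd0
    lvcreate -L 30G -n db1 ssd0
    ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh osd0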

[ceph-users] Ceph Deploy issues

2019-04-20 Thread Sp, Madhumita
Hi All, Can anyone please help here? I have tried installing Ceph on a physical server as a single node cluster. Steps followed: rpm --import 'https://download.ceph.com/keys/release.asc' yum install http://download.ceph.com/rpm-mimic/el7/noarch/ceph-deploy-2.0.0-0.noarch.rpm ceph-deploy new

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-05 Thread Alfredo Deza
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett wrote: > > > > On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: >>> >>> >>> Is there a way we can easily set that up without trying to use outdated >>> tools? Presumably if ceph still supports this as the docs claim, there's a >>> way to get it

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: > >> Is there a way we can easily set that up without trying to use outdated >> tools? Presumably if ceph still supports this as the docs claim, there's a >> way to get it done without using ceph-deploy? >> > It might be more involved if you are

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:19 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> >>> As explained above, we can't just create smaller raw devices. Yes, >>> these are VMs but they're meant to replicate physical servers that will be >>> used in production,

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> As explained above, we can't just create smaller raw devices. Yes, these >> are VMs but they're meant to replicate physical servers that will be used >> in production, where no such volumes are available. >> > In that case you will have to

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:07 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > >> >> >> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett >> wrote: >> >>> you are using HOST:DIR option which is bit old and I think it was >>> supported till jewel, since you are

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > > > On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett > wrote: > >> you are using HOST:DIR option which is bit old and I think it was >> supported till jewel, since you are using 2.0.1 you should be using only >> 'osd create' with logical
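A hedged sketch of what the LVM-based 'osd create' path looks like for a filestore OSD with ceph-deploy 2.x; the VG/LV names, sizes and host are made up, and the exact flags should be checked against 'ceph-deploy osd create --help' on your version:

    vgcreate vg_osd /dev/vdb
    lvcreate -L 20G -n data0 vg_osd
    lvcreate -L 5G -n journal0 vg_osd
    ceph-deploy osd create --filestore --data vg_osd/data0 --journal vg_osd/journal0 node1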

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett wrote: > Going to take another stab at this... > > We have a development environment–made up of VMs–for developing and > testing the deployment tools for a particular service that depends on > cephfs for sharing state data between hosts. In

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
Going to take another stab at this... We have a development environment–made up of VMs–for developing and testing the deployment tools for a particular service that depends on cephfs for sharing state data between hosts. In production we will be using filestore OSDs because of the very low

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Mon, Dec 3, 2018 at 4:47 PM Matthew Pounsett wrote: > > I'm in the process of updating some development VMs that use ceph-fs. It > looks like recent updates to ceph have deprecated the 'ceph-deploy osd > prepare' and 'activate' commands in favour of the previously-optional > 'create'

[ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-03 Thread Matthew Pounsett
I'm in the process of updating some development VMs that use ceph-fs. It looks like recent updates to ceph have deprecated the 'ceph-deploy osd prepare' and 'activate' commands in favour of the previously-optional 'create' command. We're using filestore OSDs on these VMs, but I can't seem to

Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Alfredo Deza
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish wrote: > > Trying to created OSD with multipath with dmcrypt and it failed . Any > suggestion please?. ceph-disk is known to have issues like this. It is already deprecated in the Mimic release and will no longer be available for the upcoming release

Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Kevin Olbrich
I met the same problem. I had to create a GPT table for each disk, create a first partition spanning the full space, and then feed these to ceph-volume (should be similar for ceph-deploy). Also I am not sure if you can combine fs-type btrfs with bluestore (afaik that is for filestore). Kevin On Tue., 6 Nov.
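A hedged sketch of that workaround, using the multipath device name from the original report; the partitioning tool is an assumption, and the resulting partition node may appear as mpathr1 or mpathr-part1 depending on udev/kpartx:

    sgdisk --zap-all /dev/mapper/mpathr
    sgdisk --new=1:0:0 /dev/mapper/mpathr          # one partition spanning the whole device
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/mapper/mpathr1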

[ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Pavan, Krish
Trying to create an OSD on multipath with dmcrypt and it failed. Any suggestions please? ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr --bluestore --dmcrypt -- failed ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr --bluestore - worked the logs

Re: [ceph-users] ceph-deploy with a specified osd ID

2018-10-30 Thread Paul Emmerich
ceph-deploy doesn't support that. You can use ceph-disk or ceph-volume directly (with basically the same syntax as ceph-deploy), but you can only explicitly re-use an OSD id if you set it to destroyed before. I.e., the proper way to replace an OSD while avoiding unnecessary data movement is: ceph
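A hedged sketch of that replacement flow, reusing the id from the thread (osd.17) and an assumed replacement device:

    ceph osd destroy 17 --yes-i-really-mean-it    # keeps the id and CRUSH entry, marks it destroyed
    ceph-volume lvm zap /dev/sdX --destroy        # wipe the new disk (assumed device name)
    ceph-volume lvm create --osd-id 17 --data /dev/sdX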

[ceph-users] ceph-deploy with a specified osd ID

2018-10-29 Thread Jin Mao
Gents, My cluster had a gap in the OSD sequence numbers at certain point. Basically, because of missing osd auth del/rm" in a previous disk replacement task for osd.17, a new osd.34 was created. It did not really bother me until recently when I tried to replace all smaller disks to bigger disks.

[ceph-users] ceph-deploy error

2018-10-19 Thread Vikas Rana
Hi there, While upgrading from jewel to luminous, all packages were upgraded, but while adding an MGR with cluster name CEPHDR, it fails. It works with the default cluster name CEPH root@vtier-P-node1:~# sudo su - ceph-deploy ceph-deploy@vtier-P-node1:~$ ceph-deploy --ceph-conf /etc/ceph/cephdr.conf mgr

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-05 Thread Eugen Block
Hi Jones, Just to make things clear: are you so telling me that it is completely impossible to have a ceph "volume" in non-dedicated devices, sharing space with, for instance, the nodes swap, boot or main partition? And so the only possible way to have a functioning ceph distributed filesystem

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-04 Thread Jones de Andrade
Hi Eugen. Just tried everything again here by removing the /sda4 partitions and letting it so that either salt-run proposal-populate or salt-run state.orch ceph.stage.configure could try to find the free space on the partitions to work with: unsuccessfully again. :( Just to make things clear:

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block
Hi Jones, I still don't think creating an OSD on a partition will work. The reason is that SES creates an additional partition per OSD resulting in something like this: vdb 253:16 0 5G 0 disk ├─vdb1 253:17 0 100M 0 part /var/lib/ceph/osd/ceph-1 └─vdb2

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen. Entirely my misunderstanding, I thought there would be something at boot time (which would certainly not make any sense at all). Sorry. Before stage 3 I ran the commands you suggested on the nodes, and only one gave me the output below: ### # grep -C5 sda4

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block
Hi, I'm not sure if there's a misunderstanding. You need to track the logs during the osd deployment step (stage.3), that is where it fails, and this is where /var/log/messages could be useful. Since the deployment failed you have no systemd-units (ceph-osd@.service) to log anything.

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Jones de Andrade
Hi Eugen. Ok, edited the file /etc/salt/minion, uncommented the "log_level_logfile" line and set it to "debug" level. Turned off the computer, waited a few minutes so that the time frame would stand out in the /var/log/messages file, and restarted the computer. Using vi I "greped out" (awful

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Eugen Block
Hi, So, it only contains logs concerning the node itself (is it correct? sincer node01 is also the master, I was expecting it to have logs from the other too) and, moreover, no ceph-osd* files. Also, I'm looking the logs I have available, and nothing "shines out" (sorry for my poor english) as

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-29 Thread Jones de Andrade
Hi Eugen. Sorry for the delay in answering. Just looked in the /var/log/ceph/ directory. It only contains the following files (for example on node01): ### # ls -lart total 3864 -rw--- 1 ceph ceph 904 ago 24 13:11 ceph.audit.log-20180829.xz drwxr-xr-x 1 root root 898 ago 28 10:07

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-27 Thread Eugen Block
Hi Jones, all ceph logs are in the directory /var/log/ceph/, each daemon has its own log file, e.g. OSD logs are named ceph-osd.*. I haven't tried it but I don't think SUSE Enterprise Storage deploys OSDs on partitioned disks. Is there a way to attach a second disk to the OSD nodes,

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen. Thanks for the suggestion. I'll look for the logs (since it's our first attempt with ceph, I'll have to discover where they are, but no problem). One thing called my attention in your response however: I haven't made myself clear, but one of the failures we encountered was that the

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block
Hi, take a look into the logs, they should point you in the right direction. Since the deployment stage fails at the OSD level, start with the OSD logs. Something's not right with the disks/partitions, did you wipe the partition from previous attempts? Regards, Eugen Zitat von Jones de

[ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-24 Thread Jones de Andrade
(Please forgive my previous email: I was using another message and completely forget to update the subject) Hi all. I'm new to ceph, and after having serious problems in ceph stages 0, 1 and 2 that I could solve myself, now it seems that I have hit a wall harder than my head. :) When I run

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Robert Stanford
ind regards, > Glen Baars > > -Original Message- > From: Thode Jocelyn > Sent: Thursday, 9 August 2018 1:41 PM > To: Erik McCormick > Cc: Glen Baars ; Vasu Kulkarni < > vakul...@redhat.com>; ceph-users@lists.ceph.com > Subject: RE: [ceph-users] [Ceph-depl

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Glen Baars
To: Erik McCormick Cc: Glen Baars ; Vasu Kulkarni ; ceph-users@lists.ceph.com Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name Hi Erik, The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file to determine which configuration file to use (from CLUSTER_NAME). So you need to set
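For reference, a hedged sketch of that file for a cluster named cephdr; the exact variable name depends on the packaging (the thread calls it CLUSTER_NAME, some packagings use CLUSTER), so verify against the file your distribution ships:

    # /etc/sysconfig/ceph
    CLUSTER=cephdr        # makes the daemons read /etc/ceph/cephdr.conf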

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Thode Jocelyn
Jocelyn Thode From: Magnus Grönlund [mailto:mag...@gronlund.se] Sent: jeudi, 9 août 2018 14:33 To: Thode Jocelyn Cc: Erik McCormick ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name Hi Jocelyn, I'm in the process of setting up rdb-mirroring myself and stumbled

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Magnus Grönlund
: mercredi, 8 août 2018 16:39 > To: Thode Jocelyn > Cc: Glen Baars ; Vasu Kulkarni < > vakul...@redhat.com>; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name > > I'm not using this feature, so maybe I'm missing something, but from the >

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
al Message- From: Erik McCormick [mailto:emccorm...@cirrusseven.com] Sent: mercredi, 8 août 2018 16:39 To: Thode Jocelyn Cc: Glen Baars ; Vasu Kulkarni ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name I'm not using this feature, so maybe I'm missing something

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
018 05:43 > To: Erik McCormick > Cc: Thode Jocelyn ; Vasu Kulkarni > ; ceph-users@lists.ceph.com > Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name > > > > Hello Erik, > > > > We are going to use RBD-mirror to replicate the clusters. This seems to need > sep

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
@lists.ceph.com Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name Hello Erik, We are going to use RBD-mirror to replicate the clusters. This seems to need separate cluster names. Kind regards, Glen Baars From: Erik McCormick mailto:emccorm...@cirrusseven.com>> Sent: Thursday, 2 August 201

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
-users] [Ceph-deploy] Cluster Name Don't set a cluster name. It's no longer supported. It really only matters if you're running two or more independent clusters on the same boxes. That's generally inadvisable anyway. Cheers, Erik On Wed, Aug 1, 2018, 9:17 PM Glen Baars mailto:g

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
2018 5:59 PM > To: Thode Jocelyn ; Vasu Kulkarni < > vakul...@redhat.com> > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name > > How very timely, I am facing the exact same issue. > > Kind regards, > Glen Baars > > -Original Me

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
To: Thode Jocelyn ; Vasu Kulkarni Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name How very timely, I am facing the exact same issue. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Thode Jocelyn Sent: Monday, 23 July 2018 1:42

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-23 Thread Glen Baars
How very timely, I am facing the exact same issue. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Thode Jocelyn Sent: Monday, 23 July 2018 1:42 PM To: Vasu Kulkarni Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name Hi, Yes

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-22 Thread Thode Jocelyn
et 2018 17:25 To: Thode Jocelyn Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f25

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Vasu Kulkarni
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, > the ability to specify a cluster name was removed. Is there a reason for > this removal ? > > > > Because right

[ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Thode Jocelyn
Hi, I noticed that in commit https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, the ability to specify a cluster name was removed. Is there a reason for this removal? Because right now, there is no possibility to create a ceph cluster with a different name

[ceph-users] ceph-deploy disk list return a python error

2018-06-10 Thread Max Cuttins
I'm running a new installation of MIMIC: #ceph-deploy disk list ceph01 [ceph01][DEBUG ] connection detected need for sudo [ceph01][DEBUG ] connected to host: ceph01 [ceph01][DEBUG ] detect platform information from remote host [ceph01][DEBUG ] detect machine type [ceph01][DEBUG ]

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
This [*] is my ceph.conf 10.70.42.9 is the public address And it is indeed the IP used by the MON daemon: [root@c-mon-02 ~]# netstat -anp | grep 6789 tcp 0 0 10.70.42.9:6789 0.0.0.0:* LISTEN 3835/ceph-mon tcp 0 0 10.70.42.9:33592

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
check ceph.conf, it controls to which mon IP the client tries to connect. 2018-05-10 12:57 GMT+02:00 Massimo Sgaravatto : > I configured the "public network" attribute in the ceph configuration file. > > But it looks like to me that in the "auth get client.admin"

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
I configured the "public network" attribute in the ceph configuration file. But it looks like to me that in the "auth get client.admin" command [*] issued by ceph-deploy the address of the management network is used (I guess because c-mon-02 gets resolved to the IP management address) Cheers,

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
Monitors can use only exactly one IP address. ceph-deploy uses some heuristics based on hostname resolution and ceph public addr configuration to guess which one to use during setup. (Which I've always found to be a quite annoying feature.) The mon's IP must be reachable from all ceph daemons and
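A hedged sketch of the checks this implies on the monitor host; the hostname is the one from the thread, everything else is an assumption:

    getent hosts c-mon-02                                   # should resolve to the public-network IP
    ss -tlnp | grep 6789                                    # confirm ceph-mon listens on that same IP
    grep -E 'public[_ ]network|mon[_ ]host|mon[_ ]addr' /etc/ceph/ceph.conf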

[ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
I have a ceph cluster that I manually deployed, and now I am trying to see if I can use ceph-deploy to deploy new nodes (in particular the object gw). The network configuration is the following: Each MON node has two network IPs: one on a "management network" (not used for ceph related stuff) and

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
Alright I'll try that. Thanks On Mon, Apr 30, 2018 at 5:45 PM Vasu Kulkarni wrote: > If you are on 14.04 or need to use ceph-disk, then you can install > version 1.5.39 from pip. to downgrade just uninstall the current one > and reinstall 1.5.39 you dont have to delete

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Vasu Kulkarni
If you are on 14.04 or need to use ceph-disk, then you can install version 1.5.39 from pip. to downgrade just uninstall the current one and reinstall 1.5.39 you dont have to delete your conf file folder. On Mon, Apr 30, 2018 at 5:31 PM, Scottix wrote: > It looks like
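A hedged sketch of that downgrade, assuming ceph-deploy was installed via pip in the first place:

    pip uninstall ceph-deploy
    pip install ceph-deploy==1.5.39
    ceph-deploy --version        # should now report 1.5.39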

[ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
It looks like ceph-deploy@2.0.0 is incompatible with systems running 14.04 and it got released in the luminous branch with the new deployment commands. Is there any way to downgrade to an older version? Log of osd list XYZ@XYZStat200:~/XYZ-cluster$ ceph-deploy --overwrite-conf osd list

Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread Anthony D'Atri
> "I read a couple of versions ago that ceph-deploy was not recommended > for production clusters." InkTank had sort of discouraged the use of ceph-deploy; in 2014 we used it only to deploy OSDs. Some time later the message changed. ___ ceph-users

Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread David Turner
use something like ceph-ansible in > parallel for the missing stuff, I can only hope we will find a (full > time?!) maintainer for ceph-deploy and keep it alive. PLEASE ;) > > > > Gesendet: Donnerstag, 05. April 2018 um 08:53 Uhr > Von: "Wido den Hollander" <w...@42on.com&

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread ceph . novice
like ceph-ansible in parallel for the missing stuff, I can only hope we will find a (full time?!) maintainer for ceph-deploy and keep it alive. PLEASE ;) Sent: Thursday, 05 April 2018 at 08:53 From: "Wido den Hollander" <w...@42on.com> To: ceph-users@lists.ceph.com Betr

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Wido den Hollander
On 04/04/2018 08:58 PM, Robert Stanford wrote: > > I read a couple of versions ago that ceph-deploy was not recommended > for production clusters. Why was that? Is this still the case? We > have a lot of problems automating deployment without ceph-deploy. > > In the end it is just a

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Dietmar Rieder
On 04/04/2018 08:58 PM, Robert Stanford wrote: > > I read a couple of versions ago that ceph-deploy was not recommended > for production clusters. Why was that? Is this still the case? We > have a lot of problems automating deployment without ceph-deploy. > We are using it in production on

Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread Brady Deetz
We use ceph-deploy in production. That said, our crush map is getting more complex and we are starting to make use of other tooling as that occurs. But we still use ceph-deploy to install ceph and bootstrap OSDs. On Wed, Apr 4, 2018, 1:58 PM Robert Stanford wrote: > >

Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread ceph
On 4 April 2018 20:58:19 CEST, Robert Stanford wrote: >I read a couple of versions ago that ceph-deploy was not recommended >for >production clusters. Why was that? Is this still the case? We have a I cannot imagine that. I did use it now a few versions before 2.0

[ceph-users] ceph-deploy: recommended?

2018-04-04 Thread Robert Stanford
I read a couple of versions ago that ceph-deploy was not recommended for production clusters. Why was that? Is this still the case? We have a lot of problems automating deployment without ceph-deploy. ___ ceph-users mailing list

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread David Turner
Massimiliano Cuttini <m...@phoenixweb.it> > wrote: > >> This worked. >> >> However somebody should investigate why the default is still jewel on Centos >> 7.4 >> >> On 28/02/2018 00:53, jorpilo wrote: >> >> Try using: >> ceph-

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread Max Cuttins
jorpilo wrote: Try using: ceph-deploy --release luminous host1... Original message From: Massimiliano Cuttini <m...@phoenixweb.it> Date: 28/2/18 12:42 a.m. (GMT+01:00) To: ceph-users@lists.ceph.com <mailto:ceph-user

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Sébastien VIGNERON
jewel on Centos 7.4 >> >> On 28/02/2018 00:53, jorpilo wrote: >>> Try using: >>> ceph-deploy --release luminous host1... >>> >>> Original message >>> From: Massimiliano Cuttini <m...@phoenixweb.it> <mailto:m...@phoeni

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Max Cuttins
xweb.it> Date: 28/2/18 12:42 a.m. (GMT+01:00) To: ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel instead) This is the 5th time that I have installed and then purged the installation. C

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread David Turner
> Try using: > ceph-deploy --release luminous host1... > > Original message > From: Massimiliano Cuttini <m...@phoenixweb.it> > Date: 28/2/18 12:42 a.m. (GMT+01:00) > To: ceph-users@lists.ceph.com > Subject: [ceph

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Massimiliano Cuttini
2 a.m. (GMT+01:00) To: ceph-users@lists.ceph.com Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel instead) This is the 5th time that I have installed and then purged the installation. ceph-deploy always installs JEWEL instead of Luminous. No way even if I force the repo from d

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-27 Thread jorpilo
Try using: ceph-deploy --release luminous host1... Original message From: Massimiliano Cuttini <m...@phoenixweb.it> Date: 28/2/18 12:42 a.m. (GMT+01:00) To: ceph-users@lists.ceph.com Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel i

[ceph-users] ceph-deploy ver 2 - [ceph_deploy.gatherkeys][WARNING] No mon key found in host

2018-02-20 Thread Steven Vacaroaia
Hi, I have decided to redeploy my test cluster using latest ceph-deploy and Luminous I cannot pass the ceph-deploy mon create-initial stage due to [ceph_deploy.gatherkeys][WARNING] No mon key found in host Any help will be appreciated ceph-deploy --version 2.0.0 [cephuser@ceph prodceph]$ ls

[ceph-users] ceph-deploy 2.0.0

2018-02-20 Thread Alfredo Deza
A fully backwards incompatible release of ceph-deploy was completed in early January [0] which removed ceph-disk as a backend to create OSDs in favor of ceph-volume. The backwards incompatible change means that the API for creating OSDs has changed [1], and also that it now relies on Ceph
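In practice the OSD creation call changed shape; a hedged before/after sketch with made-up host and device names:

    # old ceph-disk style (ceph-deploy < 2.0.0)
    ceph-deploy osd prepare node1:/dev/sdb
    ceph-deploy osd activate node1:/dev/sdb1

    # new ceph-volume style (ceph-deploy 2.0.0)
    ceph-deploy osd create --data /dev/sdb node1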

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-16 Thread Alfredo Deza
On Wed, Nov 15, 2017 at 8:31 AM, Wei Jin wrote: > I tried to do purge/purgedata and then redo the deploy command for a > few times, and it still fails to start osd. > And there is no error log, anyone know what's the problem? Seems like this is OSD 0, right? Have you checked

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried to do purge/purgedata and then redo the deploy command a few times, and it still fails to start the osd. And there is no error log; does anyone know what the problem is? BTW, my OS is Debian with a 4.4 kernel. Thanks. On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin wrote: > Hi,

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, List, My machine has 12 SSDs. There are some errors from ceph-deploy; it fails randomly. root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.39):

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi, List, My machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for some machines/disks it fails to start the osd. I tried many times; some succeed but others fail, and there is no error info. Following is the ceph-deploy log for one disk: root@n10-075-012:~# ceph-deploy osd create

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
On Fri, Jul 14, 2017 at 10:37 AM, Oscar Segarra wrote: > I'm testing on latest Jewell version I've found in repositories: > you can skip that command then, I will fix the document to add a note for jewel or pre luminous build. > > [root@vdicnode01 yum.repos.d]# ceph

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Oscar Segarra
I'm testing on the latest Jewel version I've found in the repositories: [root@vdicnode01 yum.repos.d]# ceph --version ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e) thanks a lot! 2017-07-14 19:21 GMT+02:00 Vasu Kulkarni : > It is tested for master and is working

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
It is tested for master and is working fine; I will run those same tests on luminous, check if there is an issue, and update here. mgr create is needed for luminous+ builds only. On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown wrote: > I've been trying to work through

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Roger Brown
I've been trying to work through similar mgr issues for Xenial-Luminous... roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2 [ceph_deploy.conf][DEBUG ] found configuration file at: /home/roger/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Then you want separate partitions for each OSD journal. If you have 4 HDD OSDs using this as their journal, you should have 4x 5GB partitions on the SSD. On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu wrote: > Thanks for the note, yes I know them all. It will be shared
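A hedged sketch of carving the shared SSD into per-OSD journal partitions and pointing an OSD at one of them; the SSD device follows the original post (/dev/sdf), while the host, data disk and the old HOST:DATA:JOURNAL ceph-deploy syntax of that era are assumptions:

    sgdisk --new=1:0:+5G /dev/sdf
    sgdisk --new=2:0:+5G /dev/sdf
    sgdisk --new=3:0:+5G /dev/sdf
    sgdisk --new=4:0:+5G /dev/sdf
    ceph-deploy osd create node1:/dev/sdb:/dev/sdf1    # HDD data disk, SSD journal partition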

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread Deepak Naidu
Thanks for the note, yes I know them all. It will be shared among multiple 3-4 HDD OSD Disks. -- Deepak On Jun 12, 2017, at 7:07 AM, David Turner > wrote: Why do you want a 70GB journal? You linked to the documentation, so I'm assuming

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Why do you want a 70GB journal? You linked to the documentation, so I'm assuming that you followed the formula stated to figure out how big your journal should be... "osd journal size = {2 * (expected throughput * filestore max sync interval)}". I've never heard of a cluster that requires such a
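Plugging numbers into that formula as a hedged example (the throughput figure is an assumption; 5 s is the filestore max sync interval default): 2 * (500 MB/s * 5 s) = 5000 MB, i.e. roughly the stock 5 GB journal rather than 70 GB. In ceph.conf that would be:

    [osd]
    osd journal size = 5120    # in MB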

[ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-11 Thread Deepak Naidu
Hello folks, I am trying to use an entire ssd partition as the journal disk, e.g. the /dev/sdf1 partition (70GB). But when I look up the osd config using the command below, I see ceph-deploy sets journal_size as 5GB. More confusing, I see the OSD logs showing the correct size in blocks in the

Re: [ceph-users] ceph-deploy to a particular version

2017-05-02 Thread German Anders
I think you can do $ ceph-deploy install --release --repo-url http://download.ceph.com/. .., also you can replace the --release flag with --dev or --testing and specify the version; I've done it with the release and dev flags and it works great :) hope it helps best,
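A hedged example of those invocations; the release name, repo URL and host are illustrative, not taken from the thread:

    ceph-deploy install --release luminous node1
    ceph-deploy install --repo-url https://download.ceph.com/rpm-luminous/el7 --gpg-url https://download.ceph.com/keys/release.asc node1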

[ceph-users] ceph-deploy to a particular version

2017-05-02 Thread Puff, Jonathon
From what I can find, ceph-deploy only allows installs for a release, i.e. jewel, which is giving me 10.2.7, but I'd like to specify the particular update. For instance, I want to go to 10.2.3. Do I need to avoid ceph-deploy entirely to do this, or can I install the correct version via yum then

[ceph-users] ceph-deploy updated without version number change

2017-04-12 Thread Brendan Moloney
Hi, I noticed that the Debian package for ceph-deploy was updated yesterday, but the version number remains the same (1.5.37). Any idea what is going on? Thanks, Brendan ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
I ended up using a newer version of ceph-deploy and things went more smoothly after that. Thanks again to everyone for all the help! Shain > On Mar 16, 2017, at 10:29 AM, Shain Miley wrote: > > This sender failed our fraud detection checks and may not be who they appear >

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
It looks like things are working a bit better today…however now I am getting the following error: [hqosd6][DEBUG ] detect platform information from remote host [hqosd6][DEBUG ] detect machine type [ceph_deploy.install][INFO ] Distro info: Ubuntu 14.04 trusty [hqosd6][INFO ] installing ceph on

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
Thanks for all the help so far. Just to be clear…if I am planning on upgrading the cluster from Hammer in say the next 3 months…what is the suggested upgrade path? Thanks again, Shain > On Mar 15, 2017, at 2:05 PM, Abhishek Lekshmanan wrote: > > > > On 15/03/17 18:32,

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Abhishek Lekshmanan
On 15/03/17 18:32, Shinobu Kinjo wrote: So description of Jewel is wrong? http://docs.ceph.com/docs/master/releases/ Yeah, we missed updating the jewel dates as well when updating hammer; Jewel is an LTS and will get more upgrades. Once Luminous is released, however, we'll eventually

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
So description of Jewel is wrong? http://docs.ceph.com/docs/master/releases/ On Thu, Mar 16, 2017 at 2:27 AM, John Spray wrote: > On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote: >> It may be probably kind of challenge but please consider Kraken (or

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread John Spray
On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote: > It may be probably kind of challenge but please consider Kraken (or > later) because Jewel will be retired: > > http://docs.ceph.com/docs/master/releases/ Nope, Jewel is LTS, Kraken is not. Kraken will only receive

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
Would you file this as a doc bug? So we can discuss it properly with tracking. http://tracker.ceph.com On Thu, Mar 16, 2017 at 2:17 AM, Deepak Naidu wrote: >>> because Jewel will be retired: > Hmm. Isn't Jewel LTS ? > > Every other stable releases is a LTS (Long Term Stable) and

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
>> because Jewel will be retired: Hmm. Isn't Jewel LTS ? Every other stable release is an LTS (Long Term Stable) and will receive updates until two LTS are published. -- Deepak > On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo wrote: > > It may be probably kind of challenge

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
It may be probably kind of challenge but please consider Kraken (or later) because Jewel will be retired: http://docs.ceph.com/docs/master/releases/ On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley wrote: > No this is a production cluster that I have not had a chance to upgrade yet.

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
No, this is a production cluster that I have not had a chance to upgrade yet. We had an issue with the OS on a node, so I am just trying to reinstall ceph and hope that the osd data is still intact. Once I get things stable again I was planning on upgrading…but the upgrade is a bit intensive by

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Vasu Kulkarni
Just curious, why you still want to deploy new hammer instead of stable jewel? Is this a test environment? the last .10 release was basically for bug fixes for 0.94.9. On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo wrote: > FYI: >

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
I had a similar issue when using an older version of ceph-deploy. I see the URL git.ceph.com doesn't work in a browser either. To resolve this, I installed the latest version of ceph-deploy and it worked fine. The new version wasn't using git.ceph.com. During ceph-deploy you can mention what version of
