Re: [ceph-users] ceph-deploy can't generate the client.admin keyring

2019-12-19 Thread Jean-Philippe Méthot
Alright, so I figured it out. It was essentially because the monitor’s main IP wasn’t on the public network in the ceph.conf file. Hence, ceph was trying to connect on an IP where the monitor wasn’t listening. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack
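For readers hitting the same symptom, a minimal sketch of what the fix implies in ceph.conf; the addresses, subnets and hostname below are illustrative, not taken from the thread:

    # /etc/ceph/ceph.conf (the monitor's address must sit inside "public network")
    [global]
    mon initial members = mon01
    mon host = 192.0.2.10              # the mon's public-network IP
    public network = 192.0.2.0/24
    cluster network = 198.51.100.0/24

    # verify the monitor is actually listening on that address
    ss -tlnp | grep ceph-mon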

Re: [ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
Please disregard the earlier message. I found the culprit: `osd_crush_update_on_start` was set to false. *Mami Hayashida* *Research Computing Associate* Univ. of Kentucky ITS Research Computing Infrastructure On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami wrote: > I am trying to build a
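A quick way to check and correct that setting, sketched below; the OSD id, weight and host name are placeholders, and `ceph config set` assumes Mimic or later (on older releases put the option in ceph.conf under [osd] and restart the OSDs):

    # what the OSD is actually running with (run on the OSD host)
    ceph daemon osd.0 config get osd_crush_update_on_start

    # Mimic and later: enable it cluster-wide so OSDs place themselves on start
    ceph config set osd osd_crush_update_on_start true

    # or place/weight an already-registered OSD by hand
    ceph osd crush create-or-move osd.12 1.82 host=osd-node-03 root=default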

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-05 Thread Alfredo Deza
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett wrote: > > > > On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: >>> >>> >>> Is there a way we can easily set that up without trying to use outdated >>> tools? Presumably if ceph still supports this as the docs claim, there's a >>> way to get it

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: > >> Is there a way we can easily set that up without trying to use outdated >> tools? Presumably if ceph still supports this as the docs claim, there's a >> way to get it done without using ceph-deploy? >> > It might be more involved if you are

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:19 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> >>> As explained above, we can't just create smaller raw devices. Yes, >>> these are VMs but they're meant to replicate physical servers that will be >>> used in production,

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> As explained above, we can't just create smaller raw devices. Yes, these >> are VMs but they're meant to replicate physical servers that will be used >> in production, where no such volumes are available. >> > In that case you will have to

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:07 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > >> >> >> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett >> wrote: >> >>> you are using HOST:DIR option which is bit old and I think it was >>> supported till jewel, since you are

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > > > On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett > wrote: > >> you are using HOST:DIR option which is bit old and I think it was >> supported till jewel, since you are using 2.0.1 you should be using only >> 'osd create' with logical
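For reference, a sketch of the ceph-deploy 2.x equivalent using logical volumes; the VG/LV names, sizes and host are made up, and the --filestore/--journal arguments are only needed if filestore is really required:

    # carve LVs out of whatever block device the VM has
    vgcreate ceph-vg /dev/vdb
    lvcreate -L 20G -n osd0-data ceph-vg
    lvcreate -L 5G -n osd0-journal ceph-vg

    # ceph-deploy 2.x: a single 'osd create' replaces the old prepare/activate pair
    ceph-deploy osd create --filestore \
        --data ceph-vg/osd0-data \
        --journal ceph-vg/osd0-journal \
        node1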

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett wrote: > Going to take another stab at this... > > We have a development environment–made up of VMs–for developing and > testing the deployment tools for a particular service that depends on > cephfs for sharing state data between hosts. In

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
Going to take another stab at this... We have a development environment–made up of VMs–for developing and testing the deployment tools for a particular service that depends on cephfs for sharing state data between hosts. In production we will be using filestore OSDs because of the very low

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Mon, Dec 3, 2018 at 4:47 PM Matthew Pounsett wrote: > > I'm in the process of updating some development VMs that use ceph-fs. It > looks like recent updates to ceph have deprecated the 'ceph-deploy osd > prepare' and 'activate' commands in favour of the previously-optional > 'create'

Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Alfredo Deza
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish wrote: > > Trying to create an OSD with multipath with dmcrypt and it failed. Any > suggestion please? ceph-disk is known to have issues like this. It is already deprecated in the Mimic release and will no longer be available for the upcoming release

Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Kevin Olbrich
I hit the same problem. I had to create a GPT table for each disk, create the first partition over the full space, and then feed these to ceph-volume (should be similar for ceph-deploy). Also I am not sure if you can combine fs-type btrfs with bluestore (afaik that is for filestore). Kevin On Tue., 6 Nov.
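The partitioning Kevin describes looks roughly like this (a sketch: the multipath device name is an example, partition naming varies between mpatha1 and mpatha-part1, and --dmcrypt is only needed for encrypted OSDs):

    # new GPT label, then one partition spanning the whole device
    sgdisk --zap-all /dev/mapper/mpatha
    sgdisk --new=1:0:0 /dev/mapper/mpatha
    partprobe /dev/mapper/mpatha

    # hand the partition to ceph-volume
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/mapper/mpatha1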

Re: [ceph-users] ceph-deploy with a specified osd ID

2018-10-30 Thread Paul Emmerich
ceph-deploy doesn't support that. You can use ceph-disk or ceph-volume directly (with basically the same syntax as ceph-deploy), but you can only explicitly re-use an OSD id if you set it to destroyed before. I.e., the proper way to replace an OSD while avoiding unnecessary data movement is: ceph
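The replacement flow Paul refers to, sketched with Luminous-era commands; the OSD id and device name are placeholders:

    # mark the dead OSD destroyed so its id and CRUSH position are kept
    ceph osd destroy 23 --yes-i-really-mean-it

    # rebuild on the replacement disk, explicitly reusing the id
    ceph-volume lvm zap /dev/sdk
    ceph-volume lvm create --osd-id 23 --data /dev/sdk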

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-05 Thread Eugen Block
Hi Jones, Just to make things clear: are you so telling me that it is completely impossible to have a ceph "volume" in non-dedicated devices, sharing space with, for instance, the nodes swap, boot or main partition? And so the only possible way to have a functioning ceph distributed filesystem

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-04 Thread Jones de Andrade
Hi Eugen. Just tried everything again here by removing the /sda4 partitions and letting it so that either salt-run proposal-populate or salt-run state.orch ceph.stage.configure could try to find the free space on the partitions to work with: unsuccessfully again. :( Just to make things clear:

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block
Hi Jones, I still don't think creating an OSD on a partition will work. The reason is that SES creates an additional partition per OSD resulting in something like this: vdb 253:16 0 5G 0 disk ├─vdb1 253:17 0 100M 0 part /var/lib/ceph/osd/ceph-1 └─vdb2

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen. Entirely my misunderstanding, I thought there would be something at boot time (which would certainly not make any sense at all). Sorry. Before stage 3 I ran the commands you suggested on the nodes, and only one gave me the output below: ### # grep -C5 sda4

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block
Hi, I'm not sure if there's a misunderstanding. You need to track the logs during the osd deployment step (stage.3), that is where it fails, and this is where /var/log/messages could be useful. Since the deployment failed you have no systemd-units (ceph-osd@.service) to log anything.

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Jones de Andrade
Hi Eugen. Ok, edited the file /etc/salt/minion, uncommented the "log_level_logfile" line and set it to "debug" level. Turned off the computer, waited a few minutes so that the time frame would stand out in the /var/log/messages file, and restarted the computer. Using vi I "greped out" (awful
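For reference, the minion-side change being described is just this (a sketch; the restart command assumes systemd):

    # /etc/salt/minion
    log_level_logfile: debug

    # restart the minion so the new level takes effect
    systemctl restart salt-minion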

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Eugen Block
Hi, So, it only contains logs concerning the node itself (is it correct? sincer node01 is also the master, I was expecting it to have logs from the other too) and, moreover, no ceph-osd* files. Also, I'm looking the logs I have available, and nothing "shines out" (sorry for my poor english) as

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-29 Thread Jones de Andrade
Hi Eugen. Sorry for the delay in answering. Just looked in the /var/log/ceph/ directory. It only contains the following files (for example on node01): ### # ls -lart total 3864 -rw--- 1 ceph ceph 904 ago 24 13:11 ceph.audit.log-20180829.xz drwxr-xr-x 1 root root 898 ago 28 10:07

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-27 Thread Eugen Block
Hi Jones, all ceph logs are in the directory /var/log/ceph/, each daemon has its own log file, e.g. OSD logs are named ceph-osd.*. I haven't tried it but I don't think SUSE Enterprise Storage deploys OSDs on partitioned disks. Is there a way to attach a second disk to the OSD nodes,

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen. Thanks for the suggestion. I'll look for the logs (since it's our first attempt with ceph, I'll have to discover where they are, but no problem). One thing caught my attention in your response, however: I haven't made myself clear, but one of the failures we encountered was that the

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block
Hi, take a look into the logs, they should point you in the right direction. Since the deployment stage fails at the OSD level, start with the OSD logs. Something's not right with the disks/partitions, did you wipe the partition from previous attempts? Regards, Eugen Zitat von Jones de

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Robert Stanford
ind regards, > Glen Baars > > -Original Message- > From: Thode Jocelyn > Sent: Thursday, 9 August 2018 1:41 PM > To: Erik McCormick > Cc: Glen Baars ; Vasu Kulkarni < > vakul...@redhat.com>; ceph-users@lists.ceph.com > Subject: RE: [ceph-users] [Ceph-depl

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Glen Baars
To: Erik McCormick Cc: Glen Baars ; Vasu Kulkarni ; ceph-users@lists.ceph.com Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name Hi Erik, The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file to determine which configuration file to use (from CLUSTER_NAME). So you need to set

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Thode Jocelyn
Jocelyn Thode From: Magnus Grönlund [mailto:mag...@gronlund.se] Sent: jeudi, 9 août 2018 14:33 To: Thode Jocelyn Cc: Erik McCormick ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name Hi Jocelyn, I'm in the process of setting up rdb-mirroring myself and stumbled

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Magnus Grönlund
: mercredi, 8 août 2018 16:39 > To: Thode Jocelyn > Cc: Glen Baars ; Vasu Kulkarni < > vakul...@redhat.com>; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name > > I'm not using this feature, so maybe I'm missing something, but from the >

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
al Message- From: Erik McCormick [mailto:emccorm...@cirrusseven.com] Sent: mercredi, 8 août 2018 16:39 To: Thode Jocelyn Cc: Glen Baars ; Vasu Kulkarni ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name I'm not using this feature, so maybe I'm missing something

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
018 05:43 > To: Erik McCormick > Cc: Thode Jocelyn ; Vasu Kulkarni > ; ceph-users@lists.ceph.com > Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name > > > > Hello Erik, > > > > We are going to use RBD-mirror to replicate the clusters. This seems to need > sep

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
@lists.ceph.com Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name Hello Erik, We are going to use RBD-mirror to replicate the clusters. This seems to need separate cluster names. Kind regards, Glen Baars From: Erik McCormick <emccorm...@cirrusseven.com> Sent: Thursday, 2 August 201

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
ph.com>> On Behalf Of Glen Baars Sent: Monday, 23 July 2018 5:59 PM To: Thode Jocelyn <jocelyn.th...@elca.ch>; Vasu Kulkarni <vakul...@redhat.com> Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
2018 5:59 PM > To: Thode Jocelyn ; Vasu Kulkarni < > vakul...@redhat.com> > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name > > How very timely, I am facing the exact same issue. > > Kind regards, > Glen Baars > > -Original Me

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
To: Thode Jocelyn ; Vasu Kulkarni Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name How very timely, I am facing the exact same issue. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Thode Jocelyn Sent: Monday, 23 July 2018 1:42

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-23 Thread Glen Baars
How very timely, I am facing the exact same issue. Kind regards, Glen Baars -Original Message- From: ceph-users On Behalf Of Thode Jocelyn Sent: Monday, 23 July 2018 1:42 PM To: Vasu Kulkarni Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name Hi, Yes

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-22 Thread Thode Jocelyn
et 2018 17:25 To: Thode Jocelyn Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f25

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Vasu Kulkarni
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, > the ability to specify a cluster name was removed. Is there a reason for > this removal ? > > > > Because right

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
This [*] is my ceph.conf. 10.70.42.9 is the public address, and it is indeed the IP used by the MON daemon: [root@c-mon-02 ~]# netstat -anp | grep 6789 tcp 0 0 10.70.42.9:6789 0.0.0.0:* LISTEN 3835/ceph-mon tcp 0 0 10.70.42.9:33592

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
check ceph.conf, it controls to which mon IP the client tries to connect. 2018-05-10 12:57 GMT+02:00 Massimo Sgaravatto : > I configured the "public network" attribute in the ceph configuration file. > > But it looks like to me that in the "auth get client.admin"

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
I configured the "public network" attribute in the ceph configuration file. But it looks like to me that in the "auth get client.admin" command [*] issued by ceph-deploy the address of the management network is used (I guess because c-mon-02 gets resolved to the IP management address) Cheers,

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
Monitors can use only exactly one IP address. ceph-deploy uses some heuristics based on hostname resolution and ceph public addr configuration to guess which one to use during setup. (Which I've always found to be a quite annoying feature.) The mon's IP must be reachable from all ceph daemons and
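A sketch of what "resolving to the public IP" means in practice; the addresses and subnet below are examples only:

    # /etc/hosts on every node (or fix DNS): mon names must map to their
    # public-network addresses, not the management network
    192.0.2.11  c-mon-01
    192.0.2.12  c-mon-02

    # and ceph.conf should agree
    [global]
    public network = 192.0.2.0/24
    mon host = 192.0.2.11,192.0.2.12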

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
Alright I'll try that. Thanks On Mon, Apr 30, 2018 at 5:45 PM Vasu Kulkarni wrote: > If you are on 14.04 or need to use ceph-disk, then you can install > version 1.5.39 from pip. to downgrade just uninstall the current one > and reinstall 1.5.39 you dont have to delete

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Vasu Kulkarni
If you are on 14.04 or need to use ceph-disk, then you can install version 1.5.39 from pip. to downgrade just uninstall the current one and reinstall 1.5.39 you dont have to delete your conf file folder. On Mon, Apr 30, 2018 at 5:31 PM, Scottix wrote: > It looks like
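Spelled out, that downgrade is just the following (a sketch, assuming ceph-deploy was installed via pip):

    pip uninstall ceph-deploy
    pip install ceph-deploy==1.5.39
    ceph-deploy --version    # should now report 1.5.39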

Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread Anthony D'Atri
> "I read a couple of versions ago that ceph-deploy was not recommended > for production clusters." InkTank had sort of discouraged the use of ceph-deploy; in 2014 we used it only to deploy OSDs. Some time later the message changed.

Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread David Turner
use something like ceph-ansible in > parallel for the missing stuff, I can only hope we will find a (full > time?!) maintainer for ceph-deploy and keep it alive. PLEASE ;) > > > > Sent: Thursday, 05 April 2018 at 08:53 > From: "Wido den Hollander" <w...@42on.com&

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread ceph . novice
like ceph-ansible in parallel for the missing stuff, I can only hope we will find a (full time?!) maintainer for ceph-deploy and keep it alive. PLEASE ;) Sent: Thursday, 05 April 2018 at 08:53 From: "Wido den Hollander" <w...@42on.com> To: ceph-users@lists.ceph.com Subj

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Wido den Hollander
On 04/04/2018 08:58 PM, Robert Stanford wrote: > >  I read a couple of versions ago that ceph-deploy was not recommended > for production clusters.  Why was that?  Is this still the case?  We > have a lot of problems automating deployment without ceph-deploy. > > In the end it is just a

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Dietmar Rieder
On 04/04/2018 08:58 PM, Robert Stanford wrote: > >  I read a couple of versions ago that ceph-deploy was not recommended > for production clusters.  Why was that?  Is this still the case?  We > have a lot of problems automating deployment without ceph-deploy. > We are using it in production on

Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread Brady Deetz
We use ceph-deploy in production. That said, our crush map is getting more complex and we are starting to make use of other tooling as that occurs. But we still use ceph-deploy to install ceph and bootstrap OSDs. On Wed, Apr 4, 2018, 1:58 PM Robert Stanford wrote: > >

Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread ceph
On 4 April 2018 20:58:19 CEST, Robert Stanford wrote: >I read a couple of versions ago that ceph-deploy was not recommended >for >production clusters. Why was that? Is this still the case? We have a I cannot imagine that. I did use it now, a few versions before 2.0

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread David Turner
You mean documentation like `ceph-deploy --help` or `man ceph-deploy` or the [1] online documentation? Spoiler, they all document and explain what `--release` does. I do agree that the [2] documentation talking about deploying a luminous cluster should mention it if jewel was left the default

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread Max Cuttins
Ah! So you think this is done by design? However, that command is very, very useful. Please add it to the documentation. Next time it will save me 2-3 hours. On 01/03/2018 06:12, Sébastien VIGNERON wrote: Hi Max, I had the same issue (under Ubuntu 16.04) but I have read the

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Sébastien VIGNERON
Hi Max, I had the same issue (under Ubuntu 16.04) but I have read the ceph-deploy 2.0.0 source code and saw a "—-release" flag for the install subcommand. You can found the flag with the following command: ceph-deploy install --help It looks like the culprit part of ceph-deploy can be found
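In practice the flag looks like this (host names are placeholders; check 'ceph-deploy install --help' on your version first):

    ceph-deploy install --release luminous node1 node2 node3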

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Max Cuttins
Didn't check at the time. I deployed everything from a standalone VM. The VM was just built with a fresh CentOS 7.4 using the minimal installation ISO 1708. It's a completely new/fresh/empty system. Then I ran: yum update -y yum install wget zip unzip vim pciutils -y yum install epel-release -y yum

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread David Turner
Which version of ceph-deploy are you using? On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini wrote: > This worked. > > However somebody should investigate why default is still jewel on Centos > 7.4 > > Il 28/02/2018 00:53, jorpilo ha scritto: > > Try using: > ceph-deploy

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Massimiliano Cuttini
This worked. However, somebody should investigate why the default is still jewel on CentOS 7.4. On 28/02/2018 00:53, jorpilo wrote: Try using: ceph-deploy --release luminous host1... Original message From: Massimiliano Cuttini Date: 28/2/18 12:42 a.m.

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-27 Thread jorpilo
Try using: ceph-deploy --release luminous host1... Original message From: Massimiliano Cuttini Date: 28/2/18 12:42 a.m. (GMT+01:00) To: ceph-users@lists.ceph.com Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel instead) This is the

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-16 Thread Alfredo Deza
On Wed, Nov 15, 2017 at 8:31 AM, Wei Jin wrote: > I tried to do purge/purgedata and then redo the deploy command for a > few times, and it still fails to start osd. > And there is no error log, anyone know what's the problem? Seems like this is OSD 0, right? Have you checked

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried to do purge/purgedata and then redo the deploy command a few times, and it still fails to start the osd. And there is no error log; does anyone know what the problem is? BTW, my OS is Debian with a 4.4 kernel. Thanks. On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin wrote: > Hi,

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
On Fri, Jul 14, 2017 at 10:37 AM, Oscar Segarra wrote: > I'm testing on latest Jewell version I've found in repositories: > you can skip that command then, I will fix the document to add a note for jewel or pre luminous build. > > [root@vdicnode01 yum.repos.d]# ceph

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Oscar Segarra
I'm testing on the latest Jewel version I've found in the repositories: [root@vdicnode01 yum.repos.d]# ceph --version ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e) thanks a lot! 2017-07-14 19:21 GMT+02:00 Vasu Kulkarni : > It is tested for master and is working

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
It is tested for master and is working fine; I will run those same tests on luminous, check if there is an issue, and update here. mgr create is needed for luminous+ builds only. On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown wrote: > I've been trying to work through

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Roger Brown
I've been trying to work through similar mgr issues for Xenial-Luminous... roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2 [ceph_deploy.conf][DEBUG ] found configuration file at: /home/roger/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Then you want separate partitions for each OSD journal. If you have 4 HDD OSDs using this as their journal, you should have 4x 5GB partitions on the SSD. On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu wrote: > Thanks for the note, yes I know them all. It will be shared
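Carving those journal partitions by hand could look like this (a sketch: device names are examples, and the colon syntax shown is the pre-2.0 ceph-deploy form in use at the time of this thread):

    # four 5 GB journal partitions on one SSD, one per HDD OSD
    for n in 1 2 3 4; do
        sgdisk --new=${n}:0:+5G /dev/sdf
    done

    # then point each OSD at its own journal partition, e.g.
    ceph-deploy osd prepare node1:/dev/sdb:/dev/sdf1 node1:/dev/sdc:/dev/sdf2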

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread Deepak Naidu
Thanks for the note, yes I know them all. It will be shared among multiple 3-4 HDD OSD Disks. -- Deepak On Jun 12, 2017, at 7:07 AM, David Turner > wrote: Why do you want a 70GB journal? You linked to the documentation, so I'm assuming

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Why do you want a 70GB journal? You linked to the documentation, so I'm assuming that you followed the formula stated to figure out how big your journal should be... "osd journal size = {2 * (expected throughput * filestore max sync interval)}". I've never heard of a cluster that requires such a
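Plugging hedged example numbers into that formula, for illustration only:

    # osd journal size = 2 * (expected throughput * filestore max sync interval)
    # e.g. ~250 MB/s sustained throughput and the default 5 s sync interval:
    #   2 * 250 MB/s * 5 s = 2500 MB, i.e. ~2.5 GB
    # so the usual 5-10 GB journal is already generous; 70 GB buys nothing extra.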

Re: [ceph-users] ceph-deploy to a particular version

2017-05-02 Thread German Anders
I think you can do $ ceph-deploy install --release --repo-url http://download.ceph.com/..., also you can change the --release flag with --dev or --testing and specify the version; I've done it with the release and dev flags and it works great :) hope it helps best,
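Filled in with placeholder values, that would look something like this (release name and URLs are examples):

    ceph-deploy install --release luminous \
        --repo-url https://download.ceph.com/debian-luminous \
        --gpg-url https://download.ceph.com/keys/release.asc \
        node1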

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
I ended up using a newer version of ceph-deploy and things went more smoothly after that. Thanks again to everyone for all the help! Shain > On Mar 16, 2017, at 10:29 AM, Shain Miley wrote: > > This sender failed our fraud detection checks and may not be who they appear >

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
It looks like things are working a bit better today…however now I am getting the following error: [hqosd6][DEBUG ] detect platform information from remote host [hqosd6][DEBUG ] detect machine type [ceph_deploy.install][INFO ] Distro info: Ubuntu 14.04 trusty [hqosd6][INFO ] installing ceph on

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
Thanks for all the help so far. Just to be clear…if I am planning on upgrading the cluster from Hammer in say the next 3 months…what is the suggested upgrade path? Thanks again, Shain > On Mar 15, 2017, at 2:05 PM, Abhishek Lekshmanan wrote: > > > > On 15/03/17 18:32,

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Abhishek Lekshmanan
On 15/03/17 18:32, Shinobu Kinjo wrote: So description of Jewel is wrong? http://docs.ceph.com/docs/master/releases/ Yeah we missed updating jewel dates as well when updating about hammer, Jewel is an LTS and would get more upgrades. Once Luminous is released, however, we'll eventually

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
So description of Jewel is wrong? http://docs.ceph.com/docs/master/releases/ On Thu, Mar 16, 2017 at 2:27 AM, John Spray wrote: > On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote: >> It may be probably kind of challenge but please consider Kraken (or

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread John Spray
On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote: > It may be probably kind of challenge but please consider Kraken (or > later) because Jewel will be retired: > > http://docs.ceph.com/docs/master/releases/ Nope, Jewel is LTS, Kraken is not. Kraken will only receive

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
Would you file this as a doc bug? So we discuss properly with tracking. http://tracker.ceph.com On Thu, Mar 16, 2017 at 2:17 AM, Deepak Naidu wrote: >>> because Jewel will be retired: > Hmm. Isn't Jewel LTS ? > > Every other stable releases is a LTS (Long Term Stable) and

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
>> because Jewel will be retired: Hmm. Isn't Jewel LTS ? Every other stable releases is a LTS (Long Term Stable) and will receive updates until two LTS are published. -- Deepak > On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo wrote: > > It may be probably kind of challenge

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
It may be probably kind of challenge but please consider Kraken (or later) because Jewel will be retired: http://docs.ceph.com/docs/master/releases/ On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley wrote: > No this is a production cluster that I have not had a chance to upgrade yet.

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
No, this is a production cluster that I have not had a chance to upgrade yet. We had an issue with the OS on a node, so I am just trying to reinstall ceph and hope that the osd data is still intact. Once I get things stable again I was planning on upgrading…but the upgrade is a bit intensive by

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Vasu Kulkarni
Just curious, why you still want to deploy new hammer instead of stable jewel? Is this a test environment? the last .10 release was basically for bug fixes for 0.94.9. On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo wrote: > FYI: >

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
I had a similar issue when using an older version of ceph-deploy. I see the URL git.ceph.com doesn't work in a browser either. To resolve this, I installed the latest version of ceph-deploy and it worked fine. The new version wasn't using git.ceph.com. During ceph-deploy you can mention what version of
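The workaround Deepak describes amounts to something like this (the release and host name are examples):

    pip install --upgrade ceph-deploy
    ceph-deploy install --release hammer hqosd6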

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
FYI: https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3 On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley wrote: > Hello, > I am trying to deploy ceph to a new server using ceph-deply which I have > done in the past many times without issue. > > Right now I am seeing a timeout

Re: [ceph-users] Ceph-deploy not creating osd's

2016-09-09 Thread Shain Miley
Can someone please suggest a course of action moving forward? I don't feel comfortable making changes to the crush map without a better understanding of what exactly is going on here. The new osd appears in the 'osd tree' but not in the current crush map. The server that hosts the osd is not

Re: [ceph-users] Ceph-deploy not creating osd's

2016-09-08 Thread Shain Miley
I ended up starting from scratch and doing a purge and purgedata on that host using ceph-deploy, after that things seemed to go better. The osd is up and in at this point, however when the osd was added to the cluster...no data was being moved to the new osd. Here is a copy of my current crush

Re: [ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread Chengwei Yang
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote: > Hi All, > > > > I am trying to do a fresh install of Ceph Jewel on my cluster. I went through > all the steps in configuring the network, ssh, password, etc. Now I am at the > stage of running the ceph-deploy commands to install

Re: [ceph-users] Ceph-deploy new OSD addition issue

2016-06-28 Thread Pisal, Ranjit Dnyaneshwar
This is another error I get while trying to activate disk - [ceph@MYOPTPDN16 ~]$ sudo ceph-disk activate /dev/sdl1 2016-06-29 11:25:17.436256 7f8ed85ef700 0 -- :/1032777 >> 10.115.1.156:6789/0 pipe(0x7f8ed4021610 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ed40218a0).fault 2016-06-29 11:25:20.436362

Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Noah Watkins
Working for me now. Thanks for taking care of this. - Noah On Tue, Jun 14, 2016 at 5:42 PM, Alfredo Deza wrote: > We are now good to go. > > Sorry for all the troubles, some packages were missed in the metadata, > had to resync+re-sign them to get everything in order. > > Just

Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
We are now good to go. Sorry for all the troubles, some packages were missed in the metadata, had to resync+re-sign them to get everything in order. Just tested it out and it works as expected. Let me know if you have any issues. On Tue, Jun 14, 2016 at 5:57 PM, Noah Watkins

Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Noah Watkins
Yeh, I'm still seeing the problem, too Thanks. On Tue, Jun 14, 2016 at 2:55 PM Alfredo Deza wrote: > On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza wrote: > > Is it possible you tried to install just when I was syncing 10.2.2 ? > > > > :) > > > > Would you

Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza wrote: > Is it possible you tried to install just when I was syncing 10.2.2 ? > > :) > > Would you mind trying this again and see if you are good? > > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins wrote: >>

Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
Is it possible you tried to install just when I was syncing 10.2.2 ? :) Would you mind trying this again and see if you are good? On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins wrote: > Installing Jewel with ceph-deploy has been working for weeks. Today I > started to get

Re: [ceph-users] ceph-deploy prepare journal on software raid ( md device )

2016-06-12 Thread Oliver Dzombic
Hi to myself =) just in case others run into the same: #1: You will have to update parted from version 3.1 to 3.2 (for example, simply take the Fedora package, which is newer, and replace it with that); parted is what provides partprobe. #2: Software RAID will still not work, because of the guid of the

Re: [ceph-users] ceph-deploy jewel stopped working

2016-04-21 Thread Stephen Lord
Sorry about the mangled urls in there, these are all from download.ceph.com rpm-jewel el7 x86_64. Steve > On Apr 21, 2016, at 1:17 PM, Stephen Lord wrote: > > > > Running this command > > ceph-deploy install --stable jewel ceph00 > > And using the 1.5.32 version

Re: [ceph-users] ceph deploy osd install broken on centos 7 with hammer 0.94.6

2016-03-23 Thread Oliver Dzombic
Hi, after I copied /lib/lsb/* (it did not exist on my new CentOS 7.2 system), now: # service ceph start Error EINVAL: entity osd.18 exists but key does not match ERROR:ceph-disk:Failed to activate ceph-disk: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd',

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-05 Thread Martin Palma
Hi Maruthi, happy to hear that it is working now. Yes, with the latest stable release, infernalis, the "ceph" username is reserved for the Ceph daemons. Best, Martin On Tuesday, 5 January 2016, Maruthi Seshidhar wrote: > Thank you Martin, > > Yes, "nslookup "

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-04 Thread Maruthi Seshidhar
Thank you Martin, Yes, "nslookup " was not working. After configuring DNS on all nodes, the nslookup issue got sorted out. But the "some monitors have still not reached quorum" issue was still seen. I was using the user "ceph" for ceph deployment. The user "ceph" is reserved for ceph internal use.
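Creating a separate deployment user instead of "ceph" goes roughly like this (the username is arbitrary; this mirrors the usual preflight steps):

    useradd -m -s /bin/bash cephdeploy
    passwd cephdeploy
    echo "cephdeploy ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephdeploy
    chmod 0440 /etc/sudoers.d/cephdeploy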

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-01 Thread Martin Palma
Hi Maruthi, did you test that DNS name lookup properly works (e.g. nslookup ceph-mon1 etc...) on all hosts? From the output of 'ceph-deploy' it seems that the host can only resolve its own name but not the others: [ceph-mon1][DEBUG ] "monmap": { [ceph-mon1][DEBUG ] "created":
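A quick check on every node could be as simple as the loop below (hostnames are examples; nslookup works too, but getent also exercises /etc/hosts):

    for h in ceph-mon1 ceph-mon2 ceph-mon3; do
        getent hosts "$h" || echo "cannot resolve $h"
    done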

Re: [ceph-users] ceph-deploy for "deb http://ceph.com/debian-hammer/ trusty main"

2015-11-13 Thread Jaime Melis
Hi, can someone shed some light on the status of this issue? I can see that Loic removed the target version a few days ago. Is there any way we can help to fix this? cheers, Jaime On Thu, Oct 22, 2015 at 10:16 PM, David Clarke wrote: > On 23/10/15 09:08, Kjetil

Re: [ceph-users] ceph-deploy on lxc container - 'initctl: Event failed'

2015-11-06 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I've put monitors in LXC but I haven't done it with ceph-deploy. I've had no problems with it. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Fri, Nov 6, 2015 at 12:55 PM, Bogdan SOLGA

Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Luke Jing Yuan
: Wednesday, November 04, 2015 8:40 PM To: ceph-users Subject: Re: [ceph-users] ceph-deploy - default release Hello! A retry of this question, as I'm still stuck at the install step, due to the old version issue. Any help is highly appreciated. Regards, Bogdan On Sat, Oct 31, 2015 at 9:22 AM

Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Bogdan SOLGA
Hello! A retry of this question, as I'm still stuck at the install step, due to the old version issue. Any help is highly appreciated. Regards, Bogdan On Sat, Oct 31, 2015 at 9:22 AM, Bogdan SOLGA wrote: > Hello everyone! > > I'm struggling to get a new Ceph cluster
