Re: [ceph-users] ceph-deploy can't generate the client.admin keyring

2019-12-19 Thread Jean-Philippe Méthot
Alright, so I figured it out. It was essentially because the monitor’s main IP 
wasn’t within the public network defined in the ceph.conf file, so ceph was 
trying to connect to an IP the monitor wasn’t listening on.
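
For anyone hitting the same timeout: the fix boils down to keeping the monitor 
address inside the public network range in ceph.conf. A minimal sketch (the 
subnet and address below are made-up examples, adjust to your environment):

   [global]
   public_network = 10.0.0.0/24
   mon_initial_members = ceph-monitor1
   mon_host = 10.0.0.11        # must fall inside public_network and be an IP the mon listens on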


Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.




> On 19 Dec 2019, at 11:50, Jean-Philippe Méthot wrote:
> 
> Hi,
> 
> We’re currently running Ceph mimic in production and that works fine. 
> However, I am currently deploying another Ceph mimic setup for testing 
> purposes and ceph-deploy is running into issues I’ve never seen before.
> Essentially, the initial monitor setup starts the service, but the process 
> gets interrupted at 
> 
> sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. 
> --keyring=/var/lib/ceph/mon/ceph-ceph-monitor1/keyring auth get client.admin
> 
> with the client.admin keyring generation timing out. I can however issue 
> commands manually using ceph-authtool to create an admin keyring and it works 
> flawlessly.
> 
> ceph monitor logs don’t show any error and I am able to reach the monitor’s 
> port by telnet. What could be causing this timeout?
> 
> 
> Jean-Philippe Méthot
> Openstack system administrator
> Administrateur système Openstack
> PlanetHoster inc.
> 
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
Please disregard the earlier message.  I found the culprit:
`osd_crush_update_on_start` was set to false.
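
For anyone else tracking this down, a hedged sketch of how one might check and
flip that option (exact workflow will differ depending on how your configs are
managed):

   # check what a running OSD actually sees (run on the OSD host)
   ceph daemon osd.0 config get osd_crush_update_on_start

   # set it back to true in ceph.conf on the OSD hosts ...
   [osd]
   osd crush update on start = true

   # ... then restart the OSDs so they re-register their CRUSH location and weight
   systemctl restart ceph-osd.target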

*Mami Hayashida*
*Research Computing Associate*
Univ. of Kentucky ITS Research Computing Infrastructure



On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami 
wrote:

> I am trying to build a Ceph cluster using ceph-deploy.  To add OSDs, I
> used the following command (which I had successfully used before to build
> another cluster):
>
> ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh  osd0
> ceph-deploy osd create --block-db=ssd0/db1 --data=/dev/sdi   osd0
> etc.
>
> Prior to running those commands, I did manually create LVs on /dev/sda for
> DB/WAL with:
>
> *** on osd0 node***
> sudo pvcreate /dev/sda
> sudo vgcreate ssd0 /dev/sda;
> for i in {0..9}; do
> sudo lvcreate -L 40G -n db${i} ssd0;
> done
> **
> But I just realized (after creating over 240 OSDs!) neither the host nor
> each osd weight was added to the CRUSH map as far as I can tell (expected
> weight for each osd is 3.67799):
>
> cephuser@admin_node:~$ ceph osd tree
> ID CLASS WEIGHT TYPE NAME      STATUS REWEIGHT PRI-AFF
> -1       0      root default
>  0   hdd 0          osd.0          up  1.00000 1.00000
>  1   hdd 0          osd.1          up  1.00000 1.00000
> (... and so on)
>
> And checking the crush map with `ceph osd crush dump` also confirms that
> there are no host entries or weights (capacity) for each osd.  At the same
> time,
> `ceph -s` and the dashboard correctly shows ` usage: 9.7 TiB used, 877 TiB
> / 886 TiB avail` (correct number for all the OSDs added so far). In fact,
> the dashboard even correctly groups OSDs into correct hosts.
>
> One additional piece of info: I have been able to create a test pool (`ceph osd
> pool create mytest 8`) but cannot create an object in the pool.
>
> I am running Ceph version Mimic 13.2.6, which I installed using ceph-deploy
> version 2.0.1, with all servers running Ubuntu 18.04.2.
>
> Any help/advice is appreciated.
>
> *Mami Hayashida*
> *Research Computing Associate*
> Univ. of Kentucky ITS Research Computing Infrastructure
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-05 Thread Alfredo Deza
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett  wrote:
>
>
>
> On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni  wrote:
>>>
>>>
>>> Is there a way we can easily set that up without trying to use outdated 
>>> tools?  Presumably if ceph still supports this as the docs claim, there's a 
>>> way to get it done without using ceph-deploy?
>>
>> It might be more involved if you are trying to setup manually, you can give 
>> 1.5.38  a try(not that old) and see if it works 
>> https://pypi.org/project/ceph-deploy/1.5.38/

Vasu has pointed out pretty much everything correctly. If you don't
want the new syntax, 1.5.38 is what you want. I'd like to point out
a couple of things here:

* ceph-deploy has a pretty good changelog that gets updated for every
release. The 2.0.0 release has a backwards-incompatibility notice
explaining many of the issues you've raised:
http://docs.ceph.com/ceph-deploy/docs/changelog.html

* Deploying to directories is not supported anymore with ceph-deploy,
and will soon be impossible in Ceph with the standard tooling. If you are
trying to replicate production environments with smaller devices, you can
still do this with some manual work: ceph-deploy (and ceph-volume on the
remote machine) can consume logical volumes, which can easily be set up
on a loop device. That is what we do for some of the functional testing:
we create sparse files of 10 GB, attach them to a loop device, then
create an LV on top.
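
For reference, a rough sketch of that loop-device approach (sizes, paths and
names below are only examples):

   truncate -s 10G /var/tmp/osd0.img          # sparse file
   losetup /dev/loop0 /var/tmp/osd0.img       # attach it to a loop device
   pvcreate /dev/loop0
   vgcreate vg_loop /dev/loop0
   lvcreate -l 100%FREE -n osd0 vg_loop
   # then let ceph-deploy/ceph-volume consume the LV, e.g.
   ceph-deploy osd create --data vg_loop/osd0 <hostname>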

>>
>
>  Not that old now.. but eventually it will be. :)
>
> The goal was to revisit our deployment tools later (after getting the rest of 
> the service working) and replace ceph-deploy with direct configuration of all 
> the machines, so maybe having a deprecated piece of software around that 
> needs to be replaced will help with that motivation when the time comes.
>
> Thanks for your help!
>Matt
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni  wrote:

>
>> Is there a way we can easily set that up without trying to use outdated
>> tools?  Presumably if ceph still supports this as the docs claim, there's a
>> way to get it done without using ceph-deploy?
>>
> It might be more involved if you are trying to setup manually, you can
> give 1.5.38  a try(not that old) and see if it works
> https://pypi.org/project/ceph-deploy/1.5.38/
>
>
 Not that old now.. but eventually it will be. :)

The goal was to revisit our deployment tools later (after getting the rest
of the service working) and replace ceph-deploy with direct configuration
of all the machines, so maybe having a deprecated piece of software around
that needs to be replaced will help with that motivation when the time
comes.

Thanks for your help!
   Matt
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:19 PM Matthew Pounsett  wrote:

>
>
> On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni  wrote:
>
>>
>>> As explained above, we can't just create smaller raw devices.  Yes,
>>> these are VMs but they're meant to replicate physical servers that will be
>>> used in production, where no such volumes are available.
>>>
>> In that case you will have to use the same version of ceph-deploy you
>> have used to deploy the original systems. you cannot do this now with newer
>> version.
>>
>>>
>>> So.. we're trying to figure out how to replicate the configuration we've
>>> been using where the ceph data is stored on the OS filesystem.
>>>
>>
> That's irritating, but understood. :)
>
> Is there a way we can easily set that up without trying to use outdated
> tools?  Presumably if ceph still supports this as the docs claim, there's a
> way to get it done without using ceph-deploy?
>
It might be more involved if you are trying to set this up manually; you can
give 1.5.38 a try (it's not that old) and see if it works:
https://pypi.org/project/ceph-deploy/1.5.38/


>
> If not.. I guess worst case we can revisit having the hardware group to
> repartition the drives with separate, tiny, data and journal partitions...
> that is assuming it's not required that those functions have access to
> whole disks.  We would rather avoid that if at all possible, though.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni  wrote:

>
>> As explained above, we can't just create smaller raw devices.  Yes, these
>> are VMs but they're meant to replicate physical servers that will be used
>> in production, where no such volumes are available.
>>
> In that case you will have to use the same version of ceph-deploy you have
> used to deploy the original systems. you cannot do this now with newer
> version.
>
>>
>> So.. we're trying to figure out how to replicate the configuration we've
>> been using where the ceph data is stored on the OS filesystem.
>>
>
That's irritating, but understood. :)

Is there a way we can easily set that up without trying to use outdated
tools?  Presumably if ceph still supports this as the docs claim, there's a
way to get it done without using ceph-deploy?

If not.. I guess worst case we can revisit having the hardware group
repartition the drives with separate, tiny, data and journal partitions...
that is, assuming it's not required that those functions have access to
whole disks.  We would rather avoid that if at all possible, though.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:07 PM Matthew Pounsett  wrote:

>
>
> On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni  wrote:
>
>>
>>
>> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett 
>> wrote:
>>
>>> you are using HOST:DIR option which is bit old and I think it was
>>> supported till jewel,  since you are using 2.0.1 you should be using  only
>>> 'osd create' with logical volume or full block device as defined here:
>>> http://docs.ceph.com/docs/mimic/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-filestore
>>> , ceph-deploy calls ceph-volume with the same underlying syntax. Since this
>>> is a VM, you can just add addtional smaller raw devices (eg: /dev/sde ) and
>>> use that for journal.
>>>
>>
> As explained above, we can't just create smaller raw devices.  Yes, these
> are VMs but they're meant to replicate physical servers that will be used
> in production, where no such volumes are available.
>
In that case you will have to use the same version of ceph-deploy you
used to deploy the original systems; you cannot do this now with the newer
version.

>
> So.. we're trying to figure out how to replicate the configuration we've
> been using where the ceph data is stored on the OS filesystem.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni  wrote:

>
>
> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett 
> wrote:
>
>> you are using HOST:DIR option which is bit old and I think it was
>> supported till jewel,  since you are using 2.0.1 you should be using  only
>> 'osd create' with logical volume or full block device as defined here:
>> http://docs.ceph.com/docs/mimic/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-filestore
>> , ceph-deploy calls ceph-volume with the same underlying syntax. Since this
>> is a VM, you can just add addtional smaller raw devices (eg: /dev/sde ) and
>> use that for journal.
>>
>
As explained above, we can't just create smaller raw devices.  Yes, these
are VMs but they're meant to replicate physical servers that will be used
in production, where no such volumes are available.

So.. we're trying to figure out how to replicate the configuration we've
been using where the ceph data is stored on the OS filesystem.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett  wrote:

> Going to take another stab at this...
>
> We have a development environment–made up of VMs–for developing and
> testing the deployment tools for a particular service that depends on
> cephfs for sharing state data between hosts.  In production we will be
> using filestore OSDs because of the very low volume of data (a few hundred
> kilobytes) and the very low rate of change.  There's insufficient
> performance benefit for it to make sense for us to create an operational
> exception by configuring the hardware differently from everything else just
> to have separate block devices.
>
> Unfortunately, even though the documentation says that filestore OSDs are
> well tested and supported, they don't seem to be well documented.
>
Sadly, the documentation on ceph-deploy is a bit behind :(


> In a recent test of our deployment tools (using Kraken on Centos/7) the
> 'ceph-deploy osd' steps failed. Assuming this was simply because Kraken is
> now so far past EOL that it just wasn't supported properly on an updated
> Centos box I started working on an update to Luminos.  However, I've since
> discovered that the problem is actually that ceph-deploy's OSD 'prepare'
> and 'activate' commands have been deprecated regardless of ceph release.  I
> now realize that ceph-deploy is maintained independently from the rest of
> ceph, but not documented independently, so the ceph documentation that
> references ceph-deploy seems to now be frequently incorrect.
>
> Except where mentioned otherwise, the rest of this is using the latest
> Luminos from the download.ceph.com Yum archive (12.2.10) with ceph-deploy
> 2.0.1
>
> Our scripts, written for Kraken, were doing this to create filestore OSDs
> on four dev VMs:
> ceph-deploy osd prepare tldhost01:/var/local/osd0
> tldhost02:/var/local/osd0 tldhost03:/var/local/osd0
> tldhost04:/var/local/osd0
> ceph-deploy osd activate tldhost01:/var/local/osd0
> tldhost02:/var/local/osd0 tldhost03:/var/local/osd0
> tldhost04:/var/local/osd0
>
You are using the HOST:DIR option, which is a bit old; I think it was supported
until Jewel. Since you are using 2.0.1 you should be using only 'osd create'
with a logical volume or full block device, as defined here:
http://docs.ceph.com/docs/mimic/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-filestore
(ceph-deploy calls ceph-volume with the same underlying syntax). Since this
is a VM, you can just add additional smaller raw devices (e.g. /dev/sde) and
use one of them for the journal.
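
To make that concrete, a hedged example of the newer syntax (device names are
hypothetical; the journal can be a small partition or LV on the extra device):

   ceph-deploy disk zap tldhost01 /dev/sdd
   ceph-deploy osd create --filestore --data /dev/sdd --journal /dev/sde1 tldhost01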



>
> Both 'prepare' and 'activate' seem to be completely deprecated now
> (neither shows up in the help output generated when the above commands
> fail) in Kraken and Luminos.  This seems to have changed in the last 60
> days or so.  The above commands now fail with this error:
>usage: ceph-deploy osd [-h] {list,create} ...
>ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare'
> (choose from 'list', 'create')
>
> I'm trying to figure out the 'ceph-deploy osd create' syntax to duplicate
> the above, but the documentation is no help.  The Luminos documentation
> still shows the above prepare/activate syntax should be valid, and
> continues to show the journal path as being optional for the 'ceph-deploy
> osd create' command.
> <
> http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/#prepare-osds
> >.
> The same documentation for Mimic seems to be updated for the new
> ceph-deploy syntax, including the elimination of 'prepare' and 'activate',
> but doesn't include specifics for a filestore deployment:
> <
> http://docs.ceph.com/docs/mimic/rados/deployment/ceph-deploy-osd/#create-osds
> >
>
> The new syntax seems to suggest I can now only do one host at a time, and
> must split up the host, data, and journal values.  After much trial and
> error I've also found it's now required to specify the journal path, but
> not knowing for sure what ceph-deploy was doing in the background with the
> journal path by default before, I've had a hard time sorting out things to
> try with the new syntax.  Following the above logic, and skipping over a
> few things I've tried to get here, in my latest attempt I've moved the ceph
> data down one level in the directory tree and added a journal directory.
> Where tldhost01 is localhost:
>mkdir -p /var/local/ceph/{osd0,journal}
>ceph-deploy osd create --data /var/local/ceph/osd0 --journal
> /var/local/ceph/journal --filestore tldhost01
>
> The assumption in this is that --data and --journal accept filesystem
> paths the same way the 'prepare' and 'activate' commands used to, but that
> is clearly not the case, as the above complains that I have not supplied
> block devices.  It looks like --filestore is not doing what I hoped.
>
You cannot use a DIR as the argument to --data and --journal, as explained above.
--filestore doesn't actually mean a filesystem path here; it needs a block
device (or logical volume) and will automatically create a filesystem on it.
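
If no spare disk is available, one hedged workaround consistent with the above
is to hand ceph-volume logical volumes instead of directories, carved out of
whatever unused partition exists (partition, VG/LV names and sizes below are
illustrative):

   pvcreate /dev/sdb3                      # any unused partition
   vgcreate cephvg /dev/sdb3
   lvcreate -L 20G -n osd0-data cephvg
   lvcreate -L 5G  -n osd0-journal cephvg
   ceph-deploy osd create --filestore --data cephvg/osd0-data --journal cephvg/osd0-journal tldhost01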

>
> At this point 

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
Going to take another stab at this...

We have a development environment (made up of VMs) for developing and testing
the deployment tools for a particular service that depends on cephfs for
sharing state data between hosts.  In production we will be using filestore
OSDs because of the very low volume of data (a few hundred kilobytes) and
the very low rate of change.  There's insufficient performance benefit for
it to make sense for us to create an operational exception by configuring
the hardware differently from everything else just to have separate block
devices.

Unfortunately, even though the documentation says that filestore OSDs are
well tested and supported, they don't seem to be well documented.

In a recent test of our deployment tools (using Kraken on CentOS 7) the
'ceph-deploy osd' steps failed. Assuming this was simply because Kraken is
now so far past EOL that it just wasn't supported properly on an updated
CentOS box, I started working on an update to Luminous.  However, I've since
discovered that the problem is actually that ceph-deploy's OSD 'prepare'
and 'activate' commands have been deprecated regardless of ceph release.  I
now realize that ceph-deploy is maintained independently from the rest of
ceph, but not documented independently, so the ceph documentation that
references ceph-deploy seems to now be frequently incorrect.

Except where mentioned otherwise, the rest of this is using the latest
Luminous from the download.ceph.com yum repository (12.2.10) with ceph-deploy
2.0.1

Our scripts, written for Kraken, were doing this to create filestore OSDs
on four dev VMs:
ceph-deploy osd prepare tldhost01:/var/local/osd0 tldhost02:/var/local/osd0
tldhost03:/var/local/osd0 tldhost04:/var/local/osd0
ceph-deploy osd activate tldhost01:/var/local/osd0
tldhost02:/var/local/osd0 tldhost03:/var/local/osd0
tldhost04:/var/local/osd0

Both 'prepare' and 'activate' seem to be completely deprecated now (neither
shows up in the help output generated when the above commands fail) in
Kraken and Luminous.  This seems to have changed in the last 60 days or so.
The above commands now fail with this error:
   usage: ceph-deploy osd [-h] {list,create} ...
   ceph-deploy osd: error: argument subcommand: invalid choice: 'prepare'
(choose from 'list', 'create')

I'm trying to figure out the 'ceph-deploy osd create' syntax to duplicate
the above, but the documentation is no help.  The Luminous documentation
still shows that the above prepare/activate syntax should be valid, and
continues to show the journal path as being optional for the 'ceph-deploy
osd create' command.
<
http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/#prepare-osds
>.
The same documentation for Mimic seems to be updated for the new
ceph-deploy syntax, including the elimination of 'prepare' and 'activate',
but doesn't include specifics for a filestore deployment:
<
http://docs.ceph.com/docs/mimic/rados/deployment/ceph-deploy-osd/#create-osds
>

The new syntax seems to suggest I can now only do one host at a time, and
must split up the host, data, and journal values.  After much trial and
error I've also found it's now required to specify the journal path, but
not knowing for sure what ceph-deploy was doing in the background with the
journal path by default before, I've had a hard time sorting out things to
try with the new syntax.  Following the above logic, and skipping over a
few things I've tried to get here, in my latest attempt I've moved the ceph
data down one level in the directory tree and added a journal directory.
Where tldhost01 is localhost:
   mkdir -p /var/local/ceph/{osd0,journal}
   ceph-deploy osd create --data /var/local/ceph/osd0 --journal
/var/local/ceph/journal --filestore tldhost01

The assumption in this is that --data and --journal accept filesystem paths
the same way the 'prepare' and 'activate' commands used to, but that is
clearly not the case, as the above complains that I have not supplied block
devices.  It looks like --filestore is not doing what I hoped.

At this point I'm stuck.  I've gone through all the documentation I can
find, and although it frequently mentions that ceph started by storing its
data on the filesystem and that doing so is still well supported, I can't
actually find any documentation that says how to do it.  When we started
this project we used information from the quickstart documents to get
filestore OSDs set up, but even the quickstart documents don't seem to
supply that information (anymore).

Thanks for any pointers anyone can supply.
   Matt
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Mon, Dec 3, 2018 at 4:47 PM Matthew Pounsett  wrote:

>
> I'm in the process of updating some development VMs that use ceph-fs.  It
> looks like recent updates to ceph have deprecated the 'ceph-deploy osd
> prepare' and 'activate' commands in favour of the previously-optional
> 'create' command.
>
> We're using filestore OSDs on these VMs, but I can't seem to figure out
> the syntax that ceph-deploy wants to specify the path.
>
> Where we used to use:
> ceph-deploy osd prepare tldhost01:/var/local/osd0
> tldhost02:/var/local/osd0 tldhost03:/var/local/osd0
> tldhost04:/var/local/osd0
> ceph-deploy osd activate tldhost01:/var/local/osd0
> tldhost02:/var/local/osd0 tldhost03:/var/local/osd0
> tldhost04:/var/local/osd0
> .. a similar path syntax for 'osd create' generates an error.
>
> The help output for 'ceph-deploy osd create --help' seems to suggest the
> following could work:
> ceph-deploy osd-create --filestore tldhost01:/var/local/osd0 ...
>
What version of ceph-deploy are you using, and what is the version of Ceph?
If you are using the latest ceph-deploy from PyPI, it will default to
ceph-volume, and you have to specify the '--journal' option, which is
mandatory for filestore. Since you are using a VM, you can partition part
of the disk for data and another, smaller partition for the journal.
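
A hedged sketch of that partitioning idea (device and sizes are examples; if
your ceph-volume version refuses a bare partition for --data, put an LV on it
first, as shown in the other replies in this thread):

   sgdisk --new=0:0:+20G /dev/vdb      # data partition (e.g. ends up as /dev/vdb1)
   sgdisk --new=0:0:+5G  /dev/vdb      # journal partition (e.g. /dev/vdb2)
   ceph-deploy osd create --filestore --data /dev/vdb1 --journal /dev/vdb2 tldhost01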


>
> However it does not.
>
> What's the actual process for using ceph-deploy to set up filestore OSDs
> now?
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Alfredo Deza
On Tue, Nov 6, 2018 at 8:41 AM Pavan, Krish  wrote:
>
> Trying to create an OSD with multipath and dmcrypt, and it failed. Any 
> suggestions please?

ceph-disk is known to have issues like this. It is already deprecated
in the Mimic release and will no longer be available for the upcoming
release (Nautilus).

I would strongly suggest you upgrade ceph-deploy to the 2.X.X series
which supports ceph-volume.
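
If the existing ceph-deploy came from PyPI, the upgrade can be as simple as the
following (assuming pip was how it was installed; distro packages would instead
be upgraded via the package manager):

   pip install --upgrade 'ceph-deploy>=2.0.0'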

>
> ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr 
> --bluestore --dmcrypt  -- failed
>
> ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr 
> --bluestore – worked
>
>
>
> the logs for fail
>
> [ceph-store12][WARNIN] command: Running command: /usr/sbin/restorecon -R 
> /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
>
> [ceph-store12][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph 
> /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
>
> [ceph-store12][WARNIN] Traceback (most recent call last):
>
> [ceph-store12][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in 
>
> [ceph-store12][WARNIN] load_entry_point('ceph-disk==1.0.0', 
> 'console_scripts', 'ceph-disk')()
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736, in run
>
> [ceph-store12][WARNIN] main(sys.argv[1:])
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5687, in main
>
> [ceph-store12][WARNIN] args.func(args)
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2108, in main
>
> [ceph-store12][WARNIN] Prepare.factory(args).prepare()
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2097, in prepare
>
> [ceph-store12][WARNIN] self._prepare()
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2171, in _prepare
>
> [ceph-store12][WARNIN] self.lockbox.prepare()
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2861, in prepare
>
> [ceph-store12][WARNIN] self.populate()
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2818, in populate
>
> [ceph-store12][WARNIN] get_partition_base(self.partition.get_dev()),
>
> [ceph-store12][WARNIN]   File 
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 844, in 
> get_partition_base
>
> [ceph-store12][WARNIN] raise Error('not a partition', dev)
>
> [ceph-store12][WARNIN] ceph_disk.main.Error: Error: not a partition: 
> /dev/dm-215
>
> [ceph-store12][ERROR ] RuntimeError: command returned non-zero exit status: 1
>
> [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v 
> prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --bluestore 
> --cluster ceph --fs-type btrfs -- /dev/mapper/mpathr
>
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Kevin Olbrich
I met the same problem. I had to create a GPT table on each disk, create a
first partition spanning the full space, and then feed those partitions to
ceph-volume (it should be similar for ceph-deploy); see the sketch below.
Also, I am not sure you can combine --fs-type btrfs with bluestore (AFAIK
that option is for filestore).
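
A hedged sketch of that workaround for one of the multipath devices from the
log above (the exact partition name, e.g. mpathr1 vs mpathr-part1, depends on
your multipath/kpartx configuration):

   parted -s /dev/mapper/mpathr mklabel gpt
   parted -s /dev/mapper/mpathr mkpart primary 0% 100%
   kpartx -a /dev/mapper/mpathr
   ceph-volume lvm create --bluestore --dmcrypt --data /dev/mapper/mpathr1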

Kevin


On Tue, 6 Nov 2018 at 14:41, Pavan, Krish <krish.pa...@nuance.com> wrote:

> Trying to create an OSD with multipath and dmcrypt, and it failed. Any
> suggestions please?
>
> ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr
> --bluestore --dmcrypt  -- failed
>
> ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr
> --bluestore – worked
>
>
>
> the logs for fail
>
> [ceph-store12][WARNIN] command: Running command: /usr/sbin/restorecon -R
> /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
>
> [ceph-store12][WARNIN] command: Running command: /usr/bin/chown -R
> ceph:ceph
> /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
>
> [ceph-store12][WARNIN] Traceback (most recent call last):
>
> [ceph-store12][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in 
>
> [ceph-store12][WARNIN] load_entry_point('ceph-disk==1.0.0',
> 'console_scripts', 'ceph-disk')()
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736, in run
>
> [ceph-store12][WARNIN] main(sys.argv[1:])
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5687, in main
>
> [ceph-store12][WARNIN] args.func(args)
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2108, in main
>
> [ceph-store12][WARNIN] Prepare.factory(args).prepare()
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2097, in prepare
>
> [ceph-store12][WARNIN] self._prepare()
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2171, in _prepare
>
> [ceph-store12][WARNIN] self.lockbox.prepare()
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2861, in prepare
>
> [ceph-store12][WARNIN] self.populate()
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2818, in populate
>
> [ceph-store12][WARNIN] get_partition_base(self.partition.get_dev()),
>
> [ceph-store12][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 844, in
> get_partition_base
>
> [ceph-store12][WARNIN] raise Error('not a partition', dev)
>
> [ceph-store12][WARNIN] ceph_disk.main.Error: Error: not a partition:
> /dev/dm-215
>
> [ceph-store12][ERROR ] RuntimeError: command returned non-zero exit
> status: 1
>
> [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk
> -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --bluestore
> --cluster ceph --fs-type btrfs -- /dev/mapper/mpathr
>
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy with a specified osd ID

2018-10-30 Thread Paul Emmerich
ceph-deploy doesn't support that. You can use ceph-disk or ceph-volume
directly (with basically the same syntax as ceph-deploy), but you can
only explicitly re-use an OSD id if you mark it as destroyed first.

I.e., the proper way to replace an OSD while avoiding unnecessary data
movement is:
ceph osd destroy osd.XX
ceph-volume lvm prepare ... --osd-id 10

Also, check out "ceph osd purge" to remove OSDs with one simple step.
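
Put together, a replacement might look roughly like this with a recent
ceph-volume (the osd id 17 and device name are only examples, and the zap step
assumes the old device should be wiped):

   ceph osd destroy 17 --yes-i-really-mean-it
   ceph-volume lvm zap /dev/sdX --destroy
   ceph-volume lvm create --data /dev/sdX --osd-id 17   # or prepare + activate, as above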

Paul
On Mon, 29 Oct 2018 at 14:43, Jin Mao wrote:
>
> Gents,
>
> My cluster had a gap in the OSD sequence numbers at a certain point. Basically, 
> because of a missing "osd auth del/rm" in a previous disk replacement task for 
> osd.17, a new osd.34 was created. It did not really bother me until recently, 
> when I tried to replace all the smaller disks with bigger disks.
>
> Ceph also seems to pick up the next available osd sequence number. When I 
> replaced osd.18, the disk came up online as osd.17. When I did osd.19, it 
> became osd.18. This generates more backfill_wait pgs than sticking to the 
> original osd numbers would.
>
> Using ceph-deploy version 10.2.3, is there a way to specify the osd id when 
> doing 'osd activate'?
>
> Thank you.
>
> Jin.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-05 Thread Eugen Block

Hi Jones,


Just to make things clear: are you so telling me that it is completely
impossible to have a ceph "volume" in non-dedicated devices, sharing space
with, for instance, the nodes swap, boot or main partition?

And so the only possible way to have a functioning ceph distributed
filesystem working would be by having in each node at least one disk
dedicated for the operational system and another, independent disk
dedicated to the ceph filesystem?


I don't think it's completely impossible, but it would require code  
changes in SES and DeepSea and that seems quite challenging.


But if you don't have to stick with SES/DeepSea and instead build your  
cluster manually, you could create a logical volume on your spare  
partition and deploy OSDs with ceph-volume lvm.


This could be something like this:

---cut here---

# create logical volume "osd4" on volume group "vg0"
ceph-2:~ # lvcreate -n osd4 -L 1G vg0
  Logical volume "osd4" created.


# prepare lvm for bluestore
ceph-2:~ # ceph-volume lvm prepare --bluestore --data vg0/osd4
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name  
client.bootstrap-osd --keyring  
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new  
3b9eaa0e-9a4a-49ec-9042-34ad19a59592

Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/vg0/osd4
Running command: /bin/chown -R ceph:ceph /dev/dm-4
Running command: /bin/ln -s /dev/vg0/osd4 /var/lib/ceph/osd/ceph-4/block
Running command: /usr/bin/ceph --cluster ceph --name  
client.bootstrap-osd --keyring  
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o  
/var/lib/ceph/osd/ceph-4/activate.monmap

 stderr: got monmap epoch 2
Running command: /usr/bin/ceph-authtool  
/var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4  
--add-key AQD3j49bDzsFIBAAsXQjhbwqFQwt/Vqq9VOnsw==

 stdout: creating /var/lib/ceph/osd/ceph-4/keyring
added entity osd.4 auth auth(auid = 18446744073709551615  
key=AQD3j49bDzsFIBAAsXQjhbwqFQwt/Vqq9VOnsw== with 0 caps)

Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore  
bluestore --mkfs -i 4 --monmap  
/var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data  
/var/lib/ceph/osd/ceph-4/ --osd-uuid  
3b9eaa0e-9a4a-49ec-9042-34ad19a59592 --setuser ceph --setgroup ceph

--> ceph-volume lvm prepare successful for: vg0/osd4


# activate lvm OSD
ceph-2:~ # ceph-volume lvm activate 4 3b9eaa0e-9a4a-49ec-9042-34ad19a59592
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph  
prime-osd-dir --dev /dev/vg0/osd4 --path /var/lib/ceph/osd/ceph-4  
--no-mon-config

Running command: /bin/ln -snf /dev/vg0/osd4 /var/lib/ceph/osd/ceph-4/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Running command: /bin/chown -R ceph:ceph /dev/dm-4
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Running command: /bin/systemctl enable  
ceph-volume@lvm-4-3b9eaa0e-9a4a-49ec-9042-34ad19a59592
 stderr: Created symlink  
/etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-4-3b9eaa0e-9a4a-49ec-9042-34ad19a59592.service →  
/usr/lib/systemd/system/ceph-volume@.service.

Running command: /bin/systemctl enable --runtime ceph-osd@4
 stderr: Created symlink  
/run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service →  
/usr/lib/systemd/system/ceph-osd@.service.

Running command: /bin/systemctl start ceph-osd@4
--> ceph-volume lvm activate successful for osd ID: 4
---cut here---

Instead of running "prepare" and "activate" separately you can run  
"ceph-volume lvm create ..."; this will execute both steps and launch  
the OSD.
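
For the example above, that single step would be something like:

   ceph-volume lvm create --bluestore --data vg0/osd4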


This way you don't need further partitions, but you won't be able to  
use deepsea for automated deployment since SES doesn't support lvm  
based OSDs (yet).


So you should not give up, there is a way :-)

Note: because of a compatibility issue with python3 and ceph-volume  
you should use at least


ceph-2:~ # ceph --version
ceph version 13.2.1-106-g9a1fcb1b6a  
(9a1fcb1b6a6682c3323a38c52898a94e121f6c15) mimic (stable)


Hope this helps!

Regards,
Eugen


Quoting Jones de Andrade:


Hi Eugen.

Just tried everything again here by removing the /sda4 partitions and
letting it so that either salt-run proposal-populate or salt-run state.orch
ceph.stage.configure could try to find the free space on the partitions to
work with: unsuccessfully again. :(

Just to make things clear: are you so telling me that it is completely
impossible to have a ceph "volume" in non-dedicated devices, sharing space
with, for instance, the nodes swap, boot or main partition?

And so the only possible way to 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-04 Thread Jones de Andrade
Hi Eugen.

Just tried everything again here, removing the /sda4 partitions and
leaving things so that either salt-run proposal-populate or salt-run state.orch
ceph.stage.configure could try to find the free space on the disks to
work with: unsuccessful again. :(

Just to make things clear: are you telling me that it is completely
impossible to have a ceph "volume" on non-dedicated devices, sharing space
with, for instance, the node's swap, boot or main partition?

And so the only possible way to have a functioning ceph distributed
filesystem would be to have, in each node, at least one disk
dedicated to the operating system and another, independent disk
dedicated to the ceph filesystem?

That would be an awful drawback to our plans if real, but if there is no
other way, we will have to just give up. Just, please, answer these two
questions clearly before we capitulate?  :(

Anyway, thanks a lot, once again,

Jones

On Mon, Sep 3, 2018 at 5:39 AM Eugen Block  wrote:

> Hi Jones,
>
> I still don't think creating an OSD on a partition will work. The
> reason is that SES creates an additional partition per OSD resulting
> in something like this:
>
> vdb   253:16   05G  0 disk
> ├─vdb1253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
> └─vdb2253:18   0  4,9G  0 part
>
> Even with external block.db and wal.db on additional devices you would
> still need two partitions for the OSD. I'm afraid with your setup this
> can't work.
>
> Regards,
> Eugen
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-09-03 Thread Eugen Block

Hi Jones,

I still don't think creating an OSD on a partition will work. The  
reason is that SES creates an additional partition per OSD resulting  
in something like this:


vdb      253:16   0    5G  0 disk
├─vdb1   253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2   253:18   0  4,9G  0 part

Even with external block.db and wal.db on additional devices you would  
still need two partitions for the OSD. I'm afraid with your setup this  
can't work.


Regards,
Eugen


Quoting Jones de Andrade:


Hi Eugen.

Sorry for the double email, but now it stopped complaining (too much) about
repositories and NTP and moved forward again.

So, I ran on master:

**

# salt-run state.orch ceph.stage.deploy
firewall          : disabled
apparmor          : disabled
fsid              : valid
public_network    : valid
cluster_network   : valid
cluster_interface : valid
monitors          : valid
mgrs              : valid
storage           : valid
ganesha           : valid
master_role       : valid
time_server       : valid
fqdn              : valid
[ERROR   ] {'out': 'highstate', 'ret': {'bohemia.iq.ufrgs.br':
{'file_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-managed':
{'changes': {}, 'pchanges': {}, 'comment': 'File
/var/lib/ceph/bootstrap-osd/ceph.keyring is in the correct state', 'name':
'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'result': True, '__sls__':
'ceph.osd.keyring.default', '__run_num__': 0, 'start_time':
'12:43:51.639582', 'duration': 40.998, '__id__':
'/var/lib/ceph/bootstrap-osd/ceph.keyring'},
'file_|-/etc/ceph/ceph.client.storage.keyring_|-/etc/ceph/ceph.client.storage.keyring_|-managed':
{'changes': {}, 'pchanges': {}, 'comment': 'File
/etc/ceph/ceph.client.storage.keyring is in the correct state', 'name':
'/etc/ceph/ceph.client.storage.keyring', 'result': True, '__sls__':
'ceph.osd.keyring.default', '__run_num__': 1, 'start_time':
'12:43:51.680857', 'duration': 19.265, '__id__':
'/etc/ceph/ceph.client.storage.keyring'}, 'module_|-deploy
OSDs_|-osd.deploy_|-run': {'name': 'osd.deploy', 'changes': {}, 'comment':
'Module function osd.deploy threw an exception. Exception: Mine on
bohemia.iq.ufrgs.br  for cephdisks.list',
'result': False, '__sls__': 'ceph.osd.default', '__run_num__': 2,
'start_time': '12:43:51.701179', 'duration': 38.789, '__id__': 'deploy
OSDs'}}, 'torcello.iq.ufrgs.br ':
{'file_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-managed':
{'changes': {}, 'pchanges': {}, 'comment': 'File
/var/lib/ceph/bootstrap-osd/ceph.keyring is in the correct state', 'name':
'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'result': True, '__sls__':
'ceph.osd.keyring.default', '__run_num__': 0, 'start_time':
'12:43:51.768119', 'duration': 39.544, '__id__':
'/var/lib/ceph/bootstrap-osd/ceph.keyring'},
'file_|-/etc/ceph/ceph.client.storage.keyring_|-/etc/ceph/ceph.client.storage.keyring_|-managed':
{'changes': {}, 'pchanges': {}, 'comment': 'File
/etc/ceph/ceph.client.storage.keyring is in the correct state', 'name':
'/etc/ceph/ceph.client.storage.keyring', 'result': True, '__sls__':
'ceph.osd.keyring.default', '__run_num__': 1, 'start_time':
'12:43:51.807977', 'duration': 16.645, '__id__':
'/etc/ceph/ceph.client.storage.keyring'}, 'module_|-deploy
OSDs_|-osd.deploy_|-run': {'name': 'osd.deploy', 'changes': {}, 'comment':
'Module function osd.deploy threw an exception. Exception: Mine on
torcello.iq.ufrgs.br  for cephdisks.list',
'result': False, '__sls__': 'ceph.osd.default', '__run_num__': 2,
'start_time': '12:43:51.825744', 'duration': 39.334, '__id__': 'deploy
OSDs'}}, 'patricia.iq.ufrgs.br ':
{'file_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-/var/lib/ceph/bootstrap-osd/ceph.keyring_|-managed':
{'changes': {}, 'pchanges': {}, 'comment': 'File
/var/lib/ceph/bootstrap-osd/ceph.keyring is in the correct state', 'name':
'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'result': True, '__sls__':
'ceph.osd.keyring.default', '__run_num__': 0, 'start_time':
'12:43:52.039506', 'duration': 41.975, '__id__':
'/var/lib/ceph/bootstrap-osd/ceph.keyring'},
'file_|-/etc/ceph/ceph.client.storage.keyring_|-/etc/ceph/ceph.client.storage.keyring_|-managed': {'changes': {},
'pchanges': {}, 'comment': 'File /etc/ceph/ceph.client.storage.keyring is
in the correct state', 'name': '/etc/ceph/ceph.client.storage.keyring',
'result': True, '__sls__': 'ceph.osd.keyring.default', '__run_num__': 1,
'start_time': '12:43:52.081767', 'duration': 17.852, '__id__':
'/etc/ceph/ceph.client.storage.keyring'}, 'module_|-deploy
OSDs_|-osd.deploy_|-run': {'name': 'osd.deploy', 'changes': {}, 'comment':
'Module function osd.deploy threw 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen.

Entirely my misunderstanding; I thought there would be something at boot
time (which would certainly not make any sense at all). Sorry.

Before stage 3 I ran the commands you suggested on the nodes, and only one
got me the output below:

###
# grep -C5 sda4 /var/log/messages
2018-08-28T08:26:50.635077-03:00 polar kernel: [3.029809] ata2.00:
ATAPI: PLDS DVD+/-RW DU-8A5LH, 6D1M, max UDMA/133
2018-08-28T08:26:50.635080-03:00 polar kernel: [3.030616] ata2.00:
configured for UDMA/133
2018-08-28T08:26:50.635082-03:00 polar kernel: [3.038249] scsi 1:0:0:0:
CD-ROMPLDS DVD+-RW DU-8A5LH 6D1M PQ: 0 ANSI: 5
2018-08-28T08:26:50.635085-03:00 polar kernel: [3.048102] usb 1-6: new
low-speed USB device number 2 using xhci_hcd
2018-08-28T08:26:50.635095-03:00 polar kernel: [3.051408] scsi 1:0:0:0:
Attached scsi generic sg1 type 5
2018-08-28T08:26:50.635098-03:00 polar kernel: [3.079763]  sda: sda1
sda2 sda3 sda4
2018-08-28T08:26:50.635101-03:00 polar kernel: [3.080548] sd 0:0:0:0:
[sda] Attached SCSI disk
2018-08-28T08:26:50.635104-03:00 polar kernel: [3.109021] sr 1:0:0:0:
[sr0] scsi3-mmc drive: 24x/24x writer cd/rw xa/form2 cdda tray
2018-08-28T08:26:50.635106-03:00 polar kernel: [3.109025] cdrom:
Uniform CD-ROM driver Revision: 3.20
2018-08-28T08:26:50.635109-03:00 polar kernel: [3.109246] sr 1:0:0:0:
Attached scsi CD-ROM sr0
2018-08-28T08:26:50.635112-03:00 polar kernel: [3.206490] usb 1-6: New
USB device found, idVendor=413c, idProduct=2113
--
2018-08-28T10:11:10.512604-03:00 polar os-prober: debug: running
/usr/lib/os-probes/mounted/83haiku on mounted /dev/sda1
2018-08-28T10:11:10.516374-03:00 polar 83haiku: debug: /dev/sda1 is not a
BeFS partition: exiting
2018-08-28T10:11:10.517805-03:00 polar os-prober: debug: running
/usr/lib/os-probes/mounted/90linux-distro on mounted /dev/sda1
2018-08-28T10:11:10.523382-03:00 polar os-prober: debug: running
/usr/lib/os-probes/mounted/90solaris on mounted /dev/sda1
2018-08-28T10:11:10.529317-03:00 polar os-prober: debug: /dev/sda2: is
active swap
2018-08-28T10:11:10.539818-03:00 polar os-prober: debug: running
/usr/lib/os-probes/50mounted-tests on /dev/sda4
2018-08-28T10:11:10.669852-03:00 polar systemd-udevd[456]: Network
interface NamePolicy= disabled by default.
2018-08-28T10:11:10.705602-03:00 polar systemd-udevd[456]: Specified group
'plugdev' unknown
2018-08-28T10:11:10.812270-03:00 polar 50mounted-tests: debug: mounted
using GRUB xfs filesystem driver
2018-08-28T10:11:10.817141-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/05efi
2018-08-28T10:11:10.832257-03:00 polar 05efi: debug: /dev/sda4 is xfs
partition: exiting
2018-08-28T10:11:10.837353-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/10freedos
2018-08-28T10:11:10.851042-03:00 polar 10freedos: debug: /dev/sda4 is not a
FAT partition: exiting
2018-08-28T10:11:10.854580-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/10qnx
2018-08-28T10:11:10.863539-03:00 polar 10qnx: debug: /dev/sda4 is not a
QNX4 partition: exiting
2018-08-28T10:11:10.865876-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/20macosx
2018-08-28T10:11:10.871781-03:00 polar macosx-prober: debug: /dev/sda4 is
not an HFS+ partition: exiting
2018-08-28T10:11:10.873708-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/20microsoft
2018-08-28T10:11:10.879146-03:00 polar 20microsoft: debug: Skipping legacy
bootloaders on UEFI system
2018-08-28T10:11:10.880798-03:00 polar 50mounted-tests: debug: running
subtest /usr/lib/os-probes/mounted/30utility
2018-08-28T10:11:10.885707-03:00 polar 30utility: debug: /dev/sda4 is not a
FAT partition: exiting
2018-08-28T10:11:10.887422-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/40lsb

2018-08-28T10:11:10.892547-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/70hurd

2018-08-28T10:11:10.897110-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/80minix

2018-08-28T10:11:10.901133-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/83haiku

2018-08-28T10:11:10.904998-03:00 polar 83haiku: debug: /dev/sda4 is not a
BeFS partition: exiting
2018-08-28T10:11:10.906289-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/90linux-distro

2018-08-28T10:11:10.912016-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/90solaris

2018-08-28T10:11:10.915838-03:00 polar 50mounted-tests: debug: running
subtest
/usr/lib/os-probes/mounted/efi

2018-08-28T10:11:11.757030-03:00 polar [RPM][4789]: erase
kernel-default-4.12.14-lp150.12.16.1.x86_64:
success

2018-08-28T10:11:11.757912-03:00 polar [RPM][4789]: Transaction ID 5b8549e8
finished:
0

--
2018-08-28T10:13:08.815753-03:00 polar kernel: [2.885213] ata2.00:
configured for

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block

Hi,

I'm not sure if there's a misunderstanding. You need to track the logs  
during the osd deployment step (stage.3); that is where it fails, and  
this is where /var/log/messages could be useful. Since the deployment  
failed, you have no systemd units (ceph-osd@.service) to log  
anything.


Before running stage.3 again try something like

grep -C5 ceph-disk /var/log/messages (or messages-201808*.xz)

or

grep -C5 sda4 /var/log/messages (or messages-201808*.xz)

If that doesn't reveal anything run stage.3 again and watch the logs.

Regards,
Eugen


Quoting Jones de Andrade:


Hi Eugen.

Ok, edited the file /etc/salt/minion, uncommented the "log_level_logfile"
line and set it to "debug" level.

Turned off the computer, waited a few minutes so that the time frame would
stand out in the /var/log/messages file, and restarted the computer.

Using vi I "greped out" (awful wording) the reboot section. From that, I
also removed most of what it seemed totally unrelated to ceph, salt,
minions, grafana, prometheus, whatever.

I got the lines below. It does not seem to complain about anything that I
can see. :(


2018-08-30T15:41:46.455383-03:00 torcello systemd[1]: systemd 234 running
in system mode. (+PAM -AUDIT +SELINUX -IMA +APPARMOR -SMACK +SYSVINIT +UTMP
+LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS
+KMOD -IDN2 -IDN default-hierarchy=hybrid)
2018-08-30T15:41:46.456330-03:00 torcello systemd[1]: Detected architecture
x86-64.
2018-08-30T15:41:46.456350-03:00 torcello systemd[1]: nss-lookup.target:
Dependency Before=nss-lookup.target dropped
2018-08-30T15:41:46.456357-03:00 torcello systemd[1]: Started Load Kernel
Modules.
2018-08-30T15:41:46.456369-03:00 torcello systemd[1]: Starting Apply Kernel
Variables...
2018-08-30T15:41:46.457230-03:00 torcello systemd[1]: Started Alertmanager
for prometheus.
2018-08-30T15:41:46.457237-03:00 torcello systemd[1]: Started Monitoring
system and time series database.
2018-08-30T15:41:46.457403-03:00 torcello systemd[1]: Starting NTP
client/server...






*2018-08-30T15:41:46.457425-03:00 torcello systemd[1]: Started Prometheus
exporter for machine metrics.2018-08-30T15:41:46.457706-03:00 torcello
prometheus[695]: level=info ts=2018-08-30T18:41:44.797896888Z
caller=main.go:225 msg="Starting Prometheus" version="(version=2.1.0,
branch=non-git, revision=non-git)"2018-08-30T15:41:46.457712-03:00 torcello
prometheus[695]: level=info ts=2018-08-30T18:41:44.797969232Z
caller=main.go:226 build_context="(go=go1.9.4, user=abuild@lamb69,
date=20180513-03:46:03)"2018-08-30T15:41:46.457719-03:00 torcello
prometheus[695]: level=info ts=2018-08-30T18:41:44.798008802Z
caller=main.go:227 host_details="(Linux 4.12.14-lp150.12.4-default #1 SMP
Tue May 22 05:17:22 UTC 2018 (66b2eda) x86_64 torcello
(none))"2018-08-30T15:41:46.457726-03:00 torcello prometheus[695]:
level=info ts=2018-08-30T18:41:44.798044088Z caller=main.go:228
fd_limits="(soft=1024, hard=4096)"2018-08-30T15:41:46.457738-03:00 torcello
prometheus[695]: level=info ts=2018-08-30T18:41:44.802067189Z
caller=web.go:383 component=web msg="Start listening for connections"
address=0.0.0.0:9090 2018-08-30T15:41:46.457745-03:00
torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.802037354Z
caller=main.go:499 msg="Starting TSDB ..."*
2018-08-30T15:41:46.458145-03:00 torcello smartd[809]: Monitoring 1
ATA/SATA, 0 SCSI/SAS and 0 NVMe devices
2018-08-30T15:41:46.458321-03:00 torcello systemd[1]: Started NTP
client/server.
*2018-08-30T15:41:50.387157-03:00 torcello ceph_exporter[690]: 2018/08/30
15:41:50 Starting ceph exporter on ":9128"*
2018-08-30T15:41:52.658272-03:00 torcello wicked[905]: lo  up
2018-08-30T15:41:52.658738-03:00 torcello wicked[905]: eth0up
2018-08-30T15:41:52.659989-03:00 torcello systemd[1]: Started wicked
managed network interfaces.
2018-08-30T15:41:52.660514-03:00 torcello systemd[1]: Reached target
Network.
2018-08-30T15:41:52.667938-03:00 torcello systemd[1]: Starting OpenSSH
Daemon...
2018-08-30T15:41:52.668292-03:00 torcello systemd[1]: Reached target
Network is Online.




*2018-08-30T15:41:52.669132-03:00 torcello systemd[1]: Started Ceph cluster
monitor daemon.2018-08-30T15:41:52.669328-03:00 torcello systemd[1]:
Reached target ceph target allowing to start/stop all ceph-mon@.service
instances at once.2018-08-30T15:41:52.670346-03:00 torcello systemd[1]:
Started Ceph cluster manager daemon.2018-08-30T15:41:52.670565-03:00
torcello systemd[1]: Reached target ceph target allowing to start/stop all
ceph-mgr@.service instances at once.2018-08-30T15:41:52.670839-03:00
torcello systemd[1]: Reached target ceph target allowing to start/stop all
ceph*@.service instances at once.*
2018-08-30T15:41:52.671246-03:00 torcello systemd[1]: Starting Login and
scanning of iSCSI devices...
*2018-08-30T15:41:52.672402-03:00 torcello systemd[1]: Starting Grafana
instance...*
2018-08-30T15:41:52.678922-03:00 torcello systemd[1]: Started 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Jones de Andrade
Hi Eugen.

Ok, edited the file /etc/salt/minion, uncommented the "log_level_logfile"
line and set it to "debug" level.

Turned off the computer, waited a few minutes so that the time frame would
stand out in the /var/log/messages file, and restarted the computer.

Using vi I "grepped out" (awful wording) the reboot section. From that, I
also removed most of what seemed totally unrelated to ceph, salt,
minions, grafana, prometheus, whatever.

I got the lines below. It does not seem to complain about anything that I
can see. :(


2018-08-30T15:41:46.455383-03:00 torcello systemd[1]: systemd 234 running
in system mode. (+PAM -AUDIT +SELINUX -IMA +APPARMOR -SMACK +SYSVINIT +UTMP
+LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS
+KMOD -IDN2 -IDN default-hierarchy=hybrid)
2018-08-30T15:41:46.456330-03:00 torcello systemd[1]: Detected architecture
x86-64.
2018-08-30T15:41:46.456350-03:00 torcello systemd[1]: nss-lookup.target:
Dependency Before=nss-lookup.target dropped
2018-08-30T15:41:46.456357-03:00 torcello systemd[1]: Started Load Kernel
Modules.
2018-08-30T15:41:46.456369-03:00 torcello systemd[1]: Starting Apply Kernel
Variables...
2018-08-30T15:41:46.457230-03:00 torcello systemd[1]: Started Alertmanager
for prometheus.
2018-08-30T15:41:46.457237-03:00 torcello systemd[1]: Started Monitoring
system and time series database.
2018-08-30T15:41:46.457403-03:00 torcello systemd[1]: Starting NTP
client/server...

2018-08-30T15:41:46.457425-03:00 torcello systemd[1]: Started Prometheus exporter for machine metrics.
2018-08-30T15:41:46.457706-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.797896888Z caller=main.go:225 msg="Starting Prometheus" version="(version=2.1.0, branch=non-git, revision=non-git)"
2018-08-30T15:41:46.457712-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.797969232Z caller=main.go:226 build_context="(go=go1.9.4, user=abuild@lamb69, date=20180513-03:46:03)"
2018-08-30T15:41:46.457719-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.798008802Z caller=main.go:227 host_details="(Linux 4.12.14-lp150.12.4-default #1 SMP Tue May 22 05:17:22 UTC 2018 (66b2eda) x86_64 torcello (none))"
2018-08-30T15:41:46.457726-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.798044088Z caller=main.go:228 fd_limits="(soft=1024, hard=4096)"
2018-08-30T15:41:46.457738-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.802067189Z caller=web.go:383 component=web msg="Start listening for connections" address=0.0.0.0:9090
2018-08-30T15:41:46.457745-03:00 torcello prometheus[695]: level=info ts=2018-08-30T18:41:44.802037354Z caller=main.go:499 msg="Starting TSDB ..."
2018-08-30T15:41:46.458145-03:00 torcello smartd[809]: Monitoring 1
ATA/SATA, 0 SCSI/SAS and 0 NVMe devices
2018-08-30T15:41:46.458321-03:00 torcello systemd[1]: Started NTP
client/server.
2018-08-30T15:41:50.387157-03:00 torcello ceph_exporter[690]: 2018/08/30 15:41:50 Starting ceph exporter on ":9128"
2018-08-30T15:41:52.658272-03:00 torcello wicked[905]: lo  up
2018-08-30T15:41:52.658738-03:00 torcello wicked[905]: eth0    up
2018-08-30T15:41:52.659989-03:00 torcello systemd[1]: Started wicked
managed network interfaces.
2018-08-30T15:41:52.660514-03:00 torcello systemd[1]: Reached target
Network.
2018-08-30T15:41:52.667938-03:00 torcello systemd[1]: Starting OpenSSH
Daemon...
2018-08-30T15:41:52.668292-03:00 torcello systemd[1]: Reached target
Network is Online.




2018-08-30T15:41:52.669132-03:00 torcello systemd[1]: Started Ceph cluster monitor daemon.
2018-08-30T15:41:52.669328-03:00 torcello systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.
2018-08-30T15:41:52.670346-03:00 torcello systemd[1]: Started Ceph cluster manager daemon.
2018-08-30T15:41:52.670565-03:00 torcello systemd[1]: Reached target ceph target allowing to start/stop all ceph-mgr@.service instances at once.
2018-08-30T15:41:52.670839-03:00 torcello systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
2018-08-30T15:41:52.671246-03:00 torcello systemd[1]: Starting Login and scanning of iSCSI devices...
2018-08-30T15:41:52.672402-03:00 torcello systemd[1]: Starting Grafana instance...
2018-08-30T15:41:52.678922-03:00 torcello systemd[1]: Started Backup of /etc/sysconfig.
2018-08-30T15:41:52.679109-03:00 torcello systemd[1]: Reached target Timers.
2018-08-30T15:41:52.679630-03:00 torcello systemd[1]: Started The Salt API.
2018-08-30T15:41:52.692944-03:00 torcello systemd[1]: Starting Postfix Mail Transport Agent...
2018-08-30T15:41:52.694687-03:00 torcello systemd[1]: Started The Salt Master Server.
2018-08-30T15:41:52.696821-03:00 torcello systemd[1]: Starting The Salt Minion...
2018-08-30T15:41:52.772750-03:00 torcello sshd-gen-keys-start[1408]:
Checking for missing server keys in /etc/ssh

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-30 Thread Eugen Block

Hi,


So, it only contains logs concerning the node itself (is that correct? Since
node01 is also the master, I was expecting it to have logs from the others
too) and, moreover, no ceph-osd* files. Also, I'm looking at the logs I have
available, and nothing "shines out" (sorry for my poor English) as a
possible error.


the logging is not configured to be centralised by default; you would
have to configure that yourself.


Regarding the OSDs, if there are OSD logs created, they're created on  
the OSD nodes, not on the master. But since the OSD deployment fails,  
there probably are no OSD specific logs yet. So you'll have to take a  
look into the syslog (/var/log/messages), that's where the salt-minion  
reports its attempts to create the OSDs. Chances are high that you'll  
find the root cause in here.


If the output is not enough, set the log-level to debug:

osd-1:~ # grep -E "^log_level" /etc/salt/minion
log_level: debug
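
If it is not set yet, something along these lines should do it (a rough
sketch, assuming a standard salt-minion install where the option is still
commented out; adjust the path and grep pattern to your setup):

osd-1:~ # sed -i 's/^#\?log_level:.*/log_level: debug/' /etc/salt/minion
osd-1:~ # systemctl restart salt-minion
osd-1:~ # tail -f /var/log/messages | grep -iE 'salt-minion|osd|cephdisks'

The tail/grep is just one way to follow what the minion reports while you
re-run the deploy stage.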


Regards,
Eugen


Zitat von Jones de Andrade :


Hi Eugen.

Sorry for the delay in answering.

Just looked in the /var/log/ceph/ directory. It only contains the following
files (for example on node01):

###
# ls -lart
total 3864
-rw--- 1 ceph ceph 904 ago 24 13:11 ceph.audit.log-20180829.xz
drwxr-xr-x 1 root root 898 ago 28 10:07 ..
-rw-r--r-- 1 ceph ceph  189464 ago 28 23:59 ceph-mon.node01.log-20180829.xz
-rw--- 1 ceph ceph   24360 ago 28 23:59 ceph.log-20180829.xz
-rw-r--r-- 1 ceph ceph   48584 ago 29 00:00 ceph-mgr.node01.log-20180829.xz
-rw--- 1 ceph ceph   0 ago 29 00:00 ceph.audit.log
drwxrws--T 1 ceph ceph 352 ago 29 00:00 .
-rw-r--r-- 1 ceph ceph 1908122 ago 29 12:46 ceph-mon.node01.log
-rw--- 1 ceph ceph  175229 ago 29 12:48 ceph.log
-rw-r--r-- 1 ceph ceph 1599920 ago 29 12:49 ceph-mgr.node01.log
###

So, it only contains logs concerning the node itself (is that correct? Since
node01 is also the master, I was expecting it to have logs from the others
too) and, moreover, no ceph-osd* files. Also, I'm looking at the logs I have
available, and nothing "shines out" (sorry for my poor English) as a
possible error.

Any suggestion on how to proceed?

Thanks a lot in advance,

Jones


On Mon, Aug 27, 2018 at 5:29 AM Eugen Block  wrote:


Hi Jones,

all ceph logs are in the directory /var/log/ceph/, each daemon has its
own log file, e.g. OSD logs are named ceph-osd.*.

I haven't tried it but I don't think SUSE Enterprise Storage deploys
OSDs on partitioned disks. Is there a way to attach a second disk to
the OSD nodes, maybe via USB or something?

Although this thread is ceph related it is referring to a specific
product, so I would recommend to post your question in the SUSE forum
[1].

Regards,
Eugen

[1] https://forums.suse.com/forumdisplay.php?99-SUSE-Enterprise-Storage

Zitat von Jones de Andrade :

> Hi Eugen.
>
> Thanks for the suggestion. I'll look for the logs (since it's our first
> attempt with ceph, I'll have to discover where they are, but no problem).
>
> One thing called my attention on your response however:
>
> I haven't made myself clear, but one of the failures we encountered were
> that the files now containing:
>
> node02:
>--
>storage:
>--
>osds:
>--
>/dev/sda4:
>--
>format:
>bluestore
>standalone:
>True
>
> Were originally empty, and we filled them by hand following a model found
> elsewhere on the web. It was necessary, so that we could continue, but
the
> model indicated that, for example, it should have the path for /dev/sda
> here, not /dev/sda4. We chosen to include the specific partition
> identification because we won't have dedicated disks here, rather just
the
> very same partition as all disks were partitioned exactly the same.
>
> While that was enough for the procedure to continue at that point, now I
> wonder if it was the right call and, if it indeed was, if it was done
> properly.  As such, I wonder: what you mean by "wipe" the partition here?
> /dev/sda4 is created, but is both empty and unmounted: Should a different
> operation be performed on it, should I remove it first, should I have
> written the files above with only /dev/sda as target?
>
> I know that probably I wouldn't run in this issues with dedicated discks,
> but unfortunately that is absolutely not an option.
>
> Thanks a lot in advance for any comments and/or extra suggestions.
>
> Sincerely yours,
>
> Jones
>
> On Sat, Aug 25, 2018 at 5:46 PM Eugen Block  wrote:
>
>> Hi,
>>
>> take a look into the logs, they should point you in the right direction.
>> Since the deployment stage fails at the OSD level, start with the OSD
>> logs. Something's not right with the disks/partitions, did you wipe
>> the partition from previous attempts?
>>
>> Regards,
>> Eugen
>>
>> Zitat von Jones de Andrade :
>>
>>> (Please forgive my previous email: I was using another message and
>>> completely 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-29 Thread Jones de Andrade
Hi Eugen.

Sorry for the delay in answering.

Just looked in the /var/log/ceph/ directory. It only contains the following
files (for example on node01):

###
# ls -lart
total 3864
-rw--- 1 ceph ceph 904 ago 24 13:11 ceph.audit.log-20180829.xz
drwxr-xr-x 1 root root 898 ago 28 10:07 ..
-rw-r--r-- 1 ceph ceph  189464 ago 28 23:59 ceph-mon.node01.log-20180829.xz
-rw--- 1 ceph ceph   24360 ago 28 23:59 ceph.log-20180829.xz
-rw-r--r-- 1 ceph ceph   48584 ago 29 00:00 ceph-mgr.node01.log-20180829.xz
-rw--- 1 ceph ceph   0 ago 29 00:00 ceph.audit.log
drwxrws--T 1 ceph ceph 352 ago 29 00:00 .
-rw-r--r-- 1 ceph ceph 1908122 ago 29 12:46 ceph-mon.node01.log
-rw--- 1 ceph ceph  175229 ago 29 12:48 ceph.log
-rw-r--r-- 1 ceph ceph 1599920 ago 29 12:49 ceph-mgr.node01.log
###

So, it only contains logs concerning the node itself (is that correct? Since
node01 is also the master, I was expecting it to have logs from the others
too) and, moreover, no ceph-osd* files. Also, I'm looking at the logs I have
available, and nothing "shines out" (sorry for my poor English) as a
possible error.

Any suggestion on how to proceed?

Thanks a lot in advance,

Jones


On Mon, Aug 27, 2018 at 5:29 AM Eugen Block  wrote:

> Hi Jones,
>
> all ceph logs are in the directory /var/log/ceph/, each daemon has its
> own log file, e.g. OSD logs are named ceph-osd.*.
>
> I haven't tried it but I don't think SUSE Enterprise Storage deploys
> OSDs on partitioned disks. Is there a way to attach a second disk to
> the OSD nodes, maybe via USB or something?
>
> Although this thread is ceph related it is referring to a specific
> product, so I would recommend to post your question in the SUSE forum
> [1].
>
> Regards,
> Eugen
>
> [1] https://forums.suse.com/forumdisplay.php?99-SUSE-Enterprise-Storage
>
> Zitat von Jones de Andrade :
>
> > Hi Eugen.
> >
> > Thanks for the suggestion. I'll look for the logs (since it's our first
> > attempt with ceph, I'll have to discover where they are, but no problem).
> >
> > One thing called my attention on your response however:
> >
> > I haven't made myself clear, but one of the failures we encountered were
> > that the files now containing:
> >
> > node02:
> >--
> >storage:
> >--
> >osds:
> >--
> >/dev/sda4:
> >--
> >format:
> >bluestore
> >standalone:
> >True
> >
> > Were originally empty, and we filled them by hand following a model found
> > elsewhere on the web. It was necessary, so that we could continue, but
> the
> > model indicated that, for example, it should have the path for /dev/sda
> > here, not /dev/sda4. We chosen to include the specific partition
> > identification because we won't have dedicated disks here, rather just
> the
> > very same partition as all disks were partitioned exactly the same.
> >
> > While that was enough for the procedure to continue at that point, now I
> > wonder if it was the right call and, if it indeed was, if it was done
> > properly.  As such, I wonder: what you mean by "wipe" the partition here?
> > /dev/sda4 is created, but is both empty and unmounted: Should a different
> > operation be performed on it, should I remove it first, should I have
> > written the files above with only /dev/sda as target?
> >
> > I know that probably I wouldn't run in this issues with dedicated discks,
> > but unfortunately that is absolutely not an option.
> >
> > Thanks a lot in advance for any comments and/or extra suggestions.
> >
> > Sincerely yours,
> >
> > Jones
> >
> > On Sat, Aug 25, 2018 at 5:46 PM Eugen Block  wrote:
> >
> >> Hi,
> >>
> >> take a look into the logs, they should point you in the right direction.
> >> Since the deployment stage fails at the OSD level, start with the OSD
> >> logs. Something's not right with the disks/partitions, did you wipe
> >> the partition from previous attempts?
> >>
> >> Regards,
> >> Eugen
> >>
> >> Zitat von Jones de Andrade :
> >>
> >>> (Please forgive my previous email: I was using another message and
> >>> completely forget to update the subject)
> >>>
> >>> Hi all.
> >>>
> >>> I'm new to ceph, and after having serious problems in ceph stages 0, 1
> >> and
> >>> 2 that I could solve myself, now it seems that I have hit a wall harder
> >>> than my head. :)
> >>>
> >>> When I run salt-run state.orch ceph.stage.deploy, i monitor I see it
> >> going
> >>> up to here:
> >>>
> >>> ###
> >>> [14/71]   ceph.sysctl on
> >>>   node01... ✓ (0.5s)
> >>>   node02 ✓ (0.7s)
> >>>   node03... ✓ (0.6s)
> >>>   node04. ✓ (0.5s)
> >>>   node05... ✓ (0.6s)
> >>>   node06.. ✓ 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-27 Thread Eugen Block

Hi Jones,

all ceph logs are in the directory /var/log/ceph/, each daemon has its  
own log file, e.g. OSD logs are named ceph-osd.*.


I haven't tried it but I don't think SUSE Enterprise Storage deploys  
OSDs on partitioned disks. Is there a way to attach a second disk to  
the OSD nodes, maybe via USB or something?


Although this thread is ceph related it is referring to a specific  
product, so I would recommend to post your question in the SUSE forum  
[1].


Regards,
Eugen

[1] https://forums.suse.com/forumdisplay.php?99-SUSE-Enterprise-Storage

Zitat von Jones de Andrade :


Hi Eugen.

Thanks for the suggestion. I'll look for the logs (since it's our first
attempt with ceph, I'll have to discover where they are, but no problem).

One thing called my attention on your response however:

I haven't made myself clear, but one of the failures we encountered was
that the files now containing:

node02:
   --
   storage:
   --
   osds:
   --
   /dev/sda4:
   --
   format:
   bluestore
   standalone:
   True

Were originally empty, and we filled them by hand following a model found
elsewhere on the web. It was necessary, so that we could continue, but the
model indicated that, for example, it should have the path for /dev/sda
here, not /dev/sda4. We chose to include the specific partition
identification because we won't have dedicated disks here, rather just the
very same partition as all disks were partitioned exactly the same.

While that was enough for the procedure to continue at that point, now I
wonder if it was the right call and, if it indeed was, if it was done
properly.  As such, I wonder: what you mean by "wipe" the partition here?
/dev/sda4 is created, but is both empty and unmounted: Should a different
operation be performed on it, should I remove it first, should I have
written the files above with only /dev/sda as target?

I know that I probably wouldn't run into these issues with dedicated disks,
but unfortunately that is absolutely not an option.

Thanks a lot in advance for any comments and/or extra suggestions.

Sincerely yours,

Jones

On Sat, Aug 25, 2018 at 5:46 PM Eugen Block  wrote:


Hi,

take a look into the logs, they should point you in the right direction.
Since the deployment stage fails at the OSD level, start with the OSD
logs. Something's not right with the disks/partitions, did you wipe
the partition from previous attempts?

Regards,
Eugen

Zitat von Jones de Andrade :


(Please forgive my previous email: I was using another message and
completely forgot to update the subject)

Hi all.

I'm new to ceph, and after having serious problems in ceph stages 0, 1 and
2 that I could solve myself, now it seems that I have hit a wall harder
than my head. :)

When I run salt-run state.orch ceph.stage.deploy and monitor it, I see it
going up to here:

###
[14/71]   ceph.sysctl on
  node01... ✓ (0.5s)
  node02 ✓ (0.7s)
  node03... ✓ (0.6s)
  node04. ✓ (0.5s)
  node05... ✓ (0.6s)
  node06.. ✓ (0.5s)

[15/71]   ceph.osd on
  node01.. ❌ (0.7s)
  node02 ❌ (0.7s)
  node03... ❌ (0.7s)
  node04. ❌ (0.6s)
  node05... ❌ (0.6s)
  node06.. ❌ (0.7s)

Ended stage: ceph.stage.deploy succeeded=14/71 failed=1/71 time=624.7s

Failures summary:

ceph.osd (/srv/salt/ceph/osd):
  node02:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node02 for cephdisks.list
  node03:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node03 for cephdisks.list
  node01:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node01 for cephdisks.list
  node04:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node04 for cephdisks.list
  node05:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node05 for cephdisks.list
  node06:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node06 for cephdisks.list
###

Since this is a first attempt on 6 simple test machines, we are going to
put the mon, osds, etc, on all nodes at first. Only the master is left on a
single machine (node01) for now.

As they are simple machines, they have a single hdd, which is partitioned
as follows (the sda4 partition is unmounted and left for the ceph system):



Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-26 Thread Jones de Andrade
Hi Eugen.

Thanks for the suggestion. I'll look for the logs (since it's our first
attempt with ceph, I'll have to discover where they are, but no problem).

One thing called my attention on your response however:

I haven't made myself clear, but one of the failures we encountered was
that the files now containing:

node02:
--
storage:
--
osds:
--
/dev/sda4:
--
format:
bluestore
standalone:
True

Were originally empty, and we filled them by hand following a model found
elsewhere on the web. It was necessary, so that we could continue, but the
model indicated that, for example, it should have the path for /dev/sda
here, not /dev/sda4. We chose to include the specific partition
identification because we won't have dedicated disks here, rather just the
very same partition as all disks were partitioned exactly the same.

While that was enough for the procedure to continue at that point, now I
wonder if it was the right call and, if it indeed was, if it was done
properly.  As such, I wonder: what you mean by "wipe" the partition here?
/dev/sda4 is created, but is both empty and unmounted: Should a different
operation be performed on it, should I remove it first, should I have
written the files above with only /dev/sda as target?

I know that I probably wouldn't run into these issues with dedicated disks,
but unfortunately that is absolutely not an option.

Thanks a lot in advance for any comments and/or extra suggestions.

Sincerely yours,

Jones

On Sat, Aug 25, 2018 at 5:46 PM Eugen Block  wrote:

> Hi,
>
> take a look into the logs, they should point you in the right direction.
> Since the deployment stage fails at the OSD level, start with the OSD
> logs. Something's not right with the disks/partitions, did you wipe
> the partition from previous attempts?
>
> Regards,
> Eugen
>
> Zitat von Jones de Andrade :
>
> > (Please forgive my previous email: I was using another message and
> > completely forget to update the subject)
> >
> > Hi all.
> >
> > I'm new to ceph, and after having serious problems in ceph stages 0, 1
> and
> > 2 that I could solve myself, now it seems that I have hit a wall harder
> > than my head. :)
> >
> > When I run salt-run state.orch ceph.stage.deploy, i monitor I see it
> going
> > up to here:
> >
> > ###
> > [14/71]   ceph.sysctl on
> >   node01... ✓ (0.5s)
> >   node02 ✓ (0.7s)
> >   node03... ✓ (0.6s)
> >   node04. ✓ (0.5s)
> >   node05... ✓ (0.6s)
> >   node06.. ✓ (0.5s)
> >
> > [15/71]   ceph.osd on
> >   node01.. ❌ (0.7s)
> >   node02 ❌ (0.7s)
> >   node03... ❌ (0.7s)
> >   node04. ❌ (0.6s)
> >   node05... ❌ (0.6s)
> >   node06.. ❌ (0.7s)
> >
> > Ended stage: ceph.stage.deploy succeeded=14/71 failed=1/71 time=624.7s
> >
> > Failures summary:
> >
> > ceph.osd (/srv/salt/ceph/osd):
> >   node02:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node02 for cephdisks.list
> >   node03:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node03 for cephdisks.list
> >   node01:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node01 for cephdisks.list
> >   node04:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node04 for cephdisks.list
> >   node05:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node05 for cephdisks.list
> >   node06:
> > deploy OSDs: Module function osd.deploy threw an exception.
> Exception:
> > Mine on node06 for cephdisks.list
> > ###
> >
> > Since this is a first attempt in 6 simple test machines, we are going to
> > put the mon, osds, etc, in all nodes at first. Only the master is left
> in a
> > single machine (node01) by now.
> >
> > As they are simple machines, they have a single hdd, which is partitioned
> > as follows (the hda4 partition is unmounted and left for the ceph
> system):
> >
> > ###
> > # lsblk
> > NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> > sda  8:00 465,8G  0 disk
> > ├─sda1   8:10   500M  0 part /boot/efi
> > ├─sda2   8:2016G  0 part [SWAP]
> > ├─sda3   8:30  49,3G  0 part /
> > └─sda4   8:40   400G  0 part
> > sr0 11:01   3,7G  0 rom
> >
> > # salt -I 

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-25 Thread Eugen Block

Hi,

take a look into the logs, they should point you in the right direction.
Since the deployment stage fails at the OSD level, start with the OSD  
logs. Something's not right with the disks/partitions, did you wipe  
the partition from previous attempts?
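
If not, a quick way to clear leftover signatures (destructive, so only if
nothing on /dev/sda4 needs to be kept; the device name is just taken from
your pillar below) would be something like:

osd-1:~ # wipefs --all /dev/sda4
osd-1:~ # dd if=/dev/zero of=/dev/sda4 bs=1M count=100 oflag=direct

or, with a recent ceph installed, ceph-volume lvm zap /dev/sda4.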


Regards,
Eugen

Zitat von Jones de Andrade :


(Please forgive my previous email: I was using another message and
completely forgot to update the subject)

Hi all.

I'm new to ceph, and after having serious problems in ceph stages 0, 1 and
2 that I could solve myself, now it seems that I have hit a wall harder
than my head. :)

When I run salt-run state.orch ceph.stage.deploy and monitor it, I see it
going up to here:

###
[14/71]   ceph.sysctl on
  node01... ✓ (0.5s)
  node02 ✓ (0.7s)
  node03... ✓ (0.6s)
  node04. ✓ (0.5s)
  node05... ✓ (0.6s)
  node06.. ✓ (0.5s)

[15/71]   ceph.osd on
  node01.. ❌ (0.7s)
  node02 ❌ (0.7s)
  node03... ❌ (0.7s)
  node04. ❌ (0.6s)
  node05... ❌ (0.6s)
  node06.. ❌ (0.7s)

Ended stage: ceph.stage.deploy succeeded=14/71 failed=1/71 time=624.7s

Failures summary:

ceph.osd (/srv/salt/ceph/osd):
  node02:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node02 for cephdisks.list
  node03:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node03 for cephdisks.list
  node01:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node01 for cephdisks.list
  node04:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node04 for cephdisks.list
  node05:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node05 for cephdisks.list
  node06:
deploy OSDs: Module function osd.deploy threw an exception. Exception:
Mine on node06 for cephdisks.list
###

Since this is a first attempt on 6 simple test machines, we are going to
put the mon, osds, etc, on all nodes at first. Only the master is left on a
single machine (node01) for now.

As they are simple machines, they have a single hdd, which is partitioned
as follows (the sda4 partition is unmounted and left for the ceph system):

###
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda  8:00 465,8G  0 disk
├─sda1   8:10   500M  0 part /boot/efi
├─sda2   8:2016G  0 part [SWAP]
├─sda3   8:30  49,3G  0 part /
└─sda4   8:40   400G  0 part
sr0 11:01   3,7G  0 rom

# salt -I 'roles:storage' cephdisks.list
node01:
node02:
node03:
node04:
node05:
node06:

# salt -I 'roles:storage' pillar.get ceph
node02:
--
storage:
--
osds:
--
/dev/sda4:
--
format:
bluestore
standalone:
True
(and so on for all 6 machines)
##

Finally and just in case, my policy.cfg file reads:

#
#cluster-unassigned/cluster/*.sls
cluster-ceph/cluster/*.sls
profile-default/cluster/*.sls
profile-default/stack/default/ceph/minions/*yml
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
role-master/cluster/node01.sls
role-admin/cluster/*.sls
role-mon/cluster/*.sls
role-mgr/cluster/*.sls
role-mds/cluster/*.sls
role-ganesha/cluster/*.sls
role-client-nfs/cluster/*.sls
role-client-cephfs/cluster/*.sls
##

Please, could someone help me and shed some light on this issue?

Thanks a lot in advance,

Regards,

Jones
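
(Side note on the failure itself: when the deploy stage stops with "Mine on
nodeXX for cephdisks.list" and "salt -I 'roles:storage' cephdisks.list" comes
back empty, it can be worth refreshing the Salt mine and checking whether the
minions publish any disk data at all. Roughly, and assuming the module and
targeting used above:

admin:~ # salt -I 'roles:storage' mine.update
admin:~ # salt -I 'roles:storage' mine.get '*' cephdisks.list

If the mine.get output is still empty, the minions simply don't report any
usable disks, which would match the empty cephdisks.list output above.)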




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Robert Stanford
 Just FYI.  I asked about cluster names a month or two back and was told
that support for them is being phased out.  I've had all sorts of problems
using clusters with custom cluster names, and stopped using them myself.

On Fri, Aug 10, 2018 at 2:06 AM, Glen Baars 
wrote:

> I have now gotten this working. Thanks everyone for the help. The
> RBD-Mirror service is co-located on a MON server.
>
> Key points are:
>
> Start the services on the boxes with the following syntax ( depending on
> your config file names )
>
> On primary
> systemctl start ceph-rbd-mirror@primary
>
> On secondary
> systemctl start ceph-rbd-mirror@secondary
>
> Ensure this works on both boxes
> ceph --cluster secondary -n client.secondary -s
> ceph --cluster primary -n client.primary -s
>
> check the log files under - /var/log/ceph/ceph-client.primary.log and
> /var/log/ceph/ceph-client.secondary.log
>
> My primary server had these files in it.
>
> ceph.client.admin.keyring
> ceph.client.primary.keyring
> ceph.conf
> primary.client.primary.keyring
> primary.conf
> secondary.client.secondary.keyring
> secondary.conf
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: Thode Jocelyn 
> Sent: Thursday, 9 August 2018 1:41 PM
> To: Erik McCormick 
> Cc: Glen Baars ; Vasu Kulkarni <
> vakul...@redhat.com>; ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi Erik,
>
> The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file
> to determine which configuration file to use (from CLUSTER_NAME). So you
> need to set this to the name you chose for rbd-mirror to work. However
> setting this CLUSTER_NAME variable in /etc/sysconfig/ceph makes it so that
> the mon, osd etc services will also use this variable. Because of this they
> cannot start anymore as all their path are set with "ceph" as cluster name.
>
> However there might be something that I missed which would make this point
> moot
>
> Best Regards
> Jocelyn Thode
>
> -Original Message-
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: mercredi, 8 août 2018 16:39
> To: Thode Jocelyn 
> Cc: Glen Baars ; Vasu Kulkarni <
> vakul...@redhat.com>; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> I'm not using this feature, so maybe I'm missing something, but from the
> way I understand cluster naming to work...
>
> I still don't understand why this is blocking for you. Unless you are
> attempting to mirror between two clusters running on the same hosts (why
> would you do this?) then systemd doesn't come into play. The --cluster flag
> on the rbd command will simply set the name of a configuration file with
> the FSID and settings of the appropriate cluster. Cluster name is just a
> way of telling ceph commands and systemd units where to find the configs.
>
> So, what you end up with is something like:
>
> /etc/ceph/ceph.conf (your local cluster configuration) on both clusters
> /etc/ceph/local.conf (config of the source cluster. Just a copy of
> ceph.conf of the source clsuter) /etc/ceph/remote.conf (config of
> destination peer cluster. Just a copy of ceph.conf of the remote cluster).
>
> Run all your rbd mirror commands against local and remote names.
> However when starting things like mons, osds, mds, etc. you need no
> cluster name as it can use ceph.conf (cluster name of ceph).
>
> Am I making sense, or have I completely missed something?
>
> -Erik
>
> On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn 
> wrote:
> > Hi,
> >
> >
> >
> > We are still blocked by this problem on our end. Glen did you  or
> > someone else figure out something for this ?
> >
> >
> >
> > Regards
> >
> > Jocelyn Thode
> >
> >
> >
> > From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> > Sent: jeudi, 2 août 2018 05:43
> > To: Erik McCormick 
> > Cc: Thode Jocelyn ; Vasu Kulkarni
> > ; ceph-users@lists.ceph.com
> > Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
> >
> >
> >
> > Hello Erik,
> >
> >
> >
> > We are going to use RBD-mirror to replicate the clusters. This seems
> > to need separate cluster names.
> >
> > Kind regards,
> >
> > Glen Baars
> >
> >
> >
> > From: Erik McCormick 
> > Sent: Thursday, 2 August 2018 9:39 AM
> > To: Glen Baars 
> > Cc: Thode Jocelyn ; Vasu Kulkarni
> > ; ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
> >
> >
> >
> > Don't set a cluster name. It's

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-10 Thread Glen Baars
I have now gotten this working. Thanks everyone for the help. The RBD-Mirror 
service is co-located on a MON server.

Key points are:

Start the services on the boxes with the following syntax ( depending on your 
config file names )

On primary
systemctl start ceph-rbd-mirror@primary

On secondary
systemctl start ceph-rbd-mirror@secondary

Ensure this works on both boxes
ceph --cluster secondary -n client.secondary -s
ceph --cluster primary -n client.primary -s

check the log files under - /var/log/ceph/ceph-client.primary.log and 
/var/log/ceph/ceph-client.secondary.log

My primary server had these files in it.

ceph.client.admin.keyring
ceph.client.primary.keyring
ceph.conf
primary.client.primary.keyring
primary.conf
secondary.client.secondary.keyring
secondary.conf
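
For reference, the pool-level mirroring commands would look roughly like this
(a sketch rather than my exact shell history; it assumes a pool named rbd,
pool mode rather than per-image mode, and the client/cluster names from the
files above):

rbd --cluster primary mirror pool enable rbd pool
rbd --cluster secondary mirror pool enable rbd pool
rbd --cluster primary mirror pool peer add rbd client.secondary@secondary
rbd --cluster secondary mirror pool peer add rbd client.primary@primary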

Kind regards,
Glen Baars

-Original Message-
From: Thode Jocelyn 
Sent: Thursday, 9 August 2018 1:41 PM
To: Erik McCormick 
Cc: Glen Baars ; Vasu Kulkarni 
; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name

Hi Erik,

The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file to 
determine which configuration file to use (from CLUSTER_NAME). So you need to 
set this to the name you chose for rbd-mirror to work. However setting this 
CLUSTER_NAME variable in /etc/sysconfig/ceph makes it so that the mon, osd etc 
services will also use this variable. Because of this they cannot start anymore 
as all their path are set with "ceph" as cluster name.

However there might be something that I missed which would make this point moot

Best Regards
Jocelyn Thode

-Original Message-
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: mercredi, 8 août 2018 16:39
To: Thode Jocelyn 
Cc: Glen Baars ; Vasu Kulkarni 
; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

I'm not using this feature, so maybe I'm missing something, but from the way I 
understand cluster naming to work...

I still don't understand why this is blocking for you. Unless you are 
attempting to mirror between two clusters running on the same hosts (why would 
you do this?) then systemd doesn't come into play. The --cluster flag on the 
rbd command will simply set the name of a configuration file with the FSID and 
settings of the appropriate cluster. Cluster name is just a way of telling ceph 
commands and systemd units where to find the configs.

So, what you end up with is something like:

/etc/ceph/ceph.conf (your local cluster configuration) on both clusters 
/etc/ceph/local.conf (config of the source cluster. Just a copy of ceph.conf of 
the source clsuter) /etc/ceph/remote.conf (config of destination peer cluster. 
Just a copy of ceph.conf of the remote cluster).

Run all your rbd mirror commands against local and remote names.
However when starting things like mons, osds, mds, etc. you need no cluster 
name as it can use ceph.conf (cluster name of ceph).

Am I making sense, or have I completely missed something?

-Erik

On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you  or
> someone else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> Sent: jeudi, 2 août 2018 05:43
> To: Erik McCormick 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Hello Erik,
>
>
>
> We are going to use RBD-mirror to replicate the clusters. This seems
> to need separate cluster names.
>
> Kind regards,
>
> Glen Baars
>
>
>
> From: Erik McCormick 
> Sent: Thursday, 2 August 2018 9:39 AM
> To: Glen Baars 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Don't set a cluster name. It's no longer supported. It really only
> matters if you're running two or more independent clusters on the same
> boxes. That's generally inadvisable anyway.
>
>
>
> Cheers,
>
> Erik
>
>
>
> On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:
>
> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with
> Ceph-deploy? I have 3 clusters to configure and need to correctly set
> the name.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Glen
> Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni
> 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of
> Thode Jocelyn
> Sent: M

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Thode Jocelyn
Hi Magnus,

Yes, this is a workaround for the problem. However, this means that if you want
your rbd-mirror daemon to be HA, you will need to create 2+ more machines in
your infrastructure instead of being able to collocate it on the same machines
as your MDS, MGR and MON.

Best Regards
Jocelyn Thode

From: Magnus Grönlund [mailto:mag...@gronlund.se]
Sent: jeudi, 9 août 2018 14:33
To: Thode Jocelyn 
Cc: Erik McCormick ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Hi Jocelyn,

I'm in the process of setting up rdb-mirroring myself and stumbled on the same 
problem. But I think that the "trick" here is to _not_ colocate the RDB-mirror 
daemon with any other part of the cluster(s), it should be run on a separate 
host. That way you can change the CLUSTER_NAME variable in /etc/sysconfig/ceph 
without affecting any of the mons, osd etc.

Best regards
/Magnus

2018-08-09 7:41 GMT+02:00 Thode Jocelyn 
mailto:jocelyn.th...@elca.ch>>:
Hi Erik,

The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file to 
determine which configuration file to use (from CLUSTER_NAME). So you need to 
set this to the name you chose for rbd-mirror to work. However setting this 
CLUSTER_NAME variable in /etc/sysconfig/ceph makes it so that the mon, osd etc 
services will also use this variable. Because of this they cannot start anymore 
as all their path are set with "ceph" as cluster name.

However there might be something that I missed which would make this point moot

Best Regards
Jocelyn Thode

-Original Message-
From: Erik McCormick 
[mailto:emccorm...@cirrusseven.com<mailto:emccorm...@cirrusseven.com>]
Sent: mercredi, 8 août 2018 16:39
To: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>
Cc: Glen Baars 
mailto:g...@onsitecomputers.com.au>>; Vasu 
Kulkarni mailto:vakul...@redhat.com>>; 
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

I'm not using this feature, so maybe I'm missing something, but from the way I 
understand cluster naming to work...

I still don't understand why this is blocking for you. Unless you are 
attempting to mirror between two clusters running on the same hosts (why would 
you do this?) then systemd doesn't come into play. The --cluster flag on the 
rbd command will simply set the name of a configuration file with the FSID and 
settings of the appropriate cluster. Cluster name is just a way of telling ceph 
commands and systemd units where to find the configs.

So, what you end up with is something like:

/etc/ceph/ceph.conf (your local cluster configuration) on both clusters 
/etc/ceph/local.conf (config of the source cluster. Just a copy of ceph.conf of 
the source clsuter) /etc/ceph/remote.conf (config of destination peer cluster. 
Just a copy of ceph.conf of the remote cluster).

Run all your rbd mirror commands against local and remote names.
However when starting things like mons, osds, mds, etc. you need no cluster 
name as it can use ceph.conf (cluster name of ceph).

Am I making sense, or have I completely missed something?

-Erik

On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn 
mailto:jocelyn.th...@elca.ch>> wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you  or
> someone else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars 
> [mailto:g...@onsitecomputers.com.au<mailto:g...@onsitecomputers.com.au>]
> Sent: jeudi, 2 août 2018 05:43
> To: Erik McCormick 
> mailto:emccorm...@cirrusseven.com>>
> Cc: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>; Vasu 
> Kulkarni
> mailto:vakul...@redhat.com>>; 
> ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Hello Erik,
>
>
>
> We are going to use RBD-mirror to replicate the clusters. This seems
> to need separate cluster names.
>
> Kind regards,
>
> Glen Baars
>
>
>
> From: Erik McCormick 
> mailto:emccorm...@cirrusseven.com>>
> Sent: Thursday, 2 August 2018 9:39 AM
> To: Glen Baars 
> mailto:g...@onsitecomputers.com.au>>
> Cc: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>; Vasu 
> Kulkarni
> mailto:vakul...@redhat.com>>; 
> ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Don't set a cluster name. It's no longer supported. It really only
> matters if you're running two or more independent clusters on the same
> boxes. That's generally inadvisable anyway.
>
>
>
> Cheers,
>
> Erik
>
>
>
> On Wed, Aug 1, 2018, 9:17 PM Glen Baars 
> mailto:g...@onsitecomputers.com.au>> wrote:
>
> Hello Ceph U

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-09 Thread Magnus Grönlund
Hi Jocelyn,

I'm in the process of setting up rbd-mirroring myself and stumbled on the
same problem. But I think that the "trick" here is to _not_ colocate the
rbd-mirror daemon with any other part of the cluster(s); it should be run
on a separate host. That way you can change the CLUSTER_NAME variable
in /etc/sysconfig/ceph without affecting any of the mons, osds etc.
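
For illustration, on the dedicated rbd-mirror host that file would then just
contain something like (variable name from memory, please check it against
the EnvironmentFile referenced by your ceph-rbd-mirror@.service unit):

CLUSTER=secondary

while the mon/osd hosts keep the default, so their units still expand their
paths and config as cluster "ceph".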

Best regards
/Magnus

2018-08-09 7:41 GMT+02:00 Thode Jocelyn :

> Hi Erik,
>
> The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file
> to determine which configuration file to use (from CLUSTER_NAME). So you
> need to set this to the name you chose for rbd-mirror to work. However
> setting this CLUSTER_NAME variable in /etc/sysconfig/ceph makes it so that
> the mon, osd etc services will also use this variable. Because of this they
> cannot start anymore as all their path are set with "ceph" as cluster name.
>
> However there might be something that I missed which would make this point
> moot
>
> Best Regards
> Jocelyn Thode
>
> -Original Message-
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: mercredi, 8 août 2018 16:39
> To: Thode Jocelyn 
> Cc: Glen Baars ; Vasu Kulkarni <
> vakul...@redhat.com>; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> I'm not using this feature, so maybe I'm missing something, but from the
> way I understand cluster naming to work...
>
> I still don't understand why this is blocking for you. Unless you are
> attempting to mirror between two clusters running on the same hosts (why
> would you do this?) then systemd doesn't come into play. The --cluster flag
> on the rbd command will simply set the name of a configuration file with
> the FSID and settings of the appropriate cluster. Cluster name is just a
> way of telling ceph commands and systemd units where to find the configs.
>
> So, what you end up with is something like:
>
> /etc/ceph/ceph.conf (your local cluster configuration) on both clusters
> /etc/ceph/local.conf (config of the source cluster. Just a copy of
> ceph.conf of the source clsuter) /etc/ceph/remote.conf (config of
> destination peer cluster. Just a copy of ceph.conf of the remote cluster).
>
> Run all your rbd mirror commands against local and remote names.
> However when starting things like mons, osds, mds, etc. you need no
> cluster name as it can use ceph.conf (cluster name of ceph).
>
> Am I making sense, or have I completely missed something?
>
> -Erik
>
> On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn 
> wrote:
> > Hi,
> >
> >
> >
> > We are still blocked by this problem on our end. Glen did you  or
> > someone else figure out something for this ?
> >
> >
> >
> > Regards
> >
> > Jocelyn Thode
> >
> >
> >
> > From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> > Sent: jeudi, 2 août 2018 05:43
> > To: Erik McCormick 
> > Cc: Thode Jocelyn ; Vasu Kulkarni
> > ; ceph-users@lists.ceph.com
> > Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
> >
> >
> >
> > Hello Erik,
> >
> >
> >
> > We are going to use RBD-mirror to replicate the clusters. This seems
> > to need separate cluster names.
> >
> > Kind regards,
> >
> > Glen Baars
> >
> >
> >
> > From: Erik McCormick 
> > Sent: Thursday, 2 August 2018 9:39 AM
> > To: Glen Baars 
> > Cc: Thode Jocelyn ; Vasu Kulkarni
> > ; ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
> >
> >
> >
> > Don't set a cluster name. It's no longer supported. It really only
> > matters if you're running two or more independent clusters on the same
> > boxes. That's generally inadvisable anyway.
> >
> >
> >
> > Cheers,
> >
> > Erik
> >
> >
> >
> > On Wed, Aug 1, 2018, 9:17 PM Glen Baars 
> wrote:
> >
> > Hello Ceph Users,
> >
> > Does anyone know how to set the Cluster Name when deploying with
> > Ceph-deploy? I have 3 clusters to configure and need to correctly set
> > the name.
> >
> > Kind regards,
> > Glen Baars
> >
> > -Original Message-
> > From: ceph-users  On Behalf Of Glen
> > Baars
> > Sent: Monday, 23 July 2018 5:59 PM
> > To: Thode Jocelyn ; Vasu Kulkarni
> > 
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
> >
> > How very timely, I am facing the exact same issue.
> >
> > Kind regards,
> > Glen Baars
> >
> &

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
Hi Erik,

The thing is that the rbd-mirror service uses the /etc/sysconfig/ceph file to 
determine which configuration file to use (from CLUSTER_NAME). So you need to 
set this to the name you chose for rbd-mirror to work. However setting this 
CLUSTER_NAME variable in /etc/sysconfig/ceph makes it so that the mon, osd etc 
services will also use this variable. Because of this they cannot start anymore 
as all their paths are set with "ceph" as the cluster name.

However there might be something that I missed which would make this point moot

Best Regards
Jocelyn Thode 

-Original Message-
From: Erik McCormick [mailto:emccorm...@cirrusseven.com] 
Sent: mercredi, 8 août 2018 16:39
To: Thode Jocelyn 
Cc: Glen Baars ; Vasu Kulkarni 
; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

I'm not using this feature, so maybe I'm missing something, but from the way I 
understand cluster naming to work...

I still don't understand why this is blocking for you. Unless you are 
attempting to mirror between two clusters running on the same hosts (why would 
you do this?) then systemd doesn't come into play. The --cluster flag on the 
rbd command will simply set the name of a configuration file with the FSID and 
settings of the appropriate cluster. Cluster name is just a way of telling ceph 
commands and systemd units where to find the configs.

So, what you end up with is something like:

/etc/ceph/ceph.conf (your local cluster configuration) on both clusters 
/etc/ceph/local.conf (config of the source cluster. Just a copy of ceph.conf of 
the source clsuter) /etc/ceph/remote.conf (config of destination peer cluster. 
Just a copy of ceph.conf of the remote cluster).

Run all your rbd mirror commands against local and remote names.
However when starting things like mons, osds, mds, etc. you need no cluster 
name as it can use ceph.conf (cluster name of ceph).

Am I making sense, or have I completely missed something?

-Erik

On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you  or 
> someone else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> Sent: jeudi, 2 août 2018 05:43
> To: Erik McCormick 
> Cc: Thode Jocelyn ; Vasu Kulkarni 
> ; ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Hello Erik,
>
>
>
> We are going to use RBD-mirror to replicate the clusters. This seems 
> to need separate cluster names.
>
> Kind regards,
>
> Glen Baars
>
>
>
> From: Erik McCormick 
> Sent: Thursday, 2 August 2018 9:39 AM
> To: Glen Baars 
> Cc: Thode Jocelyn ; Vasu Kulkarni 
> ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Don't set a cluster name. It's no longer supported. It really only 
> matters if you're running two or more independent clusters on the same 
> boxes. That's generally inadvisable anyway.
>
>
>
> Cheers,
>
> Erik
>
>
>
> On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:
>
> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with 
> Ceph-deploy? I have 3 clusters to configure and need to correctly set 
> the name.
>
> Kind regards,
> Glen Baars
>
> -----Original Message-
> From: ceph-users  On Behalf Of Glen 
> Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni 
> 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of 
> Thode Jocelyn
> Sent: Monday, 23 July 2018 1:42 PM
> To: Vasu Kulkarni 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi,
>
> Yes my rbd-mirror is coloctaed with my mon/osd. It only affects nodes 
> where they are collocated as they all use the "/etc/sysconfig/ceph" 
> configuration file.
>
> Best
> Jocelyn Thode
>
> -Original Message-
> From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> Sent: vendredi, 20 juillet 2018 17:25
> To: Thode Jocelyn 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
> wrote:
>> Hi,
>>
>>
>>
>> I noticed that in commit
>> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98
>> 0 23b60efe421f3, the ability to specify a cluster name was removed. 
>> Is there a reason for this removal ?
>>
>>
>>
>&

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
I'm not using this feature, so maybe I'm missing something, but from
the way I understand cluster naming to work...

I still don't understand why this is blocking for you. Unless you are
attempting to mirror between two clusters running on the same hosts
(why would you do this?) then systemd doesn't come into play. The
--cluster flag on the rbd command will simply set the name of a
configuration file with the FSID and settings of the appropriate
cluster. Cluster name is just a way of telling ceph commands and
systemd units where to find the configs.

So, what you end up with is something like:

/etc/ceph/ceph.conf (your local cluster configuration) on both clusters
/etc/ceph/local.conf (config of the source cluster. Just a copy of
ceph.conf of the source cluster)
/etc/ceph/remote.conf (config of destination peer cluster. Just a copy
of ceph.conf of the remote cluster).

Run all your rbd mirror commands against local and remote names.
However when starting things like mons, osds, mds, etc. you need no
cluster name as it can use ceph.conf (cluster name of ceph).
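
As a concrete illustration of that mapping (commands untested here, pool name
"rbd" assumed):

rbd --cluster local mirror pool status rbd    # reads /etc/ceph/local.conf
rbd --cluster remote mirror pool status rbd   # reads /etc/ceph/remote.conf
ceph -s                                       # daemons and plain commands keep using /etc/ceph/ceph.conf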

Am I making sense, or have I completely missed something?

-Erik

On Wed, Aug 8, 2018 at 8:34 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> We are still blocked by this problem on our end. Glen did you  or someone
> else figure out something for this ?
>
>
>
> Regards
>
> Jocelyn Thode
>
>
>
> From: Glen Baars [mailto:g...@onsitecomputers.com.au]
> Sent: jeudi, 2 août 2018 05:43
> To: Erik McCormick 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Hello Erik,
>
>
>
> We are going to use RBD-mirror to replicate the clusters. This seems to need
> separate cluster names.
>
> Kind regards,
>
> Glen Baars
>
>
>
> From: Erik McCormick 
> Sent: Thursday, 2 August 2018 9:39 AM
> To: Glen Baars 
> Cc: Thode Jocelyn ; Vasu Kulkarni
> ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
>
>
> Don't set a cluster name. It's no longer supported. It really only matters
> if you're running two or more independent clusters on the same boxes. That's
> generally inadvisable anyway.
>
>
>
> Cheers,
>
> Erik
>
>
>
> On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:
>
> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with
> Ceph-deploy? I have 3 clusters to configure and need to correctly set the
> name.
>
> Kind regards,
> Glen Baars
>
> -----Original Message-
> From: ceph-users  On Behalf Of Glen Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni
> 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Thode
> Jocelyn
> Sent: Monday, 23 July 2018 1:42 PM
> To: Vasu Kulkarni 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi,
>
> Yes my rbd-mirror is coloctaed with my mon/osd. It only affects nodes where
> they are collocated as they all use the "/etc/sysconfig/ceph" configuration
> file.
>
> Best
> Jocelyn Thode
>
> -Original Message-
> From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> Sent: vendredi, 20 juillet 2018 17:25
> To: Thode Jocelyn 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
> wrote:
>> Hi,
>>
>>
>>
>> I noticed that in commit
>> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
>> 23b60efe421f3, the ability to specify a cluster name was removed. Is
>> there a reason for this removal ?
>>
>>
>>
>> Because right now, there are no possibility to create a ceph cluster
>> with a different name with ceph-deploy which is a big problem when
>> having two clusters replicating with rbd-mirror as we need different
>> names.
>>
>>
>>
>> And even when following the doc here:
>> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
>> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
>> ith-the-same-name
>>
>>
>>
>> This is not sufficient as once we change the CLUSTER variable in the
>> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
>> reboot as they then try to load data from a path in /var/lib/ceph
>> containing the cluster name.
>
> Is you rbd-mirror client also colocated wit

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Thode Jocelyn
Hi,

We are still blocked by this problem on our end. Glen, did you or someone else
figure out something for this?

Regards
Jocelyn Thode

From: Glen Baars [mailto:g...@onsitecomputers.com.au]
Sent: jeudi, 2 août 2018 05:43
To: Erik McCormick 
Cc: Thode Jocelyn ; Vasu Kulkarni ; 
ceph-users@lists.ceph.com
Subject: RE: [ceph-users] [Ceph-deploy] Cluster Name

Hello Erik,

We are going to use RBD-mirror to replicate the clusters. This seems to need 
separate cluster names.
Kind regards,
Glen Baars

From: Erik McCormick 
mailto:emccorm...@cirrusseven.com>>
Sent: Thursday, 2 August 2018 9:39 AM
To: Glen Baars mailto:g...@onsitecomputers.com.au>>
Cc: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>; Vasu 
Kulkarni mailto:vakul...@redhat.com>>; 
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Don't set a cluster name. It's no longer supported. It really only matters if 
you're running two or more independent clusters on the same boxes. That's 
generally inadvisable anyway.

Cheers,
Erik

On Wed, Aug 1, 2018, 9:17 PM Glen Baars 
mailto:g...@onsitecomputers.com.au>> wrote:
Hello Ceph Users,

Does anyone know how to set the Cluster Name when deploying with Ceph-deploy? I 
have 3 clusters to configure and need to correctly set the name.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
On Behalf Of Glen Baars
Sent: Monday, 23 July 2018 5:59 PM
To: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>; Vasu 
Kulkarni mailto:vakul...@redhat.com>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

How very timely, I am facing the exact same issue.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
On Behalf Of Thode Jocelyn
Sent: Monday, 23 July 2018 1:42 PM
To: Vasu Kulkarni mailto:vakul...@redhat.com>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Hi,

Yes my rbd-mirror is coloctaed with my mon/osd. It only affects nodes where 
they are collocated as they all use the "/etc/sysconfig/ceph" configuration 
file.

Best
Jocelyn Thode

-Original Message-
From: Vasu Kulkarni [mailto:vakul...@redhat.com<mailto:vakul...@redhat.com>]
Sent: vendredi, 20 juillet 2018 17:25
To: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
mailto:jocelyn.th...@elca.ch>> wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> 23b60efe421f3, the ability to specify a cluster name was removed. Is
> there a reason for this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster
> with a different name with ceph-deploy which is a big problem when
> having two clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> ith-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> reboot as they then try to load data from a path in /var/lib/ceph
> containing the cluster name.

Is you rbd-mirror client also colocated with mon/osd? This needs to be changed 
only on the client side where you are doing mirroring, rest of the nodes are 
not affected?


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immedi

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
Hello Erik,

We are going to use RBD-mirror to replicate the clusters. This seems to need 
separate cluster names.
Kind regards,
Glen Baars

From: Erik McCormick 
Sent: Thursday, 2 August 2018 9:39 AM
To: Glen Baars 
Cc: Thode Jocelyn ; Vasu Kulkarni ; 
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Don't set a cluster name. It's no longer supported. It really only matters if 
you're running two or more independent clusters on the same boxes. That's 
generally inadvisable anyway.

Cheers,
Erik

On Wed, Aug 1, 2018, 9:17 PM Glen Baars 
mailto:g...@onsitecomputers.com.au>> wrote:
Hello Ceph Users,

Does anyone know how to set the Cluster Name when deploying with Ceph-deploy? I 
have 3 clusters to configure and need to correctly set the name.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
On Behalf Of Glen Baars
Sent: Monday, 23 July 2018 5:59 PM
To: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>; Vasu 
Kulkarni mailto:vakul...@redhat.com>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

How very timely, I am facing the exact same issue.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users 
mailto:ceph-users-boun...@lists.ceph.com>> 
On Behalf Of Thode Jocelyn
Sent: Monday, 23 July 2018 1:42 PM
To: Vasu Kulkarni mailto:vakul...@redhat.com>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Hi,

Yes, my rbd-mirror is colocated with my mon/osd. It only affects nodes where
they are colocated, as they all use the "/etc/sysconfig/ceph" configuration
file.

Best
Jocelyn Thode

-Original Message-
From: Vasu Kulkarni [mailto:vakul...@redhat.com<mailto:vakul...@redhat.com>]
Sent: vendredi, 20 juillet 2018 17:25
To: Thode Jocelyn mailto:jocelyn.th...@elca.ch>>
Cc: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
mailto:jocelyn.th...@elca.ch>> wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> 23b60efe421f3, the ability to specify a cluster name was removed. Is
> there a reason for this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster
> with a different name with ceph-deploy which is a big problem when
> having two clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> ith-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> reboot as they then try to load data from a path in /var/lib/ceph
> containing the cluster name.

Is your rbd-mirror client also colocated with a mon/osd? This only needs to be changed
on the client side where you are doing the mirroring; the rest of the nodes
are not affected.


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.
___
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
Don't set a cluster name. It's no longer supported. It really only matters
if you're running two or more independent clusters on the same boxes.
That's generally inadvisable anyway.

Cheers,
Erik

On Wed, Aug 1, 2018, 9:17 PM Glen Baars  wrote:

> Hello Ceph Users,
>
> Does anyone know how to set the Cluster Name when deploying with
> Ceph-deploy? I have 3 clusters to configure and need to correctly set the
> name.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Glen
> Baars
> Sent: Monday, 23 July 2018 5:59 PM
> To: Thode Jocelyn ; Vasu Kulkarni <
> vakul...@redhat.com>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> How very timely, I am facing the exact same issue.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: ceph-users  On Behalf Of Thode
> Jocelyn
> Sent: Monday, 23 July 2018 1:42 PM
> To: Vasu Kulkarni 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> Hi,
>
> Yes my rbd-mirror is coloctaed with my mon/osd. It only affects nodes
> where they are collocated as they all use the "/etc/sysconfig/ceph"
> configuration file.
>
> Best
> Jocelyn Thode
>
> -Original Message-
> From: Vasu Kulkarni [mailto:vakul...@redhat.com]
> Sent: vendredi, 20 juillet 2018 17:25
> To: Thode Jocelyn 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
>
> On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn 
> wrote:
> > Hi,
> >
> >
> >
> > I noticed that in commit
> > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> > 23b60efe421f3, the ability to specify a cluster name was removed. Is
> > there a reason for this removal ?
> >
> >
> >
> > Because right now, there are no possibility to create a ceph cluster
> > with a different name with ceph-deploy which is a big problem when
> > having two clusters replicating with rbd-mirror as we need different
> names.
> >
> >
> >
> > And even when following the doc here:
> > https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> > tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> > ith-the-same-name
> >
> >
> >
> > This is not sufficient as once we change the CLUSTER variable in the
> > sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> > reboot as they then try to load data from a path in /var/lib/ceph
> > containing the cluster name.
>
> Is you rbd-mirror client also colocated with mon/osd? This needs to be
> changed only on the client side where you are doing mirroring, rest of the
> nodes are not affected?
>
>
> >
> >
> >
> > Is there a solution to this problem ?
> >
> >
> >
> > Best Regards
> >
> > Jocelyn Thode
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> This e-mail is intended solely for the benefit of the addressee(s) and any
> other named recipient. It is confidential and may contain legally
> privileged or confidential information. If you are not the recipient, any
> use, distribution, disclosure or copying of this e-mail is prohibited. The
> confidentiality and legal privilege attached to this communication is not
> waived or lost by reason of the mistaken transmission or delivery to you.
> If you have received this e-mail in error, please notify us immediately.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> This e-mail is intended solely for the benefit of the addressee(s) and any
> other named recipient. It is confidential and may contain legally
> privileged or confidential information. If you are not the recipient, any
> use, distribution, disclosure or copying of this e-mail is prohibited. The
> confidentiality and legal privilege attached to this communication is not
> waived or lost by reason of the mistaken transmission or delivery to you.
> If you have received this e-mail in error, please notify us immediately.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Glen Baars
Hello Ceph Users,

Does anyone know how to set the Cluster Name when deploying with Ceph-deploy? I 
have 3 clusters to configure and need to correctly set the name.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users  On Behalf Of Glen Baars
Sent: Monday, 23 July 2018 5:59 PM
To: Thode Jocelyn ; Vasu Kulkarni 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

How very timely, I am facing the exact same issue.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users  On Behalf Of Thode Jocelyn
Sent: Monday, 23 July 2018 1:42 PM
To: Vasu Kulkarni 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Hi,

Yes, my rbd-mirror is colocated with my mon/osd. It only affects nodes where
they are colocated, as they all use the "/etc/sysconfig/ceph" configuration
file.

Best
Jocelyn Thode

-Original Message-
From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: vendredi, 20 juillet 2018 17:25
To: Thode Jocelyn 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> 23b60efe421f3, the ability to specify a cluster name was removed. Is
> there a reason for this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster
> with a different name with ceph-deploy which is a big problem when
> having two clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> ith-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> reboot as they then try to load data from a path in /var/lib/ceph
> containing the cluster name.

Is your rbd-mirror client also colocated with a mon/osd? This only needs to be changed
on the client side where you are doing the mirroring; the rest of the nodes
are not affected.


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-23 Thread Glen Baars
How very timely, I am facing the exact same issue.

Kind regards,
Glen Baars

-Original Message-
From: ceph-users  On Behalf Of Thode Jocelyn
Sent: Monday, 23 July 2018 1:42 PM
To: Vasu Kulkarni 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

Hi,

Yes, my rbd-mirror is colocated with my mon/osd. It only affects nodes where
they are colocated, as they all use the "/etc/sysconfig/ceph" configuration
file.

Best
Jocelyn Thode

-Original Message-
From: Vasu Kulkarni [mailto:vakul...@redhat.com]
Sent: vendredi, 20 juillet 2018 17:25
To: Thode Jocelyn 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> 23b60efe421f3, the ability to specify a cluster name was removed. Is
> there a reason for this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster
> with a different name with ceph-deploy which is a big problem when
> having two clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> ith-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a
> reboot as they then try to load data from a path in /var/lib/ceph
> containing the cluster name.

Is your rbd-mirror client also colocated with a mon/osd? This only needs to be changed
on the client side where you are doing the mirroring; the rest of the nodes
are not affected.


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-22 Thread Thode Jocelyn
Hi,

Yes, my rbd-mirror is colocated with my mon/osd. It only affects nodes where
they are colocated, as they all use the "/etc/sysconfig/ceph" configuration
file.

Best
Jocelyn Thode

-Original Message-
From: Vasu Kulkarni [mailto:vakul...@redhat.com] 
Sent: vendredi, 20 juillet 2018 17:25
To: Thode Jocelyn 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name

On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a980
> 23b60efe421f3, the ability to specify a cluster name was removed. Is 
> there a reason for this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster 
> with a different name with ceph-deploy which is a big problem when 
> having two clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/h
> tml/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-w
> ith-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the 
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a 
> reboot as they then try to load data from a path in /var/lib/ceph 
> containing the cluster name.

Is your rbd-mirror client also colocated with a mon/osd? This only needs to be changed
on the client side where you are doing the mirroring; the rest of the nodes
are not affected.


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Vasu Kulkarni
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn  wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3,
> the ability to specify a cluster name was removed. Is there a reason for
> this removal ?
>
>
>
> Because right now, there are no possibility to create a ceph cluster with a
> different name with ceph-deploy which is a big problem when having two
> clusters replicating with rbd-mirror as we need different names.
>
>
>
> And even when following the doc here:
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/block_device_mirroring#rbd-mirroring-clusters-with-the-same-name
>
>
>
> This is not sufficient as once we change the CLUSTER variable in the
> sysconfig file, mon,osd, mds etc. all use it and fail to start on a reboot
> as they then try to load data from a path in /var/lib/ceph containing the
> cluster name.

Is your rbd-mirror client also colocated with a mon/osd? This only needs to be
changed on the client side where you are doing the mirroring; the rest of
the nodes are not affected.
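
For what it's worth, a minimal sketch of that client-side approach, assuming the
rbd-mirror host can reach both clusters and both keep the internal cluster name
"ceph" (host and file names below are hypothetical):

# copy the peer cluster's config and keyring under a different *local* name;
# --cluster only selects which /etc/ceph/<name>.conf and keyring are read
scp peer-mon:/etc/ceph/ceph.conf /etc/ceph/remote.conf
scp peer-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/remote.client.admin.keyring
rbd --cluster remote ls    # talks to the peer cluster
rbd --cluster ceph ls      # talks to the local cluster

With this layout the mon/osd/mds nodes never need a CLUSTER change in /etc/sysconfig/ceph.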


>
>
>
> Is there a solution to this problem ?
>
>
>
> Best Regards
>
> Jocelyn Thode
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
This [*] is my ceph.conf

10.70.42.9 is the public address

And it is indeed the IP used by the MON daemon:

[root@c-mon-02 ~]# netstat -anp | grep 6789
tcp        0      0 10.70.42.9:6789    0.0.0.0:*          LISTEN       3835/ceph-mon
tcp        0      0 10.70.42.9:33592   10.70.42.10:6789   ESTABLISHED  3835/ceph-mon
tcp        0      0 10.70.42.9:41786   10.70.42.8:6789    ESTABLISHED  3835/ceph-mon
tcp   106008      0 10.70.42.9:33210   10.70.42.10:6789   CLOSE_WAIT   1162/ceph-mgr
tcp   100370      0 10.70.42.9:33218   10.70.42.10:6789   CLOSE_WAIT   1162/ceph-mgr
tcp        0      0 10.70.42.9:33578   10.70.42.10:6789   ESTABLISHED  1162/ceph-mgr


But the command:

/usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon.
--keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
exported keyring for client.admin

fails with c-mon-02 resolved to the management IP.

As a workaround I can add to /etc/hosts the mapping with the public address:

10.70.42.9  c-mon-02


but I wonder if this is the expected behavior
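
A quick sanity check for this, assuming the /etc/hosts workaround above is in
place (these commands only confirm name resolution and the listening address):

getent hosts c-mon-02          # should now return the public IP, 10.70.42.9
netstat -tlnp | grep 6789      # the mon should be listening on 10.70.42.9:6789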


Cheers, Massimo

[*]

[global]
fsid = 7a8cb8ff-562b-47da-a6aa-507136587dcf
public network = 10.70.42.0/24
cluster network = 10.69.42.0/24


auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 3  # Write an object 3 times.
osd pool default min size = 2


osd pool default pg num = 128
osd pool default pgp num = 128


[mon]
mon host =  c-mon-01, c-mon-02, c-mon-03
mon addr =  10.70.42.10:6789, 10.70.42.9:6789, 10.70.42.8:6789

[mon.c-mon-01]
mon addr = 10.70.42.10:6789
host = c-mon-01

[mon.c-mon-02]
mon addr = 10.70.42.9:6789
host = c-mon-02

[mon.c-mon-03]
mon addr = 10.70.42.8:6789
host = c-mon-03

[osd]
osd mount options xfs = rw,noatime,inode64,logbufs=8,logbsize=256k



On Thu, May 10, 2018 at 1:12 PM, Paul Emmerich 
wrote:

> check ceph.conf, it controls to which mon IP the client tries to connect.
>
> 2018-05-10 12:57 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.com>:
>
>> I configured the "public network" attribute in the ceph configuration
>> file.
>>
>> But it looks like to me that in the "auth get client.admin" command [*]
>> issued by ceph-deploy the address of the management network is used (I
>> guess because c-mon-02 gets resolved to the IP management address)
>>
>> Cheers, Massimo
>>
>> [*]
>> /usr/bin/ceph  --connect-timeout=25 --cluster=ceph --name mon.
>> --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
>>
>> On Thu, May 10, 2018 at 12:49 PM, Paul Emmerich 
>> wrote:
>>
>>> Monitors can use only exactly one IP address. ceph-deploy uses some
>>> heuristics
>>> based on hostname resolution and ceph public addr configuration to guess
>>> which
>>> one to use during setup. (Which I've always found to be a quite annoying
>>> feature.)
>>>
>>> The mon's IP must be reachable from all ceph daemons and clients, so it
>>> should be
>>> on your "public" network. Changing the IP of a mon is possible but
>>> annoying, it is
>>> often easier to remove and then re-add with a new IP (if possible):
>>>
>>> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-
>>> mons/#changing-a-monitor-s-ip-address
>>>
>>>
>>> Paul
>>>
>>> 2018-05-10 12:36 GMT+02:00 Massimo Sgaravatto <
>>> massimo.sgarava...@gmail.com>:
>>>
 I have a ceph cluster that I manually deployed, and now I am trying to
 see if I can use ceph-deploy to deploy new nodes (in particular the object
 gw).

 The network configuration is the following:

 Each MON node has two network IP: one on a "management network" (not
 used for ceph related stuff) and one on a "public network",
 The MON daemon listens to on the pub network

 Each OSD node  has three network IPs: one on a "management network"
 (not used for ceph related stuff), one on a "public network" and the third
 one is an internal network to be used as ceph cluster network (for ceph
 internal traffic: replication, recovery, etc)


 Name resolution works, but names are resolved to the IP address of the
 management network.
 And it looks like this is a problem. E.g. the following command (used
 in ceph-deploy gatherkeys) issued on a MON host (c-mon-02) doesn't work:

 /usr/bin/ceph --verbose --connect-timeout=25 --cluster=ceph --name mon.
 --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin

 unless I change the name resolution of c-mon-02 to the public address


 Is it a requirement (at least for ceph-deploy) that the name of each
 node of the ceph cluster must be resolved to the public IP address ?


 Thanks, Massimo

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


>>>
>>>
>>> --
>>> --
>>> Paul Emmerich
>>>
>>> 

Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
check ceph.conf, it controls to which mon IP the client tries to connect.

2018-05-10 12:57 GMT+02:00 Massimo Sgaravatto 
:

> I configured the "public network" attribute in the ceph configuration file.
>
> But it looks like to me that in the "auth get client.admin" command [*]
> issued by ceph-deploy the address of the management network is used (I
> guess because c-mon-02 gets resolved to the IP management address)
>
> Cheers, Massimo
>
> [*]
> /usr/bin/ceph  --connect-timeout=25 --cluster=ceph --name mon.
> --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
>
> On Thu, May 10, 2018 at 12:49 PM, Paul Emmerich 
> wrote:
>
>> Monitors can use only exactly one IP address. ceph-deploy uses some
>> heuristics
>> based on hostname resolution and ceph public addr configuration to guess
>> which
>> one to use during setup. (Which I've always found to be a quite annoying
>> feature.)
>>
>> The mon's IP must be reachable from all ceph daemons and clients, so it
>> should be
>> on your "public" network. Changing the IP of a mon is possible but
>> annoying, it is
>> often easier to remove and then re-add with a new IP (if possible):
>>
>> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-
>> mons/#changing-a-monitor-s-ip-address
>>
>>
>> Paul
>>
>> 2018-05-10 12:36 GMT+02:00 Massimo Sgaravatto <
>> massimo.sgarava...@gmail.com>:
>>
>>> I have a ceph cluster that I manually deployed, and now I am trying to
>>> see if I can use ceph-deploy to deploy new nodes (in particular the object
>>> gw).
>>>
>>> The network configuration is the following:
>>>
>>> Each MON node has two network IP: one on a "management network" (not
>>> used for ceph related stuff) and one on a "public network",
>>> The MON daemon listens to on the pub network
>>>
>>> Each OSD node  has three network IPs: one on a "management network" (not
>>> used for ceph related stuff), one on a "public network" and the third one
>>> is an internal network to be used as ceph cluster network (for ceph
>>> internal traffic: replication, recovery, etc)
>>>
>>>
>>> Name resolution works, but names are resolved to the IP address of the
>>> management network.
>>> And it looks like this is a problem. E.g. the following command (used in
>>> ceph-deploy gatherkeys) issued on a MON host (c-mon-02) doesn't work:
>>>
>>> /usr/bin/ceph --verbose --connect-timeout=25 --cluster=ceph --name mon.
>>> --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
>>>
>>> unless I change the name resolution of c-mon-02 to the public address
>>>
>>>
>>> Is it a requirement (at least for ceph-deploy) that the name of each
>>> node of the ceph cluster must be resolved to the public IP address ?
>>>
>>>
>>> Thanks, Massimo
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>>
>> --
>> --
>> Paul Emmerich
>>
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>
>> croit GmbH
>> Freseniusstr. 31h
>> 
>> 81247 München
>> 
>> www.croit.io
>> Tel: +49 89 1896585 90
>>
>
>


-- 
-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Massimo Sgaravatto
I configured the "public network" attribute in the ceph configuration file.

But it looks to me like, in the "auth get client.admin" command [*] issued by
ceph-deploy, the address of the management network is used (I guess because
c-mon-02 gets resolved to the management IP address).

Cheers, Massimo

[*]
/usr/bin/ceph  --connect-timeout=25 --cluster=ceph --name mon.
--keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin

On Thu, May 10, 2018 at 12:49 PM, Paul Emmerich 
wrote:

> Monitors can use only exactly one IP address. ceph-deploy uses some
> heuristics
> based on hostname resolution and ceph public addr configuration to guess
> which
> one to use during setup. (Which I've always found to be a quite annoying
> feature.)
>
> The mon's IP must be reachable from all ceph daemons and clients, so it
> should be
> on your "public" network. Changing the IP of a mon is possible but
> annoying, it is
> often easier to remove and then re-add with a new IP (if possible):
>
> http://docs.ceph.com/docs/master/rados/operations/add-
> or-rm-mons/#changing-a-monitor-s-ip-address
>
>
> Paul
>
> 2018-05-10 12:36 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.com>:
>
>> I have a ceph cluster that I manually deployed, and now I am trying to
>> see if I can use ceph-deploy to deploy new nodes (in particular the object
>> gw).
>>
>> The network configuration is the following:
>>
>> Each MON node has two network IP: one on a "management network" (not used
>> for ceph related stuff) and one on a "public network",
>> The MON daemon listens to on the pub network
>>
>> Each OSD node  has three network IPs: one on a "management network" (not
>> used for ceph related stuff), one on a "public network" and the third one
>> is an internal network to be used as ceph cluster network (for ceph
>> internal traffic: replication, recovery, etc)
>>
>>
>> Name resolution works, but names are resolved to the IP address of the
>> management network.
>> And it looks like this is a problem. E.g. the following command (used in
>> ceph-deploy gatherkeys) issued on a MON host (c-mon-02) doesn't work:
>>
>> /usr/bin/ceph --verbose --connect-timeout=25 --cluster=ceph --name mon.
>> --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
>>
>> unless I change the name resolution of c-mon-02 to the public address
>>
>>
>> Is it a requirement (at least for ceph-deploy) that the name of each node
>> of the ceph cluster must be resolved to the public IP address ?
>>
>>
>> Thanks, Massimo
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 
> 81247 München
> 
> www.croit.io
> Tel: +49 89 1896585 90
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: is it a requirement that the name of each node of the ceph cluster must be resolved to the public IP ?

2018-05-10 Thread Paul Emmerich
Monitors can use only exactly one IP address. ceph-deploy uses some
heuristics
based on hostname resolution and ceph public addr configuration to guess
which
one to use during setup. (Which I've always found to be a quite annoying
feature.)

The mon's IP must be reachable from all ceph daemons and clients, so it
should be
on your "public" network. Changing the IP of a mon is possible but
annoying, it is
often easier to remove and then re-add with a new IP (if possible):

http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
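
If removing and re-adding is acceptable, a rough ceph-deploy sketch of that path
(hostname hypothetical; make sure the remaining mons keep quorum first):

ceph-deploy mon destroy c-mon-02
# fix DNS / ceph.conf so c-mon-02 resolves to its public-network IP, then:
ceph-deploy mon add c-mon-02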


Paul

2018-05-10 12:36 GMT+02:00 Massimo Sgaravatto 
:

> I have a ceph cluster that I manually deployed, and now I am trying to see
> if I can use ceph-deploy to deploy new nodes (in particular the object gw).
>
> The network configuration is the following:
>
> Each MON node has two network IP: one on a "management network" (not used
> for ceph related stuff) and one on a "public network",
> The MON daemon listens to on the pub network
>
> Each OSD node  has three network IPs: one on a "management network" (not
> used for ceph related stuff), one on a "public network" and the third one
> is an internal network to be used as ceph cluster network (for ceph
> internal traffic: replication, recovery, etc)
>
>
> Name resolution works, but names are resolved to the IP address of the
> management network.
> And it looks like this is a problem. E.g. the following command (used in
> ceph-deploy gatherkeys) issued on a MON host (c-mon-02) doesn't work:
>
> /usr/bin/ceph --verbose --connect-timeout=25 --cluster=ceph --name mon.
> --keyring=/var/lib/ceph/mon/ceph-c-mon-02/keyring auth get client.admin
>
> unless I change the name resolution of c-mon-02 to the public address
>
>
> Is it a requirement (at least for ceph-deploy) that the name of each node
> of the ceph cluster must be resolved to the public IP address ?
>
>
> Thanks, Massimo
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Scottix
Alright I'll try that.

Thanks
On Mon, Apr 30, 2018 at 5:45 PM Vasu Kulkarni  wrote:

> If you are on 14.04 or need to use ceph-disk, then you can  install
> version 1.5.39 from pip. to downgrade just uninstall the current one
> and reinstall 1.5.39 you dont have to delete your conf file folder.
>
> On Mon, Apr 30, 2018 at 5:31 PM, Scottix  wrote:
> > It looks like ceph-deploy@2.0.0 is incompatible with systems running
> 14.04
> > and it got released in the luminous branch with the new deployment
> commands.
> >
> > Is there anyway to downgrade to an older version?
> >
> > Log of osd list
> >
> > XYZ@XYZStat200:~/XYZ-cluster$ ceph-deploy --overwrite-conf osd list
> > XYZCeph204
> > [ceph_deploy.conf][DEBUG ] found configuration file at:
> > /home/XYZ/.cephdeploy.conf
> > [ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/bin/ceph-deploy
> > --overwrite-conf osd list XYZCeph204
> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> > [ceph_deploy.cli][INFO  ]  username  : None
> > [ceph_deploy.cli][INFO  ]  verbose   : False
> > [ceph_deploy.cli][INFO  ]  debug : False
> > [ceph_deploy.cli][INFO  ]  overwrite_conf: True
> > [ceph_deploy.cli][INFO  ]  subcommand: list
> > [ceph_deploy.cli][INFO  ]  quiet : False
> > [ceph_deploy.cli][INFO  ]  cd_conf   :
> > 
> > [ceph_deploy.cli][INFO  ]  cluster   : ceph
> > [ceph_deploy.cli][INFO  ]  host  : ['XYZCeph204']
> > [ceph_deploy.cli][INFO  ]  func  :  at
> > 0x7f12af1e80c8>
> > [ceph_deploy.cli][INFO  ]  ceph_conf : None
> > [ceph_deploy.cli][INFO  ]  default_release   : False
> > XYZ@XYZceph204's password:
> > [XYZCeph204][DEBUG ] connection detected need for sudo
> > XYZ@XYZceph204's password:
> > [XYZCeph204][DEBUG ] connected to host: XYZCeph204
> > [XYZCeph204][DEBUG ] detect platform information from remote host
> > [XYZCeph204][DEBUG ] detect machine type
> > [XYZCeph204][DEBUG ] find the location of an executable
> > [XYZCeph204][INFO  ] Running command: sudo /sbin/initctl version
> > [XYZCeph204][DEBUG ] find the location of an executable
> > [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
> > [ceph_deploy.osd][DEBUG ] Listing disks on XYZCeph204...
> > [XYZCeph204][DEBUG ] find the location of an executable
> > [XYZCeph204][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
> > [XYZCeph204][DEBUG ]  stderr: /sbin/lvs: unrecognized option '--readonly'
> > [XYZCeph204][WARNIN] No valid Ceph devices found
> > [XYZCeph204][DEBUG ]  stderr: Error during parsing of command line.
> > [XYZCeph204][DEBUG ]  stderr: /sbin/lvs: unrecognized option '--readonly'
> > [XYZCeph204][DEBUG ]  stderr: Error during parsing of command line.
> > [XYZCeph204][ERROR ] RuntimeError: command returned non-zero exit
> status: 1
> > [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> > /usr/sbin/ceph-volume lvm list
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Vasu Kulkarni
If you are on 14.04 or need to use ceph-disk, then you can install
version 1.5.39 from pip. To downgrade, just uninstall the current one
and reinstall 1.5.39; you don't have to delete your conf file folder.
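
A minimal sketch of that downgrade, assuming ceph-deploy was installed via pip:

pip uninstall ceph-deploy
pip install ceph-deploy==1.5.39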

On Mon, Apr 30, 2018 at 5:31 PM, Scottix  wrote:
> It looks like ceph-deploy@2.0.0 is incompatible with systems running 14.04
> and it got released in the luminous branch with the new deployment commands.
>
> Is there anyway to downgrade to an older version?
>
> Log of osd list
>
> XYZ@XYZStat200:~/XYZ-cluster$ ceph-deploy --overwrite-conf osd list
> XYZCeph204
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/XYZ/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/bin/ceph-deploy
> --overwrite-conf osd list XYZCeph204
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  debug : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf: True
> [ceph_deploy.cli][INFO  ]  subcommand: list
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  host  : ['XYZCeph204']
> [ceph_deploy.cli][INFO  ]  func  :  0x7f12af1e80c8>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> XYZ@XYZceph204's password:
> [XYZCeph204][DEBUG ] connection detected need for sudo
> XYZ@XYZceph204's password:
> [XYZCeph204][DEBUG ] connected to host: XYZCeph204
> [XYZCeph204][DEBUG ] detect platform information from remote host
> [XYZCeph204][DEBUG ] detect machine type
> [XYZCeph204][DEBUG ] find the location of an executable
> [XYZCeph204][INFO  ] Running command: sudo /sbin/initctl version
> [XYZCeph204][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
> [ceph_deploy.osd][DEBUG ] Listing disks on XYZCeph204...
> [XYZCeph204][DEBUG ] find the location of an executable
> [XYZCeph204][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
> [XYZCeph204][DEBUG ]  stderr: /sbin/lvs: unrecognized option '--readonly'
> [XYZCeph204][WARNIN] No valid Ceph devices found
> [XYZCeph204][DEBUG ]  stderr: Error during parsing of command line.
> [XYZCeph204][DEBUG ]  stderr: /sbin/lvs: unrecognized option '--readonly'
> [XYZCeph204][DEBUG ]  stderr: Error during parsing of command line.
> [XYZCeph204][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> /usr/sbin/ceph-volume lvm list
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread Anthony D'Atri
> I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters.

InkTank had sort of discouraged the use of ceph-deploy; in 2014 we used it only 
to deploy OSDs.

Some time later the message changed.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-06 Thread David Turner
I looked through the backlog for ceph-deploy.  It has some pretty intense
stuff including bugs for random environments that aren't ubuntu or
redhat/centos.  Not really something I could manage in my off time.

On Thu, Apr 5, 2018 at 2:15 PM <ceph.nov...@habmalnefrage.de> wrote:

> ... we use (only!) ceph-deploy in all our environments, tools and scripts.
>
> If I look in the efforts went into ceph-volume and all the related issues,
> "manual LVM" overhead and/or still missing features, PLUS the in the same
> discussions mentioned recommendations to use something like ceph-ansible in
> parallel for the missing stuff, I can only hope we will find a (full
> time?!) maintainer for ceph-deploy and keep it alive. PLEASE ;)
>
>
>
> Gesendet: Donnerstag, 05. April 2018 um 08:53 Uhr
> Von: "Wido den Hollander" <w...@42on.com>
> An: ceph-users@lists.ceph.com
> Betreff: Re: [ceph-users] ceph-deploy: recommended?
>
> On 04/04/2018 08:58 PM, Robert Stanford wrote:
> >
> >  I read a couple of versions ago that ceph-deploy was not recommended
> > for production clusters.  Why was that?  Is this still the case?  We
> > have a lot of problems automating deployment without ceph-deploy.
> >
> >
>
> In the end it is just a Python tool which deploys the daemons. It is not
> active in any way. Stability of the cluster is not determined by the use
of ceph-deploy, but by the running daemons.
>
> I use ceph-deploy sometimes in very large deployments to make my life a
> bit easier.
>
> Wido
>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
>
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com[http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com]
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread ceph . novice
... we use (only!) ceph-deploy in all our environments, tools and scripts.

If I look at the effort that went into ceph-volume and all the related issues, 
the "manual LVM" overhead and/or still-missing features, PLUS the recommendations 
mentioned in those same discussions to use something like ceph-ansible in 
parallel for the missing pieces, I can only hope we will find a (full-time?!) 
maintainer for ceph-deploy and keep it alive. PLEASE ;)
 
 

Sent: Thursday, 5 April 2018 at 08:53
From: "Wido den Hollander" <w...@42on.com>
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy: recommended?

On 04/04/2018 08:58 PM, Robert Stanford wrote:
>
>  I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters.  Why was that?  Is this still the case?  We
> have a lot of problems automating deployment without ceph-deploy.
>
>

In the end it is just a Python tool which deploys the daemons. It is not
active in any way. Stability of the cluster is not determined by the use
of ceph-deploy, but by the running daemons.

I use ceph-deploy sometimes in very large deployments to make my life a
bit easier.

Wido

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com[http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com]
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Wido den Hollander


On 04/04/2018 08:58 PM, Robert Stanford wrote:
> 
>  I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters.  Why was that?  Is this still the case?  We
> have a lot of problems automating deployment without ceph-deploy.
> 
> 

In the end it is just a Python tool which deploys the daemons. It is not
active in any way. Stability of the cluster is not determined by the use
of ceph-deploy, but by the running daemons.

I use ceph-deploy sometimes in very large deployments to make my life a
bit easier.

Wido

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread Dietmar Rieder
On 04/04/2018 08:58 PM, Robert Stanford wrote:
> 
>  I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters.  Why was that?  Is this still the case?  We
> have a lot of problems automating deployment without ceph-deploy.
> 

We are using it in production on our luminous cluster for deploying and
updating. No problems so far.  It is very helpful.

Dietmar

-- 
_
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics
Innrain 80, 6020 Innsbruck
Email: dietmar.rie...@i-med.ac.at
Web:   http://www.icbi.at

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread Brady Deetz
We use ceph-deploy in production. That said, our crush map is getting more
complex and we are starting to make use of other tooling as that occurs.
But we still use ceph-deploy to install ceph and bootstrap OSDs.
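
For reference, a minimal sketch of that kind of install/bootstrap with
ceph-deploy 2.x (hostname and device below are hypothetical):

ceph-deploy install --release luminous osd-node1
ceph-deploy osd create --data /dev/sdb osd-node1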

On Wed, Apr 4, 2018, 1:58 PM Robert Stanford 
wrote:

>
>  I read a couple of versions ago that ceph-deploy was not recommended for
> production clusters.  Why was that?  Is this still the case?  We have a lot
> of problems automating deployment without ceph-deploy.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy: recommended?

2018-04-04 Thread ceph


On 4 April 2018 20:58:19 CEST, Robert Stanford wrote:
>I read a couple of versions ago that ceph-deploy was not recommended
>for
>production clusters.  Why was that?  Is this still the case?  We have a
I cannot imagine that. I have used it now for a few versions before 2.0 and it works 
great. We use it in production. 

- Mehmet 
>lot
>of problems automating deployment without ceph-deploy.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread David Turner
You mean documentation like `ceph-deploy --help` or `man ceph-deploy` or
the [1] online documentation? Spoiler, they all document and explain what
`--release` does. I do agree that the [2] documentation talking about
deploying a luminous cluster should mention it if jewel was left the
default installation on purpose.  I'm guessing that was an oversight as
Luminous has been considered stable since 12.2.0. It will likely be fixed
now that it's brought up but the page talking about deploying would do well
to have a note about being able to choose the release you want for
simplicity.

As a side note, it is possible for any one of us to make change requests to
the documentation as this is an open source project. I have a goal to be
more proactive with taking care of goofs, mistakes, and vague parts of the
documentation as I find them. I'll try to do a write-up for others to
easily get involved in this as well.

[1] http://docs.ceph.com/docs/luminous/man/8/ceph-deploy/
[2] http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-new/

On Thu, Mar 1, 2018, 7:06 AM Max Cuttins  wrote:

> Ah!
> So you think this is done by design?
>
> However that command is very very very usefull.
> Please add that to documentation.
> Next time it will save me 2/3 hours.
>
>
>
> Il 01/03/2018 06:12, Sébastien VIGNERON ha scritto:
>
> Hi Max,
>
> I had the same issue (under Ubuntu 16.04) but I have read the
> ceph-deploy 2.0.0 source code and saw a "--release" flag for the install
> subcommand. You can find the flag with the following command: ceph-deploy
> install --help
>
> It looks like the culprit part of ceph-deploy can be found around line 20
> of /usr/lib/python2.7/dist-packages/ceph_deploy/install.py:
>
> …
> 14 def sanitize_args(args):
> 15"""
> 16args may need a bunch of logic to set proper defaults that
> argparse is
> 17not well suited for.
> 18"""
> 19if args.release is None:
> 20args.release = 'jewel'
> 21args.default_release = True
> 22
> 23# XXX This whole dance is because --stable is getting deprecated
> 24if args.stable is not None:
> 25LOG.warning('the --stable flag is deprecated, use --release
> instead')
> 26args.release = args.stable
> 27# XXX Tango ends here.
> 28
> 29return args
>
> …
>
> Which means we now have to specify "--release luminous" when we want to
> install a luminous cluster, at least until luminous is considered stable
> and the ceph-deploy tool is changed.
> I think it may be a Kernel version consideration: not all distro have the
> needed minimum version of the kernel (and features) for a full use of
> luminous.
>
> Cordialement / Best regards,
>
> Sébastien VIGNERON
> CRIANN,
> Ingénieur / Engineer
> Technopôle du Madrillet
> 745, avenue de l'Université
> 76800 Saint-Etienne du Rouvray - France
> tél. +33 2 32 91 42 91 <+33%202%2032%2091%2042%2091>
> fax. +33 2 32 91 42 92 <+33%202%2032%2091%2042%2092>
> http://www.criann.fr
> mailto:sebastien.vigne...@criann.fr 
> support: supp...@criann.fr
>
> Le 1 mars 2018 à 00:37, Max Cuttins  a écrit :
>
> Didn't check at time.
>
> I deployed everything from VM standalone.
> The VM was just build up with fresh new centOS7.4 using minimal
> installation ISO1708.
> It's a completly new/fresh/empty system.
> Then I run:
>
> yum update -y
> yum install wget zip unzip vim pciutils -y
> yum install epel-release -y
> yum update -y
> yum install ceph-deploy -y
> yum install yum-plugin-priorities -y
>
> it installed:
>
> Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch
>
> -> install ceph with ceph-deploy on 3 nodes.
>
> As a result I get Jewel.
>
> Then... I purge everything from all the 3 nodes
> yum update again on ceph deployer node and get:
>
> Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch
>
> ... then I tried to reinstall over and over but I always get Jewel.
> I tryed to install after removed .ceph file config in my homedir.
> I tryed to install after change default repo to repo-luminous
> ... got always Jewel.
>
> Only force the release in the ceph-deploy command allow me to install
> luminous.
>
> Probably yum-plugin-priorities should not be installed after ceph-deploy
> even if I didn't run still any command.
> But what is so strange is that purge and reinstall everything will always
> reinstall Jewel.
> It seems that some lock file has been write somewhere to use Jewel.
>
>
>
> Il 28/02/2018 22:08, David Turner ha scritto:
>
> Which version of ceph-deploy are you using?
>
> On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
> wrote:
>
>> This worked.
>>
>> However somebody should investigate why default is still jewel on Centos
>> 7.4
>>
>> Il 28/02/2018 00:53, jorpilo ha scritto:
>>
>> Try using:
>> ceph-deploy --release luminous host1...
>>
>>  Mensaje original 
>> De: Massimiliano Cuttini 

Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread Max Cuttins

Ah!
So you think this is done by design?

However, that command is very, very useful.
Please add it to the documentation.
Next time it will save me 2-3 hours.



On 01/03/2018 06:12, Sébastien VIGNERON wrote:

Hi Max,

I had the same issue (under Ubuntu 16.04) but I have read the 
ceph-deploy 2.0.0 source code and saw a "--release" flag for the 
install subcommand. You can find the flag with the following 
command: ceph-deploy install --help


It looks like the culprit part of ceph-deploy can be found around line 
20 of /usr/lib/python2.7/dist-packages/ceph_deploy/install.py:


…
    14def sanitize_args(args):
    15   """
    16   args may need a bunch of logic to set proper defaults that 
argparse is

    17   not well suited for.
    18   """
    19   if args.release is None:
    20       args.release = 'jewel'
    21       args.default_release = True
    22
    23   # XXX This whole dance is because --stable is getting deprecated
    24   if args.stable is not None:
    25       LOG.warning('the --stable flag is deprecated, use 
--release instead')

    26       args.release = args.stable
    27   # XXX Tango ends here.
    28
    29   return args

…

Which means we now have to specify "--release luminous" when we want 
to install a luminous cluster, at least until luminous is considered 
stable and the ceph-deploy tool is changed.
I think it may be a kernel version consideration: not all distros have 
the needed minimum kernel version (and features) for full use 
of Luminous.


Cordialement / Best regards,

Sébastien VIGNERON
CRIANN,
Ingénieur / Engineer
Technopôle du Madrillet
745, avenue de l'Université
76800 Saint-Etienne du Rouvray - France
tél. +33 2 32 91 42 91
fax. +33 2 32 91 42 92
http://www.criann.fr
mailto:sebastien.vigne...@criann.fr
support: supp...@criann.fr

On 1 March 2018 at 00:37, Max Cuttins wrote:


Didn't check at the time.

I deployed everything from a standalone VM.
The VM was just built with a fresh new CentOS 7.4, using the minimal 
installation ISO 1708.

It's a completely new/fresh/empty system.
Then I ran:

yum update -y
yum install wget zip unzip vim pciutils -y
yum install epel-release -y
yum update -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y

it installed:

Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch

-> install ceph with ceph-deploy on 3 nodes.

As a result I get Jewel.

Then... I purge everything from all the 3 nodes
yum update again on ceph deployer node and get:

Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch

... then I tried to reinstall over and over but I always got Jewel.
I tried to install after removing the .ceph config file in my homedir.
I tried to install after changing the default repo to repo-luminous
... and always got Jewel.

Only forcing the release in the ceph-deploy command allowed me to install 
Luminous.


Probably yum-plugin-priorities should not be installed after 
ceph-deploy, even though I hadn't run any command yet.
But what is so strange is that purging and reinstalling everything will 
always reinstall Jewel.

It seems that some lock file has been written somewhere to use Jewel.



On 28/02/2018 22:08, David Turner wrote:

Which version of ceph-deploy are you using?

On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
> wrote:


This worked.

However somebody should investigate why default is still jewel
on Centos 7.4


On 28/02/2018 00:53, jorpilo wrote:

Try using:
ceph-deploy --release luminous host1...

 Original message 
From: Massimiliano Cuttini 
Date: 28/2/18 12:42 a.m. (GMT+01:00)
To: ceph-users@lists.ceph.com 
Subject: [ceph-users] ceph-deploy won't install luminous (but
Jewel instead)

This is the 5th time that I have installed and then purged the
installation.
ceph-deploy always installs JEWEL instead of Luminous.

No way, even if I force the repo from default to luminous:

https://download.ceph.com/rpm-luminous/el7/noarch

It still installs Jewel; it's stuck.

I've already checked that I had installed yum-plugin-priorities,
and I did.
Everything is exactly as the documentation requests.
But I still always get Jewel and not Luminous.




___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Sébastien VIGNERON
Hi Max,

I had the same issue (under Ubuntu 16.04) but I have read the ceph-deploy 2.0.0 
source code and saw a "--release" flag for the install subcommand. You can 
find the flag with the following command: ceph-deploy install --help

It looks like the culprit part of ceph-deploy can be found around line 20 of 
/usr/lib/python2.7/dist-packages/ceph_deploy/install.py:

…
14  def sanitize_args(args):
15  """
16  args may need a bunch of logic to set proper defaults that argparse 
is
17  not well suited for.
18  """
19  if args.release is None:
20  args.release = 'jewel'
21  args.default_release = True
22
23  # XXX This whole dance is because --stable is getting deprecated
24  if args.stable is not None:
25  LOG.warning('the --stable flag is deprecated, use --release 
instead')
26  args.release = args.stable
27  # XXX Tango ends here.
28
29  return args

…

Which means we now have to specify "--release luminous" when we want to install 
a luminous cluster, at least until luminous is considered stable and the 
ceph-deploy tool is changed. 
I think it may be a kernel version consideration: not all distros have the 
needed minimum kernel version (and features) for full use of Luminous.
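
In practice that means something like the following until the default changes
(hostnames hypothetical):

ceph-deploy install --release luminous node1 node2 node3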

Cordialement / Best regards,

Sébastien VIGNERON 
CRIANN, 
Ingénieur / Engineer
Technopôle du Madrillet 
745, avenue de l'Université 
76800 Saint-Etienne du Rouvray - France 
tél. +33 2 32 91 42 91 
fax. +33 2 32 91 42 92 
http://www.criann.fr 
mailto:sebastien.vigne...@criann.fr
support: supp...@criann.fr

> Le 1 mars 2018 à 00:37, Max Cuttins  a écrit :
> 
> Didn't check at time.
> 
> I deployed everything from VM standalone.
> The VM was just build up with fresh new centOS7.4 using minimal installation 
> ISO1708.
> It's a completly new/fresh/empty system.
> Then I run:
> 
> yum update -y
> yum install wget zip unzip vim pciutils -y
> yum install epel-release -y
> yum update -y 
> yum install ceph-deploy -y
> yum install yum-plugin-priorities -y
> 
> it installed:
> 
> Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch
> -> install ceph with ceph-deploy on 3 nodes.
> 
> As a result I get Jewel.
> Then... I purge everything from all the 3 nodes
> yum update again on ceph deployer node and get:
> 
> Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch
> 
> ... then I tried to reinstall over and over but I always get Jewel.
> I tryed to install after removed .ceph file config in my homedir.
> I tryed to install after change default repo to repo-luminous
> ... got always Jewel.
> 
> Only force the release in the ceph-deploy command allow me to install 
> luminous.
> 
> Probably yum-plugin-priorities should not be installed after ceph-deploy even 
> if I didn't run still any command.
> But what is so strange is that purge and reinstall everything will always 
> reinstall Jewel.
> It seems that some lock file has been write somewhere to use Jewel.
> 
> 
> 
> Il 28/02/2018 22:08, David Turner ha scritto:
>> Which version of ceph-deploy are you using?
>> 
>> On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini > > wrote:
>> This worked.
>> 
>> However somebody should investigate why default is still jewel on Centos 7.4
>> 
>> Il 28/02/2018 00:53, jorpilo ha scritto:
>>> Try using:
>>> ceph-deploy --release luminous host1...
>>> 
>>>  Mensaje original 
>>> De: Massimiliano Cuttini  
>>> Fecha: 28/2/18 12:42 a. m. (GMT+01:00)
>>> Para: ceph-users@lists.ceph.com 
>>> Asunto: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)
>>> 
>>> This is the 5th time that I install and after purge the installation.
>>> Ceph Deploy is alway install JEWEL instead of Luminous.
>>> No way even if I force the repo from default to luminous:
>>> 
>>> https://download.ceph.com/rpm-luminous/el7/noarch 
>>> 
>>> It still install Jewel it's stuck.
>>> I've already checked if I had installed yum-plugin-priorities, and I did it.
>>> Everything is exaclty as the documentation request.
>>> But still I get always Jewel and not Luminous.
>>> 
>>> 
>>> 
>> 
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
>> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Max Cuttins

Didn't check at the time.

I deployed everything from a standalone VM.
The VM was just built with a fresh CentOS 7.4 minimal 
installation (ISO 1708).

It's a completely new/fresh/empty system.
Then I run:

yum update -y
yum install wget zip unzip vim pciutils -y
yum install epel-release -y
yum update -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y

it installed:

Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch

-> install ceph with ceph-deploy on 3 nodes.

As a result I get Jewel.

Then... I purged everything from all 3 nodes,
ran yum update again on the ceph-deploy node and got:

Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch

... then I tried to reinstall over and over, but I always got Jewel.
I tried to install after removing the .ceph config file in my home directory.
I tried to install after changing the default repo to the luminous repo.
... I always got Jewel.

Only forcing the release in the ceph-deploy command allowed me to install 
luminous.


Probably yum-plugin-priorities should not be installed after ceph-deploy, 
even though I had not yet run any command with it.
But what is so strange is that purging and reinstalling everything will 
always reinstall Jewel.

It seems that some lock file has been written somewhere that keeps it on Jewel.
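
For completeness, the purge I run between attempts is along these lines (the
node names are only examples):

ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
rm -f ceph.conf *.keyring ceph*.log        # local working directory files
# and, in case a stale repo definition is what keeps pointing at jewel,
# on each node: sudo rm -f /etc/yum.repos.d/ceph.repo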



On 28/02/2018 22:08, David Turner wrote:

Which version of ceph-deploy are you using?

On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
> wrote:


This worked.

However somebody should investigate why default is still jewel on
Centos 7.4


On 28/02/2018 00:53, jorpilo wrote:

Try using:
ceph-deploy --release luminous host1...

 Original message 
From: Massimiliano Cuttini 

Date: 28/2/18 12:42 a.m. (GMT+01:00)
To: ceph-users@lists.ceph.com 
Subject: [ceph-users] ceph-deploy won't install luminous (but
Jewel instead)

This is the 5th time that I install and after purge the installation.
Ceph Deploy is alway install JEWEL instead of Luminous.

No way even if I force the repo from default to luminous:

|https://download.ceph.com/rpm-luminous/el7/noarch|

It still install Jewel it's stuck.

I've already checked if I had installed yum-plugin-priorities,
and I did it.
Everything is exaclty as the documentation request.
But still I get always Jewel and not Luminous.




___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread David Turner
Which version of ceph-deploy are you using?

On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
wrote:

> This worked.
>
> However somebody should investigate why default is still jewel on Centos
> 7.4
>
> Il 28/02/2018 00:53, jorpilo ha scritto:
>
> Try using:
> ceph-deploy --release luminous host1...
>
>  Mensaje original 
> De: Massimiliano Cuttini  
> Fecha: 28/2/18 12:42 a. m. (GMT+01:00)
> Para: ceph-users@lists.ceph.com
> Asunto: [ceph-users] ceph-deploy won't install luminous (but Jewel
> instead)
>
> This is the 5th time that I install and after purge the installation.
> Ceph Deploy is alway install JEWEL instead of Luminous.
>
> No way even if I force the repo from default to luminous:
>
> https://download.ceph.com/rpm-luminous/el7/noarch
>
> It still install Jewel it's stuck.
>
> I've already checked if I had installed yum-plugin-priorities, and I did
> it.
> Everything is exaclty as the documentation request.
> But still I get always Jewel and not Luminous.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Massimiliano Cuttini

This worked.

However, somebody should investigate why the default is still Jewel on CentOS 7.4.


On 28/02/2018 00:53, jorpilo wrote:

Try using:
ceph-deploy --release luminous host1...

 Original message 
From: Massimiliano Cuttini 
Date: 28/2/18 12:42 a.m. (GMT+01:00)
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel 
instead)


This is the 5th time that I install and after purge the installation.
Ceph Deploy is alway install JEWEL instead of Luminous.

No way even if I force the repo from default to luminous:

|https://download.ceph.com/rpm-luminous/el7/noarch|

It still install Jewel it's stuck.

I've already checked if I had installed yum-plugin-priorities, and I 
did it.

Everything is exaclty as the documentation request.
But still I get always Jewel and not Luminous.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-27 Thread jorpilo
Try using:
ceph-deploy --release luminous host1...

 Original message 
From: Massimiliano Cuttini 
Date: 28/2/18 12:42 a.m. (GMT+01:00)
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

This is the 5th time that I install and after purge the installation.
Ceph Deploy is alway install JEWEL instead of Luminous.

No way even if I force the repo from default to luminous:
https://download.ceph.com/rpm-luminous/el7/noarch
It still install Jewel it's stuck.

I've already checked if I had installed yum-plugin-priorities, and I did it.
Everything is exaclty as the documentation request.
But still I get always Jewel and not Luminous.





  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-16 Thread Alfredo Deza
On Wed, Nov 15, 2017 at 8:31 AM, Wei Jin  wrote:
> I tried to do purge/purgedata and then redo the deploy command for a
> few times, and it still fails to start osd.
> And there is no error log, anyone know what's the problem?

Seems like this is OSD 0, right? Have you checked the startup errors
on /var/log/ceph/ ? Or by checking the output of the daemon with
systemctl?

If nothing is working still, maybe try running the OSD in the
foreground with (assuming OSD 0):

/usr/bin/ceph-osd --debug_osd 20 -d -f --cluster ceph --id 0
--setuser ceph --setgroup ceph

Behind the scenes, ceph-disk is getting these devices ready and
associated with the cluster as OSD 0; if you've tried this many times
already, I am suspicious of the same OSD id being reused or of the
drives being polluted.

Seems like you are using filestore as well, so sdb1 will probably be
your data and mounted at /var/lib/ceph/osd/ceph-0 and sdb2 your
journal, linked at /var/lib/ceph/osd/ceph-0/journal

Make sure those are mounted and linked properly.
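
For example, the quick checks I would run on that host first (again assuming OSD 0):

systemctl status ceph-osd@0
journalctl -u ceph-osd@0 -n 50 --no-pager      # recent startup messages
mount | grep ceph-0                            # is the data partition mounted?
ls -l /var/lib/ceph/osd/ceph-0/journal         # does the journal link point at sdb2?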

> BTW, my os is dedian with 4.4 kernel.
> Thanks.
>
>
> On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin  wrote:
>> Hi, List,
>>
>> My machine has 12 SSDs disk, and I use ceph-deploy to deploy them. But for
>> some machine/disks,it failed to start osd.
>> I tried many times, some success but others failed. But there is no error
>> info.
>> Following is ceph-deploy log for one disk:
>>
>>
>> root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /root/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create
>> --zap-disk n10-075-094:sdb:sdb
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> [ceph_deploy.cli][INFO  ]  username  : None
>> [ceph_deploy.cli][INFO  ]  block_db  : None
>> [ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094',
>> '/dev/sdb', '/dev/sdb')]
>> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
>> [ceph_deploy.cli][INFO  ]  verbose   : False
>> [ceph_deploy.cli][INFO  ]  bluestore : None
>> [ceph_deploy.cli][INFO  ]  block_wal : None
>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> [ceph_deploy.cli][INFO  ]  subcommand: create
>> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
>> /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO  ]  quiet : False
>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>> 
>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
>> [ceph_deploy.cli][INFO  ]  filestore : None
>> [ceph_deploy.cli][INFO  ]  func  : > 0x7f566ae9a938>
>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> [ceph_deploy.cli][INFO  ]  default_release   : False
>> [ceph_deploy.cli][INFO  ]  zap_disk  : True
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
>> n10-075-094:/dev/sdb:/dev/sdb
>> [n10-075-094][DEBUG ] connected to host: n10-075-094
>> [n10-075-094][DEBUG ] detect platform information from remote host
>> [n10-075-094][DEBUG ] detect machine type
>> [n10-075-094][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
>> [ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
>> [n10-075-094][DEBUG ] write cluster configuration to
>> /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal
>> /dev/sdb activate True
>> [n10-075-094][DEBUG ] find the location of an executable
>> [n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
>> --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --cluster=ceph --show-config-value=fsid
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
>> --cluster ceph --setuser ceph --setgroup ceph
>> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
>> /sys/dev/block/8:16/dm/uuid
>> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
>> --cluster=ceph --show-config-value=osd_journal_size
>> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
>> /sys/dev/block/8:16/dm/uuid
>> [n10-075-094][WARNIN] get_dm_uuid: 

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried to do purge/purgedata and then redo the deploy command a
few times, and it still fails to start the OSD.
And there is no error log; does anyone know what the problem is?
BTW, my OS is Debian with a 4.4 kernel.
Thanks.


On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin  wrote:
> Hi, List,
>
> My machine has 12 SSDs disk, and I use ceph-deploy to deploy them. But for
> some machine/disks,it failed to start osd.
> I tried many times, some success but others failed. But there is no error
> info.
> Following is ceph-deploy log for one disk:
>
>
> root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd create
> --zap-disk n10-075-094:sdb:sdb
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  block_db  : None
> [ceph_deploy.cli][INFO  ]  disk  : [('n10-075-094',
> '/dev/sdb', '/dev/sdb')]
> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  bluestore : None
> [ceph_deploy.cli][INFO  ]  block_wal : None
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  subcommand: create
> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
> /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
> [ceph_deploy.cli][INFO  ]  filestore : None
> [ceph_deploy.cli][INFO  ]  func  :  0x7f566ae9a938>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> [ceph_deploy.cli][INFO  ]  zap_disk  : True
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> n10-075-094:/dev/sdb:/dev/sdb
> [n10-075-094][DEBUG ] connected to host: n10-075-094
> [n10-075-094][DEBUG ] detect platform information from remote host
> [n10-075-094][DEBUG ] detect machine type
> [n10-075-094][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: debian 8.9 jessie
> [ceph_deploy.osd][DEBUG ] Deploying osd to n10-075-094
> [n10-075-094][DEBUG ] write cluster configuration to
> /etc/ceph/{cluster}.conf
> [ceph_deploy.osd][DEBUG ] Preparing host n10-075-094 disk /dev/sdb journal
> /dev/sdb activate True
> [n10-075-094][DEBUG ] find the location of an executable
> [n10-075-094][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
> --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb /dev/sdb
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log
> --cluster ceph --setuser ceph --setgroup ceph
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=osd_journal_size
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is
> /sys/dev/block/8:17/dm/uuid
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is
> /sys/dev/block/8:18/dm/uuid
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
> [n10-075-094][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> [n10-075-094][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is
> /sys/dev/block/8:16/dm/uuid
> [n10-075-094][WARNIN] zap: Zapping partition table on /dev/sdb
> [n10-075-094][WARNIN] 

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
On Fri, Jul 14, 2017 at 10:37 AM, Oscar Segarra 
wrote:

> I'm testing on latest Jewell version I've found in repositories:
>
You can skip that command then; I will fix the document to add a note for
jewel / pre-luminous builds.
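
For reference, on luminous+ builds the mgr create step expects the
client.bootstrap-mgr key to exist under /var/lib/ceph/bootstrap-mgr/ on the
target host. If it is missing (for example on a cluster bootstrapped before
luminous), something along these lines should recreate it (the caps shown are
the usual bootstrap profile; double-check them against your release):

ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
    -o /var/lib/ceph/bootstrap-mgr/ceph.keyring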


>
> [root@vdicnode01 yum.repos.d]# ceph --version
> ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e)
>
> thanks a lot!
>
> 2017-07-14 19:21 GMT+02:00 Vasu Kulkarni :
>
>> It is tested for master and is working fine, I will run those same tests
>> on luminous and check if there is an issue and update here. mgr create is
>> needed for luminous+ bulids only.
>>
>> On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown 
>> wrote:
>>
>>> I've been trying to work through similar mgr issues for
>>> Xenial-Luminous...
>>>
>>> roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /home/roger/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr
>>> create mon1 nuc2
>>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>> [ceph_deploy.cli][INFO  ]  username  : None
>>> [ceph_deploy.cli][INFO  ]  verbose   : False
>>> [ceph_deploy.cli][INFO  ]  mgr   : [('mon1',
>>> 'mon1'), ('nuc2', 'nuc2')]
>>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>>> [ceph_deploy.cli][INFO  ]  subcommand: create
>>> [ceph_deploy.cli][INFO  ]  quiet : False
>>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>>> 
>>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>>> [ceph_deploy.cli][INFO  ]  func  : >> at 0x7f25b4772668>
>>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>>> [ceph_deploy.cli][INFO  ]  default_release   : False
>>> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts mon1:mon1
>>> nuc2:nuc2
>>> [mon1][DEBUG ] connection detected need for sudo
>>> [mon1][DEBUG ] connected to host: mon1
>>> [mon1][DEBUG ] detect platform information from remote host
>>> [mon1][DEBUG ] detect machine type
>>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon1
>>> [mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>> [mon1][DEBUG ] create path if it doesn't exist
>>> [mon1][INFO  ] Running command: sudo ceph --cluster ceph --name
>>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>>> auth get-or-create mgr.mon1 mon allow profile mgr osd allow * mds allow *
>>> -o /var/lib/ceph/mgr/ceph-mon1/keyring
>>> [mon1][ERROR ] 2017-07-14 11:17:19.667418 7f309613f700  0 librados:
>>> client.bootstrap-mgr authentication error (22) Invalid argument
>>> [mon1][ERROR ] (22, 'error connecting to the cluster')
>>> [mon1][ERROR ] exit code from command was: 1
>>> [ceph_deploy.mgr][ERROR ] could not create mgr
>>> [nuc2][DEBUG ] connection detected need for sudo
>>> [nuc2][DEBUG ] connected to host: nuc2
>>> [nuc2][DEBUG ] detect platform information from remote host
>>> [nuc2][DEBUG ] detect machine type
>>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
>>> [nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>> [nuc2][DEBUG ] create path if it doesn't exist
>>> [nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
>>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>>> auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
>>> -o /var/lib/ceph/mgr/ceph-nuc2/keyring
>>> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700  0 librados:
>>> client.bootstrap-mgr authentication error (22) Invalid argument
>>> [nuc2][ERROR ] (22, 'error connecting to the cluster')
>>> [nuc2][ERROR ] exit code from command was: 1
>>> [ceph_deploy.mgr][ERROR ] could not create mgr
>>> [ceph_deploy][ERROR ] GenericError: Failed to create 2 MGRs
>>> roger@desktop:~/ceph-cluster$
>>>
>>>
>>>
>>> On Fri, Jul 14, 2017 at 11:01 AM Oscar Segarra 
>>> wrote:
>>>
 Hi,

 I'm following the instructions of the web (
 http://docs.ceph.com/docs/master/start/quick-ceph-deploy/) and I'm
 trying to create a manager on my first node.

 In my environment I have 2 nodes:

 - vdicnode01 (mon, mgr and osd)
 - vdicnode02 (osd)

 Each server has to NIC, the public and the private where all ceph
 trafic will go over.

 I have created .local entries in /etc/hosts:

 192.168.100.101   vdicnode01.local
 192.168.100.102   vdicnode02.local

 Public names are resolved via DNS.

 When I try to create the mgr in a 

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Oscar Segarra
I'm testing on the latest Jewel version I've found in the repositories:

[root@vdicnode01 yum.repos.d]# ceph --version
ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e)

thanks a lot!

2017-07-14 19:21 GMT+02:00 Vasu Kulkarni :

> It is tested for master and is working fine, I will run those same tests
> on luminous and check if there is an issue and update here. mgr create is
> needed for luminous+ bulids only.
>
> On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown 
> wrote:
>
>> I've been trying to work through similar mgr issues for Xenial-Luminous...
>>
>> roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /home/roger/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr
>> create mon1 nuc2
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> [ceph_deploy.cli][INFO  ]  username  : None
>> [ceph_deploy.cli][INFO  ]  verbose   : False
>> [ceph_deploy.cli][INFO  ]  mgr   : [('mon1',
>> 'mon1'), ('nuc2', 'nuc2')]
>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> [ceph_deploy.cli][INFO  ]  subcommand: create
>> [ceph_deploy.cli][INFO  ]  quiet : False
>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>> 
>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> [ceph_deploy.cli][INFO  ]  func  : > at 0x7f25b4772668>
>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> [ceph_deploy.cli][INFO  ]  default_release   : False
>> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts mon1:mon1
>> nuc2:nuc2
>> [mon1][DEBUG ] connection detected need for sudo
>> [mon1][DEBUG ] connected to host: mon1
>> [mon1][DEBUG ] detect platform information from remote host
>> [mon1][DEBUG ] detect machine type
>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon1
>> [mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [mon1][DEBUG ] create path if it doesn't exist
>> [mon1][INFO  ] Running command: sudo ceph --cluster ceph --name
>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>> auth get-or-create mgr.mon1 mon allow profile mgr osd allow * mds allow *
>> -o /var/lib/ceph/mgr/ceph-mon1/keyring
>> [mon1][ERROR ] 2017-07-14 11:17:19.667418 7f309613f700  0 librados:
>> client.bootstrap-mgr authentication error (22) Invalid argument
>> [mon1][ERROR ] (22, 'error connecting to the cluster')
>> [mon1][ERROR ] exit code from command was: 1
>> [ceph_deploy.mgr][ERROR ] could not create mgr
>> [nuc2][DEBUG ] connection detected need for sudo
>> [nuc2][DEBUG ] connected to host: nuc2
>> [nuc2][DEBUG ] detect platform information from remote host
>> [nuc2][DEBUG ] detect machine type
>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
>> [nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [nuc2][DEBUG ] create path if it doesn't exist
>> [nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>> auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
>> -o /var/lib/ceph/mgr/ceph-nuc2/keyring
>> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700  0 librados:
>> client.bootstrap-mgr authentication error (22) Invalid argument
>> [nuc2][ERROR ] (22, 'error connecting to the cluster')
>> [nuc2][ERROR ] exit code from command was: 1
>> [ceph_deploy.mgr][ERROR ] could not create mgr
>> [ceph_deploy][ERROR ] GenericError: Failed to create 2 MGRs
>> roger@desktop:~/ceph-cluster$
>>
>>
>>
>> On Fri, Jul 14, 2017 at 11:01 AM Oscar Segarra 
>> wrote:
>>
>>> Hi,
>>>
>>> I'm following the instructions of the web (http://docs.ceph.com/docs/mas
>>> ter/start/quick-ceph-deploy/) and I'm trying to create a manager on my
>>> first node.
>>>
>>> In my environment I have 2 nodes:
>>>
>>> - vdicnode01 (mon, mgr and osd)
>>> - vdicnode02 (osd)
>>>
>>> Each server has to NIC, the public and the private where all ceph trafic
>>> will go over.
>>>
>>> I have created .local entries in /etc/hosts:
>>>
>>> 192.168.100.101   vdicnode01.local
>>> 192.168.100.102   vdicnode02.local
>>>
>>> Public names are resolved via DNS.
>>>
>>> When I try to create the mgr in a fresh install I get the following
>>> error:
>>>
>>> [vdicceph@vdicnode01 ceph]$ ceph-deploy --username vdicceph mgr create
>>> vdicnode01.local
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /home/vdicceph/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): 

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
It is tested for master and is working fine; I will run those same tests on
luminous, check if there is an issue, and update here. mgr create is
needed for luminous+ builds only.

On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown  wrote:

> I've been trying to work through similar mgr issues for Xenial-Luminous...
>
> roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/roger/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr
> create mon1 nuc2
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  mgr   : [('mon1',
> 'mon1'), ('nuc2', 'nuc2')]
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  subcommand: create
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  func  :  at 0x7f25b4772668>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts mon1:mon1
> nuc2:nuc2
> [mon1][DEBUG ] connection detected need for sudo
> [mon1][DEBUG ] connected to host: mon1
> [mon1][DEBUG ] detect platform information from remote host
> [mon1][DEBUG ] detect machine type
> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon1
> [mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [mon1][DEBUG ] create path if it doesn't exist
> [mon1][INFO  ] Running command: sudo ceph --cluster ceph --name
> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
> auth get-or-create mgr.mon1 mon allow profile mgr osd allow * mds allow *
> -o /var/lib/ceph/mgr/ceph-mon1/keyring
> [mon1][ERROR ] 2017-07-14 11:17:19.667418 7f309613f700  0 librados:
> client.bootstrap-mgr authentication error (22) Invalid argument
> [mon1][ERROR ] (22, 'error connecting to the cluster')
> [mon1][ERROR ] exit code from command was: 1
> [ceph_deploy.mgr][ERROR ] could not create mgr
> [nuc2][DEBUG ] connection detected need for sudo
> [nuc2][DEBUG ] connected to host: nuc2
> [nuc2][DEBUG ] detect platform information from remote host
> [nuc2][DEBUG ] detect machine type
> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
> [nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [nuc2][DEBUG ] create path if it doesn't exist
> [nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
> auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
> -o /var/lib/ceph/mgr/ceph-nuc2/keyring
> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700  0 librados:
> client.bootstrap-mgr authentication error (22) Invalid argument
> [nuc2][ERROR ] (22, 'error connecting to the cluster')
> [nuc2][ERROR ] exit code from command was: 1
> [ceph_deploy.mgr][ERROR ] could not create mgr
> [ceph_deploy][ERROR ] GenericError: Failed to create 2 MGRs
> roger@desktop:~/ceph-cluster$
>
>
>
> On Fri, Jul 14, 2017 at 11:01 AM Oscar Segarra 
> wrote:
>
>> Hi,
>>
>> I'm following the instructions of the web (http://docs.ceph.com/docs/
>> master/start/quick-ceph-deploy/) and I'm trying to create a manager on
>> my first node.
>>
>> In my environment I have 2 nodes:
>>
>> - vdicnode01 (mon, mgr and osd)
>> - vdicnode02 (osd)
>>
>> Each server has to NIC, the public and the private where all ceph trafic
>> will go over.
>>
>> I have created .local entries in /etc/hosts:
>>
>> 192.168.100.101   vdicnode01.local
>> 192.168.100.102   vdicnode02.local
>>
>> Public names are resolved via DNS.
>>
>> When I try to create the mgr in a fresh install I get the following error:
>>
>> [vdicceph@vdicnode01 ceph]$ ceph-deploy --username vdicceph mgr create
>> vdicnode01.local
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /home/vdicceph/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /bin/ceph-deploy --username
>> vdicceph mgr create vdicnode01.local
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> [ceph_deploy.cli][INFO  ]  username  : vdicceph
>> [ceph_deploy.cli][INFO  ]  verbose   : False
>> [ceph_deploy.cli][INFO  ]  mgr   :
>> [('vdicnode01.local', 'vdicnode01.local')]
>> 

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Roger Brown
I've been trying to work through similar mgr issues for Xenial-Luminous...

roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr create
mon1 nuc2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  mgr   : [('mon1',
'mon1'), ('nuc2', 'nuc2')]
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: create
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   :

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  func  : 
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts mon1:mon1
nuc2:nuc2
[mon1][DEBUG ] connection detected need for sudo
[mon1][DEBUG ] connected to host: mon1
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon1
[mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mon1][DEBUG ] create path if it doesn't exist
[mon1][INFO  ] Running command: sudo ceph --cluster ceph --name
client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
auth get-or-create mgr.mon1 mon allow profile mgr osd allow * mds allow *
-o /var/lib/ceph/mgr/ceph-mon1/keyring
[mon1][ERROR ] 2017-07-14 11:17:19.667418 7f309613f700  0 librados:
client.bootstrap-mgr authentication error (22) Invalid argument
[mon1][ERROR ] (22, 'error connecting to the cluster')
[mon1][ERROR ] exit code from command was: 1
[ceph_deploy.mgr][ERROR ] could not create mgr
[nuc2][DEBUG ] connection detected need for sudo
[nuc2][DEBUG ] connected to host: nuc2
[nuc2][DEBUG ] detect platform information from remote host
[nuc2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
[nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[nuc2][DEBUG ] create path if it doesn't exist
[nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
-o /var/lib/ceph/mgr/ceph-nuc2/keyring
[nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700  0 librados:
client.bootstrap-mgr authentication error (22) Invalid argument
[nuc2][ERROR ] (22, 'error connecting to the cluster')
[nuc2][ERROR ] exit code from command was: 1
[ceph_deploy.mgr][ERROR ] could not create mgr
[ceph_deploy][ERROR ] GenericError: Failed to create 2 MGRs
roger@desktop:~/ceph-cluster$



On Fri, Jul 14, 2017 at 11:01 AM Oscar Segarra 
wrote:

> Hi,
>
> I'm following the instructions of the web (
> http://docs.ceph.com/docs/master/start/quick-ceph-deploy/) and I'm trying
> to create a manager on my first node.
>
> In my environment I have 2 nodes:
>
> - vdicnode01 (mon, mgr and osd)
> - vdicnode02 (osd)
>
> Each server has to NIC, the public and the private where all ceph trafic
> will go over.
>
> I have created .local entries in /etc/hosts:
>
> 192.168.100.101   vdicnode01.local
> 192.168.100.102   vdicnode02.local
>
> Public names are resolved via DNS.
>
> When I try to create the mgr in a fresh install I get the following error:
>
> [vdicceph@vdicnode01 ceph]$ ceph-deploy --username vdicceph mgr create
> vdicnode01.local
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/vdicceph/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /bin/ceph-deploy --username
> vdicceph mgr create vdicnode01.local
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : vdicceph
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  mgr   :
> [('vdicnode01.local', 'vdicnode01.local')]
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  subcommand: create
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  func  :  at 0x1916848>
> [ceph_deploy.cli][INFO  ]  ceph_conf  

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Then you want separate partitions for each OSD journal.  If you have 4 HDD
OSDs using this SSD as their journal, you should have 4x 5GB partitions on
the SSD.
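
A rough sketch of that layout, assuming the shared SSD shows up as /dev/sdf
(device name and sizes here are only illustrative):

# four 5GB journal partitions on the shared SSD
for i in 1 2 3 4; do
    sudo sgdisk --new=${i}:0:+5G --change-name=${i}:"ceph journal" /dev/sdf
done

# keep ceph.conf in line with the partition size (5GB = 5120 MB):
# [osd]
# osd journal size = 5120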

On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu  wrote:

> Thanks for the note, yes I know them all. It will be shared among multiple
> 3-4 HDD OSD Disks.
>
> --
> Deepak
>
> On Jun 12, 2017, at 7:07 AM, David Turner  wrote:
>
> Why do you want a 70GB journal?  You linked to the documentation, so I'm
> assuming that you followed the formula stated to figure out how big your
> journal should be... "osd journal size = {2 * (expected throughput *
> filestore max sync interval)}".  I've never heard of a cluster that
> requires such a large journal size.  The default is there because it works
> for 99.999% of situations.  I actually can't think of a use case that would
> require a larger journal than 10GB, especially on an SSD.  The vast
> majority of the time the space on the SSD is practically empty.  It doesn't
> fill up like a cache or anything.  It's just a place that writes happen
> quickly and then quickly flushes it to the disk.
>
> Using 100% of your SSD size is also a bad idea based on how SSD's recover
> from unwritable sectors... they mark them as dead and move the data to an
> unused sector.  The manufacturer overprovisions the drive in the factory,
> but you can help out by not using 100% of your available size.  If you have
> a 70GB SSD and only use 5-10GB, then you will drastically increase the life
> of the SSD as a journal.
>
> If you really want to get a 70GB journal partition, then stop the osd,
> flush the journal, set up the journal partition manually, and make sure
> that /var/lib/ceph/osd/ceph-##/journal is pointing to the proper journal
> before starting it back up.
>
> Unless you REALLY NEED a 70GB journal partition... don't do it.
>
> On Mon, Jun 12, 2017 at 1:07 AM Deepak Naidu  wrote:
>
>> Hello folks,
>>
>>
>>
>> I am trying to use an entire ssd partition for journal disk ie example
>> /dev/sdf1 partition(70GB). But when I look up the osd config using below
>> command I see ceph-deploy sets journal_size as 5GB. More confusing, I see
>> the OSD logs showing the correct size in blocks in the
>> /var/log/ceph/ceph-osd.x.log
>>
>> So my question is, whether ceph is using the entire disk partition or
>> just 5GB(default value of ceph deploy) for my OSD journal ?
>>
>>
>>
>> I know I can set per OSD or global OSD value for journal size in
>> ceph.conf . I am using Jewel 10.2.7
>>
>>
>>
>> ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get
>> osd_journal_size
>>
>> {
>>
>> "osd_journal_size": "5120"
>>
>> }
>>
>>
>>
>> I tried the below, but the get osd_journal_size shows as 0, which is what
>> its set, so still confused more.
>>
>>
>>
>> http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
>>
>>
>>
>>
>>
>> Any info is appreciated.
>>
>>
>>
>>
>>
>> PS: I search to find similar issue, but no response on that thread.
>>
>>
>>
>> --
>>
>> Deepak
>>
>>
>> --
>> This email message is for the sole use of the intended recipient(s) and
>> may contain confidential information.  Any unauthorized review, use,
>> disclosure or distribution is prohibited.  If you are not the intended
>> recipient, please contact the sender by reply email and destroy all copies
>> of the original message.
>> --
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread Deepak Naidu
Thanks for the note; yes, I know them all. It will be shared among 3-4 
HDD OSD disks.

--
Deepak

On Jun 12, 2017, at 7:07 AM, David Turner 
> wrote:

Why do you want a 70GB journal?  You linked to the documentation, so I'm 
assuming that you followed the formula stated to figure out how big your 
journal should be... "osd journal size = {2 * (expected throughput * filestore 
max sync interval)}".  I've never heard of a cluster that requires such a large 
journal size.  The default is there because it works for 99.999% of situations. 
 I actually can't think of a use case that would require a larger journal than 
10GB, especially on an SSD.  The vast majority of the time the space on the SSD 
is practically empty.  It doesn't fill up like a cache or anything.  It's just 
a place that writes happen quickly and then quickly flushes it to the disk.

Using 100% of your SSD size is also a bad idea based on how SSD's recover from 
unwritable sectors... they mark them as dead and move the data to an unused 
sector.  The manufacturer overprovisions the drive in the factory, but you can 
help out by not using 100% of your available size.  If you have a 70GB SSD and 
only use 5-10GB, then you will drastically increase the life of the SSD as a 
journal.

If you really want to get a 70GB journal partition, then stop the osd, flush 
the journal, set up the journal partition manually, and make sure that 
/var/lib/ceph/osd/ceph-##/journal is pointing to the proper journal before 
starting it back up.

Unless you REALLY NEED a 70GB journal partition... don't do it.

On Mon, Jun 12, 2017 at 1:07 AM Deepak Naidu 
> wrote:
Hello folks,

I am trying to use an entire ssd partition for journal disk ie example 
/dev/sdf1 partition(70GB). But when I look up the osd config using below 
command I see ceph-deploy sets journal_size as 5GB. More confusing, I see the 
OSD logs showing the correct size in blocks in the /var/log/ceph/ceph-osd.x.log
So my question is, whether ceph is using the entire disk partition or just 
5GB(default value of ceph deploy) for my OSD journal ?

I know I can set per OSD or global OSD value for journal size in ceph.conf . I 
am using Jewel 10.2.7

ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get osd_journal_size
{
"osd_journal_size": "5120"
}

I tried the below, but the get osd_journal_size shows as 0, which is what its 
set, so still confused more.

http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/


Any info is appreciated.


PS: I search to find similar issue, but no response on that thread.

--
Deepak


This email message is for the sole use of the intended recipient(s) and may 
contain confidential information.  Any unauthorized review, use, disclosure or 
distribution is prohibited.  If you are not the intended recipient, please 
contact the sender by reply email and destroy all copies of the original 
message.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Why do you want a 70GB journal?  You linked to the documentation, so I'm
assuming that you followed the formula stated to figure out how big your
journal should be... "osd journal size = {2 * (expected throughput *
filestore max sync interval)}".  I've never heard of a cluster that
requires such a large journal size.  The default is there because it works
for 99.999% of situations.  I actually can't think of a use case that would
require a larger journal than 10GB, especially on an SSD.  The vast
majority of the time the space on the SSD is practically empty.  It doesn't
fill up like a cache or anything.  It's just a place that writes happen
quickly and then quickly flushes it to the disk.

Using 100% of your SSD size is also a bad idea based on how SSDs recover
from unwritable sectors... they mark them as dead and move the data to an
unused sector.  The manufacturer overprovisions the drive in the factory,
but you can help out by not using 100% of your available size.  If you have
a 70GB SSD and only use 5-10GB, then you will drastically increase the life
of the SSD as a journal.

If you really want to get a 70GB journal partition, then stop the osd,
flush the journal, set up the journal partition manually, and make sure
that /var/lib/ceph/osd/ceph-##/journal is pointing to the proper journal
before starting it back up.
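
Roughly, that sequence looks like this (OSD id 3 and the new journal partition
are placeholders; adapt them to your layout):

systemctl stop ceph-osd@3
ceph-osd -i 3 --flush-journal
# repoint the journal link at the new partition (a stable by-partuuid path is safest)
ln -sf /dev/disk/by-partuuid/<PARTUUID-OF-NEW-PARTITION> /var/lib/ceph/osd/ceph-3/journal
ceph-osd -i 3 --mkjournal          # ownership may also need: chown ceph:ceph on the device
systemctl start ceph-osd@3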

Unless you REALLY NEED a 70GB journal partition... don't do it.

On Mon, Jun 12, 2017 at 1:07 AM Deepak Naidu  wrote:

> Hello folks,
>
>
>
> I am trying to use an entire ssd partition for journal disk ie example
> /dev/sdf1 partition(70GB). But when I look up the osd config using below
> command I see ceph-deploy sets journal_size as 5GB. More confusing, I see
> the OSD logs showing the correct size in blocks in the
> /var/log/ceph/ceph-osd.x.log
>
> So my question is, whether ceph is using the entire disk partition or just
> 5GB(default value of ceph deploy) for my OSD journal ?
>
>
>
> I know I can set per OSD or global OSD value for journal size in ceph.conf
> . I am using Jewel 10.2.7
>
>
>
> ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get
> osd_journal_size
>
> {
>
> "osd_journal_size": "5120"
>
> }
>
>
>
> I tried the below, but the get osd_journal_size shows as 0, which is what
> its set, so still confused more.
>
>
>
> http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
>
>
>
>
>
> Any info is appreciated.
>
>
>
>
>
> PS: I search to find similar issue, but no response on that thread.
>
>
>
> --
>
> Deepak
>
>
> --
> This email message is for the sole use of the intended recipient(s) and
> may contain confidential information.  Any unauthorized review, use,
> disclosure or distribution is prohibited.  If you are not the intended
> recipient, please contact the sender by reply email and destroy all copies
> of the original message.
> --
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy to a particular version

2017-05-02 Thread German Anders
I think you can do $ ceph-deploy install --release  --repo-url
http://download.ceph.com/..., and you
can also swap the --release flag for --dev or --testing and specify the
version; I've done it with the release and dev flags and it works great :)
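
For example (release name, point release and host names below are only
illustrative):

ceph-deploy install --release jewel --repo-url https://download.ceph.com/rpm-jewel/el7 node1 node2

# for an exact point release such as 10.2.3, the yum route quoted below also
# works: pin the packages on each node, then use ceph-deploy only for the rest
yum install -y ceph-10.2.3 ceph-common-10.2.3    # exact package versions depend on your repo
ceph-deploy admin node1 node2                    # e.g. push config/keys afterwards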

hope it helps

best,


*German*

2017-05-02 10:03 GMT-03:00 David Turner :

> You can indeed install ceph via yum and then utilize ceph-deploy to finish
> things up. You just skip the Ceph install portion. I haven't done it in a
> while and you might need to manually place the config and key on the new
> servers yourself.
>
> On Tue, May 2, 2017, 8:57 AM Puff, Jonathon 
> wrote:
>
>> From what I can find ceph-deploy only allows installs for a release, i.e
>> jewel which is giving me 10.2.7, but I’d like to specify the particular
>> update.  For instance, I want to go to 10.2.3.Do I need to avoid
>> ceph-deploy entirely to do this or can I install the correct version via
>> yum then leverage ceph-deploy for the remaining configuration?
>>
>>
>>
>> -JP
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
I ended up using a newer version of ceph-deploy and things went more smoothly 
after that.
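
For anyone hitting the same thing: one easy way to get a newer ceph-deploy than
the distro ships (assuming pip is available) is

sudo pip install --upgrade ceph-deploy

and whichever route you take, ceph-deploy --version will confirm what you ended
up with.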

Thanks again to everyone for all the help!

Shain 

> On Mar 16, 2017, at 10:29 AM, Shain Miley  wrote:
> 
> It looks like things are working a bit better today…however now I am getting 
> the following error:
> 
> [hqosd6][DEBUG ] detect platform information from remote host
> [hqosd6][DEBUG ] detect machine type
> [ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
> [hqosd6][INFO  ] installing ceph on hqosd6
> [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get 
> -q install --assume-yes ca-certificates
> [hqosd6][DEBUG ] Reading package lists...
> [hqosd6][DEBUG ] Building dependency tree...
> [hqosd6][DEBUG ] Reading state information...
> [hqosd6][DEBUG ] ca-certificates is already the newest version.
> [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not 
> upgraded.
> [hqosd6][INFO  ] Running command: wget -O release.asc 
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc 
> 
> [hqosd6][WARNIN] --2017-03-16 10:25:17--  
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc 
> 
> [hqosd6][WARNIN] Resolving ceph.com  (ceph.com 
> )... 158.69.68.141
> [hqosd6][WARNIN] Connecting to ceph.com  (ceph.com 
> )|158.69.68.141|:443... connected.
> [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved Permanently
> [hqosd6][WARNIN] Location: 
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc 
>  [following]
> [hqosd6][WARNIN] --2017-03-16 10:25:17--  
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc 
> 
> [hqosd6][WARNIN] Resolving git.ceph.com  (git.ceph.com 
> )... 8.43.84.132
> [hqosd6][WARNIN] Connecting to git.ceph.com  
> (git.ceph.com )|8.43.84.132|:443... connected.
> [hqosd6][WARNIN] HTTP request sent, awaiting response... 200 OK
> [hqosd6][WARNIN] Length: 1645 (1.6K) [text/plain]
> [hqosd6][WARNIN] Saving to: ‘release.asc’
> [hqosd6][WARNIN] 
> [hqosd6][WARNIN]  0K .
>  100%  219M=0s
> [hqosd6][WARNIN] 
> [hqosd6][WARNIN] 2017-03-16 10:25:17 (219 MB/s) - ‘release.asc’ saved 
> [1645/1645]
> [hqosd6][WARNIN] 
> [hqosd6][INFO  ] Running command: apt-key add release.asc
> [hqosd6][DEBUG ] OK
> [hqosd6][DEBUG ] add deb repo to sources.list
> [hqosd6][INFO  ] Running command: apt-get -q update
> [hqosd6][DEBUG ] Ign http://us.archive.ubuntu.com 
>  trusty InRelease
> [hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com 
>  trusty-updates InRelease
> [hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com 
>  trusty-backports InRelease
> [hqosd6][DEBUG ] Get:1 http://us.archive.ubuntu.com 
>  trusty Release.gpg [933 B]
> [hqosd6][DEBUG ] Hit http://security.ubuntu.com  
> trusty-security InRelease
> [hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com 
>  trusty-updates/main Sources
> [hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com 
>  trusty-updates/restricted Sources
> [hqosd6][DEBUG ] Get:2 http://us.archive.ubuntu.com 
>  trusty-updates/universe Sources [175 kB]
> [hqosd6][DEBUG ] Hit http://security.ubuntu.com  
> trusty-security/main Sources
> [hqosd6][DEBUG ] Get:3 http://ceph.com  trusty InRelease
> [hqosd6][WARNIN] Splitting up 
> /var/lib/apt/lists/partial/ceph.com_debian-hammer_dists_trusty_InRelease into 
> data and signature failedE: GPG error: http://ceph.com  
> trusty InRelease: Clearsigned file isn't valid, got 'NODATA' (does the 
> network require authentication?)
> [hqosd6][DEBUG ] Ign http://ceph.com  trusty InRelease
> [hqosd6][ERROR ] RuntimeError: command returned non-zero exit status: 100
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: apt-get -q 
> update
> 
> Does anyone know if there is still an issue ongoing issue….or is this 
> something that should be working at this point?
> 
> Thanks again,
> Shain
> 
> 
> 
>> On Mar 15, 2017, at 2:08 PM, Shain Miley > > wrote:
>> 

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-16 Thread Shain Miley
It looks like things are working a bit better today… however, now I am getting 
the following error:

[hqosd6][DEBUG ] detect platform information from remote host
[hqosd6][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
[hqosd6][INFO  ] installing ceph on hqosd6
[hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get -q 
install --assume-yes ca-certificates
[hqosd6][DEBUG ] Reading package lists...
[hqosd6][DEBUG ] Building dependency tree...
[hqosd6][DEBUG ] Reading state information...
[hqosd6][DEBUG ] ca-certificates is already the newest version.
[hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
[hqosd6][INFO  ] Running command: wget -O release.asc 
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] --2017-03-16 10:25:17--  
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
[hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443... 
connected.
[hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved Permanently
[hqosd6][WARNIN] Location: 
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc [following]
[hqosd6][WARNIN] --2017-03-16 10:25:17--  
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
[hqosd6][WARNIN] Connecting to git.ceph.com (git.ceph.com)|8.43.84.132|:443... 
connected.
[hqosd6][WARNIN] HTTP request sent, awaiting response... 200 OK
[hqosd6][WARNIN] Length: 1645 (1.6K) [text/plain]
[hqosd6][WARNIN] Saving to: ‘release.asc’
[hqosd6][WARNIN] 
[hqosd6][WARNIN]  0K . 
100%  219M=0s
[hqosd6][WARNIN] 
[hqosd6][WARNIN] 2017-03-16 10:25:17 (219 MB/s) - ‘release.asc’ saved 
[1645/1645]
[hqosd6][WARNIN] 
[hqosd6][INFO  ] Running command: apt-key add release.asc
[hqosd6][DEBUG ] OK
[hqosd6][DEBUG ] add deb repo to sources.list
[hqosd6][INFO  ] Running command: apt-get -q update
[hqosd6][DEBUG ] Ign http://us.archive.ubuntu.com trusty InRelease
[hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com trusty-updates InRelease
[hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com trusty-backports InRelease
[hqosd6][DEBUG ] Get:1 http://us.archive.ubuntu.com trusty Release.gpg [933 B]
[hqosd6][DEBUG ] Hit http://security.ubuntu.com trusty-security InRelease
[hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com trusty-updates/main Sources
[hqosd6][DEBUG ] Hit http://us.archive.ubuntu.com trusty-updates/restricted 
Sources
[hqosd6][DEBUG ] Get:2 http://us.archive.ubuntu.com trusty-updates/universe 
Sources [175 kB]
[hqosd6][DEBUG ] Hit http://security.ubuntu.com trusty-security/main Sources
[hqosd6][DEBUG ] Get:3 http://ceph.com trusty InRelease
[hqosd6][WARNIN] Splitting up 
/var/lib/apt/lists/partial/ceph.com_debian-hammer_dists_trusty_InRelease into 
data and signature failedE: GPG error: http://ceph.com trusty InRelease: 
Clearsigned file isn't valid, got 'NODATA' (does the network require 
authentication?)
[hqosd6][DEBUG ] Ign http://ceph.com trusty InRelease
[hqosd6][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: apt-get -q update

Does anyone know if there is still an ongoing issue… or is this something 
that should be working at this point?

Thanks again,
Shain



> On Mar 15, 2017, at 2:08 PM, Shain Miley  wrote:
> 
> Thanks for all the help so far.
> 
> Just to be clear…if I am planning on upgrading the cluster from Hammer in say 
> the next 3 months…what is the suggested upgrade path?
> 
> Thanks again,
> Shain 
> 
>> On Mar 15, 2017, at 2:05 PM, Abhishek Lekshmanan > > wrote:
>> 
>> 
>> 
>> On 15/03/17 18:32, Shinobu Kinjo wrote:
>>> So description of Jewel is wrong?
>>> 
>>> http://docs.ceph.com/docs/master/releases/ 
>>> 
>> Yeah we missed updating jewel dates as well when updating about hammer, 
>> Jewel is an LTS and would get more upgrades. Once Luminous is released, 
>> however, we'll eventually shift focus on bugs that would hinder upgrades to 
>> Luminous itself
>> 
>> Abhishek
>>> On Thu, Mar 16, 2017 at 2:27 AM, John Spray >> > wrote:
 On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo > wrote:
> It may be probably kind of challenge but please consider Kraken (or
> later) because Jewel will be retired:
> 
> http://docs.ceph.com/docs/master/releases/ 
> 
 Nope, Jewel is LTS, Kraken 

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
Thanks for all the help so far.

Just to be clear…if I am planning on upgrading the cluster from Hammer in say 
the next 3 months…what is the suggested upgrade path?

Thanks again,
Shain 

> On Mar 15, 2017, at 2:05 PM, Abhishek Lekshmanan  wrote:
> 
> 
> 
> On 15/03/17 18:32, Shinobu Kinjo wrote:
>> So description of Jewel is wrong?
>> 
>> http://docs.ceph.com/docs/master/releases/ 
>> 
> Yeah we missed updating jewel dates as well when updating about hammer, Jewel 
> is an LTS and would get more upgrades. Once Luminous is released, however, 
> we'll eventually shift focus on bugs that would hinder upgrades to Luminous 
> itself
> 
> Abhishek
>> On Thu, Mar 16, 2017 at 2:27 AM, John Spray  wrote:
>>> On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo  wrote:
 It may be probably kind of challenge but please consider Kraken (or
 later) because Jewel will be retired:
 
 http://docs.ceph.com/docs/master/releases/
>>> Nope, Jewel is LTS, Kraken is not.
>>> 
>>> Kraken will only receive updates until the next stable release.  Jewel
>>> will receive updates for longer.
>>> 
>>> John
>>> 
 On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
> No this is a production cluster that I have not had a chance to upgrade 
> yet.
> 
> We had an is with the OS on a node so I am just trying to reinstall ceph 
> and
> hope that the osd data is still in tact.
> 
> Once I get things stable again I was planning on upgrading…but the upgrade
> is a bit intensive by the looks of it so I need to set aside a decent 
> amount
> of time.
> 
> Thanks all!
> 
> Shain
> 
> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
> 
> Just curious, why you still want to deploy new hammer instead of stable
> jewel? Is this a test environment? the last .10 release was basically for
> bug fixes for 0.94.9.
> 
> 
> 
> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:
>> FYI:
>> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>> 
>> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
>>> Hello,
>>> I am trying to deploy ceph to a new server using ceph-deply which I have
>>> done in the past many times without issue.
>>> 
>>> Right now I am seeing a timeout trying to connect to git.ceph.com:
>>> 
>>> 
>>> [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
>>> apt-get
>>> -q install --assume-yes ca-certificates
>>> [hqosd6][DEBUG ] Reading package lists...
>>> [hqosd6][DEBUG ] Building dependency tree...
>>> [hqosd6][DEBUG ] Reading state information...
>>> [hqosd6][DEBUG ] ca-certificates is already the newest version.
>>> [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
>>> upgraded.
>>> [hqosd6][INFO  ] Running command: wget -O release.asc
>>> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [hqosd6][WARNIN] --2017-03-15 11:49:16--
>>> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
>>> [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
>>> connected.
>>> [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
>>> Permanently
>>> [hqosd6][WARNIN] Location:
>>> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [following]
>>> [hqosd6][WARNIN] --2017-03-15 11:49:17--
>>> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
>>> [hqosd6][WARNIN] Connecting to git.ceph.com
>>> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> [hqosd6][WARNIN] Retrying.
>>> [hqosd6][WARNIN]
>>> [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
>>> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [hqosd6][WARNIN] Connecting to git.ceph.com
>>> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> [hqosd6][WARNIN] Retrying.
>>> [hqosd6][WARNIN]
>>> [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
>>> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [hqosd6][WARNIN] Connecting to git.ceph.com
>>> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> [hqosd6][WARNIN] Retrying.
>>> 
>>> 
>>> I am wondering if this is a known issue.
>>> 
>>> Just an fyi...I am using an older version of ceph-deply (1.5.36) because
>>> in
>>> the past upgrading to a newer version I was not able to install hammer
>>> on
>>> the cluster…so the workaround was to use a slightly older version.
>>> 

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Abhishek Lekshmanan



On 15/03/17 18:32, Shinobu Kinjo wrote:

So description of Jewel is wrong?

http://docs.ceph.com/docs/master/releases/
Yeah, we missed updating the Jewel dates as well when we updated the Hammer 
entry. Jewel is an LTS and will get more updates. Once Luminous is released, 
however, we'll eventually shift our focus to bugs that would hinder upgrades 
to Luminous itself.


Abhishek

On Thu, Mar 16, 2017 at 2:27 AM, John Spray  wrote:

On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo  wrote:

It may be probably kind of challenge but please consider Kraken (or
later) because Jewel will be retired:

http://docs.ceph.com/docs/master/releases/

Nope, Jewel is LTS, Kraken is not.

Kraken will only receive updates until the next stable release.  Jewel
will receive updates for longer.

John


On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:

No this is a production cluster that I have not had a chance to upgrade yet.

We had an is with the OS on a node so I am just trying to reinstall ceph and
hope that the osd data is still in tact.

Once I get things stable again I was planning on upgrading…but the upgrade
is a bit intensive by the looks of it so I need to set aside a decent amount
of time.

Thanks all!

Shain

On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:

Just curious, why you still want to deploy new hammer instead of stable
jewel? Is this a test environment? the last .10 release was basically for
bug fixes for 0.94.9.



On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:

FYI:
https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3

On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:

Hello,
I am trying to deploy ceph to a new server using ceph-deply which I have
done in the past many times without issue.

Right now I am seeing a timeout trying to connect to git.ceph.com:


[hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
apt-get
-q install --assume-yes ca-certificates
[hqosd6][DEBUG ] Reading package lists...
[hqosd6][DEBUG ] Building dependency tree...
[hqosd6][DEBUG ] Reading state information...
[hqosd6][DEBUG ] ca-certificates is already the newest version.
[hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
upgraded.
[hqosd6][INFO  ] Running command: wget -O release.asc
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] --2017-03-15 11:49:16--
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
[hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
connected.
[hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
Permanently
[hqosd6][WARNIN] Location:
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[following]
[hqosd6][WARNIN] --2017-03-15 11:49:17--
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.
[hqosd6][WARNIN]
[hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.
[hqosd6][WARNIN]
[hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.


I am wondering if this is a known issue.

Just an fyi...I am using an older version of ceph-deply (1.5.36) because
in
the past upgrading to a newer version I was not able to install hammer
on
the cluster…so the workaround was to use a slightly older version.

Thanks in advance for any help you may be able to provide.

Shain


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
So description of Jewel is wrong?

http://docs.ceph.com/docs/master/releases/

On Thu, Mar 16, 2017 at 2:27 AM, John Spray  wrote:
> On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo  wrote:
>> It may be probably kind of challenge but please consider Kraken (or
>> later) because Jewel will be retired:
>>
>> http://docs.ceph.com/docs/master/releases/
>
> Nope, Jewel is LTS, Kraken is not.
>
> Kraken will only receive updates until the next stable release.  Jewel
> will receive updates for longer.
>
> John
>
>>
>> On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
>>> No this is a production cluster that I have not had a chance to upgrade yet.
>>>
>>> We had an is with the OS on a node so I am just trying to reinstall ceph and
>>> hope that the osd data is still in tact.
>>>
>>> Once I get things stable again I was planning on upgrading…but the upgrade
>>> is a bit intensive by the looks of it so I need to set aside a decent amount
>>> of time.
>>>
>>> Thanks all!
>>>
>>> Shain
>>>
>>> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
>>>
>>> Just curious, why you still want to deploy new hammer instead of stable
>>> jewel? Is this a test environment? the last .10 release was basically for
>>> bug fixes for 0.94.9.
>>>
>>>
>>>
>>> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:

 FYI:
 https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3

 On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
 > Hello,
 > I am trying to deploy ceph to a new server using ceph-deply which I have
 > done in the past many times without issue.
 >
 > Right now I am seeing a timeout trying to connect to git.ceph.com:
 >
 >
 > [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
 > apt-get
 > -q install --assume-yes ca-certificates
 > [hqosd6][DEBUG ] Reading package lists...
 > [hqosd6][DEBUG ] Building dependency tree...
 > [hqosd6][DEBUG ] Reading state information...
 > [hqosd6][DEBUG ] ca-certificates is already the newest version.
 > [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
 > upgraded.
 > [hqosd6][INFO  ] Running command: wget -O release.asc
 > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [hqosd6][WARNIN] --2017-03-15 11:49:16--
 > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
 > [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
 > connected.
 > [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
 > Permanently
 > [hqosd6][WARNIN] Location:
 > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [following]
 > [hqosd6][WARNIN] --2017-03-15 11:49:17--
 > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
 > [hqosd6][WARNIN] Connecting to git.ceph.com
 > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 > [hqosd6][WARNIN] Retrying.
 > [hqosd6][WARNIN]
 > [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
 > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [hqosd6][WARNIN] Connecting to git.ceph.com
 > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 > [hqosd6][WARNIN] Retrying.
 > [hqosd6][WARNIN]
 > [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
 > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 > [hqosd6][WARNIN] Connecting to git.ceph.com
 > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 > [hqosd6][WARNIN] Retrying.
 >
 >
 > I am wondering if this is a known issue.
 >
 > Just an fyi...I am using an older version of ceph-deply (1.5.36) because
 > in
 > the past upgrading to a newer version I was not able to install hammer
 > on
 > the cluster…so the workaround was to use a slightly older version.
 >
 > Thanks in advance for any help you may be able to provide.
 >
 > Shain
 >
 >
 > ___
 > ceph-users mailing list
 > ceph-users@lists.ceph.com
 > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 >
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread John Spray
On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo  wrote:
> It may be probably kind of challenge but please consider Kraken (or
> later) because Jewel will be retired:
>
> http://docs.ceph.com/docs/master/releases/

Nope, Jewel is LTS, Kraken is not.

Kraken will only receive updates until the next stable release.  Jewel
will receive updates for longer.

John

>
> On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
>> No this is a production cluster that I have not had a chance to upgrade yet.
>>
>> We had an is with the OS on a node so I am just trying to reinstall ceph and
>> hope that the osd data is still in tact.
>>
>> Once I get things stable again I was planning on upgrading…but the upgrade
>> is a bit intensive by the looks of it so I need to set aside a decent amount
>> of time.
>>
>> Thanks all!
>>
>> Shain
>>
>> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
>>
>> Just curious, why you still want to deploy new hammer instead of stable
>> jewel? Is this a test environment? the last .10 release was basically for
>> bug fixes for 0.94.9.
>>
>>
>>
>> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:
>>>
>>> FYI:
>>> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>>>
>>> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
>>> > Hello,
>>> > I am trying to deploy ceph to a new server using ceph-deply which I have
>>> > done in the past many times without issue.
>>> >
>>> > Right now I am seeing a timeout trying to connect to git.ceph.com:
>>> >
>>> >
>>> > [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
>>> > apt-get
>>> > -q install --assume-yes ca-certificates
>>> > [hqosd6][DEBUG ] Reading package lists...
>>> > [hqosd6][DEBUG ] Building dependency tree...
>>> > [hqosd6][DEBUG ] Reading state information...
>>> > [hqosd6][DEBUG ] ca-certificates is already the newest version.
>>> > [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
>>> > upgraded.
>>> > [hqosd6][INFO  ] Running command: wget -O release.asc
>>> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [hqosd6][WARNIN] --2017-03-15 11:49:16--
>>> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
>>> > [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
>>> > connected.
>>> > [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
>>> > Permanently
>>> > [hqosd6][WARNIN] Location:
>>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [following]
>>> > [hqosd6][WARNIN] --2017-03-15 11:49:17--
>>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
>>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> > [hqosd6][WARNIN] Retrying.
>>> > [hqosd6][WARNIN]
>>> > [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
>>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> > [hqosd6][WARNIN] Retrying.
>>> > [hqosd6][WARNIN]
>>> > [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
>>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>>> > [hqosd6][WARNIN] Retrying.
>>> >
>>> >
>>> > I am wondering if this is a known issue.
>>> >
>>> > Just an fyi...I am using an older version of ceph-deply (1.5.36) because
>>> > in
>>> > the past upgrading to a newer version I was not able to install hammer
>>> > on
>>> > the cluster…so the workaround was to use a slightly older version.
>>> >
>>> > Thanks in advance for any help you may be able to provide.
>>> >
>>> > Shain
>>> >
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
Would you file this as a doc bug, so we can discuss it properly with tracking?

http://tracker.ceph.com

On Thu, Mar 16, 2017 at 2:17 AM, Deepak Naidu  wrote:
>>> because Jewel will be retired:
> Hmm.  Isn't Jewel LTS ?
>
> Every other stable releases is a LTS (Long Term Stable) and will receive 
> updates until two LTS are published.
>
> --
> Deepak
>
>> On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo  wrote:
>>
>> It may be probably kind of challenge but please consider Kraken (or
>> later) because Jewel will be retired:
>>
>> http://docs.ceph.com/docs/master/releases/
>>
>>> On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
>>> No this is a production cluster that I have not had a chance to upgrade yet.
>>>
>>> We had an is with the OS on a node so I am just trying to reinstall ceph and
>>> hope that the osd data is still in tact.
>>>
>>> Once I get things stable again I was planning on upgrading…but the upgrade
>>> is a bit intensive by the looks of it so I need to set aside a decent amount
>>> of time.
>>>
>>> Thanks all!
>>>
>>> Shain
>>>
>>> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
>>>
>>> Just curious, why you still want to deploy new hammer instead of stable
>>> jewel? Is this a test environment? the last .10 release was basically for
>>> bug fixes for 0.94.9.
>>>
>>>
>>>
 On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:

 FYI:
 https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3

> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
> Hello,
> I am trying to deploy ceph to a new server using ceph-deply which I have
> done in the past many times without issue.
>
> Right now I am seeing a timeout trying to connect to git.ceph.com:
>
>
> [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
> apt-get
> -q install --assume-yes ca-certificates
> [hqosd6][DEBUG ] Reading package lists...
> [hqosd6][DEBUG ] Building dependency tree...
> [hqosd6][DEBUG ] Reading state information...
> [hqosd6][DEBUG ] ca-certificates is already the newest version.
> [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
> upgraded.
> [hqosd6][INFO  ] Running command: wget -O release.asc
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] --2017-03-15 11:49:16--
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
> [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
> connected.
> [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
> Permanently
> [hqosd6][WARNIN] Location:
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [following]
> [hqosd6][WARNIN] --2017-03-15 11:49:17--
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
> [hqosd6][WARNIN]
> [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
> [hqosd6][WARNIN]
> [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
>
>
> I am wondering if this is a known issue.
>
> Just an fyi...I am using an older version of ceph-deply (1.5.36) because
> in
> the past upgrading to a newer version I was not able to install hammer
> on
> the cluster…so the workaround was to use a slightly older version.
>
> Thanks in advance for any help you may be able to provide.
>
> Shain
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ---
> This email message is for the sole use of the intended recipient(s) and may 
> 

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
>> because Jewel will be retired:
Hmm.  Isn't Jewel LTS ? 

Every other stable release is an LTS (Long Term Stable) and will receive 
updates until two subsequent LTS releases are published. 

--
Deepak

> On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo  wrote:
> 
> It may be probably kind of challenge but please consider Kraken (or
> later) because Jewel will be retired:
> 
> http://docs.ceph.com/docs/master/releases/
> 
>> On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
>> No this is a production cluster that I have not had a chance to upgrade yet.
>> 
>> We had an is with the OS on a node so I am just trying to reinstall ceph and
>> hope that the osd data is still in tact.
>> 
>> Once I get things stable again I was planning on upgrading…but the upgrade
>> is a bit intensive by the looks of it so I need to set aside a decent amount
>> of time.
>> 
>> Thanks all!
>> 
>> Shain
>> 
>> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
>> 
>> Just curious, why you still want to deploy new hammer instead of stable
>> jewel? Is this a test environment? the last .10 release was basically for
>> bug fixes for 0.94.9.
>> 
>> 
>> 
>>> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:
>>> 
>>> FYI:
>>> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>>> 
 On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
 Hello,
 I am trying to deploy ceph to a new server using ceph-deply which I have
 done in the past many times without issue.
 
 Right now I am seeing a timeout trying to connect to git.ceph.com:
 
 
 [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
 apt-get
 -q install --assume-yes ca-certificates
 [hqosd6][DEBUG ] Reading package lists...
 [hqosd6][DEBUG ] Building dependency tree...
 [hqosd6][DEBUG ] Reading state information...
 [hqosd6][DEBUG ] ca-certificates is already the newest version.
 [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
 upgraded.
 [hqosd6][INFO  ] Running command: wget -O release.asc
 https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [hqosd6][WARNIN] --2017-03-15 11:49:16--
 https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
 [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
 connected.
 [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
 Permanently
 [hqosd6][WARNIN] Location:
 https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [following]
 [hqosd6][WARNIN] --2017-03-15 11:49:17--
 https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
 [hqosd6][WARNIN] Connecting to git.ceph.com
 (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 [hqosd6][WARNIN] Retrying.
 [hqosd6][WARNIN]
 [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
 https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [hqosd6][WARNIN] Connecting to git.ceph.com
 (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 [hqosd6][WARNIN] Retrying.
 [hqosd6][WARNIN]
 [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
 https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
 [hqosd6][WARNIN] Connecting to git.ceph.com
 (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
 [hqosd6][WARNIN] Retrying.
 
 
 I am wondering if this is a known issue.
 
 Just an fyi...I am using an older version of ceph-deply (1.5.36) because
 in
 the past upgrading to a newer version I was not able to install hammer
 on
 the cluster…so the workaround was to use a slightly older version.
 
 Thanks in advance for any help you may be able to provide.
 
 Shain
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
>> 
>> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
It may be probably kind of challenge but please consider Kraken (or
later) because Jewel will be retired:

http://docs.ceph.com/docs/master/releases/

On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:
> No this is a production cluster that I have not had a chance to upgrade yet.
>
> We had an is with the OS on a node so I am just trying to reinstall ceph and
> hope that the osd data is still in tact.
>
> Once I get things stable again I was planning on upgrading…but the upgrade
> is a bit intensive by the looks of it so I need to set aside a decent amount
> of time.
>
> Thanks all!
>
> Shain
>
> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
>
> Just curious, why you still want to deploy new hammer instead of stable
> jewel? Is this a test environment? the last .10 release was basically for
> bug fixes for 0.94.9.
>
>
>
> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:
>>
>> FYI:
>> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>>
>> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
>> > Hello,
>> > I am trying to deploy ceph to a new server using ceph-deply which I have
>> > done in the past many times without issue.
>> >
>> > Right now I am seeing a timeout trying to connect to git.ceph.com:
>> >
>> >
>> > [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
>> > apt-get
>> > -q install --assume-yes ca-certificates
>> > [hqosd6][DEBUG ] Reading package lists...
>> > [hqosd6][DEBUG ] Building dependency tree...
>> > [hqosd6][DEBUG ] Reading state information...
>> > [hqosd6][DEBUG ] ca-certificates is already the newest version.
>> > [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
>> > upgraded.
>> > [hqosd6][INFO  ] Running command: wget -O release.asc
>> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [hqosd6][WARNIN] --2017-03-15 11:49:16--
>> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
>> > [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
>> > connected.
>> > [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
>> > Permanently
>> > [hqosd6][WARNIN] Location:
>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [following]
>> > [hqosd6][WARNIN] --2017-03-15 11:49:17--
>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>> > [hqosd6][WARNIN] Retrying.
>> > [hqosd6][WARNIN]
>> > [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>> > [hqosd6][WARNIN] Retrying.
>> > [hqosd6][WARNIN]
>> > [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
>> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
>> > [hqosd6][WARNIN] Connecting to git.ceph.com
>> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
>> > [hqosd6][WARNIN] Retrying.
>> >
>> >
>> > I am wondering if this is a known issue.
>> >
>> > Just an fyi...I am using an older version of ceph-deply (1.5.36) because
>> > in
>> > the past upgrading to a newer version I was not able to install hammer
>> > on
>> > the cluster…so the workaround was to use a slightly older version.
>> >
>> > Thanks in advance for any help you may be able to provide.
>> >
>> > Shain
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shain Miley
No this is a production cluster that I have not had a chance to upgrade yet.

We had an issue with the OS on a node so I am just trying to reinstall ceph and 
hope that the osd data is still intact.

Once I get things stable again I was planning on upgrading…but the upgrade is a 
bit intensive by the looks of it so I need to set aside a decent amount of time.

Thanks all!

Shain

> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:
> 
> Just curious, why you still want to deploy new hammer instead of stable 
> jewel? Is this a test environment? the last .10 release was basically for bug 
> fixes for 0.94.9.
> 
> 
> 
> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  > wrote:
> FYI:
> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3 
> 
> 
> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  > wrote:
> > Hello,
> > I am trying to deploy ceph to a new server using ceph-deply which I have
> > done in the past many times without issue.
> >
> > Right now I am seeing a timeout trying to connect to git.ceph.com:
> >
> >
> > [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get
> > -q install --assume-yes ca-certificates
> > [hqosd6][DEBUG ] Reading package lists...
> > [hqosd6][DEBUG ] Building dependency tree...
> > [hqosd6][DEBUG ] Reading state information...
> > [hqosd6][DEBUG ] ca-certificates is already the newest version.
> > [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
> > upgraded.
> > [hqosd6][INFO  ] Running command: wget -O release.asc
> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] --2017-03-15 11:49:16--
> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
> > [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
> > connected.
> > [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
> > Permanently
> > [hqosd6][WARNIN] Location:
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [following]
> > [hqosd6][WARNIN] --2017-03-15 11:49:17--
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> > [hqosd6][WARNIN]
> > [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> > [hqosd6][WARNIN]
> > [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> >
> >
> > I am wondering if this is a known issue.
> >
> > Just an fyi...I am using an older version of ceph-deply (1.5.36) because in
> > the past upgrading to a newer version I was not able to install hammer on
> > the cluster…so the workaround was to use a slightly older version.
> >
> > Thanks in advance for any help you may be able to provide.
> >
> > Shain
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> > 
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Vasu Kulkarni
Just curious, why you still want to deploy new hammer instead of stable
jewel? Is this a test environment? the last .10 release was basically for
bug fixes for 0.94.9.



On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:

> FYI:
> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>
> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
> > Hello,
> > I am trying to deploy ceph to a new server using ceph-deply which I have
> > done in the past many times without issue.
> >
> > Right now I am seeing a timeout trying to connect to git.ceph.com:
> >
> >
> > [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
> apt-get
> > -q install --assume-yes ca-certificates
> > [hqosd6][DEBUG ] Reading package lists...
> > [hqosd6][DEBUG ] Building dependency tree...
> > [hqosd6][DEBUG ] Reading state information...
> > [hqosd6][DEBUG ] ca-certificates is already the newest version.
> > [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
> > upgraded.
> > [hqosd6][INFO  ] Running command: wget -O release.asc
> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] --2017-03-15 11:49:16--
> > https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
> > [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
> > connected.
> > [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
> > Permanently
> > [hqosd6][WARNIN] Location:
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [following]
> > [hqosd6][WARNIN] --2017-03-15 11:49:17--
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> > [hqosd6][WARNIN]
> > [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> > [hqosd6][WARNIN]
> > [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
> > https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> > [hqosd6][WARNIN] Connecting to git.ceph.com
> > (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> > [hqosd6][WARNIN] Retrying.
> >
> >
> > I am wondering if this is a known issue.
> >
> > Just an fyi...I am using an older version of ceph-deply (1.5.36) because
> in
> > the past upgrading to a newer version I was not able to install hammer on
> > the cluster…so the workaround was to use a slightly older version.
> >
> > Thanks in advance for any help you may be able to provide.
> >
> > Shain
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Deepak Naidu
I had a similar issue when using an older version of ceph-deploy. I see the URL 
git.ceph.com doesn't work in a browser either.

To resolve this, I installed the latest version of ceph-deploy and it worked 
fine. The new version wasn't using git.ceph.com.

During ceph-deploy you can mention what version of ceph you want, for example 
jewel, etc.
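
For example, something along these lines worked in my case (the hostname below 
is just a placeholder, and the --release flag is from the newer ceph-deploy 
CLI, so double-check it against your version):

# upgrade ceph-deploy itself, then point it at the release you want
pip install --upgrade ceph-deploy
ceph-deploy install --release jewel <node-hostname>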


--
Deepak

> On Mar 15, 2017, at 9:06 AM, Shain Miley  wrote:
> 
> s
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Shinobu Kinjo
FYI:
https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3

On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:
> Hello,
> I am trying to deploy ceph to a new server using ceph-deply which I have
> done in the past many times without issue.
>
> Right now I am seeing a timeout trying to connect to git.ceph.com:
>
>
> [hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get
> -q install --assume-yes ca-certificates
> [hqosd6][DEBUG ] Reading package lists...
> [hqosd6][DEBUG ] Building dependency tree...
> [hqosd6][DEBUG ] Reading state information...
> [hqosd6][DEBUG ] ca-certificates is already the newest version.
> [hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
> upgraded.
> [hqosd6][INFO  ] Running command: wget -O release.asc
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] --2017-03-15 11:49:16--
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
> [hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
> connected.
> [hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
> Permanently
> [hqosd6][WARNIN] Location:
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc [following]
> [hqosd6][WARNIN] --2017-03-15 11:49:17--
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
> [hqosd6][WARNIN]
> [hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
> [hqosd6][WARNIN]
> [hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
> https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
> [hqosd6][WARNIN] Connecting to git.ceph.com
> (git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
> [hqosd6][WARNIN] Retrying.
>
>
> I am wondering if this is a known issue.
>
> Just an fyi...I am using an older version of ceph-deply (1.5.36) because in
> the past upgrading to a newer version I was not able to install hammer on
> the cluster…so the workaround was to use a slightly older version.
>
> Thanks in advance for any help you may be able to provide.
>
> Shain
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy not creating osd's

2016-09-09 Thread Shain Miley
Can someone please suggest a course of action moving forward?

I don't feel comfortable making changes to the crush map without a better 
understanding of what exactly is going on here.

The new osd appears in the 'osd tree' but not in the current crush map. The 
server that hosts the osd is not present in either the current crush map or the 
'osd tree'.
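
If the fix is simply to add the missing host bucket and place the osd under it 
by hand, I assume it would look something like the commands below (the 3.64 
weight is just my guess for a 4TB drive), but I would rather confirm before 
touching a production crush map:

# create the host bucket, hang it off the default root, then place the osd
ceph osd crush add-bucket hqosd10 host
ceph osd crush move hqosd10 root=default
ceph osd crush create-or-move osd.108 3.64 host=hqosd10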

Thanks,

Shain

> On Sep 8, 2016, at 10:27 PM, Shain Miley  wrote:
> 
> I ended up starting from scratch and doing a purge and purgedata on that host 
> using ceph-deploy, after that things seemed to go better.
> The osd is up and in at this point, however when the osd was added to the 
> cluster...no data was being moved to the new osd.
> 
> Here is a copy of my current crush map:
> 
> http://pastebin.com/PMk3xZ0a
> 
> as you can see from the entry for osd number 108 (the last osd to be added to 
> the cluster)...the crush map does not contain a host entry for 
> hqosd10...which is the host for osd #108.
> 
> Any ideas on how to resolve this?
> 
> Thanks,
> Shain
> 
> 
>> On 9/8/16 2:20 PM, Shain Miley wrote:
>> Hello,
>> 
>> I am trying to use ceph-deploy to add some new osd's to our cluster.  I have 
>> used this method over the last few years to add all of our 107 osd's and 
>> things have seemed to work quite well.
>> 
>> One difference this time is that we are going to use a pci nvme card to 
>> journal the 16 disks in this server (Dell R730xd).
>> 
>> As you can see below it appears as though things complete successfully, 
>> however the osd count never increases, and when I look at hqosd10, there are 
>> no osd's mounted, and nothing in '/var/lib/ceph/osd', no ceph daemons 
>> running, etc.
>> 
>> I created the partitions on the nvme card by hand using parted (I was not 
>> sure if I ceph-deploy should take care of this part or not).
>> 
>> I have zapped the disk and re-run this command several times, and I have 
>> gotten the same result every time.
>> 
>> We are running Ceph version 0.94.9  on Ubuntu 14.04.5
>> 
>> Here is the output from my attempt:
>> 
>> root@hqceph1:/usr/local/ceph-deploy# ceph-deploy --verbose osd create 
>> hqosd10:sdb:/dev/nvme0n1p1
>> [ceph_deploy.conf][DEBUG ] found configuration file at: 
>> /root/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/local/bin/ceph-deploy 
>> --verbose osd create hqosd10:sdb:/dev/nvme0n1p1
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> [ceph_deploy.cli][INFO  ]  username  : None
>> [ceph_deploy.cli][INFO  ]  disk  : [('hqosd10', 
>> '/dev/sdb', '/dev/nvme0n1p1')]
>> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
>> [ceph_deploy.cli][INFO  ]  verbose   : True
>> [ceph_deploy.cli][INFO  ]  bluestore : None
>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> [ceph_deploy.cli][INFO  ]  subcommand: create
>> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
>> /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO  ]  quiet : False
>> [ceph_deploy.cli][INFO  ]  cd_conf   : 
>> 
>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
>> [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7f6ba750cc80>
>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> [ceph_deploy.cli][INFO  ]  default_release   : False
>> [ceph_deploy.cli][INFO  ]  zap_disk  : False
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
>> hqosd10:/dev/sdb:/dev/nvme0n1p1
>> [hqosd10][DEBUG ] connected to host: hqosd10
>> [hqosd10][DEBUG ] detect platform information from remote host
>> [hqosd10][DEBUG ] detect machine type
>> [hqosd10][DEBUG ] find the location of an executable
>> [hqosd10][INFO  ] Running command: /sbin/initctl version
>> [hqosd10][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
>> [ceph_deploy.osd][DEBUG ] Deploying osd to hqosd10
>> [hqosd10][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host hqosd10 disk /dev/sdb journal 
>> /dev/nvme0n1p1 activate True
>> [hqosd10][DEBUG ] find the location of an executable
>> [hqosd10][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster 
>> ceph --fs-type xfs -- /dev/sdb /dev/nvme0n1p1
>> [hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=fsid
>> [hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>> [hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>> [hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph 

Re: [ceph-users] Ceph-deploy not creating osd's

2016-09-08 Thread Shain Miley
I ended up starting from scratch and doing a purge and purgedata on that 
host using ceph-deploy; after that things seemed to go better.
The osd is up and in at this point; however, when the osd was added to 
the cluster no data was moved onto the new osd.


Here is a copy of my current crush map:

http://pastebin.com/PMk3xZ0a
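
For reference, a text dump like that can be produced with the usual 
getcrushmap/crushtool round-trip:

# grab the binary map from the cluster, then decompile it to text
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt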

as you can see from the entry for osd number 108 (the last osd to be 
added to the cluster)...the crush map does not contain a host entry for 
hqosd10...which is the host for osd #108.


Any ideas on how to resolve this?

Thanks,
Shain


On 9/8/16 2:20 PM, Shain Miley wrote:

Hello,

I am trying to use ceph-deploy to add some new osd's to our cluster.  
I have used this method over the last few years to add all of our 107 
osd's and things have seemed to work quite well.


One difference this time is that we are going to use a pci nvme card 
to journal the 16 disks in this server (Dell R730xd).


As you can see below it appears as though things complete 
successfully, however the osd count never increases, and when I look 
at hqosd10, there are no osd's mounted, and nothing in 
'/var/lib/ceph/osd', no ceph daemons running, etc.


I created the partitions on the nvme card by hand using parted (I was 
not sure if ceph-deploy should take care of this part or not).
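
Roughly, what I ran for the journal partitions was along these lines (the 
sizes and offsets here are approximate, from memory):

# GPT label on the NVMe card, then one partition per journal
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart journal1 1MiB 40GiB
parted -s /dev/nvme0n1 mkpart journal2 40GiB 80GiB
# ...and so on for the remaining journals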


I have zapped the disk and re-run this command several times, and I 
have gotten the same result every time.


We are running Ceph version 0.94.9  on Ubuntu 14.04.5

Here is the output from my attempt:

root@hqceph1:/usr/local/ceph-deploy# ceph-deploy --verbose osd create 
hqosd10:sdb:/dev/nvme0n1p1
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/local/bin/ceph-deploy 
--verbose osd create hqosd10:sdb:/dev/nvme0n1p1

[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  disk  : 
[('hqosd10', '/dev/sdb', '/dev/nvme0n1p1')]

[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : True
[ceph_deploy.cli][INFO  ]  bluestore : None
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
/etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   : 


[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : xfs
[ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7f6ba750cc80>

[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
hqosd10:/dev/sdb:/dev/nvme0n1p1

[hqosd10][DEBUG ] connected to host: hqosd10
[hqosd10][DEBUG ] detect platform information from remote host
[hqosd10][DEBUG ] detect machine type
[hqosd10][DEBUG ] find the location of an executable
[hqosd10][INFO  ] Running command: /sbin/initctl version
[hqosd10][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to hqosd10
[hqosd10][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host hqosd10 disk /dev/sdb journal 
/dev/nvme0n1p1 activate True

[hqosd10][DEBUG ] find the location of an executable
[hqosd10][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare 
--cluster ceph --fs-type xfs -- /dev/sdb /dev/nvme0n1p1
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type

[hqosd10][WARNIN] DEBUG:ceph-disk:Journal /dev/nvme0n1p1 is a partition
[hqosd10][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if 
journal is not the same device as the osd data
[hqosd10][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -o 
udev /dev/nvme0n1p1
[hqosd10][WARNIN] 

Re: [ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread Chengwei Yang
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote:
> Hi All,
> 
>  
> 
> I am trying to do a fresh install of Ceph Jewel on my cluster.  I went through
> all the steps in configuring the network, ssh, password, etc.  Now I am at the
> stage of running the ceph-deploy commands to install monitors and other 
> nodes. 
> I am getting the below error when I am deploying the first monitor.  Not able
> to figure out what it is that I am missing here.  Any pointers or help
> appreciated.
> 
>  
> 
> Thanks in advance.
> 
>  
> 
> - epk
> 
>  
> 
> [ep-c2-mon-01][DEBUG ] ---> Package librbd1.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package librbd1.x86_64 1:10.2.2-0.el7 will be an
> update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-cephfs.x86_64 1:0.94.7-0.el7 will 
> be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-cephfs.x86_64 1:10.2.2-0.el7 will 
> be
> an update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rados.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rados.x86_64 1:10.2.2-0.el7 will be
> an update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rbd.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rbd.x86_64 1:10.2.2-0.el7 will be 
> an
> update
> 
> [ep-c2-mon-01][DEBUG ] --> Running transaction check
> 
> [ep-c2-mon-01][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.2-0.el7 will be
> installed
> 
> [ep-c2-mon-01][DEBUG ] --> Processing Dependency: selinux-policy-base >=
> 3.13.1-60.el7_2.3 for package: 1:ceph-selinux-10.2.2-0.el7.x86_64
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-setuptools.noarch 0:0.9.8-4.el7 
> will
> be installed
> 
> [ep-c2-mon-01][DEBUG ] --> Finished Dependency Resolution
> 
> [ep-c2-mon-01][WARNIN] Error: Package: 1:ceph-selinux-10.2.2-0.el7.x86_64
> (ceph)
> 
> [ep-c2-mon-01][DEBUG ]  You could try using --skip-broken to work around the
> problem
> 
> [ep-c2-mon-01][WARNIN]Requires: selinux-policy-base >=
> 3.13.1-60.el7_2.3

It said it requires selinux-policy-base >= 3.13.1-60.el7_2.3

> 
> [ep-c2-mon-01][WARNIN]Installed:
> selinux-policy-targeted-3.13.1-60.el7.noarch (@CentOS/7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7
> 
> [ep-c2-mon-01][WARNIN]Available:
> selinux-policy-minimum-3.13.1-60.el7.noarch (CentOS-7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7
> 
> [ep-c2-mon-01][WARNIN]Available:
> selinux-policy-mls-3.13.1-60.el7.noarch (CentOS-7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7

However, neither the installed version nor the available versions meet the
requirement, so it fails.

You may have an incorrect repo configuration.
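
For example, a quick way to check what yum can actually see (package names 
taken from the error output above):

# confirm the CentOS "updates" repo is enabled and which versions it offers
yum repolist enabled
yum provides 'selinux-policy-base'
yum --showduplicates list selinux-policy-targeted
# with updates enabled, an update should satisfy >= 3.13.1-60.el7_2.3
sudo yum update selinux-policy-targeted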

> 
> [ep-c2-mon-01][DEBUG ]  You could try running: rpm -Va --nofiles --nodigest
> 
> [ep-c2-mon-01][ERROR ] RuntimeError: command returned non-zero exit status: 1
> 
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install
> ceph ceph-radosgw
> 
>  
> 
>  
> 
> EP KOMARLA,
> 
> Flex_RGB_Sml_tm
> 
> Emal: ep.koma...@flextronics.com
> 
> Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA
> 
> Phone: 408-674-6090 (mobile)
> 
>  
> 
> 
> Legal Disclaimer:
> The information contained in this message may be privileged and confidential.
> It is intended to be read only by the individual or entity to whom it is
> addressed or by their designee. If the reader of this message is not the
> intended recipient, you are on notice that any distribution of this message, 
> in
> any form, is strictly prohibited. If you have received this message in error,
> please immediately notify the sender and delete or destroy any copy of this
> message!
> SECURITY NOTE: file ~/.netrc must not be accessible by others



> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Thanks,
Chengwei


signature.asc
Description: Digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy new OSD addition issue

2016-06-28 Thread Pisal, Ranjit Dnyaneshwar
This is another error I get while trying to activate the disk -

[ceph@MYOPTPDN16 ~]$ sudo ceph-disk activate /dev/sdl1
2016-06-29 11:25:17.436256 7f8ed85ef700  0 -- :/1032777 >> 10.115.1.156:6789/0 
pipe(0x7f8ed4021610 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ed40218a0).fault
2016-06-29 11:25:20.436362 7f8ed84ee700  0 -- :/1032777 >> 10.115.1.156:6789/0 
pipe(0x7f8ec4000c00 sd=6 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ec4000e90).fault
^Z
[2]+  Stopped sudo ceph-disk activate /dev/sdl1

Best Regards,
Ranjit
+91-9823240750


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pisal, 
Ranjit Dnyaneshwar
Sent: Wednesday, June 29, 2016 10:59 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph-deploy new OSD addition issue


Hi,

I am stuck at one point while adding a new OSD host to an existing Ceph 
cluster. I tried multiple combinations for creating OSDs on the new host, but 
every time it fails during disk activation: no OSD mount 
(/var/lib/ceph/osd/ceph-xxx) is created; instead a temporary mount 
(/var/lib/ceph/tmp/bhbjnk.mnt) is created. The host has a mix of SSD and SAS 
disks; the SSDs are partitioned for journaling. The sequence I tried to add 
the new host is as follows -

1. Installed the Ceph RPMs on the new host
2. From the INIT node, checked the disks on the new host with ceph-disk list
3. Prepared the disk - ceph-deploy --overwrite-conf osd create --fs-type xfs {OSD 
node}:{raw device} - The result showed that the host is ready for OSD use; however, 
it did not show up in the OSD tree (because the CRUSH map was not updated?) and no 
/var/lib/ceph/osd/ceph-xx mount was created.
4. Although it reported the host ready for OSD use, it had earlier thrown a warning 
that it was disconnecting after 300 seconds because no data was received from the new host
5. I tried to activate the disk manually - a. sudo ceph-disk activate /dev/sde1 - 
This command failed with the following error:
ceph-disk: Cannot discover filesystem type: device /dev/sda: Line is truncated

After this I also tried to install ceph-deploy and prepare the new host using the 
commands below, then repeated the above steps, but it still failed at the same 
point of disk activation.

ceph-deploy install new Host
ceph-deploy new newHost

Attached logs for reference.

Please assist with any known workaround/resolution.
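
For comparison, the usual hammer-era flow for one data disk plus a pre-partitioned
SSD journal looks roughly like the sketch below; /dev/sdl and /dev/sda5 are only
placeholders for the actual data disk and journal partition on MYOPTPDN16:

# wipe the data disk so ceph-disk starts from a clean GPT label
ceph-deploy disk zap MYOPTPDN16:/dev/sdl
# prepare with an explicit journal partition, then activate the data partition it created
ceph-deploy --overwrite-conf osd prepare MYOPTPDN16:/dev/sdl:/dev/sda5
ceph-deploy osd activate MYOPTPDN16:/dev/sdl1:/dev/sda5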

Thanks
Ranjit
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Noah Watkins
Working for me now. Thanks for taking care of this.

- Noah

On Tue, Jun 14, 2016 at 5:42 PM, Alfredo Deza  wrote:
> We are now good to go.
>
> Sorry for all the troubles, some packages were missed in the metadata,
> had to resync+re-sign them to get everything in order.
>
> Just tested it out and it works as expected. Let me know if you have any 
> issues.
>
> On Tue, Jun 14, 2016 at 5:57 PM, Noah Watkins  wrote:
>> Yeh, I'm still seeing the problem, too Thanks.
>>
>> On Tue, Jun 14, 2016 at 2:55 PM Alfredo Deza  wrote:
>>>
>>> On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza  wrote:
>>> > Is it possible you tried to install just when I was syncing 10.2.2 ?
>>> >
>>> > :)
>>> >
>>> > Would you mind trying this again and see if you are good?
>>> >
>>> > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins 
>>> > wrote:
>>> >> Installing Jewel with ceph-deploy has been working for weeks. Today I
>>> >> started to get some dependency issues:
>>> >>
>>> >> [b61808c8624c][DEBUG ] The following packages have unmet dependencies:
>>> >> [b61808c8624c][DEBUG ]  ceph : Depends: ceph-mon (= 10.2.1-1trusty) but
>>> >> it
>>> >> is not going to be installed
>>> >> [b61808c8624c][DEBUG ] Depends: ceph-osd (= 10.2.1-1trusty) but
>>> >> it
>>> >> is not going to be installed
>>> >> [b61808c8624c][DEBUG ]  ceph-mds : Depends: ceph-base (=
>>> >> 10.2.1-1trusty) but
>>> >> it is not going to be installed
>>> >> [b61808c8624c][WARNIN] E: Unable to correct problems, you have held
>>> >> broken
>>> >> packages.
>>> >> [b61808c8624c][ERROR ] RuntimeError: command returned non-zero exit
>>> >> status:
>>> >> 100
>>> >> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
>>> >> DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get
>>> >> --assume-yes
>>> >> -q --no-install-recommends install -o Dpkg::Options::=--force-confnew
>>> >> ceph
>>> >> ceph-mds radosgw
>>> >>
>>> >> Seems to be an issue with 10.2.1 vs 10.2.2?
>>>
>>> Bah, it looks like this is still an issue even right now.
>>>
>>> I will update once I know what is going on
>>> >>
>>> >> root@b61808c8624c:/ceph-deploy# apt-get install ceph-mon ceph-base
>>> >> Reading package lists... Done
>>> >> Building dependency tree
>>> >> Reading state information... Done
>>> >> Some packages could not be installed. This may mean that you have
>>> >> requested an impossible situation or if you are using the unstable
>>> >> distribution that some required packages have not yet been created
>>> >> or been moved out of Incoming.
>>> >> The following information may help to resolve the situation:
>>> >>
>>> >> The following packages have unmet dependencies:
>>> >>  ceph-mon : Depends: ceph-base (= 10.2.1-1trusty) but 10.2.2-1trusty is
>>> >> to
>>> >> be installed
>>> >> E: Unable to correct problems, you have held broken packages.
>>> >>
>>> >>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
We are now good to go.

Sorry for all the troubles, some packages were missed in the metadata,
had to resync+re-sign them to get everything in order.

Just tested it out and it works as expected. Let me know if you have any issues.
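
On the client side, it should be enough to refresh the apt metadata and confirm that
all the ceph packages now resolve to the same candidate version before re-running
ceph-deploy; a minimal check, assuming the trusty jewel repo from download.ceph.com
is the one configured:

sudo apt-get clean && sudo apt-get update
apt-cache policy ceph ceph-base ceph-mon ceph-osd ceph-mds
# every candidate should now read 10.2.2-1trusty; if so, retry:
# ceph-deploy install --release jewel <host>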

On Tue, Jun 14, 2016 at 5:57 PM, Noah Watkins  wrote:
> Yeh, I'm still seeing the problem, too Thanks.
>
> On Tue, Jun 14, 2016 at 2:55 PM Alfredo Deza  wrote:
>>
>> On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza  wrote:
>> > Is it possible you tried to install just when I was syncing 10.2.2 ?
>> >
>> > :)
>> >
>> > Would you mind trying this again and see if you are good?
>> >
>> > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins 
>> > wrote:
>> >> Installing Jewel with ceph-deploy has been working for weeks. Today I
>> >> started to get some dependency issues:
>> >>
>> >> [b61808c8624c][DEBUG ] The following packages have unmet dependencies:
>> >> [b61808c8624c][DEBUG ]  ceph : Depends: ceph-mon (= 10.2.1-1trusty) but
>> >> it
>> >> is not going to be installed
>> >> [b61808c8624c][DEBUG ] Depends: ceph-osd (= 10.2.1-1trusty) but
>> >> it
>> >> is not going to be installed
>> >> [b61808c8624c][DEBUG ]  ceph-mds : Depends: ceph-base (=
>> >> 10.2.1-1trusty) but
>> >> it is not going to be installed
>> >> [b61808c8624c][WARNIN] E: Unable to correct problems, you have held
>> >> broken
>> >> packages.
>> >> [b61808c8624c][ERROR ] RuntimeError: command returned non-zero exit
>> >> status:
>> >> 100
>> >> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
>> >> DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get
>> >> --assume-yes
>> >> -q --no-install-recommends install -o Dpkg::Options::=--force-confnew
>> >> ceph
>> >> ceph-mds radosgw
>> >>
>> >> Seems to be an issue with 10.2.1 vs 10.2.2?
>>
>> Bah, it looks like this is still an issue even right now.
>>
>> I will update once I know what is going on
>> >>
>> >> root@b61808c8624c:/ceph-deploy# apt-get install ceph-mon ceph-base
>> >> Reading package lists... Done
>> >> Building dependency tree
>> >> Reading state information... Done
>> >> Some packages could not be installed. This may mean that you have
>> >> requested an impossible situation or if you are using the unstable
>> >> distribution that some required packages have not yet been created
>> >> or been moved out of Incoming.
>> >> The following information may help to resolve the situation:
>> >>
>> >> The following packages have unmet dependencies:
>> >>  ceph-mon : Depends: ceph-base (= 10.2.1-1trusty) but 10.2.2-1trusty is
>> >> to
>> >> be installed
>> >> E: Unable to correct problems, you have held broken packages.
>> >>
>> >>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Noah Watkins
Yeh, I'm still seeing the problem, too. Thanks.

On Tue, Jun 14, 2016 at 2:55 PM Alfredo Deza  wrote:

> On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza  wrote:
> > Is it possible you tried to install just when I was syncing 10.2.2 ?
> >
> > :)
> >
> > Would you mind trying this again and see if you are good?
> >
> > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins 
> wrote:
> >> Installing Jewel with ceph-deploy has been working for weeks. Today I
> >> started to get some dependency issues:
> >>
> >> [b61808c8624c][DEBUG ] The following packages have unmet dependencies:
> >> [b61808c8624c][DEBUG ]  ceph : Depends: ceph-mon (= 10.2.1-1trusty) but
> it
> >> is not going to be installed
> >> [b61808c8624c][DEBUG ] Depends: ceph-osd (= 10.2.1-1trusty) but
> it
> >> is not going to be installed
> >> [b61808c8624c][DEBUG ]  ceph-mds : Depends: ceph-base (=
> 10.2.1-1trusty) but
> >> it is not going to be installed
> >> [b61808c8624c][WARNIN] E: Unable to correct problems, you have held
> broken
> >> packages.
> >> [b61808c8624c][ERROR ] RuntimeError: command returned non-zero exit
> status:
> >> 100
> >> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
> >> DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get
> --assume-yes
> >> -q --no-install-recommends install -o Dpkg::Options::=--force-confnew
> ceph
> >> ceph-mds radosgw
> >>
> >> Seems to be an issue with 10.2.1 vs 10.2.2?
>
> Bah, it looks like this is still an issue even right now.
>
> I will update once I know what is going on
> >>
> >> root@b61808c8624c:/ceph-deploy# apt-get install ceph-mon ceph-base
> >> Reading package lists... Done
> >> Building dependency tree
> >> Reading state information... Done
> >> Some packages could not be installed. This may mean that you have
> >> requested an impossible situation or if you are using the unstable
> >> distribution that some required packages have not yet been created
> >> or been moved out of Incoming.
> >> The following information may help to resolve the situation:
> >>
> >> The following packages have unmet dependencies:
> >>  ceph-mon : Depends: ceph-base (= 10.2.1-1trusty) but 10.2.2-1trusty is
> to
> >> be installed
> >> E: Unable to correct problems, you have held broken packages.
> >>
> >>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
On Tue, Jun 14, 2016 at 5:52 PM, Alfredo Deza  wrote:
> Is it possible you tried to install just when I was syncing 10.2.2 ?
>
> :)
>
> Would you mind trying this again and see if you are good?
>
> On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins  wrote:
>> Installing Jewel with ceph-deploy has been working for weeks. Today I
>> started to get some dependency issues:
>>
>> [b61808c8624c][DEBUG ] The following packages have unmet dependencies:
>> [b61808c8624c][DEBUG ]  ceph : Depends: ceph-mon (= 10.2.1-1trusty) but it
>> is not going to be installed
>> [b61808c8624c][DEBUG ] Depends: ceph-osd (= 10.2.1-1trusty) but it
>> is not going to be installed
>> [b61808c8624c][DEBUG ]  ceph-mds : Depends: ceph-base (= 10.2.1-1trusty) but
>> it is not going to be installed
>> [b61808c8624c][WARNIN] E: Unable to correct problems, you have held broken
>> packages.
>> [b61808c8624c][ERROR ] RuntimeError: command returned non-zero exit status:
>> 100
>> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
>> DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes
>> -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph
>> ceph-mds radosgw
>>
>> Seems to be an issue with 10.2.1 vs 10.2.2?

Bah, it looks like this is still an issue even right now.

I will update once I know what is going on
>>
>> root@b61808c8624c:/ceph-deploy# apt-get install ceph-mon ceph-base
>> Reading package lists... Done
>> Building dependency tree
>> Reading state information... Done
>> Some packages could not be installed. This may mean that you have
>> requested an impossible situation or if you are using the unstable
>> distribution that some required packages have not yet been created
>> or been moved out of Incoming.
>> The following information may help to resolve the situation:
>>
>> The following packages have unmet dependencies:
>>  ceph-mon : Depends: ceph-base (= 10.2.1-1trusty) but 10.2.2-1trusty is to
>> be installed
>> E: Unable to correct problems, you have held broken packages.
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy jewel install dependencies

2016-06-14 Thread Alfredo Deza
Is it possible you tried to install just when I was syncing 10.2.2 ?

:)

Would you mind trying this again and see if you are good?

On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins  wrote:
> Installing Jewel with ceph-deploy has been working for weeks. Today I
> started to get some dependency issues:
>
> [b61808c8624c][DEBUG ] The following packages have unmet dependencies:
> [b61808c8624c][DEBUG ]  ceph : Depends: ceph-mon (= 10.2.1-1trusty) but it
> is not going to be installed
> [b61808c8624c][DEBUG ] Depends: ceph-osd (= 10.2.1-1trusty) but it
> is not going to be installed
> [b61808c8624c][DEBUG ]  ceph-mds : Depends: ceph-base (= 10.2.1-1trusty) but
> it is not going to be installed
> [b61808c8624c][WARNIN] E: Unable to correct problems, you have held broken
> packages.
> [b61808c8624c][ERROR ] RuntimeError: command returned non-zero exit status:
> 100
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
> DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes
> -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph
> ceph-mds radosgw
>
> Seems to be an issue with 10.2.1 vs 10.2.2?
>
> root@b61808c8624c:/ceph-deploy# apt-get install ceph-mon ceph-base
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
>
> The following packages have unmet dependencies:
>  ceph-mon : Depends: ceph-base (= 10.2.1-1trusty) but 10.2.2-1trusty is to
> be installed
> E: Unable to correct problems, you have held broken packages.
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy prepare journal on software raid ( md device )

2016-06-12 Thread Oliver Dzombic
Hi to myself =)

just in case others run into the same:

#1: You will have to update parted, which provides partprobe, from version 3.1
to 3.2 (for example, simply take the Fedora package, which is newer, and
replace it with that).

#2: Software RAID will still not work, because of the GUID of the partition:
ceph-deploy will recognize it as something different than expected.

So ceph-deploy + software RAID will not work.

Maybe it will work with manual OSD creation; I did not test it.

In any case, updating the parted package so that partprobe complains less is a
very good idea if you work with any kind of RAID devices.
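
A rough sketch of that check/upgrade, where the RPM file name is only a placeholder
since the exact newer build you grab (e.g. from Fedora) will differ:

parted --version                 # stock CentOS 7.2 ships parted 3.1
sudo rpm -Uvh parted-3.2-*.rpm   # placeholder file name for the newer build
sudo partprobe /dev/md128        # re-read the partition table of the md device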

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


Am 08.06.2016 um 19:55 schrieb Oliver Dzombic:
> Hi,
> 
> I read that ceph-deploy does not support software raid devices
> 
> http://tracker.ceph.com/issues/13084
> 
> But thats already nearly 1 year ago, and the problem is different.
> 
> As it seems to me, the "only" major problem is that the newly created
> journal partition remains in the "Device or resource busy" state, so
> that ceph-deploy gives up after some time.
> 
> Does anyone knows a workaround ?
> 
> 
> [root@cephmon1 ceph-cluster-gen2]# ceph-deploy osd prepare
> cephosd1:/dev/sdf:/dev/md128
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd
> prepare cephosd1:/dev/sdf:/dev/md128
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  disk  : [('cephosd1',
> '/dev/sdf', '/dev/md128')]
> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  bluestore : None
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  subcommand: prepare
> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
> /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
> [ceph_deploy.cli][INFO  ]  func  :  at 0x7f57abff9c08>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> [ceph_deploy.cli][INFO  ]  zap_disk  : False
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> cephosd1:/dev/sdf:/dev/md128
> [cephosd1][DEBUG ] connected to host: cephosd1
> [cephosd1][DEBUG ] detect platform information from remote host
> [cephosd1][DEBUG ] detect machine type
> [cephosd1][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] Deploying osd to cephosd1
> [cephosd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [cephosd1][WARNIN] osd keyring does not exist yet, creating one
> [cephosd1][DEBUG ] create a keyring file
> [ceph_deploy.osd][DEBUG ] Preparing host cephosd1 disk /dev/sdf journal
> /dev/md128 activate False
> [cephosd1][DEBUG ] find the location of an executable
> [cephosd1][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
> --cluster ceph --fs-type xfs -- /dev/sdf /dev/md128
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-allows-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-wants-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-needs-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=osd_journal_size
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf1 uuid path is
> /sys/dev/block/8:81/dm/uuid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
> [cephosd1][WARNIN] command: Running command: 

Re: [ceph-users] ceph-deploy jewel stopped working

2016-04-21 Thread Stephen Lord
Sorry about the mangled urls in there; these are all from download.ceph.com 
rpm-jewel el7 x86_64.
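
Until the repodata on download.ceph.com catches up with the packages, the only
client-side thing worth trying is flushing the cached metadata, roughly:

sudo yum --enablerepo=ceph clean metadata
sudo yum clean all && sudo yum makecache fast
# then retry: ceph-deploy install --stable jewel ceph00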

Steve


> On Apr 21, 2016, at 1:17 PM, Stephen Lord  wrote:
> 
> 
> 
> Running this command
> 
> ceph-deploy install --stable jewel  ceph00 
> 
> And using the 1.5.32 version of ceph-deploy onto a redhat 7.2 system is 
> failing today (worked yesterday)
> 
> [ceph00][DEBUG ] 
> 
> [ceph00][DEBUG ]  Package Arch  Version   
>  Repository   Size
> [ceph00][DEBUG ] 
> 
> [ceph00][DEBUG ] Installing:
> [ceph00][DEBUG ]  ceph-mdsx86_641:10.2.0-0.el7
>  ceph2.8 M
> [ceph00][DEBUG ]  ceph-monx86_641:10.2.0-0.el7
>  ceph2.8 M
> [ceph00][DEBUG ]  ceph-osdx86_641:10.2.0-0.el7
>  ceph9.0 M
> [ceph00][DEBUG ]  ceph-radosgwx86_641:10.2.0-0.el7
>  ceph245 k
> [ceph00][DEBUG ] Installing for dependencies:
> [ceph00][DEBUG ]  ceph-base   x86_641:10.2.0-0.el7
>  ceph4.2 M
> [ceph00][DEBUG ]  ceph-common x86_641:10.2.0-0.el7
>  ceph 15 M
> [ceph00][DEBUG ]  ceph-selinuxx86_641:10.2.0-0.el7
>  ceph 19 k
> [ceph00][DEBUG ] Updating for dependencies:
> [ceph00][DEBUG ]  libcephfs1  x86_641:10.2.0-0.el7
>  ceph1.8 M
> [ceph00][DEBUG ]  librados2   x86_641:10.2.0-0.el7
>  ceph1.9 M
> [ceph00][DEBUG ]  librados2-devel x86_641:10.2.0-0.el7
>  ceph474 k
> [ceph00][DEBUG ]  libradosstriper1x86_641:10.2.0-0.el7
>  ceph1.8 M
> [ceph00][DEBUG ]  librbd1 x86_641:10.2.0-0.el7
>  ceph2.4 M
> [ceph00][DEBUG ]  librgw2 x86_641:10.2.0-0.el7
>  ceph2.8 M
> [ceph00][DEBUG ]  python-cephfs   x86_641:10.2.0-0.el7
>  ceph 66 k
> [ceph00][DEBUG ]  python-radosx86_641:10.2.0-0.el7
>  ceph145 k
> [ceph00][DEBUG ]  python-rbd  x86_641:10.2.0-0.el7
>  ceph 61 k
> [ceph00][DEBUG ] 
> [ceph00][DEBUG ] Transaction Summary
> [ceph00][DEBUG ] 
> 
> [ceph00][DEBUG ] Install  4 Packages (+3 Dependent packages)
> [ceph00][DEBUG ] Upgrade ( 9 Dependent packages)
> [ceph00][DEBUG ] 
> [ceph00][DEBUG ] Total download size: 45 M
> [ceph00][DEBUG ] Downloading packages:
> [ceph00][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not 
> installed.
> [ceph00][WARNIN] http://download.ceph.com/rpm-jewel/el7/x86_64/ceph-common-10.2.0-0.el7.x86_64.rpm:
>   [Errno -1] Package does not match intended download. Suggestion: run yum 
> --enablerepo=ceph clean metadata
> [ceph00][WARNIN] Trying other mirror.
> …..
> 
> I have cleaned up all the repo info on this end and it makes no difference. I 
> suspect something in the last update to the site is wrong or missing, the 
> repomd.xml file here:
> 
> https://download.ceph.com/rpm-jewel/el7/x86_64/repodata/
>  
> 
> Is a day older than all the packages which may or may not be part of the 
> issue.
> 
> Steve
> 
> --
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> 

Re: [ceph-users] ceph deploy osd install broken on centos 7 with hammer 0.94.6

2016-03-23 Thread Oliver Dzombic
Hi,

after I copied /lib/lsb/* (which did not exist on my new CentOS 7.2 system),

now

# service ceph start
Error EINVAL: entity osd.18 exists but key does not match
ERROR:ceph-disk:Failed to activate
ceph-disk: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name',
'client.bootstrap-osd', '--keyring',
'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.18',
'-i', '/var/lib/ceph/tmp/mnt.aK93bJ/keyring', 'osd', 'allow *', 'mon',
'allow profile osd']' returned non-zero exit status 22
ceph-disk: Error: One or more partitions failed to activate


and after I deleted the old ceph auth IDs with:

ceph auth del osd.id

it started to work after repeating everything again.
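
Spelled out, the recovery was essentially the following, with osd.18 and the device
names taken from this cluster (substitute your own):

# the stale key from the earlier failed attempt is what blocks 'auth add'
ceph auth del osd.18
# then repeat the deployment
ceph-deploy osd create newceph2:/dev/sdc:/dev/sda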

So all in all, in the very end:

Thank you, dear vendors: use either systemd or upstart or sysv or anything
else, but not all of them at once, mixed together and then removed in the
middle of a major release...

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


Am 24.03.2016 um 02:34 schrieb Oliver Dzombic:
> Hi,
> 
> i try to add a node to an existing cluster:
> 
> ceph-deploy install newceph2 --release hammer
> 
> works fine.
> 
> I try to add an osd:
> 
> ceph-deploy osd create newceph2:/dev/sdc:/dev/sda
> 
> works fine:
> 
> [newceph2][WARNIN] Executing /sbin/chkconfig ceph on
> [newceph2][INFO  ] checking OSD status...
> [newceph2][INFO  ] Running command: ceph --cluster=ceph osd stat
> --format=json
> [ceph_deploy.osd][DEBUG ] Host newceph2 is now ready for osd use.
> 
> ceph -s will show:
> 
>  osdmap e20602: 19 osds: 18 up, 18 in
> 
> ceph osd tree will show:
> 
> ceph osd tree
> ID WEIGHT   TYPE NAME  UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 97.31982 root default
> -2 16.13997 host ceph1
>  0  5.37999 osd.0   up  1.0  1.0
>  1  5.37999 osd.1   up  1.0  1.0
>  2  5.37999 osd.2   up  1.0  1.0
> -3 16.13997 host ceph2
>  3  5.37999 osd.3   up  1.0  1.0
>  4  5.37999 osd.4   up  1.0  1.0
>  5  5.37999 osd.5   up  1.0  1.0
> -4 16.13997 host ceph3
>  6  5.37999 osd.6   up  1.0  1.0
>  7  5.37999 osd.7   up  1.0  1.0
>  8  5.37999 osd.8   up  1.0  1.0
> -5 16.13997 host ceph4
>  9  5.37999 osd.9   up  1.0  1.0
> 10  5.37999 osd.10  up  1.0  1.0
> 11  5.37999 osd.11  up  1.0  1.0
> -6 16.37997 host ceph5
> 12  5.45999 osd.12  up  1.0  1.0
> 13  5.45999 osd.13  up  1.0  1.0
> 14  5.45999 osd.14  up  1.0  1.0
> -7 16.37997 host ceph6
> 15  5.45999 osd.15  up  1.0  1.0
> 16  5.45999 osd.16  up  1.0  1.0
> 17  5.45999 osd.17  up  1.0  1.0
> 180 osd.18down0  1.0
> 
> 
> The last lines of the osd log on the node will show:
> 
> 2016-03-24 11:25:57.454637 7fa994474880 -1
> filestore(/var/lib/ceph/tmp/mnt.gFf0AJ) could not find
> 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
> 
> 2016-03-24 11:25:57.621629 7fa994474880  1 journal close
> /var/lib/ceph/tmp/mnt.gFf0AJ/journal
> 
> 2016-03-24 11:25:57.626038 7fa994474880 -1 created object store
> /var/lib/ceph/tmp/mnt.gFf0AJ journal
> /var/lib/ceph/tmp/mnt.gFf0AJ/journal for osd.18 fsid
> 292e15e5-bc38-41b0-9e7b-6f5ef1cf2e53
> 
> 2016-03-24 11:25:57.626131 7fa994474880 -1 auth: error reading file:
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring: can't open
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring: (2) No such file or directory
> 
> 2016-03-24 11:25:57.631470 7fa994474880 -1 created new key in keyring
> /var/lib/ceph/tmp/mnt.gFf0AJ/keyring
> 
> 
> thats hammer
> 
> ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
> 
> on centos 7.2
> 
> there are no systemctl / services commands working:
> 
> # service ceph
> /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or
> directory
> 
> 
> --
> 
> 2-3 months ago this was just working fine. In the meanwhile something
> changed as it seems in the ceph-deploy code.
> 
> Any suggestions ? I more or less urgently need to add osd's :/
> 
> Thank you !
> 
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-05 Thread Martin Palma
Hi Maruthi,

happy to hear that it is working now.

Yes, with the latest stable release, infernalis, the "ceph" username is
reserved  for the Ceph daemons.

Best,
Martin

On Tuesday, 5 January 2016, Maruthi Seshidhar 
wrote:

> Thank you Martin,
>
> Yes, "nslookup " was not working.
> After configuring DNS on all nodes, the nslookup issue got sorted out.
>
> But the "some monitors have still not reach quorun" issue was still seen.
> I was using user "ceph" for ceph deployment. The user "ceph" is reserved
> for ceph internal use.
> After creating a new user "cephdeploy", and running ceph-deploy commands
> from this user, the cluster came up.
>
> thanks & regards,
> Maruthi.
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-04 Thread Maruthi Seshidhar
Thank you Martin,

Yes, "nslookup " was not working.
After configuring DNS on all nodes, the nslookup issue got sorted out.

But the "some monitors have still not reach quorun" issue was still seen.
I was using user "ceph" for ceph deployment. The user "ceph" is reserved
for ceph internal use.
After creating a new user "cephdeploy", and running ceph-deploy commands
from this user, the cluster came up.
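
For anyone hitting the same thing, the dedicated deploy user can be created on each
node roughly like this ("cephdeploy" is just the name used here; anything other
than "ceph" works):

sudo useradd -d /home/cephdeploy -m cephdeploy
sudo passwd cephdeploy
echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
sudo chmod 0440 /etc/sudoers.d/cephdeploy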

thanks & regards,
Maruthi.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2016-01-01 Thread Martin Palma
Hi Maruthi,

and did you test that DNS name lookup works properly (e.g. nslookup
ceph-mon1 etc.) on all hosts?

From the output of 'ceph-deploy' it seems that the host can only resolve
its own name but not the others:

[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ] "created": "0.00",
[ceph-mon1][DEBUG ] "epoch": 0,
[ceph-mon1][DEBUG ] "fsid": "d6ca9ac6-bfb9-4464-a128-459068637924",
[ceph-mon1][DEBUG ] "modified": "0.00",
[ceph-mon1][DEBUG ] "mons": [
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "10.31.141.76:6789/0",
[ceph-mon1][DEBUG ] "name": "ceph-mon1",
[ceph-mon1][DEBUG ] "rank": 0
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "0.0.0.0:0/1",
[ceph-mon1][DEBUG ] "name": "ceph-mon2",
[ceph-mon1][DEBUG ] "rank": 1
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "0.0.0.0:0/2",
[ceph-mon1][DEBUG ] "name": "ceph-mon3",
[ceph-mon1][DEBUG ] "rank": 2
[ceph-mon1][DEBUG ]   }
[ceph-mon1][DEBUG ] ]
[ceph-mon1][DEBUG ]   },
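
A quick way to confirm resolution from each node (hedged, since the environment may
rely on /etc/hosts rather than DNS) is something like:

for h in ceph-mon1 ceph-mon2 ceph-mon3; do getent hosts $h; done
# each iteration should print that monitor's public-network IP;
# an empty result means this node cannot resolve that name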


Best,
Martin


On Fri, Jan 1, 2016 at 3:21 AM, Maruthi Seshidhar <
maruthi.seshid...@gmail.com> wrote:

> hi Wade,
>
> Yes firewalld is disabled on all nodes.
>
> [ceph@ceph-mon1 ~]$ systemctl status firewalld
> firewalld.service - firewalld - dynamic firewall daemon
>Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
>Active: inactive (dead)
>
> thanks,
> Maruthi.
>
> On Fri, Jan 1, 2016 at 7:46 AM, Wade Holler  wrote:
>
>> I assume you have tested with firewalld disabled ?
>>
>> Best Regards
>> Wade
>> On Thu, Dec 31, 2015 at 9:13 PM Maruthi Seshidhar <
>> maruthi.seshid...@gmail.com> wrote:
>>
>>> hi fellow users,
>>>
>>> I am setting up a ceph cluster with 3 monitors, 4 osds on CentOS 7.1
>>>
>>> Each of the nodes have 2 NICs.
>>> 10.31.141.0/23 is the public n/w and 192.168.10.0/24 is the cluster n/w.
>>>
>>> Completed the "Preflight Checklist"
>>> .
>>> But in the "Storage Cluster Quick Start"
>>> , while
>>> doing "ceph-deploy create-initial" I see  error "Some monitors have still
>>> not reached quorum".
>>>
>>> [ceph@ceph-mgmt ceph-cluster]$ ceph-deploy --overwrite-conf mon
>>> create-initial
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /home/ceph/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.30): /usr/bin/ceph-deploy
>>> --overwrite-conf mon create-initial
>>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>> [ceph_deploy.cli][INFO  ]  username  : None
>>> [ceph_deploy.cli][INFO  ]  verbose   : False
>>> [ceph_deploy.cli][INFO  ]  overwrite_conf: True
>>> [ceph_deploy.cli][INFO  ]  subcommand: create-initial
>>> [ceph_deploy.cli][INFO  ]  quiet : False
>>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>>> 
>>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>>> [ceph_deploy.cli][INFO  ]  func  : >> at 0x1fd6e60>
>>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>>> [ceph_deploy.cli][INFO  ]  default_release   : False
>>> [ceph_deploy.cli][INFO  ]  keyrings  : None
>>> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
>>> ceph-mon2 ceph-mon3
>>> [ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
>>> [ceph-mon1][DEBUG ] connection detected need for sudo
>>> [ceph-mon1][DEBUG ] connected to host: ceph-mon1
>>> [ceph-mon1][DEBUG ] detect platform information from remote host
>>> [ceph-mon1][DEBUG ] detect machine type
>>> [ceph-mon1][DEBUG ] find the location of an executable
>>> [ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.1.1503 Core
>>> [ceph-mon1][DEBUG ] determining if provided host has same hostname in
>>> remote
>>> [ceph-mon1][DEBUG ] get remote short hostname
>>> [ceph-mon1][DEBUG ] deploying mon to ceph-mon1
>>> [ceph-mon1][DEBUG ] get remote short hostname
>>> [ceph-mon1][DEBUG ] remote hostname: ceph-mon1
>>> [ceph-mon1][DEBUG ] write cluster configuration to
>>> /etc/ceph/{cluster}.conf
>>> [ceph-mon1][DEBUG ] create the mon path if it does not exist
>>> [ceph-mon1][DEBUG ] checking for done path:
>>> /var/lib/ceph/mon/ceph-ceph-mon1/done
>>> [ceph-mon1][DEBUG ] create a done file to avoid re-doing the mon
>>> deployment
>>> [ceph-mon1][DEBUG ] create the init path if it does not exist
>>> [ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph.target
>>> [ceph-mon1][INFO  ] Running command: sudo systemctl enable
>>> ceph-mon@ceph-mon1
>>> [ceph-mon1][INFO  ] Running command: sudo systemctl start
>>> ceph-mon@ceph-mon1
>>> [ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph
>>> --admin-daemon 

Re: [ceph-users] ceph-deploy for "deb http://ceph.com/debian-hammer/ trusty main"

2015-11-13 Thread Jaime Melis
Hi,

can someone shed some light on the status of this issue? I can see that
Loic removed the target version a few days ago.

Is there any way we can help to fix this?

cheers,
Jaime
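
For what it's worth, a one-liner to check whether the repository index has been
regenerated yet, using the Packages URL quoted below:

curl -s http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages | grep -c '^Package: ceph-deploy'
# prints 0 for as long as the index still lacks a ceph-deploy entry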

On Thu, Oct 22, 2015 at 10:16 PM, David Clarke 
wrote:

> On 23/10/15 09:08, Kjetil Jørgensen wrote:
> > Hi,
> >
> > this seems to not get me ceph-deploy from ceph.com .
> >
> >
> http://download.ceph.com/debian-hammer/pool/main/c/ceph/ceph_0.94.4-1trusty_amd64.deb
> > does seem to contain /usr/share/man/man8/ceph-deploy.8.gz, which
> > conflicts with ceph-deploy from elsewhere (ubuntu).
> >
> > Looking at:
> >
> http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages
> > There's no ceph-deploy package in there (There is if you replace hammer
> > with giant, there is).
> >
> > Is ceph-deploy-by-debian-package from ceph.com 
> > discontinued ?
>
> This has been raised in the bug tracker a couple of times:
>
> http://tracker.ceph.com/issues/13544
> http://tracker.ceph.com/issues/13548
>
> The actual .deb files are in the repository, but not mentioned in the
> Packages files, so it looks like something has gone awry with the
> repository build scripts.
>
> A direct download in available at:
>
>
> http://download.ceph.com/debian-hammer/pool/main/c/ceph-deploy/ceph-deploy_1.5.28trusty_all.deb
>
> That version does not include /usr/share/man/man8/ceph-deploy.8.gz, and
> so does not cause issues when installed alongside ceph 0.94.4.
>
>
> --
> David Clarke
> Systems Architect
> Catalyst IT
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy on lxc container - 'initctl: Event failed'

2015-11-06 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

I've put monitors in LXC but I haven't done it with ceph-deploy. I've
had no problems with it.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Nov 6, 2015 at 12:55 PM, Bogdan SOLGA  wrote:
> Hello, everyone!
>
> I just tried to create a new Ceph cluster, using 3 LXC containers as monitors,
> and the 'ceph-deploy mon create-initial' command fails for each of the
> monitors with a 'initctl: Event failed' error, when running the following
> command:
>
> [ceph-mon-01][INFO  ] Running command: sudo initctl emit ceph-mon
> cluster=ceph id=ceph-mon-01
> [ceph-mon-01][WARNIN] initctl: Event failed
>
> Is it OK to use LXC containers as Ceph MONs? if yes - is there anything
> special which needs to be done prior to the 'mon create-initial' phase?
>
> Thank you!
>
> Regards,
> Bogdan
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-BEGIN PGP SIGNATURE-
Version: Mailvelope v1.2.3
Comment: https://www.mailvelope.com

wsFcBAEBCAAQBQJWPRu7CRDmVDuy+mK58QAAqoUP/0CM1aRSm6XRWVeRvWzb
kWWrgHyypNbHKhGXe07F8bHS1jberhKs9RCuU+RKN2aJ7M3zL1xr5ysspZ4R
+1fMHVW4enW5haBKa1Z1/1C5uPBQvVOwjEE+7k8XncvP4+mnICtBqtEQPc1g
+62CY9Ke39btPXwGJiTC8by2Uh6pvrtnfGf7UGh6nWrnoOxJmTnZImmQKbpg
PLvqw/Dl/KJD4DcQoS3nzLRXhZXOohpUsAJBMegq422+iYa31f0QVdddzoC7
DYfqxV2xszOeh24McTXZjOVulC1w2Xni3R9vOWjbJGPlMbg1xnBqX/G+Fn2z
2UAOYTMx5bK/j3wzAryMYs9/dtr4JhpO8cVWSm1fxM4J3V/96ug4Y3eYHoCZ
FoTGDmPwFDXQkwTFwjWWgoIMQh/1Zi6Nm6cLnggVlQcotdfka/glcLEHXXMb
uPXKcrY6kwwIbw+JFUbn6GUlK1ZSURKnmwXmVroHnoxnWH7bH7hhNv+GYzxJ
AjOxlds8E4igFHxwh0A7xIq/IosKgwxIuxbO2BlnYTYCoCrjOWoesiFtQdpX
q+tRSo03gC4PSqrjsm7xsMdSW/3uaIEzZPx/SQJU/JBDKarNY2eCo7VYntUx
7uxkWGEA4sibLdjNIGkRJHSrZDVdSJMlaPNBNrxmREl0t9b+DVBtbLgSvHeW
Tj4D
=aGAZ
-END PGP SIGNATURE-
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Luke Jing Yuan
Hi,

I am also seeing the same issue here. My guess is that the ceph-deploy package was 
somehow left out when the repository was updated. At least to my best 
understanding, the Packages file 
(http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages)
 and Contents 
(http://download.ceph.com/debian-hammer/dists/trusty/Contents-amd64.bz2) don’t 
have any reference to ceph-deploy.

My temporary workaround is to download ceph-deploy from the site and manually 
install it.
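
Concretely, that boils down to something like the following; the exact .deb version
under pool/main/c/ceph-deploy/ may of course have moved on:

wget http://download.ceph.com/debian-hammer/pool/main/c/ceph-deploy/ceph-deploy_1.5.28trusty_all.deb
sudo dpkg -i ceph-deploy_1.5.28trusty_all.deb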

Regards,
Luke

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Bogdan 
SOLGA
Sent: Wednesday, November 04, 2015 8:40 PM
To: ceph-users
Subject: Re: [ceph-users] ceph-deploy - default release

Hello!
A retry of this question, as I'm still stuck at the install step, due to the 
old version issue.
Any help is highly appreciated.
Regards,
Bogdan

On Sat, Oct 31, 2015 at 9:22 AM, Bogdan SOLGA 
<bogdan.so...@gmail.com<mailto:bogdan.so...@gmail.com>> wrote:
Hello everyone!
I'm struggling to get a new Ceph cluster installed, and I'm wondering why I am 
always getting version 0.80.10 installed, regardless of whether I run just 
'ceph-deploy install' or 'ceph-deploy install --release hammer'.

Running 'ceph-deploy install -h' shows 'default: emperor' for the --release 
option. Is this intended, or am I doing something incorrectly?
My environment:

  *   Ubuntu 14.04.3
  *   ran 'sudo echo deb http://eu.ceph.com/debian-hammer/ $(lsb_release -sc) 
main | sudo tee /etc/apt/sources.list.d/ceph.list' and the subsequent apt-get 
update and install ceph-deploy

Any hint or advice is appreciated.

Thank you!

Bogdan



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Bogdan SOLGA
Hello!

A retry of this question, as I'm still stuck at the install step, due to
the old version issue.

Any help is highly appreciated.

Regards,
Bogdan


On Sat, Oct 31, 2015 at 9:22 AM, Bogdan SOLGA 
wrote:

> Hello everyone!
>
> I'm struggling to get a new Ceph cluster installed, and I'm wondering why
> am I always getting the version 0.80.10 installed, regardless if I'm
> running just 'ceph-deploy install' or 'ceph-deploy install --release
> hammer'.
>
> Trying a 'ceph-deploy install -h', on the --release command option it says
> 'default: emperor'. Is this intended? or am I doing something incorrectly?
>
> My environment:
>
>- Ubuntu 14.04.3
>- ran 'sudo echo deb http://eu.ceph.com/debian-hammer/ $(lsb_release
>-sc) main | sudo tee /etc/apt/sources.list.d/ceph.list' and the subsequent
>apt-get update and install ceph-deploy
>
> Any hint or advice is appreciated.
>
> Thank you!
>
> Bogdan
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

