Re: [ceph-users] ceph-volume does not support upstart

2017-12-30 Thread 赵赵贺东
Hello Cary,

Thank you for your detailed description, it’s really helpful for me!
I will have a try when I get back to my office!

Thank you for your attention to this matter.

> On Dec 30, 2017, at 3:51 AM, Cary wrote:
> 
> Hello,
> 
> I mount my Bluestore OSDs in /etc/fstab:
> 
> vi /etc/fstab
> 
> tmpfs   /var/lib/ceph/osd/ceph-12  tmpfs   rw,relatime 0 0
> =
> Then mount everything in fstab with:
> mount -a
> ==
> I activate my OSDs this way on startup. You can find the fsid with:
> 
> cat /var/lib/ceph/osd/ceph-12/fsid
> 
> Then add a file named ceph.start so ceph-volume will be run at startup.
> 
> vi /etc/local.d/ceph.start
> ceph-volume lvm activate 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
> ==
> Make it executable:
> chmod 700 /etc/local.d/ceph.start
> ==
> cd /etc/local.d/
> ./ceph.start
> ==
> I am a Gentoo user and use OpenRC, so this may not apply to you.
> ==
> cd /etc/init.d/
> ln -s ceph ceph-osd.12
> /etc/init.d/ceph-osd.12 start
> rc-update add ceph-osd.12 default
> 
> Cary
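
[Editor's note] The quoted steps can be condensed into a small sketch: a helper that builds the `ceph-volume lvm activate` command for an OSD id by reading its fsid file, as Cary describes. The base path default and the overridable second argument are assumptions for illustration; ceph-volume itself is not invoked here.

```shell
#!/bin/bash
# Sketch (assumed layout): compose the "ceph-volume lvm activate" command
# for a BlueStore OSD by reading /var/lib/ceph/osd/ceph-<id>/fsid,
# mirroring the manual steps quoted above.
osd_activate_cmd() {
  local id="$1"
  local base="${2:-/var/lib/ceph/osd}"   # second arg overrides path (for testing)
  local fsid
  fsid=$(cat "$base/ceph-$id/fsid") || return 1
  echo "ceph-volume lvm activate $id $fsid"
}
```

A /etc/local.d/ceph.start script could then run `mount -a` and execute the command this helper prints.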
> 
> On Fri, Dec 29, 2017 at 8:47 AM, 赵赵贺东  wrote:
>> Hello Cary!
>> It’s a really big surprise for me to receive your reply!
>> Sincere thanks to you!
>> I know it’s a fake executable file, but it works!
>> 
>> >
>> $ cat /usr/sbin/systemctl
>> #!/bin/bash
>> exit 0
>> <
>> 
>> I can start my OSD with the following command:
>> /usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph
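
[Editor's note] The stub quoted above can be created with a small sketch like the following. It writes a no-op systemctl so ceph-volume's systemd invocations exit 0 on a non-systemd host; the function and its path argument are illustrative, with /usr/sbin/systemctl being the path from the message (writing there requires root).

```shell
#!/bin/bash
# Sketch: install a no-op systemctl stub so that ceph-volume's systemd
# invocations succeed (exit 0) on a non-systemd (OpenRC/upstart) host.
install_systemctl_stub() {
  local target="$1"
  printf '#!/bin/bash\nexit 0\n' > "$target"
  chmod 755 "$target"
}
# e.g. (as root): install_systemctl_stub /usr/sbin/systemctl
```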
>> 
>> But there are still problems.
>> 1. Though ceph-osd can start successfully, the prepare log and activate log
>> look like errors occurred.
>> 
>> Prepare log:
>> ===>
>> # ceph-volume lvm prepare --bluestore --data vggroup/lv
>> Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-12
>> Running command: chown -R ceph:ceph /dev/dm-0
>> Running command: sudo ln -s /dev/vggroup/lv /var/lib/ceph/osd/ceph-12/block
>> Running command: sudo ceph --cluster ceph --name client.bootstrap-osd
>> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
>> /var/lib/ceph/osd/ceph-12/activate.monmap
>> stderr: got monmap epoch 1
>> Running command: ceph-authtool /var/lib/ceph/osd/ceph-12/keyring
>> --create-keyring --name osd.12 --add-key
>> AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
>> stdout: creating /var/lib/ceph/osd/ceph-12/keyring
>> stdout: added entity osd.12 auth auth(auid = 18446744073709551615
>> key=AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww== with 0 caps)
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/keyring
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/
>> Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore
>> --mkfs -i 12 --monmap /var/lib/ceph/osd/ceph-12/activate.monmap --key
>>  --osd-data
>> /var/lib/ceph/osd/ceph-12/ --osd-uuid 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> --setuser ceph --setgroup ceph
>> stderr: warning: unable to create /var/run/ceph: (13) Permission denied
>> stderr: 2017-12-29 08:13:08.609127 b66f3000 -1 asok(0x850c62a0)
>> AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to
>> bind the UNIX domain socket to '/var/run/ceph/ceph-osd.12.asok': (2) No such
>> file or directory
>> stderr:
>> stderr: 2017-12-29 08:13:08.643410 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.644055 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.644722 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.646722 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12/) _read_fsid unparsable uuid
>> stderr: 2017-12-29 08:14:00.697028 b66f3000 -1 key
>> AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
>> stderr: 2017-12-29 08:14:01.261659 b66f3000 -1 created object store
>> /var/lib/ceph/osd/ceph-12/ for osd.12 fsid
>> 4e5adad0-784c-41b4-ab72-5f4fae499b3a
>> <===
>> 
>> Activate log:
>> ===>
>> # ceph-volume lvm activate --bluestore 12
>> 

Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-30 Thread Martin, Jeremy
Also, the ceph-deploy tool doesn’t seem to adequately support BlueStore with its 
recommended configuration on LVM: the osd create and prepare tools don’t work 
with any reliability with BlueStore and LVM. I posted some information and 
questions on this when deploying a new cluster on both test and production 
hardware and VMs, but was unable to get any information or proceed, even though 
FileStore deployed without any issue. Ultimately the decision came down to 
deploying on BlueStore or FileStore, but since we couldn’t deploy on BlueStore, 
the choice was between FileStore and a competing product. We didn’t feel that 
deploying new clusters with a store likely to be replaced in the future 
(i.e. FileStore) was a good choice, so test and consider BlueStore carefully.

Jeremy




> On Dec 29, 2017, at 3:05 PM, Travis Nielsen  
> wrote:
> 
> Since bluestore was declared stable in Luminous, is there any remaining
> scenario to use filestore in new deployments? Or is it safe to assume that
> bluestore is always better to use in Luminous? All documentation I can
> find points to bluestore being superior in all cases.
> 
> Thanks,
> Travis
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-30 Thread Konstantin Shalygin

Performance as well - in my testing FileStore was much quicker than BlueStore.


Proof?



k



Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-30 Thread Milanov, Radoslav Nikiforov
Performance as well - in my testing FileStore was much quicker than BlueStore.

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage 
Weil
Sent: Friday, December 29, 2017 3:51 PM
To: Travis Nielsen 
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

On Fri, 29 Dec 2017, Travis Nielsen wrote:
> Since bluestore was declared stable in Luminous, is there any 
> remaining scenario to use filestore in new deployments? Or is it safe 
> to assume that bluestore is always better to use in Luminous? All 
> documentation I can find points to bluestore being superior in all cases.

The only real reason to run FileStore is stability: FileStore is older and 
well-tested, so the most conservative users may stick with it for a bit longer.

sage
