[ceph-users] Luminous v12.2.2 released

2017-12-01 Thread Abhishek Lekshmanan
We're glad to announce the second bugfix release of the Luminous v12.2.x
stable release series. It contains a range of bug fixes and a few
features across BlueStore, CephFS, RBD & RGW. We recommend that all users
of the 12.2.x series update.

For more detailed information, see the blog[1] and the complete
changelog[2].

A big thank you to everyone for the continual feedback & bug
reports we've received over this release cycle.

Notable Changes
---------------
* Standby ceph-mgr daemons now redirect requests to the active mgr, easing
  configuration for tools & users accessing the web dashboard, restful API, or
  other ceph-mgr module services.
* The prometheus module has several significant updates and improvements.
* The new balancer module enables automatic optimization of CRUSH weights to
  balance data across the cluster (a short usage sketch follows this list).
* The ceph-volume tool has been updated to include support for BlueStore as well
  as FileStore. The only major missing ceph-volume feature is dm-crypt support.
* RGW's dynamic bucket index resharding is disabled in multisite environments,
  as it can cause inconsistencies in replication of bucket indexes to remote
  sites.
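
As a usage sketch for the balancer module mentioned above (a minimal example,
assuming a healthy Luminous cluster with ceph-mgr running; enable the module,
pick a mode, and let it run):

  ceph mgr module enable balancer
  ceph balancer mode crush-compat   # optimize via CRUSH compat weight-sets
  ceph balancer on                  # run automatically in the background
  ceph balancer status              # check the mode and whether it is active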

Other Notable Changes
---------------------
* build/ops: bump sphinx to 1.6 (issue#21717, pr#18167, Kefu Chai, Alfredo Deza)
* build/ops: macros expanding in spec file comment (issue#22250, pr#19173, Ken 
Dreyer)
* build/ops: python-numpy-devel build dependency for SUSE (issue#21176, 
pr#17692, Nathan Cutler)
* build/ops: selinux: Allow getattr on lnk sysfs files (issue#21492, pr#18650, 
Boris Ranto)
* build/ops: Ubuntu amd64 client can not discover the ubuntu arm64 ceph cluster 
(issue#19705, pr#18293, Kefu Chai)
* core: buffer: fix ABI breakage by removing list _mempool member (issue#21573, 
pr#18491, Sage Weil)
* core: Daemons(OSD, Mon…) exit abnormally at injectargs command (issue#21365, 
pr#17864, Yan Jun)
* core: Disable messenger logging (debug ms = 0/0) for clients unless 
overridden (issue#21860, pr#18529, Jason Dillaman)
* core: Improve OSD startup time by only scanning for omap corruption once 
(issue#21328, pr#17889, Luo Kexue, David Zafman)
* core: upmap does not respect osd reweights (issue#21538, pr#18699, Theofilos 
Mouratidis)
* dashboard: barfs on nulls where it expects numbers (issue#21570, pr#18728, 
John Spray)
* dashboard: OSD list has servers and osds in arbitrary order (issue#21572, 
pr#18736, John Spray)
* dashboard: the dashboard uses absolute links for filesystems and clients 
(issue#20568, pr#18737, Nick Erdmann)
* filestore: set default readahead and compaction threads for rocksdb 
(issue#21505, pr#18234, Josh Durgin, Mark Nelson)
* librbd: object map batch update might cause OSD suicide timeout (issue#21797, 
pr#18416, Jason Dillaman)
* librbd: snapshots should be created/removed against data pool (issue#21567, 
pr#18336, Jason Dillaman)
* mds: make sure snap inode’s last matches its parent dentry’s last 
(issue#21337, pr#17994, “Yan, Zheng”)
* mds: sanitize mdsmap of removed pools (issue#21945, issue#21568, pr#18628, 
Patrick Donnelly)
* mgr: bulk backport of ceph-mgr improvements (issue#21594, issue#17460,
  issue#21197, issue#21158, issue#21593, pr#18675, Benjeman Meekhof,
  Sage Weil, Jan Fajerski, John Spray, Kefu Chai, My Do, Spandan Kumar Sahu)
* mgr: ceph-mgr gets process called “exe” after respawn (issue#21404, pr#18738, 
John Spray)
* mgr: fix crashable DaemonStateIndex::get calls (issue#17737, pr#18412, John 
Spray)
* mgr: key mismatch for mgr after upgrade from jewel to luminous(dev) 
(issue#20950, pr#18727, John Spray)
* mgr: mgr status module uses base 10 units (issue#21189, issue#21752, 
pr#18257, John Spray, Yanhu Cao)
* mgr: mgr[zabbix] float division by zero (issue#21518, pr#18734, John Spray)
* mgr: Prometheus crash when update (issue#21253, pr#17867, John Spray)
* mgr: prometheus module generates invalid output when counter names contain 
non-alphanum characters (issue#20899, pr#17868, John Spray, Jeremy H Austin)
* mgr: Quieten scary RuntimeError from restful module on startup (issue#21292, 
pr#17866, John Spray)
* mgr: Spurious ceph-mgr failovers during mon elections (issue#20629, pr#18726, 
John Spray)
* mon: Client client.admin marked osd.2 out, after it was down for 1504627577 
seconds (issue#21249, pr#17862, John Spray)
* mon: DNS SRV default service name not used anymore (issue#21204, pr#17863, 
Kefu Chai)
* mon/MgrMonitor: handle cmd descs to/from disk in the absence of active mgr 
(issue#21300, pr#18038, Joao Eduardo Luis)
* mon/mgr: sync “mgr_command_descs”, “osd_metadata” and “mgr_metadata” prefixes 
to new mons (issue#21527, pr#18620, huanwen ren)
* mon: osd feature checks with 0 up osds (issue#21471, issue#20751, pr#18364, 
Brad Hubbard, Sage Weil)
* mon,osd: fix “pg ls {forced_backfill, backfilling}” (issue#21609, pr#18236, 
Kefu Chai)
* mon/OSDMonitor: add option to fix up ruleset-* to crush-* for ec profiles 
(issue#22128, pr#18945, Sage Weil)
* mon, osd: per pool space-full flag support (issue#21409, 

Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seam- and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM?

Yes, see the open pr for it (https://github.com/ceph/ceph-deploy/pull/455)

> Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Please note that the API will change in a non-backwards-compatible
way, so a major release of ceph-deploy will
be done after that is merged.
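
For reference, a minimal sketch of what bluestore on top of LVM looks like
with ceph-volume (vgcreate/lvcreate/ceph-volume are the real tools, but the
disk and VG/LV names below are made up):

  vgcreate ceph-vg /dev/sdb                    # a hypothetical empty disk
  lvcreate -n osd-data -l 100%FREE ceph-vg     # one LV for the OSD data
  ceph-volume lvm prepare --bluestore --data ceph-vg/osd-data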

>
> Gr. Stefan
>
> --
> | BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 11:35 AM, Dennis Lijnsveld  wrote:
> On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>>> skip entirely over ceph-disk and our manual osd prepare process ...
>>
>> Yes. I think that for 12.2.1 this was the case as well; in 12.2.2 it is
>> the default.
>
> Just updated Ceph to the 12.2.2 release and afterwards tried
> to prepare an OSD with the following command:
>
> ceph-volume lvm prepare --bluestore --data osd.9/osd.9
>
> in which osd.9 is the name of both the VG and the LV. After running
> the command I got the following error on screen:
>
> -->  ValueError: need more than 1 value to unpack
>
> I checked the log /var/log/ceph-volume.log, which gave me the output
> below. Am I hitting some kind of bug, or am I perhaps doing something
> wrong?

Looks like there is a tag in there that broke it. Let's follow up on a
tracker issue so that we don't hijack this thread?

http://tracker.ceph.com/projects/ceph-volume/issues/new
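
For context, a minimal reproduction of the failure mode (an illustrative
sketch, not the actual ceph_volume.api.lvm source; the assumption, visible in
the lvs output below, is that the osd.9 LV carries an lv_tags entry with no
'=' in it):

  # Python 2, matching the /usr/lib/python2.7 paths in the traceback
  def parse_tags(lv_tags):
      if not lv_tags:
          return {}
      tags = {}
      for tag_assignment in lv_tags.split(','):
          # raises ValueError when an entry contains no '='
          key, value = tag_assignment.split('=', 1)
          tags[key] = value
      return tags

  # the first field of the osd.9 line is a bare UUID, not key=value:
  parse_tags('d77bfa9f-4d8d-40df-852a-692a94929ed2')
  # ValueError: need more than 1 value to unpack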

>
> [2017-12-01 17:25:25,234][ceph_volume.process][INFO  ] Running command:
> ceph-authtool --gen-print-key
> [2017-12-01 17:25:25,278][ceph_volume.process][INFO  ] stdout
> AQB1giFayoNDEBAAtOCZgErrB02Hrs370zBDcA==
> [2017-12-01 17:25:25,279][ceph_volume.process][INFO  ] Running command:
> ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 063c7de3-d4b2-463b-9f56-7a76b0b48197
> [2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] stdout 25
> [2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] Running command:
> sudo lvs --noheadings --separator=";" -o
> lv_tags,lv_path,lv_name,vg_name,lv_uuid
> [2017-12-01 17:25:25,977][ceph_volume.process][INFO  ] stdout
> ";"/dev/LVM0/CEPH";"CEPH";"LVM0";"y4Al1c-SFHH-VARl-XQf3-Qsc8-H3MN-LLIIj4
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
> ";"/dev/LVM0/ROOT";"ROOT";"LVM0";"31V3cd-E2b1-LcDz-2loq-egvh-lz4e-3u20ZN
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
> ";"/dev/LVM0/SWAP";"SWAP";"LVM0";"hI3cNL-sddl-yXFB-BOXT-5R6j-fDtZ-kNixYa
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
> d77bfa9f-4d8d-40df-852a-692a94929ed2";"/dev/osd.9/osd.9";"osd.9";"osd.9";"3NAmK8-U3Fx-KUOm-f8x8-aEtO-MbYh-uPGHhR
> [2017-12-01 17:25:25,979][ceph_volume][ERROR ] exception caught by decorator
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
> line 59, in newfunc
> return f(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 144,
> in main
> terminal.dispatch(self.mapper, subcommand_args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line
> 131, in dispatch
> instance.main()
>   File
> "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line
> 38, in main
> terminal.dispatch(self.mapper, self.argv)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line
> 131, in dispatch
> instance.main()
>   File
> "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 293, in main
> self.prepare(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
> line 16, in is_root
> return func(*a, **kw)
>   File
> "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 206, in prepare
> block_lv = self.get_lv(args.data)
>   File
> "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 102, in get_lv
> return api.get_lv(lv_name=lv_name, vg_name=vg_name)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
> 162, in get_lv
> lvs = Volumes()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
> 411, in __init__
> self._populate()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
> 416, in _populate
> self.append(Volume(**lv_item))
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
> 638, in __init__
> self.tags = parse_tags(kw['lv_tags'])
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
> 66, in parse_tags
> key, value = tag_assignment.split('=', 1)
> ValueError: need more than 1 value to unpack
>
> --
> Dennis Lijnsveld
> BIT BV - http://www.bit.nl
> Kvk: 09090351
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Dennis Lijnsveld
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>> skip entirely over ceph-disk and our manual osd prepare process ...
> 
> Yes. I think that for 12.2.1 this was the case as well; in 12.2.2 it is
> the default.

Just updated Ceph to the 12.2.2 release and afterwards tried
to prepare an OSD with the following command:

ceph-volume lvm prepare --bluestore --data osd.9/osd.9

in which osd.9 is the name of both the VG and the LV. After running
the command I got the following error on screen:

-->  ValueError: need more than 1 value to unpack

I checked the log /var/log/ceph-volume.log, which gave me the output
below. Am I hitting some kind of bug, or am I perhaps doing something
wrong?

[2017-12-01 17:25:25,234][ceph_volume.process][INFO  ] Running command:
ceph-authtool --gen-print-key
[2017-12-01 17:25:25,278][ceph_volume.process][INFO  ] stdout
AQB1giFayoNDEBAAtOCZgErrB02Hrs370zBDcA==
[2017-12-01 17:25:25,279][ceph_volume.process][INFO  ] Running command:
ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
063c7de3-d4b2-463b-9f56-7a76b0b48197
[2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] stdout 25
[2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] Running command:
sudo lvs --noheadings --separator=";" -o
lv_tags,lv_path,lv_name,vg_name,lv_uuid
[2017-12-01 17:25:25,977][ceph_volume.process][INFO  ] stdout
";"/dev/LVM0/CEPH";"CEPH";"LVM0";"y4Al1c-SFHH-VARl-XQf3-Qsc8-H3MN-LLIIj4
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
";"/dev/LVM0/ROOT";"ROOT";"LVM0";"31V3cd-E2b1-LcDz-2loq-egvh-lz4e-3u20ZN
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
";"/dev/LVM0/SWAP";"SWAP";"LVM0";"hI3cNL-sddl-yXFB-BOXT-5R6j-fDtZ-kNixYa
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout
d77bfa9f-4d8d-40df-852a-692a94929ed2";"/dev/osd.9/osd.9";"osd.9";"osd.9";"3NAmK8-U3Fx-KUOm-f8x8-aEtO-MbYh-uPGHhR
[2017-12-01 17:25:25,979][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
line 59, in newfunc
return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 144,
in main
terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line
131, in dispatch
instance.main()
  File
"/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line
38, in main
terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line
131, in dispatch
instance.main()
  File
"/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 293, in main
self.prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
line 16, in is_root
return func(*a, **kw)
  File
"/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 206, in prepare
block_lv = self.get_lv(args.data)
  File
"/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 102, in get_lv
return api.get_lv(lv_name=lv_name, vg_name=vg_name)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
162, in get_lv
lvs = Volumes()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
411, in __init__
self._populate()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
416, in _populate
self.append(Volume(**lv_item))
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
638, in __init__
self.tags = parse_tags(kw['lv_tags'])
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line
66, in parse_tags
key, value = tag_assignment.split('=', 1)
ValueError: need more than 1 value to unpack

-- 
Dennis Lijnsveld
BIT BV - http://www.bit.nl
Kvk: 09090351
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Dietmar Rieder
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
> On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
>> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>>> I think the above roadmap is a good compromise for all involved parties,
>>> and I hope we can use the remainder of Luminous to prepare for a
>>> seam- and painless transition to ceph-volume in time for the Mimic
>>> release, and then finally retire ceph-disk for good!
>>
>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>> skip entirely over ceph-disk and our manual osd prepare process ...
> 
> Yes. I think that for 12.2.1 this was the case as well; in 12.2.2 it is
> the default.


...and will ceph-deploy be ceph-volume capable and default to it in the
12.2.2 release?

Dietmar



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph+RBD+ISCSI = ESXI issue

2017-12-01 Thread nigel davies
Hey,

Ceph version 10.2.5

I have had a Ceph cluster going for a few months, with iSCSI servers that
are linked to Ceph by RBD.

All of a sudden I am seeing the ESXi server lose the iSCSI data
store (disk space goes to 0 B), and I can only fix this by rebooting the iSCSI
server.

When checking syslogs on the iSCSI server I get loads of errors like

SENDING TMR_TASK_DOES_NOT_EXIST for ref_tag: 
(100+ lines of these)

I looked at the logs and can't see anything about hung IO or an OSD going
out and coming back in.

Does anyone have any suggestions on what's going on?


Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seam- and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Yes. I think that for 12.2.1 this was the case as well; in 12.2.2 it is
the default.


>
> Gr. Stefan
>
> --
> | BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Single disk per OSD ?

2017-12-01 Thread Piotr Dałek

On 17-12-01 12:23 PM, Maged Mokhtar wrote:

Hi all,

I believe most existing setups use 1 disk per OSD. Is this going to be the 
most common setup in the future? With the move to LVM, will this favor the 
use of multiple disks per OSD? On the other hand, I also see NVMe vendors 
recommending multiple OSDs (2-4) per disk as disks are getting too fast for 
a single OSD process.


Can anyone shed some light/share recommendations on this, please?


You don't put more than one OSD on a spinning disk because access times will 
kill your performance - they already do [kill your performance], and asking 
HDDs to do double/triple/quadruple/... duty is only going to make it far 
worse. On the other hand, SSD drives have access times so short that 
they're most often bottlenecked by their users and not the SSD itself, so it 
makes perfect sense to put 2-4 OSDs on one SSD.
LVM isn't going to change much in that pattern; it may make it easier to set up 
RAID0 HDD OSDs, but that's a questionable use case, and OSDs with JBODs under 
them are counterproductive (a single disk failure would be caught by Ceph, but 
replacing failed drives will be more difficult -- plus, JBOD OSDs 
significantly extend the damage area once such an OSD fails).
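
As a concrete (hypothetical) illustration of the 2-4 OSDs per fast device
idea with the LVM-based tooling: carve the device into several LVs and prepare
one OSD per LV. The device and VG/LV names here are made up:

  vgcreate ceph-nvme0 /dev/nvme0n1
  lvcreate -n osd-0 -l 25%VG ceph-nvme0
  lvcreate -n osd-1 -l 25%VG ceph-nvme0
  lvcreate -n osd-2 -l 25%VG ceph-nvme0
  lvcreate -n osd-3 -l 25%VG ceph-nvme0
  ceph-volume lvm prepare --bluestore --data ceph-nvme0/osd-0   # and so on per LV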


--
Piotr Dałek
piotr.da...@corp.ovh.com
https://www.ovh.com/us/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Single disk per OSD ?

2017-12-01 Thread Maged Mokhtar
Hi all, 

I believe most existing setups use 1 disk per OSD. Is this going to be
the most common setup in the future? With the move to LVM, will this
favor the use of multiple disks per OSD? On the other hand, I also see
NVMe vendors recommending multiple OSDs (2-4) per disk as disks are
getting too fast for a single OSD process.

Can anyone shed some light/share recommendations on this, please?

Thanks a lot. 

Maged
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-volume lvm for bluestore for newer disk

2017-12-01 Thread Brad Hubbard
On Fri, Dec 1, 2017 at 7:28 PM, nokia ceph  wrote:
> Thanks Brad, that worked. :)

No problem.

I created http://tracker.ceph.com/issues/22297
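
For readers hitting the same thing: the problem, as diagnosed in the quoted
exchange below, is that the inline comment on the fsid line of ceph.conf
apparently ends up being read as part of the value, and from there lands in
the generated VG name. A before/after sketch using the fsid from the report:

  # broken - the trailing comment becomes part of the VG name:
  fsid = b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 # use uuidgen to create an ID, use this for all ceph nodes in your cluster

  # working:
  fsid = b2f1b9b9-eecc-4c17-8b92-cfa60b31c121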

>
> On Fri, Dec 1, 2017 at 12:18 PM, Brad Hubbard  wrote:
>>
>>
>>
>> On Thu, Nov 30, 2017 at 5:30 PM, nokia ceph 
>> wrote:
>> > Hello,
>> >
>> > I'm following
>> >
>> > http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-bluestore
>> > to create new OSD's.
>> >
>> > I took the latest branch from
>> > https://shaman.ceph.com/repos/ceph/luminous/
>> >
>> > # ceph -v
>> > ceph version 12.2.1-851-g6d9f216
>> >
>> > What I did, formatted the device.
>> >
>> > #sgdisk -Z /dev/sdv
>> > Creating new GPT entries.
>> > GPT data structures destroyed! You may now partition the disk using
>> > fdisk or
>> > other utilities.
>> >
>> >
>> > Getting the below error while creating the bluestore OSDs
>> >
>> > # ceph-volume lvm prepare --bluestore  --data /dev/sdv
>> > Running command: sudo vgcreate --force --yes
>> > ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 # use uuidgen to create an ID,
>> > use
>> > this for all ceph nodes in your cluster /dev/sdv
>> >  stderr: Name contains invalid character, valid set includes:
>> > [a-zA-Z0-9.-_+].
>> >   New volume group name "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 # use
>> > uuidgen to create an ID, use this for all ceph nodes in your cluster" is
>> > invalid.
>> >   Run `vgcreate --help' for more information.
>> > -->  RuntimeError: command returned non-zero exit status: 3
>>
>> Can you remove the comment "# use `uuidgen` to generate your own UUID"
>> from the
>> line for 'fsid' in your ceph.conf and try again?
>>
>> >
>> > # grep fsid /etc/ceph/ceph.conf
>> > fsid = b2f1b9b9-eecc-4c17-8b92-cfa60b31c121
>> >
>> >
>> > My question
>> >
>> > 1. We have 68 disks per server so for all the 68 disks sharing same
>> > Volume
>> > group --> "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121" ?
>> > 2. Why ceph-volume failed to create vg name with this name, even I
>> > manually
>> > tried to create, as it will ask for Physical volume as argument
>> > #vgcreate --force --yes "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121"
>> >   No command with matching syntax recognised.  Run 'vgcreate --help' for
>> > more information.
>> >   Correct command syntax is:
>> >   vgcreate VG_new PV ...
>> >
>> > Please let me know the comments.
>> >
>> > Thanks
>> > Jayaram
>> >
>> >
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Cheers,
>> Brad
>
>



-- 
Cheers,
Brad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-volume lvm for bluestore for newer disk

2017-12-01 Thread nokia ceph
Thanks Brad, that worked. :)

On Fri, Dec 1, 2017 at 12:18 PM, Brad Hubbard  wrote:

>
>
> On Thu, Nov 30, 2017 at 5:30 PM, nokia ceph 
> wrote:
> > Hello,
> >
> > I'm following
> > http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare-bluestore
> > to create new OSD's.
> >
> > I took the latest branch from https://shaman.ceph.com/repos/ceph/luminous/
> >
> > # ceph -v
> > ceph version 12.2.1-851-g6d9f216
> >
> > What I did, formatted the device.
> >
> > #sgdisk -Z /dev/sdv
> > Creating new GPT entries.
> > GPT data structures destroyed! You may now partition the disk using
> fdisk or
> > other utilities.
> >
> >
> > Getting the below error while creating the bluestore OSDs
> >
> > # ceph-volume lvm prepare --bluestore  --data /dev/sdv
> > Running command: sudo vgcreate --force --yes
> > ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 # use uuidgen to create an
> ID, use
> > this for all ceph nodes in your cluster /dev/sdv
> >  stderr: Name contains invalid character, valid set includes:
> > [a-zA-Z0-9.-_+].
> >   New volume group name "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121 # use
> > uuidgen to create an ID, use this for all ceph nodes in your cluster" is
> > invalid.
> >   Run `vgcreate --help' for more information.
> > -->  RuntimeError: command returned non-zero exit status: 3
>
> Can you remove the comment "# use `uuidgen` to generate your own UUID"
> from the
> line for 'fsid' in your ceph.conf and try again?
>
> >
> > # grep fsid /etc/ceph/ceph.conf
> > fsid = b2f1b9b9-eecc-4c17-8b92-cfa60b31c121
> >
> >
> > My question
> >
> > 1. We have 68 disks per server so for all the 68 disks sharing same
> Volume
> > group --> "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121" ?
> > 2. Why ceph-volume failed to create vg name with this name, even I
> manually
> > tried to create, as it will ask for Physical volume as argument
> > #vgcreate --force --yes "ceph-b2f1b9b9-eecc-4c17-8b92-cfa60b31c121"
> >   No command with matching syntax recognised.  Run 'vgcreate --help' for
> > more information.
> >   Correct command syntax is:
> >   vgcreate VG_new PV ...
> >
> > Please let me know the comments.
> >
> > Thanks
> > Jayaram
> >
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
> --
> Cheers,
> Brad
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Stefan Kooman
Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
> I think the above roadmap is a good compromise for all involved parties,
> and I hope we can use the remainder of Luminous to prepare for a
> seam- and painless transition to ceph-volume in time for the Mimic
> release, and then finally retire ceph-disk for good!

Will the upcoming 12.2.2 release ship with a ceph-volume capable of
doing bluestore on top of LVM? Eager to use ceph-volume for that, and
skip entirely over ceph-disk and our manual osd prepare process ...

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Fabian Grünbichler
On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote:
> Thanks all for your feedback on deprecating ceph-disk, we are very
> excited to be able to move forwards on a much more robust tool and
> process for deploying and handling activation of OSDs, removing the
> dependency on UDEV which has been a tremendous source of constant
> issues.
> 
> Initially (see "killing ceph-disk" thread [0]) we planned for removal
> in Mimic, but we didn't want to introduce the deprecation warnings up
> until we had an out for those who had OSDs deployed in previous
> releases with ceph-disk (we are now able to handle those as well).
> That is the reason ceph-volume, although present since the first
> Luminous release, hasn't been pushed forward much.
> 
> Now that we feel like we can cover almost all cases, we would really
> like to see a wider usage so that we can improve on issues/experience.
> 
> Given that 12.2.2 is already in the process of getting released, we
> can't undo the deprecation warnings for that version, but we will
> remove them for 12.2.3, add them back again in Mimic, which will mean
> ceph-disk will be kept around a bit longer, and finally fully removed
> by N.
> 
> To recap:
> 
> * ceph-disk deprecation warnings will stay for 12.2.2
> * deprecation warnings will be removed in 12.2.3 (and from all later
> Luminous releases)
> * deprecation warnings will be added again in ceph-disk for all Mimic releases
> * ceph-disk will no longer be available for the 'N' release, along
> with the UDEV rules
> 
> I believe these four points address most of the concerns voiced in
> this thread, and should give enough time to port clusters over to
> ceph-volume.
> 
> [0] 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html

Thank you for listening to the feedback - I think most of us know that the
balance that needs to be struck between moving a project forward and
decrufting a code base, versus providing a stable enough interface for
users, is not always easy to find.

I think the above roadmap is a good compromise for all involved parties,
and I hope we can use the remainder of Luminous to prepare for a
seam- and painless transition to ceph-volume in time for the Mimic
release, and then finally retire ceph-disk for good!

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com