>>>Try
>>>ceph-disk -v activate /dev/sdaa1
ceph-disk -v activate /dev/sdaa1
/dev/sdaa1: ambivalent result (probably more filesystems on the device, use
wipefs(8) to see more details)
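For what it's worth, wipefs with no options only lists the signatures it
finds, so it is safe as a first look; wipefs -a actually erases them,
which is destructive and only makes sense on a device that will be
re-prepared anyway:

wipefs /dev/sdaa1
wipefs -a /dev/sdaa1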
>>>as there is probably a partition there. And/or tell us what
>>>/proc/partitions contains,
cat /proc/partitions
major minor #blocks name
....
65 160 2930266584 sdaa
65 161 2930265543 sdaa1
....
>>>and/or what you get from
>>>ceph-disk list
ceph-disk list
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2328, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2317, in main
    args.func(args)
  File "/usr/sbin/ceph-disk", line 2001, in main_list
    tpath = mount(dev=dev, fstype=fs_type, options='')
  File "/usr/sbin/ceph-disk", line 678, in mount
    path,
  File "/usr/lib/python2.7/subprocess.py", line 506, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 493, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
TypeError: execv() arg 2 must contain only strings
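The TypeError at the bottom looks like ceph-disk failed to detect a
filesystem type for the partition and then passed None into the argument
list it hands to mount. A minimal sketch that reproduces the same
failure on Python 2.7 (fs_type = None here is a hypothetical stand-in
for whatever ceph-disk could not detect):

# subprocess builds an argv list for execv(), and execv() rejects any
# element that is not a string; the TypeError fires before mount ever
# runs, so this snippet does not actually mount anything
import subprocess

fs_type = None  # hypothetical: no recognizable filesystem was found
try:
    subprocess.check_call(['mount', '-t', fs_type, '/dev/sdaa1', '/mnt'])
except TypeError as e:
    print(e)  # on Python 2.7: execv() arg 2 must contain only strings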
==================================================================
-----Original Message-----
From: Sage Weil [mailto:[email protected]]
Sent: Thursday, September 05, 2013 6:37 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; [email protected]
Subject: RE: [ceph-users] trouble with ceph-deploy
On Thu, 5 Sep 2013, Pavel Timoschenkov wrote:
> >>>What happens if you do
> >>>ceph-disk -v activate /dev/sdaa1
> >>>on ceph001?
>
> Hi. My issue has not been solved. When I execute ceph-disk -v activate
> /dev/sdaa, all is OK:
> ceph-disk -v activate /dev/sdaa
Try
ceph-disk -v activate /dev/sdaa1
as there is probably a partition there. And/or tell us what /proc/partitions
contains, and/or what you get from
ceph-disk list
Thanks!
sage
> DEBUG:ceph-disk:Mounting /dev/sdaa on /var/lib/ceph/tmp/mnt.yQuXIa
> with options noatime
> mount: Structure needs cleaning
> but the OSD is still not created:
> ceph -k ceph.client.admin.keyring -s
> cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
> health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
> monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2,
> quorum 0 ceph001
> osdmap e1: 0 osds: 0 up, 0 in
> pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB
> avail
> mdsmap e1: 0/0/1 up
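(A side note on the "Structure needs cleaning" error quoted above: that
is how XFS reports on-disk corruption at mount time. Assuming the
filesystem on /dev/sdaa is XFS, a non-destructive check would be:

xfs_repair -n /dev/sdaa

where -n only reports problems and never writes to the device.)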
>
> -----Original Message-----
> From: Sage Weil [mailto:[email protected]]
> Sent: Friday, August 30, 2013 6:14 PM
> To: Pavel Timoschenkov
> Cc: Alfredo Deza; [email protected]
> Subject: Re: [ceph-users] trouble with ceph-deploy
>
> On Fri, 30 Aug 2013, Pavel Timoschenkov wrote:
>
> >
> > <<<< Can you share the output of the commands that do not work for you?
> > <<<< How did `create` not work? What did you see in the logs?
> >
> >
> >
> > In the logs everything looks good. After
> >
> > ceph-deploy disk zap ceph001:sdaa ceph001:sda1
> >
> > and
> >
> > ceph-deploy osd create ceph001:sdaa:/dev/sda1
> >
> > where:
> >
> > HOST: ceph001
> >
> > DISK: sdaa
> >
> > JOURNAL: /dev/sda1
> >
> > in log:
> >
> > ==============================================
> >
> > cat ceph.log
> >
> > 2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster
> > ceph disks ceph001:/dev/sdaa:/dev/sda1
> >
> > 2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to
> > ceph001
> >
> > 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is
> > now ready for osd use.
> >
> > 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host
> > ceph001 disk /dev/sdaa journal /dev/sda1 activate True
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >
> > But:
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >
> > ceph -k ceph.client.admin.keyring -s
> >
> > cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
> >
> > health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean;
> > no osds
> >
> > monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch
> > 2, quorum 0 ceph001
> >
> > osdmap e1: 0 osds: 0 up, 0 in
> >
> > pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB /
> > 0 KB avail
> >
> > mdsmap e1: 0/0/1 up
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >
> > And
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >
> > ceph -k ceph.client.admin.keyring osd tree
> >
> > # id weight type name up/down reweight
> >
> > -1 0 root default
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >
> > The OSD is not created :(
>
> What happens if you do
>
> ceph-disk -v activate /dev/sdaa1
>
> on ceph001?
>
> sage
>
>
> >
> >
> >
> > From: Alfredo Deza [mailto:[email protected]]
> > Sent: Thursday, August 29, 2013 5:41 PM
> > To: Pavel Timoschenkov
> > Cc: [email protected]
> > Subject: Re: [ceph-users] trouble with ceph-deploy
> >
> > On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov
> > <[email protected]> wrote:
> >
> > Hi.
> >
> > If I use the example from the docs:
> >
> > http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
> >
> > ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
> > ceph-deploy osd activate ceph001:sdaa:/dev/sda1
> > or
> > ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
> > ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
> >
> > or
> >
> > ceph-deploy osd create ceph001:sdaa:/dev/sda1
> >
> > The OSD is not created. No errors, but when I execute
> >
> > ceph -k ceph.client.admin.keyring -s
> >
> > I see the following:
> >
> > cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
> > health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean;
> > no osds
> > monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch
> > 2, quorum 0 ceph001
> > osdmap e1: 0 osds: 0 up, 0 in
> > pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB /
> > 0 KB avail
> > mdsmap e1: 0/0/1 up
> >
> >
> >
> > 0 OSD.
> >
> >
> >
> > But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK
> > argument, it works, though only with the prepare + activate
> > sequence:
> >
> > ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> > ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> >
> > If I use CREATE, the OSD is not created either.
> >
> > From: Alfredo Deza [mailto:[email protected]]
> > Sent: Thursday, August 29, 2013 4:36 PM
> > To: Pavel Timoschenkov
> > Cc: [email protected]
> > Subject: Re: [ceph-users] trouble with ceph-deploy
> >
> > On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov
> > <[email protected]> wrote:
> >
> > Hi.
> > A new problem with ceph-deploy. When I'm executing:
> >
> > ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
> > ceph-deploy osd activate ceph001:sdaa:/dev/sda1
> > or
> > ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
> > ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
> >
> >
> >
> > Have you tried with
> >
> > ceph-deploy osd create ceph001:sdaa:/dev/sda1
> >
> > ?
> >
> > `create` should do `prepare` and `activate` for you. Also be mindful
> > that the arguments need to be passed in a form like:
> >
> > HOST:DISK[:JOURNAL]
> >
> > Where JOURNAL is completely optional; this is also detailed here:
> > http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
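> > For example, both of these are accepted forms (the second omits
> > JOURNAL, in which case the journal ends up on the OSD disk itself):
> >
> > ceph-deploy osd create ceph001:sdaa:/dev/sda1
> > ceph-deploy osd create ceph001:sdaa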
> >
> > Have you followed those instructions to deploy your OSDs?
> >
> >
> >
> >
> > OSD not created:
> >
> > ceph -k ceph.client.admin.keyring -s
> > cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
> > health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck
> > unclean; no osds
> > monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0},
> > election epoch 2, quorum 0 ceph001
> > osdmap e1: 0 osds: 0 up, 0 in
> > pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB
> > used, 0 KB / 0 KB avail
> > mdsmap e1: 0/0/1 up
> >
> > ceph -k ceph.client.admin.keyring osd tree
> > # id weight type name up/down reweight
> > -1 0 root default
> >
> > but if I create a folder for the ceph data and execute:
> >
> > ceph-deploy osd prepare
> > ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> > ceph-deploy osd activate
> > ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> >
> > Those do not look right to me.
> >
> >
> >
> > OSD created:
> >
> > ceph -k ceph.client.admin.keyring -s
> > cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
> > health HEALTH_WARN 192 pgs stuck inactive; 192 pgs
> > stuck unclean
> > monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0},
> > election epoch 2, quorum 0 ceph001
> > osdmap e5: 1 osds: 1 up, 1 in
> > pgmap v6: 192 pgs: 192 creating; 0 bytes data, 0 KB
> > used, 0 KB / 0 KB avail
> > mdsmap e1: 0/0/1 up
> >
> > ceph -k ceph.client.admin.keyring osd
> > tree
> > # id weight type name up/down reweight
> > -1 0.03999 root default
> > -2 0.03999 host ceph001
> > 0 0.03999 osd.0 up 1
> >
> > Is this a bug, or should I mount the data disks to some
> > directory?
> >
> >
> > And one more thing:
> > the 'ceph-deploy osd create' construction doesn't work for me,
> > only 'prepare' + 'activate'.
> >
> >
> >
> > When you say `create` didn't work for you, how so? What output did
> > you see? Can you share some logs/output?
> >
> >
> >
> > Can you share the output of the commands that do not work for you?
> > How did `create` not work? What did you see in the logs?
> >
> >
> > dpkg -s ceph-deploy
> > Version: 1.2.1-1precise
> >
> >
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com