New post below...

From: Greg <[email protected]>
Sent: Monday, May 06, 2013 2:31 PM
To: Glen Aidukas
Subject: Re: [ceph-users] problem readding an osd

On 06/05/2013 20:05, Glen Aidukas wrote:
Greg,

I'm not sure where to use the -d switch.  I tried the following:

                service ceph start -d
                service ceph -d start

Neither works.
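
Reading a bit further, it looks like -d is a flag for the OSD daemon itself
rather than for the init script.  If I have that right, something like this
(assuming osd id 2 and the default config path) should run the daemon in the
foreground and log to the console:

                ceph-osd -i 2 -d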

I did see an error in my log though...

2013-05-06 13:03:38.432479 7f0007ef2780 -1 filestore(/srv/ceph/osd/osd.2) 
limited size xattrs -- filestore_xattr_use_omap enabled
2013-05-06 13:03:38.438563 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
FIEMAP ioctl is supported and appears to work
2013-05-06 13:03:38.438591 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-05-06 13:03:38.438804 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
did NOT detect btrfs
2013-05-06 13:03:38.484841 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
syncfs(2) syscall fully supported (by glibc and kernel)
2013-05-06 13:03:38.485010 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
found snaps <>
2013-05-06 13:03:38.488631 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-05-06 13:03:38.488936 7f0007ef2780  1 journal _open 
/srv/ceph/osd/osd.2/journal fd 19: 1048576000 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2013-05-06 13:03:38.489095 7f0007ef2780  1 journal _open 
/srv/ceph/osd/osd.2/journal fd 19: 1048576000 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2013-05-06 13:03:38.490116 7f0007ef2780  1 journal close 
/srv/ceph/osd/osd.2/journal
2013-05-06 13:03:38.538302 7f0007ef2780 -1 filestore(/srv/ceph/osd/osd.2) 
limited size xattrs -- filestore_xattr_use_omap enabled
2013-05-06 13:03:38.559813 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
FIEMAP ioctl is supported and appears to work
2013-05-06 13:03:38.559848 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-05-06 13:03:38.560082 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
did NOT detect btrfs
2013-05-06 13:03:38.566015 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
syncfs(2) syscall fully supported (by glibc and kernel)
2013-05-06 13:03:38.566106 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) mount 
found snaps <>
2013-05-06 13:03:38.569047 7f0007ef2780  0 filestore(/srv/ceph/osd/osd.2) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-05-06 13:03:38.569237 7f0007ef2780  1 journal _open 
/srv/ceph/osd/osd.2/journal fd 27: 1048576000 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2013-05-06 13:03:38.569316 7f0007ef2780  1 journal _open 
/srv/ceph/osd/osd.2/journal fd 27: 1048576000 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2013-05-06 13:03:38.574317 7f0007ef2780  1 journal close 
/srv/ceph/osd/osd.2/journal
2013-05-06 13:03:38.574801 7f0007ef2780 -1  ** ERROR: osd init failed: (1) 
Operation not permitted
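
Could 'Operation not permitted' be an authentication (cephx) problem?  If the
OSD's key no longer matches what the monitors have, I'm guessing it would fail
like this.  If I'm reading the docs right, something along these lines would
re-register the key (the capability strings and keyring path are my
assumptions, taken from the standard add-an-OSD procedure, so treat this as a
sketch):

                ceph auth del osd.2
                ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /srv/ceph/osd/osd.2/keyring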


Glen Aidukas  [Manager IT Infrastructure]


From: [email protected] On Behalf Of Greg
Sent: Monday, May 06, 2013 1:47 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: [ceph-users] problem readding an osd

On 06/05/2013 19:23, Glen Aidukas wrote:
Hello,

I think this is a newbie question, but I tested everything and, yes, I RTFM'd
as best I could.

I'm evaluating Ceph, so I set up a cluster of 4 nodes.  The nodes are KVM
virtual machines named ceph01 to ceph04, all running Ubuntu 12.04.2 LTS, each
with a single OSD named osd.1 through osd.4, respective to the host it runs
on.  Each host also has a 1TB disk ('/dev/vdb1') for Ceph to use.
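
For reference, the OSD side of my ceph.conf looks roughly like this (typed
from memory, so treat it as a sketch; the data and journal paths match what
shows up in my logs):

                [osd]
                        osd data = /srv/ceph/osd/osd.$id
                        osd journal = /srv/ceph/osd/osd.$id/journal
                        osd journal size = 1000

                [osd.2]
                        host = ceph02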

After some work I was able to get the cluster up and running, and even mounted
it on a test client host (named ceph00).  I ran into issues when I was testing
a failure: I shut off ceph02 and watched via 'ceph -w' as the cluster
recovered and moved the data around.  At that point all was fine.

When I turned the host back on, it did not automatically reconnect; I expected
this.  I then went through many attempts to re-add it, but all failed.

Here is the output from 'ceph osd tree':

# id    weight  type name       up/down reweight
-1      4       root default
-3      4               rack unknownrack
-2      1                       host ceph01
1       1                               osd.1   up      1
-4      1                       host ceph02
2       1                               osd.2   down    0
-5      1                       host ceph03
3       1                               osd.3   up      1
-6      1                       host ceph04
4       1                               osd.4   up      1
-7      0               rack unkownrack

ceph -s
   health HEALTH_WARN 208 pgs peering; 208 pgs stuck inactive; 208 pgs stuck 
unclean; 1/4 in osds are down
   monmap e1: 1 mons at {a=10.30.20.81:6789/0}, election epoch 1, quorum 0 a
   osdmap e172: 4 osds: 3 up, 4 in
    pgmap v1970: 960 pgs: 752 active+clean, 208 peering; 5917 MB data, 61702 MB 
used, 2854 GB / 3068 GB avail
   mdsmap e39: 1/1/1 up {0=a=up:active}
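
If it helps, the stuck PGs can apparently be listed with the following (going
by the docs; I haven't had a chance to dig through its output yet):

                ceph pg dump_stuck inactive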

While I'm able to get it into the 'in' state, I can't seem to bring it up.
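
(By 'in' I mean I can mark it in with

                ceph osd in 2

but the daemon on ceph02 never comes up.)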

Any ideas on how to fix this?

Glen,

Try to bring up your OSD daemon with the -d switch; this will probably give
you some information.  (Alternatively, look in the logs.)
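
For example, something along these lines should show the most recent entries
(the exact file name depends on your configuration; this assumes the common
default location):

                tail -n 100 /var/log/ceph/ceph-osd.2.log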

Cheers,
Glen,

1/ please reply to the list so everyone can benefit
2/ please quote downwards as it is easier to read
3/ did "testing everything" not include reading the logs?
4/ now that you know you have an error, you can dig into your problem

Regards,

Greg,


1)      I hit reply and didn't realize it was going directly to you until after
I hit send.  I will make sure I reply to the mailing list for future posts.


2)      Got it!



3)      I did see the logs at one point but forgot to mention them in the
original post.  My bad... :(



4)      I did dig into this error but was not able to determine the cause.  
Like I said, I did a lot of research before posting.


I'm sure I'm stuck looking in the wrong direction.  This is why I came here,
to have my head pointed the right way.  :)

Regards,

-Glen


