On Sun, Mar 22, 2015 at 11:22 AM, Somnath Roy <somnath....@sandisk.com> wrote:
> You should have replicated copies on other OSDs (disks), so there's no need
> to worry about data loss. Add a new drive and follow the steps in one of the
> links below (either 1 or 2).

Except that's not the case if you only had one copy of the PG, as
seems to be indicated by the "last acting [1]" output all over that
health warning. :/
You certainly should have a copy of the data elsewhere, but that
message means you *didn't*; presumably you had 2 copies of everything
and either your CRUSH map was bad (which should have provoked lots of
warnings?) or you've lost more than one OSD.
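You can sanity-check both with something like the following (a rough sketch;
pool names and output formats vary by version):

  ceph osd tree                        # which OSDs exist, and which are down/out
  ceph osd dump | grep size            # per-pool replica counts
  ceph osd getcrushmap -o crush.bin    # extract the compiled CRUSH map...
  crushtool -d crush.bin -o crush.txt  # ...and decompile it for inspection

If the rules in crush.txt look sane and the pools say size 2, then the second
explanation (losing more than one OSD) is the likelier one.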
-Greg

>
> 1. For manual deployment, 
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
>
> 2. With ceph-deploy, 
> http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
>
> After a successful deployment, rebalancing should start and the cluster
> will eventually return to a healthy state.
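>
> As a rough sketch (the links above are the authoritative steps; OSD ids may
> be reused), swapping out the dead OSD looks roughly like this:
>
>   ceph osd out 1                  # mark osd.1 out so its PGs remap
>   ceph osd crush remove osd.1     # drop it from the CRUSH map
>   ceph auth del osd.1             # remove its cephx key
>   ceph osd rm 1                   # remove the OSD id from the cluster
>   # then prepare the replacement drive, e.g. with ceph-deploy:
>   ceph-deploy osd create <host>:<new-data-disk>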
>
> Thanks & Regards
> Somnath
>
>
> -----Original Message-----
> From: Noah Mehl [mailto:noahm...@combinedpublic.com]
> Sent: Sunday, March 22, 2015 11:15 AM
> To: Somnath Roy
> Cc: ceph-users@lists.ceph.com
> Subject: Re: Can't Start OSD
>
> Somnath,
>
> You are correct, there are dmesg errors about the drive.  How can I replace 
> the drive?  Can I copy all of the readable contents from this drive to a new 
> one?  Because I have the following output from “ceph health detail”:
>
> HEALTH_WARN 43 pgs stale; 43 pgs stuck stale
> pg 7.5b7 is stuck stale for 5954121.993990, current state stale+active+clean, last acting [1]
> pg 7.42a is stuck stale for 5954121.993885, current state stale+active+clean, last acting [1]
> pg 7.669 is stuck stale for 5954121.994072, current state stale+active+clean, last acting [1]
> pg 7.121 is stuck stale for 5954121.993586, current state stale+active+clean, last acting [1]
> pg 7.4ec is stuck stale for 5954121.993956, current state stale+active+clean, last acting [1]
> pg 7.1e4 is stuck stale for 5954121.993670, current state stale+active+clean, last acting [1]
> pg 7.41f is stuck stale for 5954121.993901, current state stale+active+clean, last acting [1]
> pg 7.59f is stuck stale for 5954121.994024, current state stale+active+clean, last acting [1]
> pg 7.39 is stuck stale for 5954121.993490, current state stale+active+clean, last acting [1]
> pg 7.584 is stuck stale for 5954121.994026, current state stale+active+clean, last acting [1]
> pg 7.fd is stuck stale for 5954121.993600, current state stale+active+clean, last acting [1]
> pg 7.6fd is stuck stale for 5954121.994158, current state stale+active+clean, last acting [1]
> pg 7.4b5 is stuck stale for 5954121.993975, current state stale+active+clean, last acting [1]
> pg 7.328 is stuck stale for 5954121.993840, current state stale+active+clean, last acting [1]
> pg 7.4a9 is stuck stale for 5954121.993981, current state stale+active+clean, last acting [1]
> pg 7.569 is stuck stale for 5954121.994046, current state stale+active+clean, last acting [1]
> pg 7.629 is stuck stale for 5954121.994119, current state stale+active+clean, last acting [1]
> pg 7.623 is stuck stale for 5954121.994118, current state stale+active+clean, last acting [1]
> pg 7.6dd is stuck stale for 5954121.994179, current state stale+active+clean, last acting [1]
> pg 7.3d5 is stuck stale for 5954121.993935, current state stale+active+clean, last acting [1]
> pg 7.54b is stuck stale for 5954121.994058, current state stale+active+clean, last acting [1]
> pg 7.3cf is stuck stale for 5954121.993938, current state stale+active+clean, last acting [1]
> pg 7.c4 is stuck stale for 5954121.993633, current state stale+active+clean, last acting [1]
> pg 7.178 is stuck stale for 5954121.993719, current state stale+active+clean, last acting [1]
> pg 7.3b8 is stuck stale for 5954121.993946, current state stale+active+clean, last acting [1]
> pg 7.b1 is stuck stale for 5954121.993635, current state stale+active+clean, last acting [1]
> pg 7.5fb is stuck stale for 5954121.994146, current state stale+active+clean, last acting [1]
> pg 7.236 is stuck stale for 5954121.993801, current state stale+active+clean, last acting [1]
> pg 7.2f5 is stuck stale for 5954121.993881, current state stale+active+clean, last acting [1]
> pg 7.ac is stuck stale for 5954121.993643, current state stale+active+clean, last acting [1]
> pg 7.16d is stuck stale for 5954121.993738, current state stale+active+clean, last acting [1]
> pg 7.6b7 is stuck stale for 5954121.994223, current state stale+active+clean, last acting [1]
> pg 7.5ea is stuck stale for 5954121.994166, current state stale+active+clean, last acting [1]
> pg 7.a3 is stuck stale for 5954121.993654, current state stale+active+clean, last acting [1]
> pg 7.52d is stuck stale for 5954121.994110, current state stale+active+clean, last acting [1]
> pg 7.2d8 is stuck stale for 5954121.993904, current state stale+active+clean, last acting [1]
> pg 7.2db is stuck stale for 5954121.993903, current state stale+active+clean, last acting [1]
> pg 7.5d9 is stuck stale for 5954121.994181, current state stale+active+clean, last acting [1]
> pg 7.395 is stuck stale for 5954121.993989, current state stale+active+clean, last acting [1]
> pg 7.38e is stuck stale for 5954121.993988, current state stale+active+clean, last acting [1]
> pg 7.13a is stuck stale for 5954121.993766, current state stale+active+clean, last acting [1]
> pg 7.683 is stuck stale for 5954121.994255, current state stale+active+clean, last acting [1]
> pg 7.439 is stuck stale for 5954121.994079, current state stale+active+clean, last acting [1]
>
> It’s osd id=1 that’s problematic, but shouldn’t I have a replica of the data
> somewhere else?
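>
> For reference, one way to see where a given PG is supposed to live (using
> 7.5b7 from the output above as an example):
>
>   ceph pg map 7.5b7      # prints the up and acting OSD sets for the PG
>
> If acting only ever shows [1], the cluster thinks osd.1 held the sole copy.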
>
> Thanks!
>
> ~Noah
>
>> On Mar 22, 2015, at 2:04 PM, Somnath Roy <somnath....@sandisk.com> wrote:
>>
>> Are you seeing any errors related to the disk (where the OSD is mounted) in
>> dmesg?
>> It could be leveldb corruption or a ceph bug.
>> Unfortunately, there is not enough logging in that portion of the code base
>> to reveal exactly why we are not getting the infos object back from leveldb :-(
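>>
>> (To rule the disk in or out, something along these lines usually does it;
>> /dev/sdX here stands for whatever device backs /var/lib/ceph/osd/ceph-1:)
>>
>>   dmesg | egrep -i 'error|xfs'    # kernel-side disk/filesystem complaints
>>   sudo smartctl -a /dev/sdX       # drive health, if smartmontools is installed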
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
>> Of Noah Mehl
>> Sent: Sunday, March 22, 2015 10:11 AM
>> To: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Can't Start OSD
>>
>> In production for over a year, and no upgrades.
>>
>> Thanks!
>>
>> ~Noah
>>
>>> On Mar 22, 2015, at 1:01 PM, Somnath Roy <somnath....@sandisk.com> wrote:
>>>
>>> Noah,
>>> Is this a fresh installation, or one after an upgrade?
>>>
>>> It seems related to omap (leveldb) stuff.
>>>
>>> Thanks & Regards
>>> Somnath
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
>>> Of Noah Mehl
>>> Sent: Sunday, March 22, 2015 9:34 AM
>>> To: ceph-users@lists.ceph.com
>>> Subject: [ceph-users] Can't Start OSD
>>>
>>> I have an OSD that’s failing to start.  I can’t make heads or tails of the 
>>> error (pasted below).
>>>
>>> Thanks!
>>>
>>> ~Noah
>>>
>>> 2015-03-22 16:32:39.265116 7f4da7fa0780  0 ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process ceph-osd, pid 13483
>>> 2015-03-22 16:32:39.269499 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
>>> 2015-03-22 16:32:39.269509 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1)  disabling 'filestore replica fadvise' due to known issues with fadvise(DONTNEED) on xfs
>>> 2015-03-22 16:32:39.450031 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
>>> 2015-03-22 16:32:39.450069 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
>>> 2015-03-22 16:32:39.450743 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
>>> 2015-03-22 16:32:39.499753 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
>>> 2015-03-22 16:32:39.500078 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
>>> 2015-03-22 16:32:40.765736 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
>>> 2015-03-22 16:32:40.777156 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>> 2015-03-22 16:32:40.777278 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>> 2015-03-22 16:32:40.778223 7f4da7fa0780  1 journal close /var/lib/ceph/osd/ceph-1/journal
>>> 2015-03-22 16:32:41.066655 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
>>> 2015-03-22 16:32:41.150578 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
>>> 2015-03-22 16:32:41.150624 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
>>> 2015-03-22 16:32:41.151359 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
>>> 2015-03-22 16:32:41.225302 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
>>> 2015-03-22 16:32:41.225498 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
>>> 2015-03-22 16:32:42.375558 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
>>> 2015-03-22 16:32:42.382958 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 1429: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>> 2015-03-22 16:32:42.383187 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 1481: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>> 2015-03-22 16:32:43.076434 7f4da7fa0780 -1 osd/PG.cc: In function 'static epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::bufferlist*)' thread 7f4da7fa0780 time 2015-03-22 16:32:43.075101
>>> osd/PG.cc: 2270: FAILED assert(values.size() == 1)
>>>
>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>> 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::buffer::list*)+0x4d7) [0x70ebf7]
>>> 2: (OSD::load_pgs()+0x14ce) [0x694efe]
>>> 3: (OSD::init()+0x11be) [0x69cffe]
>>> 4: (main()+0x1d09) [0x5c3509]
>>> 5: (__libc_start_main()+0xed) [0x7f4da5bde76d]
>>> 6: /usr/bin/ceph-osd() [0x5c6e1d]
>>> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>>>
>>> --- begin dump of recent events ---
>>>  -75> 2015-03-22 16:32:39.259280 7f4da7fa0780  5 asok(0x1aec1c0) register_command perfcounters_dump hook 0x1ae4010
>>>  -74> 2015-03-22 16:32:39.259373 7f4da7fa0780  5 asok(0x1aec1c0) register_command 1 hook 0x1ae4010
>>>  -73> 2015-03-22 16:32:39.259393 7f4da7fa0780  5 asok(0x1aec1c0) register_command perf dump hook 0x1ae4010
>>>  -72> 2015-03-22 16:32:39.259429 7f4da7fa0780  5 asok(0x1aec1c0) register_command perfcounters_schema hook 0x1ae4010
>>>  -71> 2015-03-22 16:32:39.259445 7f4da7fa0780  5 asok(0x1aec1c0) register_command 2 hook 0x1ae4010
>>>  -70> 2015-03-22 16:32:39.259453 7f4da7fa0780  5 asok(0x1aec1c0) register_command perf schema hook 0x1ae4010
>>>  -69> 2015-03-22 16:32:39.259467 7f4da7fa0780  5 asok(0x1aec1c0) register_command config show hook 0x1ae4010
>>>  -68> 2015-03-22 16:32:39.259481 7f4da7fa0780  5 asok(0x1aec1c0) register_command config set hook 0x1ae4010
>>>  -67> 2015-03-22 16:32:39.259495 7f4da7fa0780  5 asok(0x1aec1c0) register_command config get hook 0x1ae4010
>>>  -66> 2015-03-22 16:32:39.259505 7f4da7fa0780  5 asok(0x1aec1c0) register_command log flush hook 0x1ae4010
>>>  -65> 2015-03-22 16:32:39.259519 7f4da7fa0780  5 asok(0x1aec1c0) register_command log dump hook 0x1ae4010
>>>  -64> 2015-03-22 16:32:39.259536 7f4da7fa0780  5 asok(0x1aec1c0) register_command log reopen hook 0x1ae4010
>>>  -63> 2015-03-22 16:32:39.265116 7f4da7fa0780  0 ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process ceph-osd, pid 13483
>>>  -62> 2015-03-22 16:32:39.266443 7f4da7fa0780  1 -- 192.168.41.42:0/0 learned my addr 192.168.41.42:0/0
>>>  -61> 2015-03-22 16:32:39.266462 7f4da7fa0780  1 accepter.accepter.bind my_inst.addr is 192.168.41.42:6803/13483 need_addr=0
>>>  -60> 2015-03-22 16:32:39.266500 7f4da7fa0780  1 -- 192.168.42.42:0/0 learned my addr 192.168.42.42:0/0
>>>  -59> 2015-03-22 16:32:39.266537 7f4da7fa0780  1 accepter.accepter.bind my_inst.addr is 192.168.42.42:6802/13483 need_addr=0
>>>  -58> 2015-03-22 16:32:39.266551 7f4da7fa0780  1 -- 192.168.42.42:0/0 learned my addr 192.168.42.42:0/0
>>>  -57> 2015-03-22 16:32:39.266560 7f4da7fa0780  1 accepter.accepter.bind my_inst.addr is 192.168.42.42:6803/13483 need_addr=0
>>>  -56> 2015-03-22 16:32:39.266580 7f4da7fa0780  1 -- 192.168.41.42:0/0 learned my addr 192.168.41.42:0/0
>>>  -55> 2015-03-22 16:32:39.266602 7f4da7fa0780  1 accepter.accepter.bind my_inst.addr is 192.168.41.42:6808/13483 need_addr=0
>>>  -54> 2015-03-22 16:32:39.269108 7f4da7fa0780  5 asok(0x1aec1c0) init /var/run/ceph/ceph-osd.1.asok
>>>  -53> 2015-03-22 16:32:39.269138 7f4da7fa0780  5 asok(0x1aec1c0) bind_and_listen /var/run/ceph/ceph-osd.1.asok
>>>  -52> 2015-03-22 16:32:39.269185 7f4da7fa0780  5 asok(0x1aec1c0) register_command 0 hook 0x1ae30b0
>>>  -51> 2015-03-22 16:32:39.269203 7f4da7fa0780  5 asok(0x1aec1c0) register_command version hook 0x1ae30b0
>>>  -50> 2015-03-22 16:32:39.269206 7f4da7fa0780  5 asok(0x1aec1c0) register_command git_version hook 0x1ae30b0
>>>  -49> 2015-03-22 16:32:39.269210 7f4da7fa0780  5 asok(0x1aec1c0) register_command help hook 0x1ae40d0
>>>  -48> 2015-03-22 16:32:39.269231 7f4da7fa0780  5 asok(0x1aec1c0) register_command get_command_descriptions hook 0x1ae40c0
>>>  -47> 2015-03-22 16:32:39.269273 7f4da3c28700  5 asok(0x1aec1c0) entry start
>>>  -46> 2015-03-22 16:32:39.269499 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
>>>  -45> 2015-03-22 16:32:39.269509 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1)  disabling 'filestore replica fadvise' due to known issues with fadvise(DONTNEED) on xfs
>>>  -44> 2015-03-22 16:32:39.450031 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
>>>  -43> 2015-03-22 16:32:39.450069 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
>>>  -42> 2015-03-22 16:32:39.450743 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
>>>  -41> 2015-03-22 16:32:39.499753 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
>>>  -40> 2015-03-22 16:32:39.500078 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
>>>  -39> 2015-03-22 16:32:40.765736 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
>>>  -38> 2015-03-22 16:32:40.777088 7f4da7fa0780  2 journal open /var/lib/ceph/osd/ceph-1/journal fsid e2ad61ec-c581-4159-8671-bab77c7d4e97 fs_op_seq 81852815
>>>  -37> 2015-03-22 16:32:40.777156 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>>  -36> 2015-03-22 16:32:40.777242 7f4da7fa0780  2 journal No further valid entries found, journal is most likely valid
>>>  -35> 2015-03-22 16:32:40.777252 7f4da7fa0780  2 journal No further valid entries found, journal is most likely valid
>>>  -34> 2015-03-22 16:32:40.777255 7f4da7fa0780  3 journal journal_replay: end of journal, done.
>>>  -33> 2015-03-22 16:32:40.777278 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>>  -32> 2015-03-22 16:32:40.777874 7f4d9fc20700  1 FileStore::op_tp worker finish
>>>  -31> 2015-03-22 16:32:40.777930 7f4da0421700  1 FileStore::op_tp worker finish
>>>  -30> 2015-03-22 16:32:40.778223 7f4da7fa0780  1 journal close /var/lib/ceph/osd/ceph-1/journal
>>>  -29> 2015-03-22 16:32:41.066043 7f4da7fa0780 10 monclient(hunting): build_initial_monmap
>>>  -28> 2015-03-22 16:32:41.066137 7f4da7fa0780  5 adding auth protocol: cephx
>>>  -27> 2015-03-22 16:32:41.066147 7f4da7fa0780  5 adding auth protocol: cephx
>>>  -26> 2015-03-22 16:32:41.066384 7f4da7fa0780  1 -- 192.168.41.42:6803/13483 messenger.start
>>>  -25> 2015-03-22 16:32:41.066418 7f4da7fa0780  1 -- :/0 messenger.start
>>>  -24> 2015-03-22 16:32:41.066444 7f4da7fa0780  1 -- 192.168.41.42:6808/13483 messenger.start
>>>  -23> 2015-03-22 16:32:41.066469 7f4da7fa0780  1 -- 192.168.42.42:6803/13483 messenger.start
>>>  -22> 2015-03-22 16:32:41.066512 7f4da7fa0780  1 -- 192.168.42.42:6802/13483 messenger.start
>>>  -21> 2015-03-22 16:32:41.066610 7f4da7fa0780  2 osd.1 0 mounting /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
>>>  -20> 2015-03-22 16:32:41.066655 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
>>>  -19> 2015-03-22 16:32:41.150578 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
>>>  -18> 2015-03-22 16:32:41.150624 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
>>>  -17> 2015-03-22 16:32:41.151359 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
>>>  -16> 2015-03-22 16:32:41.225302 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
>>>  -15> 2015-03-22 16:32:41.225498 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
>>>  -14> 2015-03-22 16:32:42.375558 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
>>>  -13> 2015-03-22 16:32:42.382825 7f4da7fa0780  2 journal open /var/lib/ceph/osd/ceph-1/journal fsid e2ad61ec-c581-4159-8671-bab77c7d4e97 fs_op_seq 81852815
>>>  -12> 2015-03-22 16:32:42.382958 7f4da7fa0780  1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 1429: 5368709120 bytes, block size 4096 bytes, directio = 1, aio = 1
>>>  -11> 2015-03-22 16:32:42.383091 7f4da7fa0780  2 journal No further valid entries found, journal is most likely valid
>>>  -10> 2015-03-22 16:32:42.383108 7f4da7fa0780  2 journal No further valid entries found, journal is most likely valid
>>>   -9> 2015-03-22 16:32:42.383111 7f4da7fa0780  3 journal journal_replay: 
>>> end of journal, done.
>>>   -8> 2015-03-22 16:32:42.383187 7f4da7fa0780  1 journal _open 
>>> /var/lib/ceph/osd/ceph-1/journal fd 1481: 5368709120 bytes, block size 4096 
>>> bytes, directio = 1, aio = 1
>>>   -7> 2015-03-22 16:32:42.383761 7f4da7fa0780  2 osd.1 0 boot
>>>   -6> 2015-03-22 16:32:42.388322 7f4da7fa0780  1 <cls> 
>>> cls/rgw/cls_rgw.cc:1596: Loaded rgw class!
>>>   -5> 2015-03-22 16:32:42.389272 7f4da7fa0780  1 <cls> 
>>> cls/log/cls_log.cc:313: Loaded log class!
>>>   -4> 2015-03-22 16:32:42.392742 7f4da7fa0780  1 <cls> 
>>> cls/refcount/cls_refcount.cc:231: Loaded refcount class!
>>>   -3> 2015-03-22 16:32:42.393520 7f4da7fa0780  1 <cls> 
>>> cls/statelog/cls_statelog.cc:306: Loaded log class!
>>>   -2> 2015-03-22 16:32:42.394181 7f4da7fa0780  1 <cls> 
>>> cls/replica_log/cls_replica_log.cc:141: Loaded replica log class!
>>>   -1> 2015-03-22 16:32:42.394476 7f4da7fa0780  1 <cls> 
>>> cls/version/cls_version.cc:227: Loaded version class!
>>>    0> 2015-03-22 16:32:43.076434 7f4da7fa0780 -1 osd/PG.cc: In function 'static epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::bufferlist*)' thread 7f4da7fa0780 time 2015-03-22 16:32:43.075101
>>> osd/PG.cc: 2270: FAILED assert(values.size() == 1)
>>>
>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>> 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::buffer::list*)+0x4d7) [0x70ebf7]
>>> 2: (OSD::load_pgs()+0x14ce) [0x694efe]
>>> 3: (OSD::init()+0x11be) [0x69cffe]
>>> 4: (main()+0x1d09) [0x5c3509]
>>> 5: (__libc_start_main()+0xed) [0x7f4da5bde76d]
>>> 6: /usr/bin/ceph-osd() [0x5c6e1d]
>>> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>>>
>>> --- logging levels ---
>>>  0/ 5 none
>>>  0/ 1 lockdep
>>>  0/ 1 context
>>>  1/ 1 crush
>>>  1/ 5 mds
>>>  1/ 5 mds_balancer
>>>  1/ 5 mds_locker
>>>  1/ 5 mds_log
>>>  1/ 5 mds_log_expire
>>>  1/ 5 mds_migrator
>>>  0/ 1 buffer
>>>  0/ 1 timer
>>>  0/ 1 filer
>>>  0/ 1 striper
>>>  0/ 1 objecter
>>>  0/ 5 rados
>>>  0/ 5 rbd
>>>  0/ 5 journaler
>>>  0/ 5 objectcacher
>>>  0/ 5 client
>>>  0/ 5 osd
>>>  0/ 5 optracker
>>>  0/ 5 objclass
>>>  1/ 3 filestore
>>>  1/ 3 journal
>>>  0/ 5 ms
>>>  1/ 5 mon
>>>  0/10 monc
>>>  1/ 5 paxos
>>>  0/ 5 tp
>>>  1/ 5 auth
>>>  1/ 5 crypto
>>>  1/ 1 finisher
>>>  1/ 5 heartbeatmap
>>>  1/ 5 perfcounter
>>>  1/ 5 rgw
>>>  1/ 5 hadoop
>>>  1/ 5 javaclient
>>>  1/ 5 asok
>>>  1/ 1 throttle
>>> -2/-2 (syslog threshold)
>>> -1/-1 (stderr threshold)
>>> max_recent     10000
>>> max_new         1000
>>> log_file /var/log/ceph/ceph-osd.1.log
>>> --- end dump of recent events ---
>>> 2015-03-22 16:32:43.079587 7f4da7fa0780 -1 *** Caught signal (Aborted) **  in thread 7f4da7fa0780
>>>
>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>> 1: /usr/bin/ceph-osd() [0x8001ea]
>>> 2: (()+0xfcb0) [0x7f4da743acb0]
>>> 3: (gsignal()+0x35) [0x7f4da5bf3425]
>>> 4: (abort()+0x17b) [0x7f4da5bf6b8b]
>>> 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f4da654569d]
>>> 6: (()+0xb5846) [0x7f4da6543846]
>>> 7: (()+0xb5873) [0x7f4da6543873]
>>> 8: (()+0xb596e) [0x7f4da654396e]
>>> 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1df) [0x8c5e7f]
>>> 10: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::buffer::list*)+0x4d7) [0x70ebf7]
>>> 11: (OSD::load_pgs()+0x14ce) [0x694efe]
>>> 12: (OSD::init()+0x11be) [0x69cffe]
>>> 13: (main()+0x1d09) [0x5c3509]
>>> 14: (__libc_start_main()+0xed) [0x7f4da5bde76d]
>>> 15: /usr/bin/ceph-osd() [0x5c6e1d]
>>> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>>>
>>> --- begin dump of recent events ---
>>>    0> 2015-03-22 16:32:43.079587 7f4da7fa0780 -1 *** Caught signal (Aborted) **  in thread 7f4da7fa0780
>>>
>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>> 1: /usr/bin/ceph-osd() [0x8001ea]
>>> 2: (()+0xfcb0) [0x7f4da743acb0]
>>> 3: (gsignal()+0x35) [0x7f4da5bf3425]
>>> 4: (abort()+0x17b) [0x7f4da5bf6b8b]
>>> 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f4da654569d]
>>> 6: (()+0xb5846) [0x7f4da6543846]
>>> 7: (()+0xb5873) [0x7f4da6543873]
>>> 8: (()+0xb596e) [0x7f4da654396e]
>>> 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1df) [0x8c5e7f]
>>> 10: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::buffer::list*)+0x4d7) [0x70ebf7]
>>> 11: (OSD::load_pgs()+0x14ce) [0x694efe]
>>> 12: (OSD::init()+0x11be) [0x69cffe]
>>> 13: (main()+0x1d09) [0x5c3509]
>>> 14: (__libc_start_main()+0xed) [0x7f4da5bde76d]
>>> 15: /usr/bin/ceph-osd() [0x5c6e1d]
>>> NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>>>
>>> --- logging levels ---
>>>  0/ 5 none
>>>  0/ 1 lockdep
>>>  0/ 1 context
>>>  1/ 1 crush
>>>  1/ 5 mds
>>>  1/ 5 mds_balancer
>>>  1/ 5 mds_locker
>>>  1/ 5 mds_log
>>>  1/ 5 mds_log_expire
>>>  1/ 5 mds_migrator
>>>  0/ 1 buffer
>>>  0/ 1 timer
>>>  0/ 1 filer
>>>  0/ 1 striper
>>>  0/ 1 objecter
>>>  0/ 5 rados
>>>  0/ 5 rbd
>>>  0/ 5 journaler
>>>  0/ 5 objectcacher
>>>  0/ 5 client
>>>  0/ 5 osd
>>>  0/ 5 optracker
>>>  0/ 5 objclass
>>>  1/ 3 filestore
>>>  1/ 3 journal
>>>  0/ 5 ms
>>>  1/ 5 mon
>>>  0/10 monc
>>>  1/ 5 paxos
>>>  0/ 5 tp
>>>  1/ 5 auth
>>>  1/ 5 crypto
>>>  1/ 1 finisher
>>>  1/ 5 heartbeatmap
>>>  1/ 5 perfcounter
>>>  1/ 5 rgw
>>>  1/ 5 hadoop
>>>  1/ 5 javaclient
>>>  1/ 5 asok
>>>  1/ 1 throttle
>>> -2/-2 (syslog threshold)
>>> -1/-1 (stderr threshold)
>>> max_recent     10000
>>> max_new         1000
>>> log_file /var/log/ceph/ceph-osd.1.log
>>> --- end dump of recent events ---
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
