I tried removing the corrupt log file (and all corrupt_log files) with
no success.

I then tried doing a journal flush and mkjournal.
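
(That is, something along the lines of the following, with 97 being the
osd id taken from /mnt/osd97, run while the osd daemon was stopped:

  ceph-osd -i 97 --flush-journal   # flush the journal into the filestore
  ceph-osd -i 97 --mkjournal       # then create a fresh journal

Exact ids/paths on my end may differ slightly.)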

Here is the log file after trying that (with the enhanced logging
turned on - ~50K lines)

http://dl.dropbox.com/u/766198/ceph-2.log.gz
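
("Enhanced logging" here means the debug settings Sam suggested below,
i.e. in the [osd] section of ceph.conf:

  [osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1
)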

On Mon, Feb 11, 2013 at 5:20 PM, Samuel Just <sam.j...@inktank.com> wrote:
> The actual problem appears to be a corrupted log file.  You should
> rename the directory
> /mnt/osd97/current/corrupt_log_2013-02-08_18:50_2.fa8 out of the way.  Then, restart
> the osd with debug osd = 20, debug filestore = 20, and debug ms = 1 in
> the [osd] section of the ceph.conf.
> -Sam
>
> On Mon, Feb 11, 2013 at 2:21 PM, Mandell Degerness
> <mand...@pistoncloud.com> wrote:
>> Since the attachment didn't work, apparently, here is a link to the log:
>>
>> http://dl.dropbox.com/u/766198/error17.log.gz
>>
>> On Mon, Feb 11, 2013 at 1:42 PM, Samuel Just <sam.j...@inktank.com> wrote:
>>> I don't see the more complete log.
>>> -Sam
>>>
>>> On Mon, Feb 11, 2013 at 11:12 AM, Mandell Degerness
>>> <mand...@pistoncloud.com> wrote:
>>>> Anyone have any thoughts on this?  It looks like I may have to wipe
>>>> out the affected OSDs and rebuild them, but I'm afraid that may result
>>>> in data loss because the old OSD first crush map is still in place :(.
>>>>
>>>> On Fri, Feb 8, 2013 at 1:36 PM, Mandell Degerness
>>>> <mand...@pistoncloud.com> wrote:
>>>>> We ran into an error which appears very much like a bug fixed in 0.44.
>>>>>
>>>>> This cluster is running version:
>>>>>
>>>>> ceph version 0.48.1argonaut 
>>>>> (commit:a7ad701b9bd479f20429f19e6fea7373ca6bba7c)
>>>>>
>>>>> The error line is:
>>>>>
>>>>> Feb  8 18:50:07 192.168.8.14 ceph-osd: 2013-02-08 18:50:07.545682
>>>>> 7f40f9f08700  0 filestore(/mnt/osd97)  error (17) File exists not
>>>>> handled on operation 20 (11279344.0.0, or op 0, counting from 0)
>>>>>
>>>>> A more complete log is attached.
>>>>>
>>>>> First question: is this a known bug fixed in more recent versions?
>>>>>
>>>>> Second question: is there any hope of recovery?