I submitted https://issues.apache.org/jira/browse/AMQ-6931 to capture this
issue. Please add any additional information you think might be useful. If
possible, please attach the db-531.log file to the issue in case that helps
whoever investigates the issue.

So the question now is how you move forward. If you're able to live with
reprocessing the 511 messages that those acks acknowledged, then just
delete that file and continue on without it. But note that if you've got
10GB in the persistence store and the only file that has a problem is the
most-recent one, you're going to very quickly hit the store limit again, so
you'll probably want to figure out why db-29.log is still alive (check the
DLQ) and solve that problem.

If you can't afford to delete the data file, your best remaining option may
be to patch the code yourself to account for the possibility that, as it
sounds like is happening here, some records don't have a full 5-byte header.
You'll want to verify any change you make against the data file in question,
to avoid a code change that misinterprets your data files, so if you're
going to go down that path, you'll want to get very familiar with the file
format of KahaDB journal files.
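To get you started, here's a rough sketch of a header scanner based on my reading of the 5.13.x Journal/DataFileAccessor code: each record appears to begin with a 4-byte big-endian size (which includes the header itself) followed by a 1-byte type. Treat those layout details as assumptions to confirm against the source and against a known-good file. Note that the zero padding at the tail of a preallocated journal file reads back as size 0, which may be what you're seeing.

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class JournalHeaderScan {
    // Assumed record header: 4-byte big-endian size + 1-byte type
    // (1 = user record, 2 = batch control) -- verify against the source.
    static final int RECORD_HEAD_SPACE = 4 + 1;

    /** Reads record headers until EOF or until a header whose size is
     *  less than 5 (the case that trips up readRecord), returning
     *  {size, type} pairs in file order. */
    static List<int[]> scan(InputStream stream) throws IOException {
        List<int[]> headers = new ArrayList<>();
        DataInputStream in = new DataInputStream(stream);
        while (true) {
            int size;
            int type;
            try {
                size = in.readInt();   // big-endian record size, header included
                type = in.readByte();  // record type byte
            } catch (EOFException eof) {
                break; // clean end of file
            }
            headers.add(new int[] { size, type });
            if (size < RECORD_HEAD_SPACE) {
                break; // suspect record; zero padding shows up as size 0
            }
            in.skipBytes(size - RECORD_HEAD_SPACE); // skip the payload
        }
        return headers;
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            for (int[] h : scan(in)) {
                System.out.printf("size=%d type=%d%n", h[0], h[1]);
            }
        }
    }
}
```

Running that over db-531.log (and over a healthy file like db-512.log for comparison) should show you exactly where the short record sits.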

If neither of those options sounds good, I think your best remaining option
is to delete all of the journal files and start over with an empty KahaDB
instance. That way, you'll lose data, but you won't process any messages
more than once.
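For what it's worth, wiping the store is just a matter of deleting the journal and index files while the broker is down. A minimal sketch (the file-name patterns follow the standard KahaDB layout; the directory path is whatever your persistenceAdapter's directory attribute points at):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ResetKahaDb {
    /** Deletes the journal and index files from a KahaDB directory so the
     *  broker starts with an empty store. Only run this with the broker
     *  stopped, and take a backup copy first. */
    static int reset(Path kahaDbDir) throws IOException {
        int deleted = 0;
        try (DirectoryStream<Path> files =
                 Files.newDirectoryStream(kahaDbDir, "{db-*.log,db.data,db.redo,lock}")) {
            for (Path f : files) {
                Files.delete(f);
                deleted++;
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        // pass your broker's kahadb directory as the only argument
        System.out.println(reset(Paths.get(args[0])) + " files removed");
    }
}
```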

Tim

On Wed, Mar 14, 2018 at 2:06 AM, norinos <tainookasir...@gmail.com> wrote:

> >    1. Download the 5.13.1 source code (via a sources JAR or Git).
> >    2. In your debugger, set a breakpoint on the catch block in
> >       DataFileAccessor.readRecord().
> >    3. Attach a debugger to the broker when starting it (use suspend=y,
> >       since this is occurring during initialization).
> >    4. When you hit the breakpoint, use the Drop To Frame feature (in
> >       Eclipse, or similar in whatever debugger you're using) to return to
> >       the beginning of the method, then step through again to confirm that
> >       the initializer of the data local variable is the problem. If it is,
> >       you'll find that location.getSize() is less than 5, and the question
> >       will be "why?"
>
> I tried the above.
>
> My KahaDB consists of "db-29.log" through "db-531.log" (it reached the
> store limit of 10 GB).
> I set a breakpoint at DataFileAccessor.readRecord.
>
> First, "db-512.log" is read and the location size is "9188".
> Next, "db-531.log" is read and the location size is "0".
>
> Also, I checked "db-531.log" using the amq-kahadb-tool:
> https://github.com/Hill30/amq-kahadb-tool
>
> The result is below.
>
> -----------------------------------------------------------------------
> Command statistics:
> - Topics: 0 (messages: 0, +subscriptions: 0, -subscription: 0).
> - Queues: 0 (messages: 0).
> - Other messages: 511.
>
> Commands:
> + CmdType: KAHA_ACK_MESSAGE_FILE_MAP_COMMAND (Count: 511, TotalSize: 10.95
> MB (11478593), ~AvrgSize: 21.94 KB (22463), LastBigSize: 21.94 KB (22463),
> LastSize: 21.94 KB (22463))
> All commands: 511 (Total size: 10.95 MB (11478593).
>
> -----------------------------------------------------------------------
>
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
>
