Thanks.
My co-worker who got the jffs2 dump of the file system reports that
there is a bug in the python script, so my post above is a false
trail.

He analysed the raw JFFS2 data and found it is consistent in that all
the node header and data checksums are correct. There is no corruption
in system.db from the JFFS2 point of view. I can tell that the
corruption occurred during the f/w update on Dec 3 because all the
timestamps for the system.db nodes are during that time period. From
the f/w update log file, sysmgr was stopped at the start of the update
procedure along with everything else. I tried replicating what would
have happened by downgrading my unit to the same version of f/w that
was running at the time of the update, writing the raw JFFS2 to my
unit, and then applying the Dec 3 update. The update completed
successfully and system.db was ok.
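For reference, his check amounts to walking the raw image node by node
and verifying the CRCs and timestamps. Here's a minimal sketch in
Python of the same idea (my reconstruction, not his program; it assumes
a little-endian image and the jffs2_raw_inode layout from the kernel
headers, and the image name / inode number on the command line are
placeholders -- you'd get the inode number from the dirent nodes or
jffs2dump output):

import struct
import sys
import zlib

JFFS2_MAGIC = 0x1985
JFFS2_NODETYPE_INODE = 0xE002
INODE_FMT = '<HHIIIIIHHIIIIIIIBBHII'  # struct jffs2_raw_inode, 68 bytes

def jffs2_crc(buf):
    # JFFS2's CRC-32 starts at 0 with no final inversion, unlike zlib's,
    # so pre/post-invert to map one onto the other.
    return (zlib.crc32(buf, 0xFFFFFFFF) ^ 0xFFFFFFFF) & 0xFFFFFFFF

def check_inode_nodes(image, want_ino):
    off = 0
    while off + 12 <= len(image):
        magic, nodetype, totlen, hdr_crc = struct.unpack_from('<HHII', image, off)
        if magic != JFFS2_MAGIC or hdr_crc != jffs2_crc(image[off:off + 8]):
            off += 4          # nodes are 4-byte aligned; step over padding
            continue
        if nodetype == JFFS2_NODETYPE_INODE and off + 68 <= len(image):
            f = struct.unpack_from(INODE_FMT, image, off)
            ino, version, mtime = f[4], f[5], f[11]
            foff, csize, dsize = f[13], f[14], f[15]
            data_crc, node_crc = f[19], f[20]
            if ino == want_ino:
                node_ok = node_crc == jffs2_crc(image[off:off + 60])
                data_ok = data_crc == jffs2_crc(image[off + 68:off + 68 + csize])
                print('v%-5d mtime=%d range=[%d,%d) node_crc=%s data_crc=%s' % (
                    version, mtime, foff, foff + dsize,
                    'ok' if node_ok else 'BAD', 'ok' if data_ok else 'BAD'))
        off += max((totlen + 3) & ~3, 4)   # next 4-byte-aligned node

if __name__ == '__main__':
    # usage: check_nodes.py <image.jffs2> <inode number of system.db>
    check_inode_nodes(open(sys.argv[1], 'rb').read(), int(sys.argv[2]))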

There weren't any holes in the file; he wrote a program to check for
this specifically.
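The hole check itself is just interval arithmetic over the data nodes.
A sketch, assuming the (offset, dsize) pairs and the file size (isize)
have already been collected from the inode nodes as above:

def find_holes(nodes, isize):
    # nodes: (offset, dsize) pairs from the file's data nodes; isize from
    # the highest-version node.  Anything in [0, isize) not covered by a
    # node is a hole (and, as I understand it, reads back as zeroes).
    covered = sorted((o, o + d) for o, d in nodes if d > 0)
    holes, pos = [], 0
    for start, end in covered:
        if start > pos:
            holes.append((pos, start))
        pos = max(pos, end)
    if pos < isize:
        holes.append((pos, isize))
    return holes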

He successfully simulated the above update three times, but each time
the resulting binary system.db was different (not identical). I'll get
a copy later to do a binary comparison. It may be that other system
events account for the differences. I've asked that a test be run under
sub-optimal power conditions. It seems unlikely this is an sqlite
issue; it may be a file system / hardware / harsh environment thing.
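When I get the copies I'll probably start the comparison with something
along these lines, to see whether the differing bytes line up with
whole db pages (again a sketch; page=4096 is a guess, the real page
size is the big-endian 16-bit value at offset 16 of the SQLite header):

def diff_ranges(path_a, path_b, page=4096):
    a = open(path_a, 'rb').read()
    b = open(path_b, 'rb').read()
    if len(a) != len(b):
        print('lengths differ: %d vs %d' % (len(a), len(b)))
    n, start = min(len(a), len(b)), None
    for i in range(n + 1):
        same = i == n or a[i] == b[i]   # sentinel at i == n closes a run
        if not same and start is None:
            start = i
        elif same and start is not None:
            print('differs at [0x%x, 0x%x), pages %d..%d' % (
                start, i, start // page, (i - 1) // page))
            start = None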


Adam

On Wed, Jan 13, 2016 at 7:00 PM, David Woodhouse <dwmw2 at infradead.org> wrote:
> On Tue, 2016-01-12 at 12:18 -0500, Adam Devita wrote:
>>
>> A co-worker managed to get a copy of the db as interpreted from the
>> jffs2 dump of the file system, extracted by the jffs2dump python
>> script (from github). It is interesting that it is also corrupt but
>> in a different way.
>
> Forgetting sqlite, can you compare the binary files?
>
> JFFS2 creates each file from the log entries, each of which carry a
> sequence number, and cover a given range of the file (not more than a
> 4KiB page).
>
> There should never be any *holes* in the file, which are not covered by
> any data node. Were there in your dump? That would imply that a data
> node was lost (its CRC failed, perhaps, and wasn't caught in time to be
> written out elsewhere).
>
> --
> David Woodhouse                            Open Source Technology Centre
> David.Woodhouse at intel.com                              Intel Corporation
>



-- 
--------------
VerifEye Technologies Inc.
151 Whitehall Dr. Unit 2
Markham, ON
L3R 9T1
