Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-19 Thread Gil Barash via ntfs-3g-devel
Hey,

Thanks a lot. Worked like a charm.

On Mon, Jun 19, 2017 at 10:22 AM, Jean-Pierre André
 wrote:
> Hi,
>
> The fix is available at :
> http://jp-andre.pagesperso-orange.fr/redo-vcn.zip
>
> Jean-Pierre
>
>
> Jean-Pierre André wrote:
>>
>> Hi,
>>
>> Gil Barash via ntfs-3g-devel wrote:
>>>
>>> Hey,
>>>
>>> Downloading the latest did the trick. Thanks.
>>>
>>> I agree that the data replacement operations are easier, but the
>>> bigger problems are when an attribute is added/deleted, because those
>>> operations shift the data for the following redo operations.
>>> I would love to hear the rationale behind this extremely
>>> complicated journal scheme from Microsoft, but that will probably
>>> never happen :-)
>>>
>>> I asked this because I am trying to debug a partition I created by
>>> crashing a Windows 10 machine (you can find it here:
>>>
>>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_shutdown_win10.raw).
>>>
>>>
>>> You can see that operation 239b94 fails the recovery process.
>>>
>>> The undo data of operation 239b94 is "0f", which differs from the "a0"
>>> found at 0x360. I think maybe the operation was targeting the "0f" at
>>> 0x3d8.
>>
>>
>> Yes, this was meant to replace "0f" by "1f" to record a
>> fifth cluster being added to the index.
>>
>>> Note that an earlier operation for the same inode (45), operation
>>> 229f2a (DeleteIndexEntryRoot), is not executed because the undo data
>>> doesn't match. It has a length of 0x78, which is exactly 0x3d8-0x360.
>>>
>>> I tried to trace back to see why operation 229f2a finds the "wrong"
>>> data - perhaps an earlier operation also failed to run - but I haven't
>>> found anything definitive yet.
>>
>>
>> Yes. The offending earlier operation is 0x22705f which
>> puts the vcn at a wrong location. In redo_update_root_vcn()
>> there is some logic which remains to be understood.
>>
>> In the situations I met so far, I had to add 16 to the
>> offset in order to get correct behavior, but for this
>> specific action the correct value to add is 0x70.
>> When forcing this value, the full log can be processed
>> thus restoring the partition to a consistent state.
>>
>> Comparing two different situations leads to a possible
>> explanation: the attribute_offset (0x40) tells where
>> the index entry begins, and the vcn to insert is the last
>> field of the entry, so just add the length of entry (0x78)
>> minus 8.
>>
>> I need some time to check this theory (which probably also
>> applies to redo_update_vcn()).
>>
>> Jean-Pierre
>>
>>>
>>> Regards,
>>> Gil
>>>
>>> On Fri, Jun 16, 2017 at 10:59 AM, Jean-Pierre André
>>>  wrote:


 Hi,

 Update...

 Jean-Pierre André wrote:
>
>
> Gil Barash via ntfs-3g-devel wrote:
>
>> Hey,
>>
>> I have two follow up issues:



 [...]

>>
>> --- 2 ---
>> General question about redo process:
>> In change_resident_expect (called by redo_update_root_vcn, for
>> example) we chose not to apply the redo data if the current buffer
>> state doesn't match the undo data. Note that the operation is
>> considered successful in this case.
>
>
>
> I think the reasoning is as follows:
>
> - the old state was A
> - an update is made, the state is now B
> - a second update is made, the state is now C
> - C is synced to disk, but a failure occurs before
> the syncing is recorded in the log.
>
> When restarting, both updates have to be replayed,
> and when applying the first one, the state on disk
> is not the undo state (which is A), so it is correct
> to apply the update (whose result will be overwritten
> by the second update).
>
> Avoiding the updates if the undo data does not match
> the state on disk probably leads to the same result,
> but IMHO it is safer to make sure the final state is
> what the redo data says. Other updates which overlap
> could be intertwined and they would be processed
> incorrectly when not applied to the same state as in
> the initial execution.



 Well, actually the current code is doing the opposite
 (that is applying the update if the current state matches
 the undo data).

 I have run my tests again after reversing the rule (so
 applying the update if the current state does not match
 the redo data), and the tests fail when the second
 update destroys the attribute (e.g. deleting a file).
 In this situation there is nothing the redo data can
 be compared against (more exactly the current state is
 meaningless and should not be compared to redo data).
 There is not enough information to rebuild the intermediate
 state, the only possible action is doing nothing, and
 some criterion is needed to go this way.

 Jean-Pierre


>> Can you please provide some small explanation or direct me to some
>> form of doc

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-19 Thread Jean-Pierre André

Hi,

The fix is available at :
http://jp-andre.pagesperso-orange.fr/redo-vcn.zip

Jean-Pierre

Jean-Pierre André wrote:

Hi,

Gil Barash via ntfs-3g-devel wrote:

Hey,

Downloading the latest did the trick. Thanks.

I agree that the data replacement operations are easier, but the
bigger problems are when an attribute is added/deleted, because those
operations shift the data for the following redo operations.
I would love to hear the rationale behind this extremely
complicated journal scheme from Microsoft, but that will probably
never happen :-)

I asked this because I am trying to debug a partition I created by
crashing a Windows 10 machine (you can find it here:
https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_shutdown_win10.raw).


You can see that operation 239b94 fails the recovery process.

The undo data of operation 239b94 is "0f", which differs from the "a0"
found at 0x360. I think maybe the operation was targeting the "0f" at
0x3d8.


Yes, this was meant to replace "0f" by "1f" to record a
fifth cluster being added to the index.


Note that an earlier operation for the same inode (45), operation
229f2a (DeleteIndexEntryRoot), is not executed because the undo data
doesn't match. It has a length of 0x78, which is exactly 0x3d8-0x360.

I tried to trace back to see why operation 229f2a finds the "wrong"
data - perhaps an earlier operation also failed to run - but I haven't
found anything definitive yet.


Yes. The offending earlier operation is 0x22705f which
puts the vcn at a wrong location. In redo_update_root_vcn()
there is some logic which remains to be understood.

In the situations I met so far, I had to add 16 to the
offset in order to get correct behavior, but for this
specific action the correct value to add is 0x70.
When forcing this value, the full log can be processed
thus restoring the partition to a consistent state.

Comparing two different situations leads to a possible
explanation: the attribute_offset (0x40) tells where
the index entry begins, and the vcn to insert is the last
field of the entry, so just add the length of entry (0x78)
minus 8.

I need some time to check this theory (which probably also
applies to redo_update_vcn()).

Jean-Pierre



Regards,
Gil

On Fri, Jun 16, 2017 at 10:59 AM, Jean-Pierre André
 wrote:


Hi,

Update...

Jean-Pierre André wrote:


Gil Barash via ntfs-3g-devel wrote:


Hey,

I have two follow up issues:



[...]



--- 2 ---
General question about redo process:
In change_resident_expect (called by redo_update_root_vcn, for
example) we chose not to apply the redo data if the current buffer
state doesn't match the undo data. Note that the operation is
considered successful in this case.



I think the reasoning is as follows:

- the old state was A
- an update is made, the state is now B
- a second update is made, the state is now C
- C is synced to disk, but a failure occurs before
the syncing is recorded in the log.

When restarting, both updates have to be replayed,
and when applying the first one, the state on disk
is not the undo state (which is A), so it is correct
to apply the update (whose result will be overwritten
by the second update).

Avoiding the updates if the undo data does not match
the state on disk probably leads to the same result,
but IMHO it is safer to make sure the final state is
what the redo data says. Other updates which overlap
could be intertwined and they would be processed
incorrectly when not applied to the same state as in
the initial execution.



Well, actually the current code is doing the opposite
(that is applying the update if the current state matches
the undo data).

I have run my tests again after reversing the rule (so
applying the update if the current state does not match
the redo data), and the tests fail when the second
update destroys the attribute (e.g. deleting a file).
In this situation there is nothing the redo data can
be compared against (more exactly the current state is
meaningless and should not be compared to redo data).
There is not enough information to rebuild the intermediate
state, the only possible action is doing nothing, and
some criterion is needed to go this way.

Jean-Pierre



Can you please provide some small explanation or direct me to some
form of documentation as to why it is OK to skip an operation.



I would also be interested to get some form of
documentation...

Regards

Jean-Pierre


I suspect it might cause some kind of "chain-reaction" where future
operations on this "cluster" would also be skipped because they expect
to see the data as it should have been after applying the redo
operation.

Thanks,
Gil







--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
ntfs-3g-devel mailing list
ntfs-3g-devel@lists.sourceforge.net
https://lists.sourceforge.net/

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-18 Thread Jean-Pierre André

Hi,

Gil Barash via ntfs-3g-devel wrote:

Hey,

Downloading the latest did the trick. Thanks.

I agree that the data replacement operations are easier, but the
bigger problems are when an attribute is added/deleted, because those
operations shift the data for the following redo operations.
I would love to hear the rationale behind this extremely
complicated journal scheme from Microsoft, but that will probably
never happen :-)

I asked this because I am trying to debug a partition I created by
crashing a Windows 10 machine (you can find it here:
https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_shutdown_win10.raw).

You can see that operation 239b94 fails the recovery process.

The undo data of operation 239b94 is "0f", which differs from the "a0"
found at 0x360. I think maybe the operation was targeting the "0f" at
0x3d8.


Yes, this was meant to replace "0f" by "1f" to record a
fifth cluster being added to the index.
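Read as an allocation bitmap update, the change is just setting one more bit: 0x0f has bits 0-3 set (four clusters), and setting bit 4 yields 0x1f (five clusters). A hypothetical sketch, not the actual ntfs-3g code:

```c
#include <stdint.h>

/* Hypothetical illustration, not ntfs-3g code: with four clusters
 * recorded, the bitmap byte is 0x0f (bits 0-3 set); recording a
 * fifth cluster sets bit 4, turning 0x0f into 0x1f. */
static uint8_t record_cluster(uint8_t bitmap_byte, unsigned cluster)
{
	return bitmap_byte | (uint8_t)(1u << cluster);
}
```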


Note that an earlier operation for the same inode (45), operation
229f2a (DeleteIndexEntryRoot), is not executed because the undo data
doesn't match. It has a length of 0x78, which is exactly 0x3d8-0x360.

I tried to trace back to see why operation 229f2a finds the "wrong"
data - perhaps an earlier operation also failed to run - but I haven't
found anything definitive yet.


Yes. The offending earlier operation is 0x22705f which
puts the vcn at a wrong location. In redo_update_root_vcn()
there is some logic which remains to be understood.

In the situations I met so far, I had to add 16 to the
offset in order to get correct behavior, but for this
specific action the correct value to add is 0x70.
When forcing this value, the full log can be processed
thus restoring the partition to a consistent state.

Comparing two different situations leads to a possible
explanation: the attribute_offset (0x40) tells where
the index entry begins, and the vcn to insert is the last
field of the entry, so just add the length of entry (0x78)
minus 8.

I need some time to check this theory (which probably also
applies to redo_update_vcn()).
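Under that theory, the corrected update position can be expressed directly. This is a sketch of the idea with illustrative names, not the actual redo_update_root_vcn() code:

```c
#include <stdint.h>

/* Sketch of the theory above (illustrative names, not the actual
 * redo_update_root_vcn() code): the vcn is the last 8-byte field
 * of the index entry, so its position is the start of the entry
 * plus the entry length minus 8. */
static uint32_t vcn_offset(uint32_t attribute_offset, uint32_t entry_length)
{
	return attribute_offset + entry_length - 8;
}
```

For the case discussed here, 0x40 + 0x78 - 8 = 0xb0, i.e. 0x70 past the attribute_offset, matching the correction that allowed the full log to be processed.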

Jean-Pierre



Regards,
Gil

On Fri, Jun 16, 2017 at 10:59 AM, Jean-Pierre André
 wrote:


Hi,

Update...

Jean-Pierre André wrote:


Gil Barash via ntfs-3g-devel wrote:


Hey,

I have two follow up issues:



[...]



--- 2 ---
General question about redo process:
In change_resident_expect (called by redo_update_root_vcn, for
example) we chose not to apply the redo data if the current buffer
state doesn't match the undo data. Note that the operation is
considered successful in this case.



I think the reasoning is as follows:

- the old state was A
- an update is made, the state is now B
- a second update is made, the state is now C
- C is synced to disk, but a failure occurs before
the syncing is recorded in the log.

When restarting, both updates have to be replayed,
and when applying the first one, the state on disk
is not the undo state (which is A), so it is correct
to apply the update (whose result will be overwritten
by the second update).

Avoiding the updates if the undo data does not match
the state on disk probably leads to the same result,
but IMHO it is safer to make sure the final state is
what the redo data says. Other updates which overlap
could be intertwined and they would be processed
incorrectly when not applied to the same state as in
the initial execution.



Well, actually the current code is doing the opposite
(that is applying the update if the current state matches
the undo data).

I have run my tests again after reversing the rule (so
applying the update if the current state does not match
the redo data), and the tests fail when the second
update destroys the attribute (e.g. deleting a file).
In this situation there is nothing the redo data can
be compared against (more exactly the current state is
meaningless and should not be compared to redo data).
There is not enough information to rebuild the intermediate
state, the only possible action is doing nothing, and
some criterion is needed to go this way.

Jean-Pierre



Can you please provide some small explanation or direct me to some
form of documentation as to why it is OK to skip an operation.



I would also be interested to get some form of
documentation...

Regards

Jean-Pierre


I suspect it might cause some kind of "chain-reaction" where future
operations on this "cluster" would also be skipped because they expect
to see the data as it should have been after applying the redo
operation.

Thanks,
Gil






Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-18 Thread Gil Barash via ntfs-3g-devel
Hey,

Downloading the latest did the trick. Thanks.

I agree that the data replacement operations are easier, but the
bigger problems are when an attribute is added/deleted, because those
operations shift the data for the following redo operations.
I would love to hear the rationale behind this extremely
complicated journal scheme from Microsoft, but that will probably
never happen :-)

I asked this because I am trying to debug a partition I created by
crashing a Windows 10 machine (you can find it here:
https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_shutdown_win10.raw).

You can see that operation 239b94 fails the recovery process.

The undo data of operation 239b94 is "0f", which differs from the "a0"
found at 0x360. I think maybe the operation was targeting the "0f" at
0x3d8.
Note that an earlier operation for the same inode (45), operation
229f2a (DeleteIndexEntryRoot), is not executed because the undo data
doesn't match. It has a length of 0x78, which is exactly 0x3d8-0x360.

I tried to trace back to see why operation 229f2a finds the "wrong"
data - perhaps an earlier operation also failed to run - but I haven't
found anything definitive yet.

Regards,
Gil

On Fri, Jun 16, 2017 at 10:59 AM, Jean-Pierre André
 wrote:
>
> Hi,
>
> Update...
>
> Jean-Pierre André wrote:
>>
>> Gil Barash via ntfs-3g-devel wrote:
>>
>>> Hey,
>>>
>>> I have two follow up issues:
>
>
> [...]
>
>>>
>>> --- 2 ---
>>> General question about redo process:
>>> In change_resident_expect (called by redo_update_root_vcn, for
>>> example) we chose not to apply the redo data if the current buffer
>>> state doesn't match the undo data. Note that the operation is
>>> considered successful in this case.
>>
>>
>> I think the reasoning is as follows:
>>
>> - the old state was A
>> - an update is made, the state is now B
>> - a second update is made, the state is now C
>> - C is synced to disk, but a failure occurs before
>> the syncing is recorded in the log.
>>
>> When restarting, both updates have to be replayed,
>> and when applying the first one, the state on disk
>> is not the undo state (which is A), so it is correct
>> to apply the update (whose result will be overwritten
>> by the second update).
>>
>> Avoiding the updates if the undo data does not match
>> the state on disk probably leads to the same result,
>> but IMHO it is safer to make sure the final state is
>> what the redo data says. Other updates which overlap
>> could be intertwined and they would be processed
>> incorrectly when not applied to the same state as in
>> the initial execution.
>
>
> Well, actually the current code is doing the opposite
> (that is applying the update if the current state matches
> the undo data).
>
> I have run my tests again after reversing the rule (so
> applying the update if the current state does not match
> the redo data), and the tests fail when the second
> update destroys the attribute (e.g. deleting a file).
> In this situation there is nothing the redo data can
> be compared against (more exactly the current state is
> meaningless and should not be compared to redo data).
> There is not enough information to rebuild the intermediate
> state, the only possible action is doing nothing, and
> some criterion is needed to go this way.
>
> Jean-Pierre
>
>
>>> Can you please provide some small explanation or direct me to some
>>> form of documentation as to why it is OK to skip an operation.
>>
>>
>> I would also be interested to get some form of
>> documentation...
>>
>> Regards
>>
>> Jean-Pierre
>>
>>> I suspect it might cause some kind of "chain-reaction" where future
>>> operations on this "cluster" would also be skipped because they expect
>>> to see the data as it should have been after applying the redo
>>> operation.
>>>
>>> Thanks,
>>> Gil
>
>
>



Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-16 Thread Jean-Pierre André

Hi,

Update...

Jean-Pierre André wrote:

Gil Barash via ntfs-3g-devel wrote:


Hey,

I have two follow up issues:


[...]



--- 2 ---
General question about redo process:
In change_resident_expect (called by redo_update_root_vcn, for
example) we chose not to apply the redo data if the current buffer
state doesn't match the undo data. Note that the operation is
considered successful in this case.


I think the reasoning is as follows:

- the old state was A
- an update is made, the state is now B
- a second update is made, the state is now C
- C is synced to disk, but a failure occurs before
   the syncing is recorded in the log.

When restarting, both updates have to be replayed,
and when applying the first one, the state on disk
is not the undo state (which is A), so it is correct
to apply the update (whose result will be overwritten
by the second update).

Avoiding the updates if the undo data does not match
the state on disk probably leads to the same result,
but IMHO it is safer to make sure the final state is
what the redo data says. Other updates which overlap
could be intertwined and they would be processed
incorrectly when not applied to the same state as in
the initial execution.


Well, actually the current code is doing the opposite
(that is applying the update if the current state matches
the undo data).

I have run my tests again after reversing the rule (so
applying the update if the current state does not match
the redo data), and the tests fail when the second
update destroys the attribute (e.g. deleting a file).
In this situation there is nothing the redo data can
be compared against (more exactly the current state is
meaningless and should not be compared to redo data).
There is not enough information to rebuild the intermediate
state, the only possible action is doing nothing, and
some criterion is needed to go this way.
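The rule as described (apply the redo data only when the current buffer matches the undo data, and report success either way) can be sketched like this; the function name and signature are illustrative, not the actual change_resident_expect() code:

```c
#include <string.h>

/* Illustrative sketch of the rule described above, not the actual
 * change_resident_expect() code: the redo data is applied only when
 * the current buffer contents match the undo data, and the operation
 * is reported as successful in either case. */
static int apply_if_expected(unsigned char *buf, unsigned offset,
			     const unsigned char *undo,
			     const unsigned char *redo, unsigned length)
{
	if (!memcmp(buf + offset, undo, length))
		memcpy(buf + offset, redo, length);
	return 0; /* success even when the update was skipped */
}
```

This matches the failing-test observation: when a later operation destroyed the attribute, the mismatch makes the code do nothing, which is the only safe action.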

Jean-Pierre


Can you please provide some small explanation or direct me to some
form of documentation as to why it is OK to skip an operation.


I would also be interested to get some form of
documentation...

Regards

Jean-Pierre


I suspect it might cause some kind of "chain-reaction" where future
operations on this "cluster" would also be skipped because they expect
to see the data as it should have been after applying the redo
operation.

Thanks,
Gil






Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-15 Thread Jean-Pierre André

Gil Barash via ntfs-3g-devel wrote:


Hey,

I have two follow up issues:

--- 1 ---

When trying to run the patched version on
"ntfs_poweroff_fullLogs.partition.raw" (provided earlier) all I get
is:

Capacity 533724672 bytes (533 MB)
sectors 1042431 (0xfe7ff), sector size 512
clusters 130303 (0x1fcff), cluster size 4096 (12 bits)
MFT at cluster 43434 (0xa9aa), entry size 1024
4 MFT entries per cluster
* Using initial restart page, syncing from 0xd2170fb, dirty
* Block size 4096 bytes

The command I use is "ntfsrecover
ntfs_poweroff_fullLogs.partition.raw" (trying to use -f or -b gives
the same result). Can you assist please?


This is what I get from the image you posted earlier.
Apparently the initial syncing point is not the same
(I get 0x237146).

[linux@dimension]$ ntfsprogs/ntfsrecover barash2.ntfs
** Fast restart mode detected, data could be lost
   Use option --kill-fast-restart to bypass

[linux@dimension]$ ntfsprogs/ntfsrecover -sk barash2.ntfs
* Syncing successful after playing 3881 actions

[linux@dimension]$ ntfsprogs/ntfsrecover -vsk barash2.ntfs
Capacity 533724672 bytes (533 MB)
sectors 1042431 (0xfe7ff), sector size 512
clusters 130303 (0x1fcff), cluster size 4096 (12 bits)
MFT at cluster 43434 (0xa9aa), entry size 1024
4 MFT entries per cluster
* Using initial restart page, syncing from 0x237146, dirty
* Block size 4096 bytes
[approx 175100 lines deleted]
* Syncing successful after playing 3881 actions
Redo actions which were executed :
Noop
CompensationlogRecord
InitializeFileRecordSegment
DeallocateFileRecordSegment
CreateAttribute
DeleteAttribute
UpdateResidentValue
UpdateMappingPairs
SetNewAttributeSizes
AddIndexEntryAllocation
DeleteIndexEntryAllocation
UpdateFileNameAllocation
SetBitsInNonResidentBitMap
ClearBitsInNonResidentBitMap
ForgetTransaction
OpenAttributeTableDump
AttributeNamesDump
DirtyPageTableDump

[starting chkdsk.exe from Windows 10]

The type of the file system is NTFS.
Volume label is New Volume.

Stage 1: Examining basic file system structure ...
  256 file records processed.
File verification completed.
  0 large file records processed.
  0 bad file records processed.

Stage 2: Examining file name linkage ...
  282 index entries processed.
Index verification completed.
  0 unindexed files scanned.
  0 unindexed files recovered to lost and found.

Stage 3: Examining security descriptors ...
Security descriptor verification completed.
  13 data files processed.

Windows has scanned the file system and found no problems.
No further action is required.

521215 KB total disk space.
 55724 KB in 55 files.
20 KB in 15 indexes.
 0 KB in bad sectors.
  5339 KB in use by the system.

460132 KB available on disk.

  4096 bytes in each allocation unit.
130303 total allocation units on disk.
115033 allocation units available on disk.
[linux@dimension]$

I get the same result on 32-bit and 64-bit CPUs and
on little endian and big endian ones.

I have made minor changes since I posted a patch, but
they should not be very relevant (the posted patch ran
successfully).

Maybe you can get the latest (full) state from
https://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/edge/tree/

Or apply the following two patches to the latest
release ntfs-3g-2017.3.23:
https://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/1797ab5ecd5a544f34c83a9c7efabd0c33e3c954/
and
https://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/5be0b9f62a7eabddbac3119a12b98cc1b4c233d4/



I am sure that I have successfully applied the patch and that I am
using the patched version.

--- 2 ---
General question about redo process:
In change_resident_expect (called by redo_update_root_vcn, for
example) we chose not to apply the redo data if the current buffer
state doesn't match the undo data. Note that the operation is
considered successful in this case.


I think the reasoning is as follows:

- the old state was A
- an update is made, the state is now B
- a second update is made, the state is now C
- C is synced to disk, but a failure occurs before
  the syncing is recorded in the log.

When restarting, both updates have to be replayed,
and when applying the first one, the state on disk
is not the undo state (which is A), so it is correct
to apply the update (whose result will be overwritten
by the second update).

Avoiding the updates if the undo data does not match
the state on disk probably leads to the same result,
but IMHO it is safer to make sure the final state is
what the redo data says. Other updates which overlap
could be intertwined and they would be processed
incorrectly when not applied to the same state as in
the initial execution.


Can you please provide some small explanation or direct me to some
form of documentation as to why it is OK to skip an operation.


I would also be interested to get some form of
documentation...

Regards

Jean-Pierre


I suspect it might cause some kind of "chain-reaction" where future
operations on this "cluster" would also be skipped because they expect
to

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-06-15 Thread Gil Barash via ntfs-3g-devel
Hey,

I have two follow up issues:

--- 1 ---

When trying to run the patched version on
"ntfs_poweroff_fullLogs.partition.raw" (provided earlier) all I get
is:

Capacity 533724672 bytes (533 MB)
sectors 1042431 (0xfe7ff), sector size 512
clusters 130303 (0x1fcff), cluster size 4096 (12 bits)
MFT at cluster 43434 (0xa9aa), entry size 1024
4 MFT entries per cluster
* Using initial restart page, syncing from 0xd2170fb, dirty
* Block size 4096 bytes

The command I use is "ntfsrecover
ntfs_poweroff_fullLogs.partition.raw" (trying to use -f or -b gives
the same result). Can you assist please?
I am sure that I have successfully applied the patch and that I am
using the patched version.

--- 2 ---
General question about redo process:
In change_resident_expect (called by redo_update_root_vcn, for
example) we chose not to apply the redo data if the current buffer
state doesn't match the undo data. Note that the operation is
considered successful in this case.
Can you please provide some small explanation or direct me to some
form of documentation as to why it is OK to skip an operation.
I suspect it might cause some kind of "chain-reaction" where future
operations on this "cluster" would also be skipped because they expect
to see the data as it should have been after applying the redo
operation.

Thanks,
Gil

On Mon, May 22, 2017 at 11:47 AM, Gil Barash  wrote:
>
> Hey,
>
> I was surprised to learn, by your previous comment, that the journal
> format has changed.
>
> It is great to hear that you were able to create a patch to support
> this new format.
>
> I will probably check the ntfsrecover tool (with the patch) in several
> scenarios (using Windows 7, 8 & 10), and compare the "fixed"
> filesystem against the one "fixed" by Windows itself (excluding the
> hibernation file, of course).
>
> I'll be sure to inform you if any other problems arise.
>
> Thanks,
> Gil
>
> On Mon, May 22, 2017 at 11:22 AM, Jean-Pierre André
>  wrote:
> > Hi,
> >
> > It appears that the latest log records are stored very
> > differently in logfile 2.0 (which is the format used by
> > Windows 8 and Windows 10 when the fast restart mode is
> > activated).
> >
> > I have prepared an upgrade to support the new format.
> > The metadata of both your partition images can now be
> > repaired. These are the only partitions which I have
> > checked so far, so problems could still be experienced.
> >
> > The patch against the latest ntfs-3g (v 2017.3.23) is
> > available at :
> > http://jp-andre.pagesperso-orange.fr/logfile-2.0.patch.zip
> >
> > Jean-Pierre
> >
> > Jean-Pierre André wrote:
> >> Gil Barash wrote:
> >>> Hello Jean-Pierre, and thank you for the quick response.
> >>>
> >>> On Thu, May 11, 2017 at 7:38 PM, Jean-Pierre André
> >>>  wrote:
> 
>  Gil Barash wrote:
> > Hello,
> >
> > I'm trying to use the ntfsrecover tool to recover the partition.
> > I have a data disk (~500MB) on a Windows machine. I wrote to it some
> > files while powering off the machine (pulling the cable). The resulting
> > filesystem has some corrupt files - doing "ls" gives me stuff like:
> > ? -? ? ??  ?? file_20K_24385
> > So, I tried to use the ntfsrecover tool in order to fix those file
> > entries. However, it never succeeded (I did this experiment a few times).
> >
> > Here, as an example, is the output of "./ntfsrecover -v
> > --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
> > "disk" is a file representing a partition):
> >
> >>
> >> [...]
> >>
> >>>
> >>> Indeed, I am using Windows 8 (Windows Server 2012R2).
> >>> I don't mind deleting the hibernation file since I'm not going to boot
> >>> from this disk - I just want to extract some files out of it. To the
> >>> best of my understanding, the filesystem should be consistent without
> >>> the hibernation file (i.e. everything written in the hibernation file
> >>> is also written to, or can be extracted from, the filesystem itself).
> >>> Also note that I tried mounting this disk (or actually, a copy of it)
> >>> on a different Windows machine, as a data disk (so the hibernation
> >>> file is not used), and Windows was able to show me a consistent list
> >>> of files (all of the files were readable), which was a bit different
> >>> from the one I got from ntfs-3g.
> >>
> >> I checked that the metadata consistency could be restored
> >> by Windows 10 (but not by Windows 7).
> >>
> 
>  Locating the first record may also be buggy in ntfsrecover.
>  To investigate it, I need the first 16K bytes from the
> >>
> >> I can confirm ntfsrecover could not locate the first log
> >> record (the oldest one which was committed while not
> >> synced).
> >>
>  log file :
>  dd if='/mntpnt/$LogFile' of=temp bs=4096 count=4
>  (important : mount as readonly, replace mntpnt be the
>  actual mount point).
> >>>
> >>> Note that running "ntfsrecover -t --kill-fast-restart
> >>> ntfs_powe

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-05-22 Thread Gil Barash
Hey,

I was surprised to learn, by your previous comment, that the journal
format has changed.

It is great to hear that you were able to create a patch to support
this new format.

I will probably check the ntfsrecover tool (with the patch) in several
scenarios (using Windows 7, 8 & 10), and compare the "fixed"
filesystem against the one "fixed" by windows itself (excluding the
hibernation file, of course).

I'll be sure to inform you if any other problems arise.

Thanks,
Gil

On Mon, May 22, 2017 at 11:22 AM, Jean-Pierre André
 wrote:
> Hi,
>
> It appears that the latest log records are stored very
> differently in logfile 2.0 (which is the format used by
> Windows 8 and Windows 10 when the fast restart mode is
> activated).
>
> I have prepared an upgrade to support the new format.
> The metadata of both your partition images can now be
> repaired. These are the only partitions which I have
> checked so far, so problems could still be experienced.
>
> The patch against the latest ntfs-3g (v 2017.3.23) is
> available at :
> http://jp-andre.pagesperso-orange.fr/logfile-2.0.patch.zip
>
> Jean-Pierre
>
> Jean-Pierre André wrote:
>> Gil Barash wrote:
>>> Hello Jean-Pierre, and thank you for the quick response.
>>>
>>> On Thu, May 11, 2017 at 7:38 PM, Jean-Pierre André
>>>  wrote:

 Gil Barash wrote:
> Hello,
>
> I'm trying to use the ntfsrecover tool to recover the partition.
> I have a data disk (~500MB) on a Windows machine. I wrote to it some
> files while powering off the machine (pulling the cable). The resulting
> filesystem has some corrupt files - doing "ls" gives me stuff like:
> ? -? ? ??  ?? file_20K_24385
> So, I tried to use the ntfsrecover tool in order to fix those file
> entries. However, it never succeeded (I did this experiment a few times).
>
> Here, as an example, is the output of "./ntfsrecover -v
> --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
> "disk" is a file representing a partition):
>
>>
>> [...]
>>
>>>
>>> Indeed, I am using Windows 8 (Windows Server 2012R2).
>>> I don't mind deleting the hibernation file since I'm not going to boot
>>> from this disk - I just want to extract some files out of it. To the
>>> best of my understanding, the filesystem should be consistent without
>>> the hibernation file (i.e. everything written in the hibernation file
>>> is also written to, or can be extracted from, the filesystem itself).
>>> Also note that I tried mounting this disk (or actually, a copy of it)
>>> on a different Windows machine, as a data disk (so the hibernation
>>> file is not used), and Windows was able to show me a consistent list
>>> of files (all of the files were readable), which was a bit different
>>> from the one I got from ntfs-3g.
>>
>> I checked that the consistency of the metadata could be restored
>> by Windows 10 (but not by Windows 7).
>>

 Locating the first record may also be buggy in ntfsrecover.
 To investigate it, I need the first 16K bytes from the
>>
>> I can confirm ntfsrecover could not locate the first log
>> record (the oldest one which was committed while not
>> synced).
>>
 log file :
 dd if='/mntpnt/$LogFile' of=temp bs=4096 count=4
 (important : mount as readonly, replace mntpnt by the
 actual mount point).
>>>
>>> Note that running "ntfsrecover -t --kill-fast-restart
>>> ntfs_poweroff_fullLogs.partition.raw" does seem to work, as a lot of
>>> entries are listed and the print does not end with any kind of error
>>> message (leading me to believe that the last entry printed is indeed
>>> the last valid entry).
>>>
>>> I hope I'm not causing any confusion, but I would like to share two
>>> disks which show different symptoms:
>>> --- 1 ---
>>> ntfsrecover --kill-fast-restart 
>>> /mnt/data/ntfs_poweroff_fullLogs.partition.raw
>>> ** Bad first record at offset 0x288
>>> ** Error : searchlikely() used for syncing
>>> * Syncing failed after playing 0 actions
>>>
>>> LogFile:
>>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.LogFile
>>> Entire partition:
>>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.partition.raw
>>>
>>> --- 2 ---
>>> ntfsrecover --kill-fast-restart /mnt/data/ntfs_poweroff_2.raw
>>> * Reaching free space at end of block 2
>>> * Syncing failed after playing 0 actions
>>>
>>> LogFile: 
>>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntntfs_poweroff_2.LogFile
>>> Entire partition:
>>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_2.raw.bak
>>>
>>
>> You were using this partition with the fast restart mode
>> activated, which implies a different log format (2.0), see
>> https://social.technet.microsoft.com/wiki/contents/articles/15645.windows-8-volume-compatibility-considerations-with-prior-versions-of-windows.aspx
>>
>> I have not yet decoded the format changes, so recovering from a
>> partit

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-05-22 Thread Jean-Pierre André
Hi,

It appears that the latest log records are stored very
differently in logfile 2.0 (which is the format used by
Windows 8 and Windows 10 when the fast restart mode is
activated).

I have prepared an upgrade to support the new format.
The metadata of both your partition images can now be
repaired. These are the only partitions which I have
checked so far, so problems could still be experienced.

The patch against the latest ntfs-3g (v 2017.3.23) is
available at :
http://jp-andre.pagesperso-orange.fr/logfile-2.0.patch.zip

Jean-Pierre

Jean-Pierre André wrote:
> Gil Barash wrote:
>> Hello Jean-Pierre, and thank you for the quick response.
>>
>> On Thu, May 11, 2017 at 7:38 PM, Jean-Pierre André
>>  wrote:
>>>
>>> Gil Barash wrote:
 Hello,

 I'm trying to use the ntfsrecover tool to recover the partition.
 I have a data disk (~500MB) on a Windows machine. I wrote to it some
 files while powering off the machine (pulling the cable). The resulting
 filesystem has some corrupt files - doing "ls" gives me stuff like:
 ? -? ? ??  ?? file_20K_24385
 So, I tried to use the ntfsrecover tool in order to fix those file
 entries. However, it never succeeded (I did this experiment a few times).

 Here, as an example, is the output of "./ntfsrecover -v
 --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
 "disk" is a file representing a partition):

>
> [...]
>
>>
>> Indeed, I am using Windows 8 (Windows Server 2012R2).
>> I don't mind deleting the hibernation file since I'm not going to boot
>> from this disk - I just want to extract some files out of it. To the
>> best of my understanding, the filesystem should be consistent without
>> the hibernation file (i.e. everything written in the hibernation file
>> is also written to, or can be extracted from, the filesystem itself).
>> Also note that I tried mounting this disk (or actually, a copy of it)
>> on a different Windows machine, as a data disk (so the hibernation
>> file is not used), and Windows was able to show me a consistent list
>> of files (all of the files were readable), which was a bit different
>> from the one I got from ntfs-3g.
>
> I checked that the consistency of the metadata could be restored
> by Windows 10 (but not by Windows 7).
>
>>>
>>> Locating the first record may also be buggy in ntfsrecover.
>>> To investigate it, I need the first 16K bytes from the
>
> I can confirm ntfsrecover could not locate the first log
> record (the oldest one which was committed while not
> synced).
>
>>> log file :
>>> dd if='/mntpnt/$LogFile' of=temp bs=4096 count=4
>>> (important : mount as readonly, replace mntpnt by the
>>> actual mount point).
>>
>> Note that running "ntfsrecover -t --kill-fast-restart
>> ntfs_poweroff_fullLogs.partition.raw" does seem to work, as a lot of
>> entries are listed and the print does not end with any kind of error
>> message (leading me to believe that the last entry printed is indeed
>> the last valid entry).
>>
>> I hope I'm not causing any confusion, but I would like to share two
>> disks which show different symptoms:
>> --- 1 ---
>> ntfsrecover --kill-fast-restart 
>> /mnt/data/ntfs_poweroff_fullLogs.partition.raw
>> ** Bad first record at offset 0x288
>> ** Error : searchlikely() used for syncing
>> * Syncing failed after playing 0 actions
>>
>> LogFile:
>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.LogFile
>> Entire partition:
>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.partition.raw
>>
>> --- 2 ---
>> ntfsrecover --kill-fast-restart /mnt/data/ntfs_poweroff_2.raw
>> * Reaching free space at end of block 2
>> * Syncing failed after playing 0 actions
>>
>> LogFile: 
>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntntfs_poweroff_2.LogFile
>> Entire partition:
>> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_2.raw.bak
>>
>
> You were using this partition with the fast restart mode
> activated, which implies a different log format (2.0), see
> https://social.technet.microsoft.com/wiki/contents/articles/15645.windows-8-volume-compatibility-considerations-with-prior-versions-of-windows.aspx
>
> I have not yet decoded the format changes, so recovering from a
> partition used with fast restart mode activated will generally
> fail if there are unsynced committed changes.
>
> Users who want to share data between Windows 8+ and Linux
> should disable the fast restart mode.
>
> Jean-Pierre
>
>> Gil
>>
>>>
>>> Jean-Pierre
>>>

 Thanks,
 Gil





--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
ntfs-3g-devel mailing list
ntfs-3g-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel


Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-05-20 Thread Jean-Pierre André
Gil Barash wrote:
> Hello Jean-Pierre, and thank you for the quick response.
>
> On Thu, May 11, 2017 at 7:38 PM, Jean-Pierre André
>  wrote:
>>
>> Gil Barash wrote:
>>> Hello,
>>>
>>> I'm trying to use the ntfsrecover tool to recover the partition.
>>> I have a data disk (~500MB) on a Windows machine. I wrote to it some
>>> files while powering off the machine (pulling the cable). The resulting
>>> filesystem has some corrupt files - doing "ls" gives me stuff like:
>>> ? -? ? ??  ?? file_20K_24385
>>> So, I tried to use the ntfsrecover tool in order to fix those file
>>> entries. However, it never succeeded (I did this experiment a few times).
>>>
>>> Here, as an example, is the output of "./ntfsrecover -v
>>> --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
>>> "disk" is a file representing a partition):
>>>

[...]

>
> Indeed, I am using Windows 8 (Windows Server 2012R2).
> I don't mind deleting the hibernation file since I'm not going to boot
> from this disk - I just want to extract some files out of it. To the
> best of my understanding, the filesystem should be consistent without
> the hibernation file (i.e. everything written in the hibernation file
> is also written to, or can be extracted from, the filesystem itself).
> Also note that I tried mounting this disk (or actually, a copy of it)
> on a different Windows machine, as a data disk (so the hibernation
> file is not used), and Windows was able to show me a consistent list
> of files (all of the files were readable), which was a bit different
> from the one I got from ntfs-3g.

I checked that the consistency of the metadata could be restored
by Windows 10 (but not by Windows 7).

>>
>> Locating the first record may also be buggy in ntfsrecover.
>> To investigate it, I need the first 16K bytes from the

I can confirm ntfsrecover could not locate the first log
record (the oldest one which was committed while not
synced).

>> log file :
>> dd if='/mntpnt/$LogFile' of=temp bs=4096 count=4
>> (important : mount as readonly, replace mntpnt by the
>> actual mount point).
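For anyone who prefers to script the extraction, the dd command above can be
sketched in Python (a hypothetical helper, not part of ntfs-3g; as noted,
mount read-only first):

```python
def extract_logfile_head(logfile_path, out_path="temp",
                         block_size=4096, count=4):
    """Copy the first `count` blocks of the journal, equivalent to:
    dd if='/mntpnt/$LogFile' of=temp bs=4096 count=4"""
    with open(logfile_path, "rb") as src, open(out_path, "wb") as dst:
        for _ in range(count):
            block = src.read(block_size)
            if not block:          # source shorter than requested
                break
            dst.write(block)
    return out_path
```

With the defaults this yields the first 16 KiB of $LogFile, i.e. the two
restart pages plus the two buffer pages.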
>
> Note that running "ntfsrecover -t --kill-fast-restart
> ntfs_poweroff_fullLogs.partition.raw" does seem to work, as a lot of
> entries are listed and the print does not end with any kind of error
> message (leading me to believe that the last entry printed is indeed
> the last valid entry).
>
> I hope I'm not causing any confusion, but I would like to share two
> disks which show different symptoms:
> --- 1 ---
> ntfsrecover --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw
> ** Bad first record at offset 0x288
> ** Error : searchlikely() used for syncing
> * Syncing failed after playing 0 actions
>
> LogFile:
> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.LogFile
> Entire partition:
> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_fullLogs.partition.raw
>
> --- 2 ---
> ntfsrecover --kill-fast-restart /mnt/data/ntfs_poweroff_2.raw
> * Reaching free space at end of block 2
> * Syncing failed after playing 0 actions
>
> LogFile: 
> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntntfs_poweroff_2.LogFile
> Entire partition:
> https://s3-eu-west-1.amazonaws.com/gilbucket1/ntfs-disks/ntfs_poweroff_2.raw.bak
>

You were using this partition with the fast restart mode
activated, which implies a different log format (2.0), see
https://social.technet.microsoft.com/wiki/contents/articles/15645.windows-8-volume-compatibility-considerations-with-prior-versions-of-windows.aspx

I have not yet decoded the format changes, so recovering from a
partition used with fast restart mode activated will generally
fail if there are unsynced committed changes.
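One way to tell which journal format a volume uses is to look at the version
fields of the $LogFile restart page. A minimal sketch, assuming the field
offsets of ntfs-3g's RESTART_PAGE_HEADER (the two signed 16-bit version
fields sit at offsets 26 and 28); `logfile_version` is a hypothetical
helper, not an ntfsrecover function:

```python
import struct

def logfile_version(restart_page):
    """Return (major, minor) version from a $LogFile restart page.

    Assumed layout (per ntfs-3g's RESTART_PAGE_HEADER): magic (4 bytes),
    usa_ofs/usa_count (2+2), chkdsk_lsn (8), system_page_size (4),
    log_page_size (4), restart_area_offset (2), then minor_vers and
    major_vers as little-endian signed 16-bit at offsets 26 and 28.
    """
    if restart_page[0:4] not in (b"RSTR", b"CHKD"):
        raise ValueError("not a restart page")
    minor_vers, major_vers = struct.unpack_from("<hh", restart_page, 26)
    return major_vers, minor_vers
```

On the dumps in this thread, major_vers is 2 and minor_vers is 0, i.e. the
logfile 2.0 format written when fast restart is active.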

Users who want to share data between Windows 8+ and Linux
should disable the fast restart mode.

Jean-Pierre

> Gil
>
>>
>> Jean-Pierre
>>
>>>
>>> Thanks,
>>> Gil






Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-05-14 Thread Gil Barash
Hello Jean-Pierre, and thank you for the quick response.

On Thu, May 11, 2017 at 7:38 PM, Jean-Pierre André
 wrote:
>
> Gil Barash wrote:
> > Hello,
> >
> > I'm trying to use the ntfsrecover tool to recover the partition.
> > I have a data disk (~500MB) on a Windows machine. I wrote to it some
> > files while powering off the machine (pulling the cable). The resulting
> > filesystem has some corrupt files - doing "ls" gives me stuff like:
> > ? -? ? ??  ?? file_20K_24385
> > So, I tried to use the ntfsrecover tool in order to fix those file
> > entries. However, it never succeeded (I did this experiment a few times).
> >
> > Here, as an example, is the output of "./ntfsrecover -v
> > --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
> > "disk" is a file representing a partition):
> >
> > Capacity 533724672 bytes (533 MB)
> > sectors 1042431 (0xfe7ff), sector size 512
> > clusters 130303 (0x1fcff), cluster size 4096 (12 bits)
> > MFT at cluster 43434 (0xa9aa), entry size 1024
> > 4 MFT entries per cluster
> > * Using initial restart page, syncing from 0xd2170fb, dirty
> > * Block size 4096 bytes
> >
> > * block 0 at 0xa08c000
> > * RSTR in block 0 0x0 (addr 0xa08c000)
> > magic  52545352
> > usa_ofs 001e
> > usa_count  0009
> > chkdsk_lsn 
> > system_page_size   1000
> > log_page_size  1000
> > restart_area_offset 0030
> > minor_vers 0
> > major_vers 2
> > usn 2666
> >
> > current_lsn 0d217473
> > log_clients 0001
> > client_free_list   
> > client_in_use_list 
> > flags  
> > seq_number_bits 002c
> > restart_area_length 00e0
> > client_array_offset 0040
> > file_size  0048c000
> > last_lsn_data_len  0070
> > record_length  0030
> > log_page_data_offs 0040
> > restart_log_open_count 761d453f
> >
> > oldest_lsn 0d2170df
> > client_restart_lsn 0d217473
> > prev_client
> > next_client
> > seq_number 
> > client_name_length 0008
> > client_name NTFS
> >
> > * block 1 at 0xa08d000
> > * RSTR in block 1 0x1 (addr 0xa08d000)
> > magic  52545352
> > usa_ofs 001e
> > usa_count  0009
> > chkdsk_lsn 
> > system_page_size   1000
> > log_page_size  1000
> > restart_area_offset 0030
> > minor_vers 0
> > major_vers 2
> > usn 2667
> >
> > current_lsn 0d2226c2
> > log_clients 0001
> > client_free_list   
> > client_in_use_list 
> > flags  
> > seq_number_bits 002c
> > restart_area_length 00e0
> > client_array_offset 0040
> > file_size  0048c000
> > last_lsn_data_len  0070
> > record_length  0030
> > log_page_data_offs 0040
> > restart_log_open_count 761d453f
> >
> > oldest_lsn 0d2170fb
> > client_restart_lsn 0d2226c2
> > prev_client
> > next_client
> > seq_number 
> > client_name_length 0008
> > client_name NTFS
> > * Ignored block 2 at 0xa08e000
> > magic  44524352
> > usa_ofs 0028
> > usa_count  0009
> > file_offset 0d2281d8
> > flags  0001
> > page_count 1
> > page_position  1
> > next_record_offset 0f18
> > reserved4    
> > last_end_lsn   0d2281d8 (synced+69853)
> > usn b424
> >
> > * Restart page was obsolete
> >
> > * block 2 at 0xa08e000
> > * RCRD in block 2 0x2 (addr 0xa08e000)
> > magic  44524352
> > usa_ofs 0028
> > usa_count  0009
> > file_offset 0d2281d8
> > flags  0001
> > page_count 1
> > page_position  1
> > next_record_offset 0f18
> > reserved4    
> > last_end_lsn   0d2281d8 (synced+69853)
> > usn b424
> >
> > ** Bad first record at offset 0x288
> > this_lsn   0001006800380060 (synced-216625307) synced
> > client_previous_lsn 000805c0
> > client_undo_next_lsn   
> > client_data_length 002c
> > seq_number 0
> > client_index   0
> > record_type c282c8ef
> > transaction_id 01d2a948
> > log_record_flags   ffcf
> > reserved1  5d5c adcf 01d2
> > ** Unknown action type
> > client_data for record type 3263351023
> >   cfff5c5d cfadd201 cfff5c5d cfadd201  ..\]..\]
> > 0010       
> > 0020  0010  efc882c2   
> > ** Error 

Re: [ntfs-3g-devel] ntfsrecover fails to recover due to "Bad first record at offset"

2017-05-11 Thread Jean-Pierre André
Gil Barash wrote:
> Hello,
>
> I'm trying to use the ntfsrecover tool to recover the partition.
> I have a data disk (~500MB) on a Windows machine. I wrote to it some
> files while powering off the machine (pulling the cable). The resulting
> filesystem has some corrupt files - doing "ls" gives me stuff like:
> ? -? ? ??  ?? file_20K_24385
> So, I tried to use the ntfsrecover tool in order to fix those file
> entries. However, it never succeeded (I did this experiment a few times).
>
> Here, as an example, is the output of "./ntfsrecover -v
> --kill-fast-restart /mnt/data/ntfs_poweroff_fullLogs.partition.raw" (my
> "disk" is a file representing a partition):
>
> Capacity 533724672 bytes (533 MB)
> sectors 1042431 (0xfe7ff), sector size 512
> clusters 130303 (0x1fcff), cluster size 4096 (12 bits)
> MFT at cluster 43434 (0xa9aa), entry size 1024
> 4 MFT entries per cluster
> * Using initial restart page, syncing from 0xd2170fb, dirty
> * Block size 4096 bytes
>
> * block 0 at 0xa08c000
> * RSTR in block 0 0x0 (addr 0xa08c000)
> magic  52545352
> usa_ofs 001e
> usa_count  0009
> chkdsk_lsn 
> system_page_size   1000
> log_page_size  1000
> restart_area_offset 0030
> minor_vers 0
> major_vers 2
> usn 2666
>
> current_lsn 0d217473
> log_clients 0001
> client_free_list   
> client_in_use_list 
> flags  
> seq_number_bits 002c
> restart_area_length 00e0
> client_array_offset 0040
> file_size  0048c000
> last_lsn_data_len  0070
> record_length  0030
> log_page_data_offs 0040
> restart_log_open_count 761d453f
>
> oldest_lsn 0d2170df
> client_restart_lsn 0d217473
> prev_client
> next_client
> seq_number 
> client_name_length 0008
> client_name NTFS
>
> * block 1 at 0xa08d000
> * RSTR in block 1 0x1 (addr 0xa08d000)
> magic  52545352
> usa_ofs 001e
> usa_count  0009
> chkdsk_lsn 
> system_page_size   1000
> log_page_size  1000
> restart_area_offset 0030
> minor_vers 0
> major_vers 2
> usn 2667
>
> current_lsn 0d2226c2
> log_clients 0001
> client_free_list   
> client_in_use_list 
> flags  
> seq_number_bits 002c
> restart_area_length 00e0
> client_array_offset 0040
> file_size  0048c000
> last_lsn_data_len  0070
> record_length  0030
> log_page_data_offs 0040
> restart_log_open_count 761d453f
>
> oldest_lsn 0d2170fb
> client_restart_lsn 0d2226c2
> prev_client
> next_client
> seq_number 
> client_name_length 0008
> client_name NTFS
> * Ignored block 2 at 0xa08e000
> magic  44524352
> usa_ofs 0028
> usa_count  0009
> file_offset 0d2281d8
> flags  0001
> page_count 1
> page_position  1
> next_record_offset 0f18
> reserved4    
> last_end_lsn   0d2281d8 (synced+69853)
> usn b424
>
> * Restart page was obsolete
>
> * block 2 at 0xa08e000
> * RCRD in block 2 0x2 (addr 0xa08e000)
> magic  44524352
> usa_ofs 0028
> usa_count  0009
> file_offset 0d2281d8
> flags  0001
> page_count 1
> page_position  1
> next_record_offset 0f18
> reserved4    
> last_end_lsn   0d2281d8 (synced+69853)
> usn b424
>
> ** Bad first record at offset 0x288
> this_lsn   0001006800380060 (synced-216625307) synced
> client_previous_lsn 000805c0
> client_undo_next_lsn   
> client_data_length 002c
> seq_number 0
> client_index   0
> record_type c282c8ef
> transaction_id 01d2a948
> log_record_flags   ffcf
> reserved1  5d5c adcf 01d2
> ** Unknown action type
> client_data for record type 3263351023
>   cfff5c5d cfadd201 cfff5c5d cfadd201  ..\]..\]
> 0010       
> 0020  0010  efc882c2   
> ** Error : searchlikely() used for syncing
> * Syncing failed after playing 0 actions
>
> I tried debugging it a bit, but couldn't find any solid lead.
>
> Do you have any idea why this happens? I would be happy to provide any
> additional information (I can provide the entire disk, if that would help).

The last log record could not be located.
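For readers following the dumps above: the usa_ofs / usa_count / usn fields
are the NTFS multi-sector transfer protection, and the per-sector fixups must
be undone before any record in a log page can be read. A minimal sketch of
that step (semantics assumed from the NTFS on-disk layout as implemented in
ntfs-3g; `apply_usa_fixup` is a hypothetical helper):

```python
def apply_usa_fixup(block, sector_size=512):
    """Undo multi-sector transfer protection on one log page.

    At usa_ofs sits the update sequence number (usn), followed by
    usa_count-1 saved 16-bit values, one per 512-byte sector. On disk
    the last two bytes of each sector were overwritten with usn;
    restore the saved values and verify the stamps match.
    """
    buf = bytearray(block)
    usa_ofs = int.from_bytes(buf[4:6], "little")
    usa_count = int.from_bytes(buf[6:8], "little")
    usn = buf[usa_ofs:usa_ofs + 2]
    for i in range(1, usa_count):
        end = i * sector_size
        if buf[end - 2:end] != usn:           # torn write detected
            raise ValueError("usn mismatch in sector %d" % (i - 1))
        buf[end - 2:end] = buf[usa_ofs + 2 * i:usa_ofs + 2 * i + 2]
    return bytes(buf)
```

A usn mismatch here means the page was only partially written before the
power loss, which is exactly the situation the journal replay has to cope
with.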

Which was the Windows version used? You were trying to