Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-04-04 Thread Jean-Pierre André
Hi again,

Please retry with :
http://jp-andre.pagesperso-orange.fr/dedup124-beta.zip

I had to take the number of index entries from another
record. Up to now, both locations showed the same value,
and I do not know the specific meaning of each. The change
should fix your latest case, and it does not break my own
tests.

Thank you again for your cooperation.

Jean-Pierre

Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> Here is the full Stream directory: /mnt/sr7-sdb2/System\ Volume\
> Information/Dedup/ChunkStore/\{0DECAE8D-71D2-4BDE-8798-530201C72D8D\}.ddp/Stream/
>
>
> # f66ede437b46789891bcddc2ab424cfe  stream.data.full.dir.tar.gz
> https://powermail.nu/nextcloud/index.php/s/JfGOU6O9isWLtMl
>
> I did not see another 0019.*.ccc
>
> Kind regards,
>
> Jelle de Jong
>
> On 04/04/17 13:00, Jean-Pierre André wrote:
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> I can now read the "algemeen.txt" file, thank you!
>>>
>>> However I still got a stream error on another file.
>>
>> You have a very interesting configuration for testing
>> corner cases. I would not be able to get such a
>> rich testbed without your cooperation. Unfortunately
>> this leads to a lot of communication.
>>
>>> http://paste.debian.net/plainh/cb081fa9
>>>
>>> I did not get how you calculated that inode 1545290 was in cluster
>>> 30817428, but assuming this is still the case here is the information:
>>
>> Normally, this calculation is not needed, just do
>> an ntfsinfo, for example :
>> ntfsinfo -fvi 1545290 /dev/...
>>
>> I asked you because you showed an inconsistency, and
>> ntfsinfo could not display the contents (the inode
>> had probably been freed). It is now apparently too
>> late to catch the inconsistency.
>>
>> The explanation for the computation is :
>> - divide 1545290 by 4, which gives 386322 or 0x5e513
>> - look up the run in the mft which contains vcn 0x5e513
>> we find vcn 0x578ac lcn 0x1d5d02d length 0xc813
>> then lcn is 0x1d5d02d + (0x5e513 - 0x578ac) = 0x1d63c94
>> and 0x1d63c94 is 30817428
>>
>>>
>>> http://paste.debian.net/plainh/535a9df8
>>>
>>> Thank you very much!
>>>
>>> If it is a quick fix I can still test it today, almost there.
>>
>> Probably not a quick fix. The reparse data points at offset
>> 90622928, which is beyond the end of 0019.00010002.ccc
>> whose size is 87789056. Moreover the identification of the
>> user file (D8A64E6D...) is not present in 0019.00010002.ccc
>>
>> What comes to mind is that 0019.00010002.ccc was being
>> updated and a newer one should be used.
>>
>> Is there another 0019.*.ccc in the same Stream directory ?
>>
>> Jean-Pierre
>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 03/04/17 11:44, Jean-Pierre André wrote:
 Hi,

 Please try
 http://jp-andre.pagesperso-orange.fr/dedup123-beta.zip

 Regards

 Jean-Pierre

 Jean-Pierre André wrote:
> Hi,
>
> I see what is happening here : this file "algemeen.txt"
> is recorded the traditional way (like mine), not the
> way most of your files are recorded. So I have to improve
> the selection of modes.
>
> I will upload a fix next week.
>
> Was this file created recently ? Maybe, when a file is
> updated, it is recorded differently from when it was created
> initially.
>
> Regards
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> #getfattr -h -e hex -n system.ntfs_reparse_data ...
>> #dd if=/dev/mapper/.. skip=30817428 bs=4096 count=1 | od -t x1
>> #and the other commands outputs:
>> https://powermail.nu/nextcloud/index.php/s/HwzpBqaIzOyZedg
>>
>> # stream.data.full.0019.00010002.gz:
>> https://powermail.nu/nextcloud/index.php/s/KKlByC8qEPKZFrC
>>
>> Thank you in advance,
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 28/03/17 16:32, Jean-Pierre André wrote:
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 When trying to read algemeen.txt I get a reproducible "Bad stream
 for
 offset" (I got a few more files with the same issue)
>>>
>>> So, please post the reparse tag for this file algemeen.txt :
>>> getfattr -h -e hex -n system.ntfs_reparse_data somepath/algemeen.txt
>>>
>>> Also please post the contents of file 1545290. I do not
>>> know its name, but it must be in the Stream directory,
>>> with a suffix .ccc. To get its name do :
>>> ls -li *as*usual*/Stream/*.ccc | grep 1545290
>>>

 # ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
 http://paste.debian.net/plainh/486b1a01
>>>
>>> So inode 1582022 must be in cluster 30826611 and
>>> inode 1545290 in cluster 30817428.
>>> Please post the outputs of :
>>> dd if=/dev/mapper/lvm1--vol* skip=30826611 bs=4096 count=1 | od
>>> -t x1
>>> dd if=/dev/mapper/lvm1--vol* skip=30817428 bs=4096 count=1 | od
>>> -t x1

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-04-04 Thread Jelle de Jong
Dear Jean-Pierre,

Here is the full Stream directory: /mnt/sr7-sdb2/System\ Volume\
Information/Dedup/ChunkStore/\{0DECAE8D-71D2-4BDE-8798-530201C72D8D\}.ddp/Stream/

# f66ede437b46789891bcddc2ab424cfe  stream.data.full.dir.tar.gz
https://powermail.nu/nextcloud/index.php/s/JfGOU6O9isWLtMl

I did not see another 0019.*.ccc

Kind regards,

Jelle de Jong

On 04/04/17 13:00, Jean-Pierre André wrote:
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I can now read the "algemeen.txt" file, thank you!
>>
>> However I still got a stream error on another file.
>
> You have a very interesting configuration for testing
> corner cases. I would not be able to get such a
> rich testbed without your cooperation. Unfortunately
> this leads to a lot of communication.
>
>> http://paste.debian.net/plainh/cb081fa9
>>
>> I did not get how you calculated that inode 1545290 was in cluster
>> 30817428, but assuming this is still the case here is the information:
>
> Normally, this calculation is not needed, just do
> an ntfsinfo, for example :
> ntfsinfo -fvi 1545290 /dev/...
>
> I asked you because you showed an inconsistency, and
> ntfsinfo could not display the contents (the inode
> had probably been freed). It is now apparently too
> late to catch the inconsistency.
>
> The explanation for the computation is :
> - divide 1545290 by 4, which gives 386322 or 0x5e513
> - look up the run in the mft which contains vcn 0x5e513
> we find vcn 0x578ac lcn 0x1d5d02d length 0xc813
> then lcn is 0x1d5d02d + (0x5e513 - 0x578ac) = 0x1d63c94
> and 0x1d63c94 is 30817428
>
>>
>> http://paste.debian.net/plainh/535a9df8
>>
>> Thank you very much!
>>
>> If it is a quick fix I can still test it today, almost there.
>
> Probably not a quick fix. The reparse data points at offset
> 90622928, which is beyond the end of 0019.00010002.ccc
> whose size is 87789056. Moreover the identification of the
> user file (D8A64E6D...) is not present in 0019.00010002.ccc
>
> What comes to mind is that 0019.00010002.ccc was being
> updated and a newer one should be used.
>
> Is there another 0019.*.ccc in the same Stream directory ?
>
> Jean-Pierre
>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 03/04/17 11:44, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Please try
>>> http://jp-andre.pagesperso-orange.fr/dedup123-beta.zip
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
>>> Jean-Pierre André wrote:
 Hi,

 I see what is happening here : this file "algemeen.txt"
 is recorded the traditional way (like mine), not the
 way most of your files are recorded. So I have to improve
 the selection of modes.

 I will upload a fix next week.

 Was this file created recently ? Maybe, when a file is
 updated, it is recorded differently from when it was created
 initially.

 Regards

 Jean-Pierre

 Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> #getfattr -h -e hex -n system.ntfs_reparse_data ...
> #dd if=/dev/mapper/.. skip=30817428 bs=4096 count=1 | od -t x1
> #and the other commands outputs:
> https://powermail.nu/nextcloud/index.php/s/HwzpBqaIzOyZedg
>
> # stream.data.full.0019.00010002.gz:
> https://powermail.nu/nextcloud/index.php/s/KKlByC8qEPKZFrC
>
> Thank you in advance,
>
> Kind regards,
>
> Jelle de Jong
>
> On 28/03/17 16:32, Jean-Pierre André wrote:
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> When trying to read algemeen.txt I get a reproducible "Bad stream
>>> for
>>> offset" (I got a few more files with the same issue)
>>
>> So, please post the reparse tag for this file algemeen.txt :
>> getfattr -h -e hex -n system.ntfs_reparse_data somepath/algemeen.txt
>>
>> Also please post the contents of file 1545290. I do not
>> know its name, but it must be in the Stream directory,
>> with a suffix .ccc. To get its name do :
>> ls -li *as*usual*/Stream/*.ccc | grep 1545290
>>
>>>
>>> # ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>>> http://paste.debian.net/plainh/486b1a01
>>
>> So inode 1582022 must be in cluster 30826611 and
>> inode 1545290 in cluster 30817428.
>> Please post the outputs of :
>> dd if=/dev/mapper/lvm1--vol* skip=30826611 bs=4096 count=1 | od -t x1
>> dd if=/dev/mapper/lvm1--vol* skip=30817428 bs=4096 count=1 | od -t x1
>>
>> The sequencing error which had been detected may have
>> disappeared with no evidence left, but it is worth
>> trying.
>>
>>> I feel we may be almost there (support for data deduplication), but
>>> still hitting a few unreadable files...
>>
>> I hope so; this is an uneasy situation for me, as I
>> cannot reproduce the errors. Thanks for your cooperation
>> with the bug hunt.
>>
>> Jean-Pierre
>>
>>>
>>> Thank you again!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>>

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-04-04 Thread Jean-Pierre André
Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> I can now read the "algemeen.txt" file, thank you!
>
> However I still got a stream error on another file.

You have a very interesting configuration for testing
corner cases. I would not be able to get such a
rich testbed without your cooperation. Unfortunately
this leads to a lot of communication.

> http://paste.debian.net/plainh/cb081fa9
>
> I did not get how you calculated that inode 1545290 was in cluster
> 30817428, but assuming this is still the case here is the information:

Normally, this calculation is not needed, just do
an ntfsinfo, for example :
ntfsinfo -fvi 1545290 /dev/...

I asked you because you showed an inconsistency, and
ntfsinfo could not display the contents (the inode
had probably been freed). It is now apparently too
late to catch the inconsistency.

The explanation for the computation is :
- divide 1545290 by 4, which gives 386322 or 0x5e513
- look up the run in the mft which contains vcn 0x5e513
we find vcn 0x578ac lcn 0x1d5d02d length 0xc813
then lcn is 0x1d5d02d + (0x5e513 - 0x578ac) = 0x1d63c94
and 0x1d63c94 is 30817428
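The computation above can be replayed mechanically. The sketch below uses the hex values quoted in this mail and assumes the standard NTFS layout of 1024-byte MFT records inside 4096-byte clusters (which is where the "divide by 4" comes from); it is an illustration, not ntfs-3g code.

```python
# Replay of the inode-to-cluster translation, using the values quoted
# above. Assumption: 1024-byte MFT records in 4096-byte clusters, so
# four MFT records per cluster ("divide by 4").

vcn = 0x5e513                    # $MFT VCN holding inode 1545290
run_vcn = 0x578ac                # the run found in the $MFT runlist
run_lcn = 0x1d5d02d
run_len = 0xc813

# The wanted VCN must fall inside this run:
assert run_vcn <= vcn < run_vcn + run_len

lcn = run_lcn + (vcn - run_vcn)  # translate VCN to an absolute cluster
print(hex(lcn), lcn)             # 0x1d63c94 30817428
```

The resulting cluster number is what gets passed to dd as skip= with bs=4096.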

>
> http://paste.debian.net/plainh/535a9df8
>
> Thank you very much!
>
> If it is a quick fix I can still test it today, almost there.

Probably not a quick fix. The reparse data points at offset
90622928, which is beyond the end of 0019.00010002.ccc
whose size is 87789056. Moreover the identification of the
user file (D8A64E6D...) is not present in 0019.00010002.ccc
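For illustration, the inconsistency is a plain bounds violation; a hypothetical check (the helper name is invented, not ntfs-3g API) would look like:

```python
# Hypothetical sanity check for the situation described above: the
# offset recorded in the user file's reparse data must fall inside the
# chunk-store stream file it points into.

def reparse_offset_valid(offset, stream_size):
    """True when the deduplicated data offset lies within the stream."""
    return 0 <= offset < stream_size

# Values from this mail: the offset points past the end of the stream.
print(reparse_offset_valid(90622928, 87789056))  # False
```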

What comes to mind is that 0019.00010002.ccc was being
updated and a newer one should be used.

Is there another 0019.*.ccc in the same Stream directory ?

Jean-Pierre

> Kind regards,
>
> Jelle de Jong
>
> On 03/04/17 11:44, Jean-Pierre André wrote:
>> Hi,
>>
>> Please try
>> http://jp-andre.pagesperso-orange.fr/dedup123-beta.zip
>>
>> Regards
>>
>> Jean-Pierre
>>
>> Jean-Pierre André wrote:
>>> Hi,
>>>
>>> I see what is happening here : this file "algemeen.txt"
>>> is recorded the traditional way (like mine), not the
>>> way most of your files are recorded. So I have to improve
>>> the selection of modes.
>>>
>>> I will upload a fix next week.
>>>
>>> Was this file created recently ? Maybe, when a file is
>>> updated, it is recorded differently from when it was created
>>> initially.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 #getfattr -h -e hex -n system.ntfs_reparse_data ...
 #dd if=/dev/mapper/.. skip=30817428 bs=4096 count=1 | od -t x1
 #and the other commands outputs:
 https://powermail.nu/nextcloud/index.php/s/HwzpBqaIzOyZedg

 # stream.data.full.0019.00010002.gz:
 https://powermail.nu/nextcloud/index.php/s/KKlByC8qEPKZFrC

 Thank you in advance,

 Kind regards,

 Jelle de Jong

 On 28/03/17 16:32, Jean-Pierre André wrote:
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> When trying to read algemeen.txt I get a reproducible "Bad stream for
>> offset" (I got a few more files with the same issue)
>
> So, please post the reparse tag for this file algemeen.txt :
> getfattr -h -e hex -n system.ntfs_reparse_data somepath/algemeen.txt
>
> Also please post the contents of file 1545290. I do not
> know its name, but it must be in the Stream directory,
> with a suffix .ccc. To get its name do :
> ls -li *as*usual*/Stream/*.ccc | grep 1545290
>
>>
>> # ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> http://paste.debian.net/plainh/486b1a01
>
> So inode 1582022 must be in cluster 30826611 and
> inode 1545290 in cluster 30817428.
> Please post the outputs of :
> dd if=/dev/mapper/lvm1--vol* skip=30826611 bs=4096 count=1 | od -t x1
> dd if=/dev/mapper/lvm1--vol* skip=30817428 bs=4096 count=1 | od -t x1
>
> The sequencing error which had been detected may have
> disappeared with no evidence left, but it is worth
> trying.
>
>> I feel we may be almost there (support for data deduplication), but
>> still hitting a few unreadable files...
>
> I hope so; this is an uneasy situation for me, as I
> cannot reproduce the errors. Thanks for your cooperation
> with the bug hunt.
>
> Jean-Pierre
>
>>
>> Thank you again!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> On 23/03/17 22:03, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you again!

 Could you take a look at the following output? I got some "Bad
 stream
 for offset" messages: http://paste.debian.net/plainh/c145e066

 Follow-up on the previous mail

 I was not able to get any node information, and have not been
 able to
 reproduce the NTFS messages.

 ntfsinfo -fvi 1582022
 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-04-03 Thread Jean-Pierre André
Hi,

Please try
http://jp-andre.pagesperso-orange.fr/dedup123-beta.zip

Regards

Jean-Pierre

Jean-Pierre André wrote:
> Hi,
>
> I see what is happening here : this file "algemeen.txt"
> is recorded the traditional way (like mine), not the
> way most of your files are recorded. So I have to improve
> the selection of modes.
>
> I will upload a fix next week.
>
> Was this file created recently ? Maybe, when a file is
> updated, it is recorded differently from when it was created
> initially.
>
> Regards
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> #getfattr -h -e hex -n system.ntfs_reparse_data ...
>> #dd if=/dev/mapper/.. skip=30817428 bs=4096 count=1 | od -t x1
>> #and the other commands outputs:
>> https://powermail.nu/nextcloud/index.php/s/HwzpBqaIzOyZedg
>>
>> # stream.data.full.0019.00010002.gz:
>> https://powermail.nu/nextcloud/index.php/s/KKlByC8qEPKZFrC
>>
>> Thank you in advance,
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 28/03/17 16:32, Jean-Pierre André wrote:
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 When trying to read algemeen.txt I get a reproducible "Bad stream for
 offset" (I got a few more files with the same issue)
>>>
>>> So, please post the reparse tag for this file algemeen.txt :
>>> getfattr -h -e hex -n system.ntfs_reparse_data somepath/algemeen.txt
>>>
>>> Also please post the contents of file 1545290. I do not
>>> know its name, but it must be in the Stream directory,
>>> with a suffix .ccc. To get its name do :
>>> ls -li *as*usual*/Stream/*.ccc | grep 1545290
>>>

 # ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
 http://paste.debian.net/plainh/486b1a01
>>>
>>> So inode 1582022 must be in cluster 30826611 and
>>> inode 1545290 in cluster 30817428.
>>> Please post the outputs of :
>>> dd if=/dev/mapper/lvm1--vol* skip=30826611 bs=4096 count=1 | od -t x1
>>> dd if=/dev/mapper/lvm1--vol* skip=30817428 bs=4096 count=1 | od -t x1
>>>
>>> The sequencing error which had been detected may have
>>> disappeared with no evidence left, but it is worth
>>> trying.
>>>
 I feel we may be almost there (support for data deduplication), but
 still hitting a few unreadable files...
>>>
>>> I hope so; this is an uneasy situation for me, as I
>>> cannot reproduce the errors. Thanks for your cooperation
>>> with the bug hunt.
>>>
>>> Jean-Pierre
>>>

 Thank you again!

 Kind regards,

 Jelle de Jong


 On 23/03/17 22:03, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you again!
>>
>> Could you take a look at the following output? I got some "Bad stream
>> for offset" messages: http://paste.debian.net/plainh/c145e066
>>
>> Follow-up on the previous mail
>>
>> I was not able to get any node information, and have not been able to
>> reproduce the NTFS messages.
>>
>> ntfsinfo -fvi 1582022
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> Error loading node: No such file or directory
>
> Before digging into the "Bad stream for offset" messages,
> I would like to get details about those inconsistencies
> before they are wiped out (hoping it is not too late).
> Maybe the issues are related, maybe they are not.
>
> The ntfsinfo error means either that the faulty inodes
> have been freed or they are extents to other inodes.
>
> I will have to do it in two steps : first determine where
> they are located, then extract their contents.
>
> To get the mapping of inodes, please post the output of
> ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7*
>
> Jean-Pierre
>
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 21/03/17 12:01, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 I am still testing the dedup plugin. I have issues with
 rdiff-backup
 where the checksums of the source and destination are not the same,
 but
 this may be an issue with rdiff-backup itself.

 However, I did get these messages, but I do not know if they are
 related
 to the dedup plugin.

 Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1582022 has wrong
 SeqNo
 (159 <> 156)
 Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
 inode 1582022
 Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1545290 has wrong
 SeqNo
 (296 <> 295)
 Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
 inode 1545290
>>>
>>> These errors are thrown by ntfs-3g proper, not by the
>>> deduplication plugin.
>>>
>>> Inconsistencies have been found in the mentioned files,
>>> and they are unlikely to be created by the plugin (which
>>> does not have updating support), ... but one never knows.

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-03-30 Thread Jelle de Jong
Dear Jean-Pierre,

#getfattr -h -e hex -n system.ntfs_reparse_data ...
#dd if=/dev/mapper/.. skip=30817428 bs=4096 count=1 | od -t x1
#and the other commands outputs:
https://powermail.nu/nextcloud/index.php/s/HwzpBqaIzOyZedg

# stream.data.full.0019.00010002.gz:
https://powermail.nu/nextcloud/index.php/s/KKlByC8qEPKZFrC

Thank you in advance,

Kind regards,

Jelle de Jong

On 28/03/17 16:32, Jean-Pierre André wrote:
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> When trying to read algemeen.txt I get a reproducible "Bad stream for
>> offset" (I got a few more files with the same issue)
>
> So, please post the reparse tag for this file algemeen.txt :
> getfattr -h -e hex -n system.ntfs_reparse_data somepath/algemeen.txt
>
> Also please post the contents of file 1545290. I do not
> know its name, but it must be in the Stream directory,
> with a suffix .ccc. To get its name do :
> ls -li *as*usual*/Stream/*.ccc | grep 1545290
>
>>
>> # ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> http://paste.debian.net/plainh/486b1a01
>
> So inode 1582022 must be in cluster 30826611 and
> inode 1545290 in cluster 30817428.
> Please post the outputs of :
> dd if=/dev/mapper/lvm1--vol* skip=30826611 bs=4096 count=1 | od -t x1
> dd if=/dev/mapper/lvm1--vol* skip=30817428 bs=4096 count=1 | od -t x1
>
> The sequencing error which had been detected may have
> disappeared with no evidence left, but it is worth
> trying.
>
>> I feel we may be almost there (support for data deduplication), but
>> still hitting a few unreadable files...
>
> I hope so; this is an uneasy situation for me, as I
> cannot reproduce the errors. Thanks for your cooperation
> with the bug hunt.
>
> Jean-Pierre
>
>>
>> Thank you again!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> On 23/03/17 22:03, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you again!

 Could you take a look at the following output? I got some "Bad stream
 for offset" messages: http://paste.debian.net/plainh/c145e066

 Follow-up on the previous mail

 I was not able to get any node information, and have not been able to
 reproduce the NTFS messages.

 ntfsinfo -fvi 1582022 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
 Error loading node: No such file or directory
>>>
>>> Before digging into the "Bad stream for offset" messages,
>>> I would like to get details about those inconsistencies
>>> before they are wiped out (hoping it is not too late).
>>> Maybe the issues are related, maybe they are not.
>>>
>>> The ntfsinfo error means either that the faulty inodes
>>> have been freed or they are extents to other inodes.
>>>
>>> I will have to do it in two steps : first determine where
>>> they are located, then extract their contents.
>>>
>>> To get the mapping of inodes, please post the output of
>>> ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7*
>>>
>>> Jean-Pierre
>>>

 Kind regards,

 Jelle de Jong

 On 21/03/17 12:01, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I am still testing the dedup plugin. I have issues with rdiff-backup
>> where the checksums of the source and destination are not the same, but
>> this may be an issue with rdiff-backup itself.
>>
>> However, I did get these messages, but I do not know if they are
>> related
>> to the dedup plugin.
>>
>> Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1582022 has wrong SeqNo
>> (159 <> 156)
>> Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
>> inode 1582022
>> Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1545290 has wrong SeqNo
>> (296 <> 295)
>> Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
>> inode 1545290
>
> These errors are thrown by ntfs-3g proper, not by the
> deduplication plugin.
>
> Inconsistencies have been found in the mentioned files,
> and they are unlikely to be created by the plugin (which
> does not have updating support), ... but one never knows.
> Owing to the inconsistencies, ntfs-3g cannot tell if the
> said files are plain files, directories, sockets, etc.
>
> You have to run chkdsk to fix these inconsistencies, but
> this can lead to losing data, so before doing that, the
> parameters of these files should be analyzed to assess
> the possible damage.
>
> Please post the outputs of :
> ntfsinfo -fvi 1582022 /dev/partition
> ntfsinfo -fvi 1545290 /dev/partition
>
> This must be done as root, replacing /dev/partition by
> the actual partition path (something like /dev/sdc2)
>
> Jean-Pierre
>
>>
>> Kind regards,
>>
>> Jelle de Jong

>>>
>>>
>>
>
>


Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-03-28 Thread Jelle de Jong
Hi Jean-Pierre,

When trying to read algemeen.txt I get a reproducible "Bad stream for 
offset" (I got a few more files with the same issue)

# ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
http://paste.debian.net/plainh/486b1a01

I feel we may be almost there (support for data deduplication), but 
still hitting a few unreadable files...

Thank you again!

Kind regards,

Jelle de Jong


On 23/03/17 22:03, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you again!
>>
>> Could you take a look at the following output? I got some "Bad stream
>> for offset" messages: http://paste.debian.net/plainh/c145e066
>>
>> Follow-up on the previous mail
>>
>> I was not able to get any node information, and have not been able to
>> reproduce the NTFS messages.
>>
>> ntfsinfo -fvi 1582022 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> Error loading node: No such file or directory
>
> Before digging into the "Bad stream for offset" messages,
> I would like to get details about those inconsistencies
> before they are wiped out (hoping it is not too late).
> Maybe the issues are related, maybe they are not.
>
> The ntfsinfo error means either that the faulty inodes
> have been freed or they are extents to other inodes.
>
> I will have to do it in two steps : first determine where
> they are located, then extract their contents.
>
> To get the mapping of inodes, please post the output of
> ntfsinfo -fvi 0 /dev/mapper/lvm1--vol-sr7*
>
> Jean-Pierre
>
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 21/03/17 12:01, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 I am still testing the dedup plugin. I have issues with rdiff-backup
 where the checksums of the source and destination are not the same, but
 this may be an issue with rdiff-backup itself.

 However, I did get these messages, but I do not know if they are related
 to the dedup plugin.

 Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1582022 has wrong SeqNo
 (159 <> 156)
 Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
 inode 1582022
 Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1545290 has wrong SeqNo
 (296 <> 295)
 Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of
 inode 1545290
>>>
>>> These errors are thrown by ntfs-3g proper, not by the
>>> deduplication plugin.
>>>
>>> Inconsistencies have been found in the mentioned files,
>>> and they are unlikely to be created by the plugin (which
>>> does not have updating support), ... but one never knows.
>>> Owing to the inconsistencies, ntfs-3g cannot tell if the
>>> said files are plain files, directories, sockets, etc.
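For illustration only, the check behind the "wrong SeqNo" messages can be sketched as below; the helper is hypothetical (ntfs-3g does this in C when loading MFT records), and which of the two logged numbers is which is an assumption here.

```python
def seqno_matches(ref_seqno, record_seqno, inode):
    # NTFS stamps each MFT record with a sequence number and embeds the
    # same number in every file reference pointing at that record. A
    # mismatch means the reference is stale (the record was reused),
    # which is what the "has wrong SeqNo" log line reports.
    if ref_seqno != record_seqno:
        print("Record %d has wrong SeqNo (%d <> %d)"
              % (inode, record_seqno, ref_seqno))
        return False
    return True

seqno_matches(156, 159, 1582022)  # reports the first logged mismatch
```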
>>>
>>> You have to run chkdsk to fix these inconsistencies, but
>>> this can lead to losing data, so before doing that, the
>>> parameters of these files should be analyzed to assess
>>> the possible damage.
>>>
>>> Please post the outputs of :
>>> ntfsinfo -fvi 1582022 /dev/partition
>>> ntfsinfo -fvi 1545290 /dev/partition
>>>
>>> This must be done as root, replacing /dev/partition by
>>> the actual partition path (something like /dev/sdc2)
>>>
>>> Jean-Pierre
>>>

 Kind regards,

 Jelle de Jong
>>
>
>

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
ntfs-3g-devel mailing list
ntfs-3g-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel


Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-03-21 Thread Jelle de Jong
Dear Jean-Pierre,

I am still testing the dedup plugin. I have issues with rdiff-backup
where the checksums of the source and destination are not the same, but
this may be an issue with rdiff-backup itself.

However, I did get these messages, but I do not know if they are related
to the dedup plugin.

Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1582022 has wrong SeqNo 
(159 <> 156)
Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of 
inode 1582022
Mar 21 06:57:37 backup ntfs-3g[12620]: Record 1545290 has wrong SeqNo 
(296 <> 295)
Mar 21 06:57:37 backup ntfs-3g[12620]: Could not decode the type of 
inode 1545290

Kind regards,

Jelle de Jong

On 25/02/17 12:30, Jean-Pierre André wrote:
> [ Repeating, forgot to cc to the list ]
>
> Jean-Pierre André wrote:
>> Hi,
>>
>> There was a bug in the index location, which in bad conditions
>> could lead to an endless loop in the indexed search. So I have
>> fixed the bug, and protected against a corrupted index leading
>> to a similar loop.
>>
>> With the posted data, I can access the first byte of 865,675
>> dummy files similar to yours... Of course they are not your
>> actual files and there is still room for problems.
>>
>> Could you try :
>>
>> http://jp-andre.pagesperso-orange.fr/dedup122-beta.zip
>>
>> Regards
>>
>> Jean-Pierre
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> # output: md5sum *.gz
>>> https://powermail.nu/nextcloud/index.php/s/jxler2rZOqBdpr2
>>>
>>> 879499f9187b0f590ae92460f4949dfd  stream.data.full.dir.tar.gz
>>> a8fc902613486e332898f92aba26c61f  reparse-tags.gz
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 23/02/17 14:24, Jean-Pierre André wrote:
 Hi,

 Can you also post the md5 (or sha1, or ...) of the big
 file. The connection is frequently interrupted, and I
 cannot rely on the downloaded file without a check.

 Jean-Pierre

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The reparse-tags.gz file:
> https://powermail.nu/nextcloud/index.php/s/fS6Y6bpzoMgPiZ0
>
> Generated by running: getfattr -e hex -n system.ntfs_reparse_data -R
> /mnt/sr7-sdb2/ 2> /dev/null | grep ntfs_reparse_data | gzip >
> /root/reparse-tags.gz
>
> Kind regards,
>
> Jelle de Jong
>
> On 23/02/17 12:07, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> I thought version 1.2.1 of the plug-in was working, so I took it
>>> further
>>> into production, but during backups with rdiff-backup and
>>> guestmount it
>>> created a 100% CPU load in the qemu process that stayed there for days
>>> until
>>> I killed them; I tested this twice. So I went back to a
>>> xpart/mount -t
>>> ntfs command and found more "Bad stream for offset" and found that
>>> the
>>> /sbin/mount.ntfs-3g command was running at 100% CPU load and hung
>>> there.
>>
>> Too bad.
>>
>>> I have added the whole Stream directory here: (1.1GB)
>>> https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG
>>>
>>> Separate stream file: stream.data.full.000c.00020001.gz
>>> https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a
>>>
>>> All the commands I used:
>>> http://paste.debian.net/plainh/c0ea5950
>>>
>>> I do not know how to get the reparse tags of all the files, maybe
>>> you
>>> can help me how to get all the information you need.
>>
>> Just use option -R on the base directory :
>>
>> getfattr -e hex -n system.ntfs_reparse_data -R base-dir
>>
>> Notes :
>> 1) files with no reparse tags (those which are not deduplicated)
>> will throw an error
>> 2) this will output the file names, which you might not want
>> to disclose. Fortunately I do not need them for now.
>>
>> So you may append to the above command :
>>
>> 2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
>>
>> With that, I will be able to build a configuration similar
>> to yours... apart from the files themselves.
>>
>> Regards
>>
>> Jean-Pierre
>>
>>>
>>> Thank you for your help!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 14/02/17 15:55, Jean-Pierre André wrote:
 Hi,

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> If we have to switch to Windows 2012, thereby getting an
> environment
> similar to yours, then we can switch to another Windows version.

 I do not have any Windows Server, and my analysis
 and tests are based on an unofficial deduplication
 package which was adapted to Windows 10 Pro.

 A few months ago, following a bug report, I had to
 make changes for Windows Server 2012 which uses an
 older data format, and my only experience about this
 format is related 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-25 Thread Jean-Pierre André
[ Repeating, forgot to cc to the list ]

Jean-Pierre André wrote:
> Hi,
>
> There was a bug in the index location, which in bad conditions
> could lead to an endless loop in the indexed search. So I have
> fixed the bug, and protected against a corrupted index leading
> to a similar loop.
>
> With the posted data, I can access the first byte of 865,675
> dummy files similar to yours... Of course they are not your
> actual files and there is still room for problems.
>
> Could you try :
>
> http://jp-andre.pagesperso-orange.fr/dedup122-beta.zip
>
> Regards
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> # output: md5sum *.gz
>> https://powermail.nu/nextcloud/index.php/s/jxler2rZOqBdpr2
>>
>> 879499f9187b0f590ae92460f4949dfd  stream.data.full.dir.tar.gz
>> a8fc902613486e332898f92aba26c61f  reparse-tags.gz
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 23/02/17 14:24, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Can you also post the md5 (or sha1, or ...) of the big
>>> file. The connection is frequently interrupted, and I
>>> cannot rely on the downloaded file without a check.
>>>
>>> Jean-Pierre
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you!

 The reparse-tags.gz file:
 https://powermail.nu/nextcloud/index.php/s/fS6Y6bpzoMgPiZ0

 Generated by running: getfattr -e hex -n system.ntfs_reparse_data -R
 /mnt/sr7-sdb2/ 2> /dev/null | grep ntfs_reparse_data | gzip >
 /root/reparse-tags.gz

 Kind regards,

 Jelle de Jong

 On 23/02/17 12:07, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I thought version 1.2.1 of the plug-in was working so I took it
>> further
>> into production, but during backups with rdiff-backup and
>> guestmount it
>> created a 100% cpu load in qemu process that stayed there for days
>> until
>> I killed them, I tested this twice. So I went back to a
>> xpart/mount -t
>> ntfs command and found more "Bad stream for offset" and found that
>> the
>> /sbin/mount.ntfs-3g command was running at 100% cpu load and hanged
>> there.
>
> Too bad.
>
>> I have added the whole Stream directory here: (1.1GB)
>> https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG
>>
>> Separate stream file: stream.data.full.000c.00020001.gz
>> https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a
>>
>> All the commands I used:
>> http://paste.debian.net/plainh/c0ea5950
>>
>> I do not know how to get the reparse tags of all the files, maybe you
>> can help me how to get all the information you need.
>
> Just use option -R on the base directory :
>
> getfattr -e hex -n system.ntfs_reparse_data -R base-dir
>
> Notes :
> 1) files with no reparse tags (those which are not deduplicated)
> will throw an error
> 2) this will output the file names, which you might not want
> to disclose. Fortunately I do not need them for now.
>
> So you may append to the above command :
>
> 2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
>
> With that, I will be able to build a configuration similar
> to yours... apart from the files themselves.
>
> Regards
>
> Jean-Pierre
>
>>
>> Thank you for your help!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 14/02/17 15:55, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 If we have to switch to Windows 2012 and thereby having an
 environment
 similar to yours then we can switch to an other Windows version.
>>>
>>> I do not have any Windows Server, and my analysis
>>> and tests are based on an unofficial deduplication
>>> package which was adapted to Windows 10 Pro.
>>>
>>> A few months ago, following a bug report, I had to
>>> make changes for Windows Server 2012 which uses an
>>> older data format, and my only experience about this
>>> format is related to this report. So switching to
>>> Windows 2012 is not guaranteed to make debugging easier.
>>>
 We are running out of disk space here so if switching Windows
 versions
 makes the process of having data deduplication working easer
 then me
 know.
>>>
>>> I have not yet analyzed your latest report, but it
>>> would probably be useful I build a full copy of
>>> non-user data from your partition :
>>> - the reparse tags of all your files,
>>> - all the "*.ccc" files in the Stream directory
>>>
>>> Do not do it now, I must first dig into the data you
>>> posted.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
>>>
 Kind regards,

 Jelle de Jong

 On 09/02/17 13:46, Jelle de Jong wrote:
> Hi 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-23 Thread Jelle de Jong
Hi Jean-Pierre,

# output: md5sum *.gz
https://powermail.nu/nextcloud/index.php/s/jxler2rZOqBdpr2

879499f9187b0f590ae92460f4949dfd  stream.data.full.dir.tar.gz
a8fc902613486e332898f92aba26c61f  reparse-tags.gz
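Side note: `md5sum -c` can run this comparison automatically on the receiving end. A minimal sketch, assuming the two downloads sit in the current directory (the checksum lines are the ones posted above):

```shell
# Write the posted checksums to a file, then let md5sum verify both
# downloads in one go. Exit status is non-zero on any mismatch or
# missing file, so this can guard a script.
cat > checksums.md5 <<'EOF'
879499f9187b0f590ae92460f4949dfd  stream.data.full.dir.tar.gz
a8fc902613486e332898f92aba26c61f  reparse-tags.gz
EOF
md5sum -c checksums.md5
```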

Kind regards,

Jelle de Jong

On 23/02/17 14:24, Jean-Pierre André wrote:
> Hi,
>
> Can you also post the md5 (or sha1, or ...) of the big
> file. The connection is frequently interrupted, and I
> cannot rely on the downloaded file without a check.
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you!
>>
>> The reparse-tags.gz file:
>> https://powermail.nu/nextcloud/index.php/s/fS6Y6bpzoMgPiZ0
>>
>> Generated by running: getfattr -e hex -n system.ntfs_reparse_data -R
>> /mnt/sr7-sdb2/ 2> /dev/null | grep ntfs_reparse_data | gzip >
>> /root/reparse-tags.gz
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 23/02/17 12:07, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 I thought version 1.2.1 of the plug-in was working so I took it further
 into production, but during backups with rdiff-backup and guestmount it
 created a 100% cpu load in qemu process that stayed there for days
 until
 I killed them, I tested this twice. So I went back to a xpart/mount -t
 ntfs command and found more "Bad stream for offset" and found that the
 /sbin/mount.ntfs-3g command was running at 100% cpu load and hanged
 there.
>>>
>>> Too bad.
>>>
 I have added the whole Stream directory here: (1.1GB)
 https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG

 Separate stream file: stream.data.full.000c.00020001.gz
 https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a

 All the commands I used:
 http://paste.debian.net/plainh/c0ea5950

 I do not know how to get the reparse tags of all the files, maybe you
 can help me how to get all the information you need.
>>>
>>> Just use option -R on the base directory :
>>>
>>> getfattr -e hex -n system.ntfs_reparse_data -R base-dir
>>>
>>> Notes :
>>> 1) files with no reparse tags (those which are not deduplicated)
>>> will throw an error
>>> 2) this will output the file names, which you might not want
>>> to disclose. Fortunately I do not need them for now.
>>>
>>> So you may append to the above command :
>>>
>>> 2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
>>>
>>> With that, I will be able to build a configuration similar
>>> to yours... apart from the files themselves.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>

 Thank you for your help!

 Kind regards,

 Jelle de Jong

 On 14/02/17 15:55, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> If we have to switch to Windows 2012 and thereby having an
>> environment
>> similar to yours then we can switch to an other Windows version.
>
> I do not have any Windows Server, and my analysis
> and tests are based on an unofficial deduplication
> package which was adapted to Windows 10 Pro.
>
> A few months ago, following a bug report, I had to
> make changes for Windows Server 2012 which uses an
> older data format, and my only experience about this
> format is related to this report. So switching to
> Windows 2012 is not guaranteed to make debugging easier.
>
>> We are running out of disk space here so if switching Windows
>> versions
>> makes the process of having data deduplication working easer then me
>> know.
>
> I have not yet analyzed your latest report, but it
> would probably be useful I build a full copy of
> non-user data from your partition :
> - the reparse tags of all your files,
> - all the "*.ccc" files in the Stream directory
>
> Do not do it now, I must first dig into the data you
> posted.
>
> Regards
>
> Jean-Pierre
>
>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 09/02/17 13:46, Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> In case you are wondering:
>>>
>>> I am using data deduplication in Windows 2016 for my test
>>> environment
>>> iso:
>>> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>>>
>>>
>>>
>>>
>>>
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 09/02/17 11:41, Jean-Pierre André wrote:
 Hi,

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The new plug-in seems to work for now, I am moving it into testing
> phase
> with-in our production back-up scripts.

 Please wait a few hours, I have found a bug which
 I have fixed. I am currently inserting your data
 into my test base in order to rerun all my tests.

> Will you release the source code eventually, would like to write a
> blog post about how to add the support.

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-23 Thread Jean-Pierre André
Hi,

Can you also post the md5 (or sha1, or ...) of the big
file. The connection is frequently interrupted, and I
cannot rely on the downloaded file without a check.

Jean-Pierre

Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The reparse-tags.gz file:
> https://powermail.nu/nextcloud/index.php/s/fS6Y6bpzoMgPiZ0
>
> Generated by running: getfattr -e hex -n system.ntfs_reparse_data -R
> /mnt/sr7-sdb2/ 2> /dev/null | grep ntfs_reparse_data | gzip >
> /root/reparse-tags.gz
>
> Kind regards,
>
> Jelle de Jong
>
> On 23/02/17 12:07, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> I thought version 1.2.1 of the plug-in was working so I took it further
>>> into production, but during backups with rdiff-backup and guestmount it
>>> created a 100% cpu load in qemu process that stayed there for days until
>>> I killed them, I tested this twice. So I went back to a xpart/mount -t
>>> ntfs command and found more "Bad stream for offset" and found that the
>>> /sbin/mount.ntfs-3g command was running at 100% cpu load and hanged
>>> there.
>>
>> Too bad.
>>
>>> I have added the whole Stream directory here: (1.1GB)
>>> https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG
>>>
>>> Separate stream file: stream.data.full.000c.00020001.gz
>>> https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a
>>>
>>> All the commands I used:
>>> http://paste.debian.net/plainh/c0ea5950
>>>
>>> I do not know how to get the reparse tags of all the files, maybe you
>>> can help me how to get all the information you need.
>>
>> Just use option -R on the base directory :
>>
>> getfattr -e hex -n system.ntfs_reparse_data -R base-dir
>>
>> Notes :
>> 1) files with no reparse tags (those which are not deduplicated)
>> will throw an error
>> 2) this will output the file names, which you might not want
>> to disclose. Fortunately I do not need them for now.
>>
>> So you may append to the above command :
>>
>> 2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
>>
>> With that, I will be able to build a configuration similar
>> to yours... apart from the files themselves.
>>
>> Regards
>>
>> Jean-Pierre
>>
>>>
>>> Thank you for your help!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 14/02/17 15:55, Jean-Pierre André wrote:
 Hi,

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> If we have to switch to Windows 2012 and thereby having an environment
> similar to yours then we can switch to an other Windows version.

 I do not have any Windows Server, and my analysis
 and tests are based on an unofficial deduplication
 package which was adapted to Windows 10 Pro.

 A few months ago, following a bug report, I had to
 make changes for Windows Server 2012 which uses an
 older data format, and my only experience about this
 format is related to this report. So switching to
 Windows 2012 is not guaranteed to make debugging easier.

> We are running out of disk space here so if switching Windows versions
> makes the process of having data deduplication working easer then me
> know.

 I have not yet analyzed your latest report, but it
 would probably be useful I build a full copy of
 non-user data from your partition :
 - the reparse tags of all your files,
 - all the "*.ccc" files in the Stream directory

 Do not do it now, I must first dig into the data you
 posted.

 Regards

 Jean-Pierre


> Kind regards,
>
> Jelle de Jong
>
> On 09/02/17 13:46, Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> In case you are wondering:
>>
>> I am using data deduplication in Windows 2016 for my test environment
>> iso:
>> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>>
>>
>>
>>
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 09/02/17 11:41, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you!

 The new plug-in seems to work for now, I am moving it into testing
 phase
 with-in our production back-up scripts.
>>>
>>> Please wait a few hours, I have found a bug which
>>> I have fixed. I am currently inserting your data
>>> into my test base in order to rerun all my tests.
>>>
 Will you release the source code eventually, would like to write a
 blog
 post about how to add the support.
>>>
>>> What exactly do you mean ? If it is about how to
>>> collect the data in a unsupported condition, it is
>>> difficult, because unsupported generally means
>>> unknown territory...
>>>
 What do you think the changes are of the plug-in stop working
 again?
>>>
>>> (assuming a typo changes -> chances)
> Your files were in a condition not met before : data
> has been relocated according to a logic I do not fully
> understand.

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-23 Thread Jelle de Jong
Hi Jean-Pierre,

Thank you!

The reparse-tags.gz file: 
https://powermail.nu/nextcloud/index.php/s/fS6Y6bpzoMgPiZ0

Generated by running: getfattr -e hex -n system.ntfs_reparse_data -R 
/mnt/sr7-sdb2/ 2> /dev/null | grep ntfs_reparse_data | gzip > 
/root/reparse-tags.gz

Kind regards,

Jelle de Jong

On 23/02/17 12:07, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I thought version 1.2.1 of the plug-in was working so I took it further
>> into production, but during backups with rdiff-backup and guestmount it
>> created a 100% cpu load in qemu process that stayed there for days until
>> I killed them, I tested this twice. So I went back to a xpart/mount -t
>> ntfs command and found more "Bad stream for offset" and found that the
>> /sbin/mount.ntfs-3g command was running at 100% cpu load and hanged
>> there.
>
> Too bad.
>
>> I have added the whole Stream directory here: (1.1GB)
>> https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG
>>
>> Separate stream file: stream.data.full.000c.00020001.gz
>> https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a
>>
>> All the commands I used:
>> http://paste.debian.net/plainh/c0ea5950
>>
>> I do not know how to get the reparse tags of all the files, maybe you
>> can help me how to get all the information you need.
>
> Just use option -R on the base directory :
>
> getfattr -e hex -n system.ntfs_reparse_data -R base-dir
>
> Notes :
> 1) files with no reparse tags (those which are not deduplicated)
> will throw an error
> 2) this will output the file names, which you might not want
> to disclose. Fortunately I do not need them for now.
>
> So you may append to the above command :
>
> 2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
>
> With that, I will be able to build a configuration similar
> to yours... apart from the files themselves.
>
> Regards
>
> Jean-Pierre
>
>>
>> Thank you for your help!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 14/02/17 15:55, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 If we have to switch to Windows 2012 and thereby having an environment
 similar to yours then we can switch to an other Windows version.
>>>
>>> I do not have any Windows Server, and my analysis
>>> and tests are based on an unofficial deduplication
>>> package which was adapted to Windows 10 Pro.
>>>
>>> A few months ago, following a bug report, I had to
>>> make changes for Windows Server 2012 which uses an
>>> older data format, and my only experience about this
>>> format is related to this report. So switching to
>>> Windows 2012 is not guaranteed to make debugging easier.
>>>
 We are running out of disk space here so if switching Windows versions
 makes the process of having data deduplication working easer then me
 know.
>>>
>>> I have not yet analyzed your latest report, but it
>>> would probably be useful I build a full copy of
>>> non-user data from your partition :
>>> - the reparse tags of all your files,
>>> - all the "*.ccc" files in the Stream directory
>>>
>>> Do not do it now, I must first dig into the data you
>>> posted.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
>>>
 Kind regards,

 Jelle de Jong

 On 09/02/17 13:46, Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> In case you are wondering:
>
> I am using data deduplication in Windows 2016 for my test environment
> iso:
> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>
>
>
>
> Kind regards,
>
> Jelle de Jong
>
> On 09/02/17 11:41, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> Thank you!
>>>
>>> The new plug-in seems to work for now, I am moving it into testing
>>> phase
>>> with-in our production back-up scripts.
>>
>> Please wait a few hours, I have found a bug which
>> I have fixed. I am currently inserting your data
>> into my test base in order to rerun all my tests.
>>
>>> Will you release the source code eventually, would like to write a
>>> blog
>>> post about how to add the support.
>>
>> What exactly do you mean ? If it is about how to
>> collect the data in a unsupported condition, it is
>> difficult, because unsupported generally means
>> unknown territory...
>>
>>> What do you think the changes are of the plug-in stop working again?
>>
>> (assuming a typo changes -> chances)
>> Your files were in a condition not met before : data
>> has been relocated according to a logic I do not fully
>> understand. Maybe this is an intermediate step in the
>> process of updating the files, anyway this can happen.
>>
>> The situation I am facing is that I have a single
>> example from which it is difficult to derive the rules.
>> So yes, the plugin may stop working again.
>>
>> 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-23 Thread Jean-Pierre André
Hi,

Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> I thought version 1.2.1 of the plug-in was working so I took it further
> into production, but during backups with rdiff-backup and guestmount it
> created a 100% cpu load in qemu process that stayed there for days until
> I killed them, I tested this twice. So I went back to a xpart/mount -t
> ntfs command and found more "Bad stream for offset" and found that the
> /sbin/mount.ntfs-3g command was running at 100% cpu load and hanged there.

Too bad.

> I have added the whole Stream directory here: (1.1GB)
> https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG
>
> Separate stream file: stream.data.full.000c.00020001.gz
> https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a
>
> All the commands I used:
> http://paste.debian.net/plainh/c0ea5950
>
> I do not know how to get the reparse tags of all the files, maybe you
> can help me how to get all the information you need.

Just use option -R on the base directory :

getfattr -e hex -n system.ntfs_reparse_data -R base-dir

Notes :
1) files with no reparse tags (those which are not deduplicated)
will throw an error
2) this will output the file names, which you might not want
to disclose. Fortunately I do not need them for now.

So you may append to the above command :

2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz

With that, I will be able to build a configuration similar
to yours... apart from the files themselves.
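Put together (following the same invocation Jelle used earlier in this thread), the full collection pipeline is:

```shell
# Dump the reparse data of every file under the mount point, drop the
# error messages produced by non-deduplicated files, keep only the hex
# dumps (no file names), and compress the result.
# "base-dir" stands for the actual mount point, e.g. /mnt/sr7-sdb2/.
getfattr -e hex -n system.ntfs_reparse_data -R base-dir \
    2> /dev/null | grep ntfs_reparse_data | gzip > reparse-tags.gz
```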

Regards

Jean-Pierre

>
> Thank you for your help!
>
> Kind regards,
>
> Jelle de Jong
>
> On 14/02/17 15:55, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> If we have to switch to Windows 2012 and thereby having an environment
>>> similar to yours then we can switch to an other Windows version.
>>
>> I do not have any Windows Server, and my analysis
>> and tests are based on an unofficial deduplication
>> package which was adapted to Windows 10 Pro.
>>
>> A few months ago, following a bug report, I had to
>> make changes for Windows Server 2012 which uses an
>> older data format, and my only experience about this
>> format is related to this report. So switching to
>> Windows 2012 is not guaranteed to make debugging easier.
>>
>>> We are running out of disk space here so if switching Windows versions
>>> makes the process of having data deduplication working easer then me
>>> know.
>>
>> I have not yet analyzed your latest report, but it
>> would probably be useful I build a full copy of
>> non-user data from your partition :
>> - the reparse tags of all your files,
>> - all the "*.ccc" files in the Stream directory
>>
>> Do not do it now, I must first dig into the data you
>> posted.
>>
>> Regards
>>
>> Jean-Pierre
>>
>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 09/02/17 13:46, Jelle de Jong wrote:
 Hi Jean-Pierre,

 In case you are wondering:

 I am using data deduplication in Windows 2016 for my test environment
 iso:
 SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO



 Kind regards,

 Jelle de Jong

 On 09/02/17 11:41, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you!
>>
>> The new plug-in seems to work for now, I am moving it into testing
>> phase
>> with-in our production back-up scripts.
>
> Please wait a few hours, I have found a bug which
> I have fixed. I am currently inserting your data
> into my test base in order to rerun all my tests.
>
>> Will you release the source code eventually, would like to write a
>> blog
>> post about how to add the support.
>
> What exactly do you mean ? If it is about how to
> collect the data in a unsupported condition, it is
> difficult, because unsupported generally means
> unknown territory...
>
>> What do you think the changes are of the plug-in stop working again?
>
> (assuming a typo changes -> chances)
> Your files were in a condition not met before : data
> has been relocated according to a logic I do not fully
> understand. Maybe this is an intermediate step in the
> process of updating the files, anyway this can happen.
>
> The situation I am facing is that I have a single
> example from which it is difficult to derive the rules.
> So yes, the plugin may stop working again.
>
> Note : there are strict consistency checks in the plugin,
> so it is unlikely you read invalid data. Moreover if
> you only mount read-only you cannot damage the deduplicated
> partition.
>
>> We do not have an automatic test running to verify the back-ups at
>> this
>> moment _yet_, so if the plug-in stops working, incremental file-based
>> back-ups with empty files will slowly get in the back-ups this way :|
>
> Usually a deduplicated partition is only used for backups,

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-23 Thread Jelle de Jong
Dear Jean-Pierre,

I thought version 1.2.1 of the plug-in was working, so I took it further
into production, but during backups with rdiff-backup and guestmount it
created a 100% cpu load in the qemu process that stayed there for days
until I killed them; I tested this twice. So I went back to an
xpart/mount -t ntfs command, found more "Bad stream for offset" errors,
and found that the /sbin/mount.ntfs-3g command was running at 100% cpu
load and hung there.

I have added the whole Stream directory here: (1.1GB)
https://powermail.nu/nextcloud/index.php/s/vbq85qZ2wcVYxrG

Separate stream file: stream.data.full.000c.00020001.gz
https://powermail.nu/nextcloud/index.php/s/QinV51XE4jrAH7a

All the commands I used:
http://paste.debian.net/plainh/c0ea5950

I do not know how to get the reparse tags of all the files, maybe you 
can help me how to get all the information you need.

Thank you for your help!

Kind regards,

Jelle de Jong

On 14/02/17 15:55, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> If we have to switch to Windows 2012 and thereby having an environment
>> similar to yours then we can switch to an other Windows version.
>
> I do not have any Windows Server, and my analysis
> and tests are based on an unofficial deduplication
> package which was adapted to Windows 10 Pro.
>
> A few months ago, following a bug report, I had to
> make changes for Windows Server 2012 which uses an
> older data format, and my only experience about this
> format is related to this report. So switching to
> Windows 2012 is not guaranteed to make debugging easier.
>
>> We are running out of disk space here so if switching Windows versions
>> makes the process of having data deduplication working easer then me
>> know.
>
> I have not yet analyzed your latest report, but it
> would probably be useful I build a full copy of
> non-user data from your partition :
> - the reparse tags of all your files,
> - all the "*.ccc" files in the Stream directory
>
> Do not do it now, I must first dig into the data you
> posted.
>
> Regards
>
> Jean-Pierre
>
>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 09/02/17 13:46, Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> In case you are wondering:
>>>
>>> I am using data deduplication in Windows 2016 for my test environment
>>> iso:
>>> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>>>
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 09/02/17 11:41, Jean-Pierre André wrote:
 Hi,

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The new plug-in seems to work for now, I am moving it into testing
> phase
> with-in our production back-up scripts.

 Please wait a few hours, I have found a bug which
 I have fixed. I am currently inserting your data
 into my test base in order to rerun all my tests.

> Will you release the source code eventually, would like to write a
> blog
> post about how to add the support.

 What exactly do you mean ? If it is about how to
 collect the data in a unsupported condition, it is
 difficult, because unsupported generally means
 unknown territory...

> What do you think the changes are of the plug-in stop working again?

 (assuming a typo changes -> chances)
 Your files were in a condition not met before : data
 has been relocated according to a logic I do not fully
 understand. Maybe this is an intermediate step in the
 process of updating the files, anyway this can happen.

 The situation I am facing is that I have a single
 example from which it is difficult to derive the rules.
 So yes, the plugin may stop working again.

 Note : there are strict consistency checks in the plugin,
 so it is unlikely you read invalid data. Moreover if
 you only mount read-only you cannot damage the deduplicated
 partition.

> We do not have an automatic test running to verify the back-ups at
> this
> moment _yet_, so if the plug-in stops working, incremental file-based
> back-ups with empty files will slowly get in the back-ups this way :|

 Usually a deduplicated partition is only used for backups,
 and reading from backups is only for recovering former
 versions of files (on demand).

 If you access deduplicated files with no human control,
 you have to insert your own checks in the process. I
 would at least check whether the size of the recovered
 file is the same as the deduplicated one (also grep for
 messages in the syslog).

 Regards

 Jean-Pierre

> Again thank you for all your help so far!
>
> Kind regards,
>
> Jelle de Jong
>
>
> On 08/02/17 15:59, Jean-Pierre André wrote:
>> Hi,
>>
>> Can you please make a try with :
>> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>>
>> This is experimental and based on assumptions which have
>> to be clarified, but it should work in your environment.

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-14 Thread Jean-Pierre André
Hi,

Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> If we have to switch to Windows 2012 and thereby having an environment
> similar to yours then we can switch to an other Windows version.

I do not have any Windows Server, and my analysis
and tests are based on an unofficial deduplication
package which was adapted to Windows 10 Pro.

A few months ago, following a bug report, I had to
make changes for Windows Server 2012 which uses an
older data format, and my only experience about this
format is related to this report. So switching to
Windows 2012 is not guaranteed to make debugging easier.

> We are running out of disk space here so if switching Windows versions
> makes the process of having data deduplication working easer then me know.

I have not yet analyzed your latest report, but it
would probably be useful if I build a full copy of
non-user data from your partition :
- the reparse tags of all your files,
- all the "*.ccc" files in the Stream directory

Do not do it now, I must first dig into the data you
posted.
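For when the time comes, collecting the "*.ccc" files could be sketched roughly as follows. This is only a guess at a workable command: the ChunkStore path is taken from the Stream directory Jelle mentioned earlier, and the GUID-named ".ddp" directory differs on every volume.

```shell
# Pack every "*.ccc" file from the deduplication Stream directory into
# one archive. Adjust STREAM to the actual mount point; the GUID
# directory under ChunkStore is volume-specific.
STREAM='/mnt/sr7-sdb2/System Volume Information/Dedup/ChunkStore'
find "$STREAM" -path '*/Stream/*.ccc' -print0 |
    tar czf ccc-files.tar.gz --null -T -
```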

Regards

Jean-Pierre


> Kind regards,
>
> Jelle de Jong
>
> On 09/02/17 13:46, Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> In case you are wondering:
>>
>> I am using data deduplication in Windows 2016 for my test environment
>> iso:
>> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 09/02/17 11:41, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you!

 The new plug-in seems to work for now, I am moving it into testing phase
 with-in our production back-up scripts.
>>>
>>> Please wait a few hours, I have found a bug which
>>> I have fixed. I am currently inserting your data
>>> into my test base in order to rerun all my tests.
>>>
 Will you release the source code eventually, would like to write a blog
 post about how to add the support.
>>>
>>> What exactly do you mean ? If it is about how to
>>> collect the data in a unsupported condition, it is
>>> difficult, because unsupported generally means
>>> unknown territory...
>>>
 What do you think the changes are of the plug-in stop working again?
>>>
>>> (assuming a typo changes -> chances)
>>> Your files were in a condition not met before : data
>>> has been relocated according to a logic I do not fully
>>> understand. Maybe this is an intermediate step in the
>>> process of updating the files, anyway this can happen.
>>>
>>> The situation I am facing is that I have a single
>>> example from which it is difficult to derive the rules.
>>> So yes, the plugin may stop working again.
>>>
>>> Note : there are strict consistency checks in the plugin,
>>> so it is unlikely you read invalid data. Moreover if
>>> you only mount read-only you cannot damage the deduplicated
>>> partition.
>>>
 We do not have an automatic test running to verify the back-ups at this
 moment _yet_, so if the plug-in stops working, incremental file-based
 back-ups with empty files will slowly get in the back-ups this way :|
>>>
>>> Usually a deduplicated partition is only used for backups,
>>> and reading from backups is only for recovering former
>>> versions of files (on demand).
>>>
>>> If you access deduplicated files with no human control,
>>> you have to insert your own checks in the process. I
>>> would at least check whether the size of the recovered
>>> file is the same as the deduplicated one (also grep for
>>> messages in the syslog).
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
 Again thank you for all your help so far!

 Kind regards,

 Jelle de Jong


 On 08/02/17 15:59, Jean-Pierre André wrote:
> Hi,
>
> Can you please make a try with :
> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>
> This is experimental and based on assumptions which have
> to be clarified, but it should work in your environment.
>
> Regards
>
> Jean-Pierre
>>>
>>>
>>
>> --
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot
>> ___
>> ntfs-3g-devel mailing list
>> ntfs-3g-devel@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel
>>
>
>




Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-14 Thread Jelle de Jong
Hi Jean-Pierre,

If switching to Windows 2012 would give us an environment 
similar to yours, then we can switch to another Windows version.

We are running out of disk space here, so if switching Windows versions 
makes the process of getting data deduplication working easier, then let me know.

Kind regards,

Jelle de Jong

On 09/02/17 13:46, Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> In case you are wondering:
>
> I am using data deduplication in Windows 2016 for my test environment
> iso:
> SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO
>
> Kind regards,
>
> Jelle de Jong
>
> On 09/02/17 11:41, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> Thank you!
>>>
>>> The new plug-in seems to work for now, I am moving it into testing phase
>>> with-in our production back-up scripts.
>>
>> Please wait a few hours, I have found a bug which
>> I have fixed. I am currently inserting your data
>> into my test base in order to rerun all my tests.
>>
>>> Will you release the source code eventually, would like to write a blog
>>> post about how to add the support.
>>
>> What exactly do you mean ? If it is about how to
>> collect the data in a unsupported condition, it is
>> difficult, because unsupported generally means
>> unknown territory...
>>
>>> What do you think the changes are of the plug-in stop working again?
>>
>> (assuming a typo changes -> chances)
>> Your files were in a condition not met before : data
>> has been relocated according to a logic I do not fully
>> understand. Maybe this is an intermediate step in the
>> process of updating the files, anyway this can happen.
>>
>> The situation I am facing is that I have a single
>> example from which it is difficult to derive the rules.
>> So yes, the plugin may stop working again.
>>
>> Note : there are strict consistency checks in the plugin,
>> so it is unlikely you read invalid data. Moreover if
>> you only mount read-only you cannot damage the deduplicated
>> partition.
>>
>>> We do not have an automatic test running to verify the back-ups at this
>>> moment _yet_, so if the plug-in stops working, incremental file-based
>>> back-ups with empty files will slowly get in the back-ups this way :|
>>
>> Usually a deduplicated partition is only used for backups,
>> and reading from backups is only for recovering former
>> versions of files (on demand).
>>
>> If you access deduplicated files with no human control,
>> you have to insert your own checks in the process. I
>> would at least check whether the size of the recovered
>> file is the same as the deduplicated one (also grep for
>> messages in the syslog).
>>
>> Regards
>>
>> Jean-Pierre
>>
>>> Again thank you for all your help so far!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>>
>>> On 08/02/17 15:59, Jean-Pierre André wrote:
 Hi,

 Can you please make a try with :
 http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip

 This is experimental and based on assumptions which have
 to be clarified, but it should work in your environment.

 Regards

 Jean-Pierre
>>
>>
>
>



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-14 Thread Jelle de Jong
Dear Jean-Pierre,

No luck so far. I hope I have provided the needed debug information below:

# the file test, syslog and streams generation:
http://paste.debian.net/plainh/d495e075

# the stream.data.full.0012.0001.gz file:
https://powermail.nu/nextcloud/index.php/s/OA0iRf9yKYkc5IL

# the stream.data.full.0014.0001.gz file:
https://powermail.nu/nextcloud/index.php/s/tFtwjctpQ4rTzoD

Thank you for your help! Really hope to get this working right.

Kind regards,

Jelle de Jong

On 09/02/17 16:49, Jean-Pierre André wrote:
> Hi again,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Don't know if this is related to the bug you found.
>>
>> But I am getting scrambled data, one moment the file is empty then it
>> has data and the file sizes are not the same when copied:
>
> Probably the same bug, though difficult to tell for sure.
>
> Please retry with the (hopefully) fixed plugin in
> http://jp-andre.pagesperso-orange.fr/dedup.zip
>
> Regards
>
> Jean-Pierre
>
>>
>> root@backup:~# ls -hal "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
>> -r-xr-xr-x 1 root root 2.4M Feb  5  2014
>> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG
>> root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
>> '/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
>> root@backup:~# ls -hal DSC_1319.JPG
>> -r-xr-xr-x 1 root root 0 Feb  9 13:31 DSC_1319.JPG
>> root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
>> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: empty
>> root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
>> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: data
>> root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
>> '/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
>> root@backup:~# ls -hal DSC_1319.JPG
>> -r-xr-xr-x 1 root root 256K Feb  9 13:32 DSC_1319.JPG
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 09/02/17 11:41, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you!

 The new plug-in seems to work for now, I am moving it into testing
 phase
 with-in our production back-up scripts.
>>>
>>> Please wait a few hours, I have found a bug which
>>> I have fixed. I am currently inserting your data
>>> into my test base in order to rerun all my tests.
>>>
 Will you release the source code eventually, would like to write a blog
 post about how to add the support.
>>>
>>> What exactly do you mean ? If it is about how to
>>> collect the data in a unsupported condition, it is
>>> difficult, because unsupported generally means
>>> unknown territory...
>>>
 What do you think the changes are of the plug-in stop working again?
>>>
>>> (assuming a typo changes -> chances)
>>> Your files were in a condition not met before : data
>>> has been relocated according to a logic I do not fully
>>> understand. Maybe this is an intermediate step in the
>>> process of updating the files, anyway this can happen.
>>>
>>> The situation I am facing is that I have a single
>>> example from which it is difficult to derive the rules.
>>> So yes, the plugin may stop working again.
>>>
>>> Note : there are strict consistency checks in the plugin,
>>> so it is unlikely you read invalid data. Moreover if
>>> you only mount read-only you cannot damage the deduplicated
>>> partition.
>>>
 We do not have an automatic test running to verify the back-ups at this
 moment _yet_, so if the plug-in stops working, incremental file-based
 back-ups with empty files will slowly get in the back-ups this way :|
>>>
>>> Usually a deduplicated partition is only used for backups,
>>> and reading from backups is only for recovering former
>>> versions of files (on demand).
>>>
>>> If you access deduplicated files with no human control,
>>> you have to insert your own checks in the process. I
>>> would at least check whether the size of the recovered
>>> file is the same as the deduplicated one (also grep for
>>> messages in the syslog).
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
 Again thank you for all your help so far!

 Kind regards,

 Jelle de Jong


 On 08/02/17 15:59, Jean-Pierre André wrote:
> Hi,
>
> Can you please make a try with :
> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>
> This is experimental and based on assumptions which have
> to be clarified, but it should work in your environment.
>
> Regards
>
> Jean-Pierre
>>>
>>>
>>
>
>



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-09 Thread Jean-Pierre André
Hi again,

Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Don't know if this is related to the bug you found.
>
> But I am getting scrambled data, one moment the file is empty then it
> has data and the file sizes are not the same when copied:

Probably the same bug, though difficult to tell for sure.

Please retry with the (hopefully) fixed plugin in
http://jp-andre.pagesperso-orange.fr/dedup.zip

Regards

Jean-Pierre

>
> root@backup:~# ls -hal "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
> -r-xr-xr-x 1 root root 2.4M Feb  5  2014
> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG
> root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
> '/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
> root@backup:~# ls -hal DSC_1319.JPG
> -r-xr-xr-x 1 root root 0 Feb  9 13:31 DSC_1319.JPG
> root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: empty
> root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
> /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: data
> root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
> '/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
> root@backup:~# ls -hal DSC_1319.JPG
> -r-xr-xr-x 1 root root 256K Feb  9 13:32 DSC_1319.JPG
>
> Kind regards,
>
> Jelle de Jong
>
> On 09/02/17 11:41, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> Thank you!
>>>
>>> The new plug-in seems to work for now, I am moving it into testing phase
>>> with-in our production back-up scripts.
>>
>> Please wait a few hours, I have found a bug which
>> I have fixed. I am currently inserting your data
>> into my test base in order to rerun all my tests.
>>
>>> Will you release the source code eventually, would like to write a blog
>>> post about how to add the support.
>>
>> What exactly do you mean ? If it is about how to
>> collect the data in a unsupported condition, it is
>> difficult, because unsupported generally means
>> unknown territory...
>>
>>> What do you think the changes are of the plug-in stop working again?
>>
>> (assuming a typo changes -> chances)
>> Your files were in a condition not met before : data
>> has been relocated according to a logic I do not fully
>> understand. Maybe this is an intermediate step in the
>> process of updating the files, anyway this can happen.
>>
>> The situation I am facing is that I have a single
>> example from which it is difficult to derive the rules.
>> So yes, the plugin may stop working again.
>>
>> Note : there are strict consistency checks in the plugin,
>> so it is unlikely you read invalid data. Moreover if
>> you only mount read-only you cannot damage the deduplicated
>> partition.
>>
>>> We do not have an automatic test running to verify the back-ups at this
>>> moment _yet_, so if the plug-in stops working, incremental file-based
>>> back-ups with empty files will slowly get in the back-ups this way :|
>>
>> Usually a deduplicated partition is only used for backups,
>> and reading from backups is only for recovering former
>> versions of files (on demand).
>>
>> If you access deduplicated files with no human control,
>> you have to insert your own checks in the process. I
>> would at least check whether the size of the recovered
>> file is the same as the deduplicated one (also grep for
>> messages in the syslog).
>>
>> Regards
>>
>> Jean-Pierre
>>
>>> Again thank you for all your help so far!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>>
>>> On 08/02/17 15:59, Jean-Pierre André wrote:
 Hi,

 Can you please make a try with :
 http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip

 This is experimental and based on assumptions which have
 to be clarified, but it should work in your environment.

 Regards

 Jean-Pierre
>>
>>
>





Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-09 Thread Jelle de Jong
Hi Jean-Pierre,

In case you are wondering:

I am using data deduplication in Windows 2016 for my test environment 
iso: 
SW_DVD9_Win_Svr_STD_Core_and_DataCtr_Core_2016_64Bit_English_-2_MLF_X21-22843.ISO

Kind regards,

Jelle de Jong

On 09/02/17 11:41, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you!
>>
>> The new plug-in seems to work for now, I am moving it into testing phase
>> with-in our production back-up scripts.
>
> Please wait a few hours, I have found a bug which
> I have fixed. I am currently inserting your data
> into my test base in order to rerun all my tests.
>
>> Will you release the source code eventually, would like to write a blog
>> post about how to add the support.
>
> What exactly do you mean ? If it is about how to
> collect the data in a unsupported condition, it is
> difficult, because unsupported generally means
> unknown territory...
>
>> What do you think the changes are of the plug-in stop working again?
>
> (assuming a typo changes -> chances)
> Your files were in a condition not met before : data
> has been relocated according to a logic I do not fully
> understand. Maybe this is an intermediate step in the
> process of updating the files, anyway this can happen.
>
> The situation I am facing is that I have a single
> example from which it is difficult to derive the rules.
> So yes, the plugin may stop working again.
>
> Note : there are strict consistency checks in the plugin,
> so it is unlikely you read invalid data. Moreover if
> you only mount read-only you cannot damage the deduplicated
> partition.
>
>> We do not have an automatic test running to verify the back-ups at this
>> moment _yet_, so if the plug-in stops working, incremental file-based
>> back-ups with empty files will slowly get in the back-ups this way :|
>
> Usually a deduplicated partition is only used for backups,
> and reading from backups is only for recovering former
> versions of files (on demand).
>
> If you access deduplicated files with no human control,
> you have to insert your own checks in the process. I
> would at least check whether the size of the recovered
> file is the same as the deduplicated one (also grep for
> messages in the syslog).
>
> Regards
>
> Jean-Pierre
>
>> Again thank you for all your help so far!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> On 08/02/17 15:59, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Can you please make a try with :
>>> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>>>
>>> This is experimental and based on assumptions which have
>>> to be clarified, but it should work in your environment.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>
>



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-09 Thread Jelle de Jong
Hi Jean-Pierre,

Don't know if this is related to the bug you found.

But I am getting scrambled data: one moment the file is empty, the next it 
has data, and the file sizes do not match when copied:

root@backup:~# ls -hal "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
-r-xr-xr-x 1 root root 2.4M Feb  5  2014 /mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG
root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
'/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
root@backup:~# ls -hal DSC_1319.JPG
-r-xr-xr-x 1 root root 0 Feb  9 13:31 DSC_1319.JPG
root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: empty
root@backup:~# file "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG"
/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG: data
root@backup:~# cp "/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG" /root/ -v
'/mnt/sr7-sdb2/ALGEMEEN/DSC_1319.JPG' -> '/root/DSC_1319.JPG'
root@backup:~# ls -hal DSC_1319.JPG
-r-xr-xr-x 1 root root 256K Feb  9 13:32 DSC_1319.JPG

Kind regards,

Jelle de Jong

On 09/02/17 11:41, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you!
>>
>> The new plug-in seems to work for now, I am moving it into testing phase
>> with-in our production back-up scripts.
>
> Please wait a few hours, I have found a bug which
> I have fixed. I am currently inserting your data
> into my test base in order to rerun all my tests.
>
>> Will you release the source code eventually, would like to write a blog
>> post about how to add the support.
>
> What exactly do you mean ? If it is about how to
> collect the data in a unsupported condition, it is
> difficult, because unsupported generally means
> unknown territory...
>
>> What do you think the changes are of the plug-in stop working again?
>
> (assuming a typo changes -> chances)
> Your files were in a condition not met before : data
> has been relocated according to a logic I do not fully
> understand. Maybe this is an intermediate step in the
> process of updating the files, anyway this can happen.
>
> The situation I am facing is that I have a single
> example from which it is difficult to derive the rules.
> So yes, the plugin may stop working again.
>
> Note : there are strict consistency checks in the plugin,
> so it is unlikely you read invalid data. Moreover if
> you only mount read-only you cannot damage the deduplicated
> partition.
>
>> We do not have an automatic test running to verify the back-ups at this
>> moment _yet_, so if the plug-in stops working, incremental file-based
>> back-ups with empty files will slowly get in the back-ups this way :|
>
> Usually a deduplicated partition is only used for backups,
> and reading from backups is only for recovering former
> versions of files (on demand).
>
> If you access deduplicated files with no human control,
> you have to insert your own checks in the process. I
> would at least check whether the size of the recovered
> file is the same as the deduplicated one (also grep for
> messages in the syslog).
>
> Regards
>
> Jean-Pierre
>
>> Again thank you for all your help so far!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> On 08/02/17 15:59, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Can you please make a try with :
>>> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>>>
>>> This is experimental and based on assumptions which have
>>> to be clarified, but it should work in your environment.
>>>
>>> Regards
>>>
>>> Jean-Pierre
>
>



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-09 Thread Jean-Pierre André
Hi,

Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The new plug-in seems to work for now, I am moving it into testing phase
> with-in our production back-up scripts.

Please wait a few hours, I have found a bug which
I have fixed. I am currently inserting your data
into my test base in order to rerun all my tests.

> Will you release the source code eventually, would like to write a blog
> post about how to add the support.

What exactly do you mean? If it is about how to
collect the data in an unsupported condition, it is
difficult, because unsupported generally means
unknown territory...

> What do you think the changes are of the plug-in stop working again?

(assuming a typo changes -> chances)
Your files were in a condition not met before: data
has been relocated according to a logic I do not fully
understand. Maybe this is an intermediate step in the
process of updating the files; anyway, this can happen.

The situation I am facing is that I have a single
example from which it is difficult to derive the rules.
So yes, the plugin may stop working again.

Note: there are strict consistency checks in the plugin,
so it is unlikely you will read invalid data. Moreover, if
you only mount read-only, you cannot damage the deduplicated
partition.

> We do not have an automatic test running to verify the back-ups at this
> moment _yet_, so if the plug-in stops working, incremental file-based
> back-ups with empty files will slowly get in the back-ups this way :|

Usually a deduplicated partition is only used for backups,
and reading from backups is only for recovering former
versions of files (on demand).

If you access deduplicated files with no human control,
you have to insert your own checks in the process. I
would at least check whether the size of the recovered
file is the same as the deduplicated one (also grep for
messages in the syslog).
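A minimal sketch of such a size check (the function name is mine, and the paths a script would pass in are illustrative, not part of the plugin):

```python
# Hedged sketch: after copying a file off a deduplicated ntfs-3g mount,
# compare the source and destination byte sizes. A 0-byte or truncated
# copy is exactly the failure mode reported earlier in this thread.
import os

def sizes_match(src, dst):
    """Return True when both files report the same size in bytes."""
    return os.path.getsize(src) == os.path.getsize(dst)
```

A backup script could refuse to keep a copy when sizes_match() returns False, and additionally grep the syslog for ntfs-3g messages as suggested above.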

Regards

Jean-Pierre

> Again thank you for all your help so far!
>
> Kind regards,
>
> Jelle de Jong
>
>
> On 08/02/17 15:59, Jean-Pierre André wrote:
>> Hi,
>>
>> Can you please make a try with :
>> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>>
>> This is experimental and based on assumptions which have
>> to be clarified, but it should work in your environment.
>>
>> Regards
>>
>> Jean-Pierre





Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-09 Thread Jelle de Jong
Hi Jean-Pierre,

Thank you!

The new plug-in seems to work for now; I am moving it into the testing phase 
within our production back-up scripts.

Will you release the source code eventually? I would like to write a blog 
post about how to add the support.

What do you think the changes are of the plug-in stop working again?

We do not have an automatic test running to verify the back-ups at this 
moment _yet_, so if the plug-in stops working, incremental file-based 
back-ups with empty files will slowly get into the back-ups this way :|

Again thank you for all your help so far!

Kind regards,

Jelle de Jong


On 08/02/17 15:59, Jean-Pierre André wrote:
> Hi,
>
> Can you please make a try with :
> http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip
>
> This is experimental and based on assumptions which have
> to be clarified, but it should work in your environment.
>
> Regards
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Dear Jean-Piere,
>>
>> The stream.data.full.gz file:
>> https://powermail.nu/owncloud/index.php/s/6Y0WXs7WQclpOBM
>>
>> # attempt to compile searchseq.c:
>> http://paste.debian.net/plainh/49252e03/
>>
>> # grep and other info how I generated the stream.data.full.gz file:
>> http://paste.debian.net/plainh/34a9c9ec
>>
>> Again thank you for your help!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 05/02/17 17:44, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Piere,

 The requested stream.data.gz file:
 https://powermail.nu/nextcloud/index.php/s/QmbQLnrLZneIScT
>>>
>>> Well, this is unexpected : the data does not match.
>>> I assume it has been relocated, but I have no idea
>>> where it could be. This situation does not occur in
>>> my test partition, so I need your help.
>>>
>>> Basically, I have to find the following hexadecimal
>>> digest somewhere : C1F41B5197F9B31AFE5D65585CA4F8F8
>>> As the files are big, I have to rely on you doing
>>> some investigation.
>>>
>>> I first assume this is to be found in the expected
>>> file (0019.0001.ccc), and you may first check
>>> whether "grep" (or "strings") find the printable
>>> sub-sequence "]eX\" in it :
>>> grep '\]eX\\' /mnt/sr7-sdb2/..etc../0019.0001.ccc
>>>
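The sub-sequence "]eX\" quoted above comes from the digest bytes 5D 65 58 5C, which happen to be printable ASCII. A small helper (not part of searchseq; purely illustrative) that extracts such greppable runs from a hex digest:

```python
# Extract runs of visible ASCII from a hex digest, so they can be fed
# to grep as a cheap pre-check before running a full binary search.
def printable_runs(hexdigest, min_len=3):
    data = bytes.fromhex(hexdigest)
    runs, cur = [], ""
    for b in data:
        if 0x21 <= b <= 0x7e:          # visible (non-whitespace) ASCII
            cur += chr(b)
        else:
            if len(cur) >= min_len:
                runs.append(cur)
            cur = ""
    if len(cur) >= min_len:
        runs.append(cur)
    return runs

print(printable_runs("C1F41B5197F9B31AFE5D65585CA4F8F8"))   # -> [']eX\\']
```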
>>> If so, use the attached program searchseq to find out
>>> precisely where the sequence is located :
>>> ./searchseq C1F41B5197F9B31AFE5D65585CA4F8F8 /mnt/sr7-sdb2/..etc..
>>> (I have included the source code, you may compile it if
>>> you are uneasy executing foreign code).
>>>
>>> If it finds it, take the decimal location, divide by 512
>>> and post three records around this location. Assuming
>>> you get 123456789, dividing by 512 yields 241126, so you
>>> post three records from 241125 (so 241126 and nearby).
>>> Then play the dd command with needed adaptations and make
>>> the output available :
>>> dd if=FILE bs=512 skip=START count=3 | gzip > stream.data.gz
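The offset-to-record arithmetic described above can be sketched as follows (record size and context count are taken from the quoted dd command; the function name is mine, and FILE is the same placeholder used above):

```python
# Convert a byte offset reported by searchseq into dd parameters:
# 512-byte records, starting one record before the hit for context.
def dd_window(byte_offset, record_size=512, count=3):
    record = byte_offset // record_size   # record containing the offset
    start = max(record - 1, 0)            # one record earlier, as suggested
    return "dd if=FILE bs=%d skip=%d count=%d" % (record_size, start, count)

# The example from the message: offset 123456789 falls in record 241126,
# so the dump starts at 241125.
print(dd_window(123456789))   # -> dd if=FILE bs=512 skip=241125 count=3
```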
>>>
>>> Thank you for your help and good luck !
>>>
>>> Jean-Pierre
>>>
>>>
 root@backup:~# ls -hali
 /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream | grep 2801748
 2801748 -rwxrwxrwx 1 root root  87M Jan 28 07:27 0019.0001.ccc
 root@backup:~# ls -hali
 /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream/0019.0001.ccc




 2801748 -rwxrwxrwx 1 root root 87M Jan 28 07:27 /mnt/sr7-sdb2/System
 Volume
 Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc




 root@backup:~# FILE="/mnt/sr7-sdb2/System Volume
 Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc"




 root@backup:~# dd if="$FILE" bs=512 skip=133396 count=2 | gzip >
 stream.data.gz

 Thank you!

 Kind regards,

 Jelle de Jong

 On 04/02/17 21:52, Jean-Pierre André wrote:
> Hi again,
>
> A consistency check has failed, there is some unexpected
> data in your Stream file. Can you post an excerpt :
>
> dd if=FILE bs=512 skip=133396 count=2 | gzip > stream.data.gz
> (FILE being the one with name 0019.0001.ccc whose
> inode number is 2801748).
>
> Regards
>
> Jean-Pierre
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> Thank you!
>>
>> The requested information: http://paste.debian.net/hidden/bfcfff7c/
>>
>> The behaviour is with plain mount:
>>
>> mount -o ro  -t ntfs-3g
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
>>
>> I figured out how to load your awesome plugin into the guestmount
>> system, so if mount works I should have the rest working as well.
>>
>> Thank you in advance,
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 04/02/17 14:37, Jean-Pierre André wrote:
>>> Hi,
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 I could read the files for 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-08 Thread Jean-Pierre André
Hi,

Can you please make a try with :
http://jp-andre.pagesperso-orange.fr/dedup120-beta.zip

This is experimental and based on assumptions which have
to be clarified, but it should work in your environment.

Regards

Jean-Pierre

Jelle de Jong wrote:
> Dear Jean-Piere,
>
> The stream.data.full.gz file:
> https://powermail.nu/owncloud/index.php/s/6Y0WXs7WQclpOBM
>
> # attempt to compile searchseq.c:
> http://paste.debian.net/plainh/49252e03/
>
> # grep and other info how I generated the stream.data.full.gz file:
> http://paste.debian.net/plainh/34a9c9ec
>
> Again thank you for your help!
>
> Kind regards,
>
> Jelle de Jong
>
> On 05/02/17 17:44, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Piere,
>>>
>>> The requested stream.data.gz file:
>>> https://powermail.nu/nextcloud/index.php/s/QmbQLnrLZneIScT
>>
>> Well, this is unexpected : the data does not match.
>> I assume it has been relocated, but I have no idea
>> where it could be. This situation does not occur in
>> my test partition, so I need your help.
>>
>> Basically, I have to find the following hexadecimal
>> digest somewhere : C1F41B5197F9B31AFE5D65585CA4F8F8
>> As the files are big, I have to rely on you doing
>> some investigation.
>>
>> I first assume this is to be found in the expected
>> file (0019.0001.ccc), and you may first check
>> whether "grep" (or "strings") find the printable
>> sub-sequence "]eX\" in it :
>> grep '\]eX\\' /mnt/sr7-sdb2/..etc../0019.0001.ccc
>>
>> If so, use the attached program searchseq to find out
>> precisely where the sequence is located :
>> ./searchseq C1F41B5197F9B31AFE5D65585CA4F8F8 /mnt/sr7-sdb2/..etc..
>> (I have included the source code, you may compile it if
>> you are uneasy executing foreign code).
>>
>> If it finds it, take the decimal location, divide by 512
>> and post three records around this location. Assuming
>> you get 123456789, dividing by 512 yields 241126, so you
>> post three records from 241125 (so 241126 and nearby).
>> Then play the dd command with needed adaptations and make
>> the output available :
>> dd if=FILE bs=512 skip=START count=3 | gzip > stream.data.gz
>>
>> Thank you for your help and good luck !
>>
>> Jean-Pierre
>>
>>
>>> root@backup:~# ls -hali
>>> /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream | grep 2801748
>>> 2801748 -rwxrwxrwx 1 root root  87M Jan 28 07:27 0019.0001.ccc
>>> root@backup:~# ls -hali
>>> /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream/0019.0001.ccc
>>>
>>>
>>>
>>> 2801748 -rwxrwxrwx 1 root root 87M Jan 28 07:27 /mnt/sr7-sdb2/System
>>> Volume
>>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc
>>>
>>>
>>>
>>> root@backup:~# FILE="/mnt/sr7-sdb2/System Volume
>>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc"
>>>
>>>
>>>
>>> root@backup:~# dd if="$FILE" bs=512 skip=133396 count=2 | gzip >
>>> stream.data.gz
>>>
>>> Thank you!
>>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>> On 04/02/17 21:52, Jean-Pierre André wrote:
 Hi again,

 A consistency check has failed, there is some unexpected
 data in your Stream file. Can you post an excerpt :

 dd if=FILE bs=512 skip=133396 count=2 | gzip > stream.data.gz
 (FILE being the one with name 0019.0001.ccc whose
 inode number is 2801748).

 Regards

 Jean-Pierre

 Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The requested information: http://paste.debian.net/hidden/bfcfff7c/
>
> The behaviour is with plain mount:
>
> mount -o ro  -t ntfs-3g
> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
>
> I figured out how to load your awesome plugin into the guestmount
> system, so if mount works I should have the rest working as well.
>
> Thank you in advance,
>
> Kind regards,
>
> Jelle de Jong
>
> On 04/02/17 14:37, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> I could read the files for maybe one day; I tried to put it into
>>> production but then it didn't work anymore. Going back to my base
>>> test, mounting the volume without guestmount (so with just mount),
>>> I get the following:
>>
>> Do you mean that the behavior with guestmount is
>> different from the one without guestmount ?
>>
>> Also, are you not able to read any file any more, or
>> are there files which you can read (such as the one
>> you could read previously) ?
>>
>> (more below)
>>
>>> # with stream list:
>>> http://paste.debian.net/hidden/15d83c84/
>>>
>>> root@backup:~# grep ntfs-3g /var/log/syslog
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1
>>> integrated
>>> FUSE 28
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-07 Thread Jean-Pierre André
Hi,

The needed data appears to be in the expected file, but
at a very unexpected location. So I have landed in
unknown territory and I need some time to investigate.

I do not need more data for now, you can allow the
file system to be updated again, and we will restart
from the beginning when I know more about the issue.

Regards

Jean-Pierre

(also see below)

Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> The stream.data.full.gz file:
> https://powermail.nu/owncloud/index.php/s/6Y0WXs7WQclpOBM

This is the full file, and it is ok. I was trying to avoid
this, but being in unknown territory, I now have to expand the
investigation zone, so the full file turned out to be needed.

No worry, there is no user data there.

> # attempt to compile searchseq.c:
> http://paste.debian.net/plainh/49252e03/

This failed because you used the option -c, which prevents
building an executable. I should have quoted the
compilation command :
gcc -o searchseq searchseq.c

>
> # grep and other info how I generated the stream.data.full.gz file:
> http://paste.debian.net/plainh/34a9c9ec
>
> Again thank you for your help!
>
> Kind regards,
>
> Jelle de Jong
>
> On 05/02/17 17:44, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Hi Jean-Pierre,
>>>
>>> The requested stream.data.gz file:
>>> https://powermail.nu/nextcloud/index.php/s/QmbQLnrLZneIScT
>>
>> Well, this is unexpected : the data does not match.
>> I assume it has been relocated, but I have no idea
>> where it could be. This situation does not occur in
>> my test partition, so I need your help.
>>
>> Basically, I have to find the following hexadecimal
>> digest somewhere : C1F41B5197F9B31AFE5D65585CA4F8F8
>> As the files are big, I have to rely on you doing
>> some investigation.
>>
>> I first assume this is to be found in the expected
>> file (0019.0001.ccc), and you may first check
>> whether "grep" (or "strings") finds the printable
>> sub-sequence "]eX\" in it :
>> grep '\]eX\\' /mnt/sr7-sdb2/..etc../0019.0001.ccc
>>
>> If so, use the attached program searchseq to find out
>> precisely where the sequence is located :
>> ./searchseq C1F41B5197F9B31AFE5D65585CA4F8F8 /mnt/sr7-sdb2/..etc..
>> (I have included the source code, you may compile it if
>> you are uneasy executing foreign code).
>>
>> If it finds it, take the decimal location and divide by 512.
>> Assuming you get 123456789, dividing by 512 yields 241126, so
>> you post three records starting at 241125 (record 241126 plus
>> one record of context on each side).
>> Then run the dd command with the needed adaptations and make
>> the output available :
>> dd if=FILE bs=512 skip=START count=3 | gzip > stream.data.gz
>>
>> Thank you for your help and good luck !
>>
>> Jean-Pierre




--
Check out the vibrant tech community on one of the world's most
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
___
ntfs-3g-devel mailing list
ntfs-3g-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel


Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-07 Thread Jelle de Jong
Dear Jean-Pierre,

The stream.data.full.gz file:
https://powermail.nu/owncloud/index.php/s/6Y0WXs7WQclpOBM

# attempt to compile searchseq.c:
http://paste.debian.net/plainh/49252e03/

# grep and other info how I generated the stream.data.full.gz file:
http://paste.debian.net/plainh/34a9c9ec

Again thank you for your help!

Kind regards,

Jelle de Jong

On 05/02/17 17:44, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> The requested stream.data.gz file:
>> https://powermail.nu/nextcloud/index.php/s/QmbQLnrLZneIScT
>
> Well, this is unexpected : the data does not match.
> I assume it has been relocated, but I have no idea
> where it could be. This situation does not occur in
> my test partition, so I need your help.
>
> Basically, I have to find the following hexadecimal
> digest somewhere : C1F41B5197F9B31AFE5D65585CA4F8F8
> As the files are big, I have to rely on you doing
> some investigation.
>
> I first assume this is to be found in the expected
> file (0019.0001.ccc), and you may first check
> whether "grep" (or "strings") finds the printable
> sub-sequence "]eX\" in it :
> grep '\]eX\\' /mnt/sr7-sdb2/..etc../0019.0001.ccc
>
> If so, use the attached program searchseq to find out
> precisely where the sequence is located :
> ./searchseq C1F41B5197F9B31AFE5D65585CA4F8F8 /mnt/sr7-sdb2/..etc..
> (I have included the source code, you may compile it if
> you are uneasy executing foreign code).
>
> If it finds it, take the decimal location and divide by 512.
> Assuming you get 123456789, dividing by 512 yields 241126, so
> you post three records starting at 241125 (record 241126 plus
> one record of context on each side).
> Then run the dd command with the needed adaptations and make
> the output available :
> dd if=FILE bs=512 skip=START count=3 | gzip > stream.data.gz
>
> Thank you for your help and good luck !
>
> Jean-Pierre
>
>
>> root@backup:~# ls -hali
>> /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream | grep 2801748
>> 2801748 -rwxrwxrwx 1 root root  87M Jan 28 07:27 0019.0001.ccc
>> root@backup:~# ls -hali
>> /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream/0019.0001.ccc
>>
>>
>> 2801748 -rwxrwxrwx 1 root root 87M Jan 28 07:27 /mnt/sr7-sdb2/System
>> Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc
>>
>>
>> root@backup:~# FILE="/mnt/sr7-sdb2/System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc"
>>
>>
>> root@backup:~# dd if="$FILE" bs=512 skip=133396 count=2 | gzip >
>> stream.data.gz
>>
>> Thank you!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>> On 04/02/17 21:52, Jean-Pierre André wrote:
>>> Hi again,
>>>
>>> A consistency check has failed, there is some unexpected
>>> data in your Stream file. Can you post an excerpt :
>>>
>>> dd if=FILE bs=512 skip=133396 count=2 | gzip > stream.data.gz
>>> (FILE being the one with name 0019.0001.ccc whose
>>> inode number is 2801748).
>>>
>>> Regards
>>>
>>> Jean-Pierre
>>>
>>> Jelle de Jong wrote:
 Hi Jean-Pierre,

 Thank you!

 The requested information: http://paste.debian.net/hidden/bfcfff7c/

 The behaviour is the same with a plain mount:

 mount -o ro  -t ntfs-3g
 /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2

 I figured out how to load your awesome plugin into the guestmount
 system, so if mount works I should have the rest working as well.

 Thank you in advance,

 Kind regards,

 Jelle de Jong

 On 04/02/17 14:37, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I could read the files for maybe one day; I tried to put it into
>> production but then it didn't work anymore. Going back to my base
>> test, mounting the volume without guestmount (so with just mount),
>> I get the following:
>
> Do you mean that the behavior with guestmount is
> different from the one without guestmount ?
>
> Also, are you not able to read any file any more, or
> are there files which you can read (such as the one
> you could read previously) ?
>
> (more below)
>
>> # with stream list:
>> http://paste.debian.net/hidden/15d83c84/
>>
>> root@backup:~# grep ntfs-3g /var/log/syslog
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1
>> integrated
>> FUSE 28
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
>> "DATA", NTFS 3.1)
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Cmdline options: ro
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mount options:
>> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>>
>>
>>
>>
>>
>>
>> Feb  4 11:44:10 backup ntfs-3g[30845]: 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-05 Thread Jean-Pierre André

Hi,

Jelle de Jong wrote:

Hi Jean-Pierre,

The requested stream.data.gz file:
https://powermail.nu/nextcloud/index.php/s/QmbQLnrLZneIScT


Well, this is unexpected : the data does not match.
I assume it has been relocated, but I have no idea
where it could be. This situation does not occur in
my test partition, so I need your help.

Basically, I have to find the following hexadecimal
digest somewhere : C1F41B5197F9B31AFE5D65585CA4F8F8
As the files are big, I have to rely on you doing
some investigation.

I first assume this is to be found in the expected
file (0019.0001.ccc), and you may first check
whether "grep" (or "strings") finds the printable
sub-sequence "]eX\" in it :
grep '\]eX\\' /mnt/sr7-sdb2/..etc../0019.0001.ccc

If so, use the attached program searchseq to find out
precisely where the sequence is located :
./searchseq C1F41B5197F9B31AFE5D65585CA4F8F8 /mnt/sr7-sdb2/..etc..
(I have included the source code, you may compile it if
you are uneasy executing foreign code).

If it finds it, take the decimal location and divide by 512.
Assuming you get 123456789, dividing by 512 yields 241126, so
you post three records starting at 241125 (record 241126 plus
one record of context on each side).
Then run the dd command with the needed adaptations and make
the output available :
dd if=FILE bs=512 skip=START count=3 | gzip > stream.data.gz
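For readers without the attachment, a rough Python stand-in for the searchseq helper (a sketch only; the real searchseq.c was attached to the original mail and may differ):

```python
RECORD_SIZE = 512  # record size used by the dd commands in this thread

def locate_digest(path, hex_digest):
    """Search a file for the digest bytes.

    Returns (byte_offset, dd_skip), or None when the digest is absent.
    dd_skip is one 512-byte record before the hit, so that
    `dd bs=512 skip=dd_skip count=3` covers the record containing the
    digest plus one record of context on each side.
    """
    needle = bytes.fromhex(hex_digest)
    with open(path, "rb") as f:
        data = f.read()  # the Stream files here are ~87 MB; fits in memory
    offset = data.find(needle)
    if offset < 0:
        return None
    return offset, max(offset // RECORD_SIZE - 1, 0)
```

For example, locate_digest(FILE, "C1F41B5197F9B31AFE5D65585CA4F8F8") returns the byte offset plus the value to pass to dd's skip.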

Thank you for your help and good luck !

Jean-Pierre



root@backup:~# ls -hali
/mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream | grep 2801748
2801748 -rwxrwxrwx 1 root root  87M Jan 28 07:27 0019.0001.ccc
root@backup:~# ls -hali
/mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream/0019.0001.ccc

2801748 -rwxrwxrwx 1 root root 87M Jan 28 07:27 /mnt/sr7-sdb2/System
Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc

root@backup:~# FILE="/mnt/sr7-sdb2/System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0019.0001.ccc"

root@backup:~# dd if="$FILE" bs=512 skip=133396 count=2 | gzip >
stream.data.gz

Thank you!

Kind regards,

Jelle de Jong

On 04/02/17 21:52, Jean-Pierre André wrote:

Hi again,

A consistency check has failed, there is some unexpected
data in your Stream file. Can you post an excerpt :

dd if=FILE bs=512 skip=133396 count=2 | gzip > stream.data.gz
(FILE being the one with name 0019.0001.ccc whose
inode number is 2801748).

Regards

Jean-Pierre

Jelle de Jong wrote:

Hi Jean-Pierre,

Thank you!

The requested information: http://paste.debian.net/hidden/bfcfff7c/

The behaviour is the same with a plain mount:

mount -o ro  -t ntfs-3g
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2

I figured out how to load your awesome plugin into the guestmount
system, so if mount works I should have the rest working as well.

Thank you in advance,

Kind regards,

Jelle de Jong

On 04/02/17 14:37, Jean-Pierre André wrote:

Hi,

Jelle de Jong wrote:

Dear Jean-Pierre,

I could read the files for maybe one day; I tried to put it into
production but then it didn't work anymore. Going back to my base test,
mounting the volume without guestmount (so with just mount), I get the
following:


Do you mean that the behavior with guestmount is
different from the one without guestmount ?

Also, are you not able to read any file any more, or
are there files which you can read (such as the one
you could read previously) ?

(more below)


# with stream list:
http://paste.debian.net/hidden/15d83c84/

root@backup:~# grep ntfs-3g /var/log/syslog
Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1
integrated
FUSE 28
Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
"DATA", NTFS 3.1)
Feb  4 11:44:10 backup ntfs-3g[30845]: Cmdline options: ro
Feb  4 11:44:10 backup ntfs-3g[30845]: Mount options:
ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096





Feb  4 11:44:10 backup ntfs-3g[30845]: Ownership and permissions
disabled, configuration type 7
Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x0 for
file
2801748
Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x2
for
file 2801748
Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x0 for
file
2801748
Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x2
for
file 2801748


At first glance, this is related to the modification
made a few weeks ago. Probably a wrong file was used,
but I have no idea why.

To debug this, I will need some data. To begin with,
I need the reparse data of an unreadable file.
You can get the reparse data by :
(replace NAME by actual name)

getfattr -h -e hex -n system.ntfs_reparse_data NAME

I also need to know which one is the file 2801748.
This should be in the directory which you have already
listed (/mnt/sr7-sdb2/System*/  ... /Stream), just add
-i to the "ls" options ("ls -hali 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-04 Thread Jean-Pierre André
Hi again,

A consistency check has failed, there is some unexpected
data in your Stream file. Can you post an excerpt :

dd if=FILE bs=512 skip=133396 count=2 | gzip > stream.data.gz
(FILE being the one with name 0019.0001.ccc whose
inode number is 2801748).

Regards

Jean-Pierre

Jelle de Jong wrote:
> Hi Jean-Pierre,
>
> Thank you!
>
> The requested information: http://paste.debian.net/hidden/bfcfff7c/
>
> The behaviour is the same with a plain mount:
>
> mount -o ro  -t ntfs-3g
> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
>
> I figured out how to load your awesome plugin into the guestmount
> system, so if mount works I should have the rest working as well.
>
> Thank you in advance,
>
> Kind regards,
>
> Jelle de Jong
>
> On 04/02/17 14:37, Jean-Pierre André wrote:
>> Hi,
>>
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> I could read the files for maybe one day; I tried to put it into
>>> production but then it didn't work anymore. Going back to my base test,
>>> mounting the volume without guestmount (so with just mount), I get the
>>> following:
>>
>> Do you mean that the behavior with guestmount is
>> different from the one without guestmount ?
>>
>> Also, are you not able to read any file any more, or
>> are there files which you can read (such as the one
>> you could read previously) ?
>>
>> (more below)
>>
>>> # with stream list:
>>> http://paste.debian.net/hidden/15d83c84/
>>>
>>> root@backup:~# grep ntfs-3g /var/log/syslog
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1 integrated
>>> FUSE 28
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted
>>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
>>> "DATA", NTFS 3.1)
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Cmdline options: ro
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mount options:
>>> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>>>
>>>
>>>
>>> Feb  4 11:44:10 backup ntfs-3g[30845]: Ownership and permissions
>>> disabled, configuration type 7
>>> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
>>> 2801748
>>> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
>>> file 2801748
>>> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
>>> 2801748
>>> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
>>> file 2801748
>>
>> At first glance, this is related to the modification
>> made a few weeks ago. Probably a wrong file was used,
>> but I have no idea why.
>>
>> To debug this, I will need some data. To begin with,
>> I need the reparse data of an unreadable file.
>> You can get the reparse data by :
>> (replace NAME by actual name)
>>
>> getfattr -h -e hex -n system.ntfs_reparse_data NAME
>>
>> I also need to know which one is the file 2801748.
>> This should be in the directory which you have already
>> listed (/mnt/sr7-sdb2/System*/  ... /Stream), just add
>> -i to the "ls" options ("ls -hali /mnt/sr7-sdb2 ...")
>> to display the inode numbers.
>>
>>>
>>> I am at Fosdem 2017 this weekend, if you are there let me know.
>>
>> No, I am not.
>>
>>> Could you help me out and tell me what data you would like to have
>>> from me.
>>
>> I need to go through three steps, and analyze the data
>> to get to the next step. The first step is the reparse
>> data, as mentioned above. This is difficult to automate,
>> as bugs tend to occur in unanticipated cases (I obviously
>> have no specification, so I have no information about
>> cases I never met myself).
>>
>> Note : please, only mount as read-only until the issue
>> is solved.
>>
>> Regards
>>
>> Jean-Pierre
>>
>>
>>> Kind regards,
>>>
>>> Jelle de Jong
>>>
>>>
>>> On 18/01/17 21:52, Jean-Pierre André wrote:
 Hi again,

 Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> Thank you! I can read the files!

 Great !

>>
>>
>





Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-04 Thread Jelle de Jong
Hi Jean-Pierre,

Thank you!

The requested information: http://paste.debian.net/hidden/bfcfff7c/

The behaviour is the same with a plain mount:

mount -o ro  -t ntfs-3g 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2

I figured out how to load your awesome plugin into the guestmount 
system, so if mount works I should have the rest working as well.

Thank you in advance,

Kind regards,

Jelle de Jong

On 04/02/17 14:37, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> I could read the files for maybe one day; I tried to put it into
>> production but then it didn't work anymore. Going back to my base test,
>> mounting the volume without guestmount (so with just mount), I get the
>> following:
>
> Do you mean that the behavior with guestmount is
> different from the one without guestmount ?
>
> Also, are you not able to read any file any more, or
> are there files which you can read (such as the one
> you could read previously) ?
>
> (more below)
>
>> # with stream list:
>> http://paste.debian.net/hidden/15d83c84/
>>
>> root@backup:~# grep ntfs-3g /var/log/syslog
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1 integrated
>> FUSE 28
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
>> "DATA", NTFS 3.1)
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Cmdline options: ro
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Mount options:
>> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>>
>>
>> Feb  4 11:44:10 backup ntfs-3g[30845]: Ownership and permissions
>> disabled, configuration type 7
>> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
>> 2801748
>> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
>> file 2801748
>> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
>> 2801748
>> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
>> file 2801748
>
> At first glance, this is related to the modification
> made a few weeks ago. Probably a wrong file was used,
> but I have no idea why.
>
> To debug this, I will need some data. To begin with,
> I need the reparse data of an unreadable file.
> You can get the reparse data by :
> (replace NAME by actual name)
>
> getfattr -h -e hex -n system.ntfs_reparse_data NAME
>
> I also need to know which one is the file 2801748.
> This should be in the directory which you have already
> listed (/mnt/sr7-sdb2/System*/  ... /Stream), just add
> -i to the "ls" options ("ls -hali /mnt/sr7-sdb2 ...")
> to display the inode numbers.
>
>>
>> I am at Fosdem 2017 this weekend, if you are there let me know.
>
> No, I am not.
>
>> Could you help me out and tell me what data you would like to have
>> from me.
>
> I need to go through three steps, and analyze the data
> to get to the next step. The first step is the reparse
> data, as mentioned above. This is difficult to automate,
> as bugs tend to occur in unanticipated cases (I obviously
> have no specification, so I have no information about
> cases I never met myself).
>
> Note : please, only mount as read-only until the issue
> is solved.
>
> Regards
>
> Jean-Pierre
>
>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> On 18/01/17 21:52, Jean-Pierre André wrote:
>>> Hi again,
>>>
>>> Jelle de Jong wrote:
 Dear Jean-Pierre,

 Thank you! I can read the files!
>>>
>>> Great !
>>>
>
>



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-02-04 Thread Jean-Pierre André
Hi,

Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> I could read the files for maybe one day; I tried to put it into
> production but then it didn't work anymore. Going back to my base test,
> mounting the volume without guestmount (so with just mount), I get the
> following:

Do you mean that the behavior with guestmount is
different from the one without guestmount ?

Also, are you not able to read any file any more, or
are there files which you can read (such as the one
you could read previously) ?

(more below)

> # with stream list:
> http://paste.debian.net/hidden/15d83c84/
>
> root@backup:~# grep ntfs-3g /var/log/syslog
> Feb  4 11:44:10 backup ntfs-3g[30845]: Version 2016.2.22AR.1 integrated
> FUSE 28
> Feb  4 11:44:10 backup ntfs-3g[30845]: Mounted
> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
> "DATA", NTFS 3.1)
> Feb  4 11:44:10 backup ntfs-3g[30845]: Cmdline options: ro
> Feb  4 11:44:10 backup ntfs-3g[30845]: Mount options:
> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>
> Feb  4 11:44:10 backup ntfs-3g[30845]: Ownership and permissions
> disabled, configuration type 7
> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
> 2801748
> Feb  4 11:47:10 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
> file 2801748
> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x0 for file
> 2801748
> Feb  4 11:47:12 backup ntfs-3g[30845]: Bad stream at offset 0x2 for
> file 2801748

At first glance, this is related to the modification
made a few weeks ago. Probably a wrong file was used,
but I have no idea why.

To debug this, I will need some data. To begin with,
I need the reparse data of an unreadable file.
You can get the reparse data by :
(replace NAME by actual name)

getfattr -h -e hex -n system.ntfs_reparse_data NAME

I also need to know which one is the file 2801748.
This should be in the directory which you have already
listed (/mnt/sr7-sdb2/System*/  ... /Stream), just add
-i to the "ls" options ("ls -hali /mnt/sr7-sdb2 ...")
to display the inode numbers.
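For scripted collection, the same reparse data can also be read through the extended-attribute interface; a minimal Python sketch, assuming only the attribute name used by the getfattr command above:

```python
import os

# xattr under which ntfs-3g exposes reparse data (see the getfattr command above)
NTFS_REPARSE_XATTR = "system.ntfs_reparse_data"

def reparse_data_hex(path):
    """Return the reparse data of `path` hex-encoded, like `getfattr -e hex`."""
    raw = os.getxattr(path, NTFS_REPARSE_XATTR, follow_symlinks=False)
    return "0x" + raw.hex()
```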

>
> I am at Fosdem 2017 this weekend, if you are there let me know.

No, I am not.

> Could you help me out and tell me what data you would like to have from me.

I need to go through three steps, and analyze the data
to get to the next step. The first step is the reparse
data, as mentioned above. This is difficult to automate,
as bugs tend to occur in unanticipated cases (I obviously
have no specification, so I have no information about
cases I never met myself).

Note : please, only mount as read-only until the issue
is solved.

Regards

Jean-Pierre


> Kind regards,
>
> Jelle de Jong
>
>
> On 18/01/17 21:52, Jean-Pierre André wrote:
>> Hi again,
>>
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> Thank you! I can read the files!
>>
>> Great !
>>





Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-18 Thread Jean-Pierre André
Hi again,

Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> Thank you! I can read the files!

Great !

> On 18/01/17 15:20, Jean-Pierre André wrote:
>> Now, I have updated it again, so please redo the same.
>> If it still fails, please post all the file names which
>> appear in the directory "Stream", so that I can see which
>> one got into the way.
>
> How do I know everything is working and can be used as a reliable
> production method to make file based backups with rdiff-backup and
> ntfs-3g on GNU/Linux?

I do not know about a method to make sure everything
is working, so you should not be over-confident in a
software which has only been used by a few users so far.

Regards

Jean-Pierre

>
> Thank you in advance,
>
> Kind regards,
>
> Jelle de Jong
>
>





Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-18 Thread Jelle de Jong
Dear Jean-Pierre,

Thank you! I can read the files!

On 18/01/17 15:20, Jean-Pierre André wrote:
> Now, I have updated it again, so please redo the same.
> If it still fails, please post all the file names which
> appear in the directory "Stream", so that I can see which
> one got into the way.

How do I know everything is working and can be used as a reliable 
production method to make file based backups with rdiff-backup and 
ntfs-3g on GNU/Linux?

Thank you in advance,

Kind regards,

Jelle de Jong



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-18 Thread Jelle de Jong
Dear Jean-Pierre,

Thank you! Here is the requested data:

http://paste.debian.net/plain/909296

I updated the plug-in with wget 
http://jp-andre.pagesperso-orange.fr/dedup.zip. The file name was the 
same, so I hope the plugin was updated; the date stamp looks like it 
is from the 16th.

-rwxr-xr-x 1 root root  17K Jan 16 15:47 ntfs-plugin-8013.so

Kind regards,

Jelle de Jong


On 17/01/17 14:08, Jean-Pierre André wrote:
> Hi,
>
> Jelle de Jong wrote:
>> Hi Jean-Pierre,
>>
>> I managed to get the plug-in loaded with guestfs tools, but I could not
>> read the files, so I went back to my basic ntfs mount test without
>> guestmount and I could also not read the files!
>>
>> The plug-in is located at:
>> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so without the
>> plug-in I get the "unsupported reparse point" messages, with the plug-in
>> I do not get these messages, but I am still unable to read the files. :(
>>
>> I added the getfattr -h -e hex -n system.ntfs_reparse_data of a sample
>> file as requested and the syslog messages.
>
> Ok. This shows a variation which I have only recently
> become aware of : the Smap id is apparently incremented
> when the Smap is updated.
>
> I have just updated the plugin, please load it again
> and replace it.
>
>> https://paste.debian.net/hidden/4d358e9e/
>
> This also shows the plugin was activated correctly.
>
>> Could you please help me out further?
>
> I can at least try, with some help from you.
>
> Please retry with the updated plugin, and please only
> mount the partition as read-only, because due to
> deduplication, when a file is updated, the layout of
> other files is changed, which makes debugging difficult.
>
> If the updated plugin does not work with the same unchanged
> file, please post a 1KB excerpt of the file 0012.000a.ccc
> from the directory whose path can be stated as :
> /mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream
>
> To extract the needed data you can use :
>
> dd if=/mnt/--etc--/Stream/0012.000a.ccc bs=512 skip=17862
> count=2 | od -t x1
> (I have put the full command as an attachment to avoid formatting
> by the mailer).
>
> I am of course assuming the partition was not changed since
> your earlier post.
>
> Note : in this excerpt there is no user data, you can
> safely post it.
>
> Regards
>
> Jean-Pierre
>
>> Thank you in advance!
>>
>> Kind regards,
>>
>> Jelle de Jong
>>
>>
>> lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G
>> /dev/lvm1-vol/sr7-disk2
>> lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> mount -o ro  -t ntfs-3g
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
>>
>> Jan 17 11:04:14 backup ntfs-3g[5032]: Version 2016.2.22AR.1 integrated
>> FUSE 28
>> Jan 17 11:04:14 backup ntfs-3g[5032]: Mounted
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
>> "DATA", NTFS 3.1)
>> Jan 17 11:04:14 backup ntfs-3g[5032]: Cmdline options: ro
>> Jan 17 11:04:14 backup ntfs-3g[5032]: Mount options:
>> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>>
>>
>> Jan 17 11:04:14 backup ntfs-3g[5032]: Ownership and permissions
>> disabled, configuration type 7
>> Jan 17 11:04:33 backup ntfs-3g[5032]: Failed to open a dedup stream last
>> try was System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
>>
>>
>> Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last
>> try was System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
>>
>>
>> Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last
>> try was System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
>>
>>
>> Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last
>> try was System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
>>
>>
>> Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last
>> try was System Volume
>> Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
>>
>>
>>
>>
>> root@backup:~# getfattr -h -e hex -n system.ntfs_reparse_data
>> /mnt/sr7-sdb2/ALGEMEEN/2009-12-17\ Index\ mappenstructuur.txt
>> getfattr: Removing leading '/' from absolute path names
>> # file: mnt/sr7-sdb2/ALGEMEEN/2009-12-17 Index mappenstructuur.txt
>> 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-17 Thread Jean-Pierre André

Hi,

Jelle de Jong wrote:

Hi Jean-Pierre,

I managed to get the plug-in loaded with guestfs tools, but I could not
read the files, so I went back to my basic ntfs mount test without
guestmount and I could also not read the files!

The plug-in is located at:
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so without the
plug-in I get the "unsupported reparse point" messages, with the plug-in
I do not get these messages, but I am still unable to read the files. :(

I added the getfattr -h -e hex -n system.ntfs_reparse_data of a sample
file as requested and the syslog messages.


OK. This shows a variation which I have only recently
become aware of: the Smap id is apparently incremented
when the Smap is updated.

I have just updated the plugin; please download it again
and replace the old one.


https://paste.debian.net/hidden/4d358e9e/


This also shows the plugin was activated correctly.


Could you please help me out further?


I can at least try, with some help from you.

Please retry with the updated plugin, and please only
mount the partition as read-only: due to deduplication,
updating one file changes the layout of other files,
which makes debugging difficult.

If the updated plugin does not work with the same unchanged
file, please post a 1KB excerpt of the file 0012.000a.ccc
from the directory whose path can be stated as:
/mnt/sr7-sdb2/System*/Dedup/ChunkStore/{0DECAE8D*/Stream

To extract the needed data you can use :

dd if=/mnt/--etc--/Stream/0012.000a.ccc bs=512 skip=17862 count=2 | od -t x1

(I have put the full command as an attachment to avoid formatting
by the mailer).
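For anyone scripting this, the same byte range can be read with a short Python sketch (a hypothetical `read_sectors` helper, not part of any tool in this thread; block size, skip and count mirror the dd command above):

```python
def read_sectors(path, skip, count, bs=512):
    """Read `count` blocks of `bs` bytes starting at block `skip`,
    mirroring: dd if=path bs=512 skip=17862 count=2."""
    with open(path, "rb") as f:
        f.seek(skip * bs)          # byte offset = skip * block size
        return f.read(count * bs)  # 2 * 512 = 1 KiB excerpt

# Usage (path elided as in the dd command above):
# excerpt = read_sectors("/mnt/--etc--/Stream/0012.000a.ccc", 17862, 2)
# print(excerpt.hex())
```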

I am of course assuming the partition was not changed since
your earlier post.

Note : in this excerpt there is no user data, you can
safely post it.

Regards

Jean-Pierre


Thank you in advance!

Kind regards,

Jelle de Jong


lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G
/dev/lvm1-vol/sr7-disk2
lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
mount -o ro  -t ntfs-3g
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2

Jan 17 11:04:14 backup ntfs-3g[5032]: Version 2016.2.22AR.1 integrated
FUSE 28
Jan 17 11:04:14 backup ntfs-3g[5032]: Mounted
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
"DATA", NTFS 3.1)
Jan 17 11:04:14 backup ntfs-3g[5032]: Cmdline options: ro
Jan 17 11:04:14 backup ntfs-3g[5032]: Mount options:
ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096

Jan 17 11:04:14 backup ntfs-3g[5032]: Ownership and permissions
disabled, configuration type 7
Jan 17 11:04:33 backup ntfs-3g[5032]: Failed to open a dedup stream last
try was System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc

Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last
try was System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc

Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last
try was System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc

Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last
try was System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc

Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last
try was System Volume
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc



root@backup:~# getfattr -h -e hex -n system.ntfs_reparse_data
/mnt/sr7-sdb2/ALGEMEEN/2009-12-17\ Index\ mappenstructuur.txt
getfattr: Removing leading '/' from absolute path names
# file: mnt/sr7-sdb2/ALGEMEEN/2009-12-17 Index mappenstructuur.txt
system.ntfs_reparse_data=0x1381020100010b0004000400630004006400030004006800060008006c00090010007400050008008400060008008c000a002000d40008004000940005000800f4000c008daeec0dd271de4b8798530201c72d8d60003fd1bb66d2012719120027191200b08c8b0005000100480488019427b7d5fcd18238484f1121abc9175a202e08009f238a585b38f16276213e2cc9e63d3769232008d45ba70a03f02e356e463c68d31118e8



On 10/01/17 17:45, Jean-Pierre André wrote:

Hi again,

Jelle de Jong wrote:

Dear Jean-Pierre André,

root@backup:~# ls -hal
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
-rw-r--r-- 1 root root 16K Jan 10 13:34
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so

How do I know if ntfs-3g is using the plug-in?


The plugin is dynamically loaded when you first access
a deduplicated file. This is recorded in the syslog.

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-17 Thread Jelle de Jong
Hi Jean-Pierre,

I managed to get the plug-in loaded with guestfs tools, but I could not 
read the files, so I went back to my basic ntfs mount test without 
guestmount and I could also not read the files!

The plug-in is located at: 
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so without the 
plug-in I get the "unsupported reparse point" messages, with the plug-in 
I do not get these messages, but I am still unable to read the files. :(

I added the getfattr -h -e hex -n system.ntfs_reparse_data of a sample 
file as requested and the syslog messages.

https://paste.debian.net/hidden/4d358e9e/

Could you please help me out further?

Thank you in advance!

Kind regards,

Jelle de Jong


lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G 
/dev/lvm1-vol/sr7-disk2
lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
mount -o ro  -t ntfs-3g 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2

Jan 17 11:04:14 backup ntfs-3g[5032]: Version 2016.2.22AR.1 integrated 
FUSE 28
Jan 17 11:04:14 backup ntfs-3g[5032]: Mounted 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label 
"DATA", NTFS 3.1)
Jan 17 11:04:14 backup ntfs-3g[5032]: Cmdline options: ro
Jan 17 11:04:14 backup ntfs-3g[5032]: Mount options: 
ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
Jan 17 11:04:14 backup ntfs-3g[5032]: Ownership and permissions 
disabled, configuration type 7
Jan 17 11:04:33 backup ntfs-3g[5032]: Failed to open a dedup stream last 
try was System Volume 
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last 
try was System Volume 
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
Jan 17 11:04:55 backup ntfs-3g[5032]: Failed to open a dedup stream last 
try was System Volume 
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last 
try was System Volume 
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc
Jan 17 11:04:57 backup ntfs-3g[5032]: Failed to open a dedup stream last 
try was System Volume 
Information/Dedup/ChunkStore/{0DECAE8D-71D2-4BDE-8798-530201C72D8D}.ddp/Stream/0012.0002.ccc


root@backup:~# getfattr -h -e hex -n system.ntfs_reparse_data 
/mnt/sr7-sdb2/ALGEMEEN/2009-12-17\ Index\ mappenstructuur.txt
getfattr: Removing leading '/' from absolute path names
# file: mnt/sr7-sdb2/ALGEMEEN/2009-12-17 Index mappenstructuur.txt
system.ntfs_reparse_data=0x1381020100010b0004000400630004006400030004006800060008006c00090010007400050008008400060008008c000a002000d40008004000940005000800f4000c008daeec0dd271de4b8798530201c72d8d60003fd1bb66d2012719120027191200b08c8b0005000100480488019427b7d5fcd18238484f1121abc9175a202e08009f238a585b38f16276213e2cc9e63d3769232008d45ba70a03f02e356e463c68d31118e8
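Incidentally, the reparse data above can be sanity-checked: the ChunkStore GUID from the stream paths, {0DECAE8D-71D2-4BDE-8798-530201C72D8D}, appears inside the blob in the Windows mixed-endian ("bytes_le") encoding. A small Python sketch (this is just an observation about this particular blob, not a claim about a documented layout):

```python
import uuid

# Reparse data exactly as posted above (hex digits after the leading "0x").
reparse_hex = (
    "1381020100010b0004000400630004006400030004006800060008006c000900"
    "10007400050008008400060008008c000a002000d40008004000940005000800"
    "f4000c008daeec0dd271de4b8798530201c72d8d60003fd1bb66d20127191200"
    "27191200b08c8b0005000100480488019427b7d5fcd18238484f1121abc9175a"
    "202e08009f238a585b38f16276213e2cc9e63d3769232008d45ba70a03f02e35"
    "6e463c68d31118e8"
)
blob = bytes.fromhex(reparse_hex)

# On disk, GUIDs use the Windows mixed-endian layout (first three fields
# little-endian); uuid.UUID.bytes_le produces exactly that encoding.
chunkstore = uuid.UUID("0DECAE8D-71D2-4BDE-8798-530201C72D8D")
print(blob.find(chunkstore.bytes_le))  # byte offset of the GUID: 68
```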


On 10/01/17 17:45, Jean-Pierre André wrote:
> Hi again,
>
> Jelle de Jong wrote:
>> Dear Jean-Pierre André,
>>
>> root@backup:~# ls -hal
>> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
>> -rw-r--r-- 1 root root 16K Jan 10 13:34
>> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
>>
>> How do I know if ntfs-3g is using the plug-in?
>
> The plugin is dynamically loaded when you first access
> a deduplicated file. This is recorded in the syslog.
>
> Note : you should probably make it executable (e.g. by chmod 755)
>
>>
>> lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G
>> /dev/lvm1-vol/sr7-disk2
>> lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
>> mount -o ro  -t ntfs-3g
>> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
>> ls -hal /mnt/sr7-sdb2/ALGEMEEN/ | grep "unsupported reparse point"
>>
>> Shows no more unsupported reparse point!
>
> This might be a good indication that the plugin is used.
>
> Did you check whether the files are readable ?
>
>> umount /mnt/sr7-sdb2/
>> kpartx -dv /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> lvchange -an /dev/lvm1-vol/sr7-disk2-snapshot-copy
>> lvremove /dev/lvm1-vol/sr7-disk2-snapshot-copy
>>
>> root@backup:~# fgrep ntfs-3g /var/log/syslog
>> Jan 10 16:14:28 backup ntfs-3g[13082]: Version 2016.2.22AR.1 integrated
>> FUSE 28
>> Jan 10 16:14:28 backup ntfs-3g[13082]: Mounted
>> 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-10 Thread Jean-Pierre André
Hi again,

Jelle de Jong wrote:
> Dear Jean-Pierre André,
>
> root@backup:~# ls -hal
> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
> -rw-r--r-- 1 root root 16K Jan 10 13:34
> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
>
> How do I know if ntfs-3g is using the plug-in?

The plugin is dynamically loaded when you first access
a deduplicated file. This is recorded in the syslog.

Note : you should probably make it executable (e.g. by chmod 755)

>
> lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G
> /dev/lvm1-vol/sr7-disk2
> lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
> blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
> kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
> blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
> mount -o ro  -t ntfs-3g
> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
> ls -hal /mnt/sr7-sdb2/ALGEMEEN/ | grep "unsupported reparse point"
>
> Shows no more unsupported reparse point!

This might be a good indication that the plugin is used.

Did you check whether the files are readable ?

> umount /mnt/sr7-sdb2/
> kpartx -dv /dev/lvm1-vol/sr7-disk2-snapshot-copy
> lvchange -an /dev/lvm1-vol/sr7-disk2-snapshot-copy
> lvremove /dev/lvm1-vol/sr7-disk2-snapshot-copy
>
> root@backup:~# fgrep ntfs-3g /var/log/syslog
> Jan 10 16:14:28 backup ntfs-3g[13082]: Version 2016.2.22AR.1 integrated
> FUSE 28
> Jan 10 16:14:28 backup ntfs-3g[13082]: Mounted
> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label
> "DATA", NTFS 3.1)
> Jan 10 16:14:28 backup ntfs-3g[13082]: Cmdline options: ro
> Jan 10 16:14:28 backup ntfs-3g[13082]: Mount options:
> ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
>
> Jan 10 16:14:28 backup ntfs-3g[13082]: Ownership and permissions
> disabled, configuration type 7
> Jan 10 16:23:32 backup ntfs-3g[13082]: Unmounting

In this syslog excerpt I do not see the plugin loading,
did you access a deduplicated file before unmounting ?

> /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (DATA)
>
> However when using with the virt-filesystems and guestmount tools it
> does not work:
>
> root@backup:~# lvcreate --snapshot --name sr7-disk2-snapshot-copy --size
> 50G /dev/lvm1-vol/sr7-disk2
> root@backup:~# lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
>
> root@backup:~# blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
> /dev/lvm1-vol/sr7-disk2-snapshot-copy:
> PTUUID="8d00ec46-cb6d-457f-bfc8-703089c83fb9" PTTYPE="gpt"
>
> root@backup:~# virt-filesystems -a /dev/lvm1-vol/sr7-disk2-snapshot-copy
> /dev/sda2
>
> root@backup:~# guestmount --ro -a /dev/lvm1-vol/sr7-disk2-snapshot-copy
> -m /dev/sda2  /mnt/sr7-sda2
>
> root@backup:~# ls -hal /mnt/sr7-sda2/ALGEMEEN/ | grep "unsupported
> reparse point"
> lrwxrwxrwx 1 root root   26 May 19  2008 080514.Index I Directory
> algemeen.xls -> unsupported reparse point
> lrwxrwxrwx 1 root root   26 Dec 17  2009 2009-12-17 Index
> mappenstructuur.txt -> unsupported reparse point
> lrwxrwxrwx 1 root root   26 May  2  2013 ber_folders -> unsupported
> reparse point
>
> # verbose output:
> root@backup:~# guestmount -  --ro -a
> /dev/lvm1-vol/sr7-disk2-snapshot-copy -m /dev/sda2  /mnt/sr7-sda2
>
> http://paste.debian.net/hidden/be1c989f/
>
> I will send an email to libgues...@redhat.com to ask what they think.

Indeed, as some extra layer could interfere, but first
check that you can actually read files from a plain partition.

Regards

Jean-Pierre

>
> Kind regards,
>
> Jelle de Jong
>
> On 10/01/17 12:44, Jean-Pierre André wrote:
>> Jelle de Jong wrote:
>>> Dear Jean-Pierre,
>>>
>>> We created a test environment with Windows 2016 with data
>>> deduplication on.
>>>
>>> I upgraded the backup server to install ntfs-3g 2016.2.22AR.1. I tried
>>> to mount the volume to see if I could read the files, but they show up
>>> as unsupported reparse point.
>>>
>>> I listed the Dedup/ChunkStore in the following pastebin:
>>> http://paste.debian.net/hidden/ca51ee46/
>>>
>>> I tried to use the ntfs-plugin-8013.so but I am not sure if it is
>>> being used; the steps below are not working so far.
>>
>> Apparently, the expected location on Debian is
>> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
>> but you are getting libntfs-3g from /lib/x86_64-linux-gnu
>> maybe there is a symlink. Anyway the same directory is
>> expected (check possible errors in your syslog).
>>
>> If you get errors, please only mount as read-only until
>> the issue is solved, choose a sample file of moderate size
>> (say 100KB), and post its reparse data, which you can
>> get by :
>>
>> getfattr -h -e hex -n system.ntfs_reparse_data your-sample-file
>>
>> Regards
>>
>> Jean-Pierre
>



--
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support 

Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-10 Thread Jelle de Jong
Dear Jean-Pierre André,

root@backup:~# ls -hal 
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
-rw-r--r-- 1 root root 16K Jan 10 13:34 
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so

How do I know if ntfs-3g is using the plug-in?

lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 50G 
/dev/lvm1-vol/sr7-disk2
lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
kpartx -avg /dev/lvm1-vol/sr7-disk2-snapshot-copy
blkid /dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2
mount -o ro  -t ntfs-3g 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2  /mnt/sr7-sdb2
ls -hal /mnt/sr7-sdb2/ALGEMEEN/ | grep "unsupported reparse point"

Shows no more unsupported reparse point!

umount /mnt/sr7-sdb2/
kpartx -dv /dev/lvm1-vol/sr7-disk2-snapshot-copy
lvchange -an /dev/lvm1-vol/sr7-disk2-snapshot-copy
lvremove /dev/lvm1-vol/sr7-disk2-snapshot-copy

root@backup:~# fgrep ntfs-3g /var/log/syslog
Jan 10 16:14:28 backup ntfs-3g[13082]: Version 2016.2.22AR.1 integrated 
FUSE 28
Jan 10 16:14:28 backup ntfs-3g[13082]: Mounted 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (Read-Only, label 
"DATA", NTFS 3.1)
Jan 10 16:14:28 backup ntfs-3g[13082]: Cmdline options: ro
Jan 10 16:14:28 backup ntfs-3g[13082]: Mount options: 
ro,allow_other,nonempty,relatime,fsname=/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2,blkdev,blksize=4096
Jan 10 16:14:28 backup ntfs-3g[13082]: Ownership and permissions 
disabled, configuration type 7
Jan 10 16:23:32 backup ntfs-3g[13082]: Unmounting 
/dev/mapper/lvm1--vol-sr7--disk2--snapshot--copy2 (DATA)

However when using with the virt-filesystems and guestmount tools it 
does not work:

root@backup:~# lvcreate --snapshot --name sr7-disk2-snapshot-copy --size 
50G /dev/lvm1-vol/sr7-disk2
root@backup:~# lvchange -ay /dev/lvm1-vol/sr7-disk2-snapshot-copy

root@backup:~# blkid /dev/lvm1-vol/sr7-disk2-snapshot-copy
/dev/lvm1-vol/sr7-disk2-snapshot-copy: 
PTUUID="8d00ec46-cb6d-457f-bfc8-703089c83fb9" PTTYPE="gpt"

root@backup:~# virt-filesystems -a /dev/lvm1-vol/sr7-disk2-snapshot-copy
/dev/sda2

root@backup:~# guestmount --ro -a /dev/lvm1-vol/sr7-disk2-snapshot-copy 
-m /dev/sda2  /mnt/sr7-sda2

root@backup:~# ls -hal /mnt/sr7-sda2/ALGEMEEN/ | grep "unsupported 
reparse point"
lrwxrwxrwx 1 root root   26 May 19  2008 080514.Index I Directory 
algemeen.xls -> unsupported reparse point
lrwxrwxrwx 1 root root   26 Dec 17  2009 2009-12-17 Index 
mappenstructuur.txt -> unsupported reparse point
lrwxrwxrwx 1 root root   26 May  2  2013 ber_folders -> unsupported 
reparse point

# verbose output:
root@backup:~# guestmount -  --ro -a 
/dev/lvm1-vol/sr7-disk2-snapshot-copy -m /dev/sda2  /mnt/sr7-sda2

http://paste.debian.net/hidden/be1c989f/

I will send an email to libgues...@redhat.com to ask what they think.

Kind regards,

Jelle de Jong

On 10/01/17 12:44, Jean-Pierre André wrote:
> Jelle de Jong wrote:
>> Dear Jean-Pierre,
>>
>> We created a test environment with Windows 2016 with data
>> deduplication on.
>>
>> I upgraded the backup server to install ntfs-3g 2016.2.22AR.1. I tried
>> to mount the volume to see if I could read the files, but they show up
>> as unsupported reparse point.
>>
>> I listed the Dedup/ChunkStore in the following pastebin:
>> http://paste.debian.net/hidden/ca51ee46/
>>
>> I tried to use the ntfs-plugin-8013.so but I am not sure if it is
>> being used; the steps below are not working so far.
>
> Apparently, the expected location on Debian is
> /usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
> but you are getting libntfs-3g from /lib/x86_64-linux-gnu
> maybe there is a symlink. Anyway the same directory is
> expected (check possible errors in your syslog).
>
> If you get errors, please only mount as read-only until
> the issue is solved, choose a sample file of moderate size
> (say 100KB), and post its reparse data, which you can
> get by :
>
> getfattr -h -e hex -n system.ntfs_reparse_data your-sample-file
>
> Regards
>
> Jean-Pierre

--
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support from Colfax.
Order your platform today. http://sdm.link/xeonphi
___
ntfs-3g-devel mailing list
ntfs-3g-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ntfs-3g-devel


Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-10 Thread Jean-Pierre André
Jelle de Jong wrote:
> Dear Jean-Pierre,
>
> We created a test environment with Windows 2016 with data deduplication on.
>
> I upgraded the backup server to install ntfs-3g 2016.2.22AR.1. I tried
> to mount the volume to see if I could read the files, but they show up
> as unsupported reparse point.
>
> I listed the Dedup/ChunkStore in the following pastebin:
> http://paste.debian.net/hidden/ca51ee46/
>
> I tried to use the ntfs-plugin-8013.so but I am not sure if it is
> being used; the steps below are not working so far.

Apparently, the expected location on Debian is
/usr/lib/x86_64-linux-gnu/ntfs-3g/ntfs-plugin-8013.so
but you are getting libntfs-3g from /lib/x86_64-linux-gnu
maybe there is a symlink. Anyway the same directory is
expected (check possible errors in your syslog).

If you get errors, please only mount as read-only until
the issue is solved, choose a sample file of moderate size
(say 100KB), and post its reparse data, which you can
get by :

getfattr -h -e hex -n system.ntfs_reparse_data your-sample-file
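For scripting, the same attribute can also be read from Python via os.getxattr (a sketch; system.ntfs_reparse_data is a pseudo extended attribute provided by ntfs-3g, so this only returns data on an ntfs-3g mount for a file carrying a reparse point, and raises OSError elsewhere):

```python
import os

def reparse_data(path):
    # ntfs-3g exposes the raw reparse point through a pseudo extended
    # attribute; follow_symlinks=False mirrors getfattr's -h option.
    return os.getxattr(path, "system.ntfs_reparse_data",
                       follow_symlinks=False)

# reparse_data("your-sample-file").hex()
```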

Regards

Jean-Pierre



Re: [ntfs-3g-devel] ntfs-3g support for volumes with data deduplication windows 2012

2017-01-10 Thread Jelle de Jong
Dear Jean-Pierre,

We created a test environment with Windows 2016 with data deduplication on.

I upgraded the backup server to install ntfs-3g 2016.2.22AR.1. I tried 
to mount the volume to see if I could read the files, but they show up 
as unsupported reparse point.

I listed the Dedup/ChunkStore in the following pastebin:
http://paste.debian.net/hidden/ca51ee46/

I tried to use the ntfs-plugin-8013.so but I am not sure if it is 
being used; the steps below are not working so far.

http://www.tuxera.com/community/ntfs-3g-advanced/junction-points-and-symbolic-links/

root@backup:~# ldd /bin/ntfs-3g
 linux-vdso.so.1 (0x7ffd37ed)
 libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f2c3f203000)
 libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f2c3efe6000)
 libntfs-3g.so.871 => /lib/x86_64-linux-gnu/libntfs-3g.so.871 
(0x7f2c3ed94000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f2c3e9f6000)
 /lib64/ld-linux-x86-64.so.2 (0x7f2c3f62b000)

wget http://jp-andre.pagesperso-orange.fr/dedup.zip
unzip dedup.zip
cp --verbose dedup/linux-64/ntfs-plugin-8013.so /usr/local/lib/

chmod 555 /usr/local/lib/ntfs-plugin-8013.so
chown root:root /usr/local/lib/ntfs-plugin-8013.so

root@backup:~# cat /etc/ld.so.conf.d/libc.conf
# libc default configuration
/usr/local/lib

root@backup:~# cat /etc/ld.so.conf.d/x86_64-linux-gnu.conf
# Multiarch support
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu

root@backup:~# chmod 555 /usr/local/lib/ntfs-plugin-8013.so

root@backup:~# ls -hal /usr/local/lib/ntfs-plugin-8013.so
-r-xr-xr-x 1 root root 16K Jan 10 11:51 
/usr/local/lib/ntfs-plugin-8013.so

root@backup:~# ldconfig

root@backup:~# ldd /usr/local/lib/ntfs-plugin-8013.so
 linux-vdso.so.1 (0x720a9000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fec8f5d1000)
 /lib64/ld-linux-x86-64.so.2 (0x7fec8fb73000)

root@backup:~# guestmount --ro -a /dev/lvm1-vol/sr7-disk1 -a 
/dev/lvm1-vol/sr7-disk2 -m /dev/sdb2 /mnt/sr7-sdb2
root@backup:~# ls -hal /mnt/sr7-sdb2/ALGEMEEN/ | grep "unsupported 
reparse point"
lrwxrwxrwx 1 root root   26 May 19  2008 080514.Index I Directory 
algemeen.xls -> unsupported reparse point
lrwxrwxrwx 1 root root   26 Dec 17  2009 2009-12-17 Index 
mappenstructuur.txt -> unsupported reparse point
lrwxrwxrwx 1 root root   26 May  2  2013 ber_folders -> unsupported 
reparse point

Kind regards,

Jelle de Jong

On 30/08/16 17:02, Jean-Pierre André wrote:
> Jelle de Jong wrote:
>> Dear ntfs-3g developers,
>>
>> I am using libguestfs to make file-based back-ups of kvm guests.
>>
>> Some of the guests are Windows Servers with NTFS formatting.
>>
>> We want to enable data deduplication on the Windows 2012 file servers,
>> however a few years ago this caused libguestfs to stop working.
>>
>> What is the current status of ntfs-3g and support for mounting and
>> reading Windows data volumes with ntfs and data deduplication enabled?
>
> I have uploaded to
> http://jp-andre.pagesperso-orange.fr/advanced-ntfs-3g.html
> a beta-test version of an ntfs-3g plugin for reading
> deduplicated files stored by Windows 2012. The required
> ntfs-3g version is at least 2016.2.22AR.1 (shipped with
> a few distributions, and also available on the same page).
>
> Note that this is only for reading, so mounting the partition
> as read-only is a safety measure.
>
> Testers welcome.
>
> Jean-Pierre
>
>>
>> Kind regards,
>>
>> Jelle de Jong (GNU/Linux Consultant)
>


