Hi,

g4hx wrote:
> On 07/03/2012 01:12 PM, Jean-Pierre André wrote:
>    
>> Hi,
>>
>> g4hx wrote:
>>      
>>> On 07/02/2012 06:08 PM, Jean-Pierre André wrote:
>>>
>>> Hi,
>>>
>>> just to prove you wrong: I just reformatted the partition using
>>>
>>> mkfs.ntfs --label DATA -c 65536 -f /dev/mapper/truecrypt1
>>>
>>> and I am experiencing the exact same behaviour: My CPU load goes to
>>> about 100% and the write rate using dd is about 3 MB/s. If I use an ext3
>>> file system the write speed is about 90 MB/s.
>>>
>>> g4hx
>>>
>>>        
>> Well, I am not sure I get all the consequences
>> of your configuration (dev mapper, truecrypt),
>> I just hope there is not something which splits
>> the buffers into 512 byte chunks.
>>
>> Anyway, can you retry with a debug version,
>> letting the file being filled to about 125GB, so
>> that the low throughput is more likely to be visible.
>> However you are saying 3MB/s, so this would
>> last about 12 hours, and the debug version is
>> slower...
>>
>> If you have already filled a big file (I mean really
>> filled, not a sparse one), say with 1,800,000
>> clusters of 65536 bytes, you can just append
>> 100,000 more clusters, so that I see what is going
>> on. For instance :
>> "dd bs=65536 seek=1800000 count=100000"
>> and of course be sure to mount with option
>> big_writes.
>>
>> Note : I have an experimental version with some
>> improvement for very fragmented files. Would
>> you like to test it ?
>>
>> Also what version of ntfs-3g are you using ?
>>
>> Regards
>>
>> Jean-Pierre
>>
>>
>>
>>      
>
> Since I already repartitioned, I also removed the truecrypt layer, so I
> now have a clean, unencrypted partition on my hard disk.
>
> Curiously I just realized that when it comes to using dd the bs
> parameter has a huge impact on the write speed: when I use dd with
> bs=512 (which is the default), I get a write rate of about 3 MB/s,
> whereas using a bs of 65536 speeds the write process up to about 20
> MB/s. Granted, that is still much worse than say an ext3 filesystem, but
> it is a start.
>    

If you use the mount option big_writes, the number
of context switches and activations of ntfs-3g is
inversely proportional to the write buffer size.
With big buffers, ntfs-3g needs to access the file
parameters less frequently. For big files, getting
the list of clusters allocated to the file is time
consuming, because that list is stored in a
compressed format.

If you do not use big_writes, the buffer size is
generally 4096 bytes.
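
For instance, something along these lines should
show the difference (the device, mount point and
file name are only placeholders for your setup) :

  # mount with big_writes so that large write buffers
  # reach ntfs-3g in a single request
  ntfs-3g -o big_writes /dev/sdb1 /mnt/ntfs

  # small buffers : one request per 512 bytes written
  dd if=/dev/zero of=/mnt/ntfs/test.bin bs=512 count=204800

  # big buffers : same 100 MiB of data, but 128 times
  # fewer requests to ntfs-3g
  dd if=/dev/zero of=/mnt/ntfs/test.bin bs=65536 count=1600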

> However, there is still a high CPU load, which supposedly bounds the
> write speed. I am convinced that the different bs value of dd somehow
> reduce the overhead of the write operation, so the question is why the
> CPU needs to be used so excessively in the first place.
>    

The main source of CPU load in your situation is
the updating of the compressed table of clusters
(runlist). If you use bigger clusters, this table is
smaller and easier to update.

If your partition is empty, and you are not creating
two files at the same time or deleting any file, you
should get no fragmentation, hence a minimal runlist,
and the CPU load should drop.
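
As a rough order of magnitude, with your 125GB file
the runlist has to describe about 30 million clusters
of 4096 bytes, but only about 2 million clusters of
65536 bytes, so each update touches a much smaller
table. Something like this (again, device and paths
are placeholders) keeps the runlist minimal :

  # 64K clusters, as you already did
  mkfs.ntfs --label DATA -c 65536 -f /dev/sdb1
  # mount with big_writes as above, then write one
  # file sequentially on the empty volume : the
  # allocation should stay in very few runs
  dd if=/dev/zero of=/mnt/ntfs/big.bin bs=65536 count=1900000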

Regards

Jean-Pierre


