Matt, I read in the comments that your FAT32 lib doesn't support fragmented 
files. Did you actually try to implement this feature, and did you give up 
because of a lack of resources?
I feel addressed as the "guy" in question and will try to answer as best I know ;-)

There are two approaches to deal with fragmented files:

1.) Start reading, and as soon as we reach the end of a fragment, go back to 
the FAT and find the next one. This can occasionally cost a lot of time while 
reading in the middle of a file, and the same FAT sector may have to be read 
multiple times during access of a single file.

2.) When opening the file, parse the FAT once and collect all the fragments 
into a buffer. This costs time only at open, but needs a buffer whose size 
limits the allowable number of fragments.

Matt chose this second variant, which IMO is generally a good choice. It is 
not universal, however: theoretically, a 2 GB file can consist of four million 
fragments. The 16 MByte (or more?) buffer that would be needed is beyond 
anything you could reasonably connect to a PIC.

This is what made me think about a "new simple" file system that uses a kind 
of linked list. By mixing user data and metadata in each sector, there would 
be no need either to cache fragment data or to jump to a system area to find 
the next sector.

Greets,
Kiste



-- 
You received this message because you are subscribed to the Google Groups 
"jallib" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/jallib?hl=en.
