Well said. Yes, I do implement fragments as Kiste said. It does take
extra RAM, but this can be stored in an external RAM at a cost of $1.

Another reason I did this is that if you don't store fragments, it is
hard to jump around in the file. For example, you may start reading at
the beginning of the file (reading fragments as you go forward), but
it is then difficult to go back and read something again. This could
be done without fragments in memory, but reading/writing a file
backwards would be very slow. In my library, the user only needs to
specify a file's sector address.
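The offset-to-sector mapping that an in-RAM fragment table makes possible could be sketched roughly like this in C (the structure and function names are invented for illustration; this is not the actual jallib code):

```c
#include <stdint.h>

/* Hypothetical fragment-table scheme. Each fragment is a run of
   consecutive sectors; the table is filled once when the file is
   opened, by walking the FAT a single time. */
#define MAX_FRAGMENTS 16
#define SECTOR_SIZE   512

struct fragment {
    uint32_t start_sector;   /* first sector of this contiguous run */
    uint32_t sector_count;   /* how many sectors the run covers     */
};

struct frag_file {
    struct fragment frags[MAX_FRAGMENTS];
    uint8_t num_frags;
};

/* Map a byte offset anywhere in the file to its absolute sector.
   Because the whole table is in RAM, seeking backwards is as cheap
   as seeking forwards -- no FAT sector ever has to be re-read. */
int32_t offset_to_sector(const struct frag_file *f, uint32_t byte_offset)
{
    uint32_t sectors_in = byte_offset / SECTOR_SIZE;
    for (uint8_t i = 0; i < f->num_frags; i++) {
        if (sectors_in < f->frags[i].sector_count)
            return (int32_t)(f->frags[i].start_sector + sectors_in);
        sectors_in -= f->frags[i].sector_count;
    }
    return -1;  /* offset is past the end of the file */
}
```

With a table like this, "scroll up" in a text editor is just another call to `offset_to_sector` with a smaller offset.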

If you have a Windows system available, you can get statistics on the
number of fragments. My current fragmented C: drive with Windows on it
has 5 files with more than 100 fragments, out of 209,247 files.

There would also be an app layer that the user will create. As an
example, with a text editor the user can either read the entire file
into memory, or read into memory only what is needed on the LCD. In
the second case (using less memory), if the user scrolls up, FAT32
will need to read/write the file backwards. The final design will
depend heavily on the application.

What I do need in my library is a fail-safe for when a file has more
fragments than allowed by the "number of fragments allowed" constant.
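Such a fail-safe might look roughly like this in C, built against a simulated FAT (the names, the single EOC marker, and the array standing in for FAT sectors are all simplifying assumptions, not the library's API). The point is that the open routine reports an error instead of silently overflowing the table:

```c
#include <stdint.h>

#define MAX_FRAGMENTS 16
#define EOC 0x0FFFFFFFu  /* simplified FAT32 end-of-chain marker */

struct fragment {
    uint32_t start;   /* first cluster of the contiguous run */
    uint32_t count;   /* clusters in the run                 */
};

/* Walk a (simulated) FAT chain and collect contiguous runs.
   Returns the number of fragments, or -1 as the fail-safe when the
   file is more fragmented than the table allows. fat[] stands in
   for FAT entries read from the card. */
int build_fragment_table(const uint32_t *fat, uint32_t first_cluster,
                         struct fragment *table)
{
    int n = 1;
    uint32_t cl = first_cluster;
    table[0].start = cl;
    table[0].count = 1;
    while (fat[cl] != EOC) {
        uint32_t next = fat[cl];
        if (next == cl + 1) {
            table[n - 1].count++;   /* still contiguous: extend run */
        } else {
            if (n == MAX_FRAGMENTS)
                return -1;          /* fail-safe: too many fragments */
            table[n].start = next;
            table[n].count = 1;
            n++;
        }
        cl = next;
    }
    return n;
}
```

Returning a distinct error code lets the app layer decide what to do (refuse the file, or fall back to re-reading the FAT on the fly).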

Matt.

On May 6, 10:52 am, Oliver Seitz <[email protected]> wrote:
> Matt, I read in comments that your FAT32 lib doesn't support fragmented 
> files. Did you actually try to implement this feature, and did you give up 
> because of insufficient resources?
> I feel addressed as "guy" and will try to answer as much as I know ;-)
>
> There are two approaches to deal with fragmented files:
>
> 1.) Start reading, and as soon as we reach the end of a fragment, go back 
> to the FAT and find the next fragment. This can occasionally cost a lot of 
> time while reading in the middle of a file, and the same FAT sector may 
> have to be read multiple times during access of a single file.
>
> 2.) When opening, parse the FAT and collect all the fragments to a buffer. 
> Needs only time when opening the file, but needs a buffer that could limit 
> the allowable number of fragments.
>
> Matt chose this second variant, which IMO is generally a good choice. 
> It is not universal, however: theoretically, a file of 2GB can consist of 
> four million fragments. The 16MByte (or more?) buffer that would be 
> needed is beyond anything you could reasonably connect to a PIC.
>
> This is the reason that made me think about a "new simple" file system that 
> uses a kind of linked list. By mixing user data and metadata in each sector, 
> there would be no need either to cache fragment data or to jump to a system 
> area to find the next sector.
>
> Greets,
> Kiste
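Kiste's linked-list idea from the quote above could be sketched like this in C; the sector layout and every name here are pure invention to illustrate the point, with an array standing in for the card:

```c
#include <stdint.h>
#include <string.h>

/* Each sector carries its own metadata: a link to the next sector
   of the file plus a byte count, so no separate FAT region is ever
   consulted while reading. */
#define SECTOR_SIZE 512
#define NEXT_NONE   0xFFFFFFFFu   /* no successor: last sector of file */

struct ll_sector {
    uint32_t next_sector;               /* link replaces the FAT      */
    uint16_t bytes_used;                /* payload bytes in use       */
    uint8_t  payload[SECTOR_SIZE - 6];  /* remaining space: user data */
};

/* Read a whole file by following the embedded links.
   Returns the total number of payload bytes copied to `out`. */
uint32_t ll_read_file(const struct ll_sector *disk, uint32_t first,
                      uint8_t *out)
{
    uint32_t total = 0, s = first;
    for (;;) {
        const struct ll_sector *sec = &disk[s];
        memcpy(out + total, sec->payload, sec->bytes_used);
        total += sec->bytes_used;
        if (sec->next_sector == NEXT_NONE)
            return total;
        s = sec->next_sector;
    }
}
```

The trade-off is the same as with any singly linked list: sequential reads need no fragment buffer at all, but seeking backwards still means re-walking the chain from the start unless links are cached.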

-- 
You received this message because you are subscribed to the Google Groups 
"jallib" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/jallib?hl=en.
