> In fact, the
> highest number i've measured was about 520kByte/s. 

> That's really fast and also means you fill up all memory in
> 2 ms.

Sure, it's no use being able to read really fast if you just throw away all the
data (like I did in the tests).
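As a back-of-envelope check of the quoted "fill up all memory in 2 ms" claim: at the measured 520 kByte/s, a PIC-class chip with roughly 1 KiB of RAM (an assumed size, not stated in the mails) would indeed fill in about 2 ms:

```python
# Rough sanity check of the "fill all memory in 2 ms" figure.
# Assumptions (not from the original mail): ~1 KiB of RAM on the
# target chip, and "520 kByte/s" meaning 520 * 1000 bytes/s.
read_rate_bytes_per_s = 520 * 1000   # measured read throughput
ram_bytes = 1024                     # assumed RAM size

fill_time_ms = ram_bytes / read_rate_bytes_per_s * 1000
print(f"{fill_time_ms:.2f} ms")      # ~1.97 ms, close to the quoted 2 ms
```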

> then you have to actually do something with this data.

And that, of course, takes time too. That is exactly my point: I'll need
time to work with the data, so I want to read it as fast as possible to
save processor cycles for the "important" things the main program will do.

> Years ago I read a chapter on optimisation in the book
> 'Programming Pearls' by Jon Bentley. The major message was: look at
> the big picture, or you'll optimise the wrong issues. For us, library
> developers, it's not possible to anticipate every application; we can
> imagine an average app with our libraries and optimise for efficient
> resource use (RAM, flash and speed).

That's very true, and if I played by my own rules I should kick my own
ass for suggesting to optimise a not-yet-fully-functional set of libraries.
This 100 ns issue is one that can easily be postponed. The sector_buffer
issue is something else, but that's another thread ;-)

Greets,
Kiste

