> I'm going to look into this in more detail. Who 
> knows, it might be trivial to do :)

I've been playing with design patterns and C++ in 
combination with PalmOS for a while, and one of the 
things I wrote during the Christmas holidays was a 
kind of multithreaded implementation using what I 
think are called run-to-completion tasks, i.e. you 
have a queue of tasks, each of which runs to 
completion before the next task can start. By 
having one special task that puts itself back into 
the queue every time it runs (unless there is e.g. 
an appStopEvent), the queue will always contain at 
least one task to run.

That way you could add tasks to the queue that will be 
run when there are no current events to handle. Should 
be "easy" to implement in C, too.

However, it is not only a matter of *rendering* the 
pages; there are other "problems", too ;-)

Anyway, I will not take part in any work to "fix" the 
32k limit at the moment (I have other things I want to 
do first). Actually, considering how much I have been 
involved in the current implementation it would 
probably be better if someone else (with fresh and new 
ideas) worked on this feature ;-)

Earlier in this thread Laurens M. Fridael wrote:

> The fact that you can't judge a page's total length, 
> only the length of the current segment, makes 
> Plucker nearly useless for e-books.

And here I was thinking that it was the *contents* of 
a book that was most important; apparently it is the 
*format* ;-)

You'll read a book from beginning to end, so why 
should it matter that it is split up in a few 
more "pages" than you expected?

About making it "nearly useless for e-books" I can 
only say that I have used it for that purpose long 
before most of you even knew about Plucker...


Dave P added the following:

> I wonder if anyone has looked at the source code 
> used by Weasel Reader (GNU GPL).

Yep (I have even contributed code to it:), but Weasel 
Reader and Plucker work in quite different ways...

> I can open The Three Musketeers (DOC format, SD 
> card) in less than 30 seconds and at top scroll 
> speed (about 22,000 cpm) I notice no blips in the 
> first 100k or so (I got bored after that).

Well, comparing the DOC format with the Plucker format 
isn't really fair (to the DOC format:) The DOC format 
is nothing else than slightly compressed text; Plucker 
includes a few more things ;-)

> I realize that Plucker might not be able to achieve
> quite that speed due depending on the decompression
> algorithm

It has nothing to do with the compression; the time to 
decompress a DOC-compressed document or a ZLib-
compressed document is more or less equal (at least 
you won't notice the difference).


Then Adam McDaniel wrote:

> Then you have to worry about images. Since each 
> individual image's height/width isn't stored 
> anywhere in the .pdb we have to open up each unique 
> record, size it, and find out how it relates to the
> text to push the overall size of the page down.

That won't be necessary as soon as Bill and I have
added support for a new kind of image reference in
the parser and viewer that will provide the viewer
with the necessary info *without* accessing the image
record.

/Mike


_______________________________________________
plucker-list mailing list
[EMAIL PROTECTED]
http://lists.rubberchicken.org/mailman/listinfo/plucker-list
