I think I've found a solution to our little 64k max memory size
problem with images. I have the viewer code done to handle it (not
committed to CVS yet), but I need a volunteer to handle the parser
side...

My Python and Java experience is limited at best, so I guess I'm
asking either Bill or Laurens about the feasibility of this idea:

It's actually pretty simple. Let's say the user wants to parse a fairly
large image. Obviously the standard DATATYPE_TBMP or
DATATYPE_TBMP_COMPRESSED won't be able to handle it, but what if we
chop the image up horizontally? Say into something like
(tbmp size) / (max memory size) + 1 pieces?

Then take the total height of the image and divide it by the number of
pieces to get the height of each piece. For the last piece, also add
the remainder of that division (total height modulo number of pieces)
so everything fits exactly.
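
Here's a rough Python sketch of that arithmetic (split_heights and
MAX_RECORD_SIZE are just names I made up for illustration, nothing
that exists in the parser yet):

# Sketch only: compute the height of each horizontal piece.
MAX_RECORD_SIZE = 64 * 1024

def split_heights(tbmp_size, height):
    pieces = tbmp_size // MAX_RECORD_SIZE + 1
    piece_height = height // pieces
    heights = [piece_height] * pieces
    heights[-1] += height % pieces   # the last piece absorbs the remainder
    return heights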

So for example, let's say the user has a 640x480x8 JPEG that converts
into a 307k tbmp:

width = 640
height = 480
tbmpsize = 307k
pieces = tbmpsize / 64k + 1 = 5 (integer division)

      width
_________________
:_______________: h1 = height / pieces
:_______________: h2 = height / pieces
:_______________: h3 = height / pieces
:_______________: h4 = height / pieces
:               : h5 = height / pieces + height % pieces
-----------------

Now we have five images: the first four are 640x96, and so is the last
one (since 480 divides by 5 evenly).
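
Feeding those numbers through the sketch above gives the same answer:

>>> split_heights(307 * 1024, 480)
[96, 96, 96, 96, 96]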

Now comes the fun part. Each of those images becomes a normal
DATATYPE_TBMP, since they all fit under the 64k limit, each with its
own unique ID number for the record it occupies.

Each of those UIDs is then listed in a new record datatype, say,
DATATYPE_MULTIRECORD. Its purpose in life is to keep watch over the
individual pieces of the larger image and keep them in order.
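
Just to make it concrete, the multirecord body could be as simple as a
count followed by the piece UIDs in order. A minimal sketch, assuming
16-bit record IDs (build_multirecord is a made-up name, and the real
layout is obviously open for discussion):

import struct

def build_multirecord(piece_uids):
    # Hypothetical DATATYPE_MULTIRECORD body: 16-bit piece count,
    # then the 16-bit UID of each piece in its original order.
    return struct.pack('>H%dH' % len(piece_uids), len(piece_uids), *piece_uids)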

FINALLY, when the original image is called upon in the viewer (either
referenced by a 0x1A embedded image, or directly), it is actually the
unique ID of the DATATYPE_MULTIRECORD that gets referenced.

The viewer sees that the document is requesting an image, but the
reference ID is in fact a multirecord record. It cracks open that
record and finds the UIDs of the true DATATYPE_TBMP records, in their
original order. The viewer then displays each of them, one on top of
the other.

The end result is that Plucker supports >64k images :) .. simply by
splitting them up into digestible <64k smaller images.

Now comes the problem: I have written the code in the viewer to
support this, but I have no idea how to handle support in the Python
parser or jpluck. Any volunteers? :)

-- 
Adam McDaniel
Array.org
Calgary, AB, Canada
