Re: [PATCH] GSoC #09 Bitmap scaling

2008-10-15 Thread Vesa Jääskeläinen
Colin D Bennett wrote:
 On Tue, 14 Oct 2008 00:11:42 +0300
 Vesa Jääskeläinen [EMAIL PROTECTED] wrote:
 A minor point:  You mentioned RGB and RGBA formats--do you mean true
 color (either RGB[A] or BGR[A] layout), or literal RGB byte order?
 If we are talking about picking a single component order to
 standardize on, it should be BGR[A].  Every video adapter I have
 tested so far (nVidia 7600GT, VIA Unichrome, VMware, QEMU) has
 supported BGRA (32 bpp) or BGR (24 bpp), but not RGBA or RGB.  Perhaps
 others have found the contrary to be true; if so, I would like to know.

As I stated before, the order is not an issue. We can use BGR[A].

I am just used to speaking in terms of RGB :)

 As we keep the same handle for the bitmap all the time, we can just
 continue to use it as is. If we made a new instance of it, we would
 need to delete the previous one and replace its usage with the new
 one. Of course, the use case matters here.
 
 Ok.  That's fine.  I'm still a little confused about the purpose of
 lock/unlock, however.  Is it basically a way of catching mistakes in
 the code where we accidentally try to modify bitmap data when we don't
 want to?  I guess what I'm asking is, do lock/unlock do anything more
 than set a flag that is checked, as in: (pseudocode)
 
 
 void *get_ptr(bitmap b)
 {
   if (b.optimized) return NULL;  /* error */
   return b.ptr;
 }
 void optimize(bitmap b)
 {
   if (b.locked) error();
   /* optimize b... */
 }
 void lock(bitmap b)
 {
   if (b.locked) error();
   b.locked = 1; 
 }
 void unlock(bitmap b) { b.locked = 0; }

No, more like:

void *lock(bitmap b)
{
  if (b.locked) return NULL;
  if (b.optimized) return NULL;

  b.locked = 1;

  return b.dataptr;
}

void unlock(bitmap b)
{
  b.locked = 0;
}

grub_err_t optimize(bitmap b, rendertarget r)
{
  if (b.locked) return error;
  if (b.optimized) return error;

  /* Convert b's pixel data to r's format.  */
  b.optimized = 1;

  return success;
}

Now one would use it like:

ptr = lock(b);
if (ptr)
{
  /* Modify it; BGR[A] component order as discussed above.  */
  ptr[0] = blue;
  ptr[1] = green;
  ptr[2] = red;
  if (has_alpha)
    ptr[3] = alpha;

  unlock(b);
}

 Are you thinking that it would be best to have the
 'optimize'/'unoptimize' operations actually just modify the bitmap
 in place?  I guess this is nice from a memory conservation
 standpoint, but in some cases it wouldn't work well (i.e., 24-bit
 to 32-bit formats).
 I have not thought at this point about how optimize() or unoptimize()
 will be implemented. It is just the general idea. Whether it is an
 in-place operation or a re-allocation of memory is the same to me. It
 just has to work :)
 
 Ok.
 
 Another idea:  What if the image-loading functions automatically
 optimize()'d the bitmaps when loaded, since we don't normally expect
 to modify loaded bitmaps before display?  Then almost all the bitmaps
 in use would automatically get the performance benefit with no change
 to the users of the code.  The only things we do with loaded images
 are to scale them and blit them.

No. If the user has a low color mode, optimize will make the image look
really ugly. So it is best to postpone this conversion as long as possible.

 The scaling algorithms can easily work on any 24 or 32 bit color mode
 without knowing details of which components are which (the process is
 the same regardless of the color component).  Thus optimized images
 could still be scaled highly efficiently (without an
 unoptimize-scale-optimize process).  For 15/16 bit or indexed color
 modes, we would have to unopt-scale-opt, but I really think that no
 one should be using those modes.  If your video memory is too limited
 for 24 or 32 bit color, then just use the perfectly great text mode
 menu.

I would still like to support all RGB modes. Indexed color modes are
just a backup option.





Re: [PATCH] GSoC #09 Bitmap scaling

2008-10-14 Thread Colin D Bennett
On Tue, 14 Oct 2008 00:11:42 +0300
Vesa Jääskeläinen [EMAIL PROTECTED] wrote:

 Colin D Bennett wrote:
  On Mon, 13 Oct 2008 21:27:46 +0300
  Vesa Jääskeläinen [EMAIL PROTECTED] wrote:
  The idea is that if the bitmap is still locked, you cannot optimize
  it, nor can you lock it again. And if the bitmap is optimized, you
  cannot get a pointer to modify it. E.g. the function call returns a
  memory pointer.
  
  I thought perhaps the 'optimize' operation would simply return a
  _new_ (and completely independent from that point forward) bitmap
  equivalent to the input bitmap but in the same format as the
  current video mode uses.
 
 The problem with that is that it makes the supporting code harder to
 use. With only a handful of supported formats it is much easier to
 write support code to modify a bitmap. If you allow any format
 supported by the video subsystem, it is a nightmare to support them
 all.
 
 So we just support two formats. We only need to care about the RGB and
 RGBA formats, a rather easy task. They can be modified by using simple
 loops or indexing. When we know that there are no more modifications
 to be done to the bitmap, we can just call optimize() and it will
 convert the (edited) bitmap to a fast-to-blit format.

A minor point:  You mentioned RGB and RGBA formats--do you mean true
color (either RGB[A] or BGR[A] layout), or literal RGB byte order?
If we are talking about picking a single component order to
standardize on, it should be BGR[A].  Every video adapter I have
tested so far (nVidia 7600GT, VIA Unichrome, VMware, QEMU) has
supported BGRA (32 bpp) or BGR (24 bpp), but not RGBA or RGB.  Perhaps
others have found the contrary to be true; if so, I would like to know.
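
(For reference, this is easy to double-check on any box by dumping the
component layout of the current mode.  A quick, untested sketch -- it
assumes the video API's grub_video_get_info () and the *_field_pos
members of struct grub_video_mode_info:)

#include <grub/err.h>
#include <grub/misc.h>
#include <grub/video.h>

/* Untested sketch: print the component layout of the current video mode.
   BGRA (32 bpp) typically reports B@0 G@8 R@16 A@24, while RGBA would
   report R@0 G@8 B@16 A@24.  */
static void
dump_component_order (void)
{
  struct grub_video_mode_info info;

  if (grub_video_get_info (&info) != GRUB_ERR_NONE)
    return;

  grub_printf ("bpp=%u R@%u G@%u B@%u A@%u\n",
               info.bpp, info.red_field_pos, info.green_field_pos,
               info.blue_field_pos, info.reserved_field_pos);
}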

 As we keep the same handle for the bitmap all the time, we can just
 continue to use it as is. If we made a new instance of it, we would
 need to delete the previous one and replace its usage with the new
 one. Of course, the use case matters here.

Ok.  That's fine.  I'm still a little confused about the purpose of
lock/unlock, however.  Is it basically a way of catching mistakes in
the code where we accidentally try to modify bitmap data when we don't
want to?  I guess what I'm asking is, do lock/unlock do anything more
than set a flag that is checked, as in: (pseudocode)


void *get_ptr(bitmap b)
{
  if (b.optimized) return NULL;  /* error */
  return b.ptr;
}
void optimize(bitmap b)
{
  if (b.locked) error();
  /* optimize b... */
}
void lock(bitmap b)
{
  if (b.locked) error();
  b.locked = 1; 
}
void unlock(bitmap b) { b.locked = 0; }


  Are you thinking that it would be best to have the
  'optimize'/'unoptimize' operations actually just modify the bitmap
  in place?  I guess this is nice from a memory conservation
  standpoint, but in some cases it wouldn't work well (i.e., 24-bit
  to 32-bit formats).
 
 I have not thought at this point about how optimize() or unoptimize()
 will be implemented. It is just the general idea. Whether it is an
 in-place operation or a re-allocation of memory is the same to me. It
 just has to work :)

Ok.

Another idea:  What if the image-loading functions automatically
optimize()'d the bitmaps when loaded, since we don't normally expect
to modify loaded bitmaps before display?  Then almost all the bitmaps
in use would automatically get the performance benefit with no change
to the users of the code.  The only things we do with loaded images
are to scale them and blit them.

The scaling algorithms can easily work on any 24 or 32 bit color mode
without knowing details of which components are which (the process is
the same regardless of the color component).  Thus optimized images
could still be scaled highly efficiently (without an
unoptimize-scale-optimize process).  For 15/16 bit or indexed color
modes, we would have to unopt-scale-opt, but I really think that no
one should be using those modes.  If your video memory is too limited
for 24 or 32 bit color, then just use the perfectly great text mode
menu.
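
For instance, a nearest-neighbor pass never has to look inside a pixel,
so one loop handles any 3- or 4-byte true color layout.  A rough,
untested sketch (the parameterization is mine, not from the patch):

#include <grub/types.h>

/* Component-order-agnostic nearest-neighbor scaling: each pixel is
   copied as an opaque group of bytes_per_pixel bytes, so RGB/BGR (3)
   and RGBA/BGRA (4) are all handled by the same code.  */
static void
scale_nearest (grub_uint8_t *dst, int dst_w, int dst_h, int dst_pitch,
               const grub_uint8_t *src, int src_w, int src_h, int src_pitch,
               int bytes_per_pixel)
{
  int x, y, i;

  for (y = 0; y < dst_h; y++)
    {
      int sy = y * src_h / dst_h;
      for (x = 0; x < dst_w; x++)
        {
          int sx = x * src_w / dst_w;
          const grub_uint8_t *sp = src + sy * src_pitch + sx * bytes_per_pixel;
          grub_uint8_t *dp = dst + y * dst_pitch + x * bytes_per_pixel;

          for (i = 0; i < bytes_per_pixel; i++)
            dp[i] = sp[i];
        }
    }
}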

Regards,
Colin




Re: [PATCH] GSoC #09 Bitmap scaling

2008-10-03 Thread Vesa Jääskeläinen
Vesa Jääskeläinen wrote:
 Vesa Jääskeläinen wrote:
 It is really necessary that all generic graphical menu code, like the
 bitmap scaler, always works. Not just for some optimized formats, as
 otherwise it would render the graphical menu unusable in some cases.
 It may not be so pretty on low end systems, but it should at least
 work. That is the reason there are map/unmap functions in the video
 API.

 So there has to be a generic implementation first, and then you can
 use an optimized one if one exists.
 
 Just thinking: as this is operating on a bitmap, we could just make
 sure we use only a couple of formats and let the video driver do the
 rest. That would simplify it a bit.

Hi,

Now that Colin is back in action I am reviving this thread :).

I have been thinking about this a bit and I think the best solution for
bitmaps is something like the following.

Two types of bitmaps: easily accessible bitmaps, and bitmaps optimized
for the hardware.

Easily accessible bitmaps are meant to be modified by the user and
provide slow blitting performance. Basically we should only support two
formats, RGB888 and ARGB (the order can be different). This way we can
write simple code to modify bitmaps.
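
A sketch of what "only two formats" could look like in code (it reuses
the existing GRUB_VIDEO_BLIT_FORMAT_* names only as an example, since
the formats above are not final, and it assumes the bitmap carries its
mode_info):

/* Sketch: accept only the two "easily accessible" formats for editing.
   The exact pair of editable formats is still an open question.  */
static int
bitmap_is_editable (struct grub_video_bitmap *bitmap)
{
  enum grub_video_blit_format fmt = bitmap->mode_info.blit_format;

  return fmt == GRUB_VIDEO_BLIT_FORMAT_RGB_888
         || fmt == GRUB_VIDEO_BLIT_FORMAT_RGBA_8888;
}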

When there is no need to modify the bitmap anymore, one calls
grub_video_bitmap_optimize(bitmap, rendering_target) and then the bitmap
is optimized to match the render target's format.

There would be two new helpers that give access to the bitmap data:
grub_video_bitmap_lock(), and to release it, grub_video_bitmap_unlock().
Lock will fail if the bitmap is optimized.
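
Roughly, the prototypes could look like this (only the names come from
the description above; the signatures are just my sketch):

/* Sketch only; nothing here is committed.  */
void *grub_video_bitmap_lock (struct grub_video_bitmap *bitmap);
void grub_video_bitmap_unlock (struct grub_video_bitmap *bitmap);

/* Convert BITMAP's pixel data to TARGET's format for fast blitting;
   fails if the bitmap is currently locked.  */
grub_err_t grub_video_bitmap_optimize (struct grub_video_bitmap *bitmap,
                                       struct grub_video_render_target *target);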

I am wondering whether we should have grub_video_bitmap_unoptimize() to
map back to editable mode. But that might be more pain and might promote
bad ways to do the conversion.

Now the bitmap loaders would be modified to return data in the easily
accessible format, so that bitmaps can be modified and so on.

Now to bitmap scaling. This can only be done for the easily accessible
formats, and could be something like this:

+grub_err_t
+grub_video_bitmap_create_scaled (struct grub_video_bitmap **dst,
+                                 int dst_width, int dst_height,
+                                 struct grub_video_bitmap *src,
+                                 enum grub_video_bitmap_scale_method scale_method);
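
A call site could then look roughly like this (the variable names and
the scale method constant are placeholders, not part of the patch):

/* Hypothetical usage: scale a loaded background bitmap to the panel
   size before blitting it.  */
struct grub_video_bitmap *scaled = 0;
grub_err_t err;

err = grub_video_bitmap_create_scaled (&scaled, panel_width, panel_height,
                                       background,
                                       GRUB_VIDEO_BITMAP_SCALE_METHOD_BEST);
if (err != GRUB_ERR_NONE)
  return err;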

Now, for example, static bitmaps would be optimized right away in order
to make their blitting fast. I do not know if we need special handling
for transparent bitmaps. That may need some experimenting.

The actual scalers would be hidden from the API and could only be
selected by the enum type.
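
For example (the method names below are placeholders, nothing is
decided yet):

/* Placeholder sketch of the scale method enum; the scaler
   implementations themselves stay internal to the bitmap code.  */
enum grub_video_bitmap_scale_method
  {
    GRUB_VIDEO_BITMAP_SCALE_METHOD_FASTEST,   /* e.g. nearest neighbor */
    GRUB_VIDEO_BITMAP_SCALE_METHOD_BEST       /* e.g. bilinear */
  };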

It could be a good idea to implement all scalers in the same file,
unless there is some weird scaler that needs thousands of lines of code.

Any opinions?

Thanks,
Vesa Jääskeläinen




Re: [PATCH] GSoC #09 Bitmap scaling

2008-09-02 Thread Vesa Jääskeläinen
Vesa Jääskeläinen wrote:
 It is really necessary that all generic graphical menu code, like the
 bitmap scaler, always works. Not just for some optimized formats, as
 otherwise it would render the graphical menu unusable in some cases.
 It may not be so pretty on low end systems, but it should at least
 work. That is the reason there are map/unmap functions in the video
 API.
 
 So there has to be a generic implementation first, and then you can
 use an optimized one if one exists.

Just thinking: as this is operating on a bitmap, we could just make sure
we use only a couple of formats and let the video driver do the rest.
That would simplify it a bit.

