On Tue, 15 Mar 2005, Ville [iso-8859-1] Syrjälä wrote:
> I think that making the assumption that all memory is preserved when the 
> memory layout (virtual resolution and depth) doesn't change is perfectly 
> valid too. That would allow X to do its Ctrl-Alt-+ and - things without 
> repainting the whole screen.

Indeed. The `geometry' part of the screen isn't changed, only the `timings'
part (cf. the split of fb_var_screeninfo parameters I did in fbutils, which
BTW was never finished).

> If radeonfb will allocate the buffer for the second head from the top of 
> the memory users would basically have to guess its location. matroxfb 
> simply cuts the memory in two pieces and allocates the buffers from the 
> start of each piece. I don't really like that approach. Adding a simple 
> byte_offset field to fb_var_screeninfo would solve the problem quite 
> nicely but I don't know if such API changes are acceptable at this stage.

You wouldn't have to guess its location, look at fix.smem_start.

I once did a similar thing for an embedded prototype: take a fixed amount of
memory for both frame buffers (this was a UMA system), fb0 starts from the top,
fb1 starts from the bottom. You can enlarge each frame buffer until you reach
the memory of the other. Each fix.smem_{start,len} corresponds exactly to the
memory allocated to each frame buffer.

Of course, if you also want off-screen memory (i.e. memory beyond
xres_virtual*yres_virtual*bpp/8), things get more complicated, since currently
there's no way for the application to ask for a minimum amount of off-screen
memory. Perhaps a new field could be added to fb_var_screeninfo, with zero
meaning `I don't care', for backwards compatibility.

Gr{oetje,eeting}s,

                                                Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                                            -- Linus Torvalds
