Re: Video memory (Was: Re: GGIMesa updates)

2000-11-21 Thread Steffen Seeger

"Jon M. Taylor" wrote:

 Antonio Campos wrote:

  1) Installing KGI is not an easy task. (It only supports a few cards).
 
 _Running_ KGI is not an easy task, because it only supports a few
 cards.  Installing it is actually pretty easy, unless you don't already
 know about kernel development issues, in which case it would be _very_
 difficult.  KGI is not meant for the end user yet, although it is closer
 than you might think.

This is correct, but also to some degree intended. I have concentrated on getting
the framework and concepts worked out, not on writing as many drivers as possible.

  2) It doesn't expose a way to handle 2D and 3D graphics in a unified
  (in-kernel) way.
 
 Yes, it does.  Read the docs, please.

To which I would only like to add that they may be found at http://kgi.sourceforge.net
...

  3) It doesn't handle the resource allocation of buffers (framebuffer
  memory (back and front, double and triple buffers, etc.), stencil
  buffers, and the like...)
 
 Yes, it does.  Or rather, it provides support for resource
 allocation of abstract buffer types, and the individual KGI drivers
 themselves map out whatever buffers their hardware provides.

It does handle mode specification and initialization of almost all buffer
formats I know of. Even more: you can specify z-buffered modes, with/without
alpha channels, overlays, stereo, anything you can think of.

E.g. initialization of a double-buffered 16bpp stereo mode with a 16-bit z-buffer
works fine with the Permedia2 driver.
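
Purely for illustration, such a mode request might carry information along
these lines (a hypothetical pseudo-structure invented here for discussion;
these are not the actual kgi.h declarations):

/* Hypothetical sketch only -- NOT the real kgi.h interface. */
struct mode_request_sketch {
        int width, height;      /* visible resolution, e.g. 640x480 */
        int frames;             /* 2 => double-buffered */
        int fb_bits;            /* 16bpp framebuffer */
        int z_bits;             /* 16-bit z-buffer */
        int stereo;             /* nonzero => left/right eye buffers */
};

/* The mode above would then be { 640, 480, 2, 16, 16, 1 }, checked by
   the driver against what the card can actually provide. */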

However, splitting resources is not yet addressed by KGI-0.9, but is planned
once I have an accelerated X server going.

So, in that sense we are moving in the right direction (though not yet
where we want to go).

Steffen

___
Steffen Seeger  mailto:[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-20 Thread Antonio Campos

Steffen Seeger wrote:

 Antonio Campos wrote:

  People want an OS for accessing the hardware in a clean, fast and reliable way.
  That includes the graphics hardware. And I must say that this handling is one of
  the most important tasks in modern operating systems (and one of the things that
  the user sees most quickly). And this handling is one of the things that Linux
  users can't feel pride about.
  We have that strange and quite limited fbdev kernel hack, the slow and
  uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
  of handling 2D and 3D graphics...
  I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
  projects would deprecate the OSS sound drivers in a unified sound system), but it
  seems to me that it is not going in the right direction (I'm sorry for saying
  this).

 So, in your opinion, what is wrong about the direction KGI is heading in?


Maybe I should have said that GGI is going in the wrong direction, not
KGI. Anyway, I don't know KGI or GGI internals very well, but it seems to
me that:

1) Installing KGI is not an easy task. (It only supports a few cards).
2) It doesn't expose a way to handle 2D and 3D graphics in a unified
(in-kernel) way. (Maybe I'm misunderstanding something, and this is the
task of GGI, etc...)
3) It doesn't handle the resource allocation of buffers (framebuffer
memory (back and front, double and triple buffers, etc.), stencil
buffers, and the like...)

Just to name some holes I see...
front), stencil,


 Steffen

 ___
 Steffen Seeger  mailto:[EMAIL PROTECTED]
 TU-Chemnitz  http://www.tu-chemnitz.de/~sse




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-20 Thread Jon M. Taylor

On Tue, 21 Nov 2000, Antonio Campos wrote:

 Steffen Seeger wrote:
 
  Antonio Campos wrote:
 
   People want an OS for accessing the hardware in a clean, fast and reliable way.
   That includes the graphics hardware. And I must say that this handling is one of
   the most important tasks in modern operating systems (and one of the things that
   the user sees most quickly). And this handling is one of the things that Linux
   users can't feel pride about.
   We have that strange and quite limited fbdev kernel hack, the slow and
   uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
   of handling 2D and 3D graphics...
   I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
   projects would deprecate the OSS sound drivers in a unified sound system), but it
   seems to me that it is not going in the right direction (I'm sorry for saying
   this).
 
  So, in your opinion, what is wrong about the direction KGI is heading in?
 
 
 Maybe I should have said that GGI is going in the wrong direction, not
 KGI. Anyway, I don't know KGI or GGI internals very well, but it seems
 to me that:
 
 1) Installing KGI is not an easy task. (It only supports a few cards).

_Running_ KGI is not an easy task, because it only supports a few
cards.  Installing it is actually pretty easy, unless you don't already
know about kernel development issues, in which case it would be _very_
difficult.  KGI is not meant for the end user yet, although it is closer
than you might think.

 2) It doesn't expose a way to handle 2D and 3D graphics in a unified
 (in-kernel) way.

Yes, it does.  Read the docs, please.

 (Maybe I'm misunderstanding something, and this is the task of GGI,
 etc...)

No, it is the task of KGI.

 3) It doesn't handle the resource allocation of buffers (framebuffer
 memory (back and front, double and triple buffers, etc.), stencil
 buffers, and the like...)

Yes, it does.  Or rather, it provides support for resource
allocation of abstract buffer types, and the individual KGI drivers
themselves map out whatever buffers their hardware provides.
 
 Just to name some holes I see...
 front), stencil,

KGI_A_STENCIL is clearly defined in kgi.h, and the Permedia driver
uses it.

Jon 

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-12 Thread Antonio Campos

Lee Brown wrote:

 Antonio:

  People want an OS for accessing the hardware in a clean, fast and reliable way.
  That includes the graphics hardware. And I must say that this handling is one of
  the most important tasks in modern operating systems (and one of the things that
  the user sees most quickly).

 Thanks for your input.  I am starting to get more involved with the KGI
 project, even though I am not sure that I agree with it 100%.


My input was just intended to give a general view of what I think GGI/KGI
should address (and currently doesn't).


 BTW: What in GGI/KGI are you interested in?


Correctly handling graphics hardware resources at the kernel level.
With this settled, one can then easily construct libraries, servers (X, Berlin,
or a Mac OS X-style PDF renderer), or even graphics console programs.


 --
 Lee Brown Jr.
 [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-10 Thread Antonio Campos

Stefan Seefeld wrote:

 Lee Brown wrote:

   Perhaps you can clue me in.  I still don't
  understand the difficulty in accessing video memory.  The fbdev already mmaps
  all of video memory.  There it is. Let people have at it.

 Maybe you should play with DOS, pre-protected mode (remember ?).
 Here is your memory, do what you want with it...

 What do you want an OS for ?


People want an OS for accessing the hardware in a clean, fast and reliable way.
That includes the graphics hardware. And I must say that this handling is one of
the most important tasks in modern operating systems (and one of the things that
the user sees most quickly). And this handling is one of the things that Linux
users can't feel pride about.
We have that strange and quite limited fbdev kernel hack, the slow and
uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
of handling 2D and 3D graphics...
I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
projects would deprecate the OSS sound drivers in a unified sound system), but it
seems to me that it is not going in the right direction (I'm sorry for saying
this).


 Stefan
 ___

 Stefan Seefeld
 Departement de Physique
 Universite de Montreal
 email: [EMAIL PROTECTED]

 ___

   ...ich hab' noch einen Koffer in Berlin...





Re: Video memory (Was: Re: GGIMesa updates)

2000-11-10 Thread Antonio Campos

Lee Brown wrote:

 On Sat, 04 Nov 2000, Stefan Seefeld wrote:
  Lee Brown wrote:
   Why can't we just let the client (Stefan) draw to the offscreen part
   of the framebuffer?
  had you followed the recent discussion, you would know. As always, GGI's aim is
  to insulate h/w specifics from the client. Some graphics cards might have special
  memory for this kind of things, z-buffer, etc.
  What if my card doesn't have as much memory as I request ? What if I want multiple
  offscreen buffers ?

 What if GGI just told you how much memory was available, gave you the ability
 to access it, and let you regulate it  for yourself? Would that be an
 improvement?

  In fact, I think video memory management should be at the very core of GGI, 
together
  with drawing primitives. Every advanced program will require that.

 I agree that the concept of a visual needs to address the fact that it is
 possible to have non-viewable target regions and give the user the ability to
 make full use of this resource.  IMHO, GGI should make things possible, not
 limit the possiblilties.

 Lee Brown Jr.
 [EMAIL PROTECTED]

It seems to me that in the end we're asking for something like DirectDraw (on Windows,
you know...) and its surfaces.
Although DirectDraw is quite messy (from the programmer's and the user's point of view:
COM architecture, etc...) because it doesn't protect the video memory from malicious
programs, it's a working implementation on many graphics boards. So maybe the GGI team
should take a look at it. By the way, has this team talked with the DRI one? I think the
DRI project is doing things wrong. They are putting all this 3D management stuff in
the X Server (and in the kernel), but they don't manage 2D graphics well (nor does the
X Server, nor even the DGA architecture). Aren't they poking their noses into territory
that the awaited KGI direct video memory hardware management system (which should reside
in the kernel, at least in part...) should conquer?






Re: Video memory (Was: Re: GGIMesa updates)

2000-11-07 Thread Stefan Seefeld

Lee Brown wrote:

  Perhaps you can clue me in.  I still don't
 understand the difficulty in accessing video memory.  The fbdev already mmaps
 all of video memory.  There it is. Let people have at it.

Maybe you should play with DOS, pre-protected mode (remember ?).
Here is your memory, do what you want with it...

What do you want an OS for ?

Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: GGIMesa updates

2000-11-04 Thread Marcus Sundberg

"Jon M. Taylor" [EMAIL PROTECTED] writes:

 On 3 Nov 2000, Marcus Sundberg wrote:
 
  "Jon M. Taylor" [EMAIL PROTECTED] writes:
  
   On Thu, 2 Nov 2000, [iso-8859-1] Niklas Höglund wrote:
At that time I found that using a main loop looking like this does sort of
proper double-buffering using GGIMesa. Note that the SetMode call sets the
virtual width to twice the physical width.
  
  Why in the world would you want to use SetOrigin to just flip pages
  when there's a perfectly good API for handling multiple frames?
 
   QuickHack.

Requesting multiple frames properly is much quicker to implement
and works on more targets.

 Thanks for the input, but I'm afraid that the "pageflip using
   SetOrigin" hack won't work on all targets.
  
  Neither does normal multiple frames, so?!?
 
   So the point was to find a QH which would always enable
 doublebuffering on all targets, no matter the inefficiency.  Lots of GL
 code requires doublebuffering.

Sure, I'm not arguing against the reason for the original malloc()
hack. I just find the current discussion about how to fix things
strange, when there is exactly one obviously correct way to do it.

  I'd like to have a look at this "problem", but yesterday the Mesa
  CVS didn't compile at all. :(
 
   Yeah, they chose yesterday to add a whole new separate software
 rasterizer cut-in layer to Mesa CVS |-/.  I wish Brian had decided to keep
 that stuff in the 3.5 branch only - kind of odd, when he said that he
 wanted to release 3.4 a few days ago.

Ah well, hope it will start working again soon then...

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-04 Thread Lee Brown

what is the support for offscreen video memory allocation ?
I'm not sure I use the correct terminology, so here is what
I have in mind:

Why can't we just let the client (Stefan) draw to the offscreen part
of the framebuffer?  I wrote a little demo program (with minor changes to the
fbdev code) that allowed me to draw offscreen (outside of the virtual area)
and then use ggiCopyBox to blit it to the viewable (virtual/pannable) area when
needed. What am I missing here?

fntPrintChar(rootvis, font, 'a', xpos, ypos, pixs);  /* offscreen */

ggiGetc(rootvis); /* nothing is viewable */
ggiCopyBox(rootvis, xpos + dim.dx, ypos + dim.dy, dim.width, dim.height,
           100 + dim.dx, 100 + dim.dy); /* all of a sudden an 'a' appears */



-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-04 Thread Jon M. Taylor

On Sat, 4 Nov 2000, Lee Brown wrote:

 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 Why can't we just let the client (Stefan) draw to the offscreen part
 of the framebuffer?  

There may not always BE an offscreen part of the framebuffer on
all targets.  In particular, the targets which do not support one or more
DirectBuffer mappings cannot use this method.
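
A quick way to test for that, sketched assuming LibGGI's standard
DirectBuffer entry point ggiDBGetNumBuffers():

#include <ggi/ggi.h>

/* Returns nonzero if the visual exposes at least one DirectBuffer,
   i.e. its framebuffer can be addressed directly at all. */
int have_directbuffer(ggi_visual_t vis)
{
        return ggiDBGetNumBuffers(vis) > 0;
}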

 I wrote a little demo (with minor changes to the fbdev
 code) program that allowed me to draw offscreen (outside of the virtual area)
 and then use ggiCopyBox to blit it to the viewable (virtual/pannable) area when
 needed. What am I missing here?

Did you try it on all targets?  

Jon 

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa updates

2000-11-03 Thread Marcus Sundberg

"Jon M. Taylor" [EMAIL PROTECTED] writes:

 On Thu, 2 Nov 2000, [iso-8859-1] Niklas Höglund wrote:
  At that time I found that using a main loop looking like this does sort of
  proper double-buffering using GGIMesa. Note that the SetMode call sets the
  virtual width to twice the physical width.

Why in the world would you want to use SetOrigin to just flip pages
when there's a perfectly good API for handling multiple frames?

   Thanks for the input, but I'm afraid that the "pageflip using
 SetOrigin" hack won't work on all targets.

Neither does normal multiple frames, so?!?
If we'd only support features that work on every piece of hardware,
the entire project would fit into an empty file...

*Of course* you should implement back/front buffering by simply
having two separate buffers and switching between them! If that's not
possible with the current target/mode then tough luck, you just have
to fall back to:
 allocate a DirectBuffer or a memory_visual and use that as a
 backbuffer on every target.
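
For concreteness, a minimal sketch of the multiple-frames approach in client
code. It assumes ggiSetSimpleMode() and the frame calls behave as in the
LibGGI of this era; treat the exact names and signatures as assumptions and
check ggi/ggi.h:

#include <stdio.h>
#include <stdlib.h>
#include <ggi/ggi.h>

int main(void)
{
        ggi_visual_t vis;
        int frame = 1;  /* start drawing in the non-displayed frame */

        if (ggiInit() < 0)
                return EXIT_FAILURE;
        vis = ggiOpen(NULL);
        if (!vis) {
                ggiExit();
                return EXIT_FAILURE;
        }
        /* Ask for two frames; if the target refuses, fall back to a
           DirectBuffer or memory-visual backbuffer as described above. */
        if (ggiSetSimpleMode(vis, 640, 480, 2, GT_AUTO) < 0) {
                fprintf(stderr, "No double-buffered mode available.\n");
                ggiClose(vis);
                ggiExit();
                return EXIT_FAILURE;
        }
        for (;;) {
                ggiSetWriteFrame(vis, frame);   /* render into the hidden frame */
                /* ... draw the scene here ... */
                ggiFlush(vis);
                ggiSetDisplayFrame(vis, frame); /* flip: show what was drawn */
                frame = !frame;                 /* other frame becomes hidden */
        }
        return EXIT_SUCCESS;
}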


I'd like to have a look at this "problem", but yesterday the Mesa
CVS didn't compile at all. :(

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-03 Thread Jon M. Taylor

On Fri, 3 Nov 2000, Stefan Seefeld wrote:

 "Jon M. Taylor" wrote:
 
   I might know when allocating visuals (drawing buffers) that some are
   updated more frequently than others, i.e. they would profit much more
   from being close to the graphic card. Is there (or could there be) any way
   to expose some policy issues like these through an API for drawable memory
   management ?
  
  Sure.  This is not such an easy task, though.  Such an API
  (libDirectBuffer?) would need to be able to:
  
  * Establish a set of targets which would know about all the different
  types and configurations of memory buffers available for each target
 
 why ? To be able to implement crossblits ? 

No, to be able to set modes intelligently in the presence of
arbitrarily complex extensions attached to any number of visuals which
might use ggi_resources which are exposed and managed by a particular
target.

 Can't you use a simple adapter
 interface (some form of 'marshalling') ? 

We already do, sort of.  The target strings and the
request-strings which the KGI/kgicon targets use do something like this.
Resource request strings will presumably have the same sort of "namespace
tree" format.  I've proposed a resource request hierarchy based on this
type of system before - search the archives.

 I mean, the interesting case is
 blitting from video memory to video memory, 

Define "video memory".  PCI and especially AGP memory mapping
tricks make this potentially quite complex.  Is the memory on the card, or
system RAM mapped across the AGP GART pagetables?  Is it tiled, and if so
how?  Has the region been marked cacheable, and if not can it be?  What
about MTRRs?  The issue CAN be simplified, but not if you expect to retain
any significant degree of optimization potential.

 and there I assume that all
 parameters (alignment etc.) are identical.

Not necessarily, in the case of tiled or GART-mapped AGP aperture
memory spaces.
 
  * Establish global resource pools for each (e.g. dividing up a 32MB
  AGP-mapped video memory aperture into front, back, z, stencil, texture,
  etc buffers)
 
 does this division need to be static ? 

It _cannot_ be static.

 Can't you have a single manager
 instance which keeps track of which memory is allocated for which purpose ?

Yes, _in the target code_.  This stuff must ultimately be mapped
into some sort of target-independent resource namespace.  We cannot even
assume that only one target (or target instance) is managing the whole of
the video card's resources.

  * Know what all the tradeoffs between various resource allocation requests
  are (i.e. if you double your framebuffer size, you cannot have double
  buffering, or you can choose to give up your z-buffer instead)
 
 Right. Can't that be a simple table ? (which would indicate how much memory
 the different buffer types need, etc.)

Not in all cases.  There are potentially many, many different
restrictions on what types of buffers can be mapped where, and in what
combinations, and all of this is highly chipset-dependent |-.
 
  * Be able to map abstract QoS requirement types to various combinations of
  the mapped resources, in a sufficiently generic manner that there's a
  _point_ to using one single API for this instead of just hardcoding the
  target knowledge into the app or a specific library (e.g.
 'libNvidiaRivaTNT2AGP32MB' or somesuch).
 
 Hmm. I don't know whether that is of *any* relevance. But I'm studying the
 CORBA architecture, especially its historical evolution. CORBA is a middleware
 architecture to provide a set of 'services', encapsulating all the nasty details
 of OS dependence, transport protocols, etc.
 The more CORBA evolves, the more it becomes clear that users might want to
 explicitly control low level features, such as messaging strategies,
 concurrency strategies, etc.
 Therefore, there are more and more specifications added to CORBA which allow
 one to control these features in terms of 'policies', 'interceptors' (some
 sophisticated form of callbacks), etc.

CORBA is also slow - WAY too slow for a system layer such as GGI.
We are avoiding C++ altogether because of performance issues, so CORBA
seems to be out |-.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa updates

2000-11-02 Thread Niklas Höglund

On Tue, Oct 31, 2000 at 01:59:37PM -0800, Jon M. Taylor wrote:
 On Mon, 30 Oct 2000, beef wrote:
 
  On Sat, 28 Oct 2000, Jon M. Taylor wrote:
  It kind of works, but flickers horribly on the fbdev.
  
  what/where _could_ this doublebuffer problem be?
 
   So, I did a QuickHack(tm) to work around the problem - I pointed
 both buffers to the ggi_visual |-.  This let me render to either the
 front or back buffer, mapped to either hardware or software
 front/backbuffers, with or without hardware acceleration for both drawing
 triangles and the page flips.  As you have seen it also causes horrible
 flickering.  But it "worked" and at the time that was all I was interested
 in.  The hack was never meant to be more than a stopgap until I figured
 out how to do it all properly.  Unfortunately, there wasn't much in the
 way of buffer management API cut-ins in Mesa at the time, so it turned out
 to be more work than I had anticipated, and a few weeks later my Savage4
 driver project got canned and I stopped working on GGIMesa except for the
 occasional build fixes to keep up with the changing Mesa internals.

At that time I found that using a main loop looking like this does sort of
proper double-buffering using GGIMesa. Note that the SetMode call sets the
virtual width to twice the physical width.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <ggi/ggi.h>
#include <GL/gl.h>
#include <GL/ggimesa.h>  /* GGIMesa header; its location may vary with the Mesa version */

int main(int argc, char *argv[])
{
  int wid=800, hei=600;
  ggi_visual_t vis;

  if(ggiInit()) {
    fprintf(stderr, "Can't initialize ggi.\n");
    return EXIT_FAILURE;
  }
  vis = ggiOpen(NULL);
  if(!vis) {
    fprintf(stderr, "Can't open default ggi target.\n");
    ggiExit();
    return EXIT_FAILURE;
  }
  ggiSetFlags(vis, GGIFLAG_ASYNC);
  /* Virtual width is twice the physical width; the two halves serve as
     front and back buffer. */
  if(ggiSetGraphMode(vis, wid, hei, 2*wid, hei, 0) < 0) {
    fprintf(stderr, "Can't set mode on ggi visual.\n");
    ggiClose(vis);
    ggiExit();
    return EXIT_FAILURE;
  }
  GGIMesaContext ctx = GGIMesaCreateContext();
  GGIMesaSetVisual(ctx, vis, true, false);
  GGIMesaMakeCurrent(ctx);

  Initialize();

  for(;;) {
    static bool first=true;
    draw();
    glFlush();
    glFinish();
    ggiFlush(vis);
    ggiSetOrigin(vis, first ? 0 : wid, 0);  /* pan to the half just drawn */
    reshape(first ? wid : 0, 0, wid, hei);  /* next frame goes in the other half */
    glClear(GL_DEPTH_BUFFER_BIT);
    ggiDrawBox(vis, first ? wid : 0, 0, wid, hei);
    first=!first;
  }

  return EXIT_SUCCESS;
}


The reshape call takes four parameters (x,y,width,height), and sets the GL viewport
to draw in that area only. It can look like this:

static void reshape(int x, int y, int width, int height)
{
  GLfloat h = (GLfloat) height / (GLfloat) width;

  glViewport((GLint) x, (GLint) y, (GLint) width, (GLint) height);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  glFrustum(-1.0, 1.0, -h, h, 5.0, 60.0);
  glTranslatef(0.0, 0.0, -7.0);
  glMatrixMode(GL_MODELVIEW);
}

This can still flicker a bit, as the ggiSetOrigin() call isn't synchronized with the
physical display rate. This synchronization needs support from the (fb|KGI)con driver.
(It wasn't synchronized at the time I made this, maybe it is now?)


I think something like this should be added to GGIMesa. Let the application set up
the display (using GGI) and tell GGIMesa to draw into an area of a frame. Let GGI
deal with double-buffering. All GGIMesa needs to do is allow changing which frame
to draw into, and which area of it.

-- 
   Niklas




Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Stefan Seefeld

This brings up another interesting point:

what is the support for offscreen video memory allocation ?
I'm not sure I use the correct terminology, so here is what
I have in mind:

There is often a need to double buffer content in some form,
and map (blit) it into the screen at specific times. Of course,
the way to do that with GGI is to allocate a set of (memory) 
visuals and work with these.
So, what memory are the visuals allocated from ? Assuming that
they are allocated from video memory (framebuffer ?), I'd suggest
to think about a QoS (Quality of Service) issue: Given that video
memory is limited, some visuals would need to be allocated on regular
heap.
I might know when allocating visuals (drawing buffers) that some are
updated more frequently than others, i.e. they would profit much more
from being close to the graphic card. Is there (or could there be) any way 
to expose some policy issues like these through an API for drawable memory
management ?

You will notice that this is an issue which I brought up a couple of
months ago already: I'm thinking of a 'Backing Store' for berlin, i.e.
for example for video intensive graphics, I'd like to make backups of
the scene graph in front and behind the graphic with the high frame rate,
such that I then don't need to traverse the scene graph on each redraw,
but rather map the three layers (back, animated graphic, front) into the
screen to keep it consistent with the scene graph (for example if the
exposed region of the animation isn't regular (rectangular), or if the
layers are translucent, such that I need to blend them together, rather
than just blitting them in.

Regards,Stefan

___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Marcus Sundberg

Stefan Seefeld [EMAIL PROTECTED] writes:

 This brings up another interesting point:
 
 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 There is often a need to double buffer content in some form,
 and map (blit) it into the screen at specific times. Of course,
 the way to do that with GGI is to allocate a set of (memory) 
 visuals and work with these.

It is *NOT* the way, and will never be!
The correct way is to use the not-yet-written blitting extension,
so you can get hw accelerated blits when supported.

Until that has been written you should first try to set a mode
with a virtual Y-resolution higher than the physical one and use the
offscreen area for caching images, and ggiCopyBox() for blitting.
Only if that fails should you resort to using a memory visual and
crossblit.
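
Concretely, the first approach looks something like this minimal sketch,
assuming a target that honours a virtual Y-resolution of twice the visible
one (the mode numbers are arbitrary):

#include <stdlib.h>
#include <ggi/ggi.h>

int main(void)
{
        ggi_visual_t vis;
        ggi_color col;

        if (ggiInit() < 0)
                return EXIT_FAILURE;
        vis = ggiOpen(NULL);
        if (!vis) {
                ggiExit();
                return EXIT_FAILURE;
        }
        /* 640x480 visible, 640x960 virtual: rows 480..959 are offscreen. */
        if (ggiSetGraphMode(vis, 640, 480, 640, 960, GT_AUTO) < 0) {
                ggiClose(vis);
                ggiExit();
                return EXIT_FAILURE;
        }
        /* Cache a 64x64 tile in the offscreen region... */
        col.r = 0xffff; col.g = 0; col.b = 0;
        ggiSetGCForeground(vis, ggiMapColor(vis, &col));
        ggiDrawBox(vis, 0, 480, 64, 64);

        /* ...and blit it into the visible area whenever it is needed. */
        ggiCopyBox(vis, 0, 480, 64, 64, 100, 100);
        ggiFlush(vis);

        ggiGetc(vis);   /* wait for a key */
        ggiClose(vis);
        ggiExit();
        return EXIT_SUCCESS;
}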

 So, what memory are the visuals allocated from ? Assuming that
 they are allocated from video memory (framebuffer ?),

Your assumption is wrong, from targets.txt:

memory-target
=============

Description
+++++++++++

Emulates a linear framebuffer in main memory. This memory area can be
a shared memory segment, an area specified by the application, or be
malloc()ed by the memory-target itself.

 I'd suggest
 to think about a QoS (Quality of Service) issue: Given that video
 memory is limited, some visuals would need to be allocated on regular
 heap.
 I might know when allocating visuals (drawing buffers) that some are
 updated more frequently than others, i.e. they would profit much more
 from being close to the graphic card. Is there (or could there be) any way 
 to expose some policy issues like these through an API for drawable memory
 management ?

The idea is to implement simple offscreen memory requesting in
LibGGI, and to let the blitting extension have all the intelligence.
The blitting extension will have some sort of priority-based API
for allocating areas of either offscreen video memory or RAM, and
also for moving areas between these two types of memory. Something along
the lines of http://www.xfree86.org/4.0.1/DESIGN12.html

 You will notice that this is an issue which I brought up a couple of
 months ago already: I'm thinking of a 'Backing Store' for berlin, i.e.
 for example for video intensive graphics, I'd like to make backups of
 the scene graph in front and behind the graphic with the high frame rate,
 such that I then don't need to traverse the scene graph on each redraw,
 but rather map the three layers (back, animated graphic, front) into the
 screen to keep it consistent with the scene graph (for example if the
 exposed region of the animation isn't regular (rectangular), or if the
 layers are translucent, such that I need to blend them together, rather
 than just blitting them in.

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Andreas Beck

 It is *NOT* the way, and will never be!
 The correct way is to use the not-yet-written blitting extension,
 so you can get hw accelerated blits when supported.

Umm - good point ... Marcus: We should talk about the region management once
again ... and finally implement it. I have stubs for blit functions from my
Libbse experimental thingy ...

 Until that has been written you should first try to set a mode
 with a virtual Y-resolution higher than the physical one and use the
 offscreen area for caching images, and ggiCopyBox() for blitting.
 Only if that fails should you resort to using a memory visual and
 crossblit.

Yes. This is more or less what said extension will then do internally.

  So, what memory are the visuals allocated from ? Assuming that
  they are allocated from video memory (framebuffer ?),

 Your assumption is wrong, from targets.txt:

Not totally ... though in a nonobvious way, which I think I should
mention:

 Emulates a linear framebuffer in main memory. This memory area can be
 a shared memory segment, an area specified by the application, or be
 malloc()ed by the memory-target itself.

If you use mmap together with the option "an area specified by the
application", you can place a memvisual into vidmem.

 The idea is to implement simple offscreen memory requesting in
 LibGGI, and to let the blitting extension have all the intelligence.
 The blitting extension will have some sort of priority-based API
 for allocating areas of either offscreen video memory or RAM, and
 also for moving areas between these two types of memory. Something along
 the lines of http://www.xfree86.org/4.0.1/DESIGN12.html

Hmm - got to read that ...

CU, Andy

-- 
= Andreas Beck|  Email :  [EMAIL PROTECTED]=




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Lee Brown

On Thu, 02 Nov 2000, Marcus Sundberg wrote:
 Stefan Seefeld [EMAIL PROTECTED] writes:

 It is *NOT* the way, and will never be!
 The correct way is to use the not-yet-written blitting extension,
 so you can get hw accelerated blits when supported.

What would the extension API look like?

Thanks ahead,
-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Lee Brown

Scratch that last question.  I see the X documentation.


-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: GGIMesa updates

2000-11-02 Thread Jon M. Taylor

On Thu, 2 Nov 2000, [iso-8859-1] Niklas Höglund wrote:

 On Tue, Oct 31, 2000 at 01:59:37PM -0800, Jon M. Taylor wrote:
  On Mon, 30 Oct 2000, beef wrote:
  
   On Sat, 28 Oct 2000, Jon M. Taylor wrote:
   It kind of works, but flickers horribly on the fbdev.
   
   what/where _could_ this doublebuffer problem be?
  
  So, I did a QuickHack(tm) to work around the problem - I pointed
  both buffers to the ggi_visual |-.  This let me render to either the
  front or back buffer, mapped to either hardware or software
  front/backbuffers, with or without hardware acceleration for both drawing
  triangles and the page flips.  As you have seen it also causes horrible
  flickering.  But it "worked" and at the time that was all I was interested
  in.  The hack was never meant to be more than a stopgap until I figured
  out how to do it all properly.  Unfortunately, there wasn't much in the
  way of buffer management API cut-ins in Mesa at the time, so it turned out
  to be more work than I had anticipated, and a few weeks later my Savage4
  driver project got canned and I stopped working on GGIMesa except for the
  occasional build fixes to keep up with the changing Mesa internals.
 
 At that time I found that using a main loop looking like this does sort of
 proper double-buffering using GGIMesa. Note that the SetMode call sets the
 virtual width to twice the physical width.

[snip]

Thanks for the input, but I'm afraid that the "pageflip using
SetOrigin" hack won't work on all targets.  You _can_ allocate a
DirectBuffer or a memory_visual and use that as a backbuffer on every
target.

Jon
 
---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Jon M. Taylor

On Thu, 2 Nov 2000, Stefan Seefeld wrote:

 This brings up another interesting point:
 
 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 There is often a need to double buffer content in some form,
 and map (blit) it into the screen at specific times. Of course,
 the way to do that with GGI is to allocate a set of (memory) 
 visuals and work with these.

The _unaccelerated_ way.

 So, what memory are the visuals allocated from ? 

System memory.

 Assuming that
 they are allocated from video memory (framebuffer ?), 

They aren't.

 I'd suggest
 to think about a QoS (Quality of Service) issue: Given that video
 memory is limited, some visuals would need to be allocated on regular
 heap.

All memory_visuals already are.

 I might know when allocating visuals (drawing buffers) that some are
 updated more frequently than others, i.e. they would profit much more
 from being close to the graphic card. Is there (or could there be) any way 
 to expose some policy issues like these through an API for drawable memory
 management ?

Sure.  This is not such an easy task, though.  Such an API
(libDirectBuffer?) would need to be able to:

* Establish a set of targets which would know about all the different
types and configurations of memory buffers available for each target

* Establish global resource pools for each (e.g. dividing up a 32MB
AGP-mapped video memory aperture into front, back, z, stencil, texture,
etc buffers)

* Know what all the tradeoffs between various resource allocation requests
are (i.e. if you double your framebuffer size, you cannot have double
buffering, or you can choose to give up your z-buffer instead)

* Be able to map abstract QoS requirement types to various combinations of
the mapped resources, in a sufficiently generic manner that there's a
_point_ to using one single API for this instead of just hardcoding the
target knowledge into the app or a specific library (e.g.
'libNvidiaRivaTNT2AGP32MB' or somesuch).

Ideas are welcome.
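
To make the discussion concrete, here is a purely hypothetical sketch of
what the request side of such an API might look like. Every name below is
invented for illustration; none of it exists in LibGGI:

#include <ggi/ggi.h>

/* Hypothetical buffer types such an API could hand out. */
typedef enum {
        DB_BUF_FRONT, DB_BUF_BACK, DB_BUF_Z, DB_BUF_STENCIL, DB_BUF_TEXTURE
} db_buf_type;

/* One request: what kind of buffer, how big, plus a QoS hint. */
typedef struct {
        db_buf_type type;
        int width, height, bits;
        int priority;   /* higher = keep closer to the card */
} db_request;

/* The target would weigh the requests against its chipset-specific
   knowledge of the tradeoffs (lose the z-buffer vs. lose double
   buffering, etc.) and report what it could actually map. */
int dbAllocBuffers(ggi_visual_t vis, db_request *reqs, int nreqs);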
 
 You will notice that this is an issue which I brought up a couple of
 months ago already: I'm thinking of a 'Backing Store' for berlin, i.e.
 for example for video intensive graphics, I'd like to make backups of
 the scene graph in front and behind the graphic with the high frame rate,
 such that I then don't need to traverse the scene graph on each redraw,
 but rather map the three layers (back, animated graphic, front) into the
 screen to keep it consistent with the scene graph (for example if the
 exposed region of the animation isn't regular (rectangular), or if the
 layers are translucent, such that I need to blend them together, rather
 than just blitting them in.

Think about the API and target complexity that will be necessary
to intelligently ask for what you just described.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Stefan Seefeld

"Jon M. Taylor" wrote:

  I might know when allocating visuals (drawing buffers) that some are
  updated more frequently than others, i.e. they would profit much more
  from being close to the graphic card. Is there (or could there be) any way
  to expose some policy issues like these through an API for drawable memory
  management ?
 
 Sure.  This is not such an easy task, though.  Such an API
 (libDirectBuffer?) would need to be able to:
 
 * Establish a set of targets which would know about all the different
 types and configurations of memory buffers available for each target

why ? To be able to implement crossblits ? Can't you use a simple adapter
interface (some form of 'marshalling') ? I mean, the interesting case is
blitting from video memory to video memory, and there I assume that all
parameters (alignment etc.) are identical.

 * Establish global resource pools for each (e.g. dividing up a 32MB
 AGP-mapped video memory aperture into front, back, z, stencil, texture,
 etc buffers)

does this division need to be static ? Can't you have a single manager
instance which keeps track of which memory is allocated for which purpose ?
That would help in the implementation of QoS policies...

 * Know what all the tradeoffs between various resource allocation requests
 are (i.e. if you double your framebuffer size, you cannot have double
 buffering, or you can choose to give up your z-buffer instead)

Right. Can't that be a simple table ? (which would indicate how much memory
the different buffer types need, etc.)

 * Be able to map abstract QoS requirement types to various combinations of
 the mapped resources, in a sufficiently generic manner that there's a
 _point_ to using one single API for this instead of just hardcoding the
 target knowledge into the app or a specific library (e.g.
 'libNvidiaRivaTNT2AGP32MB' or somesuch).

Hmm. I don't know whether that is of *any* relevance. But I'm studying the
CORBA architecture, especially its historical evolution. CORBA is a middleware
architecture to provide a set of 'services', encapsulating all the nasty details
of OS dependence, transport protocols, etc.
The more CORBA evolves, the more it becomes clear that users might want to
explicitly control low level features, such as messaging strategies,
concurrency strategies, etc.
Therefore, there are more and more specifications added to CORBA which allow
one to control these features in terms of 'policies', 'interceptors' (some
sophisticated form of callbacks), etc.
Maybe it would be interesting for you to look into it, or to let us discuss this,
as I think that some general architectural principles would apply equally well
for GGI, where you try to encapsulate the OS and video hardware away from the
user, while still trying to provide a maximum of flexibility and efficiency. In
other words, some knowledge which is needed to optimize efficiently can't be
known while you implement GGI, so you need some cooperation from the user. The
question is how to interface this.

Best regards,   Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: [Berlin-design] GGIMesa updates

2000-10-31 Thread soyt

Quoting "Jon M. Taylor" [EMAIL PROTECTED]:

   Yep, that's what I'm seeing as well.  I haven't been able to track
 down the problem yet.  For now, I am hacking around the problem by
 manually editing Mesa/src/GGI/libMesaGGI.la and changing the line that
 reads:
 
 dependency_libs=' /usr/local/lib/libggi.la -lgii -lgg'
 
 to:
 
 dependency_libs=' -lggi -lgii -lgg'

I had a similar problem some time ago with the .la files.
The problem was: on 'make install' the lib paths
are not correctly set in lib*.la. They still point to the
lib in the build tree:

from /usr/local/lib/libgii.la:
-
# Libraries that this one depends upon.
dependency_libs=' ../gg/.libs/libgg.so'
--

I don't know the actual reason, but I got it working by manually
changing the dependencies in *.la.

Hope it helps.
Regards.




Re: GGIMesa updates

2000-10-31 Thread Jon M. Taylor

On Mon, 30 Oct 2000, beef wrote:

 On Sat, 28 Oct 2000, Jon M. Taylor wrote:
 
  I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
 _should_ all build just fine again, but I have weird libtool and autoconf
 incompatibilities popping up which are preventing the final library
 install so I can't test it over here.  If someone else could test it for
 me, that would be cool.  Brian, I still have to merge those config file
 patches you sent me - some of that stuff isn't strictly correct.
 
 Jon
 
 I have Mesa-HEAD-20001029 and ggi-devel-20001028;
 see the attachment for the bits I changed to get it to build.
 
 It kind of works, but flickers horribly on the fbdev.

Argh!  Why are you and Stefan getting this to work, when I get
segfaults???

 A 3rd party demo complained about 'too few' stencil bits. Are there any?

Stencil buffers are not supported in GGIMesa at this time.  I'll
look into it.
 
 what/where _could_ this doublebuffer problem be?
 
 -- 
 #berlin
 stefan bvc: I had hoped Jon would fix the double buffer problem as well...
 stefan bvc: mesa / ggi on /dev/fb flickers awefully
 stefan bvc: unfortunately, it appears Jon is the only person knowing
  MesaGGI. There is nobody else who can fix that. :(


OK, here's the whole story in detail.  Way back in mid-1999, I was
working at Creative Labs on an accelerated KGIcon device driver for the S3
Savage4 chipset (this project died an ugly death when S3 bought STB and
became a competitior...).  This meant that I needed to be able to handle
both software and hardware accelerations in the GGIMesa targets, including
soft/hard front and backbuffer mappings and page flipping.  The
doublebuffer implementation in GGIMesa at the time was based on
malloc()ing a separate backbuffer, drawing into that and blitting it to
the frontbuffer (the ggi_visual) every flush().  This was not compatible
with the acceleration cut-in architecture I had in mind at the time - no
way to hook a separate buffer-mapping function and no possibility to use
hardware acceleration.
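
For reference, that malloc() backbuffer scheme in outline (a sketch;
ggiPutBox() is the real LibGGI call, the two helpers are illustrative
names invented here):

#include <stdlib.h>
#include <ggi/ggi.h>

/* Software backbuffer: draw into malloc()ed memory, then push the whole
   area to the visible frame on each flush. */
void *alloc_backbuffer(int w, int h, int bytes_per_pixel)
{
        return malloc((size_t)w * h * bytes_per_pixel);
}

void flush_backbuffer(ggi_visual_t vis, void *back, int w, int h)
{
        ggiPutBox(vis, 0, 0, w, h, back);  /* blit backbuffer to frontbuffer */
}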

So, I did a QuickHack(tm) to work around the problem - I pointed
both buffers to the ggi_visual |-.  This let me render to either the
front or back buffer, mapped to either hardware or software
front/backbuffers, with or without hardware acceleration for both drawing
triangles and the page flips.  As you have seen it also causes horrible
flickering.  But it "worked" and at the time that was all I was interested
in.  The hack was never meant to be more than a stopgap until I figured
out how to do it all properly.  Unfortunately, there wasn't much in the
way of buffer management API cut-ins in Mesa at the time, so it turned out
to be more work than I had anticipated, and a few weeks later my Savage4
driver project got canned and I stopped working on GGIMesa except for the
occasional build fixes to keep up with the changing Mesa internals.

I never implemented a better buffer-management scheme, because I
was (and still am) unsure as to the best way to provide target hooks for
buffer-management and page flipping functions in the GGIMesa targets.  I'm
going to try again - I'm a lot better at writing GGI extensions after my
work on LibXMI earlier this year and Mesa's internals have gotten a LOT
better recently.  But in the meantime, I'm going to revert back to the
software-only double buffering scheme I threw away last year so people can
run on fbdev without horrible flickering.  Note that it will still flicker
somewhat, because fbdev has no way to poll for VSYNC and thus the GGI
fbdev target has no way to synchronize ggiFlush() calls with the vertical
retrace.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: [Berlin-design] GGIMesa updates

2000-10-30 Thread Stefan Seefeld

"Jon M. Taylor" wrote:
 
 I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
 _should_ all build just fine again, but I have weird libtool and autoconf
 incompatibilities popping up which are preventing the final library
 install so I can't test it over here.  If someone else could test it for
 me, that would be cool.  Brian, I still have to merge those config file
 patches you sent me - some of that stuff isn't strictly correct.

Trying to build Mesa with GGI support, I get the following linking error.
Since I've seen a similar report on the GGI mailing list some months ago, I
Cc it to the list; maybe it is an obvious problem to some of you...

The problem seems to be related to libtool, as the linker complains in the
final link stage about:

/usr/local/lib/libggi.la: file not recognized: File format not recognized

(the file in question is indeed a libtool generated shell script).

I hope this is an easy-to-fix configuration problem, as I'm very eager to
see GGIMesa in action on my /dev/fb :)

Best regards,   Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




GGIMesa updates

2000-10-28 Thread Jon M. Taylor

I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
_should_ all build just fine again, but I have weird libtool and autoconf
incompatibilities popping up which are preventing the final library
install so I can't test it over here.  If someone else could test it for
me, that would be cool.  Brian, I still have to merge those config file
patches you sent me - some of that stuff isn't strictly correct.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed





Minor GGIMesa updates

2000-01-06 Thread Jon M. Taylor

If any of you have been playing with the newest GGIMesa CVS
sources, you probably noticed that they don't build due to internal Mesa
changes.  Well, I just fixed those problems and GGIMesa CVS now builds
(against the latest GGI CVS) and runs again.  The only functional changes
are a revamped debugging-print system which matches that used by LibGIC
and is quite a bit cleaner than the older stuff.  Also, the genkgi target
is always disabled in configure.in now, since it isn't being used yet and
was causing some DL loading bugs for some people.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed