GGIMesa

2001-12-14 Thread Filip Spacek


Hello everyone!

I have posted a first version of my GGIMesa patch at

http://www.student.math.uwaterloo.ca/~fspacek/ggimesa.diff

It is mostly functional, but there are still some (mostly implementation)
details that need to be worked out, so I haven't posted it to the Mesa3D ml
yet; rather, I'm seeking advice here first.

I have tested the patch on 8 and 16 bit single and double buffered X
target as well as the aalib target. I currently have no other targets to
test it on (though KGI is very very very close) so if anybody wants to
give it a shot somewhere else I'd be grateful.

I've radically changed the interface. It follows the ideas I've outlined in
my previous mail (since nobody seemed to have any serious objections).

First of all, I've attempted to follow the usual extension use patterns
more closely. The functions ggiMesaInit and ggiMesaAttach are now
mandatory, and GGIMesa will _not_ take care of them silently if you forget
them.

The new interface is as follows:

int ggiMesaExtendVisual(ggi_visual_t vis, GLboolean alpha_flag,
GLboolean stereo_flag, GLint depth_size,
GLint stencil_size, GLint accum_red_size,
GLint accum_green_size, GLint accum_blue_size,
GLint accum_alpha_size, GLint num_samples);

This is probably the first call one should make. It extends the
visual with capabilities that cannot be specified by
ggi_mode. Note, however, that everything that can be specified by
the visual's mode is taken from it: RGB/COLOR_INDEX as well as
single/double buffering is set up depending on the current mode.

Note that a call to this function is not strictly necessary. As
the name suggests, it merely extends the visual. So if the
capabilities provided by the plain GGI visual are sufficient, you
don't need to call it.

ggi_mesa_context_t ggiMesaCreateContext(ggi_visual_t vis);

Create a context capable of rendering on visual vis. This is the
fundamental one, usually called after the visual has been properly
extended.

void ggiMesaMakeCurrent(ggi_mesa_context_t ctx, ggi_visual_t vis);

Bind the context to the visual and select the context as the
current one.
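
For illustration, here is a minimal sketch of the intended call sequence
(error handling omitted; the header name and the exact ggiMesaInit and
ggiMesaAttach signatures are my assumptions, since they aren't spelled
out above):

#include <ggi/ggi.h>
#include <GL/ggimesa.h>  /* assumed header name */

int main(void)
{
  ggi_visual_t vis;
  ggi_mesa_context_t ctx;

  ggiInit();
  ggiMesaInit();               /* now mandatory */
  vis = ggiOpen(NULL);
  ggiMesaAttach(vis);          /* now mandatory, too (signature assumed) */
  ggiSetSimpleMode(vis, 640, 480, GGI_AUTO, GT_AUTO);

  /* Optional: only needed for capabilities that ggi_mode cannot
     express, here a 16-bit depth buffer. */
  ggiMesaExtendVisual(vis, GL_FALSE, GL_FALSE, 16, 0, 0, 0, 0, 0, 0);

  ctx = ggiMesaCreateContext(vis);
  ggiMesaMakeCurrent(ctx, vis);

  /* ... render with GL calls, then ggiFlush(vis) ... */
  return 0;
}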

Now for the issues: I would very much like to make the use of Mesa as
transparent as possible. What that means is that once the user extends the
visual, it should be possible to call ggiSetMode and have GGIMesa adapt.
Currently (using GGI_CHG_APILIST) it is capable of resizing the necessary
buffers (there is a bug in the code and it crashes after about 20 resizes).
However, I am not at all sure what to do about a change of graphtype.
Obviously a lot of state will no longer be valid under a different
graphtype. Currently I'm thinking of completely invalidating the contexts
bound to the visual, but that violates my ideal of making the use totally
transparent. If anybody has a better idea, please help me out.

What I'm trying to do is come up with an interface that would be usable
even later, when a windowing system uses GGI and wants to do Mesa with
direct rendering. What I'm envisioning is some sort of flow-through visual
for the window in question (I know Stefan has done something similar for
Berlin). For this to work, it must be possible to resize the visual
without disturbing the context.

I have not submitted this patch to Mesa3D yet, and I will not do so until
the above issues are resolved. Since most of the problems are GGI- and not
Mesa-related, I figured I'd post it here first.


-Filip





Re: GGIMesa

2001-12-14 Thread Christoph Egger


On Fri, 14 Dec 2001, Filip Spacek wrote:

 Hello everyone!

 I have posted a first version of my GGIMesa patch at

 http://www.student.math.uwaterloo.ca/~fspacek/ggimesa.diff

Wow! 64K is an amazing size... (if only it were compressed 64K... ;-))

And your gear-on-AAlib screenshot looks very cool, too.

Great work!


 It is mostly functional, but there are still some (mostly
 implementation) details that need to be worked out, so I haven't posted
 it to the Mesa3D ml yet; rather, I'm seeking advice here first.

 I have tested the patch on 8 and 16 bit single and double buffered X
 target as well as the aalib target. I currently have no other targets
 to test it on (though KGI is very very very close) so if anybody wants
 to give it a shot somewhere else I'd be grateful.

 I've radically changed the interface. It follows the ideas I've
 outlined in my previous mail (since nobody seemed to have any serious
 objections).

 First of all, I've attempted to follow the usual extension use
 patterns more closely. The functions ggiMesaInit and ggiMesaAttach are
 now mandatory, and GGIMesa will _not_ take care of them silently if you
 forget them.

 The new interface is as follows:

 int ggiMesaExtendVisual(ggi_visual_t vis, GLboolean alpha_flag,
 GLboolean stereo_flag, GLint depth_size,
 GLint stencil_size, GLint accum_red_size,
 GLint accum_green_size, GLint accum_blue_size,
 GLint accum_alpha_size, GLint num_samples);

 This is probably the first call one should make. It extends the
 visual with capabilities that cannot be specified by ggi_mode.
 Note, however, that everything that can be specified by the
 visual's mode is taken from it: RGB/COLOR_INDEX as well as
 single/double buffering is set up depending on the current mode.

 Note that a call to this function is not strictly necessary. As
 the name suggests, it merely extends the visual. So if the
 capabilities provided by the plain GGI visual are sufficient, you
 don't need to call it.

 ggi_mesa_context_t ggiMesaCreateContext(ggi_visual_t vis);

 Create a context capable of rendering on visual vis. This is the
 fundamental one, usually called after the visual has been properly
 extended.

 void ggiMesaMakeCurrent(ggi_mesa_context_t ctx, ggi_visual_t vis);

 Bind the context to the visual and select the context as the
 current one.

 Now for the issues: I would very much like to make the use of Mesa as
 transparent as possible. What that means is that once the user extends the
 visual, it should be possible to call ggiSetMode and have GGIMesa adapt.
 Currently (using GGI_CHG_APILIST) it is capable of resizing the necessary
 buffers (there is a bug in the code and it crashes after about 20 resizes).

Do you make use of libbuf for buffer support? If yes, you might have found
a bug in libgalloc (libgalloc is responsible for resizing resources).

 However, I am not at all sure what to do about a change of graphtype.

You have to overload LibGGI's internal getapi function. LibXMI's
X/Xlib targets give you an idea of how to do that. But be careful: you must
call LibGGI's getapi function too; otherwise LibGGI won't handle the
reloading of its own default sublibs anymore.

 Obviously a lot of state will no longer be valid under a different
 graphtype. Currently I'm thinking of completely invalidating the contexts
 bound to the visual, but that violates my ideal of making the use totally
 transparent. If anybody has a better idea, please help me out.

 What I'm trying to do is come up with an interface that would be usable
 even later, when a windowing system uses GGI and wants to do Mesa with
 direct rendering. What I'm envisioning is some sort of flow-through visual
 for the window in question (I know Stefan has done something similar for
 Berlin). For this to work, it must be possible to resize the visual
 without disturbing the context.

Sounds good.

 I have not submitted this patch to Mesa3D yet, and I will not do so until
 the above issues are resolved. Since most of the problems are GGI- and not
 Mesa-related, I figured I'd post it here first.



CU,

Christoph Egger
E-Mail: [EMAIL PROTECTED]




Re: GGIMesa Interface

2001-12-05 Thread Stefan Seefeld

Filip Spacek wrote:

 I've recently started fixing up GGI Mesa (which in the current 4.0 release
 of Mesa doesn't even compile).


Whoa, that's excellent news ! We'd *love* to be able to run Mesa over
GGI again as a powerful renderer implementation in the Berlin project !
May the power be with you !

Stefan

PS: heh, I just now recognize your name...:)








GGIMesa Interface

2001-12-04 Thread Filip Spacek


I've recently started fixing up GGIMesa (which in the current 4.0 release
of Mesa doesn't even compile). I have it mostly working, but I am a bit
concerned about the interface. Normally, the usual sequence of calls is:
create a visual, then a context, and then any buffers. The current GGIMesa
creates the context first and then the visual, which leads to some
nastiness. I'm planning to change the interface so that it follows the
xmesa interface a bit more closely. I'm still rather new to GGI and I don't
know exactly what kind of behaviour wrt the ggi visual and mode is expected
of an extension, so I hope somebody here will be able to point out any
inconsistencies.

The xmesa interface follows roughly the sequence (this is a crude
simplification): 

1. XMesaCreateVisual
Create a new visual which is composed of the standard X11
visual and extended by the info that is required by OpenGL and
cannot be described by the X11 visual (depth, stencil, accum
sizes and so on)

2. XMesaCreateContext
Using the previously created visual, create a new context

3. XMesaCreate{Window|Pixmap}Buffer
Create a buffer for a window or a pixmap as described by the
previously created visual

4. XMesaMakeCurrent
Bind the context to the buffer (and make it current)


I'm assuming that similarly-sounding things in X mean similar things in
GGI; hopefully this is not totally incorrect. This led me to the following
interface:

1. int GGIMesaCreateVisual(ggi_visual_t vis, GLboolean alpha_flag,
   GLboolean db_flag, GLboolean stereo_flag,
   GLint depth_size, GLint stencil_size,
   GLint accum_red_size, GLint accum_green_size,
   GLint accum_blue_size, GLint accum_alpha_size,
   GLint num_samples)
This prototype corresponds exactly to XMesaCreateVisual, except
that it applies to a ggi_visual_t. The big difference is that in
GGI the visual also corresponds to all drawing buffers, so this
function effectively replaces XMesaCreateWindowBuffer: it assumes
that a valid mode is set and creates all necessary buffers of the
appropriate size and requested depth. This is also where any
interaction with libGAlloc would occur to get any necessary
hardware resources.

2. GGIMesaContext GGIMesaCreateContext(ggi_visual_t vis)
Assuming that GGIMesaCreateVisual succeeded on the visual this
function will create the necessary GL context.

3. void GGIMesaMakeCurrent(GGIMesaContext ctx)
I'm not exactly sure whether this function is strictly necessary.
Is it even possible to have more than one ggi_visual_t open at a
time?
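
To make the proposed sequence concrete, a rough usage sketch (the
mode-setting boilerplate is my assumption; error handling omitted):

ggi_visual_t vis;
GGIMesaContext ctx;

ggiInit();
vis = ggiOpen(NULL);
ggiSetSimpleMode(vis, 640, 480, GGI_AUTO, GT_AUTO);  /* valid mode first */

/* no alpha, double-buffered, no stereo, 16-bit depth,
   no stencil/accum buffers, no multisampling */
GGIMesaCreateVisual(vis, GL_FALSE, GL_TRUE, GL_FALSE,
                    16, 0, 0, 0, 0, 0, 0);
ctx = GGIMesaCreateContext(vis);
GGIMesaMakeCurrent(ctx);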


I am uncertain about some parts of the above interface: Is extending a
visual through GGIMesaCreateVisual a good idea? Currently it assumes that
a valid mode is set and uses the set dimensions to allocate the necessary
buffers. Would it be a better idea to instead create a full-blown
GGIMesaSetMode? It would make integration with libGAlloc much easier, but
I'm not sure whether it would be flexible enough.


Now for an implementation issue: double buffering. Currently, I have
implemented double buffering using the ggiSet*Frame() family of calls. Is this
the only implementation I should provide? Should I provide an alternate
implementation using ggiSetOrigin in case the virtual resolution is large
enough? And what about the case when the target does not support any sort of
double buffering and the user requests double-buffered OpenGL? If the
target didn't bother emulating double buffering, should Mesa do it?
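
For reference, the ggiSet*Frame() approach looks roughly like this in
plain LibGGI terms (a sketch assuming the target grants a two-frame mode;
not code from the patch):

/* after ggiOpen(): request two frames of 640x480 */
ggiSetSimpleMode(vis, 640, 480, 2, GT_AUTO);

int back = 1;
for (;;) {
  ggiSetWriteFrame(vis, back);    /* rendering targets the hidden frame */
  /* ... draw the scene ... */
  ggiSetDisplayFrame(vis, back);  /* flip */
  ggiFlush(vis);
  back = !back;
}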


-Filip





Re: GGIMesa Interface

2001-12-04 Thread Brian S. Julin


On Tue, 4 Dec 2001, Filip Spacek wrote:
 exactly what kind of behaviour wrt the ggi visual and mode is expected of
 an extension, so I hope somebody here will be able to point out any
 inconsistencies.

Well, of a pure GGI extension, we'd expect that it could deal with 
re-creating whatever extra context it has when ggiSetMode is called.
I wouldn't hold GGIMesa to that standard, though, since it is meant
to provide Mesa look and feel.

 3. void GGIMesaMakeCurrent(GGIMesaContext ctx)
   I'm not exactly sure whether this function is strictly necessary.
   Is it even possible to have more than one ggi_visual_t open at a
   time?

Yes, but in GGI each visual has its own context.  I don't know
enough about Mesa to even know properly what the Mesa function does :-)

 I am uncertain about some parts of the above interface: Is extending a
 visual through GGIMesaCreateVisual a good idea? Currently it assumes that
 a valid mode is set and uses the set dimensions to allocate the necessary
 buffers. Would it be a better idea to instead create a full-blown
 GGIMesaSetMode? It would make integration with libGAlloc much easier, but
 I'm not sure whether it would be flexible enough.

I would concentrate on what would make Mesa API users most comfortable.

 Now for an implementation issue: double buffering. Currently, I have
 implemented double buffering using the ggiSet*Frame() family of calls. Is
 this the only implementation I should provide? Should I provide an
 alternate implementation using ggiSetOrigin in case the virtual resolution
 is large enough? And what about the case when the target does not support
 any sort of double buffering and the user requests double-buffered OpenGL?
 If the target didn't bother emulating double buffering, should Mesa do it?

I'd say not; frames basically use the SetOrigin trick but conceal it
from the user.  If a GGI target doesn't support them but does support
SetOrigin, time is better spent fixing that target IMO.

--
Brian




Re: GGIMesa (Was: Re: Presentation)

2001-06-10 Thread Brian S. Julin



 David Pettersson wrote:
  I just joined this list, in order to follow the project's progress. I have
  been a happy user of the library since 1999 when I was desperately looking
  for a simple but powerful graphics interface :).
  
  Anyway, I have a few ideas on how to extend GGI further, but after doing
  some research it seems my ideas about using OpenGL and GGI together aren't
  unique (Mesa already has GGI support -- the guys in #ggi probably already
  knew that but they were fast asleep :).

Hold on for a bit and Christoph and I will be making a proposal for GGI's
own advanced 2D API, the basic ideas behind which should extend easily into 3D.

  So, I am now without any particular idea, but I will gladly help out with
  the development. I thought I'd start out as a reviewer to get a better idea
  of what the code does. If anyone has any ideas, comments or suggestions,
  please let me know. 

Having just been mucking around in the source code a lot, these are some 
of the major things that stuck out to me as needing doing, aside from
the new extensions Christoph and I have started (which are also in need
of work.)

1) If you value the LibGII input library a whole lot, then some work needs 
to be done there.  We once had two objects in LibGII, inputs and filters.  
Then they were unified, but HOWTOs were never really written for writing/using
filters.  Filters are the second punch in the one-two combo that make LibGII 
something really special (target independence being the first.)  If people 
were to grasp their potential for practical application (e.g. advanced game 
control tuning, input emulation for the disabled, etc.) then LibGII would 
be more actively used IMO.

2) We need to flesh out generic default renderers, especially in extensions
and especially ggiCrossBlit.  In many places, these have been implemented for
the common cases, but never followed through with more thorough optimization.
Part of the point of LibGGI is to take advantage of a modular system to 
dynamically optimize stuff; we should be sure we have a full menu of dynamic 
modules.

3) Embedded folks want to link LibGGI statically.  This has been on the 
TODO list for some time now.  Marcus finished up some preliminary work that 
makes this much less hairy to accomplish about a year ago.  Basically, a 
compile-time option that replaces the dl loading mechanism with a simple 
table lookup needs to be added in order to accomplish this.

On Sun, 10 Jun 2001, Stefan Seefeld wrote:
 contact Jon Taylor for news about GGIMesa. Half a year ago he was about to 
 add some acceleration support for matrox cards. Dunno how far he has been 
 getting with that.

4) We should take the following tack as far as hardware drivers are concerned:
we need two things -- a reliable, secure kernel graphics system in the long
term (that is, KGI), and in the short term enough hardware acceleration
to keep LibGGI competently fast and to test new APIs or libraries like
GGIMesa for 2D and 3D so they are ready to use once KGI comes to fruition.
So if to-the-metal programming is your strong point, help out KGI.  But for 
the short term, I think we should NOT invest time in writing our own 
userspace drivers, because other projects (DirectFB, DRI) are already 
doing this and it really is a needless duplication of effort.  Instead we 
should figure out how to tap into these drivers to accelerate GGI and its 
extensions (I already have some preliminary code for DirectFB drivers.)  
This latter work requires a good understanding of linking object modules 
and preprocessing/makefile systems.  Using DRI modules will be a tougher 
challenge than DirectFB.

--
Brian 

P.S. to the list: sorry for the lack of activity; a number of personal
issues came up e.g. visiting parents and stuff, and I have a presentation
for a conference I must give on Thursday, and of course I am hardly yet 
prepared for it :-).  I will try to get back to stamping out those last few 
bugs as soon as I can.





problems with GGIMesa (post Mesa 3.4)

2000-12-05 Thread Stefan Seefeld

hi there (in particular Jon),

I'm trying to get MesaGGI up and running. In doing so I'm
experiencing a couple of difficulties:

First of all, Mesa 3.4 doesn't compile; more specifically,
it stops in the GGI code ('ctx.Texture.Enabled' not being defined).
So I checked out from CVS, hoping that it is more stable than
the latest stable release...

All seems to compile fine, but when I run my application, I
get unresolved symbols:

LibGG: unable to open lib: /usr/local/lib/ggi/mesa/default/stubs.so: undefined symbol:
_swsetup_UnregisterVB
update_state == NULL!
Please check your config files!

What's wrong ?

Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-21 Thread Steffen Seeger

"Jon M. Taylor" wrote:

 Antonio Campos wrote:

  1) Installing KGI is not an easy task. (It only supports a few cards).
 
 _Running_ KGI is not an easy task, because it only supports a few
 cards.  Installing it is actually pretty easy, unless you don't already
 know about kernel development issues, in which case it would be _very_
 difficult.  KGI is not meant for the end user yet, although it is closer
 than you might think.

This is correct, but also to some degree intended. I have concentrated on
getting the framework and concepts worked out, not on writing as many
drivers as possible.

  2) It doesn't expose a way to handle 2D and 3D graphics in a unified
  (inside kernel) way.
 
 Yes, it does.  Read the docs, please.

To which I would only like to add that they may be found at
http://kgi.sourceforge.net ...

  3) It doesn't handle the resource allocation of buffers (framebuffer
  memory (back and front, double and triple buffers, etc.), stencil
  buffers, and the like...)
 
 Yes, it does.  Or rather, it provides support for resource
 allocation of abstract buffer types, and the individual KGI drivers
 themselves map out whatever buffers their hardware provides.

It does handle mode specification and initialization of almost all buffer
formats I know of. Even more: you can specify z-buffered modes, with/without
alpha channels, overlays, stereo, anything you can think of.

E.g. initialization of a double-buffered 16bpp stereo mode with a 16-bit
z-buffer works fine with the Permedia2 driver.

However, splitting resources is not yet addressed by KGI-0.9; that is
planned once I have an accelerated X server going.

So, in that sense we are headed in the right direction (though not yet
where we want to go).

Steffen

___
Steffen Seeger  mailto:[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-20 Thread Antonio Campos

Steffen Seeger wrote:

 Antonio Campos wrote:

  People want an OS for accessing the hardware in a clean, fast and reliable way.
  That includes the graphics hardware. And I must say that this handling is one of
  the most important tasks in modern operating systems (and one of the things that
  the user notices most quickly). And this handling is one of the things that Linux
  users can't take pride in.
  We have that strange and quite limited fbdev kernel hack, the slow and
  uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
  of handling 2D and 3D graphics...
  I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
  projects would deprecate the OSS sound drivers in a unified sound system), but it
  seems to me that it is not going in the right direction (I'm sorry about saying
  this).

 So, in your opinion, what is wrong with the direction KGI is heading in?


Maybe I should have said that GGI is going in the wrong direction, not
KGI. Anyway, I don't know KGI or GGI internals very well, but it seems
to me that:

1) Installing KGI is not an easy task. (It only supports a few cards.)
2) It doesn't expose a way to handle 2D and 3D graphics in a unified
(inside kernel) way. (Maybe I'm misunderstanding something, and this is
the task of GGI, etc...)
3) It doesn't handle the resource allocation of buffers (framebuffer
memory (back and front, double and triple buffers, etc.), stencil
buffers, and the like...)

Just to name some holes I see...
front), stencil,


 Steffen

 ___
 Steffen Seeger  mailto:[EMAIL PROTECTED]
 TU-Chemnitz  http://www.tu-chemnitz.de/~sse




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-20 Thread Jon M. Taylor

On Tue, 21 Nov 2000, Antonio Campos wrote:

 Steffen Seeger wrote:
 
  Antonio Campos wrote:
 
   People want an OS for accessing the hardware in a clean, fast and reliable way.
   That includes the graphics hardware. And I must say that this handling is one of
   the most important tasks in modern operating systems (and one of the things that
   the user notices most quickly). And this handling is one of the things that Linux
   users can't take pride in.
   We have that strange and quite limited fbdev kernel hack, the slow and
   uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
   of handling 2D and 3D graphics...
   I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
   projects would deprecate the OSS sound drivers in a unified sound system), but it
   seems to me that it is not going in the right direction (I'm sorry about saying
   this).
 
  So, in your opinion, what is wrong with the direction KGI is heading in?
 
 
 Maybe I should have said that GGI is going in the wrong direction, not
 KGI. Anyway, I don't know KGI or GGI internals very well, but it seems
 to me that:
 
 1) Installing KGI is not an easy task. (It only supports a few cards).

_Running_ KGI is not an easy task, because it only supports a few
cards.  Installing it is actually pretty easy, unless you don't already
know about kernel development issues, in which case it would be _very_
difficult.  KGI is not meant for the end user yet, although it is closer
than you might think.

 2) It doesn't expose a way to handle 2D and 3D graphics in a unified
 (inside kernel) way.

Yes, it does.  Read the docs, please.

 (Maybe I'm misunderstanding something, and this is the task of GGI,
 etc...)

No, it is the task of KGI.

 3) It doesn't handle the resource allocation of buffers (framebuffer
 memory (back and front, double and triple buffers, etc.), stencil
 buffers, and the like...)

Yes, it does.  Or rather, it provides support for resource
allocation of abstract buffer types, and the individual KGI drivers
themselves map out whatever buffers their hardware provides.
 
 Just to name some holes I see...
 front), stencil,

KGI_A_STENCIL is clearly defined in kgi.h, and the Permedia driver
uses it.

Jon 

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-12 Thread Antonio Campos

Lee Brown wrote:

 Antonio:

  People want an OS for accessing the hardware in a clean, fast and reliable way.
  That includes the graphics hardware. And I must say that this handling is one of
  the most important tasks in modern operating systems (and one of the things that
  the user notices most quickly).

 Thanks for your input.  I am starting to get more involved with the KGI
 project even though I am not sure that I agree with it 100%.


My input was just intended to give a general view of what I think GGI/KGI
should address (and currently doesn't).


 BTW: What in GGI/KGI are you interested in?


Correctly handling graphics hardware resources at the kernel level.
With that settled, one can then easily construct libraries, servers (X,
Berlin or the Mac OS X PDF renderer), or even graphics console programs.


 --
 Lee Brown Jr.
 [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-10 Thread Antonio Campos

Stefan Seefeld wrote:

 Lee Brown wrote:

   Perhaps you can clue me in.  I still don't
  understand the difficulty in accessing video memory.  The fbdev already mmaps
  all of video memory.  There it is. Let people have at it.

 Maybe you should play with DOS, pre-protected mode (remember ?).
 Here is your memory, do what you want with it...

 What do you want an OS for ?


People want an OS for accessing the hardware in a clean, fast and reliable way.
That includes the graphics hardware. And I must say that this handling is one of
the most important tasks in modern operating systems (and one of the things that
the user notices most quickly). And this handling is one of the things that Linux
users can't take pride in.
We have that strange and quite limited fbdev kernel hack, the slow and
uncomfortable-to-program Xlib (DGA, DRI, etc...), and of course, no unified way
of handling 2D and 3D graphics...
I hoped the GGI/KGI project would fill this gap (the same way I hoped the Alsa+OpenAL
projects would deprecate the OSS sound drivers in a unified sound system), but it
seems to me that it is not going in the right direction (I'm sorry about saying
this).


 Stefan
 ___

 Stefan Seefeld
 Departement de Physique
 Universite de Montreal
 email: [EMAIL PROTECTED]

 ___

   ...ich hab' noch einen Koffer in Berlin...





Re: Video memory (Was: Re: GGIMesa updates)

2000-11-10 Thread Antonio Campos

Lee Brown wrote:

 On Sat, 04 Nov 2000, Stefan Seefeld wrote:
  Lee Brown wrote:
   Why can't we just let the client (Stefan) draw to the offscreen part
   of the framebuffer?
  had you followed the recent discussion, you would know. As always, GGI's aim is
  to insulate h/w specifics from the client. Some graphics cards might have special
  memory for this kind of thing, z-buffer, etc.
  What if my card doesn't have as much memory as I request ? What if I want multiple
  offscreen buffers ?

 What if GGI just told you how much memory was available, gave you the ability
 to access it, and let you regulate it  for yourself? Would that be an
 improvement?

  In fact, I think video memory management should be at the very core of GGI,
  together with drawing primitives. Every advanced program will require that.

 I agree that the concept of a visual needs to address the fact that it is
 possible to have non-viewable target regions and give the user the ability to
 make full use of this resource.  IMHO, GGI should make things possible, not
 limit the possibilities.

 Lee Brown Jr.
 [EMAIL PROTECTED]

It seems to me that in the end we're asking for something like DirectDraw (on Windows,
you know...) and its surfaces.
Although DirectDraw is quite messy (from the programmer's and the user's point of view,
COM architecture, etc...) because it doesn't protect the video memory from malicious
programs, it's a working implementation on many graphics boards. So maybe the GGI team
should take a look at it. By the way, has this team talked with the DRI one? I think the
DRI project is doing things wrong. They are putting all this 3D management stuff in
the X Server (and in the kernel), but they don't manage 2D graphics well (nor does the
X Server, nor even the DGA architecture). Aren't they sticking their noses into territory
that the awaited KGI direct video memory hardware management system (which should reside
in the kernel, at least in part...) should conquer?






Re: Video memory (Was: Re: GGIMesa updates)

2000-11-07 Thread Stefan Seefeld

Lee Brown wrote:

  Perhaps you can clue me in.  I still don't
 understand the difficulty in accessing video memory.  The fbdev already mmaps
 all of video memory.  There it is. Let people have at it.

Maybe you should play with DOS, pre-protected mode (remember ?).
Here is your memory, do what you want with it...

What do you want an OS for ?

Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: GGIMesa updates

2000-11-04 Thread Marcus Sundberg

"Jon M. Taylor" [EMAIL PROTECTED] writes:

 On 3 Nov 2000, Marcus Sundberg wrote:
 
  "Jon M. Taylor" [EMAIL PROTECTED] writes:
  
   On Thu, 2 Nov 2000, Niklas Höglund wrote:
At that time I found that a main loop looking like this does a sort of
proper double-buffering using GGIMesa. Note that the SetMode call sets the
virtual width to twice the physical width.
  
  Why in the world would you want to use SetOrigin to just flip pages
  when there's a perfectly good API for handling multiple frames?
 
   QuickHack.

Requesting multiple frames properly is much quicker to implement
and works on more targets.

 Thanks for the input, but I'm afraid that the "pageflip using
   SetOrigin" hack won't work on all targets.
  
  Neither does normal multiple frames, so?!?
 
   So the point was to find a QH which would always enable
 doublebuffering on all targets, no matter the inefficiency.  Lots of GL
 code requires doublebuffering.

Sure, I'm not arguing against the reason for the original malloc()
hack. I just find the current discussion about how to fix things
strange, when there is exactly one obviously correct way to do it.

  I'd like to have a look at this "problem", but yesterday the Mesa
  CVS didn't compile at all. :(
 
   Yeah, they chose yesterday to add a whole new separated software
 rasterizer cut-in layer to Mesa CVS |-/.  I wish Brian had decided to keep
 that stuff in the 3.5 branch only - kind of odd, when he said that he
 wanted to release 3.4 a few days ago

Ah well, hope it will start working again soon then...

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-04 Thread Lee Brown

what is the support for offscreen video memory allocation ?
I'm not sure I use the correct terminology, so here is what
I have in mind:

Why can't we just let the client (Stefan) draw to the offscreen part
of the framebuffer?  I wrote a little demo (with minor changes to the fbdev
code) program that allowed me to draw offscreen (outside of the virtual area)
and then use ggiCopyBox to blit it to the viewable (virtual/pannable) area when
needed. What am I missing here?

fntPrintChar(rootvis, font, 'a', xpos, ypos, pixs);  /* offscreen */

ggiGetc(rootvis); /* nothing is viewable */
ggiCopyBox(rootvis, xpos + dim.dx, ypos + dim.dy, dim.width, dim.height,
           100 + dim.dx, 100 + dim.dy); /* all of a sudden an 'a' appears */



-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-04 Thread Jon M. Taylor

On Sat, 4 Nov 2000, Lee Brown wrote:

 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 Why can't we just let the client (Stefan) draw to the offscreen part
 of the framebuffer?  

There may not always BE an offscreen part of the framebuffer on
all targets.  In particular, the targets which do not support one or more
DirectBuffer mappings cannot use this method.

 I wrote a little demo (with minor changes to the fbdev
 code) program that allowed me to draw offscreen (outside of the virtual area)
 and then use ggiCopyBox to blit it to the viewable (virtual/pannable) area when
 needed. What am I missing here?

Did you try it on all targets?  

Jon 

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa updates

2000-11-03 Thread Marcus Sundberg

"Jon M. Taylor" [EMAIL PROTECTED] writes:

 On Thu, 2 Nov 2000, Niklas Höglund wrote:
  At that time I found that a main loop looking like this does a sort of
  proper double-buffering using GGIMesa. Note that the SetMode call sets the
  virtual width to twice the physical width.

Why in the world would you want to use SetOrigin to just flip pages
when there's a perfectly good API for handling multiple frames?

   Thanks for the input, but I'm afraid that the "pageflip using
 SetOrigin" hack won't work on all targets.

Neither does normal multiple frames, so?!?
If we'd only support features that work on every piece of hardware
the entire project would fit into an empty file...

*Of course* you should implement back/front buffering by simply
having two separate buffers and switching between them! If that's not
possible with the current target/mode then tough luck, you just have
to fall back to:
 allocate a DirectBuffer or a memory_visual and use that as a
 backbuffer on every target.


I'd like to have a look at this "problem", but yesterday the Mesa
CVS didn't compile at all. :(

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-03 Thread Jon M. Taylor

On Fri, 3 Nov 2000, Stefan Seefeld wrote:

 "Jon M. Taylor" wrote:
 
   I might know when allocating visuals (drawing buffers) that some are
   updated more frequently than others, i.e. they would profit much more
   from being close to the graphic card. Is there (or could there be) any way
   to expose some policy issues like these through an API for drawable memory
   management ?
  
  Sure.  This is not such an easy task, though.  Such an API
  (libDirectBuffer?) would need to be able to:
  
  * Establish a set of targets which would know about all the different
  types and configurations of memory buffers available for each target
 
 why ? To be able to implement crossblits ? 

No, to be able to set modes intelligently in the presence of
arbitrarily complex extensions attached to any number of visuals which
might use ggi_resources which are exposed and managed by a particular
target.

 Can't you use a simple adapter
 interface (some form of 'marshalling') ? 

We already do, sort of.  The target strings and the
request-strings which the KGI/kgicon targets use do something like this.
Resource request strings will presumably have the same sort of "namespace
tree" format.  I've proposed a resource request hierarchy based on this
type of system before - search the archives.

 I mean, the interesting case is
 blitting from video memory to video memory, 

Define "video memory".  PCI and especially AGP memory mapping
tricks make this potentially quite complex.  Is the memory on the card, or
system RAM mapped across the AGP GART pagetables?  Is it tiled, and if so
how?  Has the region been marked cacheable, and if not can it be?  What
about MTRRs?  The issue CAN be simplified, but not if you expect to retain
any significant degree of optimization potential.

 and there I assume that all
 parameters (alignment etc.) are identical.

Not necessarily, in the case of tiled or GART-mapped AGP aperture
memory spaces.
 
  * Establish global resource pools for each (e.g. dividing up a 32MB
  AGP-mapped video memory aperture into front, back, z, stencil, texture,
  etc buffers)
 
 does this division need to be static ? 

It _cannot_ be static.

 Can't you have a single manager
 instance which keeps track of which memory is allocated for which purpose ?

Yes, _in the target code_.  This stuff must ultimately be mapped
into some sort of target-independent resource namespace.  We cannot even
assume that only one target (or target instance) is managing the whole of
the video card's resources.

  * Know what all the tradeoffs between various resource allocation requests
  are (i.e. if you double your framebuffer size, you cannot have double
  buffering, or you can choose to give up your z-buffer instead)
 
 Right. Can't that be a simple table ? (which would indicate how much memory
 the different buffer types need, etc.)

Not in all cases.  There are potentially many, many different
restrictions on what types of buffers can be mapped where, and in what
combinations, and all of this is highly chipset-dependent |-.
 
  * Be able to map abstract QoS requirement types to various combinations of
  the mapped resources, in a sufficiently generic manner that there's a
  _point_ to using one single API for this instead of just hardcoding the
  target knowledge into the app or a specific library (e.g.
  'libNvidiaRivaTNT2AGP32MB' or somesuch).
 
 Hmm. I don't know whether that is of *any* relevance. But I'm studying the
 CORBA architecture, especially its historical evolution. CORBA is a middleware
 architecture that provides a set of 'services', encapsulating all the nasty details
 of OS dependence, transport protocols, etc.
 The more CORBA evolves, the more it becomes clear that users might want to
 explicitly control low-level features, such as messaging strategies,
 concurrency strategies, etc.
 Therefore, more and more specifications are being added to CORBA which allow
 controlling these features in terms of 'policies', 'interceptors' (some
 sophisticated form of callbacks), etc.

CORBA is also slow - WAY too slow for a system layer such as GGI.
We are avoiding C++ altogether because of performance issues, so CORBA
seems to be out |-.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa updates

2000-11-02 Thread Niklas Höglund

On Tue, Oct 31, 2000 at 01:59:37PM -0800, Jon M. Taylor wrote:
 On Mon, 30 Oct 2000, beef wrote:
 
  On Sat, 28 Oct 2000, Jon M. Taylor wrote:
  It kind of works, but flickers horribly on the fbdev.
  
  what/where _could_ this doublebuffer problem be?
 
   So, I did a QuickHack(tm) to work around the problem - I pointed
 both buffers to the ggi_visual |-.  This let me render to either the
 front or back buffer, mapped to either hardware or software
 front/backbuffers, with or without hardware acceleration for both drawing
 triangles and the page flips.  As you have seen it also causes horrible
 flickering.  But it "worked" and at the time that was all I was interested
 in.  The hack was never meant to be more than a stopgap until I figured
 out how to do it all properly.  Unfortunately, there wasn't much in the
 way of buffer management API cut-ins in Mesa at the time, so it turned out
 to be more work than I had anticipated, and a few weeks later my Savage4
 driver project got canned and I stopped working on GGIMesa except for the
 occasional build fixes to keep up with the changing Mesa internals.

At that time I found that a main loop looking like this does a sort of
proper double-buffering using GGIMesa. Note that the SetMode call sets the
virtual width to twice the physical width.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <ggi/ggi.h>
/* plus the GL and GGIMesa headers of the Mesa tree in use */

int main(int argc, char *argv[])
{
  int wid=800, hei=600;
  ggi_visual_t vis;

  if(ggiInit()) {
    fprintf(stderr, "Can't initialize ggi.\n");
    return EXIT_FAILURE;
  }
  vis = ggiOpen(NULL);
  if(!vis) {
    fprintf(stderr, "Can't open default ggi target.\n");
    ggiExit();
    return EXIT_FAILURE;
  }
  ggiSetFlags(vis, GGIFLAG_ASYNC);  /* only once the open has succeeded */
  if(ggiSetGraphMode(vis, wid, hei, 2*wid, hei, 0) < 0) {
    fprintf(stderr, "Can't set mode on ggi visual.\n");
    ggiClose(vis);
    ggiExit();
    return EXIT_FAILURE;
  }
  GGIMesaContext ctx = GGIMesaCreateContext();
  GGIMesaSetVisual(ctx, vis, true, false);
  GGIMesaMakeCurrent(ctx);

  Initialize();

  for(;;) {
    static bool first=true;
    draw();
    glFlush();
    glFinish();
    ggiFlush(vis);
    ggiSetOrigin(vis, first ? 0 : wid, 0);
    reshape(first ? wid : 0, 0, wid, hei);
    glClear(GL_DEPTH_BUFFER_BIT);
    ggiDrawBox(vis, first ? wid : 0, 0, wid, hei);
    first=!first;
  }

  return EXIT_SUCCESS;
}


The reshape call takes four parameters (x,y,width,height), and sets the GL viewport
to draw in that area only. It can look like this:

static void reshape(int x, int y, int width, int height)
{
  GLfloat h = (GLfloat) height / (GLfloat) width;

  glViewport((GLint) x, (GLint) y, (GLint) width, (GLint) height);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  glFrustum(-1.0, 1.0, -h, h, 5.0, 60.0);
  glTranslatef(0.0, 0.0, -7.0);
  glMatrixMode(GL_MODELVIEW);
}

This can still flicker a bit, as the ggiSetOrigin() call isn't synchronized with the
physical display rate. This synchronization needs support from the (fb|KGI)con driver.
(It wasn't synchronized at the time I made this; maybe it is now?)


I think something like this should be added to GGIMesa. Let the application set up
the display (using GGI) and tell GGIMesa to draw into an area of a frame. Let GGI
deal with double-buffering. All GGIMesa needs to do is allow changing which frame
to draw into, and what area of it.

-- 
   Niklas




Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Stefan Seefeld

This brings up another interesting point:

what is the support for offscreen video memory allocation ?
I'm not sure I use the correct terminology, so here is what
I have in mind:

There is often a need to double buffer content in some form,
and map (blit) it into the screen at specific times. Of course,
the way to do that with GGI is to allocate a set of (memory) 
visuals and work with these.
So, what memory are the visuals allocated from ? Assuming that
they are allocated from video memory (framebuffer ?), I'd suggest
thinking about a QoS (Quality of Service) issue: given that video
memory is limited, some visuals would need to be allocated on the regular
heap.
I might know when allocating visuals (drawing buffers) that some are
updated more frequently than others, i.e. they would profit much more
from being close to the graphic card. Is there (or could there be) any way 
to expose some policy issues like these through an API for drawable memory
management ?

You will notice that this is an issue which I brought up a couple of
months ago already: I'm thinking of a 'Backing Store' for Berlin. For
example, for video-intensive graphics I'd like to make backups of
the scene graph in front of and behind the graphic with the high frame rate,
such that I then don't need to traverse the scene graph on each redraw,
but rather map the three layers (back, animated graphic, front) into the
screen to keep it consistent with the scene graph (for example if the
exposed region of the animation isn't regular (rectangular), or if the
layers are translucent, such that I need to blend them together rather
than just blitting them in).

Regards, Stefan

___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...ich hab' noch einen Koffer in Berlin...




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Marcus Sundberg

Stefan Seefeld [EMAIL PROTECTED] writes:

 This brings up another interesting point:
 
 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 There is often a need to double buffer content in some form,
 and map (blit) it into the screen at specific times. Of course,
 the way to do that with GGI is to allocate a set of (memory) 
 visuals and work with these.

It is *NOT* the way, and will never be!
The correct way is to use the not-yet-written blitting extension,
so you can get hw accelerated blits when supported.

Until that has been written, you should first try to set a mode
with a virtual Y-resolution higher than the physical one and use the
offscreen area for caching images, with ggiCopyBox() for blitting.
Only if that fails should you resort to using a memory visual and
crossblit.
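
A sketch of that recipe (my own boilerplate with made-up sizes; error
handling omitted, and ggiInit() assumed to have been called):

#include <ggi/ggi.h>

void blit_demo(void)
{
  ggi_visual_t vis = ggiOpen(NULL);

  /* virtual height = 2 x physical; the lower half is an offscreen cache */
  if (ggiSetGraphMode(vis, 640, 480, 640, 960, GT_AUTO) == 0) {
    /* ... render the cached image at (0, 480) ... */
    ggiCopyBox(vis, 0, 480, 640, 480, 0, 0);  /* blit it onscreen */
  } else {
    /* fallback: render into a memory visual and crossblit */
    ggi_visual_t mem = ggiOpen("display-memory", NULL);
    ggiSetGraphMode(mem, 640, 480, 640, 480, GT_AUTO);
    /* ... render into mem ... */
    ggiCrossBlit(mem, 0, 0, 640, 480, vis, 0, 0);
  }
}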

 So, what memory are the visuals allocated from ? Assuming that
 they are allocated from video memory (framebuffer ?),

Your assumption is wrong, from targets.txt:

memory-target
=

Description
+++

Emulates a linear framebuffer in main memory. This memory area can be
a shared memory segment, an area specified by the application, or be
malloc()ed by the memory-target itself.

 I'd suggest
 thinking about a QoS (Quality of Service) issue: given that video
 memory is limited, some visuals would need to be allocated on the regular
 heap.
 I might know when allocating visuals (drawing buffers) that some are
 updated more frequently than others, i.e. they would profit much more
 from being close to the graphic card. Is there (or could there be) any way 
 to expose some policy issues like these through an API for drawable memory
 management ?

The idea is to implement simple offscreen memory requesting in
LibGGI, and to let the blitting extension have all the intelligence.
The blitting extension will have some sort of priority based API
for allocating areas of either offscreen video memory or RAM, and
also moving areas between these two types of memory. Something along the
lines of http://www.xfree86.org/4.0.1/DESIGN12.html

 You will notice that this is an issue which I brought up a couple of
 months ago already: I'm thinking of a 'Backing Store' for berlin, i.e.
 for example for video intensive graphics, I'd like to make backups of
 the scene graph in front and behind the graphic with the high frame rate,
 such that I then don't need to traverse the scene graph on each redraw,
 but rather map the three layers (back, animated graphic, front) into the
 screen to keep it consistent with the scene graph (for example if the
 exposed region of the animation isn't regular (rectangular), or if the
 layers are translucent, such that I need to blend them together, rather
 than just blitting them in.

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan
 Royal Institute of Technology |   Phone: +46 707 452062
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Andreas Beck

 It is *NOT* the way, and will never be!
 The correct way is to use the not-yet-written blitting extension,
 so you can get hw accelerated blits when supported.

Umm - good point ... Marcus: We should talk about the region management once
again ... and finally implement it. I have stubs for blit functions from my
Libbse experimental thingy ...

 Until that has been written you should first try to set a mode
 with a virtual Y-resolution higher than the physical and use the
 offscreen area for caching images, and ggiCopyBox() for blitting.
 Only if that fails should you resort to using a memory visual and
 crossblit.

Yes. This is more or less what said extension will then do internally.

  So, what memory are the visuals allocated from ? Assuming that
  they are allocated from video memory (framebuffer ?),

 Your assumption is wrong, from targets.txt:

Not totally ... though only in a nonobvious way, so I think I should
mention it:

 Emulates a linear framebuffer in main memory. This memory area can be
 a shared memory segment, an area specified by the application, or be
 malloc()ed by the memory-target itself.

If you use mmap together with the option "an area specified by the
application", you can place a memvisual into vidmem.

 The idea is to implement simple offscreen memory requesting in
 LibGGI, and to let the blitting extension have all the intelligence.
 The blitting extension will have some sort of priority based API
 for allocating areas of either offscreen video memory or RAM, and
 also moving areas between these two types of memory. Something along the
 lines of http://www.xfree86.org/4.0.1/DESIGN12.html

Hmm - got to read that ...

CU, Andy

-- 
= Andreas Beck|  Email :  [EMAIL PROTECTED]=




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Lee Brown

On Thu, 02 Nov 2000, Marcus Sundberg wrote:
 Stefan Seefeld [EMAIL PROTECTED] writes:

 It is *NOT* the way, and will never be!
 The correct way is to use the not-yet-written blitting extension,
 so you can get hw accelerated blits when supported.

What would the extension API look like?

Thanks ahead,
-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Lee Brown

Scratch that last question.  I see the X documentation.


-- 
Lee Brown Jr.
[EMAIL PROTECTED]




Re: GGIMesa updates

2000-11-02 Thread Jon M. Taylor

On Thu, 2 Nov 2000, Niklas Höglund wrote:

 On Tue, Oct 31, 2000 at 01:59:37PM -0800, Jon M. Taylor wrote:
  On Mon, 30 Oct 2000, beef wrote:
  
   On Sat, 28 Oct 2000, Jon M. Taylor wrote:
   It kind of works, but flickers horribly on the fbdev.
   
   what/where _could_ this doublebuffer problem be?
  
  So, I did a QuickHack(tm) to work around the problem - I pointed
  both buffers to the ggi_visual |-.  This let me render to either the
  front or back buffer, mapped to either hardware or software
  front/backbuffers, with or without hardware acceleration for both drawing
  triangles and the page flips.  As you have seen it also causes horrible
  flickering.  But it "worked" and at the time that was all I was interested
  in.  The hack was never meant to be more than a stopgap until I figured
  out how to do it all properly.  Unfortunately, there wasn't much in the
  way of buffer management API cut-ins in Mesa at the time, so it turned out
  to be more work than I had anticipated, and a few weeks later my Savage4
  driver project got canned and I stopped working on GGIMesa except for the
  occasional build fixes to keep up with the changing Mesa internals.
 
 At that time I found that a main loop looking like this does a sort of
 proper double-buffering using GGIMesa. Note that the SetMode call sets the
 virtual width to twice the physical width.

[snip]

Thanks for the input, but I'm afraid that the "pageflip using
SetOrigin" hack won't work on all targets.  You _can_ allocate a
DirectBuffer or a memory_visual and use that as a backbuffer on every
target.

Jon
 
---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Jon M. Taylor

On Thu, 2 Nov 2000, Stefan Seefeld wrote:

 This brings up another interesting point:
 
 what is the support for offscreen video memory allocation ?
 I'm not sure I use the correct terminology, so here is what
 I have in mind:
 
 There is often a need to double buffer content in some form,
 and map (blit) it into the screen at specific times. Of course,
 the way to do that with GGI is to allocate a set of (memory) 
 visuals and work with these.

The _unaccelerated_ way.

 So, what memory are the visuals allocated from ? 

System memory.

 Assuming that
 they are allocated from video memory (framebuffer ?), 

They aren't.

 I'd suggest
 thinking about a QoS (Quality of Service) issue: given that video
 memory is limited, some visuals would need to be allocated on the regular
 heap.

All memory_visuals already are.

 I might know when allocating visuals (drawing buffers) that some are
 updated more frequently than others, i.e. they would profit much more
 from being close to the graphic card. Is there (or could there be) any way 
 to expose some policy issues like these through an API for drawable memory
 management ?

Sure.  This is not such an easy task, though.  Such an API
(libDirectBuffer?) would need to be able to:

* Establish a set of targets which would know about all the different
types and configurations of memory buffers available for each target

* Establish global resource pools for each (e.g. dividing up a 32MB
AGP-mapped video memory aperture into front, back, z, stencil, texture,
etc buffers)

* Know what all the tradeoffs between various resource allocation requests
are (i.e. if you double your framebuffer size, you cannot have double
buffering, or you can choose to give up your z-buffer instead)

* Be able to map abstract QoS requirement types to various combinations of
the mapped resources, in a sufficiently generic manner that there's a
_point_ to using one single API for this instead of just hardcoding the
target knowledge into the app or a specific library (e.g.
'libNvidiaRivaTNT2AGP32MB' or somesuch).

Ideas are welcome.
 
 You will notice that this is an issue which I brought up a couple of
 months ago already: I'm thinking of a 'Backing Store' for berlin, i.e.
 for example for video intensive graphics, I'd like to make backups of
 the scene graph in front and behind the graphic with the high frame rate,
 such that I then don't need to traverse the scene graph on each redraw,
 but rather map the three layers (back, animated graphic, front) into the
 screen to keep it consistent with the scene graph (for example if the
 exposed region of the animation isn't regular (rectangular), or if the
 layers are translucent, such that I need to blend them together, rather
 than just blitting them in.

Think about the API and target complexity that will be necessary
to intelligently ask for what you just described.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: Video memory (Was: Re: GGIMesa updates)

2000-11-02 Thread Stefan Seefeld

"Jon M. Taylor" wrote:

  I might know when allocating visuals (drawing buffers) that some are
  updated more frequently than others, i.e. they would profit much more
  from being close to the graphic card. Is there (or could there be) any way
  to expose some policy issues like these through an API for drawable memory
  management ?
 
 Sure.  This is not such an easy task, though.  Such an API
 (libDirectBuffer?) would need to be able to:
 
 * Establish a set of targets which would know about all the different
 types and configurations of memory buffers available for each target

why ? To be able to implement crossblits ? Can't you use a simple adapter
interface (some form of 'marshalling') ? I mean, the interesting case is
blitting from video memory to video memory, and there I assume that all
parameters (alignment etc.) are identical.

 * Establish global resource pools for each (e.g. dividing up a 32MB
 AGP-mapped video memory aperture into front, back, z, stencil, texture,
 etc buffers)

does this division need to be static ? Can't you have a single manager
instance which keeps track of which memory is allocated for which purpose ?
That would help in the implementation of QoS policies...

 * Know what all the tradeoffs between various resource allocation requests
 are (i.e. if you double your framebuffer size, you cannot have double
 buffering, or you can choose to give up your z-buffer instead)

Right. Can't that be a simple table ? (which would indicate how much memory
the different buffer types need, etc.)

 * Be able to map abstract QoS requirement types to various combinations of
 the mapped resources, in a sufficiently generic manner that there's a
 _point_ to using one single API for this instead of just hardcoding the
 target knowledge into the app or a specific library (e.g.
 'libNvidiaRivaTNT2AGP32MB' or somesuch).

Hmm. I don't know whether that is of *any* relevance. But I'm studying the
CORBA architecture, especially its historical evolution. CORBA is a middleware
architecture that provides a set of 'services', encapsulating all the nasty details
of OS dependence, transport protocols, etc.
The more CORBA evolves, the more it becomes clear that users might want to explicitly
control low-level features, such as messaging strategies, concurrency strategies, etc.
Therefore, more and more specifications are being added to CORBA which allow
controlling these features in terms of 'policies', 'interceptors' (some sophisticated
form of callbacks), etc.
May be it would be interesting for you to look into it, or to let us discuss this,
as I think that some general architectural principles would apply equally well
for GGI, where you try to encapsulate the OS, and video hardware, from the user,
while still trying to provide a maximum of flexibility and efficiency. In other
words, some knowledge which is needed to optimize efficiently, can't be known
while you implement GGI, so you need some cooperation from the user. The question
is how to interface this.
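
Translated into C terms, the policy idea might reduce to user-installable
callbacks, roughly as sketched below; the names are hypothetical and this
mirrors no existing GGI or CORBA interface:

#include <stddef.h>

/* Hypothetical sketch: instead of hardcoding placement decisions,
 * the library consults a user-installed callback ("policy") before
 * placing each buffer -- the CORBA-style inversion described above. */
enum placement { PLACE_VIDEO, PLACE_SYSTEM, PLACE_DONT_CARE };

typedef enum placement (*placement_policy)(int buffer_kind,
                                           size_t len, void *user);

struct alloc_policies {
    placement_policy place;  /* consulted on every allocation */
    void            *user;   /* opaque data for the callback  */
};

/* Default policy: let the library decide. */
static enum placement place_dont_care(int kind, size_t len, void *user)
{
    (void)kind; (void)len; (void)user;
    return PLACE_DONT_CARE;
}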

Best regards,   Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...I still have a suitcase in Berlin...




Re: [Berlin-design] GGIMesa updates

2000-10-31 Thread soyt

Quoting "Jon M. Taylor" [EMAIL PROTECTED]:

   Yep, that's what I'm seeing as well.  I haven't been able to track
 down the problem yet.  For now, I am hacking around the problem by
 manually editing Mesa/src/GGI/libMesaGGI.la and changing the line that
 reads:
 
 dependency_libs=' /usr/local/lib/libggi.la -lgii -lgg'
 
 to:
 
 dependency_libs=' -lggi -lgii -lgg'

I had a similar problem some time ago with the .la files.
The problem was: on 'make install' the lib paths
are not correctly set in lib*.la. They still point to the
libs in the build tree:

from /usr/local/lib/libgii.la:
-
# Libraries that this one depends upon.
dependency_libs=' ../gg/.libs/libgg.so'
--

I don't know the actual reason, but I got it working by manually
changing the dependencies in *.la.

Hope it helps.
Regards.




Re: GGIMesa updates

2000-10-31 Thread Jon M. Taylor

On Mon, 30 Oct 2000, beef wrote:

 On Sat, 28 Oct 2000, Jon M. Taylor wrote:
 
  I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
 _should_ all build just fine again, but I have weird libtool and autoconf
 incompatibilities popping up which are preventing the final library
 install so I can't test it over here.  If someone else could test it for
 me, that would be cool.  Brian, I still have to merge those config file
 patches you sent me - some of that stuff isn't strictly correct.
 
 Jon
 
 I have Mesa-HEAD-20001029, ggi-devel-20001028:
 see attachment for the bits I changed to build.
 
 It kind of works, but flickers horribly on the fbdev target.

Argh!  Why are you and Stefan getting this to work, when I get
segfaults???

 A 3rd-party demo complained about 'too few' stencil bits. Are there any?

Stencil buffers are not supported in GGIMesa at this time.  I'll
look into it.
 
 what/where _could_ this doublebuffer problem be?
 
 -- 
 #berlin
 stefan bvc: I had hoped Jon would fix the double buffer problem as well...
 stefan bvc: mesa / ggi on /dev/fb flickers awfully
 stefan bvc: unfortunately, it appears Jon is the only person knowing
  MesaGGI. There is nobody else who can fix that. :(


OK, here's the whole story in detail.  Way back in mid-1999, I was
working at Creative Labs on an accelerated KGIcon device driver for the S3
Savage4 chipset (this project died an ugly death when S3 bought STB and
became a competitor...).  This meant that I needed to be able to handle
both software and hardware acceleration in the GGIMesa targets, including
soft/hard front and backbuffer mappings and page flipping.  The
doublebuffer implementation in GGIMesa at the time was based on
malloc()ing a separate backbuffer, drawing into that, and blitting it to
the frontbuffer (the ggi_visual) every flush().  This was not compatible
with the acceleration cut-in architecture I had in mind at the time - no
way to hook a separate buffer-mapping function and no possibility of using
hardware acceleration.
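
In outline, that old software scheme was just the following - a sketch
with hypothetical struct and function names, not the actual GGIMesa code,
though ggiPutBox() is the real LibGGI blit call:

#include <stdlib.h>
#include <ggi/ggi.h>

/* Software double buffering: render into a malloc()ed shadow
 * buffer, then copy the whole thing to the visual on each flush. */
struct swdb {
    ggi_visual_t vis;
    void *back;             /* malloc()ed backbuffer */
    int   width, height;    /* mode dimensions       */
};

int swdb_init(struct swdb *db, ggi_visual_t vis,
              int width, int height, int bytes_per_pixel)
{
    db->vis    = vis;
    db->width  = width;
    db->height = height;
    db->back   = malloc((size_t)width * height * bytes_per_pixel);
    return db->back ? 0 : -1;
}

void swdb_flush(struct swdb *db)
{
    /* Blit the backbuffer to the frontbuffer (the visual). */
    ggiPutBox(db->vis, 0, 0, db->width, db->height, db->back);
}

Nothing in this scheme gives a target a chance to substitute hardware
buffers or page flips, which is exactly the problem described next.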

So, I did a QuickHack(tm) to work around the problem - I pointed
both buffers at the ggi_visual.  This let me render to either the
front or back buffer, mapped to either hardware or software
front/backbuffers, with or without hardware acceleration for both drawing
triangles and the page flips.  As you have seen it also causes horrible
flickering.  But it "worked" and at the time that was all I was interested
in.  The hack was never meant to be more than a stopgap until I figured
out how to do it all properly.  Unfortunately, there wasn't much in the
way of buffer management API cut-ins in Mesa at the time, so it turned out
to be more work than I had anticipated, and a few weeks later my Savage4
driver project got canned and I stopped working on GGIMesa except for the
occasional build fixes to keep up with the changing Mesa internals.

I never implemented a better buffer-management scheme, because I
was (and still am) unsure as to the best way to provide target hooks for
buffer-management and page flipping functions in the GGIMesa targets.  I'm
going to try again - I'm a lot better at writing GGI extensions after my
work on LibXMI earlier this year and Mesa's internals have gotten a LOT
better recently.  But in the meantime, I'm going to revert to the
software-only double buffering scheme I threw away last year so people can
run on fbdev without horrible flickering.  Note that it will still flicker
somewhat, because fbdev has no way to poll for VSYNC and thus the GGI
fbdev target has no way to synchronize ggiFlush() calls with the vertical
retrace.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: [Berlin-design] GGIMesa updates

2000-10-30 Thread Stefan Seefeld

"Jon M. Taylor" wrote:
 
 I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
 _should_ all build just fine again, but I have weird libtool and autoconf
 incompatibilities popping up which are preventing the final library
 install so I can't test it over here.  If someone else could test it for
 me, that would be cool.  Brian, I still have to merge those config file
 patches you sent me - some of that stuff isn't strictly correct.

Trying to build Mesa with GGI support, I get the following linking error.
Since I've seen a similar report on the GGI mailing list some months ago, I'm
Cc'ing it to the list; maybe it is an obvious problem to some of you...

The problem seems to be related to libtool, as the linker complains in the
final link stage about:

/usr/local/lib/libggi.la: file not recognized: File format not recognized

(the file in question is indeed a libtool-generated shell script).

I hope this is an easy-to-fix configuration problem, as I'm very eager to
see GGIMesa in action on my /dev/fb :)

Best regards,   Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...I still have a suitcase in Berlin...




GGIMesa updates

2000-10-28 Thread Jon M. Taylor

I just committed a bunch of GGIMesa fixes to the Mesa CVS tree. It
_should_ all build just fine again, but I have weird libtool and autoconf
incompatibilities popping up which are preventing the final library
install so I can't test it over here.  If someone else could test it for
me, that would be cool.  Brian, I still have to merge those config file
patches you sent me - some of that stuff isn't strictly correct.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed





Re: GGIMesa and the new ExtensionLoadDL

2000-05-28 Thread Andreas Beck

Hi Stefan,

 as you know, berlin uses MesaGGI as a rendering backend.
 With the latest change in the GGI internal API, however,
 MesaGGI doesn't compile any more. What can be done about
 that?

I've downloaded the Mesa sources and fixed the main errors that prevented
compilation. However, I still have to fix the rendering libs.

I wanted to run some demos and wait for them to error out to make sure I
find all points to be fixed without looking through all the files, but I
can't get any of them to link. Libtool complains about libraries called `'
not being found ... strange. I did some hotfixes on that, but didn't get
it working. 

Is there a "recommended way" to install MesaGGI ? 

CU, Andy

-- 
= Andreas Beck|  Email :  [EMAIL PROTECTED] =




Re: GGIMesa and the new ExtensionLoadDL

2000-05-28 Thread Jon M. Taylor

On Sun, 28 May 2000, Andreas Beck wrote:

 Hi Stefan,
 
  as you know, berlin uses MesaGGI as a rendering backend.
  With the latest change in the GGI internal API, however,
  MesaGGI doesn't compile any more. What can be done about
  that?
 
 I've downloaded the Mesa sources and fixed the main errors that prevented
 compilation. However, I still have to fix the rendering libs.
 
 I wanted to run some demos and wait for them to error out to make sure I
 find all points to be fixed without looking through all the files, but I
 can't get any of them to link. Libtool complains about libraries called `'
 not being found ... strange. I did some hotfixes on that, but didn't get
 it working. 
 
 Is there a "recommended way" to install MesaGGI ? 

# cd Mesa
# ./bootstrap
# ./configure
# make 
# make install
# cd ggi/ggiglut
# make
# make install

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa and the new ExtensionLoadDL

2000-05-27 Thread Andreas Beck

[Stefan, please excuse if you get this twice - I hope it gets through to
the mailing list, but I'm not quite sure.]

 as you know, berlin uses MesaGGI as a rendering backend.
 With the latest change in the GGI internal API, however,
 MesaGGI doesn't compile any more. What can be done about
 that?

I'll have a look. I just spent about an hour downloading Mesa
("cvs -z5 -d:pserver:[EMAIL PROTECTED]:/cvs/mesa3d checkout Mesa"
does the trick, as someone asked where he'd get it) and I'll have a look at
it when I'm awake again.

CU, Andy


-- 
= Andreas Beck|  Email :  [EMAIL PROTECTED] =




Re: GGIMesa and the new ExtensionLoadDL

2000-05-27 Thread Jon M. Taylor

On Sat, 27 May 2000, Andreas Beck wrote:

 [Stefan, please excuse if you get this twice - I hope it gets through to
 the mailing list, but I'm not quite sure.]
 
  as you know, berlin uses MesaGGI as a rendering backend.
  With the latest change in the GGI internal API, however,
  MesaGGI doesn't compile any more. What can be done about
  that?
 
 I'll have a look. I just spent about an hour downloading Mesa
 ("cvs -z5 -d:pserver:[EMAIL PROTECTED]:/cvs/mesa3d checkout Mesa"
 does the trick, as someone asked where he'd get it) and I'll have a look at
 it when I'm awake again.

If you manage to get it to build, let me know.  I haven't been
able to get Mesa to link for a few weeks now.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed




Re: GGIMesa and the new ExtensionLoadDL

2000-05-27 Thread Jay

This cvs command:
cvs -z5 -d:pserver:[EMAIL PROTECTED]:/cvs/mesa3d checkout Mesa

says:
cvs checkout: cannot find password
cvs [checkout aborted]: use "cvs login" to log in first

on my system. Do I have an old version of cvs? "rpm -q cvs" says "cvs-1.10.7-7"
If not, what do I need to do?

(Sorry I haven't used cvs, just rcs)

"Jon M. Taylor" wrote:

 On Sat, 27 May 2000, Andreas Beck wrote:

  [Stefan, please excuse if you get this twice - I hope it gets through to
  the mailing list, but I'm not quite sure.]
 
   as you know, berlin uses MesaGGI as a rendering backend.
   With the latest change in the GGI internal API, however,
   MesaGGI doesn't compile any more. What can be done about
   that?
 
  I'll have a look. I just spent about an hour downloading Mesa
  ("cvs -z5 -d:pserver:[EMAIL PROTECTED]:/cvs/mesa3d checkout Mesa"
  does the trick, as someone asked where he'd get it) and I'll have a look at
  it when I'm awake again.

 If you manage to get it to build, let me know.  I haven't been
 able to get Mesa to link for a few weeks now.

 Jon

 ---
 'Cloning and the reprogramming of DNA is the first serious step in
 becoming one with God.'
 - Scientist G. Richard Seed




GGIMesa and the new ExtensionLoadDL

2000-05-26 Thread Stefan Seefeld

Hi everybody,

as you know, berlin uses MesaGGI as a rendering backend.
With the latest change in the GGI internal API, however,
MesaGGI doesn't compile any more. What can be done about
that?
Luckily for us, we have a second rendering backend, based
on libart. However, there are still some features missing
(textures among others), so I would really like to be able
to run with MesaGGI. It would be a shame if all the work
(on MesaGGI as well as our GLDrawingKit) went into oblivion!

Any help is as always highly appreciated,

Stefan
___  
  
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]

___

  ...I still have a suitcase in Berlin...




Re: GGIMesa and the new ExtensionLoadDL

2000-05-26 Thread Jay

Where is the MesaGGI source?

Stefan Seefeld wrote:

 Hi everybody,

 as you know, berlin uses MesaGGI as a rendering backend.
 With the latest change in the GGI internal API, however,
 MesaGGI doesn't compile any more. What can be done about
 that?
 Luckily for us, we have a second rendering backend, based
 on libart. However, there are still some features missing
 (textures among others), so I would really like to be able
 to run with MesaGGI. It would be a shame if all the work
 (on MesaGGI as well as our GLDrawingKit) went into oblivion!

 Any help is as always highly appreciated,

 Stefan
 ___

 Stefan Seefeld
 Departement de Physique
 Universite de Montreal
 email: [EMAIL PROTECTED]

 ___

   ...I still have a suitcase in Berlin...

--

-Jay






Re: ggimesa

2000-04-26 Thread Murphy Chen

 I've got glide/2d running under FB! :)
  [and yes ggi/glide target seems to work fine :]

What is glide?

I'm working on mesa/ggi, but after I make all in Mesa
I cannot find any demo programs built.
When I make the demo programs manually, I get many error messages
about X-related functions being used without the related libraries
being linked in. However, I don't intend to use X.

Which version of ggi do you use?
Do you use kgi?

Murphy





Re: ggimesa

2000-04-26 Thread teunis

On Wed, 26 Apr 2000, Murphy Chen wrote:

  I've got glide/2d running under FB! :)
   [and yes ggi/glide target seems to work fine :]
 
   What is glide?

3D acceleration library for 3Dfx graphics cards.  The glide version I'm
running is specifically for the 3Dfx/Banshee and 3Dfx/3

   I'm working on mesa/ggi, but after I make all in Mesa
   I cannot find any demo programs built.

They are in <Mesa root>/ggi/demos (this requires ggi/ggiglut);
'make gears' should work.

   When I make the demo programs manually, I get many error messages
   about X-related functions being used without the related libraries
   being linked in. However, I don't intend to use X.

The demos are made to require libglut, which is designed to work with X.
Now ggiglut is a replacement that works for some programs - but a poor one.

   Which version of ggi do you use?

current CVS.  I keep up a lot.

   Do you use kgi?

No.  KGI doesn't support the 3Dfx/Banshee, and it isn't even potentially
3D-accelerated for my hardware in any event, so it's uninteresting to me
at this time.
(Now if I could convince Mesa that my glide didn't require X support, I'd
be happier.)

G'day, eh? :)
- Teunis




ggimesa

2000-04-24 Thread teunis

Heya!
I've got glide/2d running under FB! :)
[and yes ggi/glide target seems to work fine :]

Anyway, I'm just wondering if anyone here's working with ggimesa and could
give a few pointers (in private email :).

Thanks :)
- Teunis





Minor GGIMesa updates

2000-01-06 Thread Jon M. Taylor

If any of you have been playing with the newest GGIMesa CVS
sources, you probably noticed that they don't build due to internal Mesa
changes.  Well, I just fixed those problems and GGIMesa CVS now builds
(against the latest GGI CVS) and runs again.  The only functional change
is a revamped debugging-print system, which matches the one used by LibGIC
and is quite a bit cleaner than the older stuff.  Also, the genkgi target
is now always disabled in configure.in, since it isn't being used yet and
was causing DL loading bugs for some people.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
- Scientist G. Richard Seed



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-22 Thread becka


 What makes a card primary? Slot position on the motherboard?

Usually yes. Some BIOSes have at least the option to scan AGP first or PCI
first.

CU, Andy

-- 
Andreas Beck  |  Email :  [EMAIL PROTECTED]



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-21 Thread Brian S. Julin

On Wed, 20 Oct 1999, James Simmons wrote:
 From what Jon said it was my mistake. I will fix this. By the way, I have
 gotten in touch with ATI. They have gone to the extent of giving out all
 their 3D docs!!! They even provide example Linux driver code. I must say
 ATI has really turned around and even responds to inquiries from the
 Linux community.

Are they giving out any more docs than they used to on the older
chipsets, or just RAGE and newer?

--
Brian



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-21 Thread Marcus Sundberg

James Simmons wrote:
 Note there is a Voodoo framebuffer device for 2.3.x kernels.

And also note that it won't work with Voodoo I/II cards. ;-)

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan/
 Royal Institute of Technology |   Phone: +46 707 295404
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-21 Thread Jim Meier

On Wed, 20 Oct 1999, Marcus Sundberg wrote:
 
 Voodoo I/II cards can coexist with anything, and Matrox cards can also
 coexist with anything as long as you have the Matrox as a secondary
 card and whatever you want to use in addition as the primary card.
 If you want to have the Matrox as the primary card the other card
 and its driver must support MMIO-only operation.
 
 //Marcus

What makes a card primary? Slot position on the motherboard?

-Jim Meier



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-20 Thread Marcus Sundberg

Ketil Froyn wrote:
 
 On Wed, 20 Oct 1999, James Simmons wrote:
 
  From what Jon said it was my mistake. I will fix this. By the way, I have
  gotten in touch with ATI. They have gone to the extent of giving out all
  their 3D docs!!! They even provide example Linux driver code. I must say
  ATI has really turned around and even responds to inquiries from the
  Linux community. Now the important thing: Jon, I need you to help me learn
  your GGI Mesa stuff. It would be really nice if MesaGGI supported full
  acceleration from more than one card. Also, I'm working on building my
  gfx infrastructure for Linux.
 
 Hey, does that mean my ATI 3D Rage Pro will have native KGI support
 soon? In that case, will it be able to coexist with my Matrox Millennium I
 and my Voodoo II? If so, I can't wait to start running triple-headed X
 here! Not to mention two screens with heavy 3D effects on each! :)
 
 I have kind of asked about the MGA/ATI coexistence thing before, but
 nobody answered, and I wasn't able to get it working (though I gave up
 before trying all options).

Voodoo I/II cards can coexist with anything, and Matrox cards can also
coexist with anything as long as you have the Matrox as a secondary
card and whatever you want to use in addition as the primary card.
If you want to have the Matrox as the primary card the other card
and its driver must support MMIO-only operation.

//Marcus
-- 
---+
Marcus Sundberg| http://www.stacken.kth.se/~mackan/
 Royal Institute of Technology |   Phone: +46 707 295404
   Stockholm, Sweden   |   E-Mail: [EMAIL PROTECTED]



Re: Mesa-GGI was Re: ggimesa+multi problem

1999-10-20 Thread James Simmons


 Hey, does that mean my ATI 3D Rage Pro will have native KGI support
 soon? In that case, will it be able to coexist with my Matrox Millennium I
 and my Voodoo II? If so, I can't wait to start running triple-headed X
 here! Not to mention two screens with heavy 3D effects on each! :)

Their will be a native fbdev driver for 3D Rage Pro as well as Rage 128.
As for the 3D stuff I'm using the ATI card and Matrox g200 as a test bed
for my /dev/gfx driver. Once completed I will be porting this to SGI 
workstations. Unlike DRI method it has internal locking and
actually does real direct rendering. Well it has to because it will
be ported to SGI machines. Their will be a GGI target written for this.
 
 I have kind of asked about the MGA/ATI coexistence thing before, but
 nobody answered, and I wasn't able to get it working (though I gave up before
 trying all options).

For native fbdev drivers, it's required that a card which can do MMIO be in
MMIO mode. So yes, all the cards can exist together. See Marcus's reply for
details. Note there is a Voodoo framebuffer device for 2.3.x kernels.



Re: ggimesa+multi problem

1999-10-15 Thread Justin Cormack

 
 
  I found the problem - ggiglut never calls ggiClose(), so there is no
  clean termination (it leaves the fb in an odd state too). I fixed this by
  adding a close function to glut - there ought to be one anyway; I would
  count this as a glut bug.
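
The fix described above amounts to something like the following sketch;
the function name and the static visual are hypothetical, while
ggiClose() and ggiExit() are real LibGGI calls:

#include <ggi/ggi.h>

static ggi_visual_t glut_vis;   /* the visual ggiglut renders to */

/* Close the visual so the framebuffer is restored to a sane state;
 * meant to run at program exit (e.g. registered with atexit()). */
void glut_ggi_close(void)
{
    if (glut_vis) {
        ggiClose(glut_vis);
        glut_vis = NULL;
    }
    ggiExit();
}

Registering such a function with atexit() would give the clean
termination described above.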
 
 Is this with Mesa from CVS? If it is, I will fix it. I never had this
 problem, but I will look into it.
 
 

3.1beta3

Justin



Re: ggimesa+multi problem

1999-10-13 Thread Justin Cormack

 
 On Tue, Oct 12, 1999 at 10:30:30AM +0100, Justin Cormack wrote:
  I don't seem to be able to run ggiMesa on the multi target.
 
 Probably because the multi target doesn't provide a directbuffer.

Ah yes.

OK, I need to save some Mesa images to a file. As multi doesn't work,
I tried to use the tile target with one of the tiles being a file
target. However, it doesn't save the picture if I use a .ppm file
(the raw format seems to work, however - can I convert this to ppm
manually?)

Justin



Re: ggimesa+multi problem

1999-10-12 Thread Niklas Höglund

On Tue, Oct 12, 1999 at 10:30:30AM +0100, Justin Cormack wrote:
 I don't seem to be able to run ggiMesa on the multi target.

Probably because the multi target doesn't provide a directbuffer.

-- 
Niklas