RE: XAA2 namespace?
On Wed, 2004-03-03 at 03:59, Mark Vojkovich wrote:
> Even with Sync() passing the particular surface which is necessitating
> the sync, I would expect all drivers to be syncing the whole chip
> without caring what the surface was. Most hardware allows you to do
> checkpointing in the command stream so you can tell how far along the
> execution is, but a Sync can happen at any time. Are you really going
> to be checkpointing EVERY 2D operation?

That's where a driver callback to mark the end of a batch of rendering to (and from?) a surface might come in handy.

-- 
Earthling Michel Dänzer | Debian (powerpc), X and DRI developer
Libre software enthusiast | http://svcs.affero.net/rm.php?r=daenzer

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel
Re: XAA2 namespace?
Mark Vojkovich wrote:
> On Tue, 2 Mar 2004, Sottek, Matthew J wrote:
>>> It's currently global because the hardware I work on doesn't have
>>> to fall back to software very often. Bookkeeping on a per-surface
>>> basis is a simple modification and one I will add.
>>
>> This precludes using XAA2 with hardware that doesn't support
>> concurrent SW and HW access to the framebuffer, but that's OK since
>> that stuff is old and we're trying to move forward here. HW that
>> sucks can use the old XAA. It shouldn't preclude this from working.
>> You just need the call to look like sync(xaa_surface_t *surface) and
>> let old hardware sync the whole engine regardless of the input. It
>> helps those who can use it and is the same as what you have now for
>> everyone else.
>
> I don't understand your reasoning. The difference with per-surface as
> opposed to global sync state is that you don't have to sync when CPU
> rendering to a surface that has no previously unsynced GPU rendering.
> The point of this is to ALLOW concurrent CPU and GPU rendering into
> video ram except in the case where both want to render to the same
> surface. There is old hardware that allows no concurrent CPU and GPU
> rendering at all.
>
> Even with Sync() passing the particular surface which is necessitating
> the sync, I would expect all drivers to be syncing the whole chip
> without caring what the surface was. Most hardware allows you to do
> checkpointing in the command stream so you can tell how far along the
> execution is, but a Sync can happen at any time. Are you really going
> to be checkpointing EVERY 2D operation?

Not every operation, but every few operations. For example, the Radeon 3D driver has a checkpoint at the end of each DMA buffer. That's coarser grained than per-operation checkpointing, but much finer grained than having to wait for the engine to idle.
>> I still contend that it would be a benefit to know how many rects
>> associated with the same state are going to be sent (even if you send
>> those in multiple batches for array size limitations); this allows a
>> driver to batch things up as it sees fit.
>
> I don't know the amount of data coming. The old XAA (and cfb for that
> matter) allocated the pathological case: number of rects times number
> of clip rects. It doesn't know how many there are until it's done
> computing them, but it knows the upper bound. I have seen this be over
> 8 Meg! The new XAA has a preallocated scratch space (currently a
> #define for the size). When the scratch buffer is full, it flushes it
> out to the driver. If XAA is configured to run with minimal memory,
> the maximum batch size will be small.

That sounds reasonable. That's basically how the 3D drivers work.
RE: XAA2 namespace?
> Ummm... which other models are you referring to? I'm told that
> Windows does it globally.

Windows Direct Draw does per-surface Locking, which is a similar thing to what we are discussing, and yes, many drivers DO checkpoint very often. Direct Draw isn't a perfect analogy because it is often the application that wants to render to the surface with the CPU, so there are many clients running in parallel. That compounds the impact of waiting too long for your sync.

> Having per-surface syncing may mean you end up syncing more often.
> E.g. render with HW to one surface, then to another; then if you
> render with SW to both of those surfaces, two syncs happen. Doing it
> globally would have resulted in only one sync call.

The driver has to take a little responsibility for knowing when it is out of sync. A global-syncing driver would need to handle that second sync without any hardware interaction; the penalty is just the added indirect function call. This common scenario would be improved with per-surface sync:

  PutImage -> offscreen_surface1
  ...
  offscreen_surface1 -> FB
  ...
  PutImage -> offscreen_surface1

The offscreen surface cannot be written until after the blit is finished, so a sync is needed. However, on a busy system there are lots of other blits going on during the "...", so globally syncing before the 2nd PutImage is bad on two counts:

1) You waited longer than you needed to; you only needed to wait for the blit that referenced offscreen_surface1.
2) You idled the hardware while the 2nd PutImage is happening, so the graphics engine is idle instead of crunching data in parallel.

Does the possible improved concurrency outweigh the additional overhead of making an indirect call to check the sync status every time? It is hard to tell.
RE: XAA2 namespace?
>>> It's the best we can do. I'm not going to compute the clips in two
>>> passes: one to find out how many rects there end up being, and one
>>> to store the rects.
>>
>> At least you would be able to indicate the last one, which would
>> serve the same purpose. Or an optional flush call to the driver: a
>> batching driver could queue stuff up until a flush, and a flush would
>> happen after a set of operations that originated as a single complex
>> drawing operation.
>>
>>> XAA doesn't care about surface particulars. It asks the driver if it
>>> could stick this pixmap in videoram for it because the migration
>>> logic says it should go in videoram. The driver can refuse, or can
>>> accept by returning an XAASurface for it. XAA passes that surface
>>> back to you in the SetupFor function. To XAA, it's just a device
>>> independent structure. The driver has private storage in the
>>> XAASurface.
>>
>> Sounds reasonable. How does X tell the driver what the surface will
>> be used for? A RENDER surface could have different alignment or
>> tiling properties than a 2D-only surface. That information would be
>> needed at allocation time.
>
> There's no such thing as a RENDER surface. Pictures are merely
> X drawables with extra state associated with them. Any drawable can
> eventually be used as a picture. You will need to keep that in mind
> just as you do now.

This has pretty serious implications. Currently the memory manager uses rectangular memory, which presumably has pitch etc. characteristics that are usable by the stretch-blit/alpha-blend components of a chip. That makes it reasonable (although probably not ideal) to assume that any offscreen surface can be used for RENDER purposes. Moving to a surface-based infrastructure would allow a driver to more carefully choose surface parameters; always choosing the worst-case alignment, pitch, etc. characteristics seems like a problem.

This may be a RENDER problem and not just an XAA problem, but it seems like there really needs to be prior indication that a surface is being used as a RENDER source or target, so that the memory manager can make appropriate choices.
Re: XAA2 namespace?
Mark Vojkovich wrote:
> Ummm... which other models are you referring to? I'm told that
> Windows does it globally.
>
> Having per-surface syncing may mean you end up syncing more often.
> E.g. render with HW to one surface, then to another; then if you
> render with SW to both of those surfaces, two syncs happen. Doing it
> globally would have resulted in only one sync call.
>
> Unless you can truly checkpoint every rendering operation, anything
> other than global syncing is going to result in more sync calls. The
> more I think about going away from global syncing, the more this
> sounds like a bad idea.

It may result in more sync calls, but it should also result in less time spent waiting in each call. If you HW-render to surface A, then B, then need to SW-render to surface A, you don't need to wait for the HW to finish with surface B.
RE: XAA2 namespace?
On Wed, 3 Mar 2004, Sottek, Matthew J wrote:
>> Ummm... which other models are you referring to? I'm told that
>> Windows does it globally.
>
> Windows Direct Draw does per-surface Locking, which is a similar
> thing to what we are discussing, and yes, many drivers DO checkpoint
> very often.

Well, I'll add per-surface locking then. You'll get passed a surface when you are asked to sync. Drivers that don't support per-surface synchronization, which I expect to be the majority, will have to do more work to keep from doing redundant syncs. They'll essentially have to keep track of whether or not they rendered anything since the last time Sync was called.

Mark.
RE: XAA2 namespace?
On Wed, 3 Mar 2004, Sottek, Matthew J wrote:
> Your rectangle array design would only send an array of rectangles
> that come from the spans of a larger polygon? Maybe the driver can
> attempt to fix it there.

The array only contains parts of a larger single rendering request. But note that XFillRectangles is a single request: all the rectangles passed down in that request will show up in the array. A single XFillRectangle may still be multiple rectangles due to clipping, and complex primitives like filled arcs will end up as many rectangles regardless. The only time XAA will break these up into multiple Subsequent requests is when XAA's internal buffer isn't big enough to hold all the pieces.

The driver can buffer these requests, even across different surfaces, for as long as it likes, provided it sends them to the hardware before the BlockHandler exits. Up until that point you can send them to the hardware based on whatever criteria you like: after a certain amount of data has been placed in the DMA buffer, after an approximate number of pixels has been asked to be filled, etc. The nv driver buffers all requests until the block handler, or until a primitive of a certain size is encountered, the goal being to buffer as much as possible but to limit latency by limiting the buffering to small primitives.

Mark.
Halted progress on XFree86 ffb driver for Solaris/SPARC
I think I can conclude, now, that there is insufficient documentation available for the Solaris ffb kernel driver to make XFree86 work with it. After some more searching, I found a couple of newsgroup postings from the 1997/1998 timeframe of other people trying to get X11R6 to work with their Creator 3D cards, and the replies from Sun engineers were essentially "you need to write your own kernel driver." In other words, they are keeping the implementation of their kernel driver to themselves. This is fair enough, but custom kernel drivers are a bit over my head at this point in time (it would require porting a Linux-based ffb driver to the Solaris kernel driver architecture).

Matt
Re: Re: how to get correct image after switch to other terminal
Hi,

In fact, I can get a correct image of a normal window at that time, BUT I found that the image of the root window became wrong, which is very strange. Any advice is welcome!

Mark Vojkovich wrote:
> There is no image when you are switched away. I'm not sure how to
> figure out how to tell when you are switched away. I thought there was
> something in the XFree86-VidModeExtension, but it looks like there
> isn't. Certainly, you could monitor the visibility events on the
> window you are grabbing. If it's fully obscured, there's nothing to
> see.
>
> Mark.
>
> On Wed, 3 Mar 2004, wjd wrote:
>> Hi All, I wrote an X11 application that gets the image of some
>> window with XGetImage(), but after I switch to another terminal with
>> Ctrl+Alt+F2, my application can't get a correct image of windows. Is
>> there any way to fix this problem? Or how can an X11 application
>> know if the desktop has been switched to another terminal? Thanks!