http://bugs.freedesktop.org/show_bug.cgi?id=10852
Summary: R300 problem with multiple glxgears clients, missing
docs on GARTSize
Product: DRI
Version: unspecified
Platform: x86-64 (AMD64)
OS/Version: Linux (All)
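Regarding the "missing docs on GARTSize" part of the summary: the radeon DDX accepts a GARTSize option in the Device section of xorg.conf, with the value given in megabytes. A minimal illustrative snippet follows; the 64 MB figure is only an example, not a recommended setting:

    Section "Device"
        Identifier "Radeon"
        Driver     "radeon"
        # GART aperture size in MB; illustrative value only
        Option     "GARTSize" "64"
    EndSection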
Oliver McFadden wrote:
On 5/3/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Hi,
sorry for the crossposting, I don't know who to address.
I am experimenting with the new CFS scheduler on Linux
and tried to start multiple glxgears to see whether
they are really running smoothly and have evenly
On 5/4/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Oliver McFadden wrote:
On 5/3/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Hi,
sorry for the crossposting, I don't know who to address.
I am experimenting with the new CFS scheduler on Linux
and tried to start multiple glxgears
Jerome Glisse wrote:
On 5/4/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Oliver McFadden wrote:
On 5/3/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Hi,
sorry for the crossposting, I don't know who to address.
I am experimenting with the new CFS scheduler on Linux
and
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work around this. One way
could possibly be to have a buffer mapping and validation order for
shared buffers.
If mapping never blocks on anything other than
On 5/4/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Jerome Glisse wrote:
On 5/4/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Oliver McFadden wrote:
On 5/3/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Hi,
sorry for the crossposting, I don't know who to address.
I am
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work around this. One way
could possibly be to have a buffer mapping and validation order for
shared buffers.
On 5/4/07, Jerome Glisse [EMAIL PROTECTED] wrote:
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work around this. One way
could possibly be to have a
Jerome Glisse wrote:
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work around this. One way
could possibly be to have a buffer mapping and validation order
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Jerome Glisse wrote:
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work around this. One way
Jerome Glisse wrote:
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Jerome Glisse wrote:
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
Keith Packard wrote:
On Thu, 2007-05-03 at 01:01 +0200, Thomas Hellström wrote:
It might be possible to find schemes that work
On 5/4/07, Thomas Hellström [EMAIL PROTECTED] wrote:
I was actually referring to an example where two clients need to have a
buffer mapped and access it at exactly the same time.
If there is such a situation, we have no other choice than to drop the
buffer locking on map. If there isn't we can
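For readers following the locking discussion above: one scheme being debated is that anyone who needs several shared buffers mapped at once should take the per-buffer locks in a single canonical order (for example, sorted by buffer handle), so that two clients can never wait on each other. Below is a minimal userspace-style sketch of that idea; struct drm_bo_stub, its handle field, and map_buffers_in_order() are invented names for illustration, not part of the actual DRM interface.

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for a shared buffer object. */
    struct drm_bo_stub {
        unsigned int handle;        /* globally unique per buffer       */
        pthread_mutex_t map_lock;   /* held while the buffer is mapped  */
    };

    static int cmp_by_handle(const void *a, const void *b)
    {
        const struct drm_bo_stub *x = *(struct drm_bo_stub *const *)a;
        const struct drm_bo_stub *y = *(struct drm_bo_stub *const *)b;

        return (x->handle > y->handle) - (x->handle < y->handle);
    }

    /*
     * Map several shared buffers without risking an ABBA deadlock:
     * every client sorts by handle first, so no client can hold a
     * lock that comes after one another client is still waiting for.
     */
    static void map_buffers_in_order(struct drm_bo_stub **bos, size_t n)
    {
        size_t i;

        qsort(bos, n, sizeof(*bos), cmp_by_handle);
        for (i = 0; i < n; i++)
            pthread_mutex_lock(&bos[i]->map_lock);
    }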
http://bugs.freedesktop.org/show_bug.cgi?id=10855
Summary: on Intel 945G, (beryl or compiz) + glxgears =
DRM_I830_CMDBUFFER: -22
Product: Mesa
Version: 6.5
Platform: x86 (IA32)
OS/Version: Linux (All)
Status: NEW
On 5/4/07, Jerome Glisse [EMAIL PROTECTED] wrote:
There was a typo in my mail; I meant lock, not lockup,
when I was talking about apps sending data to the GPU.
And if multiple instances of glxgears succeed
in making the GPU lock up, this is because you are then
sending megs of vertices to the card
On Fri, 2007-05-04 at 10:07 +0200, Thomas Hellström wrote:
It's rare to have two clients access the same buffer at the same time.
In what situation will this occur?
Right, what I'm trying to avoid is having any contention for
applications *not* sharing the same objects.
If there is any
http://bugs.freedesktop.org/show_bug.cgi?id=6664
[EMAIL PROTECTED] changed:
What|Removed |Added
CC||[EMAIL PROTECTED]
--- Comment
On Fri, 2007-05-04 at 11:40 +0200, Jerome Glisse wrote:
On a side note, I think this scheme also fits well with GPUs that have
several contexts and don't need big validation (read:
NV GPUs).
Yeah, I want to make sure we have a simple model that supports
multi-context hardware while also
On Fri, 2007-05-04 at 14:32 +0200, Thomas Hellström wrote:
If there isn't, we can at least consider other
alternatives that resolve the deadlock issue but also help
clients synchronize and keep data coherent.
If clients want coherence, they're welcome to implement their own
Keith Packard wrote:
OTOH, letting DRM resolve the deadlock by unmapping and remapping shared
buffers in the correct order might not be the best option either. It will
certainly mean some CPU overhead, and what if we have to do the same with
buffer validation? (Yes, for some operations with
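As a rough illustration of the back-off alternative discussed above (give up, release what you hold, and re-acquire in the correct order rather than blocking), the sketch below uses trylock: if any map lock is contended, everything taken so far is dropped and the caller retries, which is exactly where the extra CPU overhead mentioned in the message comes from. All names here are hypothetical; this is not the DRM code itself.

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical per-buffer map lock, as in the earlier sketch. */
    struct bo_lock {
        pthread_mutex_t map_lock;
    };

    /*
     * Try to take every map lock; on contention, release what we hold
     * and report failure so the caller can back off and retry (or sort
     * and block). The repeated unmapping/remapping is the CPU overhead
     * referred to above.
     */
    static int try_map_all(struct bo_lock **bos, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            if (pthread_mutex_trylock(&bos[i]->map_lock) != 0) {
                while (i-- > 0)
                    pthread_mutex_unlock(&bos[i]->map_lock);
                return -1;    /* caller backs off and retries */
            }
        }
        return 0;
    }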
On Fri, 2007-05-04 at 16:57 +0100, Keith Whitwell wrote:
That's a special case of the general problem of what to do when a
client submits any validation list that can't be satisfied. Failing to
render isn't really an option; either the client or the memory manager
has to either
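The options hinted at above (fail the submission back to the client, or have the memory manager make room and retry) could look roughly like the loop below. This is only a sketch; both callbacks are placeholders standing in for whatever the memory manager actually does, not real DRM entry points.

    #include <errno.h>
    #include <stdbool.h>

    /*
     * Hypothetical sketch: keep evicting until the client's validation
     * list fits, or hand the error back to the client once the memory
     * manager has nothing left to evict.
     */
    static int validate_or_fail(int (*validate_list)(void *),
                                bool (*evict_one)(void),
                                void *list)
    {
        int ret;

        while ((ret = validate_list(list)) == -ENOMEM) {
            if (!evict_one())
                return -ENOMEM;   /* client has to cope with the failure */
        }
        return ret;
    }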
http://bugzilla.kernel.org/show_bug.cgi?id=8427
Summary: Kernel Panic on shutting down with Xserver using i810
driver
Kernel Version: 2.6.18-4-amd64
Status: NEW
Severity: normal
Owner: [EMAIL PROTECTED]
Submitter:
In playing around yesterday, we found that some drivers will
unnecessarily enable interrupts for vblank events. Since these tend to
happen frequently (60+ Hz), they'll cause your CPU to wake up a lot,
which will waste power if they're not really in use.
This patch hacks the radeon driver to
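The general idea behind such a change, keeping the vblank interrupt enabled only while somebody is actually waiting on it, amounts to reference counting the enable. A minimal hedged sketch of that bookkeeping follows; the vblank_ctl structure and the hw_enable/hw_disable hooks are invented for illustration and are not the real radeon entry points.

    #include <pthread.h>

    /* Hypothetical vblank bookkeeping; not the real DRM structures. */
    struct vblank_ctl {
        pthread_mutex_t lock;
        int refcount;               /* how many waiters need the IRQ   */
        void (*hw_enable)(void);    /* driver hook: unmask vblank IRQ  */
        void (*hw_disable)(void);   /* driver hook: mask vblank IRQ    */
    };

    /* Enable the interrupt only for the first waiter... */
    static void vblank_get(struct vblank_ctl *v)
    {
        pthread_mutex_lock(&v->lock);
        if (v->refcount++ == 0)
            v->hw_enable();
        pthread_mutex_unlock(&v->lock);
    }

    /* ...and mask it again once the last waiter is gone, so an idle
     * desktop stops waking the CPU 60+ times a second. */
    static void vblank_put(struct vblank_ctl *v)
    {
        pthread_mutex_lock(&v->lock);
        if (--v->refcount == 0)
            v->hw_disable();
        pthread_mutex_unlock(&v->lock);
    }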
Hi, I am forwarding this message to the dri-devel mailing list, where you could find
more testers of the i815 DRI driver.
I hope I haven't made a loop :)
Forwarded Message
From: Andreas Mohr [EMAIL PROTECTED]
To: Pavel Machek [EMAIL PROTECTED]
Cc: Andrew Morton [EMAIL PROTECTED], [EMAIL