On 11/27/05, Benjamin Herrenschmidt <[EMAIL PROTECTED]> wrote:
>
> > I really don't know.  I have some sense that some caches are virtual
> > address based, so they have to be flushed when you change TLB entries,
> > and TLB flushes are also somewhat expensive.  But this is the case
> > with ANY CPU context switch, and I'm not sure that it's any worse when
> > dealing with CoW stuff.
>
> ARM has virtually indexed/tagged caches, yes, and possibly some others
> do too. x86 and ppc at least don't have that problem :) but still, MMU
> operations can be expensive. Well, the good thing is, this is mostly a
> software issue and can be easily experimented with.
>
> > > I was thinking more about context switching, which may also require
> > > swapping textures in/out of vram.
> >
> > Like I say, if we have host backing stores, the transfers are cut in
> > half.  Also, if we have centralized memory management, we can do this
> > swapping in a lazy way.  If an app has 10 textures but uses only two
> > of them during its GPU 'timeslice', there's less thrashing (or none if
> > we haven't run out of vram).
>
> There is a big lack of a proper memory manager currently in the open
> source DRI, but that is known and is something that some people have
> been working on recently. Ian Romanick for example (works at IBM too
> but in a different part of big blue) has been doing some work on that.

I've done three different implementations of memory management
algorithms for X11 pixmaps for Tech Source.  One was entirely my own
work, and another engineer and I worked on the other two jointly.  I
have a really good idea of how to do this.

Now, the algorithm is probably going to do things that other GPUs
won't need, but we can probably generalize it to the point where
others can use it.

> > One approach to take is to not necessarily have backingstores as long
> > as there is free vram.  The instant vram fills, the kernel driver
> > switches tactics and starts keeping them.  This will make downloads
> > happen at the beginning and then taper off quickly.  Best of both
> > worlds.
>
> I think this discussion should be moved to the DRI mailing lists and
> include the DRI and Mesa folks who have been working on those issues for
> some time now, but then, there is no emergency.

Nope.  We've got a number of intermediate steps before we're ready for this.
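For whenever that discussion does happen, the "no backing stores until
vram fills" idea quoted above can be sketched in a few lines.  This is
a toy model under my own assumptions (the names and the single
pressure flag are hypothetical, not any driver's actual API):

```python
# Hypothetical sketch of the lazy backing-store policy quoted above:
# while vram has free space, surfaces live only in vram (no host copy),
# so uploads aren't doubled; once vram fills, the manager starts keeping
# host backing stores so evicted surfaces can be restored later without
# asking the client to re-upload them.

class Surface:
    def __init__(self, data):
        self.data = data       # surface contents from the client upload
        self.backing = None    # host copy; None until policy says keep one

class LazyBackingPolicy:
    def __init__(self):
        self.pressure = False  # flips True the first time vram fills

    def on_vram_full(self):
        self.pressure = True

    def on_upload(self, surface):
        # Before pressure: skip the host copy, upload goes straight to
        # vram, transfer count stays at one.
        if self.pressure:
            surface.backing = surface.data

    def on_evict(self, surface):
        # Make the host copy at eviction time if we never kept one, so
        # the surface can be swapped back in lazily on next use.
        if surface.backing is None:
            surface.backing = surface.data
        return surface.backing
```

So downloads happen up front while vram is empty, then taper off once
the flag flips, which is the "best of both worlds" behavior described
above.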

> BTW. Do you have already some ETA and idea of the price of your FPGA
> based prototypes ? We may get a couple here in the lab to help with
> drivers. (The lab is IBM ozlabs, that is mostly the ppc kernel folks)

No specific ETA.  I'm guessing Andy will have his part by the end of
the year.  And Howard is working on some things to help Andy.  After
the end of this quarter, I should also have time to work on the PCI
core again.

Price?  Well, it goes up and down as we price parts.  I think it'll
end up being priced a bit on the high side at the beginning, but as it
sells, we'll be able to lower the price a bit.  We had been shooting
for $500 to $600, but we may have to price it higher, like $700 in
one-unit volumes, with a discount for larger volumes.  People want an
answer to this, but no matter what we say, we just don't have enough
information yet to be sure.  We have guesses about PCB fab, for
instance, but we could be very wrong, depending on which board house
is actually able to do it when we need it done.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)