
Jerome Glisse wrote:

| Thanks Ian for stressing current and future usage. I was really hoping that
| with GL3 buffer object mapping would have vanished, but I guess, as you said,
| the fire-and-forget path never got optimized.

I think various drivers have tried to optimize it.  It's just a case where
an application-managed suballocator will always be faster.

| In GL3, must an object be unmapped before being used? IIRC this is what is
| required in current GL 1.x and GL 2.x. If so, I think I can still use VRAM
| as a cache, i.e. I put there objects which are almost never mapped (like a
| constant texture or a constant vertex table). This saves me from thinking
| about complex solutions for cleanly handling unmappable VRAM.

Be careful here.  An object must be unmapped in the context where it is
used for drawing.  However, buffer objects can be shared between
contexts.  This means that even today in OpenGL 1.5 context A can be
drawing with a buffer object while context B has it mapped.  Of course,
context A doesn't have to see the changes caused by context B until the
next time it binds the buffer.  This means that copying data for the map
will "just work."

But to actually answer the original question, a buffer that will be used
as a source or destination by the GL must be unmapped at Begin time.

| A side question: is there any data on TLB flushes? i.e. how much does a
| map/unmap cycle, from a client VMA, cost?
|
| In the meantime I think we can promote use of pread/pwrite or BufferSubData
| to take advantage of offset & size information in software we control
| (Mesa, EXA, ...).
|
| Ian, do you know why devs hate BufferSubData? Is there any place I can read
| about it? I have been focusing on driver dev and I am a little bit out of
| date on today's typical GL usage; I assumed that hw manufacturers did
| promote use of BufferSubData to software devs.

Because it forces them to make extra copies of their data and do extra
copy operations.  As an app developer, I *much* prefer:

        glBindBuffer(GL_ARRAY_BUFFER, my_buf);
        GLfloat *data = glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
        if (data == NULL) {
                /* fail */
        }

        /* Fill in buffer data */

        glUnmapBuffer(GL_ARRAY_BUFFER);

Over:

        GLfloat *data = malloc(buffer_size);
        if (data == NULL) {
                /* fail */
        }

        /* Fill in buffer data */

        glBindBuffer(GL_ARRAY_BUFFER, my_buf);
        glBufferSubData(GL_ARRAY_BUFFER, 0, buffer_size, data);
        free(data);

The second version obviously has extra overhead and takes a performance
hit.  So, now I have to go back and spend time caching the buffer
allocations and doing other things to make it fast.  In the MapBuffer
version, I can leverage the work done by the smart guys that write drivers.

_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel
