Re: Manufacturers who fully disclosed specifications for agp cards?
On Wed, 4 Feb 2004 05:41 am, Ryan Underwood wrote:
> It usually works pretty well for simple stuff, especially things connected to e.g. serial and parallel ports, because you can get a log of all the port writes for a given function and work backwards from there. This didn't work for more complicated drivers, because they won't run in a v86 session, so for those I had to resort to dead listing. Again, for simple stuff like "read this value from this port, set this bit, write it to this other port", where the intent is obvious and there are few interdependencies, it works great. If you don't get lucky or choose your battles wisely, dead listing can waste a lot of time and effort.

Is it possible to insert a shim in the Windows video call chain? We have something like that for USB (http://sourceforge.net/projects/usbsnoop/) and it works pretty well. Alternatively, are there tools (even for pay) that can monitor certain addresses/IO ports under Windows? My needs are not so extravagant (I just want to be able to turn on dual-head mirroring on my i830-based laptop without rebooting), and that would likely be enough to get the missing info.

Brad

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel
Re: *** GMX Spamverdacht *** Re: Manufacturers who fully disclosed specifications for agp cards?
On Tuesday 03 February 2004 6:42 pm, Alexander Stohr wrote:
> > Is it possible to gain the specs for a chip such as the R300 or NV30 by dissection, with the right tools like an electron microscope?
>
> "Full specifications", as the headline says, is more than just the register description, and still much more than the register specs plus an in-depth description of the internal logic in text and diagrams. It further includes the electrical, thermal, mechanical and other related specifications - you don't need all of that documentation when you just want to program such a circuit for a limited purpose. An electron microscope is of course a good tool for finding out a bit more about the internals of such a device; I just don't have one at hand. Other than that, reverse engineering can be helpful in some cases.
>
> -Alex.

(reverse engineering) Are there any good tools to do that under Linux?
[leadership/opensource] invitation to online survey
Dear all,

I have just put online a survey addressing the topic of leadership in the open-source environment. Basically, my objective is to identify the personal conceptions of good leadership that reside in the minds of contributors, in terms of leaders' _behaviors_ and _characteristics_. What is a good open-source project leader, from the contributor's point of view? To what extent are those personal beliefs shared among developers? Can contributors' national cultural backgrounds and levels of experience in contributing to open-source projects explain differences in their idea of what a good leader is?

I would really appreciate your participation in the survey! Contributing (*completely* anonymously) consists of rating a list of statements that may be used to describe the behaviors of an open-source leader. It will take around ten minutes - there aren't any time-consuming open-ended questions ;)

Here is the link to the survey: http://freeonlinesurveys.com/rendersurvey.asp?id=52191

If you are interested, I will be sure to email you a link to the final report when it is ready ;) (approx. a couple of months from now)

Thank you in advance, and don't hesitate to contact me if you have any questions or comments :)

Gianluca Bosco [EMAIL PROTECTED]
Technical University of Denmark, Department of Manufacturing Engineering and Management
Re: Manufacturers who fully disclosed specifications for agp cards?
On Tue, 3 Feb 2004, Knut J Bjuland wrote:
> Is it possible to gain the specs for a chip such as the R300 or NV30 by dissection, with the right tools like an electron microscope?

Not unless you're using the electron microscope somehow to break into ATI or Nvidia headquarters, perhaps swinging it from a crane to break a window or something.

-- Mike A. Harris
Re: Manufacturers who fully disclosed specifications for agp cards?
On Tue, 3 Feb 2004, Ryan Underwood wrote:
> > > Is there some special circumstance you have to fall under to qualify for R300 specs? It seems there are a lot of people wishing they had them and not a lot of people saying "I've got 'em"... :)
> >
> > And in any case, I guess this doesn't include the 3D/DRI parts.
>
> Whoops, that was what I was talking about. I thought this thread was on the DRI list until I just now looked.
>
> Do we have at least 2D support for R300 cards in XFree86?

For over a year now, yes.

-- Mike A. Harris
Re: Manufacturers who fully disclosed specifications for agp cards?
On Tue, 3 Feb 2004, Ian Romanick wrote:
> > > Where are the docs for the VSA-based cards (Voodoo4/Voodoo5)? I have been unable to locate them.
> >
> > In a chest in a basement at Nvidia somewhere, with a lock on it, behind a bunch of old filing cabinets, in a room at the end of a very long hallway, with spiderwebs hanging everywhere, and a sign on the door which reads: "Beware of the leopard".
>
> I can just imagine it in a big warehouse like where the Ark ended up at the end of Raiders. :)

Funny you mention Raiders... I just watched it about 3 weeks ago, and the spiderwebs stuck in my mind; after mixing that in my brain mixmaster with some Adams, I came up with the above. ;o)

-- Mike A. Harris
Re: Announcement: Modification to the base XFree86(TM) license.
David,

Just to let you know that I disagree with this decision. The original license is clear and simple; it makes it easy for people to use our code without having to consult any lawyers. The new license might be confusing, and it contains an unpleasant "advertising clause" that might make it complicated to use our code.

How was the license change decided? Where did the discussion (if any) happen? Who was consulted?

Juliusz
Re: PCI Express
On Tue, 2004-02-03 at 17:37, Marc Aurele La France wrote:
> PCI Express is programmatically identical to PCI, so I don't foresee any problems in that regard.

Yes, it's identical to PCI in terms of the interface presented to the OS, so configuration probably won't be an issue, but there is code in hw/xfree86/os-support/bus that attempts to walk the bus topology with explicit knowledge of current PCI bridges. I don't believe this code will execute correctly, but I assume it can easily be worked around, yes?

But that's probably less of an issue than the fact that PCIE systems demand new PCIE cards, and that means driver support. If we are fortunate, the new PCIE cards might be programmatically compatible with current cards and XFree86 will only need to recognize them, but I really doubt that will be the case. ATI, Nvidia and 3Dlabs have all announced support for PCIE and pledged it will bring dramatic performance increases. Some of that will be due to the faster bus, but I've got to believe these new cards will feature architectural changes as well, demanding new driver support.

Bottom line: I've been asked whether XFree86 will be able to support the PCIE systems due to arrive in a few months, or whether these systems are going to be dead in the water for open source for anything other than a server. So I'm digging for what people know or believe the issues are.

John
Re: Manufacturers who fully disclosed specifications for agp cards?
Brad Hards wrote:
> Is it possible to insert a shim in the Windows video call chain? We have something like that for USB (http://sourceforge.net/projects/usbsnoop/) and it works pretty well.

Do you mean between a Windows display driver and the PCI bus, so that you can snag all the I/O port and memory writes? No, that is not possible, at least not on the NT-based systems. Windows display drivers run in kernel mode, where I/O trapping is not possible. Display drivers are supposed to use a set of macros to access memory-mapped I/O space, so in theory you could recompile the driver with those macros redefined (if you have source), but many drivers violate that rule and use ordinary pointer accesses.

> Alternatively, are there tools (even for pay) that can monitor certain addresses/IO ports under Windows? My needs are not so extravagant (I just want to be able to turn on dual-head mirroring on my i830-based laptop without rebooting), and that would likely be enough to get the missing info.

In Win9x this is possible, because display drivers are user-mode. You could install a VxD to trap I/O ports and handle page faults for the memory-mapped space, but it would be a huge pain in the butt. It is not possible in NT/2K/XP.

-- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?
On Wed, Feb 04, 2004 at 10:12:19AM -0800, Ian Romanick wrote:
> Okay, that's just weird. Normally the Nvidia extension string is about 3 pages long.

Just for reference, here's the direct-rendering version (table of visuals omitted):

    name of display: :0.0
    display: :0  screen: 0
    direct rendering: Yes
    server glx vendor string: NVIDIA Corporation
    server glx version string: 1.3
    server glx extensions:
        GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control, GLX_ARB_multisample
    client glx vendor string: NVIDIA Corporation
    client glx version string: 1.3
    client glx extensions:
        GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync, GLX_NV_swap_group, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_NV_float_buffer
    GLX extensions:
        GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control, GLX_ARB_multisample, GLX_ARB_get_proc_address
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: GeForce4 Ti 4200/AGP/SSE2
    OpenGL version string: 1.4.1 NVIDIA 53.28
    OpenGL extensions:
        GL_ARB_depth_texture, GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object, GL_ARB_vertex_program, GL_ARB_window_pos, GL_S3_s3tc, GL_EXT_texture_env_add, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_paletted_texture, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_shared_texture_palette, GL_EXT_stencil_wrap, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_vertex_array, GL_HP_occlusion_test, GL_IBM_rasterpos_clip, GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region, GL_NV_blend_square, GL_NV_copy_depth_to_color, GL_NV_depth_clamp, GL_NV_fence, GL_NV_fog_distance, GL_NV_light_max_exponent, GL_NV_multisample_filter_hint, GL_NV_occlusion_query, GL_NV_packed_depth_stencil, GL_NV_pixel_data_range, GL_NV_point_sprite, GL_NV_register_combiners, GL_NV_register_combiners2, GL_NV_texgen_reflection, GL_NV_texture_compression_vtc, GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_NV_texture_shader, GL_NV_texture_shader2, GL_NV_texture_shader3, GL_NV_vertex_array_range, GL_NV_vertex_array_range2, GL_NV_vertex_program, GL_NV_vertex_program1_1, GL_NVX_ycrcb, GL_SGIS_generate_mipmap, GL_SGIS_multitexture, GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow, GL_SUN_slice_accum
    glu version: 1.3
    glu extensions: GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

Allen
Re: Manufacturers who fully disclosed specifications for agp cards?
On Wed, Feb 04, 2004 at 06:16:44PM +1100, Brad Hards wrote:
> Is it possible to insert a shim in the Windows video call chain? We have something like that for USB (http://sourceforge.net/projects/usbsnoop/) and it works pretty well. Alternatively, are there tools (even for pay) that can monitor certain addresses/IO ports under Windows? My needs are not so extravagant (I just want to be able to turn on dual-head mirroring on my i830-based laptop without rebooting), and that would likely be enough to get the missing info.

I don't know of a generic way. The OpenVortex people did this: http://mail.gnu.org/archive/html/openvortex-dev/2003-10/msg00036.html

Obviously, the process of finding the proper symbols and using them to intercept hardware access will need custom work. This will only let you trap hardware access insofar as you can determine where the code is doing it; i.e., you can't set up a fault to drop you into your own code, AFAIK.

-- Ryan Underwood, [EMAIL PROTECTED]
Re: PCI Express
On 4 Feb 2004, John Dennis wrote:
> [... PCI Express driver-support concerns, quoted in full upthread ...]
>
> Bottom line: I've been asked whether XFree86 will be able to support the PCIE systems due to arrive in a few months, or whether these systems are going to be dead in the water for open source for anything other than a server. So I'm digging for what people know or believe the issues are.

At the risk of being flippant: Try it. Actual problems are always easier to debug than theoretical ones.

Marc.

--
Marc Aurele La France | Computing and Network Services | 352 General Services Building, University of Alberta, Edmonton, Alberta, T6G 2H1, CANADA
work: 1-780-492-9310 | fax: 1-780-492-1729 | email: [EMAIL PROTECTED]
Standard disclaimers apply. XFree86 developer and VP. ATI driver and X server internals.
Re: PCI Express
On Wed, 4 Feb 2004, Marc Aurele La France wrote:
> On 4 Feb 2004, John Dennis wrote:
> > [... PCI Express driver-support concerns, quoted in full upthread ...]
>
> At the risk of being flippant: Try it. Actual problems are always easier to debug than theoretical ones.

I've been told that it works, but that PCI Express performance is poor on Linux. That could be for any number of reasons, though. Certainly, the sky is not falling: people are expected to get things to work. Whether performance meets expectations is another matter.

Mark.
VisibilityNotify/XRaiseWindow question
Hi!

This is a general question on X client design; please excuse me that it is not directly related to the XFree86 Project. Imagine there are two clients running the same code:

    while (!done) {
        XWindowEvent(... | VisibilityChangeMask, &theEvent);
        switch (theEvent.type) {
        ...
        case VisibilityNotify:
            XRaiseWindow(...);
            break;
        }
    }

Obviously, there's a race here: both clients are fighting to get on top. There's a real example of this problem. One application is xbattbar (a battery status indicator), the other is xlock (used as a screensaver). What would be the right way to solve this problem?

Thanks for any answers in advance!

-- Alexander Pohoyda [EMAIL PROTECTED]
PGP Key fingerprint: 7F C9 CC 5A 75 CD 89 72 15 54 5F 62 20 23 C6 44
Re: VisibilityNotify/XRaiseWindow question
On a modern desktop system, you should probably use window manager hints to perform this function, and let the window manager do the rest: http://freedesktop.org/standards/wm-spec/1.3/

For example, using _NET_WM_STATE you can specify that your application's window should be _NET_WM_STATE_ABOVE. You might also investigate using a _NET_WM_WINDOW_TYPE of _NET_WM_WINDOW_TYPE_DOCK. To me this is all theory, because I've never tried to write an application like xbattbar...

Jeff
Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?
Ian Romanick wrote:
> Andreas Stenglein wrote:
> > After setting LIBGL_ALWAYS_INDIRECT=1, glxinfo shows
> >     OpenGL version string: 1.5 Mesa 6.0
> > but doesn't show all the extensions necessary for OpenGL 1.5. An application only checking GL_VERSION for 1.5 would probably fail. Any idea what would happen with libGL.so / libGLcore.a from different versions of XFree86 / DRI and/or different vendors (nvidia) on the client/server machines?
>
> That's *bad*. It is currently *impossible* to have GL 1.5 with indirect rendering, because some of the GLX protocol (for ARB_occlusion_query and ARB_vertex_buffer_object) was never completely defined. Looking back at it, we can't even advertise 1.3 or 1.4 with indirect rendering, because the protocol for ARB_texture_compression isn't supported (on either end).

Ian, it seems to me that xc/lib/GL/glx/single2.c's glGetString() function should catch queries for GL_VERSION (as it does for GL_EXTENSIONS) and compute the minimum of the renderer's glGetString(GL_VERSION) and what the client/server GLX modules can support. That would solve this, right?

Please submit a bug for this on XFree86. Something should be done about this for the 4.4.0 release. http://bugs.xfree86.org/

> Does anyone know if either the ATI or Nvidia closed-source drivers support ARB_texture_compression for indirect rendering? If one of them does, that would give us a test bed for the client-side protocol support. When that support is added, we can change the library version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 and .1.3 symlinks).

I don't have the latest NVIDIA drivers on my machines, but glxinfo reports:

    direct rendering: No
    server glx vendor string: NVIDIA Corporation
    server glx version string: 1.3
    server glx extensions:
        GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_ARB_multisample
    client glx vendor string: NVIDIA Corporation
    client glx version string: 1.3
    client glx extensions:
        GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync, GLX_NV_swap_group, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, GLX_NV_float_buffer
    GLX extensions:
        GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_ARB_multisample, GLX_ARB_get_proc_address
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: GeForce3/AGP/SSE2
    OpenGL version string: 1.4.0 NVIDIA 44.96
    OpenGL extensions:
        GL_EXT_blend_minmax, GL_EXT_texture_object, GL_EXT_draw_range_elements, GL_EXT_texture3D, GL_EXT_secondary_color, GL_ARB_multitexture, GL_EXT_multi_draw_arrays, GL_ARB_point_parameters, GL_EXT_fog_coord, GL_ARB_imaging, GL_EXT_vertex_array, GL_EXT_paletted_texture, GL_ARB_window_pos, GL_EXT_blend_color
    glu version: 1.3
    glu extensions: GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess

So it appears that GL_ARB_texture_compression is not supported, but GL_VERSION is reported as 1.4.0. Hmmm.

-Brian
Re: [Dri-devel] Re: GL_VERSION 1.5 when indirect rendering?
Michel Dänzer wrote:
> On Wed, 2004-02-04 at 00:56, Ian Romanick wrote:
> > Does anyone know if either the ATI or Nvidia closed-source drivers support ARB_texture_compression for indirect rendering? If one of them does, that would give us a test bed for the client-side protocol support. When that support is added, we can change the library version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 and .1.3 symlinks).
>
> Are those symlinks really necessary? Apps should only care about libGL.so.1.

It's a debatable point. If an app explicitly links against libGL.so.1.5, then it can expect symbols to statically exist that may not be in libGL.so.1.2. So an app that links against libGL.so.1.5 wouldn't have to use glXGetProcAddress for glBindBuffer or glBeginQuery, but an app linking to a lower version would. Do we want to encourage that? That's the debatable part. :)

> While we're at it: is there a reason for libGL not having a patchlevel, e.g. libGL.so.1.2.0? This can cause unpleasant surprises, because ldconfig will consider something like libGL.so.1.2.bak to be a higher patchlevel and change libGL.so.1 to point to that instead of libGL.so.1.2.

That's a good idea. I've been bitten by that before, but my solution was to make it libGL.bak.so.1.2 or something similar.
Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?
Brian Paul wrote:
> Ian Romanick wrote:
> > That's *bad*. It is currently *impossible* to have GL 1.5 with indirect rendering, because some of the GLX protocol (for ARB_occlusion_query and ARB_vertex_buffer_object) was never completely defined. Looking back at it, we can't even advertise 1.3 or 1.4 with indirect rendering, because the protocol for ARB_texture_compression isn't supported (on either end).
>
> Ian, it seems to me that xc/lib/GL/glx/single2.c's glGetString() function should catch queries for GL_VERSION (as it does for GL_EXTENSIONS) and compute the minimum of the renderer's glGetString(GL_VERSION) and what the client/server GLX modules can support. That would solve this, right?

Making that change, and changing the server side to not advertise a core version that it can't take protocol for, would fix the bug for 4.4.0. Do you think anything should be done to preserve text after the version? That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL", should we return "1.2" or something more elaborate?

I thought about it some last night, and I think there's some longer-term work to be done on the client side. Basically, we need a mechanism for GL extensions that matches what we have for GLX extensions. There are a few extensions that are essentially client-side only. We should be able to expose those without expecting the server side to list them; in fact, the server side should not list them. Extensions like EXT_draw_range_elements, EXT_multi_draw_arrays, and a few others fall into this category. It should be fairly easy to generalize the code for GLX extensions so that it can be used for both. As a side bonus, that would eliminate the compiler warning in glxcmds.c about the __glXGLClientExtensions string being too long. :)

> > Does anyone know if either the ATI or Nvidia closed-source drivers support ARB_texture_compression for indirect rendering? If one of them does, that would give us a test bed for the client-side protocol support. When that support is added, we can change the library version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 and .1.3 symlinks).
>
> [big snip]
>
>     OpenGL vendor string: NVIDIA Corporation
>     OpenGL renderer string: GeForce3/AGP/SSE2
>     OpenGL version string: 1.4.0 NVIDIA 44.96
>     OpenGL extensions:
>         GL_EXT_blend_minmax, GL_EXT_texture_object, GL_EXT_draw_range_elements, GL_EXT_texture3D, GL_EXT_secondary_color, GL_ARB_multitexture, GL_EXT_multi_draw_arrays, GL_ARB_point_parameters, GL_EXT_fog_coord, GL_ARB_imaging, GL_EXT_vertex_array, GL_EXT_paletted_texture, GL_ARB_window_pos, GL_EXT_blend_color
>     glu version: 1.3
>     glu extensions: GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
>
> So it appears that GL_ARB_texture_compression is not supported, but GL_VERSION is reported as 1.4.0. Hmmm.

Okay, that's just weird. Normally the Nvidia extension string is about 3 pages long.
Re: [Dri-devel] Re: GL_VERSION 1.5 when indirect rendering?
Ian Romanick wrote:
> Michel Dänzer wrote:
> > On Wed, 2004-02-04 at 00:56, Ian Romanick wrote:
> > > Does anyone know if either the ATI or Nvidia closed-source drivers support ARB_texture_compression for indirect rendering? If one of them does, that would give us a test bed for the client-side protocol support. When that support is added, we can change the library version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with extra .1.2 and .1.3 symlinks).
> >
> > Are those symlinks really necessary? Apps should only care about libGL.so.1.
>
> It's a debatable point. If an app explicitly links against libGL.so.1.5, then it can expect symbols to statically exist that may not be in libGL.so.1.2. So an app that links against libGL.so.1.5 wouldn't have to use glXGetProcAddress for glBindBuffer or glBeginQuery, but an app linking to a lower version would.

Since libGL is usually built with DT_SONAME (aka -soname) set to libGL.so.1, the minor version number isn't really significant. I.e., I may have originally linked my application with libGL.so.1.5, but at runtime the loader will be satisfied with libGL.so.1.2.

> Do we want to encourage that? That's the debatable part. :)
>
> > While we're at it: is there a reason for libGL not having a patchlevel, e.g. libGL.so.1.2.0? This can cause unpleasant surprises, because ldconfig will consider something like libGL.so.1.2.bak to be a higher patchlevel and change libGL.so.1 to point to that instead of libGL.so.1.2.
>
> That's a good idea. I've been bitten by that before, but my solution was to make it libGL.bak.so.1.2 or something similar.

-Brian
Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?
Ian Romanick wrote:
> Brian Paul wrote:
>> Ian Romanick wrote:
>>> That's *bad*. It is currently *impossible* to have GL 1.5 with
>>> indirect rendering because some of the GLX protocol (for
>>> ARB_occlusion_query and ARB_vertex_buffer_object) was never
>>> completely defined. Looking back at it, we can't even advertise 1.3
>>> or 1.4 with indirect rendering because the protocol for
>>> ARB_texture_compression isn't supported (on either end).
>>
>> Ian, it seems to me that xc/lib/GL/glx/single2.c's glGetString()
>> function should catch queries for GL_VERSION (as it does for
>> GL_EXTENSIONS) and compute the minimum of the renderer's
>> glGetString(GL_VERSION) and what the client/server GLX modules can
>> support. That would solve this, right?
>
> Making that change and changing the server-side to not advertise a core
> version that it can't take protocol for would fix the bug for 4.4.0. Do
> you think anything should be done to preserve text after the version?
> That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL",
> should we return "1.2" or something more elaborate?

It would be nice to preserve the extra text, but it's not essential.

> I thought about it some last night, and I think there's some longer
> term work to be done on the client-side. Basically, we need a mechanism
> for GL extensions that matches what we have for GLX extensions. There
> are a few extensions that are essentially client-side only. We should
> be able to expose those without expecting the server-side to list them.
> In fact, the server-side should not list them. Extensions like
> EXT_draw_range_elements, EXT_multi_draw_arrays, and a few others fall
> into this category. It should be fairly easy to generalize the code for
> GLX extensions so that it can be used for both.

Sounds reasonable. As a side bonus, that would eliminate the compiler warning in glxcmds.c about the __glXGLClientExtensions string being too long. :)

> Does anyone know if either the ATI or Nvidia closed-source drivers
> support ARB_texture_compression for indirect rendering?
> If one of them does, that would give us a test bed for the client-side
> protocol support. When that support is added, we can change the library
> version to 1.4 (i.e., change from libGL.so.1.2 to libGL.so.1.4, with
> extra .1.2 and .1.3 symlinks).
>
> [big snip]
>
> OpenGL vendor string: NVIDIA Corporation
> OpenGL renderer string: GeForce3/AGP/SSE2
> OpenGL version string: 1.4.0 NVIDIA 44.96
> OpenGL extensions:
>   GL_EXT_blend_minmax, GL_EXT_texture_object, GL_EXT_draw_range_elements,
>   GL_EXT_texture3D, GL_EXT_secondary_color, GL_ARB_multitexture,
>   GL_EXT_multi_draw_arrays, GL_ARB_point_parameters, GL_EXT_fog_coord,
>   GL_ARB_imaging, GL_EXT_vertex_array, GL_EXT_paletted_texture,
>   GL_ARB_window_pos, GL_EXT_blend_color
> glu version: 1.3
> glu extensions:
>   GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
>
> So, it appears that GL_ARB_texture_compression is not supported, but
> the GL_VERSION is reported as 1.4.0.
>
> Hmmm. Okay, that's just weird. Normally the Nvidia extension string is
> about 3 pages long.

I guess they just aren't bothering to support many extensions via indirect rendering.

-Brian
Re: [Dri-devel] GL_VERSION 1.5 when indirect rendering?
On 2004.02.04 21:00:14 +0100, Brian Paul wrote:
> Ian Romanick wrote:
> [snip]
>> Making that change and changing the server-side to not advertise a core
>> version that it can't take protocol for would fix the bug for 4.4.0. Do
>> you think anything should be done to preserve text after the version?
>> That is, if a server sends us "1.4.20040108 Foobar, Inc. Fancypants GL",
>> should we return "1.2" or something more elaborate?
>
> It would be nice to preserve the extra text, but it's not essential.

Why not just add the "1.2" before the original text?

  1.2 1.4.20040108 Foobar, Inc. Fancypants GL

That way you would see that the renderer could support 1.4 if GLX could do it.

@Ian: it's bug #1147: http://bugs.xfree86.org/show_bug.cgi?id=1147

best regards
Andreas