Re: SupportConvertXXtoXX
On Fri, Mar 05, 2004 at 09:36:19PM -0800, Mark Vojkovich wrote:
> On Fri, 5 Mar 2004, David Dawes wrote:
> > On Sat, Mar 06, 2004 at 03:28:09AM +0100, Thomas Winischhofer wrote:
> > > Mark Vojkovich wrote:
> > > > On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > > > > David Dawes wrote:
> > > > > > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > > > > > > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
> > > > > >
> > > > > > It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
> > > > > >
> > > > > >     case 24:
> > > > > >         if (pix24bpp == 24) {
> > > > > >             mod = "cfb24";
> > > > > >             reqSym = "cfb24ScreenInit";
> > > > > >         } else {
> > > > > >             mod = "xf24_32bpp";
> > > > > >             reqSym = "cfb24_32ScreenInit";
> > > > > >         }
> > > > > >
> > > > > > Most drivers use fb these days, and it has support for this built in and enabled automatically.
> > > > >
> > > > > So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)
> > > >
> > > > Do you mean the flag?  The layer above does not know whether or not the driver/HW supports a 24 bpp framebuffer.  The nv driver, for example, does not.
> > >
> > > Whether or not the hardware supports 24bpp (framebuffer depth, not color depth) should be determined by setting/clearing SupportXXbpp.  Why the *driver* needs to set SupportConvert is beyond me.  My understanding is that the respective fb layer should take care of this (if supported) based on SupportXXbpp (especially since the *driver* does not need to care about this, as David told me; it just depends on which layer I choose above the driver level).
> >
> > There are two things here.  One, the *fb module isn't loaded at the point where this information is required.  Two, only the driver knows which (if any) *fb layer(s) it will use.  It is the driver's responsibility to characterise what it can do.  The cost, as currently implemented, of this model is a reasonable amount of boilerplate in drivers.
> >
> > > But anyway, my question was answered.  It seems to be safe to set this obscure SupportConvert32to24 flag if using the generic fb layer.
> >
> > Yes.  However, we didn't have today's generic fb layer when this stuff was first written.  Fortunately for its ease of adoption, the driver model didn't mandate a specific *fb layer or hard-code its expected characteristics :-)
> >
> > Looking at the xf86SetDepthBpp() code, there appears to be another wrinkle, because these flags get cleared for (BITMAP_SCANLINE_UNIT == 64) platforms:
> >
> >     #if BITMAP_SCANLINE_UNIT == 64
> >         /*
> >          * For platforms with 64-bit scanlines, modify the driver's depth24flags
> >          * to remove preferences for packed 24bpp modes, which are not currently
> >          * supported on these platforms.
> >          */
> >         depth24flags &= ~(SupportConvert32to24 | SupportConvert24to32 |
> >                           PreferConvert24to32 | PreferConvert32to24);
> >     #endif
> >
> > This has been there for a long time (before we had fb).  I'm not sure if it is still valid or not.  Anyone with 64-bit scanline platforms care to comment?
>
> I thought we stopped using 64 bit scanlines altogether.

Hmm, yes it looks that way.  I guess we can remove that then, and the related code in xf86Init.c.

David

_______________________________________________
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel
Re: SupportConvertXXtoXX
On Sat, 6 Mar 2004, David Dawes wrote:
> > I thought we stopped using 64 bit scanlines altogether.
>
> Hmm, yes it looks that way.  I guess we can remove that then, and the related code in xf86Init.c.

When you use 64 bit scanlines you introduce a mess in the PutImage code.  You have to translate the images, which have a protocol-specified maximum padding of 32 bits, to the internal 64-bit format.  It's not worth it on any hardware we support, especially given how important PutImage is compared to optimal alignment for software rendering, which is primarily what 64-bit padding was trying to improve.

Mark.
Re: SupportConvertXXtoXX
David Dawes wrote:
> On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
>
> It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
>
>     case 24:
>         if (pix24bpp == 24) {
>             mod = "cfb24";
>             reqSym = "cfb24ScreenInit";
>         } else {
>             mod = "xf24_32bpp";
>             reqSym = "cfb24_32ScreenInit";
>         }
>
> Most drivers use fb these days, and it has support for this built in and enabled automatically.

So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)

Thanks, Mark and David.

Thomas

--
Thomas Winischhofer                       Vienna/Austria
thomas AT winischhofer DOT net            http://www.winischhofer.net/
twini AT xfree86 DOT org
Re: SupportConvertXXtoXX
On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> David Dawes wrote:
> > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
> >
> > It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
> >
> >     case 24:
> >         if (pix24bpp == 24) {
> >             mod = "cfb24";
> >             reqSym = "cfb24ScreenInit";
> >         } else {
> >             mod = "xf24_32bpp";
> >             reqSym = "cfb24_32ScreenInit";
> >         }
> >
> > Most drivers use fb these days, and it has support for this built in and enabled automatically.
>
> So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)

Do you mean the flag?  The layer above does not know whether or not the driver/HW supports a 24 bpp framebuffer.  The nv driver, for example, does not.

Mark.
Re: SupportConvertXXtoXX
Mark Vojkovich wrote:
> On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > David Dawes wrote:
> > > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > > > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
> > >
> > > It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
> > >
> > >     case 24:
> > >         if (pix24bpp == 24) {
> > >             mod = "cfb24";
> > >             reqSym = "cfb24ScreenInit";
> > >         } else {
> > >             mod = "xf24_32bpp";
> > >             reqSym = "cfb24_32ScreenInit";
> > >         }
> > >
> > > Most drivers use fb these days, and it has support for this built in and enabled automatically.
> >
> > So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)
>
> Do you mean the flag?  The layer above does not know whether or not the driver/HW supports a 24 bpp framebuffer.  The nv driver, for example, does not.

Whether or not the hardware supports 24bpp (framebuffer depth, not color depth) should be determined by setting/clearing SupportXXbpp.  Why the *driver* needs to set SupportConvert is beyond me.  My understanding is that the respective fb layer should take care of this (if supported) based on SupportXXbpp (especially since the *driver* does not need to care about this, as David told me; it just depends on which layer I choose above the driver level).

But anyway, my question was answered.  It seems to be safe to set this obscure SupportConvert32to24 flag if using the generic fb layer.

Thomas

--
Thomas Winischhofer                       Vienna/Austria
thomas AT winischhofer DOT net            http://www.winischhofer.net/
twini AT xfree86 DOT org
Re: SupportConvertXXtoXX
On Sat, Mar 06, 2004 at 03:28:09AM +0100, Thomas Winischhofer wrote:
> Mark Vojkovich wrote:
> > On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > > David Dawes wrote:
> > > > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > > > > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
> > > >
> > > > It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
> > > >
> > > >     case 24:
> > > >         if (pix24bpp == 24) {
> > > >             mod = "cfb24";
> > > >             reqSym = "cfb24ScreenInit";
> > > >         } else {
> > > >             mod = "xf24_32bpp";
> > > >             reqSym = "cfb24_32ScreenInit";
> > > >         }
> > > >
> > > > Most drivers use fb these days, and it has support for this built in and enabled automatically.
> > >
> > > So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)
> >
> > Do you mean the flag?  The layer above does not know whether or not the driver/HW supports a 24 bpp framebuffer.  The nv driver, for example, does not.
>
> Whether or not the hardware supports 24bpp (framebuffer depth, not color depth) should be determined by setting/clearing SupportXXbpp.  Why the *driver* needs to set SupportConvert is beyond me.  My understanding is that the respective fb layer should take care of this (if supported) based on SupportXXbpp (especially since the *driver* does not need to care about this, as David told me; it just depends on which layer I choose above the driver level).

There are two things here.  One, the *fb module isn't loaded at the point where this information is required.  Two, only the driver knows which (if any) *fb layer(s) it will use.  It is the driver's responsibility to characterise what it can do.  The cost, as currently implemented, of this model is a reasonable amount of boilerplate in drivers.

> But anyway, my question was answered.  It seems to be safe to set this obscure SupportConvert32to24 flag if using the generic fb layer.

Yes.  However, we didn't have today's generic fb layer when this stuff was first written.  Fortunately for its ease of adoption, the driver model didn't mandate a specific *fb layer or hard-code its expected characteristics :-)

Looking at the xf86SetDepthBpp() code, there appears to be another wrinkle, because these flags get cleared for (BITMAP_SCANLINE_UNIT == 64) platforms:

    #if BITMAP_SCANLINE_UNIT == 64
        /*
         * For platforms with 64-bit scanlines, modify the driver's depth24flags
         * to remove preferences for packed 24bpp modes, which are not currently
         * supported on these platforms.
         */
        depth24flags &= ~(SupportConvert32to24 | SupportConvert24to32 |
                          PreferConvert24to32 | PreferConvert32to24);
    #endif

This has been there for a long time (before we had fb).  I'm not sure if it is still valid or not.  Anyone with 64-bit scanline platforms care to comment?

David
Re: SupportConvertXXtoXX
On Fri, 5 Mar 2004, David Dawes wrote:
> On Sat, Mar 06, 2004 at 03:28:09AM +0100, Thomas Winischhofer wrote:
> > Mark Vojkovich wrote:
> > > On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > > > David Dawes wrote:
> > > > > On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> > > > > > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
> > > > >
> > > > > It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:
> > > > >
> > > > >     case 24:
> > > > >         if (pix24bpp == 24) {
> > > > >             mod = "cfb24";
> > > > >             reqSym = "cfb24ScreenInit";
> > > > >         } else {
> > > > >             mod = "xf24_32bpp";
> > > > >             reqSym = "cfb24_32ScreenInit";
> > > > >         }
> > > > >
> > > > > Most drivers use fb these days, and it has support for this built in and enabled automatically.
> > > >
> > > > So it is safe just to set these, I assume (since my driver uses fb).  (Just wondered why the *driver*, and not the layer taking care of this, has to (not) set these.)
> > >
> > > Do you mean the flag?  The layer above does not know whether or not the driver/HW supports a 24 bpp framebuffer.  The nv driver, for example, does not.
> >
> > Whether or not the hardware supports 24bpp (framebuffer depth, not color depth) should be determined by setting/clearing SupportXXbpp.  Why the *driver* needs to set SupportConvert is beyond me.  My understanding is that the respective fb layer should take care of this (if supported) based on SupportXXbpp (especially since the *driver* does not need to care about this, as David told me; it just depends on which layer I choose above the driver level).
>
> There are two things here.  One, the *fb module isn't loaded at the point where this information is required.  Two, only the driver knows which (if any) *fb layer(s) it will use.  It is the driver's responsibility to characterise what it can do.  The cost, as currently implemented, of this model is a reasonable amount of boilerplate in drivers.
>
> > But anyway, my question was answered.  It seems to be safe to set this obscure SupportConvert32to24 flag if using the generic fb layer.
>
> Yes.  However, we didn't have today's generic fb layer when this stuff was first written.  Fortunately for its ease of adoption, the driver model didn't mandate a specific *fb layer or hard-code its expected characteristics :-)
>
> Looking at the xf86SetDepthBpp() code, there appears to be another wrinkle, because these flags get cleared for (BITMAP_SCANLINE_UNIT == 64) platforms:
>
>     #if BITMAP_SCANLINE_UNIT == 64
>         /*
>          * For platforms with 64-bit scanlines, modify the driver's depth24flags
>          * to remove preferences for packed 24bpp modes, which are not currently
>          * supported on these platforms.
>          */
>         depth24flags &= ~(SupportConvert32to24 | SupportConvert24to32 |
>                           PreferConvert24to32 | PreferConvert32to24);
>     #endif
>
> This has been there for a long time (before we had fb).  I'm not sure if it is still valid or not.  Anyone with 64-bit scanline platforms care to comment?

I thought we stopped using 64 bit scanlines altogether.

Mark.
SupportConvertXXtoXX
What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?

(SupportConvert24to32 vice versa)

Thomas

--
Thomas Winischhofer                       Vienna/Austria
thomas AT winischhofer DOT net            http://www.winischhofer.net/
twini AT xfree86 DOT org
Re: SupportConvertXXtoXX
On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?

It's expected to support a 24bpp framebuffer.  Depth 24/32 bpp will get translated to depth 24/24 bpp.

> (SupportConvert24to32 vice versa)

I don't think that's actually implemented.

Mark.
Re: SupportConvertXXtoXX
Mark Vojkovich wrote:
> On Fri, 5 Mar 2004, Thomas Winischhofer wrote:
> > What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?
>
> It's expected to support a 24bpp framebuffer.

So far, so good.

> Depth 24/32 bpp will get translated to depth 24/24 bpp.

By whom (i.e. what layer)?  Does the video driver in any way need to take care of this?

Thomas

--
Thomas Winischhofer                       Vienna/Austria
thomas AT winischhofer DOT net            http://www.winischhofer.net/
twini AT xfree86 DOT org
Re: SupportConvertXXtoXX
On Fri, Mar 05, 2004 at 01:38:06AM +0100, Thomas Winischhofer wrote:
> What exactly does a video driver have to be able to do if the SupportConvert32to24 flag is set when calling xf86SetDepthBpp, provided the hardware supports, for instance, 24bpp (framebuffer depth) only?

It has to use a framebuffer layer that can do this conversion.  fb can, as can xf24_32bpp (if your driver uses cfb).  The s3virge driver is an example that can still be run with the xf24_32bpp method, and it does the following to figure out what to load:

    case 24:
        if (pix24bpp == 24) {
            mod = "cfb24";
            reqSym = "cfb24ScreenInit";
        } else {
            mod = "xf24_32bpp";
            reqSym = "cfb24_32ScreenInit";
        }

Most drivers use fb these days, and it has support for this built in and enabled automatically.

> (SupportConvert24to32 vice versa)

This was never implemented, although there is nothing stopping a driver from doing so.  Clients usually prefer 24-bit pixmaps with the pixels padded to 32 bits, so there isn't really much of a need for going the other way (pixmap bpp 24 -> hardware bpp 32).

David