Re: [PD] [Gem] bit depth of display

2022-02-18 Thread Roman Haefeli
On Fri, 2022-02-18 at 09:47 +0100, cyrille henry wrote:
> 
> On 17/02/2022 at 21:24, Roman Haefeli wrote:
> 
> [...]
> 
> > My impression is that the OpenGL side is all 32-bit float. I sent
> > 'quality 1' to [pix_texture], which does (from what I can see) linear
> > interpolation. And I also tried bicubic interpolation with a shader
> > written by Cyrille Henry from 2007. The shader code uses type vec4
> > internally, and the GLSL spec says this is 32-bit float [1].
> > 
> 
> Computation is done with 32-bit floats, but that does not mean that
> the result is stored as a 32-bit float...
> The GPGPU example shows how to keep precision in textures (but not
> how to render in high precision).
> 
> In your example, you only need the Gem window to be rendered in
> 10 bits/color. Unfortunately, I don't think there is a flag or message
> to allow this for now. You should create a feature request.
> 
> As Claude says, adding dither is a good way to mask this problem.


Yeah, thanks. Good to know that the Gem window is rendered with 8 bits.
It turns out there is no real benefit in making sure the whole path is
10-bit. It is very likely that the projector in the installation space
supports only 8 bits anyway.

I was finally able to hack something together in your
bicubic_interpolation.frag by generating some noise that I add to the
result of the bicubic interpolation. It looks good to me.

Thanks a lot for your inputs, Claude and Cyrille.

Roman


___
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
https://lists.puredata.info/listinfo/pd-list


Re: [PD] [Gem] bit depth of display

2022-02-18 Thread cyrille henry



On 17/02/2022 at 21:24, Roman Haefeli wrote:

> [...]
>
> My impression is that the OpenGL side is all 32-bit float. I sent
> 'quality 1' to [pix_texture], which does (from what I can see) linear
> interpolation. And I also tried bicubic interpolation with a shader
> written by Cyrille Henry from 2007. The shader code uses type vec4
> internally, and the GLSL spec says this is 32-bit float [1].



Computation is done with 32-bit floats, but that does not mean that the result
is stored as a 32-bit float...
The GPGPU example shows how to keep precision in textures (but not how to
render in high precision).

In your example, you only need the Gem window to be rendered in 10 bits/color.
Unfortunately, I don't think there is a flag or message to allow this for now.
You should create a feature request.

As Claude says, adding dither is a good way to mask this problem.

cheers
c





Re: [PD] [Gem] bit depth of display

2022-02-17 Thread Claude Heiland-Allen

On 17/02/2022 20:24, Roman Haefeli wrote:

> Actually, since I'm already using a shader, I could try to add some
> noise there. Not totally sure how this should be done, though.

Something like https://pippin.gimp.org/a_dither/ using gl_FragCoord.xy
would be my first try.
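The appeal of a coordinate-based dither like a_dither is that the mask is a deterministic function of the pixel position (gl_FragCoord.xy in a shader), so no noise texture or RNG is needed. A NumPy sketch of the idea; the hash below is an assumption for illustration, not necessarily the exact formula from the linked page:

```python
import numpy as np

# Coordinate-based dither mask: a deterministic pseudo-random value in
# [0, 1] computed from the pixel position. Illustrative hash only.
x = np.arange(1920)
y = np.arange(4).reshape(-1, 1)
m = ((x ^ (y * 149)) * 1234 & 511) / 511.0

# Quantise a slow gradient to 8 bits, offsetting by the mask before
# flooring instead of adding random noise before rounding.
gradient = np.linspace(100.0, 104.0, 1920)
dithered = np.floor(gradient + m)

print(m.min() >= 0.0 and m.max() <= 1.0)            # True: mask stays in [0, 1]
print(np.abs(dithered - gradient).max() <= 1.0)     # True: error within 1 LSB
```

In a fragment shader the same expression would be evaluated per fragment from `gl_FragCoord.xy` and added to the colour before the framebuffer quantises it.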



Claude
--
https://mathr.co.uk






Re: [PD] [Gem] bit depth of display

2022-02-17 Thread Roman Haefeli
On Thu, 2022-02-17 at 19:30, Claude Heiland-Allen wrote:
> 
> On 17/02/2022 17:59, Roman Haefeli wrote:
> > the gradients between the
> > pixels show edges that look like low bit depth (and probably are
> > due to low bit depth).
> No clue about high bit depth output. Possible workaround: a shader
> that 
> does dithering could help mask the problem,

Oh, good idea. I didn't think of that.

>  that is if the OpenGL
> texture interpolation is not the source of the problem (hopefully
> it's done with floats; if not, maybe you can do interpolation in the
> shader too, after reading the texels without interpolation). Check
> the OpenGL specification for GL_LINEAR magnification filter details;
> maybe it says how much precision is guaranteed.

My impression is that the OpenGL side is all 32-bit float. I sent
'quality 1' to [pix_texture], which does (from what I can see) linear
interpolation. And I also tried bicubic interpolation with a shader
written by Cyrille Henry from 2007. The shader code uses type vec4
internally, and the GLSL spec says this is 32-bit float [1].

Actually, since I'm already using a shader, I could try to add some
noise there. Not totally sure how this should be done, though. 

> One thing you could do to diagnose is check pixel values of
> neighbouring 
> bands to see if they are off by one (in which case suspect needing 
> higher bit depth output) or more (in which case suspect OpenGL
> GL_LINEAR 
> precision being insufficient).

Ok. I'll try to measure this.
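The suggested measurement could be sketched like this in Python/NumPy; the input row is simulated here, not read from the actual screenshot:

```python
import numpy as np

# Diagnostic sketch: take one row of pixel values across the gradient,
# collapse runs of equal values into bands, and look at the step
# between neighbouring bands. Steps of exactly 1 point at output bit
# depth; larger steps point at interpolation precision.
def band_steps(row):
    starts = np.concatenate(([True], np.diff(row) != 0))  # first pixel of each band
    return np.diff(row[starts])                           # value step between bands

# Simulated row: a clean gradient displayed at 8 bits.
row = np.round(np.linspace(100.0, 104.0, 1920)).astype(int)
print(set(band_steps(row).tolist()))   # {1} -> suspect output bit depth
```

On a real screenshot one would load the image, slice out a row crossing the bands, and feed it to the same function.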

Thanks for your input,
Roman


[1] https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)




Re: [PD] [Gem] bit depth of display

2022-02-17 Thread Claude Heiland-Allen

Hi Roman,

On 17/02/2022 17:59, Roman Haefeli wrote:

> the gradients between the
> pixels show edges that look like low bit depth (and probably are due
> to low bit depth).

No clue about high bit depth output. Possible workaround: a shader that
does dithering could help mask the problem, that is if the OpenGL
texture interpolation is not the source of the problem (hopefully it's
done with floats; if not, maybe you can do interpolation in the shader
too, after reading the texels without interpolation). Check the OpenGL
specification for GL_LINEAR magnification filter details; maybe it says
how much precision is guaranteed.


One thing you could do to diagnose is check pixel values of neighbouring 
bands to see if they are off by one (in which case suspect needing 
higher bit depth output) or more (in which case suspect OpenGL GL_LINEAR 
precision being insufficient).



Claude
--
https://mathr.co.uk






[PD] [Gem] bit depth of display

2022-02-17 Thread Roman Haefeli
Hi 

I have a Gem patch for an installation that basically maps a 1x12px
image created with [pix_set] to a fullscreen [rectangle]. When the
pixel values are close enough to each other, the gradients between the
pixels show edges that look like low bit depth (and probably are due to
low bit depth). I am looking for a way to display the Gem window at a
higher bit depth. My external monitor advertises itself as capable of
30-bit (which I assume means 10 bits per channel).
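For a sense of scale, a back-of-envelope check with assumed numbers shows why this setup bands so visibly at 8 bits per channel:

```python
# Assumed numbers for illustration: a 1x12px source stretched across a
# fullscreen width, with neighbouring source pixels a few 8-bit steps
# apart. Each quantisation level then becomes a wide flat band.
width_px = 1920               # assumed fullscreen width
segments = 12 - 1             # a 1x12px image gives 11 gradient segments
seg_px = width_px / segments  # on-screen width of one segment
delta = 4                     # assumed difference between neighbours, in 8-bit steps
band_px = seg_px / delta      # width of each flat band on screen

print(round(seg_px), round(band_px))   # 175 44
```

A ~44-pixel-wide flat band next to another is exactly the kind of edge visible in the screenshot; with 10 bits per channel the bands would be four times narrower.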

Here are my questions:
  * Is it correct that OpenGL calculations are done with floats?
  * Are the gradients calculated with high (>8-bit) precision?
  * Is precision lost during transport to the display?
  * What can be done to feed a monitor/projector with a higher bit depth?
  * What can be done on macOS with an HDMI projector attached?

Here is a screenshot of the Gem display:
https://netpd.org/~roman/tmp/12px-gradients.png

Roman

