Re: [Haskell-cafe] opengl type confusion

2013-06-17 Thread Tom Ellis
On Sun, Jun 16, 2013 at 05:22:59PM -0700, bri...@aracnet.com wrote:
> > Vertex3 takes three arguments, all of which must be of the same instance of
> > VertexComponent.  Specifying GLdoubles in the signature of wireframe
> > specifies the types in the last three calls to Vertex3, but (0.0 ::
> > GLdouble) is still required on the first to fix the type there.  How else
> > could the compiler know that you mean 0.0 to be a GLdouble and not a
> > GLfloat?
> 
> it's curious that 
> 
> (0.0::GLdouble) 0.0 0.0 
> 
> is good enough and that 
> 
> (0.0::GLdouble) (0.0::GLdouble) (0.0::GLdouble)
> 
> > is not required.  I suspect that's because, as you point out, they all
> > have to be the same type, and ghc is being smart and saying that if the
> > first arg _must_ be a GLdouble (because I'm explicitly forcing the type),
> > then the rest must be too.

That is exactly the reason.
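A minimal sketch of that propagation, using a stand-in Vertex3 type (the real
one lives in the OpenGL package): all three fields share one type variable, so
annotating the first literal fixes the other two.

```haskell
-- Stand-in for OpenGL's Vertex3: one type variable shared by all fields.
data Vertex3 a = Vertex3 a a a deriving Show

main :: IO ()
main = print (Vertex3 (0.0 :: Double) 0.0 0.0)  -- one annotation pins all three
```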

Tom

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread briand
On Sun, 16 Jun 2013 22:19:22 +0100
Tom Ellis  wrote:

> On Sun, Jun 16, 2013 at 01:03:48PM -0700, bri...@aracnet.com wrote:
> > wireframe :: Double -> Double -> Double -> IO ()
> > wireframe wx wy wz = do 
> >   -- yz plane
> >   renderPrimitive LineLoop $ do
> >     vertex $ Vertex3 0.0 0.0 0.0
> >     vertex $ Vertex3 0.0 wy 0.0
> >     vertex $ Vertex3 0.0 wy wz
> >     vertex $ Vertex3 0.0 0.0 wz
> [...]
> > 
> > No instance for (VertexComponent Double)
> >   arising from a use of `vertex'
> [...]
> > 
> > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO() and 
> > using
> > (0.0::GLdouble) fixes it
> 
> Vertex3 takes three arguments, all of which must be of the same instance of
> VertexComponent.  Specifying GLdoubles in the signature of wireframe
> specifies the types in the last three calls to Vertex3, but (0.0 ::
> GLdouble) is still required on the first to fix the type there.  How else
> could the compiler know that you mean 0.0 to be a GLdouble and not a
> GLfloat?
> 
> Tom
> 


it's curious that 

(0.0::GLdouble) 0.0 0.0 

is good enough and that 

(0.0::GLdouble) (0.0::GLdouble) (0.0::GLdouble)

is not required.  I suspect that's because, as you point out, they all have to
be the same type, and ghc is being smart and saying that if the first arg _must_
be a GLdouble (because I'm explicitly forcing the type), then the rest must be
too.

Meanwhile, section 4.3.4 about defaulting is quite interesting. Didn't know
about that :-)

Thanks very much for the responses !

Brian





Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread L Corbijn
I seem to be making a mess of it: first accidentally posting an empty message,
and then forgetting to reply to the list. Thirdly, I forgot to mention that
my message only describes the 'GHCi magic'.

Lars

P.S. Conclusion: I shouldn't write complicated emails this late in the
evening.

-- Forwarded message --
From: L Corbijn 
Date: Mon, Jun 17, 2013 at 12:07 AM
Subject: Re: [Haskell-cafe] opengl type confusion
To: bri...@aracnet.com


On Sun, Jun 16, 2013 at 11:10 PM, L Corbijn  wrote:

>
>
>
> On Sun, Jun 16, 2013 at 10:42 PM,  wrote:
>
>> On Sun, 16 Jun 2013 16:15:25 -0400
>> Brandon Allbery  wrote:
>>
>> > On Sun, Jun 16, 2013 at 4:03 PM,  wrote:
>> >
>> > > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO()
>> and
>> > > using
>> > > (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
>> > >  There are many times I see the
>> >
>> >
>> > Haskell never "automagic"s types in that context; if it expects
>> GLdouble,
>> > it expects GLdouble. Pretending it's Double will not work. It "would" in
>> > the specific case that GLdouble were actually a type synonym for Double;
>> > however, for performance reasons it is not. Haskell Double is not
>> directly
>> > usable from the C-based API used by OpenGL, so GLdouble is a type
>> synonym
>> > for CDouble which is.
>> >
>> > compiler doing type conversion on numeric arguments although sometimes
>> > > the occasional fracSomethingIntegralorOther is required.
>> > >
>> >
>> > I presume the reason the type specification for numeric literals is
>> because
>> > there is no defaulting (and probably can't be without introducing other
>> > strange type issues) for GLdouble.
>> >
>>
>> What I was thinking about, using a very poor choice of words, was this :
>>
>>
>> *Main> let a = 1
>> *Main> :t a
>> a :: Integer
>> *Main> let a = 1::Double
>> *Main> a
>> 1.0
>> *Main> :t a
>> a :: Double
>> *Main>
>>
>> so normally 1 would be interpreted as an int, but if I declare 'a' a
>> Double then it gets "promoted" to a Double without me having to call a
>> conversion routine explicitly.
>>
>> That seems automagic to me.
>>
>> (0.0::GLdouble) works to make the compiler happy.  So it appears to be
>> taking care of the conversion automagically.
>>
>> So maybe a better question, I hope, is:
>>
>> How can I simply declare 0.0 to be (0.0::GLdouble) and have the
>> function call work?  Doesn't a conversion have to be happening, i.e.
>> shouldn't I really have to do (realToFrac 0.0) ?
>>
>> Brian
>>
>>
>
>
Oops, sorry for the empty reply, I accidentally hit the send button.

What you are seeing is defaulting (see
http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4),
which roughly speaking means that when a specific type is needed for a
number, the compiler first tries Integer, then Double, and as a last resort
fails.

Prelude> :t 1
1 :: Num a => a
Prelude> :t 1.0
1.0 :: Fractional a => a

So normally a number can be just any instance of the Num class, and any
number with a decimal point can be any Fractional instance. Now consider
let bindings.


The need for defaulting is caused by the monomorphism restriction (
http://www.haskell.org/haskellwiki/Monomorphism_restriction), which states
that let bindings should be monomorphic: roughly speaking, their types should
contain no type variables (unless of course you provide a type signature).

Prelude> let b = 1
Prelude> :t b
b :: Integer

Prelude> let c = 1.0
Prelude> :t c
c :: Double

So here you see the result of the combination. The monomorphism restriction
doesn't allow 'Num a => a' as the type for 'b', so the defaulting kicks in and
finds that its first guess, Integer, fits. Therefore 'b' gets type
Integer. For 'c', the guess Integer fails, as Integer isn't a Fractional
instance; the second guess, Double, is, so 'c' gets type Double.

You can see that the monomorphism restriction is to blame by disabling it

Prelude> :set -XNoMonomorphismRestriction
Prelude> let b = 1
Prelude> :t b
b :: Num a => a

But you shouldn't normally need to do this, as you can provide a specific
type signature instead.
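The point being made can be sketched as follows: with an explicit polymorphic
signature, the binding escapes the restriction and can be used at several
types, no NoMonomorphismRestriction needed.

```haskell
-- An explicit signature keeps the binding polymorphic.
one :: Num a => a
one = 1

main :: IO ()
main = do
  print (one :: Int)     -- 1
  print (one :: Double)  -- 1.0
```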

Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread Brandon Allbery
On Sun, Jun 16, 2013 at 4:42 PM,  wrote:

> On Sun, 16 Jun 2013 16:15:25 -0400
> Brandon Allbery  wrote:
> > On Sun, Jun 16, 2013 at 4:03 PM,  wrote:
> > > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO()
> and
> > > using
> > > (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
> > >  There are many times I see the
> >
> > I presume the reason the type specification for numeric literals is
> because
> > there is no defaulting (and probably can't be without introducing other
> > strange type issues) for GLdouble.
>
> What I was thinking about, using a very poor choice of words, was this :
>
> *Main> let a = 1
> *Main> :t a
> a :: Integer
> *Main> let a = 1::Double
> *Main> a
> 1.0
> *Main> :t a
> a :: Double
> *Main>
>
> so normally 1 would be interpreted as an int, but if I declare 'a' a
> Double then it gets "promoted" to a Double without me having to call a
> conversion routine explicitly.
>
> That seems automagic to me.
>

No magic involved, although some automation is. Take a look at the
`default` keyword in the Haskell Report (this is the "defaulting" I
mentioned earlier).

http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4

The "default `default`" is `default (Integer, Double)` which means that it
will try to resolve a numeric literal as type Integer, and if it gets a
type error it will try again with type Double.

You should use this same mechanism to make numeric literals work with
OpenGL code: neither Integer nor Double will produce a valid type for the
expression, but at the same time the compiler cannot infer a type because
there are two possibilities (GLfloat and GLdouble). You could therefore add
a declaration `default (Integer, Double, GLdouble)` so that it will try
GLdouble to resolve numeric literals when neither Integer nor Double will
work.
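A self-contained sketch of that mechanism. GLdouble here is a stand-in newtype
(the real one, a CDouble wrapper, lives in the OpenGL package), and Double is
deliberately dropped from the default list so the fall-through to GLdouble is
visible in the output:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Stand-in for OpenGL's GLdouble so the example runs without the package.
newtype GLdouble = GLdouble Double
  deriving (Eq, Show, Num, Fractional)

-- Extend the default list: literals that cannot be Integer fall through
-- to GLdouble.
default (Integer, GLdouble)

main :: IO ()
main = do
  print 1    -- Integer satisfies a plain Num constraint, so the first default wins
  print 2.5  -- Integer is not Fractional, so defaulting picks GLdouble
```

With GHC this prints 1 and then GLdouble 2.5, showing the second default being
chosen once the first fails.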

> How can I simply declare 0.0 to be (0.0::GLdouble) and have the
> function call work?  Doesn't a conversion have to be happening, i.e.
> shouldn't I really have to do (realToFrac 0.0) ?

The first part I just answered. As to the second, a conversion *is*
happening, implicitly as defined by the language; the question being, to
what type. A numeric literal has type (Num a => a), implemented by
inserting a call to `fromInteger` for literals without decimal points and
`fromRational` for others. But the compiler can't always work out what `a`
is in (Num a => a) without some help (the aforementioned `default`
declaration).
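Concretely, the elaboration looks roughly like this (a sketch, not the
compiler's actual output):

```haskell
import Data.Ratio ((%))

-- What the literals in "0 + 0.5 :: Double" elaborate to: an integral
-- literal goes through fromInteger, a decimal one through fromRational.
asDouble :: Double
asDouble = fromInteger 0 + fromRational (1 % 2)

main :: IO ()
main = print asDouble  -- 0.5
```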

-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net


Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread Tom Ellis
On Sun, Jun 16, 2013 at 01:03:48PM -0700, bri...@aracnet.com wrote:
> wireframe :: Double -> Double -> Double -> IO ()
> wireframe wx wy wz = do 
>   -- yz plane
>   renderPrimitive LineLoop $ do
>     vertex $ Vertex3 0.0 0.0 0.0
>     vertex $ Vertex3 0.0 wy 0.0
>     vertex $ Vertex3 0.0 wy wz
>     vertex $ Vertex3 0.0 0.0 wz
[...]
> 
> No instance for (VertexComponent Double)
>   arising from a use of `vertex'
[...]
> 
> Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO() and using
> (0.0::GLdouble) fixes it

Vertex3 takes three arguments, all of which must be of the same instance of
VertexComponent.  Specifying GLdoubles in the signature of wireframe
specifies the types in the last three calls to Vertex3, but (0.0 ::
GLdouble) is still required on the first to fix the type there.  How else
could the compiler know that you mean 0.0 to be a GLdouble and not a
GLfloat?
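A runnable sketch of the fix under discussion. 'GLdouble', 'Vertex3', and
'vertex' are minimal stand-ins here so the example runs without the OpenGL
package; in real code they come from Graphics.Rendering.OpenGL:

```haskell
-- Stand-in for OpenGL's GLdouble (really a CDouble wrapper).
newtype GLdouble = GLdouble Double deriving Show

instance Num GLdouble where
  GLdouble a + GLdouble b = GLdouble (a + b)
  GLdouble a * GLdouble b = GLdouble (a * b)
  abs (GLdouble a)        = GLdouble (abs a)
  signum (GLdouble a)     = GLdouble (signum a)
  negate (GLdouble a)     = GLdouble (negate a)
  fromInteger             = GLdouble . fromInteger

-- Stand-ins for OpenGL's Vertex3 and vertex.
data Vertex3 a = Vertex3 a a a deriving Show

vertex :: Show a => Vertex3 a -> IO ()
vertex = print

wireframe :: GLdouble -> GLdouble -> GLdouble -> IO ()
wireframe _wx wy wz = do
  vertex $ Vertex3 (0 :: GLdouble) 0 0  -- annotation pins the all-literal call
  vertex $ Vertex3 0 wy 0               -- wy fixes the type here, no annotation
  vertex $ Vertex3 0 wy wz
  vertex $ Vertex3 0 0 wz

main :: IO ()
main = wireframe 1 2 3
```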

Tom



Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread L Corbijn
On Sun, Jun 16, 2013 at 10:42 PM,  wrote:

> On Sun, 16 Jun 2013 16:15:25 -0400
> Brandon Allbery  wrote:
>
> > On Sun, Jun 16, 2013 at 4:03 PM,  wrote:
> >
> > > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO()
> and
> > > using
> > > (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
> > >  There are many times I see the
> >
> >
> > Haskell never "automagic"s types in that context; if it expects GLdouble,
> > it expects GLdouble. Pretending it's Double will not work. It "would" in
> > the specific case that GLdouble were actually a type synonym for Double;
> > however, for performance reasons it is not. Haskell Double is not
> directly
> > usable from the C-based API used by OpenGL, so GLdouble is a type synonym
> > for CDouble which is.
> >
> > compiler doing type conversion on numeric arguments although sometimes
> > > the occasional fracSomethingIntegralorOther is required.
> > >
> >
> > I presume the reason the type specification for numeric literals is
> because
> > there is no defaulting (and probably can't be without introducing other
> > strange type issues) for GLdouble.
> >
>
> What I was thinking about, using a very poor choice of words, was this :
>
>
> *Main> let a = 1
> *Main> :t a
> a :: Integer
> *Main> let a = 1::Double
> *Main> a
> 1.0
> *Main> :t a
> a :: Double
> *Main>
>
> so normally 1 would be interpreted as an int, but if I declare 'a' a
> Double then it gets "promoted" to a Double without me having to call a
> conversion routine explicitly.
>
> That seems automagic to me.
>
> (0.0::GLdouble) works to make the compiler happy.  So it appears to be
> taking care of the conversion automagically.
>
> So maybe a better question, I hope, is:
>
> How can I simply declare 0.0 to be (0.0::GLdouble) and have the function
> call work?  Doesn't a conversion have to be happening, i.e. shouldn't I
> really have to do (realToFrac 0.0) ?
>
> Brian
>
>


Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread briand
On Sun, 16 Jun 2013 16:15:25 -0400
Brandon Allbery  wrote:

> On Sun, Jun 16, 2013 at 4:03 PM,  wrote:
> 
> > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO() and
> > using
> > (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
> >  There are many times I see the
> 
> 
> Haskell never "automagic"s types in that context; if it expects GLdouble,
> it expects GLdouble. Pretending it's Double will not work. It "would" in
> the specific case that GLdouble were actually a type synonym for Double;
> however, for performance reasons it is not. Haskell Double is not directly
> usable from the C-based API used by OpenGL, so GLdouble is a type synonym
> for CDouble which is.
> 
> compiler doing type conversion on numeric arguments although sometimes
> > the occasional fracSomethingIntegralorOther is required.
> >
> 
> I presume the reason the type specification for numeric literals is because
> there is no defaulting (and probably can't be without introducing other
> strange type issues) for GLdouble.
> 

What I was thinking about, using a very poor choice of words, was this :


*Main> let a = 1
*Main> :t a
a :: Integer
*Main> let a = 1::Double
*Main> a
1.0
*Main> :t a
a :: Double
*Main> 

so normally 1 would be interpreted as an int, but if I declare 'a' a Double 
then it gets "promoted" to a Double without me having to call a conversion 
routine explicitly.

That seems automagic to me.

(0.0::GLdouble) works to make the compiler happy.  So it appears to be taking 
care of the conversion automagically.

So maybe a better question, I hope, is:

How can I simply declare 0.0 to be (0.0::GLdouble) and have the function call
work?  Doesn't a conversion have to be happening, i.e. shouldn't I really have
to do (realToFrac 0.0) ?

Brian




Re: [Haskell-cafe] opengl type confusion

2013-06-16 Thread Brandon Allbery
On Sun, Jun 16, 2013 at 4:03 PM,  wrote:

> Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO() and
> using
> (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
>  There are many times I see the


Haskell never "automagic"s types in that context; if it expects GLdouble,
it expects GLdouble. Pretending it's Double will not work. It "would" in
the specific case that GLdouble were actually a type synonym for Double;
however, for performance reasons it is not. Haskell Double is not directly
usable from the C-based API used by OpenGL, so GLdouble is a type synonym
for CDouble which is.

compiler doing type conversion on numeric arguments although sometimes
> the occasional fracSomethingIntegralorOther is required.
>

I presume the reason the type specification for numeric literals is because
there is no defaulting (and probably can't be without introducing other
strange type issues) for GLdouble.

In any case, the very fact that you refer to "automagic" and "type
conversion" indicates that you don't really have an understanding of how
Haskell's numeric types work; this will lead you into not only this kind of
confusion, but worse problems later. In particular, you're going to get
into dreadful messes where you expect Haskell to transparently deal with
strange combinations of numeric types as if Haskell were (almost-typeless)
Perl or something, and you'll have real trouble getting that code to work
until you sit down and figure out how strong typing and Haskell's numeric
typeclasses interact.
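When the values are variables rather than literals, Haskell expects explicit
conversions; the usual tools are fromIntegral and realToFrac (a quick sketch):

```haskell
main :: IO ()
main = do
  let d = 2.5 :: Double
      n = 7   :: Int
  print (realToFrac d :: Float)     -- Fractional-to-Fractional conversion
  print (fromIntegral n :: Double)  -- Integral to any Num instance
```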

-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net