Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-20 Thread Jesper Särnesjö
On Tue, Mar 19, 2013 at 8:18 AM, Albert Y. C. Lai tre...@vex.net wrote:
 On 13-03-18 09:19 AM, Jesper Särnesjö wrote:

 Interestingly, running the program in GHCi with the -fno-ghci-sandbox
 flag causes it to misbehave in the same way as when compiled:

 Then perhaps try to mimic default ghci, in the hope of getting good results:

 - compile with -threaded (more candidly, link with -threaded, it does not
 change code generation)

 - in the program, deliberately move the work to a forkIO-thread

I did this [1], and sure enough - it did result in a GHC-compiled
program behaving like one run in GHCi:

$ ghc -threaded -lglfw -framework OpenGL -fforce-recomp glfw_test_threaded.hs && ./glfw_test_threaded
[...]
NVIDIA GeForce GT 330M OpenGL Engine

I should mention that I tried both forkIO and forkOS, as well as
marking the foreign imports as "safe" and "unsafe", and, for my simple
test program, the result was the same either way. This is certainly not
the case in general, though, as the forkOS documentation explains [2].
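For reference, the forkOS variant can be sketched roughly like this (a minimal sketch, compiled with -threaded; `glMain` is a hypothetical stand-in for the GLFW/OpenGL work in the gist, not a name that appears there):

```haskell
import Control.Concurrent (forkOS, newEmptyMVar, putMVar, takeMVar)

-- Hypothetical placeholder for the block of foreign GLFW/OpenGL calls.
glMain :: IO ()
glMain = putStrLn "GL work goes here"

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkOS $ do      -- forkOS pins the work to a single, fresh OS thread
    glMain
    putMVar done ()
  takeMVar done         -- keep the main thread alive until the work finishes
```

The MVar is just the simplest way to make the main thread wait; without it, main could exit before the forked thread runs.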

I haven't yet worked out why simply moving all the work from the main
thread to another (OS) thread fixes this problem. Based on what I've
learned so far, that should, if anything, *prevent* things from
working correctly.

On that note, I did try to apply the same solution to a larger program
[3], and while it did allow the GPU switch to happen properly, it also
caused the program's window to become unresponsive. Like I mentioned
in my earlier email, Mac OS X (or rather Cocoa) is fussy about GUI
actions being run on the main thread.

So, in summary: in a Haskell program, using GLFW to initialize an
OpenGL context fails to trigger a GPU switch if done on the main
thread, but works as expected if done on a different thread. I need to
figure out *why* that is, so that I can find a way to simultaneously
appease Haskell and Cocoa.

Thanks for all your suggestions!

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] https://gist.github.com/sarnesjo/5151894#file-glfw_test_threaded-hs
[2] 
http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent.html#v:forkOS
[3] https://gist.github.com/sarnesjo/5116084#file-test-hs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-20 Thread Jesper Särnesjö
On Tue, Mar 19, 2013 at 12:19 AM, Jesper Särnesjö sarne...@gmail.com wrote:
 As a final note, I did learn that the GHC runtime generates SIGVTALRM
 signals to cause the scheduler to switch contexts. Perhaps this
 prevents GLFW from running properly? Looks like I'll need to brush up
 on my dtrace.

A bit of googling turned up a thread on the glasgow-haskell-users
mailing list, discussing a similar problem with the C MySQL
bindings [1]. One of the solutions mentioned there was to disable the
timer signals generated by the runtime [2]. This can be done using the
-V RTS flag, described as follows in the documentation:

Using a value of zero disables the RTS clock completely, and has the
effect of disabling timers that depend on it: the context switch timer
and the heap profiling timer. Context switches will still happen, but
deterministically and at a rate much faster than normal. Disabling the
interval timer is useful for debugging, because it eliminates a source
of non-determinism at runtime. [3]

Results:

$ ghc -rtsopts -lglfw -framework OpenGL -fforce-recomp glfw_test.hs
[...]
$ ./glfw_test
Apple Software Renderer
$ ./glfw_test +RTS -V0
NVIDIA GeForce GT 330M OpenGL Engine

Nice.

This solution seems to work perfectly for me. Since the foreign code
is allowed to run uninterrupted, the GPU switch happens, and since the
GUI actions stay on the main thread, the program's window responds to
keyboard and mouse input correctly.

Are there any downsides to disabling the timer signal? Why does the
documentation describe it primarily as a debugging aid?
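Incidentally, if -V0 turns out to be safe in general, GHC's -with-rtsopts flag can bake the option in at link time, so the program doesn't rely on being invoked with +RTS -V0 every time (a sketch, using the same file names as above):

```shell
# Bake -V0 into the binary so no +RTS arguments are needed at run time.
$ ghc -with-rtsopts=-V0 -lglfw -framework OpenGL -fforce-recomp glfw_test.hs
$ ./glfw_test
```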

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] 
http://www.haskell.org/pipermail/glasgow-haskell-users/2010-September/thread.html#19140
[2] 
http://www.haskell.org/pipermail/glasgow-haskell-users/2010-September/019156.html
[3] 
http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/runtime-control.html#rts-options-misc



Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-20 Thread Brandon Allbery
On Wed, Mar 20, 2013 at 9:02 AM, Jesper Särnesjö sarne...@gmail.com wrote:

 This solution seems to work perfectly for me. Since the foreign code
 is allowed to run uninterrupted, the GPU switch happens, and since the
 GUI actions stay on the main thread, the program's window responds to
 keyboard and mouse input correctly.


If that's correct, then I think you have found a bug in the GHC runtime: my
understanding is that the timers should be disabled while foreign code is
running, specifically to avoid confusing code that has no clue that there
is another runtime involved.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net


Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-18 Thread Jesper Särnesjö
On Mon, Mar 18, 2013 at 11:27 AM, Brandon Allbery allber...@gmail.com wrote:
 On Sun, Mar 17, 2013 at 7:58 PM, Jason Dagit dag...@gmail.com wrote:
 On Sat, Mar 16, 2013 at 6:53 PM, Jesper Särnesjö sarne...@gmail.com wrote:

 To be clear, I think this isn't really an OpenGL problem, but rather
 one related to FFI or event handling. If anyone could explain to me, [...]
 The release notes for 7.0.1 said this about that flag:

 There is a new -fno-ghci-sandbox flag, which stops GHCi running
 computations in a separate thread. In particular, this is useful for GLUT on
 OS X, which only works if being run on the main thread.

 Worth noting is that Jesper said it *works* in ghci, and fails when
 compiled

Interestingly, running the program in GHCi with the -fno-ghci-sandbox
flag causes it to misbehave in the same way as when compiled:

$ ghci -fno-ghci-sandbox -lglfw glfw_test.hs
[...]
*Main> main
Apple Software Renderer

This is starting to smell like a concurrency-related issue, then. I
should note that the GLFW library does use multiple OS threads, and I
know from previous experience that Mac OS X is fussy about GUI actions
being run on the main thread. The curious thing here, of course, is
that the behavior I am seeing is the exact opposite of that mentioned
in the release notes.

I've found some interesting reading material on how GHC handles the
interaction of concurrency and FFI, in particular the paper Extending
the Haskell Foreign Function Interface with Concurrency by Simon
Marlow and Simon Peyton-Jones [1], which even brings up OpenGL as an
example of why a programmer must be able to specify that a related
group of foreign calls are all made by the same OS thread. I haven't
yet had the time to dig too deep into this, but I have tried a few
things:

* I've compiled the program with -threaded (as suggested by John
Lato), with the foreign functions marked as "safe" and "unsafe" (as
suggested by Carter Schonwald).
* I've checked that the main thread of my Haskell program is bound to
an OS thread [2], which it is when using the threaded runtime. I've
also tried explicitly running the block of foreign calls in a bound
thread [3].
* I've made sure that there is only one GLFW library on my machine for
-lglfw to link with, and that it is the very same library my C program
links with.
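The bound-thread checks in the second bullet look roughly like this (a sketch, compiled with -threaded; the putStrLn is a stand-in for the block of foreign calls):

```haskell
import Control.Concurrent (isCurrentThreadBound, runInBoundThread)

main :: IO ()
main = do
  bound <- isCurrentThreadBound      -- True for main under -threaded
  putStrLn ("main thread bound: " ++ show bound)
  runInBoundThread $                 -- guarantee all calls below share one OS thread
    putStrLn "foreign calls would go here"
```

Note that runInBoundThread throws an error on the non-threaded runtime, so -threaded is required.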

None of the above helped. However, I will keep investigating and see
what I find.

As a final note, I did learn that the GHC runtime generates SIGVTALRM
signals to cause the scheduler to switch contexts [4][5]. Perhaps this
prevents GLFW from running properly? Looks like I'll need to brush up
on my dtrace.

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] http://community.haskell.org/~simonmar/papers/conc-ffi.pdf
[2] 
http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent.html#v:isCurrentThreadBound
[3] 
http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent.html#v:runInBoundThread
[4] http://joeyh.name/blog/entry/ghc_threaded_runtime_gotchas/
[5] 
http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Signals#RTSAlarmSignalsandForeignLibraries



Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-18 Thread Carter Schonwald
Hey Jesper,
thanks for the heads-up!

please continue to share your findings on this matter; it sounds like it'll
be really useful for folks!

-Carter


On Mon, Mar 18, 2013 at 9:19 AM, Jesper Särnesjö sarne...@gmail.com wrote:

 On Mon, Mar 18, 2013 at 11:27 AM, Brandon Allbery allber...@gmail.com
 wrote:
  On Sun, Mar 17, 2013 at 7:58 PM, Jason Dagit dag...@gmail.com wrote:
  On Sat, Mar 16, 2013 at 6:53 PM, Jesper Särnesjö sarnesjo@gmail.com wrote:
 
  To be clear, I think this isn't really an OpenGL problem, but rather
  one related to FFI or event handling. If anyone could explain to me, [...]
  The release notes for 7.0.1 said this about that flag:
 
  There is a new -fno-ghci-sandbox flag, which stops GHCi running
  computations in a separate thread. In particular, this is useful for
 GLUT on
  OS X, which only works if being run on the main thread.
 
  Worth noting is that Jesper said it *works* in ghci, and fails when
  compiled

 Interestingly, running the program in GHCi with the -fno-ghci-sandbox
 flag causes it to misbehave in the same way as when compiled:

 $ ghci -fno-ghci-sandbox -lglfw glfw_test.hs
 [...]
 *Main> main
 Apple Software Renderer

 This is starting to smell like a concurrency-related issue, then. I
 should note that the GLFW library does use multiple OS threads, and I
 know from previous experience that Mac OS X is fussy about GUI actions
 being run on the main thread. The curious thing here, of course, is
 that the behavior I am seeing is the exact opposite of that mentioned
 in the release notes.

 I've found some interesting reading material on how GHC handles the
 interaction of concurrency and FFI, in particular the paper Extending
 the Haskell Foreign Function Interface with Concurrency by Simon
 Marlow and Simon Peyton-Jones [1], which even brings up OpenGL as an
 example of why a programmer must be able to specify that a related
 group of foreign calls are all made by the same OS thread. I haven't
 yet had the time to dig too deep into this, but I have tried a few
 things:

 * I've compiled the program with -threaded (as suggested by John
 Lato), with the foreign functions marked as safe and unsafe (as
 suggested by Carter Schonwald).
 * I've checked that the main thread of my Haskell program is bound to
 an OS thread [2], which it is when using the threaded runtime. I've
 also tried explicitly running the block of foreign calls in a bound
 thread [3].
 * I've made sure that there is only one GLFW library on my machine for
 -lglfw to link with, and that it is the very same library my C program
 links with.

 None of the above helped. However, I will keep investigating and see
 what I find.

 As a final note, I did learn that the GHC runtime generates SIGVTALRM
 signals to cause the scheduler to switch contexts [4][5]. Perhaps this
 prevents GLFW from running properly? Looks like I'll need to brush up
 on my dtrace.

 --
 Jesper Särnesjö
 http://jesper.sarnesjo.org/

 [1] http://community.haskell.org/~simonmar/papers/conc-ffi.pdf
 [2]
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent.html#v:isCurrentThreadBound
 [3]
 http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent.html#v:runInBoundThread
 [4] http://joeyh.name/blog/entry/ghc_threaded_runtime_gotchas/
 [5]
 http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Signals#RTSAlarmSignalsandForeignLibraries




Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-18 Thread Albert Y. C. Lai

On 13-03-18 09:19 AM, Jesper Särnesjö wrote:

Interestingly, running the program in GHCi with the -fno-ghci-sandbox
flag causes it to misbehave in the same way as when compiled:


Then perhaps try to mimic default ghci, in the hope of getting good results:

- compile with -threaded (more candidly, link with -threaded, it does 
not change code generation)


- in the program, deliberately move the work to a forkIO-thread

(I suggest forkIO instead of forkOS because I have just tried:

$ ghci
GHCi, version 7.4.2: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> Control.Concurrent.isCurrentThreadBound
False

$ ghci -fno-ghci-sandbox
GHCi, version 7.4.2: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> Control.Concurrent.isCurrentThreadBound
True

Although, perhaps it doesn't matter.)




Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-17 Thread Carter Schonwald
Hey Jesper:
hrm... have you tried other compilation / ffi choices that can influence
the function call?

e.g. using the "safe" rather than "unsafe" modifier?
http://www.haskell.org/haskellwiki/GHC/Using_the_FFI#Introduction
(the "safe" modifier doesn't seem like it matters here, but it's a simple
experiment to check)
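For concreteness, the two import flavours for one and the same C function look like this (a sketch against GLFW's real glfwInit entry point; the Haskell names and module name are made up):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module GlfwImports where

import Foreign.C.Types (CInt)

-- "safe" lets the RTS keep running other Haskell threads during the call
-- and allows the C side to call back into Haskell; "unsafe" is cheaper,
-- but the call must not block for long or re-enter Haskell.
foreign import ccall safe   "glfwInit" glfwInitSafe   :: IO CInt
foreign import ccall unsafe "glfwInit" glfwInitUnsafe :: IO CInt
```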

likewise, another experiment would be to write your own C code wrapper around
that OpenGL call, and call that wrapper from Haskell.

OH, and your code snippet isn't specifying *where* the glfw code is linked
from! Might it be linking to the *wrong* variant of the library, since
you're not specifying where the code is? You might want to set up a little
cabal-configured project that specifies all of that stuff.

additionally, have you tried your code on GHC 7.6.2? (do this last, I
suppose)


I don't know if any of these ideas are actually helpful, but if they are,
please share what you've learned.

cheers
-Carter



On Sat, Mar 16, 2013 at 9:53 PM, Jesper Särnesjö sarne...@gmail.com wrote:

 On Thu, Mar 14, 2013 at 12:51 AM, Jesper Särnesjö sarne...@gmail.com
 wrote:
  In short, I have two programs, one written in Haskell [1] and one
  written in C [2], that consist of calls to the same functions, in the
  same order, to the same C library, but which do not exhibit the same
  behavior. Further, the Haskell program behaves differently when
  compiled using GHC, and when run in GHCi. I would like to know why
  this is, and how to fix it.

 To be clear, I think this isn't really an OpenGL problem, but rather
 one related to FFI or event handling. If anyone could explain to me,
 in general, how and why a call to a foreign function returning IO ()
 might cause different behavior in Haskell than in C, that might help
 me track down the problem.

 I've updated my test programs to use glGetString [3] to check which
 renderer is active. On my machine, it should return NVIDIA GeForce GT
 330M OpenGL Engine if rendering happens on the discrete GPU, and
 Intel HD Graphics OpenGL Engine or Apple Software Renderer
 otherwise. These are the results of running the C and Haskell programs
 in various ways:

 $ gcc -lglfw -framework OpenGL glfw_test.c && ./a.out
 NVIDIA GeForce GT 330M OpenGL Engine

 $ ghc -lglfw -framework OpenGL -fforce-recomp glfw_test.hs && ./glfw_test
 [...]
 Apple Software Renderer

 $ runhaskell -lglfw glfw_test.hs
 NVIDIA GeForce GT 330M OpenGL Engine

 $ ghci -lglfw glfw_test.hs
 [...]
 Prelude Main> main
 NVIDIA GeForce GT 330M OpenGL Engine

 The C program behaves as expected, as does the Haskell one when run
 using runhaskell or GHCi. Only the Haskell program compiled using GHC
 behaves incorrectly. Again, the OS event that signifies that a GPU
 switch has occurred fires either way, but for the compiled Haskell
 program, it fires with roughly a second's delay. Why would that be?

 --
 Jesper Särnesjö
 http://jesper.sarnesjo.org/

 [1] https://gist.github.com/sarnesjo/5151894#file-glfw_test-hs
 [2] https://gist.github.com/sarnesjo/5151894#file-glfw_test-c
 [3] http://www.opengl.org/sdk/docs/man3/xhtml/glGetString.xml




Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-17 Thread John Lato
Hello,

Unfortunately I don't have much to add.

On Wed, Mar 13, 2013 at 9:51 PM, Jesper Särnesjö sarne...@gmail.com wrote:

 Hi everybody,

 This started out on haskell-beginners, as a question about poor
 performance for a Haskell program using OpenGL. Thanks to a few good
 suggestions there, I've managed to figure out more or less what the
 underlying problem is, but not its cause.

 In short, I have two programs, one written in Haskell [1] and one
 written in C [2], that consist of calls to the same functions, in the
 same order, to the same C library, but which do not exhibit the same
 behavior. Further, the Haskell program behaves differently when
 compiled using GHC, and when run in GHCi. I would like to know why
 this is, and how to fix it.


There are two major differences between compiled code and code run from
ghci.  First, ghci always uses the threaded runtime (it is compiled with
-threaded).  Second, the optimization levels differ.

If you compile your program with -threaded, do you get the desired behavior
(e.g. does it work the same as with ghci)?

John L.


Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-17 Thread Jason Dagit
On Sat, Mar 16, 2013 at 6:53 PM, Jesper Särnesjö sarne...@gmail.com wrote:



To be clear, I think this isn't really an OpenGL problem, but rather
 one related to FFI or event handling. If anyone could explain to me,
 in general, how and why a call to a foreign function returning IO ()
 might cause different behavior in Haskell than in C, that might help
 me track down the problem.


I haven't finished reading your other email, but I can suggest a possible
answer to this query.

Lots of the native GUI libraries attach thread-local storage to the
process's initial thread. In GHCi, expressions are not evaluated in that
thread (in the default setup). One thing that may be worth investigating is
the difference between ghci behavior when you start with and without
-fno-ghci-sandbox.

The release notes for 7.0.1 said this about that flag:
"There is a new -fno-ghci-sandbox flag, which stops GHCi running
computations in a separate thread. In particular, this is useful for GLUT
on OS X, which only works if being run on the main thread."


Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-17 Thread Brandon Allbery
On Sun, Mar 17, 2013 at 7:58 PM, Jason Dagit dag...@gmail.com wrote:

 On Sat, Mar 16, 2013 at 6:53 PM, Jesper Särnesjö sarne...@gmail.com wrote:

 To be clear, I think this isn't really an OpenGL problem, but rather
 one related to FFI or event handling. If anyone could explain to me, [...]
 The release notes for 7.0.1 said this about that flag:

 There is a new -fno-ghci-sandbox flag, which stops GHCi running
 computations in a separate thread. In particular, this is useful for GLUT
 on OS X, which only works if being run on the main thread.


Worth noting is that Jesper said it *works* in ghci, and fails when
compiled

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net


Re: [Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-16 Thread Jesper Särnesjö
On Thu, Mar 14, 2013 at 12:51 AM, Jesper Särnesjö sarne...@gmail.com wrote:
 In short, I have two programs, one written in Haskell [1] and one
 written in C [2], that consist of calls to the same functions, in the
 same order, to the same C library, but which do not exhibit the same
 behavior. Further, the Haskell program behaves differently when
 compiled using GHC, and when run in GHCi. I would like to know why
 this is, and how to fix it.

To be clear, I think this isn't really an OpenGL problem, but rather
one related to FFI or event handling. If anyone could explain to me,
in general, how and why a call to a foreign function returning IO ()
might cause different behavior in Haskell than in C, that might help
me track down the problem.

I've updated my test programs to use glGetString [3] to check which
renderer is active. On my machine, it should return "NVIDIA GeForce GT
330M OpenGL Engine" if rendering happens on the discrete GPU, and
"Intel HD Graphics OpenGL Engine" or "Apple Software Renderer"
otherwise. These are the results of running the C and Haskell programs
in various ways:

$ gcc -lglfw -framework OpenGL glfw_test.c && ./a.out
NVIDIA GeForce GT 330M OpenGL Engine

$ ghc -lglfw -framework OpenGL -fforce-recomp glfw_test.hs && ./glfw_test
[...]
Apple Software Renderer

$ runhaskell -lglfw glfw_test.hs
NVIDIA GeForce GT 330M OpenGL Engine

$ ghci -lglfw glfw_test.hs
[...]
Prelude Main> main
NVIDIA GeForce GT 330M OpenGL Engine

The C program behaves as expected, as does the Haskell one when run
using runhaskell or GHCi. Only the Haskell program compiled using GHC
behaves incorrectly. Again, the OS event that signifies that a GPU
switch has occurred fires either way, but for the compiled Haskell
program, it fires with roughly a second's delay. Why would that be?

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] https://gist.github.com/sarnesjo/5151894#file-glfw_test-hs
[2] https://gist.github.com/sarnesjo/5151894#file-glfw_test-c
[3] http://www.opengl.org/sdk/docs/man3/xhtml/glGetString.xml



[Haskell-cafe] Compiled program using OpenGL fails to trigger GPU switch on Mac, but works in GHCi

2013-03-13 Thread Jesper Särnesjö
Hi everybody,

This started out on haskell-beginners, as a question about poor
performance for a Haskell program using OpenGL. Thanks to a few good
suggestions there, I've managed to figure out more or less what the
underlying problem is, but not its cause.

In short, I have two programs, one written in Haskell [1] and one
written in C [2], that consist of calls to the same functions, in the
same order, to the same C library, but which do not exhibit the same
behavior. Further, the Haskell program behaves differently when
compiled using GHC, and when run in GHCi. I would like to know why
this is, and how to fix it.

A bit of background. My original program used the GLFW library to
create an OpenGL rendering context, and rendered a 3D model using
OpenGL. I noticed that this program had very high CPU usage, when I
expected it to do most of its work on the GPU [3]. The reason for the
high CPU usage turned out to be that the program was in fact using a
software implementation of OpenGL [4]. My machine, a MacBook Pro
running Mac OS X 10.8.2, has two GPUs, a discrete (more powerful) and
an integrated one (more energy-efficient). The latter implements a
larger part of OpenGL in software, and the OS is supposed to switch
transparently between the two. In particular, it is supposed to switch
to the discrete GPU when a program tries to use OpenGL features not
supported by the integrated GPU [5]. I discovered that while this all
worked as intended when running the C version of my program, it did
not work quite so well for the Haskell program [6], which would get
stuck with a rendering context on the integrated GPU, which in
practice meant a software implementation and poor performance. I
reduced both programs to fairly minimal test cases.

Now, each program simply configures and creates a rendering context,
checks if it is hardware-accelerated (for the configuration used, on
my machine, this implies that the system has switched to the discrete
GPU), and then terminates. This is all done using calls to the GLFW C
library. The C program succeeds. The Haskell program fails if compiled
(with GHC 7.4.2) and run, but succeeds if run in GHCi.

Further, by monitoring which GPU is active, using gfxCardStatus [7]
and the system console, I've established that the switch happens
immediately following the execution of glfwOpenWindow, for the C
program and the Haskell program when run in GHCi. For the compiled
Haskell program, the switch is delayed by roughly a second. This delay
appears to be what causes the program to get stuck on the integrated
GPU.

Now, there are a lot of moving parts involved, and many places where
things could go wrong, making it tricky to even say where the problem
is. Still, a Haskell program consisting entirely of foreign function
calls in the IO monad, should surely behave the same as a C program
consisting of the same calls? Is this caused by lazy evaluation, and
if so, of what? Why does it work correctly in GHCi? I've bashed my
head against this for some time now, and have run out of good ideas. I
would really appreciate any input that lets me solve this, and get
back to the fun parts of 3D programming. ;)

To reproduce this, you'll need a Mac that has two GPUs (most of them
do, these days). I run Mac OS X 10.8.2, and another user reproduced
this on 10.7.5 [8]. You'll also need the GLFW library [9], which can
be built from source or installed using Homebrew. Please let me know
if I can provide any more information.

-- 
Jesper Särnesjö
http://jesper.sarnesjo.org/

[1] https://gist.github.com/sarnesjo/5151894#file-glfw_test-hs
[2] https://gist.github.com/sarnesjo/5151894#file-glfw_test-c
[3] http://www.haskell.org/pipermail/beginners/2013-March/011557.html
[4] http://www.haskell.org/pipermail/beginners/2013-March/011560.html
[5] 
http://developer.apple.com/library/mac/#qa/qa1734/_index.html%23//apple_ref/doc/uid/DTS40010791
[6] http://www.haskell.org/pipermail/beginners/2013-March/011601.html
[7] http://gfx.io
[8] http://www.haskell.org/pipermail/beginners/2013-March/011563.html
[9] http://www.glfw.org
