Re: [Haskell-cafe] Caching the Result of a Transaction?

2008-04-26 Thread Jake Mcarthur

On Apr 26, 2008, at 7:18 PM, Conal Elliott wrote:


Here's another angle on part of Jake's question:

Can we implement a type 'TIVal a' (preferably without  
unsafePerformIO) with the following interface:


newIVal :: STM (TIVal a, a -> STM ()) -- or IO (...)
force   :: TIVal a -> STM a

instance Functor     TIVal
instance Applicative TIVal
instance Monad       TIVal

where

* 'newIVal' makes something like an IVar that can be written/defined  
(just once) with the returned a->STM().
* 'force' gets the value, retrying if not yet defined; once force is  
able to succeed, it always yields the same value.
* 'fmap f tiv' becomes defined (force yields a value instead of  
retrying) when tiv does.  Similarly for (<*>) and join.
* Forcing 'fmap f tiv' more than once results in f being called only  
once, i.e., the result is cached and reused, as in pure values.   
Similarly for (<*>) and join.


Well, I think I may have done it! This is only code that I typed up
really quickly; I haven't even made sure it compiles. Regardless, I
think the gist is pretty clear...


import Control.Concurrent.STM
import System.IO.Unsafe (unsafePerformIO)

data TIVal a = TIVal (STM a) (TMVar a)

newTIVal = do uc <- newEmptyTMVar
              c  <- newEmptyTMVar
              return (TIVal (takeTMVar uc) c, putTMVar uc)

force (TIVal uc c) = readTMVar c `orElse` cache
    where cache = do x <- uc
                     putTMVar c x
                     return x

unsafeNewEmptyTMVar = unsafePerformIO newEmptyTMVarIO
-- insert NOINLINE and/or other magical pragmas here

instance Functor TIVal where
    f `fmap` x = TIVal (return . f =<< force x) unsafeNewEmptyTMVar

-- Applicative, Monad, and Monoid omitted

I did have to resort to unsafePerformIO, but I think the reason is  
innocent enough to still feel good about. This implementation, if it  
works, seems to be embarrassingly simple.
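For what it's worth, here is roughly how I would expect the interface to be
exercised (equally untested; it assumes the snippet above compiles and that
Control.Concurrent.STM is in scope):

demo :: IO ()
demo = do
    (tiv, define) <- atomically newTIVal
    let squared = fmap (\x -> x * x) tiv   -- becomes defined when tiv does
    atomically (define (7 :: Int))         -- the write-once definition
    y <- atomically (force squared)        -- no retry now that tiv is defined
    print y                                -- should print 49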

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Caching the Result of a Transaction?

2008-04-26 Thread Matthew Brecknell
Conal Elliott said:
> Can we implement a type 'TIVal a' (preferably without unsafePerformIO)
> with the following interface:
> 
> newIVal :: STM (TIVal a, a -> STM ()) -- or IO (...)
> force   :: TIVal a -> STM a
> 
> instance Functor     TIVal
> instance Applicative TIVal
> instance Monad       TIVal
> 
> where
> 
> * 'newIVal' makes something like an IVar that can be written/defined
>   (just once) with the returned a->STM().
> * 'force' gets the value, retrying if not yet defined; once force is able
>   to succeed, it always yields the same value.
> * 'fmap f tiv' becomes defined (force yields a value instead of retrying)
>   when tiv does.  Similarly for (<*>) and join.
> * Forcing 'fmap f tiv' more than once results in f being called only
>   once,
> i.e., the result is cached and reused, as in pure values.  Similarly for
> (<*>) and join.

Perhaps what you and Jake are looking for are fully-fledged "triggers"
on transactional memory. To solve the fmap problem, have each TIVar
backed by a separate TVar. Then the TIVar returned by fmap would act as
a cache for the original TIVar. A trigger would watch the TVar which
backs the original TIVar, updating the cache TVar when the original
TIVar is written. Nested fmaps would work simply as a cascade of
triggers.
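As a very rough sketch of what I mean (using an imaginary primitive, since
nothing like it exists in the current STM API), fmap might look something
like this:

-- 'onCommitWrite' is purely hypothetical:
--   onCommitWrite :: TVar a -> (a -> STM ()) -> STM ()
-- It is imagined to register an action to run whenever the watched TVar is
-- written, which is the "trigger" described above.

data TIVar a = TIVar (TVar (Maybe a))

fmapTIVar :: (a -> b) -> TIVar a -> STM (TIVar b)
fmapTIVar f (TIVar src) = do
    dst <- newTVar Nothing
    onCommitWrite src $ \new ->
        case new of
            Just x  -> writeTVar dst (Just (f x))  -- fill the cache exactly once
            Nothing -> return ()
    return (TIVar dst)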

The STM authors considered the possibility of triggers in their paper on
invariants [1], but instead took the safer option of read-only
invariants.

[1]http://research.microsoft.com/~simonpj/papers/stm/stm-invariants.pdf

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (ray tracing digression, glome tutorial)

2008-04-26 Thread Jim Snow

David48 wrote:

On Sat, Apr 26, 2008 at 9:33 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:

  

>  Personally, I don't see the point in rendering a couple of million
> mathematically flat surfaces,



What about speed ?
That is a good point.  Ray tracing may give you a better image 
approximation than rasterizing triangles, but unless you're tracing an 
infinite number of rays, it is still just an approximation.  Some shapes 
(like spheres) can be rendered more quickly in a ray tracer than 
triangles, and they use less memory as well, so if you really want a 
sphere, there's no reason to approximate it with triangles.  For other 
objects, the trade-offs are more complex.
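To make the sphere case concrete: the entire hit test is one quadratic solve
per ray, with no mesh in sight. (Generic sketch, not Glome's actual code; it
assumes the ray direction is already normalized.)

type Vec = (Double, Double, Double)

sub :: Vec -> Vec -> Vec
sub (a,b,c) (x,y,z) = (a-x, b-y, c-z)

dot :: Vec -> Vec -> Double
dot (a,b,c) (x,y,z) = a*x + b*y + c*z

-- Distance along the ray (origin o, unit direction d) to a sphere of radius
-- r centred at cen, or Nothing on a miss.
raySphere :: Vec -> Vec -> Vec -> Double -> Maybe Double
raySphere o d cen r
    | disc < 0  = Nothing
    | t > 0     = Just t
    | otherwise = Nothing
  where
    oc   = sub o cen
    b    = 2 * dot d oc
    c    = dot oc oc - r*r
    disc = b*b - 4*c          -- the quadratic's 'a' is 1 for a unit direction
    t    = (-b - sqrt disc) / 2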

...

Unfortunately, the SDL is so complex it's well-nigh impossible to 
support in other third-party applications.


I don't think it would be "impossible" to support in 3rd party apps. A 
lot of work, certainly! Especially the way the parser does parsing and 
macro expansion in the same pass, so a macro body need not contain a 
syntactically-complete code fragment. I think if somebody sat down and 
wrote something that would do all the macro expansion, variable 
substitution and so forth to leave just few geometry descriptions, 
*those* would be quite easy to parse and manipulate for 3rd parties.


I guess a pov importer wouldn't necessarily be all that difficult, or an 
exporter, for that matter.  (Supporting every single primitive type, 
texture, and rendering feature would be a daunting challenge, but just 
supporting the basics might be relatively simple.)  The problem comes 
when you want to import a nice short hand-edited script, and then do a 
few things to it and export it again: suddenly the file turns into a 
multi-megabyte monstrosity with all the loops unrolled and all the 
recursive macros expanded and you lose the ability to hand-edit the 
scene.  Depending on what you're doing, this might not be a problem.




Personally, I'd quite like to write my own ray tracer to address 
some of these limitations. But every time I try, I end up hitting 
design issues [Haskell works very differently to Java] or 
performance issues [which I don't even know how to begin debugging]. 
Not to mention that POV-Ray uses sophisticated techniques like 
volume bounding that I know nothing about. (There's nothing like 
using an inherently superior algorithm to make your code orders of 
magnitude faster...)


I haven't really run into any issues where Haskell didn't let me do 
what I want, except for the performance counters thing mentioned way 
back at the beginning of this thread (and which I've more or less 
given up on for now, since the acceleration structure seems to be 
working okay and I have other things to work on).


Well, for example, Haskell doesn't have heterogeneous lists - which are 
trivial in any OOP language. That's quite awkward to get around. Also, 
both spheres and cylinders have a "radius" property, but then you end 
up with name clashes. Again, a non-issue in OOP languages. [I gather 
there's some new GHC language extension to partially solve this one - 
but only when types are statically known. In the general case, you'd 
have to write a "radius class" and define instances... yuck!]


It's funny you mention that; those are actually problems I ran into, but
(having learned my lesson the hard way in OCaml) I decided not to try to
force the language to do things my way, but instead to do things in a way
that Haskell makes easy.

For instance, I started out by making each primitive a separate type 
(Sphere, Triangle, etc...), and then made a type class that defines a 
ray intersection method (and a few other odds and ends), but then I 
realized that I can't do something as simple as store a list of 
primitives (or if there is in fact a way, I'm not aware of it). 


Instead, I made a single large sum type with all my primitives:

data Solid = Sphere {center :: Vec,
                     radius, invradius :: Flt}
           | Triangle {v1, v2, v3 :: Vec}
           | TriangleNorm {v1, v2, v3, n1, n2, n3 :: Vec}
           | Disc Vec Vec Flt      -- position, normal, r*r
           | Cylinder Flt Flt Flt  -- radius height1 height2
           | Cone Flt Flt Flt Flt  -- r clip1 clip2 height
           | Plane Vec Flt         -- normal, offset from origin
           | Box Bbox
           | Group [Solid]
           | Intersection [Solid]
           | Bound Solid Solid
           | Difference Solid Solid
           | Bih {bihbb :: Bbox, bihroot :: BihNode}
           | Instance Solid Xfm
           | Tex Solid Texture
           | Nothing deriving Show
etc...

(strictness annotations and a few broken primitives omitted for brevity)

so that now I can easily make a list of primitives.  (In fact, that's 
what a Group is.)  I also gave up on using named fields, since coming up 
with a separate name for each one gets ugly:  instead of radius, you 
have sphere_radius, cylinder_radius, cone_radius, disc_radius, etc...
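The payoff is that a "heterogeneous" scene is now just an ordinary list, and
dispatch is plain pattern matching. A toy illustration of the same idea (the
Shape type and its fields are made up for this example, not taken from Glome):

data Shape = Ball Double                 -- radius
           | Slab Double Double Double   -- width, height, depth
           | Bunch [Shape]               -- a group, like Group above

surfaceArea :: Shape -> Double
surfaceArea (Ball r)     = 4 * pi * r * r
surfaceArea (Slab x y z) = 2 * (x*y + y*z + x*z)
surfaceArea (Bunch ss)   = sum (map surfaceArea ss)

scene :: [Shape]                         -- mixed primitives, no type class needed
scene = [Ball 1, Slab 1 2 3, Bunch [Ball 2, Ball 3]]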


Cones and spheres and bo

Re: [Haskell-cafe] Caching the Result of a Transaction?

2008-04-26 Thread Conal Elliott
Here's another angle on part of Jake's question:

Can we implement a type 'TIVal a' (preferably without unsafePerformIO) with
the following interface:

newIVal :: STM (TIVal a, a -> STM ()) -- or IO (...)
force   :: TIVal a -> STM a

instance Functor     TIVal
instance Applicative TIVal
instance Monad       TIVal

where

* 'newIVal' makes something like an IVar that can be written/defined (just
once) with the returned a->STM().
* 'force' gets the value, retrying if not yet defined; once force is able to
succeed, it always yields the same value.
* 'fmap f tiv' becomes defined (force yields a value instead of retrying)
when tiv does.  Similarly for (<*>) and join.
* Forcing 'fmap f tiv' more than once results in f being called only once,
i.e., the result is cached and reused, as in pure values.  Similarly for
(<*>) and join.

   - Conal

On Sat, Apr 26, 2008 at 9:54 AM, Jake Mcarthur <[EMAIL PROTECTED]>
wrote:

> I have a problem I've been trying to work around using the existing STM
> API, and so far it seems that I may be unable to do it. For more background,
> see my blog post at <
> http://geekrant.wordpress.com/2008/04/25/stm-caching-need/>. Here, for
> brevity, I will only describe exactly what I think I need, not what it's
> for.
>
> Say I have a function f :: STM a. The transaction reads from one or more
> TMVars, performs some computation, and returns the result in the STM monad.
> Also, in this scenario, it is known that once the TMVars have values, those
> values will never be changed again (write once, read many, somewhat like
> IVars before they were removed). Now say I try to use this function as so.
>
>liftM2 (,) f f
>
> So the desired result is a pair in the STM monad where both components are
> the result from f. The problem I have is that, in the above example, the
> TMVars are all read twice and the computations are all performed twice, once
> for each of the components of the resulting pair. In many cases, this may be
> the correct thing to do because the values of the TMVars may have changed,
> but what about this case where I _know_ that the values have not been
> modified?
>
> What I need is a way to cache the result of f so that future uses of f
> don't have to reread from the TMVars, even across multiple transactions,
> maybe even leading to the eventual garbage collection of the TMVars if they
> are not used elsewhere.
>
> Right now I think the only way to do this would be to change the STM
> implementation slightly and create a new primitive function. If there is a
> way to do something like this with the current STM API, I would love to hear
> suggestions. Any ideas?
>
> - Jake McArthur
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Trouble compiling collections-0.3 (from Hackage)

2008-04-26 Thread Jake Mcarthur

On Apr 26, 2008, at 4:25 PM, David F. Place wrote:


Data/Collections.hs:154:17:
  Could not find module `Data.ByteString.Lazy':
it is a member of package bytestring-0.9.0.4, which is hidden


In the .cabal file, add "bytestring" to the dependencies property.
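That is, the build-depends line should end up looking something like this
(the other entries here are placeholders, not the package's actual
dependency list):

  build-depends: base, containers, bytestring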

- Jake
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Trouble compiling collections-0.3 (from Hackage)

2008-04-26 Thread David F. Place

Hi,

I am not able to build collections-0.3 because of an error.  I hope 
someone can give me a hint.   Below, please find a short transcript with 
various version information.


Cheers, David


[EMAIL PROTECTED] ~]$ cd Desktop/collections-0.3
[EMAIL PROTECTED] collections-0.3]$ runghc Setup.hs configure
Configuring collections-0.3...
[EMAIL PROTECTED] collections-0.3]$ runghc Setup.hs build
Preprocessing library collections-0.3...
Building collections-0.3...

Data/Collections.hs:154:17:
   Could not find module `Data.ByteString.Lazy':
 it is a member of package bytestring-0.9.0.4, which is hidden
[EMAIL PROTECTED] collections-0.3]$ uname -a
Linux congo 2.6.16-1.2111_FC5 #1 SMP Thu May 4 21:16:04 EDT 2006 x86_64 
x86_64 x86_64 GNU/Linux

[EMAIL PROTECTED] collections-0.3]$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.8.2
[EMAIL PROTECTED] collections-0.3]$ ghc-pkg list
/usr/local/lib/ghc-6.8.2/package.conf:
   ALUT-2.1.0.0, Cabal-1.2.3.0, GLUT-2.1.1.1, HUnit-1.2.0.0,
   OpenAL-1.3.1.1, OpenGL-2.2.1.1, QuickCheck-1.1.0.0, array-0.1.0.0,
   base-3.0.1.0, bytestring-0.9.0.1, bytestring-0.9.0.4,
   cgi-3001.1.5.1, containers-0.1.0.1, directory-1.0.0.0, fgl-5.4.1.1,
   filepath-1.1.0.0, (ghc-6.8.2), haskell-src-1.0.1.1,
   haskell98-1.0.1.0, hpc-0.5.0.0, html-1.0.1.1, mtl-1.1.0.0,
   network-2.1.0.0, old-locale-1.0.0.0, old-time-1.0.0.0,
   packedstring-0.1.0.0, parallel-1.0.0.0, parsec-2.1.0.0,
   pretty-1.0.0.0, process-1.0.0.0, random-1.0.0.0, readline-1.0.1.0,
   regex-base-0.72.0.1, regex-compat-0.71.0.1, regex-posix-0.72.0.2,
   rts-1.0, stm-2.1.1.0, template-haskell-2.2.0.0, time-1.1.2.0,
   unix-2.3.0.0, xhtml-3000.0.2.1
[EMAIL PROTECTED] collections-0.3]$



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lookup tables & style guidelines

2008-04-26 Thread Jan-Willem Maessen


On Apr 26, 2008, at 7:41 AM, Adrian Hey wrote:


Jan-Willem Maessen wrote:

On Apr 24, 2008, at 11:33 AM, Adrian Hey wrote:
Also, if you're likely to be using union/intersection a lot you  
should

know that Data.Map/Set are very slow for this because they use the
not efficient hedge algorithm :-)
OK, I'm going to bite here: What's the efficient algorithm for  
union on balanced trees, given that hedge union was chosen as being  
more efficient than naive alternatives (split and merge, repeated  
insertion)?  My going hypothesis has been "hedge union is an  
inefficient algorithm, except that it's better than all those other  
inefficient algorithms".


Divide and conquer seems to be the most efficient, though not the
algorithm presented in the Adams paper.


Just to clarify: divide and conquer splits one tree on the root value  
of the other (possibly avoiding enforcing the balance metric until  
after joining trees, though not obvious how / if that's useful)?  The  
definition of "divide and conquer" on trees without a fixed structure  
is rather unclear, which is why the question comes up in the first  
place.



The hedge algorithm performs many
more comparisons than are needed, which is obviously bad if you don't
know how expensive those comparisons are going to be.


That makes sense.  I found myself having the same kinds of thoughts  
when reading Knuth's analyses of (eg) different binary search  
algorithms in TAOCP v.3; if comparison was the dominant cost, which  
algorithm looked best suddenly changed.


-Jan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread Andrew Coppin

david48 wrote:

On Sat, Apr 26, 2008 at 9:33 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:

  

 Personally, I don't see the point in rendering a couple of million
mathematically flat surfaces,



What about speed ?
  


Speed is not always the most important thing. ;-)

["The best things come to those who wait" and all that...]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread Sebastian Sylvan
On Sat, Apr 26, 2008 at 10:21 AM, david48
<[EMAIL PROTECTED]>
wrote:

> On Sat, Apr 26, 2008 at 9:33 AM, Andrew Coppin
> <[EMAIL PROTECTED]> wrote:
>
> >  Personally, I don't see the point in rendering a couple of million
> > mathematically flat surfaces,
>
> What about speed ?
>

If you tessellate the curve so that there is zero error, then it's probably
going to be faster to ray trace the actual curve, since you'll essentially
have loads of very small (1 pixel or thereabouts) triangles...



-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] asserting the type of a binding in a "do" expression

2008-04-26 Thread Philip Weaver
On Fri, Apr 25, 2008 at 11:49 PM, Brandon S. Allbery KF8NH
<[EMAIL PROTECTED]> wrote:
>
>  On Apr 26, 2008, at 2:36 , Ken Takusagawa wrote:
>
>
> > But this does not:
> >
> >  foo::IO a;
> >  foo = do{
> >   (x::a) <- bar;
> >   return x;};
> >
> > Error message: A pattern type signature cannot bind scoped type
> > variables `a' unless the pattern has a rigid type context.
> >
>
Yeah, using the "forall" is exactly what you want to fix this problem.
 It puts the type variable in scope throughout the definition of the
function.
>  This works for me (in a slightly out of date HEAD) if I explicitly forall
> the declaration as per the ghc manual (see section 8.7.6.3):
>
>  > bar :: forall b. IO b
>  > bar =  return undefined -- just want a type for now
>  > foo :: forall a. IO a
>
>  > foo =  do { (x :: a) <- bar; return x; }
>
>  --
>  brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
>  system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
>  electrical and computer engineering, carnegie mellon universityKF8NH
>
>
>
>
>  ___
>  Haskell-Cafe mailing list
>  Haskell-Cafe@haskell.org
>  http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Replacing RDMS - why I want this retrying?

2008-04-26 Thread Mads Lindstrøm
Hi,

Marc Weber wrote:

> > Another question is: why do you want to replace RDBMSes?
> a) Speed. A simple HAppS state benchmark shows that inserting records
>can be 10 times faster than MySQL;
>don't know whether it's because of switching processes, parsing SQL queries?

You could try using prepared statements; see
http://dev.mysql.com/tech-resources/articles/4.1/prepared-statements.html .
According to the article, this will save the parsing overhead but not the
plan-preparation overhead. However, the article is about MySQL 4.1, so maybe
newer versions of MySQL only need to prepare the execution plan once. PostgreSQL
seems to support caching both the parse result and the prepared plan (see
http://www.postgresql.org/docs/8.1/interactive/sql-prepare.html ).
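From Haskell, HDBC exposes prepared statements directly, so the reuse would
look roughly like the following (untested sketch; the table and column names
are made up):

import Database.HDBC
import Database.HDBC.PostgreSQL (connectPostgreSQL)

-- Prepare once, execute many times with different parameters, so the server
-- only has to parse (and possibly plan) the query a single time.
insertMany :: [(String, Int)] -> IO ()
insertMany rows = do
    conn <- connectPostgreSQL "dbname=test"
    stmt <- prepare conn "INSERT INTO records (name, value) VALUES (?, ?)"
    mapM_ (\(n, v) -> execute stmt [toSql n, toSql v]) rows
    commit conn
    disconnect conn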


/Mads Lindstrøm

> b) Type safety. HaskellDB is nice.. But it's limiting because you can't
>optimize queries very well. 
> Having something (maybe completeley in mem as HAppS proposes it) beeing
> as easy as Data.Map would be nice.
> 
> Sincerly
> Marc Weber
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Caching the Result of a Transaction?

2008-04-26 Thread Jake Mcarthur
I have a problem I've been trying to work around using the existing  
STM API, and so far it seems that I may be unable to do it. For more  
background, see my blog post at <http://geekrant.wordpress.com/2008/04/25/stm-caching-need/>. Here, for brevity, I will only describe exactly what I think I  
need, not what it's for.


Say I have a function f :: STM a. The transaction reads from one or  
more TMVars, performs some computation, and returns the result in the  
STM monad. Also, in this scenario, it is known that once the TMVars  
have values, those values will never be changed again (write once,  
read many, somewhat like IVars before they were removed). Now say I  
try to use this function as so.


liftM2 (,) f f

So the desired result is a pair in the STM monad where both components  
are the result from f. The problem I have is that, in the above  
example, the TMVars are all read twice and the computations are all  
performed twice, once for each of the components of the resulting  
pair. In many cases, this may be the correct thing to do because the  
values of the TMVars may have changed, but what about this case where  
I _know_ that the values have not been modified?


What I need is a way to cache the result of f so that future uses of f  
don't have to reread from the TMVars, even across multiple  
transactions, maybe even leading to the eventual garbage collection of  
the TMVars if they are not used elsewhere.


Right now I think the only way to do this would be to change the STM  
implementation slightly and create a new primitive function. If there  
is a way to do something like this with the current STM API, I would  
love to hear suggestions. Any ideas?


- Jake McArthur
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Replacing RDMS - why I want this retrying?

2008-04-26 Thread Marc Weber
> Have you looked at http://happs.org/ ?
:) It should be used with HAppS.
But it's only a state system. It doesn't tell you yet how to organize
your data. The IxSet is nice but it's not relational.

> Another question is: why do you want to replace RDBMSes?
a) Speed. A simple HAppS state benchmark shows that inserting records
   can be 10 times faster than MySQL;
   don't know whether it's because of switching processes, parsing SQL queries?
b) Type safety. HaskellDB is nice.. But it's limiting because you can't
   optimize queries very well. 
Having something (maybe completely in mem as HAppS proposes it) being
as easy as Data.Map would be nice.

Sincerely
Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Generic functional references for free, now!

2008-04-26 Thread Sterling Clover
Recent discussions inspired me to cook up the attached, which through  
controlled abuse of various extensions gives functional references  
for all records deriving Data and Typeable for free, with no template  
haskell necessary. Composition is fully supported, as is  
"overloading" of standard record accessors. For the sake of  
preserving at least mild sanity, the (.) operator is not overloaded,  
and composition is instead provided via an overloaded `o`.


For anyone that doesn't mind the absurdity of how this is  
implemented, it should be suitable for "drop in" use.


For those that do mind the absurdity, it nonetheless serves as a  
proof-of-concept for how far Haskell's reflective capacities can be  
pushed.


Cheers, and happy hacking,
Sterl.

Example usage:

data Test = Test {t1 :: Int, t2 :: Int, t3 :: String, t4 ::  
InnerTest} deriving (Data, Typeable, Show)


data InnerTest = InnerTest {t'1 :: Int, t'2 :: Int, t'3 :: String}  
deriving (Data, Typeable, Show)


testData = Test {t1 = 1, t2 = 2, t3 = "foo", t4 = InnerTest {t'1 = 2,  
t'2 = 3, t'3 = "bar"}}


*GenericFRef> set t1 23 testData
Test {t1 = 23, t2 = 2, t3 = "foo", t4 = InnerTest {t'1 = 2, t'2 = 3,  
t'3 = "bar"}}


*GenericFRef> set (t'1 `o` t4) 23 testData
Test {t1 = 1, t2 = 2, t3 = "foo", t4 = InnerTest {t'1 = 23, t'2 = 3,  
t'3 = "bar"}}


*GenericFRef> update (t2) (\x->x*x) testData
Test {t1 = 1, t2 = 4, t3 = "foo", t4 = InnerTest {t'1 = 2, t'2 = 3,  
t'3 = "bar"}}


*GenericFRef> update (t'2 `o` t4) (\x->x*x) testData
Test {t1 = 1, t2 = 2, t3 = "foo", t4 = InnerTest {t'1 = 2, t'2 = 9,  
t'3 = "bar"}}


p.s. I have a nagging sensation that somebody may have done this  
before, although I can't trace the source.





GenericFRef.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Simplest possible Fasta shootout entry. How do I zap the ugly line? Suggest any other improvements.

2008-04-26 Thread Richard Kelsall

(Extracting these questions from my previous thread for clarity.)

Below is my simplest possible program to solve the Fasta shootout
benchmark.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=all
http://haskell.org/haskellwiki/Shootout/Fasta

I can see one remaining flaw - the line marked 'Ugly'. What's the best
way to get rid of this line?

Any other suggestions for simplifying or improving the program would
also be interesting.

This code is about three or four times slower than the current fastest
GHC entry for the Fasta benchmark. I'll elaborate it for speed when
I've produced the best version regardless of speed.

Richard.


{-# OPTIONS -O -fexcess-precision #-}
-- The Computer Language Shootout : Fasta
-- http://shootout.alioth.debian.org/
-- Simple solution by Richard Kelsall.
-- http://www.millstream.com/

import System

main = do
n <- getArgs >>= readIO . head

title "ONE" "Homo sapiens alu"
writeLined (cycle alu) (n * 2)

title "TWO" "IUB ambiguity codes"
let (r1, r2) = splitAt (fromIntegral (n * 3)) (rand 42)  -- Ugly !!
writeLined (map (look iubs) r1) (n * 3)

title "THREE" "Homo sapiens frequency"
writeLined (map (look homs) r2) (n * 5)

title :: String -> String -> IO ()
title a b = putStrLn $ ">" ++ a ++ " " ++ b

look :: [(Char, Float)] -> Float -> Char
look [(c, _)] _ = c
look ((c, f) : cfs) r = if r < f
   then c
   else look cfs (r - f)

lineWidth = 60

writeLined :: [Char] -> Integer -> IO ()
writeLined cs 0 = return ()
writeLined cs n = do
let w = min n lineWidth
(cs1, cs2) = splitAt (fromInteger w) cs
putStrLn cs1
writeLined cs2 (n - w)

rand :: Int -> [Float]
rand seed = newran : (rand newseed)
where
im = 139968
ia = 3877
ic = 29573
newseed = (seed * ia + ic) `rem` im
newran = fromIntegral newseed / fromIntegral im

alu = "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA\
  \TCACCTGAGGTCAGGAGTTCGAGACCAGCCTGGCCAACATGGTGAAAGTCTCTACT\
  \ATACATTAGCCGGGCGTGGTGGCGCGCGCCTGTAATCCCAGCTACTCGGGAG\
  \GCTGAGGCAGGAGAATCGCTTGAACCCGGGAGGCGGAGGTTGCAGTGAGCCGAGATCGCG\
  \CCACTGCACTCCAGCCTGGGCGACAGAGCGAGACTCCGTCTCA"

iubs = [('a', 0.27), ('c', 0.12), ('g', 0.12), ('t', 0.27), ('B', 0.02),
('D', 0.02), ('H', 0.02), ('K', 0.02), ('M', 0.02), ('N', 0.02),
('R', 0.02), ('S', 0.02), ('V', 0.02), ('W', 0.02), ('Y', 0.02)]

homs = [('a', 0.3029549426680), ('c', 0.1979883004921),
('g', 0.1975473066391), ('t', 0.3015094502008)]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Richard Kelsall

Brandon S. Allbery KF8NH wrote:


On Apr 26, 2008, at 9:02 , Richard Kelsall wrote:


I'm now wondering why my splitAtRK function in the following code
makes it run in 11 seconds given a parameter of 250 but it takes
14 seconds when I change it to splitAt. Am I accidentally invoking


It's somewhat unusual to build the standard libraries with -O2, I 
think.  (It can be done but the build takes a very long time.)


Also, 11 vs. 14 seconds seems not that much of a difference when you're 
talking 250 items, especially given how inefficient Haskell lists 
(Strings are lists) are.




Yes, well spotted. If I lower the compile level from -O2 to -O the
splitAtRK version takes 14.5 seconds vs 14 seconds for the built-in
version. Thank you. That solves the puzzle.

For the benchmarks I expect they use a default packaged GHC; I don't
imagine I could get them to use specially compiled libraries to bump
up the GHC score. That gives an interesting dilemma: whether I could
get away with adding a small relevant snippet of library code to the
program in order to give it -O2 level compilation. I wonder how much
the other compilers optimise their library code.


Richard.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Replacing RDMS - global lock and STM preventing retrying?

2008-04-26 Thread Mads Lindstrøm
Hi

Marc Weber wrote:
> What would be the right way to go to replace RDBMS (postgres/ mysql) etc
> using haskell only for small to medium sized (web)applications?
> I guess one way to go is using STM.
> But what happens if you have some tables each row represented as TVar
> and you'd like to do a full backup? Some occasionally occurring updates
> on single rows will make the atomic action get all rows and write them
> to disk retry again and again? Is there a way to make the update action
> retry in this case?
> And then you can implement something like: try it 20 times; if the action
> doesn't succeed, acquire a global lock! Wow.
> 
> Has anyone already implemented such a RDBMS replacement?

Have you looked at http://happs.org/ ?

Their HappS-State seems somewhat similar to what you are proposing.

> Anyone interested in working on this?
> 
> Marc Weber

Another question is: why do you want to replace RDBMSes?


Greetings,

Mads Lindstrøm


> 
> One solution I came up with within minutes :) I love Haskell. You write it
> down, fix an error, and it works :)
> Would you prefer another way to solve this?
> 
> --packages: containers, binary, stm, mtl, random
> module Main where
> import System.IO.Unsafe
> import Random
> import Control.Concurrent
> import Control.Monad
> import Control.Concurrent.STM
> import Control.Monad.Trans
> 
> -- running count of actions. if set to -1, a transaction has acquired the global lock
> globalLock = unsafePerformIO $ newTVarIO (0::Int)
> 
> modifyTVar tvar f = do
>   v <- readTVar tvar
>   writeTVar tvar $ f v
> 
> -- of course this should be in its own monad to force using this function
> -- myAtomically: acquires the global lock
> -- of course I don't need 5 atomically calls, but this way an action will not
> -- be retried if only the global count changes
> myAtomically aquireGlobalLock stmAction =
>   if aquireGlobalLock
> then do
>   atomically $ do
> runningCount <- readTVar globalLock
> when (runningCount /= 0) retry
> writeTVar globalLock (negate 1)
> -- other  actions should be retrying
>   atomically $ do 
> stmAction
> writeTVar globalLock 0
>   else do
>   atomically $ do
> runningCount <- readTVar globalLock
> when (runningCount == (negate 1)) retry
> modifyTVar globalLock (+1)
>   atomically stmAction
>   atomically $ modifyTVar globalLock (\x -> x -1)
> 
> -- log utility printing start / stop of an action 
> stsp :: (MonadIO m) =>  String -> m r ->  m r
> stsp msg act = do
>   liftIO $ putStrLn $ "start: " ++ msg
>   r <- act
>   liftIO $ putStrLn $ "stop: " ++ msg
>   return r
> 
> data Table rec = Table { table :: [TVar rec] }
> 
> newTable rowcount = liftM Table $ mapM newTVarIO [(1::Int)..rowcount]
> 
> dumpTable fn t = do
>   dat <- myAtomically True $ mapM readTVar $ table t
>   writeFile fn $ show dat
> 
> disturb t@(Table (row:_)) = do
>   stsp "disturbing" $ do
> v <- randomRIO (0,100)
> myAtomically False  $ writeTVar row v
>   threadDelay 100
>   disturb t -- loop 
> 
> main = do
>   stsp "application" $ do
> table <- newTable 10
> forkIO $ disturb table
> stsp "dumping" $ dumpTable "dump" table
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread Ketil Malde
david48 <[EMAIL PROTECTED]> writes:

>> Personally, I don't see the point in rendering a couple of million
>> mathematically flat surfaces,

> What about speed ?

"If it doesn't have to be correct, it can be arbitrarily fast"

:-)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Replacing RDMS - global lock and STM preventing retrying?

2008-04-26 Thread Marc Weber
What would be the right way to go to replace RDBMS (postgres/ mysql) etc
using haskell only for small to medium sized (web)applications?
I guess one way to go is using STM.
But what happens if you have some tables each row represented as TVar
and you'd like to do a full backup? Some occasionally occurring updates
on single rows will make the atomic action get all rows and write them
to disk retry again and again? Is there a way to make the update action
retry in this case?
And then you can implement something like: try it 20 times; if the action
doesn't succeed, acquire a global lock! Wow.

Has anyone already implemented such a RDBMS replacement?
Anyone interested in working on this?

Marc Weber

One solution I came up with within minutes :) I love Haskell. You write it
down, fix an error, and it works :)
Would you prefer another way to solve this?

--packages: containers, binary, stm, mtl, random
module Main where
import System.IO.Unsafe
import Random
import Control.Concurrent
import Control.Monad
import Control.Concurrent.STM
import Control.Monad.Trans

-- running count of actions. if set to -1, a transaction has acquired the global lock
globalLock = unsafePerformIO $ newTVarIO (0::Int)

modifyTVar tvar f = do
  v <- readTVar tvar
  writeTVar tvar $ f v

-- of course this should be in its own monad to force using this function
-- myAtomically: acquires the global lock
-- of course I don't need 5 atomically calls, but this way an action will not
-- be retried if only the global count changes
myAtomically aquireGlobalLock stmAction =
  if aquireGlobalLock
then do
  atomically $ do
runningCount <- readTVar globalLock
when (runningCount /= 0) retry
writeTVar globalLock (negate 1)
-- other  actions should be retrying
  atomically $ do 
stmAction
writeTVar globalLock 0
  else do
  atomically $ do
runningCount <- readTVar globalLock
when (runningCount == (negate 1)) retry
modifyTVar globalLock (+1)
  atomically stmAction
  atomically $ modifyTVar globalLock (\x -> x -1)

-- log utility printing start / stop of an action 
stsp :: (MonadIO m) =>  String -> m r ->  m r
stsp msg act = do
  liftIO $ putStrLn $ "start: " ++ msg
  r <- act
  liftIO $ putStrLn $ "stop: " ++ msg
  return r

data Table rec = Table { table :: [TVar rec] }

newTable rowcount = liftM Table $ mapM newTVarIO [(1::Int)..rowcount]

dumpTable fn t = do
  dat <- myAtomically True $ mapM readTVar $ table t
  writeFile fn $ show dat

disturb t@(Table (row:_)) = do
  stsp "disturbing" $ do
v <- randomRIO (0,100)
myAtomically False  $ writeTVar row v
  threadDelay 100
  disturb t -- loop 

main = do
  stsp "application" $ do
table <- newTable 10
forkIO $ disturb table
stsp "dumping" $ dumpTable "dump" table

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Brandon S. Allbery KF8NH


On Apr 26, 2008, at 9:02 , Richard Kelsall wrote:


I'm now wondering why my splitAtRK function in the following code
makes it run in 11 seconds given a parameter of 250 but it takes
14 seconds when I change it to splitAt. Am I accidentally invoking


It's somewhat unusual to build the standard libraries with -O2, I  
think.  (It can be done but the build takes a very long time.)


Also, 11 vs. 14 seconds seems not that much of a difference when  
you're talking 250 items, especially given how inefficient  
Haskell lists (Strings are lists) are.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Richard Kelsall

Duncan Coutts wrote:

On Fri, 2008-04-25 at 17:30 +0100, Richard Kelsall wrote:

I've just been investigating a performance oddity in using splitAt
on a long stream of random numbers. I don't understand why GHC
appears to want to traverse the first part of the list twice.

GHC seems to implement the splitAt function something like

splitAt n xs = (take n xs, drop n xs)

whereas Hugs is something like

splitAt n (x : xs) = (x : xs', xs'')
 where (xs', xs'') = splitAt (n-1) xs

which seems much more sensible to me. Wouldn't it be better to change
GHC to the Hugs method? Have I misunderstood something?


Actually GHC uses this definition, in GHC.List:

#ifdef USE_REPORT_PRELUDE

splitAt n xs   =  (take n xs, drop n xs)

#else /* hack away */

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
where
splitAt# :: Int# -> [a] -> ([a], [a])
splitAt# 0# xs = ([], xs)
splitAt# _  xs@[]  = (xs, xs)
splitAt# m# (x:xs) = (x:xs', xs'')
  where
(xs', xs'') = splitAt# (m# -# 1#) xs

#endif /* USE_REPORT_PRELUDE */

So ghc's version should be of equivalent strictness to the hugs version.

What's interesting here is that the H98 specification of splitAt is
silly. It got 'simplified' from a previous version of the Haskell spec
and in so doing it was made less strict.

With this definition:
splitAt n xs   =  (take n xs, drop n xs)

splitAt _|_ _|_ = (_|_, _|_)

but with the sensible definition it'd return _|_

and that's really the only point of having splitAt, so that you can walk
down the list once rather than twice. If someone needs the very lazy
version there's always take and drop.

Duncan



That looks good, I didn't see this 'hack away' version when I found
splitAt on the web.

I'm now wondering why my splitAtRK function in the following code
makes it run in 11 seconds given a parameter of 250 but it takes
14 seconds when I change it to splitAt. Am I accidentally invoking
the (take, drop) version of splitAt? Why is mine so much faster than
the built-in version? (Using GHC 6.8.2, W2K, Intel Core 2 Duo 2.33GHz)
Maybe mine isn't working properly somehow.

(I hadn't intended to post this code just yet because I wanted to
do a bit more testing etc then ask for suggestions for simplifying
and improving it. I actually want to get rid of the line containing
splitAt because it's ugly. All suggestions for improvement gratefully
received. The time function is just temporary. This code is about three
or four times slower than the current fastest Haskell entry for the
Fasta shootout benchmark. I'll elaborate it for speed when I've got
down to the simplest version possible.)

Richard.


{-# OPTIONS -O2 -fexcess-precision #-}
--
-- The Computer Language Shootout : Fasta
-- http://shootout.alioth.debian.org/
--
-- Simple solution by Richard Kelsall.
-- http://www.millstream.com/
--

import System

import Text.Printf
import System.CPUTime

time :: IO t -> IO t
time a = do
start <- getCPUTime
v <- a
end   <- getCPUTime
let diff = (fromIntegral (end - start)) / (10 ^12)
printf "Calc time %0.3f \n" (diff :: Double)
return v

main = do
time $ comp

comp :: IO ()
comp = do
n <- getArgs >>= readIO . head

title "ONE" "Homo sapiens alu"
writeLined (cycle alu) (n * 2)

title "TWO" "IUB ambiguity codes"
let (r1, r2) = splitAtRK (fromIntegral (n * 3)) (rand 42)
writeLined (map (look iubs) r1) (n * 3)

title "THREE" "Homo sapiens frequency"
writeLined (map (look homs) r2) (n * 5)

splitAtRK n xs | n <= 0 = ([], xs)
splitAtRK _ []  = ([], [])
splitAtRK n (x : xs) = (x : xs', xs'')
   where (xs', xs'') = splitAtRK (n - 1) xs

title :: String -> String -> IO ()
title a b = putStrLn $ ">" ++ a ++ " " ++ b

look :: [(Char, Float)] -> Float -> Char
look [(c, _)] _ = c
look ((c, f) : cfs) r = if r < f
   then c
   else look cfs (r - f)

lineWidth = 60

writeLined :: [Char] -> Integer -> IO ()
writeLined cs 0 = return ()
writeLined cs n = do
let w = min n lineWidth
(cs1, cs2) = splitAt (fromInteger w) cs
putStrLn cs1
writeLined cs2 (n - w)

rand :: Int -> [Float]
rand seed = newran : (rand newseed)
where
im = 139968
ia = 3877
ic = 29573
newseed = (seed * ia + ic) `rem` im
newran = fromIntegral newseed / fromIntegral im

alu = "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA\
  \TCACCTGAGGTCAGGAGTTCGAGACCAGCCTGGCCAACATGGTGAAAGTCTCTACT\
  \ATACATTAGCCGGGCGTGGTGGCGCGCGCCTGTAATCCCAGCTACTCGGGAG\
  \GCTGAGGCAGGAGAATCGCTTGAACCCGGGAGGCGGAGGTTGCAGTGAGCCGAGATCGCG\
  \CCACTGCACTCCAGCCTGGGCGACAGAGCGAGACTCCGTCTCA"

iubs = [('a', 0.27), ('c', 0.12), ('g', 0.12), ('t', 0.27), ('B', 0.02),
('D', 0.02), ('H', 0.02), ('K', 0.02), ('M', 0.02), ('N', 0.02),
('R', 0.02), ('S', 0.02), ('V', 0.02), ('W', 0.02), ('Y', 0.02)]

homs = [('a', 0.3029549426680), ('c', 0.1979883004921),
('g', 0.1975473066391), ('t', 0.3015094502008)]

Re: [Haskell-cafe] lookup tables & style guidelines

2008-04-26 Thread Adrian Hey

Jan-Willem Maessen wrote:


On Apr 24, 2008, at 11:33 AM, Adrian Hey wrote:


Also, if you're likely to be using union/intersection a lot you should
know that Data.Map/Set are very slow for this because they use the
not efficient hedge algorithm :-)


OK, I'm going to bite here: What's the efficient algorithm for union on 
balanced trees, given that hedge union was chosen as being more 
efficient than naive alternatives (split and merge, repeated 
insertion)?  My going hypothesis has been "hedge union is an inefficient 
algorithm, except that it's better than all those other inefficient 
algorithms".


Divide and conquer seems to be the most efficient, though not the
algorithm presented in the Adams paper. The hedge algorithm performs many
more comparisons than are needed, which is obviously bad if you don't
know how expensive those comparisons are going to be. IIRC it was
something like 4..5 times as many for 2 sets of a million or so random
Ints.

But even in favourable circumstances (tree elements are boxed Ints)
divide and conquer on AVL trees seemed much faster than Hedge on
Data.Set. Of course ideally we would want implementations of Hedge
for AVL and divide and conquer for Data.Set too, but I didn't feel
inclined to write them.
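For illustration, the basic divide and conquer scheme looks like this on
plain (unbalanced) binary search trees. A real implementation would rebalance
when gluing the halves back together, and this is not claimed to be exactly
what either the Adams paper or my AVL code does:

data Tree a = Tip | Node (Tree a) a (Tree a)

-- Split a tree at a pivot: (elements < x, elements > x), dropping x itself.
split :: Ord a => a -> Tree a -> (Tree a, Tree a)
split _ Tip = (Tip, Tip)
split x (Node l y r)
    | x < y     = let (ll, lr) = split x l in (ll, Node lr y r)
    | x > y     = let (rl, rr) = split x r in (Node l y rl, rr)
    | otherwise = (l, r)

-- Divide and conquer union: split the second tree on the root of the first,
-- then recurse on the two halves independently.
union :: Ord a => Tree a -> Tree a -> Tree a
union Tip t = t
union t Tip = t
union (Node l x r) t = Node (union l tl) x (union r tr)
    where (tl, tr) = split x t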

Regards
--
Adrian Hey

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Function Type Calculation (revisited)

2008-04-26 Thread PR Stanley

Hi
I know we've already looked at the topic of function type calculation 
though last time I didn't have the chance to go through it 
thoroughly. So here it is again. Apologies for the repetition. I've 
had a try at calculating function types for two examples below. So to 
start with I'd be grateful for an assessment of my efforts. All 
comments are welcome.

Thanks,
Paul

[1]
funk f x = f (funk f) x

f :: a
x :: b
funk f x :: c
therefore funk :: a -> b -> c

RHS
f (funk f) x :: c

f (funk f) :: d -> c
x :: d

f :: e -> d -> c

funk :: h -> e
f :: h

unification
f :: a = h = (e -> d -> c)
x :: b = d

therefore funk :: ((h -> e) -> b -> c) -> b -> c

[2]
w f = f f

Assigning Types
f :: a
w f :: b
therefore w :: a -> b

RHS
f f :: b

f :: c -> b
f :: c

f :: a = b = c = (c -> b)

therefore w :: (a -> a) -> a

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] n00b circular dep question

2008-04-26 Thread Bulat Ziganshin
Hello Stuart,

Saturday, April 26, 2008, 1:57:47 PM, you wrote:

> Are you sure? I would be very interested in a switch that
> automatically generates hs-boot files, but I've been unable to find
> any mention of it.

no, I never used this feature myself, so you are probably better informed


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] n00b circular dep question

2008-04-26 Thread Stuart Cook
On Sat, Apr 26, 2008 at 4:07 AM, Bulat Ziganshin
<[EMAIL PROTECTED]> wrote:
>  2. ghc supports this part of the standard in a rather awkward way - you
>  need to generate .hs-boot files using some switch (look into docs),
>  which is like .h files generated automatically from .cpp. once these files
>  are generated, your circular deps will be ok

Are you sure? I would be very interested in a switch that
automatically generates hs-boot files, but I've been unable to find
any mention of it.
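(For anyone following along, the manual workflow that the GHC manual does
document looks roughly like this; the module names are invented:)

-- A.hs-boot: a hand-written "forward declaration" of module A
module A where
data TA
f :: TA -> Int

-- B.hs: breaks the import cycle by importing the boot file
module B where
import {-# SOURCE #-} A
data TB = TB TA
g :: TB -> Int
g (TB a) = f a

-- A.hs: the real module, which has to match what the boot file declared
module A where
import B
data TA = TA Int
f :: TA -> Int
f (TA n) = n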


Stuart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why doesn't GHC use the Hugs definition of splitAt to avoid traversing the first part of the list twice?

2008-04-26 Thread Duncan Coutts

On Fri, 2008-04-25 at 17:30 +0100, Richard Kelsall wrote:
> I've just been investigating a performance oddity in using splitAt
> on a long stream of random numbers. I don't understand why GHC
> appears to want to traverse the first part of the list twice.
> 
> GHC seems to implement the splitAt function something like
> 
> splitAt n xs = (take n xs, drop n xs)
> 
> whereas Hugs is something like
> 
> splitAt n (x : xs) = (x : xs', xs'')
>  where (xs', xs'') = splitAt (n-1) xs
> 
> which seems much more sensible to me. Wouldn't it be better to change
> GHC to the Hugs method? Have I misunderstood something?

Actually GHC uses this definition, in GHC.List:

#ifdef USE_REPORT_PRELUDE

splitAt n xs   =  (take n xs, drop n xs)

#else /* hack away */

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
where
splitAt# :: Int# -> [a] -> ([a], [a])
splitAt# 0# xs = ([], xs)
splitAt# _  xs@[]  = (xs, xs)
splitAt# m# (x:xs) = (x:xs', xs'')
  where
(xs', xs'') = splitAt# (m# -# 1#) xs

#endif /* USE_REPORT_PRELUDE */

So ghc's version should be of equivalent strictness to the hugs version.

What's interesting here is that the H98 specification of splitAt is
silly. It got 'simplified' from a previous version of the Haskell spec
and in so doing it was made less strict.

With this definition:
splitAt n xs   =  (take n xs, drop n xs)

splitAt _|_ _|_ = (_|_, _|_)

but with the sensible definition it'd return _|_

and that's really the only point of having splitAt, so that you can walk
down the list once rather than twice. If someone needs the very lazy
version there's always take and drop.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: a faster, accumulating mapM (was Re: [Haskell-cafe] mapM vs mapM_ performance)

2008-04-26 Thread Neil Mitchell
Hi

> I didn't say I agree, I most certainly don't. What I meant with my
>  comment was that a slowdown of 10x, just to preserve laziness, is
>  perfect fuel for those who claim that laziness is good in theory but
>  bad in practice.

A bad implementation of laziness will always be slower than a bad
implementation of strictness - as we have strict CPUs. However,
laziness gives you some really cool opportunities for deforestation
and supercompilation - so at some point Haskell will overcome the
performance penalty of laziness and start to reap the performance
benefits - I think...

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread david48
On Sat, Apr 26, 2008 at 9:33 AM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:

>  Personally, I don't see the point in rendering a couple of million
> mathematically flat surfaces,

What about speed ?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] asserting the type of a binding in a "do" expression

2008-04-26 Thread Henning Thielemann


On Sat, 26 Apr 2008, Ken Takusagawa wrote:


This works (with -fglasgow-exts):

foo::IO Int;
foo = do{
  (x::Int) <- bar;
  return x;};

bar = undefined

But this does not:

foo::IO a;
foo = do{
  (x::a) <- bar;
  return x;};

Error message: A pattern type signature cannot bind scoped type
variables `a' unless the pattern has a rigid type context.


'asTypeOf' is the answer for Haskell98. You can also write your own helper 
functions which might simplify that for your use cases.
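For the concrete case in the first example, that looks like this (Haskell 98,
no extensions; 'bar' is just a stub here):

bar :: IO Int
bar = return 42

-- asTypeOf :: a -> a -> a   (it is just 'const' with a more restrictive type)
foo :: IO Int
foo = do
    x <- bar
    return (x `asTypeOf` (undefined :: Int))

-- or a one-off helper that documents the intended type of the binding:
asIntM :: Monad m => m Int -> m Int
asIntM = id

foo2 :: IO Int
foo2 = do
    x <- asIntM bar
    return x

For the polymorphic 'IO a' version there is nothing of type 'a' to mention in
Haskell 98, which is exactly why the scoped-type-variables extension exists.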

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread Andrew Coppin

Jim Snow wrote:
The Scene Description Language (SDL) is the best and worst thing about 
POV-Ray.


Indeed.

It's very intuitive and user-friendly, so a person can reasonably 
write a complex scene in pure code without using an external GUI editor.


Not to mention that there's a smooth and continuous path from "I 
rendered a reflective sphere on a chequered plane" to "I built a complex 
CSG object" to "I wrote my first animation" to "I just built a particle 
physics system / fractal terrain generator / inverse kinetics animator / 
whatever".


Unfortunately, the SDL is so complex it's well-nigh impossible to 
support in other third-party applications.


I don't think it would be "impossible" to support in 3rd party apps. A 
lot of work, certainly! Especially the way the parser does parsing and 
macro expansion in the same pass, so a macro body need not contain a 
syntactically-complete code fragment. I think if somebody sat down and 
wrote something that would do all the macro expansion, variable 
substitution and so forth to leave just few geometry descriptions, 
*those* would be quite easy to parse and manipulate for 3rd parties.


It's also slow. The scene has to be re-parsed for every frame.  For 
complex scenes, the scene parsing could take longer than the actual 
render.


For complex scenes, or scenes involving complex algorithms like fractal 
generation or something, parsing can indeed take far longer than 
rendering. The SDL is basically a scripting language. It's kinda 
flexible in its own way, but totally useless in speed terms. It also 
lacks all the features of a "real" programming language - abstraction, 
encapsulation, rich data types, powerful control structures... actually, 
all the things that Haskell excels at. I did start a project a while 
back to try and make it possible to describe POV-Ray scenes in Haskell, 
thus leveraging the full power of the language. But that never came to 
much...


[The "clock" variable is one way to animate, yes. Another is to use the 
"frame_number" variable. A common idiom is to generate an initial state 
at frame #1, and on each frame, read the state from disk, update it, 
write the updated version back to disk, and then render the frame. 
Obviously this destroys the ability to render frames in arbitrary order 
or distribute them across a render farm, but that's rather unavoidable. 
Note that the loading and saving must be hand-coded, from scratch, every 
time you want to do this. Tedious...]


Personally, I'd quite like to write my own ray tracer to address some 
of these limitations. But every time I try, I end up hitting design 
issues [Haskell works very differently to Java] or performance issues 
[which I don't even know how to begin debugging]. Not to mention that 
POV-Ray uses sophisticated techniques like volume bounding that I 
know nothing about. (There's nothing like using an inherently 
superior algorithm to make your code orders of magnitude faster...)


I haven't really run into any issues where Haskell didn't let me do 
what I want, except for the performance counters thing mentioned way 
back at the beginning of this thread (and which I've more or less 
given up on for now, since the acceleration structure seems to be 
working okay and I have other things to work on).


Well, for example, Haskell doesn't have heterogeneous lists - which are 
trivial in any OOP language. That's quite awkward to get around. Also, 
both spheres and cylinders have a "radius" property, but then you end up 
with name clashes. Again, a non-issue in OOP languages. [I gather 
there's some new GHC language extension to partially solve this one - 
but only when types are statically known. In the general case, you'd 
have to write a "radius class" and define instances... yuck!]
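(Concretely, the workaround looks something like this - the names are made up:)

class HasRadius a where
    radius :: a -> Double

data Sphere   = Sphere   { sphereCentre :: (Double, Double, Double)
                         , sphereRadius :: Double }

data Cylinder = Cylinder { cylinderRadius :: Double
                         , cylinderHeight :: Double }

instance HasRadius Sphere   where radius = sphereRadius
instance HasRadius Cylinder where radius = cylinderRadius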


A good acceleration structure definitely makes everything go a lot 
faster.  It's the difference between rendering a scene in a few 
seconds or ten minutes.


BIH is what I'm using.  It's relatively new.  Here's a paper about 
it:  http://ainc.de/Research/BIH.pdf


The actual constructor is based loosely on this pseudocode: 
http://ompf.org/forum/viewtopic.php?p=1411#p1411


I'll take a look. Certainly all the ray tracers I've written just test 
every ray against every object - O(n) complexity? Hmm, that's not 
good[tm]...


You don't need to drop support for the whole multitude of primitives, 
though the ones that are packet-friendly will render faster.  Many 
modern ray tracers spend most of their time traversing the 
acceleration structure rather than doing intersection tests with 
actual geometry, so all you really need to do to get most of the 
benefit of packet tracing is to make the acceleration structure 
support packets.  (For the actual geometry, you can fall back on 
mono-rays.)  I tried 2x2 packets out last night and got about a 10-15% 
speed improvement on the level-3 SPD sphereflake.  (Shadow and 
reflection rays are still mono rays; if I converted them over, the 
improvement wo

Re: [Haskell-cafe] announce: Glome.hs-0.3 (Haskell raytracer)

2008-04-26 Thread Andrew Coppin

Jon Harrop wrote:

On Thursday 24 April 2008 20:29:50 Andrew Coppin wrote:
  

2. It's the only program I've seen that can render *real* curves, not
fake trickery with triangle meshes.



What you called "fake trickery with triangle meshes" is the core of all modern 
computer graphics. Focussing on ray tracing instead of that is rather missing 
the point, IMHO.
  


Well, that rather depends on what your "point" is, doesn't it?

Personally, I don't see the point in rendering a couple of million 
mathematically flat surfaces, and then trying to blur the edges to give 
the false illusion of a curved surface - when you could just, you know, 
render a real curved surface. Sure, surface normal tricks can make 
lighting and reflections look almost correct, but they won't fix profile 
views, shadows and self-shadows, intersections, or any of the other 
artifacts caused by using triangles.


But anyway, this is wandering off-topic. Suffice it to say, for my 
humble little hobby graphics, I intend to stick to ray tracing. If you 
disagree, feel free to use something else for your stuff. :-)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe