[Haskell-cafe] Re: IO Put confusion

2010-09-15 Thread Chad Scherrer
Chad Scherrer chad.scherrer at gmail.com writes:

 Second attempt:
 doc :: IO Put
 doc = docLength >>= go
   where
   go 1 = word
   go n = do
     w <- word
     ws <- go (n-1)
     return (w >> putSpace >> ws)
 
 This one actually works, but it holds onto everything in memory
 instead of outputting as it goes. If docLength tends to be large, this
 leads to big problems.

Sorry to answer my own post, but I've got a kludgy work-around now. I tried a
WriterT approach, and also building my own PutT like Data.Binary.Put, neither
with any luck. Instead I just changed the type to
word, doc :: IO ()
and had it write to standard out as it goes. Not nearly as elegant, but at
least it works now.
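
For concreteness, a minimal sketch of the shape of that workaround (the bodies
of docLength and word below are stand-ins; the real ones draw their values from
mwc-random):

import qualified Data.ByteString.Char8 as B
import System.IO (stdout)

docLength :: IO Int
docLength = return 5                       -- stand-in for the random length

word :: IO ()
word = B.hPutStr stdout (B.pack "word")    -- stand-in: write each word as it is produced

doc :: IO ()
doc = docLength >>= go
  where
    go 1 = word
    go n = word >> B.hPutStr stdout (B.pack " ") >> go (n - 1)

main :: IO ()
main = doc >> B.hPutStr stdout (B.pack "\n")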

Thanks,
Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] IO Put confusion

2010-09-14 Thread Chad Scherrer
Hello,

I need to be able to use strict bytestrings to efficiently build a
lazy bytestring, so I'm using putByteString in Data.Binary. But I also
need random numbers, so I'm using mwc-random. I end up in the IO Put
monad, and it's giving me some issues.

To build a random document, I need a random length, and a collection
of random words. So I have
docLength :: IO Int
word :: IO Put

Oh, also
putSpace :: Put

My first attempt:
doc :: IO Put
doc = docLength >>= go
  where
  go 1 = word
  go n = word >> return putSpace >> go (n-1)

Unfortunately, with this approach, you end up with a one-word document. I think
this makes sense because of the monad laws: (>>) keeps only the result of its
second argument, so the Put values built by word and return putSpace are
discarded, and all that survives is the Put from the final go call, which
bottoms out at a single word.

Second attempt:
doc :: IO Put
doc = docLength >>= go
  where
  go 1 = word
  go n = do
    w <- word
    ws <- go (n-1)
    return (w >> putSpace >> ws)

This one actually works, but it holds onto everything in memory
instead of outputting as it goes. If docLength tends to be large, this
leads to big problems.

Oh, yes, and my main is currently
main = L.writeFile "out.txt" =<< fmap runPut doc

This needs to be lazier so disk writing can start sooner, and to avoid
eating up tons of memory. Any ideas?

Thanks!
Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] bounded ranges

2010-07-22 Thread Chad Scherrer
Hello cafe,

I'm trying to do some things with bounded indices so I can carry
around arrays (well, Vectors, really) without needing to refer to the
bounds.

For example, if I know my indices are Bool values, I can do

> rangeSize (minBound, maxBound :: Bool)
2

I'd like to be able to do this in general, but...
> :t rangeSize (minBound, maxBound)
<interactive>:1:11:
Ambiguous type variable `a' in the constraints:
  `Bounded a'
arising from a use of `minBound' at <interactive>:1:11-18
  `Ix a' arising from a use of `rangeSize' at <interactive>:1:0-29
Probable fix: add a type signature that fixes these type variable(s)

I thought it might help to put it into a module and do a better job
with the type, like this:

bdRangeSize :: (Ix i, Bounded i) => i -> Int
bdRangeSize _ = rangeSize (minBound, maxBound :: i)

but I still have problems:

MyArray.hs:22:36:
Could not deduce (Bounded i1) from the context ()
  arising from a use of `maxBound' at MyArray.hs:22:36-43
Possible fix:
  add (Bounded i1) to the context of an expression type signature
In the expression: maxBound :: i
In the first argument of `rangeSize', namely
`(minBound, maxBound :: i)'
In the expression: rangeSize (minBound, maxBound :: i)

I thought maybe it's an existential types problem or something, but I
don't understand why it would be coming up here. Any thoughts?

Oh yes, and I'm using  GHC version 6.12.1.

Thanks,
Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: bounded ranges

2010-07-22 Thread Chad Scherrer
 On Thu, 22 Jul 2010, Chad Scherrer wrote:
  I thought it might help to put it into a module and do a better job
  with the type, like this:
 
  bdRangeSize :: (Ix i, Bounded i) => i -> Int
  bdRangeSize _ = rangeSize (minBound, maxBound :: i)
 

Henning Thielemann lemming at henning-thielemann.de writes:
 bdRangeSize x = rangeSize (minBound, maxBound `asTypeOf` x)

Perfect, thank you!

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bounded ranges

2010-07-22 Thread Chad Scherrer
On Thu, Jul 22, 2010 at 3:43 PM, Daniel Fischer
daniel.is.fisc...@web.de wrote:

 {-# LANGUAGE ScopedTypeVariables #-}

 bdRangeSize :: forall i. (Ix i, Bounded i) => i -> Int

Ah nice, I tried a forall in that position, but I didn't know about
ScopedTypeVariables. Thanks!
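
For anyone skimming the thread, a minimal self-contained sketch combining both
suggested fixes (the module name and the demo in main are illustrative):

{-# LANGUAGE ScopedTypeVariables #-}
module BdRange where

import Data.Ix (Ix, rangeSize)

-- Scoped type variable version: the forall brings i into scope, so the
-- inner annotation refers to the same i as the signature.
bdRangeSize :: forall i. (Ix i, Bounded i) => i -> Int
bdRangeSize _ = rangeSize (minBound, maxBound :: i)

-- asTypeOf version; no extension needed.
bdRangeSize' :: (Ix i, Bounded i) => i -> Int
bdRangeSize' x = rangeSize (minBound, maxBound `asTypeOf` x)

main :: IO ()
main = print (bdRangeSize False, bdRangeSize' False)   -- prints (2,2)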

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: replicateM over vectors

2010-04-03 Thread Chad Scherrer
Roman Leshchinskiy rl at cse.unsw.edu.au writes:

 Ah. I missed that. Then your best bet is probably
 
 replicate n action = munstream v $ Fusion.Stream.Monadic.generateM n (const
action)
  $ new n
 
 It's uglier than it should be but vector simply doesn't define the right
combinators for this at the moment.

I'm having trouble getting this to typecheck. I'll reread your Recycle Your
Arrays paper; maybe then it will make more sense.


 There are two things one would have to do. First, add a function to
Generic.New which initialises a New from a
 Monadic.Stream and fusion rules for it. That's easy. The hard part is to
generalise New to work with
 arbitrary monads: at the moment it is defined as:
 
 data New a = New (forall mv s. MVector mv a => ST s (mv s a))
 
 This is because its basic reason for existence is to be passed to Vector.new
which then does a runST to
 produce an immutable vector. It is perhaps possible to make New more general
but it's quite tricky. I'll
 think about it after the ICFP deadline 

But the m I'm interested in happens to be ST. Sounds like it's still easy in
principle, but not immediate. Is that right?

Thanks,
Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] mapping unfreeze over an IntMap of IOUArrays

2008-11-11 Thread Chad Scherrer
Hello cafe,

I've hit a bit of a monadic snag here...

I'm scanning a big file, building a table of statistics. I end up with something
like

IO (IntMap (IOUArray Int Double))

Once I've read in the whole file and built my statistics, I don't need any more
updates, so I'd like to do something like

IntMap (IOUArray Int Double) -> IO (IntMap (UArray Int Double)),

using unsafeFreeze. I'm getting stuck here, since the IntMap library is not so
monad-friendly. 

I could rebuild the whole thing using sequence and lists, but the table is
pretty big (4.5 million keys on a first run), so I'd prefer to avoid that. Any
ideas?

BTW, I probably should be using ST for this, but I hit the usual "type s
escapes" irritation and gave up. If that would work more easily, that would be
fine with me too.

Thanks!

Chad Scherrer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: mapping unfreeze over an IntMap of IOUArrays

2008-11-11 Thread Chad Scherrer
Don Stewart dons at galois.com writes:
 Hmm. So you'd need to construct a new IntMap, made by fmap'ping
 unsafeFreeze over each element of the old map.

I guess if we had a Traversable instance for Data.IntMap things would be just
fine. Would this be a bad thing in any way?
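
In the meantime, a sketch of doing that traversal by hand over the association
list (freezeAll is an illustrative name; in recent versions of the array package
unsafeFreeze lives in Data.Array.Unsafe):

import qualified Data.IntMap as IM
import Data.Array.IO (IOUArray)
import Data.Array.Unboxed (UArray)
import Data.Array.Unsafe (unsafeFreeze)

-- Freeze every mutable array, rebuilding the IntMap from its (already
-- ascending) association list.
freezeAll :: IM.IntMap (IOUArray Int Double) -> IO (IM.IntMap (UArray Int Double))
freezeAll m = fmap IM.fromDistinctAscList (mapM freezeEntry (IM.toAscList m))
  where
    freezeEntry (k, arr) = do
      frozen <- unsafeFreeze arr
      return (k, frozen)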

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: mapping unfreeze over an IntMap of IOUArrays

2008-11-11 Thread Chad Scherrer
Don Stewart dons at galois.com writes:

 Hmm. So you'd need to construct a new IntMap, made by fmap'ping
 unsafeFreeze over each element of the old map.
 

For now I'll just do

IntMap.map (unsafePerformIO . unsafeFreeze)

Hopefully this won't come back to bite me

Thanks!
Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Automatic parallelism in Haskell, similar to make -j4?

2008-11-04 Thread Chad Scherrer
T Willingham t.r.willingham at gmail.com writes:
 I am thinking of our troglodytic friend 'make', which will run (for
 example) 4 parallel jobs when given the option make -j4.  Even
 'rake', the ruby version of make, now has a branch (called drake)
 which does the parallel -j option.

From the replies I've seen about this, I think it's been interpreted as asking
whether ghc could compile a given program so that it will execute in parallel.
In general that's a hard problem.

On the other hand, it should be really straightforward (in principle, I mean) to
get something going like
ghc --make -j4 Foo.hs
similar to your make example, so that compile time could be reduced, while the
execution could either be sequential or parallel. I don't think there's anything
like this yet (is there?). 

Does anyone have any thought what it would take to get this going?

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Is www.haskell.org down?

2008-08-14 Thread Chad Scherrer
Don Stewart dons at galois.com writes:
 Some choice quotes:
 
"almost all of the examples of Haskell's use in industry are fakes"
 
"nobody has ever done anything significant using Haskell"
 
 Jon's the primary source of FUD against Haskell and its community, as he
 goes around promoting his site in other functional programming
 communities. This kind of behaviour's been going on for a few years now,
 sadly.

Maybe trolls are just an unfortunate growing pain. Never see those in purely
academic languages.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] WebSense doesn't like darcs.haskell.org

2008-05-29 Thread Chad Scherrer
My work uses WebSense to filter viewable pages - I don't have an option.

I used to be able to get to darcs.haskell.org just fine, but apparently a
/hacking directory was added somewhere, so WebSense put it on the naughty list.

I put in a request for WebSense to review the site; hopefully it will soon be
allowed again. But if not, I'll have to go through lots of red tape, which I'd
really like to avoid.

I assume/hope this "hacking" is only in the sense of making adjustments to code.
Is this right? If there's anything involving DoS attacks, etc, I might never be
able to get to it from work again.

I get the impression WebSense is relatively widely used. Is anyone else here
having similar trouble?

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Copying Arrays

2008-05-29 Thread Chad Scherrer
Jed Brown jed at 59A2.org writes:
 Uh, ByteString is Unicode-agnostic.  ByteString.Char8 is not.  So why not do 
 IO
 with lazy ByteString and parse into your own representation (which might look 
 a
 lot like StorableVector)?

One problem you might run into doing it this way is if a wide character is split
between two different arrays. In that case you have to do some post-processing
to put the pieces back together. More efficient, I think, if you could force a
given alignment when reading in the lazy bytestring. But there's not a way to do
that, is there?

I hope this makes sense. It's the problem I ran into when I tried once to use
lazy bytestrings instead of a storable vector, reasoning that the more recent
fusion work in bytestring would give a speed boost. But then I was doing
numerical stuff, and I don't know much about unicode.

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: WebSense doesn't like darcs.haskell.org

2008-05-29 Thread Chad Scherrer
Duncan Coutts duncan.coutts at worc.ox.ac.uk writes:

 On Thu, 2008-05-29 at 18:12 +, Chad Scherrer wrote:
 I used to be able to get to darcs.haskell.org just fine, but apparently a
 /hacking directory was added somewhere, so WebSense put it on the naughty
 list.
 Do you know where?

Neil suggested it might be

http://darcs.haskell.org/ghc/HACKING

but that's been there for a long time, hasn't it? Maybe it's not that a new
directory was added, but that WebSense hadn't ever indexed haskell.org before?

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Interesting critique of OCaml

2008-05-08 Thread Chad Scherrer
Don Stewart dons at galois.com writes:
[interesting quote...]
 Which I think really captures the joy of being able to write algebraic
 and data structure transformations, via rewrite rules, without having to
 extend the compiler -- all thanks to purity, laziness, and static
 typing.

This makes me wonder... Rewrite rules are certainly effective, and seem to be a
good place to point (one of many, of course) when asked why you'd want a 
language with Haskell's characteristics. It seems like there should be an
argument to this effect:

1. You'd like to be able to declare compile-time transformations like
  map f . map g = map (f . g)
without messing with the compiler (see the sketch after this list)

2. For x in [purity, laziness, static typing, higher-order functions]
  If you don't have x, here's what goes wrong (or can go wrong)

3. Of the very few languages with these characteristics, Haskell is the most
widely-used, and the most actively developed and researched.
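
For concreteness, a minimal sketch of how such a transformation is declared in
GHC without touching the compiler (note that for Prelude's map the built-in
build/foldr fusion may already cover this particular rule):

module MapMap where

-- The map/map law as a rewrite rule; GHC applies it during simplification
-- wherever the left-hand side matches.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}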

Now, I don't know much about lisp, but aren't code transformations like this the
whole point of macros? What makes it difficult to do the same thing in this
context?

What about object-oriented languages? The problem with step 1 in the argument is
that it's already cast in a functional-programming framework. Is there a way to
recast it so OOP could play?

Thanks,
Chad

PS - the link now gives a 500 server error - did our traffic overwhelm it? Is
the cafe the next slashdot? ;)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: fast integer base-2 log function?

2008-02-27 Thread Chad Scherrer
Jens Blanck jens.blanck at gmail.com writes:

  {-# LANGUAGE MagicHash #-}
  import GHC.Exts
  import Data.Bits
  -- experiment with using a LUT here (hint: FFI + static arrays in C)
 ...

Sorry I don't have an answer, only more questions.

Is {-# LANGUAGE MagicHash #-} documented somewhere? I've seen it referenced a
few times now, but I can't find any details about it.

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Just a quick question

2008-02-27 Thread Chad Scherrer
Imam Tashdid ul Alam uchchwhash at yahoo.com writes:
 A few days back, I *think* I stumbled upon the
 statement the interaction between GADTs and
 functional dependencies is not yet well understood.
 Then I glossed over it. Now that I have (finally)
 started understanding what GADTs are meant to do, I am
 somewhat terrified. Did I read that one correctly? Or
 was it ... between GADTs and *typeclasses*...?
 
 Was it?
 
 Gah. That would be disastrous. 


Functional dependencies might mean something different than you think. See
section 8.6.2 of this:

http://www.haskell.org/ghc/docs/latest/html/users_guide/type-class-extensions.html

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stream/bytestring questions

2008-02-21 Thread Chad Scherrer
On Wed, Feb 20, 2008 at 5:53 PM, Roman Leshchinskiy [EMAIL PROTECTED] wrote:
  In general, I don't see why programming directly with streams is
  something that should be avoided. You do have to be quite careful,
  though, if you want to get good performance (but GHC's simplifier is
  becoming increasingly robust in this respect).

Hmm. I was taking the approach of getting something working, given
what is currently exported from Data.Stream. How would you deal with
this? Should there be a Data.Stream.Internal or something that exports
streams and unlifted types?

If I'm understanding this correctly, these things were not exported in
the first place because this fusion framework provides an
approximation, but not an isomorphism, so partial bottoms don't always
behave nicely. I was hoping to get around this by programming instead
to Step and then hoping rules could be constructed to translate to
Streams. Do you think there's a better way around it?

   extract ns xs == [xs !! n | n <- ns]

  Note that in contrast to your function, this doesn't assume that ns is
  sorted and hence, there is no way to implement this without producing an
  intermediate list.

Oh yes, good point. It's so easy to forget about assumptions like that.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: The Proliferation of List-Like Types

2008-02-21 Thread Chad Scherrer
On Wed, Feb 20, 2008 at 7:57 PM, Henning Thielemann
[EMAIL PROTECTED] wrote:
  I think there can also be problems simply because the element type is no
  longer fixed to Word8 but also not entirely free, but restricted to
  Storable. E.g. you cannot simply replace
 "SV.fromList . List.map f" by "SV.map f . SV.fromList"
   because in the second form not only the result type of 'f' must be
  Storable, but the input type of 'f' must be Storable, too.

Hmm, interesting. But would we really need this? If we have [a]
rewritten as a stream and SV rewritten as a stream, couldn't they
still fuse?

Loosely speaking,
SV.fromList . List.map f
  -> (SV.unstream . List.stream) . (List.unstream . mapS f . List.stream)
  -> SV.unstream . mapS f . List.stream

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stream/bytestring questions

2008-02-20 Thread Chad Scherrer
On Feb 17, 2008 6:06 PM, Derek Elkins [EMAIL PROTECTED] wrote:
 It's -quite- possible that a coalgebraic perspective is much more
 natural for your code/problem.  If that's the case, use it (the
 coalgebraic perspective that is).  Obviously depending on the internals
 of the stream library is not a good idea and using Streams directly was
 not their intent, but it is your code.  Do what you will.

Here's an example of the problem. Start with a function

extract :: [Int] -> [a] -> [a]
extract = f 0
    where
    f !k nss@(n:ns) (x:xs)
      | n == k    = x : f (k+1) ns xs
      | otherwise = f (k+1) nss xs
    f _ _ _ = []

which is just a more efficient way of getting
extract ns xs == [xs !! n | n <- ns]

There should be a way to write this that will be friendly for stream
fusion. The best option I can see is unfoldr. But if you try to write
it this way, you get something like

extract' ns xs = unfoldr f (0,ns,xs)
  where
  f (!k, nss@(n:ns), x:xs)
    | n == k    = Just (x, (k + 1, ns, xs))
    | otherwise = f (k+1, nss, xs)
  f _ = Nothing

This is fine, except that the second-to-last line means this is still
recursive. If I understand correctly, fusion requires that the
recursion be encapsulated within the unfoldr or other functions that
are expressed internally as stream functions.

We could encapsulate the recursion with a function
stepUnfoldr :: (s -> Step a s) -> s -> [a]
stepUnfoldr f s = unfoldr g s
  where
  g s = case f s of
    Done       -> Nothing
    Yield x s' -> Just (x,s')
    Skip s'    -> g s'

Using this, we could just write

extract'' ns xs = stepUnfoldr f (0,ns,xs)
  where
  f (!k, nss@(n:ns), x:xs)
    | n == k    = Yield x (k + 1, ns, xs)
    | otherwise = Skip (k+1, nss, xs)
  f _ = Done

This is a pretty natural way to write the algorithm, and the recursion
is nicely tucked away. The only remaining question is whether we can
get things to fuse.

The type of stepUnfoldr looks familiar...

*Main> :t stepUnfoldr
stepUnfoldr :: (s -> Step a s) -> s -> [a]

*Main> :t \f s -> unstream $ Stream f s
\f s -> unstream $ Stream f s :: (Data.Stream.Unlifted s) =>
 (s -> Step a s) -> s -> [a]

If we could somehow swap out our state type for an unlifted one, we
could write a rule
  stepUnfoldr f = unstream . Stream f

It seems like there might be some subtleties there to watch out for,
but I'm not sure yet.

Anyway, this is the kind of thing I had in mind when I asked about
using the internals of Data.Stream more directly.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: The Proliferation of List-Like Types

2008-02-20 Thread Chad Scherrer
Henning Thielemann lemming at henning-thielemann.de writes:
  4) We are missing one final useful type: a Word32-based ByteString.
When working in the Unicode character set, a 32-bit character
can indeed be useful, and I could see situations in which the
performance benefit of a ByteString-like implementation could
be useful combared to [Char].
 
 StorableVector should fill this gap.
http://code.haskell.org/~sjanssen/storablevector/
 

Yes, it could, but 
(1) it's way behind ByteString in terms of optimizations (== fusion)
(2) there's (as far as I know) not a StorableVector.Lazy, which is very much
needed

To catch up on both fronts, we're looking at a lot of duplicate code.

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: The Proliferation of List-Like Types

2008-02-20 Thread Chad Scherrer
On Feb 20, 2008 10:57 AM, Antoine Latter [EMAIL PROTECTED] wrote:
 For anyone looking into it - the StorableVector fusion would have to
 be quite different from the current ByteString fusion framework.
 Maybe it would be enough to lay down a Stream fusion framework for
 StorableVectors.

I must be missing something. Why would it have to be so different?

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: The Proliferation of List-Like Types

2008-02-20 Thread Chad Scherrer
Antoine Latter [EMAIL PROTECTED] wrote:
 From what I saw of Data.ByteString.Fusion, it relies on the assumption
 that the elements of the output array are of the same size and
 alignment as the elements of all of the arrays in the fused
 intermediate steps.  That way, all of the intermediate stages can
 mutate the output array in place.

I see a lot in there involving the elimination of intermediate data
structures, but nothing about destructive updates. The API is purely
functional, and what you're talking about would need to be done in the
IO monad to be sure you don't throw away stuff you might need to use
again.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] stream/bytestring questions

2008-02-17 Thread Chad Scherrer
ByteStrings have given a real performance boost to a lot of Haskell
applications, and I'm curious why some of the techniques aren't more
abstracted and widely available. If it's because it's a big job,
that's certainly understandable, but maybe there's something I'm
overlooking that some of the abstractions aren't really desirable.

My first question is about the stream fusion. This seems (based on the
ByteString paper) to speed things up quite a lot, but for some reason
this isn't necessarily the case for lists. Is this just a matter of
the existing fusion for lists doing such a good job in many cases that
it's hard to beat? The streams paper suggests that some contructors
aren't optimized away during fusion, but wouldn't this also be a
problem for fusion in the bytestring context? Are there many cases
where it helps performance to program to streams directly, rather than
letting the rules go back and forth between them and lists? I tried to
do this, but kept getting hung up on the Unlifted stuff; it's not
exposed (pardon the pun) in the API, and I don't understand why the
types are different than the usual (), Either a b, (a,b), etc.

Second, the idea of representing a sequence as a lazy list of strict
arrays is very appealing. But why represent it so concretely? Wouldn't
it make sense to do something like

data ArrayList a i e = Empty | Chunk !(a i e) (ArrayList a i e)
?

Then array fusion could be written for IArray types in general, and
the ByteString library would become a set of instances with some
specialization pragmas. Presumably a more general StorableVector could
be represented as an IArray, and the NDP stuff seems a good fit too,
so this would make it easy to leverage future work. Seems to separate
the various concerns a little better, too, but again maybe there's a
performance issue I'm missing.
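
To make the shape of that concrete, a rough sketch (fromListChunked and toList
are illustrative helpers, not a proposed API):

import Data.Array.IArray (IArray, listArray, elems)
import Data.Array.Unboxed (UArray)

data ArrayList a i e = Empty | Chunk !(a i e) (ArrayList a i e)

-- Build a chunked list from a flat list with a fixed chunk size.
fromListChunked :: IArray a e => Int -> [e] -> ArrayList a Int e
fromListChunked _ [] = Empty
fromListChunked n xs = Chunk (listArray (0, length ys - 1) ys) (fromListChunked n zs)
  where (ys, zs) = splitAt n xs

-- Flatten back to a list, chunk by chunk.
toList :: IArray a e => ArrayList a Int e -> [e]
toList Empty = []
toList (Chunk c rest) = elems c ++ toList rest

-- e.g. toList (fromListChunked 4096 [1..10] :: ArrayList UArray Int Double)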

Sorry for the barrage of questions; I guess there's just a lot I'm
trying to understand better.

Thanks,
Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stream/bytestring questions

2008-02-17 Thread Chad Scherrer
On Feb 17, 2008 4:13 PM, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:

 Have you looked at the stream-fusion package on Hackage?
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/stream-
 fusion-0.1.1

Yeah, I've seen this. It's nice that this is separated, but a little
unsatisfying that the bytestring library reimplements it rather than
just requiring it as a dependency. This, and the fact that bytestrings
are way fast, while the stream-fusion stuff is sometimes slower than
just using lists, make me think that either (1) there's sometimes an
advantage in programming directly to streams, or (2) maybe more of the
stream functions need to be exposed in the API. Or maybe there's
another reason... that's part of my question.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stream/bytestring questions

2008-02-17 Thread Chad Scherrer
 they currently use two different fusion systems. bytestring uses an
 older version of what is now stream fusion. at some point we'll switch
 bytestrings over to using the new stuff in the stream-fusion package,
 since its a lot better.

Oh, that's pretty interesting. I had assumed bytestring had the latest
fusion stuff.

 ah, you assume stream-fusion lists are slower. for list stuff, since
 6.8, i've only seen things the same speed or better. but if you have a
 program doing the wrong thing, it would be worth looking at.

I was just going by the "Lists to Streams to Nothing At All" paper.
That's great that it's gotten faster! Does that mean the path for
replacing Data.List with this is any clearer?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stream/bytestring questions

2008-02-17 Thread Chad Scherrer
On Feb 17, 2008 5:01 PM, Don Stewart [EMAIL PROTECTED] wrote:
 yeah, with lists, as compared to bytestrings, there are:

 * more complex operations to fuse
 * allocation is much cheaper (lazy list cons nodes)
 * built in desugaring for build/foldr fusion interferes (enumerations, 
 comprehensions)

 so the benefits of fusing lists are less obvious than bytestrings, where
 every fusion point knocks out a big array allocation, and they're a bit
 more complex to get the full api going.

Ok, that makes sense.

 no, using the rules should be fine. you're not supposed to program in
 the stream abstraction.

I was working on some run-length encoding stuff and found it most
natural to do a lot of it using unfoldr, and I had a state in the
unfold that was naturally represented as a tuple. I got a little
worried about all the packing and unpacking tuples not being optimized
out, and this got me wondering about just using Streams directly. At
the time, the Skip constructor for a Step felt natural to use in some
places, which I thought could be really convenient. I dunno, maybe
it's not an issue. Hmm, if it would help I can try to post some of the
code...

  Wouldn't
  it make sense to do something like
 
  data ArrayList a i e = Empty | Chunk !(a i e) (ArrayList a i e)
  ?

 someone could do that. we chose to  go with the monomorphic case, since
 its easier to get the api right, and the performance right.

Unless I'm overlooking something, this would involve something like...
(1) make the fusion stuff apply to an IArray (pretty handy anyway)
(2) make (strict) ByteString an instance of IArray (or maybe via StorableArray)
(3) write an ArrayList module, similar to Data.ByteString.Lazy

From this, it would be pretty short work, in theory, to write an
alternate ByteString library using the abstractions. The point of this
would be to leverage future code improvements and reduce code
duplication.

Does this seem reasonable? Or is it too soon to consider this kind of extension?

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] :i and :t give different types

2008-02-08 Thread Chad Scherrer
On Feb 8, 2008 9:55 AM, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
 I have not been following closely but if Don thinks there's a bug there 
 probably is.  Can someone submit a bug report pls?  Better still a patch! :-)

 Simon

Ok, I filed a bug report.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] :i and :t give different types

2008-02-07 Thread Chad Scherrer
Hello haskell-cafe,

In ghci, I tried to get info for Data.Stream.Stream:

$ ghci
GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
Prelude> :m Data.Stream
Prelude Data.Stream> :i Stream
data Stream a where
  Stream :: forall a s.
            (Data.Stream.Unlifted s) =>
            !s -> Step a s -> !s -> Stream a
        -- Defined in Data.Stream
instance Functor Stream -- Defined in Data.Stream

This didn't seem right to me, so I asked tried this:

Prelude Data.Stream> :t Stream
Stream :: (Data.Stream.Unlifted s) => (s -> Step a s) -> s -> Stream a

What's going on here?

forall a s. (Data.Stream.Unlifted s) => !s -> Step a s -> !s -> Stream a
 and
(Data.Stream.Unlifted s) => (s -> Step a s) -> s -> Stream a

are completely different, right? And really, neither one makes much
sense to me. I would have expected

forall s. (Data.Stream.Unlifted s) => (s -> Step a s) -> s -> Stream a

-- 

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Abstracting ByteStrings

2008-01-23 Thread Chad Scherrer
 Careful. ByteString is an alternative to [Word8]. Converting [Char] to
 ByteString and back requires an encoding. (Unfortunately, the only encoding
 that comes with the bytestring package is lossy.)

Ahh, good point. I guess I almost always just use them to read ASCII,
so it hasn't been an issue.

But for numeric types, I think this wouldn't be so much of an issue,
since the format is more uniform.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Abstracting ByteStrings

2008-01-23 Thread Chad Scherrer
 Given a reasonable Storable instance of pairs you could use:
   http://code.haskell.org/~sjanssen/storablevector

I hadn't seen that before, thanks!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Abstracting ByteStrings

2008-01-22 Thread Chad Scherrer
A lazy ByteString is an alternative to a String=[Char], where
sacrificing some degree of laziness through chunks gives much
greater performance in many applications. If I remember correctly, we
could as well create an IntString, DoubleString, etc by filling the
chunk arrays with different types.

Now, this might only work for primitive types, but couldn't we extend
it by taking a similar approach to NDP? For example, if we have (lazy)
ByteString = [Chunk Char], then we could represent, say [Chunk (a,b)]
as ([Chunk a], [Chunk b]), etc

Is this reasonable? Do you think it would work? Or is all of this
subsumed by fusion anyway?
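
As a rough sketch of what that would look like in types (the names here are
illustrative, not a proposal):

import Data.Array.Unboxed (UArray)

-- A lazy sequence of strict unboxed chunks, like a lazy ByteString
-- generalised over the element type.
data Chunked e = Done | Chunk !(UArray Int e) (Chunked e)

-- The NDP-style step: a sequence of pairs becomes a pair of sequences,
-- so each component still fits in an unboxed chunk.
newtype ChunkedPair a b = ChunkedPair (Chunked a, Chunked b)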

-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] functional maps

2007-12-21 Thread Chad Scherrer
A while back I was playing with Data.Map was getting irritated about
lookups that fail having the type of values, but wrapped in an extra
monad. I decided to work around this by putting a default in the data
type itself, so we have a functional map

data FMap k a = FMap (k -> a) (Map k a)

This has been really convenient - it's a monad, and failed lookups
have the same type as successful ones.

lookup :: (Ord k) => k -> FMap k a -> a
lookup k (FMap f m) = Map.findWithDefault (f k) k m

This also makes it much nicer to build a function that tabulates a
list of pairs (nicer than I've found using Data.Map, anyway):

fromPairs :: (Ord k) => [(k,a)] -> FMap k [a]
fromPairs = foldl' (flip . uncurry $ insertWith (:)) $ return []

insertWith :: (Ord k) => (a -> b -> b) -> k -> a -> FMap k b -> FMap k b
insertWith f k x m = case lookup k m of
  v -> insert k (f x v) m

Ok, great, but fromPairs is blowing the stack. It does fine for a
while, but today I was trying to use it for a few million pairs. It
runs for a while, eats a couple gigs of ram, and then I get a stack
overflow.

Any advice for tracking this down? Thanks!

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: functional maps

2007-12-21 Thread Chad Scherrer
Chad Scherrer chad.scherrer at gmail.com writes:

 
 A while back I was playing with Data.Map was getting irritated about
 lookups that fail having the type of values, but wrapped in an extra
 monad. I decided to work around this by putting a default in the data
 type itself, so we have a functional map
 
  data FMap k a = FMap (k -> a) (Map k a)
...

Sorry to respond to my own message, but I think I might have figured it out.
This should be strict in the Map parameter, so this works better:

data FMap k a = FMap (k -> a) !(Map k a)

It still takes lots of memory for what I'm trying to do, but that's another
problem. At least the stack seems happy.
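
Along the same lines, a sketch (not from the thread) of also forcing the
combined value in insertWith, reusing the post's own lookup and insert:

-- Strict variant: force the new value before handing it to insert.
insertWith' :: (Ord k) => (a -> b -> b) -> k -> a -> FMap k b -> FMap k b
insertWith' f k x m =
  let v = f x (lookup k m)
  in v `seq` insert k v m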

Chad

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Using Data.Binary for compression

2007-11-14 Thread Chad Scherrer
Hi,

I'd like to be able to use Data.Binary (or similar) for compression.
Say I have an abstract type Symbol, and for each value of Symbol I
have a representation in terms of some number of bits. For compression
to be efficient, commonly-used Symbols should have very short
representations, while less common ones can be longer.

Since an encoding like [Bool] would be really inefficient for this (at
least I think it would, though some fancy fusion tricks might be able
to help), I was thinking a reasonable approach might be to use Word8
(for example), and then specify a number of bits n, indicating that
only the first n bits are to be written to the compressed
representation.

I was looking at the internals of Data.Binary, and saw it seems that
PutM could be used for this purpose (was something like this its
original purpose?). Today, I put this together:

type BitRep = Word8
type NBits = Int

type MyBits = (BitRep, NBits)

(#) :: MyBits -> MyBits -> PutM MyBits
(a, m) # (b, n) = case (a .|. (b `shiftR` m), m + n) of
  ans@(ab, s) -> if s < 8 then return ans
                 else putWord8 ab >> return (b `shiftL` (8 - m), s - 8)

Then, it would be easy enough to map [Symbol] -> [MyBits], and then
use something like foldM (#) to get into the PutM monad.
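
To make that step concrete, a hedged sketch of the driving fold (putSymbols and
encodeSym are illustrative names; the (#) combinator and type synonyms above are
assumed to be in scope):

import Data.Binary.Put (Put, putWord8)
import Control.Monad (foldM, when)

putSymbols :: (sym -> MyBits) -> [sym] -> Put
putSymbols encodeSym syms = do
    (rest, n) <- foldM (#) (0, 0) (map encodeSym syms)
    when (n > 0) (putWord8 rest)   -- flush the final partial byte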

A couple of questions:

(1) Am I reinventing the wheel? I haven't seen anything like this, but
it would be nice to be a bit more certain.

(2) This seems like it will work ok, but the feel is not as clean as
the current Data.Binary interface. Is there something I'm missing that
might make it easier to integrate this?

(3) Right now this is just proof of concept, but eventually I'd like
to do some performance tuning, and it would be nice to have a
representation that's amenable to this. Any thoughts on speeding this
up while keeping the interface reasonably clean would be much
appreciated.

Thanks!

-Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Suspected stupid Haskell Question

2007-10-17 Thread Chad Scherrer
Big_Ham joymachine2001 at hotmail.com writes:

 
 
 Is there a library function to take a list of Strings and return a list of
 ints showing how many times each String occurs in the list.
 
 So for example:
 
  ["egg", "egg", "cheese"] would return [2,1]
 
 I couldn't find anything on a search, or anything in the librarys.
 
 Thanks BH.

Hi BH,

This might be overkill, but it works well for me. And it avoids stack overflows I
was originally getting for very large lists. Dean Herrington and I came up with
this:

ordTable :: (Ord a) => [a] -> [(a,Int)]
ordTable xs = Map.assocs $! foldl' f Map.empty xs
    where f m x = let m'     = Map.insertWith (+) x 1 m
                      Just v = Map.lookup x m'
                  in v `seq` m'

intTable :: [Int] -> [(Int,Int)]
intTable xs = IntMap.assocs $! foldl' f IntMap.empty xs
    where f m x = let m'     = IntMap.insertWith (+) x 1 m
                      Just v = IntMap.lookup x m'
                  in v `seq` m'

enumTable :: (Enum a) => [a] -> [(a,Int)]
enumTable = map fstToEnum . intTable . map fromEnum
    where fstToEnum (x,y) = (toEnum x, y)

If you like, it's easily wrapped in a Table class.
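
For instance, a minimal sketch of such a wrapper class (names are illustrative),
dispatching to the functions above:

class Tabulate a where
  tabulate :: [a] -> [(a, Int)]

instance Tabulate Int  where tabulate = intTable
instance Tabulate Char where tabulate = enumTable
instance Tabulate Bool where tabulate = enumTable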

Chad




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Suspected stupid Haskell Question

2007-10-17 Thread Chad Scherrer
Hmm, is insertWith' new? If I remember right, I think the stack overflows
were happening because Map.insertWith isn't strict enough. Otherwise I think
the code is the same. But I would expect intTable to be faster, since it
uses IntMap, and there's no IntMap.insertWith' as of 6.6.1 (though it may be
easy enough to add one).

Chad

On 10/17/07, Thomas Hartman [EMAIL PROTECTED] wrote:


 Since I'm interested in the stack overflow issue, and getting acquainted
 with quickcheck, I thought I would take this opportunity to compare your
 ordTable with some code Yitzchak Gale posted earlier, against Ham's original
 problem.

 As far as I can tell, they're the same. They work on lists up to 10
 element lists of strings, but on 10^6 size lists I lose patience waiting for
 them to finish.

 Is there a more scientific way of figuring out if one version is better
 than the other by using, say profiling tools?

 Or by reasoning about the code?

 t.

 

 import Data.List
 import qualified Data.Map as M
 import Control.Arrow
 import Test.QuickCheck
 import Test.GenTestData
 import System.Random

 {-
 Is there a library function to take a list of Strings and return a list of
 ints showing how many times each String occurs in the list.

 So for example:

  ["egg", "egg", "cheese"] would return [2,1]
 -}

 testYitzGale n = do
   l <- rgenBndStrRow (10,10) (10^n,10^n)  -- 10 strings, strings are 10 chars long, works. craps out on 10^6.
   m <- return $ freqFold l
   putStrLn $ "map items: " ++ ( show $ M.size m )

 testCScherer n = do
   l <- rgenBndStrRow (10,10) (10^n,10^n)  -- same limitations as yitz gale code.
   m <- return $ ordTable l
   putStrLn $ "items: " ++ ( show $ length m )


 -- slow for big lists
 --freqArr = Prelude.map ( last &&& length ) . group . sort

 -- yitz gale code. same as chad scherer code? it's simpler to understand, but is it as fast?
 freqFold :: [[Char]] -> M.Map [Char] Int
 freqFold = foldl' g M.empty
   where g accum x = M.insertWith' (+) x 1 accum
 -- c scherer code. insists on ord. far as I can tell, same speed as yitz.
 ordTable :: (Ord a) => [a] -> [(a,Int)]
 ordTable xs = M.assocs $! foldl' f M.empty xs
     where f m x = let m'     = M.insertWith (+) x 1 m
                       Just v = M.lookup x m'
                   in v `seq` m'


 l = [egg,egg,cheese]

 -- other quickcheck stuff
 --prop_unchanged_by_reverse = \l -> ( freqArr (l :: [[Char]]) ) == ( freqArr $ reverse l )
 --prop_freqArr_eq_freqFold = \l -> ( freqArr (l :: [[Char]]) == (freqFold l))
 --test1 = quickCheck prop_unchanged_by_reverse
 --test2 = quickCheck prop_freqArr_eq_freqFold

 --- generate test data:
 genBndStrRow (minCols,maxCols) (minStrLen, maxStrLen) =
   rgen ( genBndLoL (minStrLen, maxStrLen) (minCols,maxCols) )

 gen gen = do
   sg <- newStdGen
   return $ generate 1 sg gen

 -- generator for a list with length between min and max
 genBndList :: Arbitrary a => (Int, Int) -> Gen [a]
 genBndList (min,max) = do
   len <- choose (min,max)
   vector len


 -- lists of lists
 --genBndLoL :: (Int, Int) -> (Int, Int) -> Gen [[a]]
 genBndLoL (min1,max1) (min2,max2) = do
   len1 <- choose (min1,max1)
   len2 <- choose (min2,max2)
   vec2 len1 len2

 --vec2 :: Arbitrary a => Int -> Int -> Gen [[a]]
 vec2 n m = sequence [ vector m | i <- [1..n] ]




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] gtk2hs in Ubuntu Gutsy

2007-10-04 Thread Chad Scherrer
I just installed the beta release for Ubuntu Gutsy, and I noticed that
gtk2hs (provided by libghc6-gtk-dev) is still at version 0.9.10.5-1ubuntu1.
Worse, it's apparently not installable; when I try I get this message:

libghc6-gtk-dev:
 Depends: ghc6 (6.6+) but 6.6.1-2ubuntu2 is to be installed
 Depends: libghc6-cairo-dev but it is not going to be installed

Do any of you guys know why this wouldn't include a newer version? Probably
just an oversight, I'm guessing.

-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monte carlo trouble

2007-08-16 Thread Chad Scherrer
I'm using ListT now, trying to do this:

type Sample a = ListT (State StdGen) a

randomStreamR :: (RandomGen g, Random a) => (a,a) -> g -> ([a], g)
randomStreamR bds g = (randomRs bds g1, g2)
  where (g1,g2) = split g

sample :: [a] -> Sample a
sample [] = ListT (State f)
  where f s = case next s of (_,s') -> ([],s')
sample xs = do
  let bds  = (1, length xs)
      xArr = listArray bds xs
  i <- ListT . State $ randomStreamR bds
  return $ (xArr ! i)

-- Simple example, for testing
main = mapM_ print . flip evalState (mkStdGen 1) . runListT $ do
  x <- sample [1..100]
  y <- sample [1..x]
  return (x,y)

The abstraction seems much better now, but even the simple little
example blows up in memory. Here's a snippet from profiling with
-auto-all -caf-all (after I interrupted it):

 CAF:lvl4Main 261   1
 0.00.0   100.0  100.0
  main   Main 273   0
 0.00.0   100.0  100.0
   sampleMain 274   1
100.0  100.0   100.0  100.0
randomStreamRMain 276   1
 0.00.0 0.00.0

I'm wondering if the bind for ListT is still effectively building
every possibility behind the scenes, and sampling from that. I could
redo the Sample monad by hand, if that could be the problem, but I'm
not sure what changes would need to be made. Or maybe it's building
lots of different arrays and holding them for too long from the GC. Or
maybe it's a strictness/laziness issue I'm missing. Still not so sure
when I need case vs let/where.

How would you guys going about tracking down the problem?

Thanks,
Chad

On 8/15/07, Paul Johnson [EMAIL PROTECTED] wrote:
 Chad Scherrer wrote:
  Thanks for your replies.
 
  I actually starting out returning a single element instead. But a
  given lookup might return [], and the only way I could think of to
  handle it in (State StdGen a) would be to fail in the monad. But
  that's not really the effect I want - I'd rather have it ignore that
  element. Another option was to wrap with Maybe, but then since I
  really want  a sequence of them anyway, I decided to just wrap in a
  List instead. Is there a way Maybe would work out better?
 Ahh.  How about using ListT Gen then?  That (if I've got it the right
 way round) works like the list monad (giving you non-determinism), but
 has your random number generator embedded as well.  Take a look at All
 About Monads at http://www.haskell.org/all_about_monads/html/.  Each
 action in the monad may use a random number and produce zero or more
 results.

 Paul.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monte carlo trouble

2007-08-15 Thread Chad Scherrer
Thanks for your replies.

I actually starting out returning a single element instead. But a
given lookup might return [], and the only way I could think of to
handle it in (State StdGen a) would be to fail in the monad. But
that's not really the effect I want - I'd rather have it ignore that
element. Another option was to wrap with Maybe, but then since I
really want  a sequence of them anyway, I decided to just wrap in a
List instead. Is there a way Maybe would work out better?

I've seen PFP, but I don't see where that would help here. I'd still
end up with an enormous list of tuples. This could be generated
lazily, but sampling with replacement (yes I want this, not a shuffle)
would require forcing the whole list anyway, wouldn't it? Using my
approach, even asking ghci for the length of the list ran for 30+
minutes.

If there's a way to lazily sample with replacement from a list without
even requiring the length of the list to be known in advance, that
could lead to a solution.

Thanks,
Chad

On 8/15/07, Paul Johnson [EMAIL PROTECTED] wrote:
 Chad Scherrer wrote:
  There's a problem I've been struggling with for a long time...
 
  I need to build a function
  buildSample :: [A] -> State StdGen [(A,B,C)]
 
  given lookup functions
  f :: A -> [B]
  g :: A -> [C]
 
  The idea is to first draw randomly form the [A], then apply each
  lookup function and draw randomly from the result of each.
 
 I don't understand why this returns a list of triples instead of a
 single triple.  Your description below seems to imply the latter.

 You should probably look at the Gen monad in Test.QuickCheck, which is
 basically a nice implementation of what you are doing with State
 StdGen below.  Its elements function gets a single random element,
 and you can combine it with replicateM to get a list of defined length.

 (BTW, are you sure want multiple random samples rather than a shuffle?
 A shuffle has each element exactly once whereas multiple random samples
 can pick any element an arbitrary number of times.  I ask because
 shuffles are a more common requirement.  For the code below I'll assume
 you meant what you said.)

 Using Test.QuickCheck I think you want something like this (which I have
 not tested):

 buildSample :: [A] -> Gen (A,B,C)
 buildSample xs = do
    x  <- elements xs
    f1 <- elements $ f x
    g1 <- elements $ g x
    return (x, f1, g1)

 If you want n such samples then I would suggest

 samples <- replicateM n $ buildSample xs
  It's actually slightly more complicated than this, since for the real
  problem I start with type [[A]], and want to map buildSample over
  these, and sample from the results.
 
  There seem to be so many ways to deal with random numbers in Haskell.
 
 Indeed.
  After some false starts, I ended up doing something like
 
   sample :: [a] -> State StdGen [a]
   sample [] = return []
   sample xs = do
     g <- get
     let (g', g'') = split g
         bds = (1, length xs)
         xArr = listArray bds xs
     put g''
     return . map (xArr !) $ randomRs bds g'
 
 Not bad, although you could instead have a sample function that returns
 a single element and then use replicateM to get a list.
   buildSample xs = sample $ do
     x <- xs
     y <- f x
     z <- g x
     return (x,y,z)
 
  This is really bad, since it builds a huge array of all the
  possibilities and then draws from that. Memory is way leaky right now.
  I'd like to be able to just have it apply the lookup functions as
  needed.
 
  Also, I'm still using GHC 6.6, so I don't have
  Control.Monad.State.Strict. Not sure how much difference this makes,
  but I guess I could just copy the source for that module if I need to.
 
 Strictness won't help.  In fact you would be better with laziness if
 that were possible (which it isn't here).  The entire array has to be
 constructed before you can look up any elements in it.  That forces the
 entire computation.   But compare your implementation of buildSample to
 mine.

 Paul.



-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monte carlo trouble

2007-08-15 Thread Chad Scherrer
Funny you should say that, I was just experimenting with generating
one at a time using (StateT StdGen Maybe). If I get stuck (again) I'll
check out ListT. Thanks!

Chad

On 8/15/07, Paul Johnson [EMAIL PROTECTED] wrote:
 Chad Scherrer wrote:
  Thanks for your replies.
 
  I actually starting out returning a single element instead. But a
  given lookup might return [], and the only way I could think of to
  handle it in (State StdGen a) would be to fail in the monad. But
  that's not really the effect I want - I'd rather have it ignore that
  element. Another option was to wrap with Maybe, but then since I
  really want  a sequence of them anyway, I decided to just wrap in a
  List instead. Is there a way Maybe would work out better?
 Ahh.  How about using ListT Gen then?  That (if I've got it the right
 way round) works like the list monad (giving you non-determinism), but
 has your random number generator embedded as well.  Take a look at All
 About Monads at http://www.haskell.org/all_about_monads/html/.  Each
 action in the monad may use a random number and produce zero or more
 results.

 Paul.


-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monte carlo trouble

2007-08-15 Thread Chad Scherrer
Yeah, I did have troubles with  (StateT StdGen Maybe). If it hits a
Nothing, I'd like it to skip that one and try again with the next
state. But instead, Nothing is treated as a failure condition that
makes the whole thing fail. I just found MaybeT on the wiki, which
looks like it could work. I'll take a look at that and the ListT
thing.

I'm starting to think the power of abstraction is a blessing and a
curse. Haskell's abstraction mechanisms are so powerful that it's
generally possible to come up with a way to solve a given problem
elegantly and efficiently. On the other hand, if a problem isn't so
well studied, the bulk of the work is in finding the right
abstraction, which forces generalization beyond what would otherwise
be needed (though it'll be easier the next time!).

On 8/15/07, Paul Johnson [EMAIL PROTECTED] wrote:
 Chad Scherrer wrote:
  Funny you should say that, I was just experimenting with generating
  one at a time using (StateT StdGen Maybe). If I get stuck (again) I'll
  check out ListT. Thanks!
 
 
 You definitely want a list not a Maybe.  List is for 0 or more results
 whereas Maybe is for 0 or 1.  Yes you can have a Maybe [a] with Nothing
 instead of [], but its redundant.

 I'm fairly sure you need it the other way out too.  All About Monads has
 a demo of StateT s [a] where it is used to solve a logic problem.  The
 key point is that in the logic problem each possible result is
 associated with a different state change.  However here you want to have
 a list of results and then a state change when you pick one (or a
 subset, or whatever you want).  So that would be ListT (StateT StdGen).

 Paul.




-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Bathroom reading

2007-08-14 Thread Chad Scherrer
Maybe something of these?

http://www.haskell.org/haskellwiki/Blow_your_mind

-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] where to put handy functions?

2007-08-10 Thread Chad Scherrer
Agreed. I like select better too, and the regular vs Asc version
is a nice parallel with fromList and fromAscList.

Chad

On 8/10/07, Tillmann Rendel [EMAIL PROTECTED] wrote:

 Non-negative is obvious for a list of indexes. Ordered makes sense
 implementation-wise, and should be easy to match for many applications.
 But is it a sensible constraint on a standard library function?

 For Data.List, I would prefer a multi-pass select function like this:

 select :: Integral n => [n] -> [a] -> [a]
 select ns xs = select' 0 ns xs where
   select' k [] _ = []
   select' k (n:ns) [] = select' k ns []
   select' k nns@(n:ns) yys@(y:ys) = case k `compare` n of
     LT -> select' (succ k) nns ys
     EQ -> y : select' k ns yys
     GT -> select nns xs

 *Main> select [0, 2, 2, 1] "abcde"
 "accb"

 There could be selectAsc for the special case of ordered indexes, to
 avoid keeping the whole input list in memory.

Tillmann
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] where to put handy functions?

2007-08-10 Thread Chad Scherrer
Hmm, this would make a good QuickCheck property. I wonder, is listify
a contravariant functor? Fun to work through the details of that some
time, I think.

Chad

On 8/10/07, Brent Yorgey [EMAIL PROTECTED] wrote:

 Amusingly, extract is intimately related to function composition. Suppose we
 have

  listify :: (Int -> Int) -> [Int]
  listify = flip map [0..]

  Then if f, g :: Int -> Int, and f is monotonically increasing, we have the
 identity

 (listify f) `extract` (listify g) = listify (g . f)

 This randomly occurred to me as I was falling asleep last night and I
 thought I would share. =)

 -Brent
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] listify

2007-08-10 Thread Chad Scherrer
Don't be too impressed, I think I was way off base. Looks like just a
homomorphism:
http://en.wikipedia.org/wiki/Homomorphism

Chad

  I wonder, is listify a contravariant functor?

 I wonder - will I ever reach the stage where I too make off-hand remarks
 like this? :-}

 Now I know how all the normal people feel when I tell them that a
 relation is simply a subset of the extended Cartesian product of the
 respective domains of its attributes...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] where to put handy functions?

2007-08-09 Thread Chad Scherrer
Is there process for submitting functions for consideration for
inclusion into future versions of the standard libraries? For example,
I'd like to see this in Data.List:

extract :: [Int] -> [a] -> [a]
extract = f 0
    where
    f _ _ [] = []
    f _ [] _ = []
    f k nss@(n:ns) (x:xs) = if n == k then x:f (k+1) ns xs
                            else f (k+1) nss xs

This behaves roughly as
extract ns xs == map (xs !!) ns

except that it's a lot more efficient, and it still works if ns or xs
(but not both) are infinite. Oh, and ns are required to be ordered
and non-negative.

I'm guessing there are a lot of similarly simple handy functions, and
I'm wondering if there's anything in place to avoid (1) reinventing
the wheel, and (2) name clashes. Someone else may have written
extract as well, meaning one of us wasted our time. And chances are,
if they did, it has a different name, leading to forced qualified
imports.

Finally, even if no one else is using it, it would be good to settle
on reasonable names for things more easily. Is there a better name for
this function? Is there a reason not to call it extract?

-- 

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] infinite list of random elements

2007-07-31 Thread Chad Scherrer
Ok, that looks good, but what if I need some random values elsewhere
in the program? This doesn't return a new generator (and it can't
because you never get to the end of the list). Without using IO or ST,
you'd have to thread the parameter by hand or use the State monad,
right? This is where I was leaking space before.

Actually, this makes me wonder... I think what killed it before was
that the state was threaded lazily through the various (= very many)
calls. I suppose a State' monad, strict in the state, could help here.
I wonder how performance for this would compare with IO or ST. Might
have to try that sometime...
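
In case it helps, a rough sketch of what that might look like with Control.Monad.State.Strict and System.Random (the helper name here is made up, purely for illustration):

import Control.Monad (replicateM)
import Control.Monad.State.Strict
import System.Random

-- Draw one value in the given range, updating the generator held in the state.
randomRSt :: (Random a, RandomGen g) => (a, a) -> State g a
randomRSt bounds = do
  g <- get
  let (x, g') = randomR bounds g
  put g'
  return x

-- e.g. evalState (replicateM 5 (randomRSt (1, 10 :: Int))) (mkStdGen 42)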

Chad

On 7/31/07, Lennart Augustsson [EMAIL PROTECTED] wrote:
 No leak in sight.

   -- Lennart

 import Random
 import Array

 randomElts :: RandomGen g => g -> [a] -> [a]
 randomElts _ [] = []
 randomElts g xs = map (a!) rs
   where a  = listArray (1, n) xs
         rs = randomRs (1, n) g
         n  = length xs

 main = do
     g <- getStdGen
     let xs = randomElts g [10,2,42::Int]
     print $ sum $ take 100 xs
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] infinite list of random elements

2007-07-31 Thread Chad Scherrer
On 7/31/07, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:

 On Jul 31, 2007, at 16:20 , Chad Scherrer wrote:

  calls. I suppose a State' monad, strict in the state, could help here.

 You mean Control.Monad.State.Strict ?

Umm, yeah, I guess I do. Glad I hadn't started recoding it!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] infinite list of random elements

2007-07-30 Thread Chad Scherrer
I'm trying to do something I thought would be pretty simple, but it's
giving me trouble.

Given a list, say [1,2,3], I'd like to be able to generate an infinite
list of random elements from that list, in this case maybe
[1,2,1,3,2,1,3,2,3,1,2,...]. I'm using IO for random purely due to
laziness (my own, not Haskell's).

I was thinking the best way to do this might be to first write this function:

randomElts :: [a] -> [IO a]
randomElts [] = []
randomElts [x] = repeat (return x)
randomElts xs = repeat r
  where
  bds = (1, length xs)
  xArr = listArray bds xs
  r = do
    i <- randomRIO bds
    return (xArr ! i)

Then I should be able to do this in ghci:

 sequence . take 5 $ randomElts [1,2,3]
[*** Exception: stack overflow

Any idea what's going on? I thought laziness (Haskell's, not my own)
would save me on this one.

Thanks!
Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] infinite list of random elements

2007-07-30 Thread Chad Scherrer
Thanks for your responses.

Stefan, I appreciate your taking a step back for me (hard to judge
what level of understanding someone is coming from), but the example
you gave doesn't contradict my intuition either. I don't consider the
output [IO a] a list of tainted a's, but, as you suggest, a list of
IO actions, each returning an a. I couldn't return an IO [a], since
that would force evaluation of an infinite list of random values, so I
was using [IO a] as an intermediary, assuming I'd be putting it
through something like (sequence . take n) rather than sequence alone.
Unfortunately, I can't use your idea of just selecting one, because I
don't have any way of knowing in advance how many values I'll need (in
my case, that depends on the results of several layers of Map.lookup).
Also, I'm using GHC 6.6, so maybe there have been recent fixes that
would now allow my idea to work.

Cale, that's interesting. I wouldn't have thought this kind of
laziness would work in this context.

Lennart, I prefer the purely functional approach as well, but I've
been bitten several times by laziness causing space leaks in this
context. I'm on a bit of a time crunch for this, so I avoided the
risk.

Sebastian, this seems like a nice abstraction to me, but I don't think
it's the same thing statistically. If I'm reading it right, this gives
a concatenation of an infinite number of random shuffles of a
sequence, rather than sampling with replacement for each value. So
shuffles [1,2] g
would never return [1,1,...], right?

Chad

 I was thinking the best way to do this might be to first write this function:

 randomElts :: [a] -> [IO a]
 randomElts [] = []
 randomElts [x] = repeat (return x)
 randomElts xs = repeat r
   where
   bds = (1, length xs)
   xArr = listArray bds xs
   r = do
     i <- randomRIO bds
     return (xArr ! i)

 Then I should be able to do this in ghci:

  sequence . take 5 $ randomElts [1,2,3]
 [*** Exception: stack overflow
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Preferred way to get data from a socket

2007-06-26 Thread Chad Scherrer

I've never used sockets before, but I need to now, and I need to be
able to get a lot of data quickly. I was thinking about doing
something like this (below), but I'm wondering if there's a way that
would be faster. Is the obvious way of doing this the right way? I'm
happy to install outside libraries if that would help, as long as they
work on Linux and MS. Thanks!

-Chad

--8---

import Network
import qualified Data.ByteString.Lazy as B

hostName = "myComputer"
portID = PortNumber 54321

theData :: IO B.ByteString
theData = connectTo hostName portID >>= B.hGetContents
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Preferred way to get data from a socket

2007-06-26 Thread Chad Scherrer

Ok, cool. FWIW, the current documentation for Network says:

For really fast I/O, it might be worth looking at the hGetBuf and
hPutBuf family of functions in System.IO.

But this looked pretty low-level to me, and I figured it might be outdated.

I also know Bulat Ziganshin had put together a nice-looking Streams
library (http://unix.freshmeat.net/projects/streams/) based on John
Goerzen's previous HVIO work, but I wasn't sure if the ByteString
stuff matches the speed and encapsulates all of the functionality of
that anyway. Or can/should they be used together somehow?

Chad



 --8---

 import Network
 import qualified Data.ByteString.Lazy as B

 hostName = "myComputer"
 portID = PortNumber 54321

 theData :: IO B.ByteString
 theData = connectTo hostName portID >>= B.hGetContents
 ___

Looks like the obvious, right way to me.

-- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-25 Thread Chad Scherrer

Bryan,

I downloaded your FileManip library and Duncan's zlib library, but I
kept getting a Too many open files exception (it matches over 9000
files). I tried to get around this using unsafeInterleaveIO as Greg
had suggested, so now I have this:

foo = namesMatching "*/*.z" >>=
 fmap B.concat . mapM (unsafeInterleaveIO . fmap decompress . B.readFile)

Now it doesn't complain about too many open files, but instead I get
this runtime error:

LPS *** Exception: user error (Codec.Compression.Zlib: incorrect header check)

I tried to get the same error on simpler code, and I've found this
gives the same error:

bar = fmap decompress $ L.readFile "myData.z"

It seemed to me the file might be corrupted, but I can do
gunzip -c < myData.gz

at the command line and see the results just fine.

I also tried gzipping a different, smaller file, and I changed the
string in bar accordingly. No error in that case. So it seems to be
a problem with myData.z, but why would it gunzip from the command line
with no trouble in that case?

Thanks,
Chad

On 6/24/07, Bryan O'Sullivan [EMAIL PROTECTED] wrote:

Using my FileManip library, you'd do that like this.

import Codec.Compression.GZip
import qualified Data.ByteString.Lazy as B
import System.FilePath.Glob

foo :: IO B.ByteString
foo = namesMatching "*/*.gz" >>=
   fmap B.concat . mapM (fmap decompress . B.readFile)

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/FileManip-0.2

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-25 Thread Chad Scherrer

Jedaï,

Are you sure you're not confusing .z with .Z?

http://kb.iu.edu/data/afcc.html

And is it possible that gzip is smarter somehow? Doesn't
Codec.Compression.GZip call the same C library used by gzip?

Chad

On 6/25/07, Chaddaï Fouché [EMAIL PROTECTED] wrote:

Because gunzip is smarter than your program in that he can decompress
gzip format but Z format too (which is produced by the very old
compress unix utility).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] directory tree?

2007-06-25 Thread Chad Scherrer

Bulat,

I don't think I can. (1) (de)compress is defined for lazy bytestrings,
and (2) my data comes to me compressed in order to fit it all on a
single DVD. So even if I could uncompress each file strictly, I
couldn't hold such a big strict bytestring in memory at once.

On 6/25/07, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Monday, June 25, 2007, 10:47:11 PM, you wrote:
 bar = fmap decompress $ B.readFile "myData.gz"
try it with non-lazy bytestrings:
import qualified Data.ByteString as B

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-25 Thread Chad Scherrer

On 6/25/07, Bryan O'Sullivan [EMAIL PROTECTED] wrote:

Are you sure you really have gzip files?  If you're on a Linux or
similar box, what does "file myfile.z" report to you?  It should say
something like "gzip compressed data".


Aarrgh, that's the problem - it does use compress. Is the distinction
between .z and .Z not an established standard? I'm guessing there's
not a Haskell interface for compress.

I could just tell the OS to start a gzip process, but I need to be
able to build it here on my Linux box, and run it on various MS
machines. Seems like the best approach at this point might be to
require everyone (only 3 people) to uncompress the data onto the hard
drive first, then go from there.

Thanks for all the help!

-Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-25 Thread Chad Scherrer

On 6/25/07, Stefan O'Rear [EMAIL PROTECTED] wrote:

.z  : always pack
.Z  : always compress
.gz : always gzip

gzip can handle all three, zlib only the last.  (Are you *sure* your
file is compress?)


This means it's compress, doesn't it?

$ file myData.z
myData.z: compress'd data 16 bits


 I could just tell the OS to start a gzip process, but I need to be
 able to build it here on my Linux box, and run it on various MS
 machines. Seems like the best approach at this point might be to
 require everyone (only 3 people) to uncompress the data onto the hard
 drive first, then go from there.

Or you could reimplement compress in Haskell.  The algorithm is
shockingly simple, and there is a sample implementation (needs
optimization and compress(1) header support, but the LZW engine is
there) is already on the Wiki.  Note that the patent expired in June
'06, so you don't need to worry about that.

http://haskell.org/haskellwiki/Toy_compression_implementations


This looks like a lot of fun, but I've got too many other pieces of
code to try to get running efficiently as it is. But I hadn't seen
this link before, and it looks like interesting stuff. Thanks!

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-24 Thread Chad Scherrer

Thanks, Bryan, this is much cleaner than the imperative hack I was
throwing together. And aside from the imports, it even fits the
couple lines of code criteria! Wonderful.

I won't be able to try this out until I get back to work, but I'm
wondering whether this will handle a few thousand files. As it is,
even the "gunzip -c ./2*/*.z" I'm trying to emulate doesn't really
work as is, because the OS complains there are too many pattern
matches.

Does namesMatching just feed the pattern to the OS, or does it match
the pattern itself?

Chad


 What got me thinking about this is I'd like to be able to do something
 like this in just a couple lines of code:

 gunzip -c ./2*/*.z

 ... and feed the result into a giant lazy ByteString.

Using my FileManip library, you'd do that like this.

import Codec.Compression.GZip
import qualified Data.ByteString.Lazy as B
import System.FilePath.Glob

foo :: IO B.ByteString
 foo = namesMatching "*/*.gz" >>=
    fmap B.concat . mapM (fmap decompress . B.readFile)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] directory tree?

2007-06-22 Thread Chad Scherrer

Haskell is great at manipulating tree structures, but I can't seem to
find anything representing a directory tree. A simple representation
would be something like this:

data Dir = Dir {dirName :: String, subDirectories :: [Dir], files :: [File]}
data File = File {fileName :: String, fileSize :: Int}

Maybe these would need to be parametrized to allow a function
splitting files by extension or that kind of thing. Anyway, the whole
idea would be to abstract as much of the file stuff out of the IO
monad as possible.
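
For concreteness, here is a rough, untested sketch of how such a tree might be read in, assuming the two declarations above (symlinks and special files are glossed over, and everything that isn't a directory is treated as a plain file):

import Control.Monad (filterM, forM)
import System.Directory (doesDirectoryExist, getDirectoryContents)
import System.FilePath ((</>), takeFileName)
import System.IO (IOMode (ReadMode), hClose, hFileSize, openFile)

readDir :: FilePath -> IO Dir
readDir path = do
  names <- getDirectoryContents path
  let children = [ path </> n | n <- names, n /= ".", n /= ".." ]
  dirs <- filterM doesDirectoryExist children
  let plain = filter (`notElem` dirs) children
  subs <- mapM readDir dirs
  fs   <- forM plain $ \f -> do
            -- crude size check; a real version would handle open failures
            h  <- openFile f ReadMode
            sz <- hFileSize h
            hClose h
            return (File (takeFileName f) (fromIntegral sz))
  return (Dir (takeFileName path) subs fs)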

I haven't used the Scrap Your Boilerplate stuff yet, but it seems
like that could fit in here naturally to traverse a Dir and make
changes at specified points.

The only problem I can see (so far) with the approach is that it might
make big changes to the directory tree too easy to make. I'm not
sure immediately how to deal with that, or if the answer is just to
post a be careful disclaimer.

So, what do you think? Do you know of any work in this direction? Is
there a way to make dangerous 1-liners safe? Is there a fundamental
flaw with the approach I'm missing?

Thanks much,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-22 Thread Chad Scherrer

On 6/22/07, Jeremy Shaw [EMAIL PROTECTED] wrote:

Hello,

Have you seen Tom Moertel's series on directory-tree printing in Haskell ?


No, I hadn't. Might be just the ticket. Thanks!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-22 Thread Chad Scherrer

Nice, thanks! Certainly looks like a good start at this.

What got me thinking about this is I'd like to be able to do something
like this in just a couple lines of code:

gunzip -c ./2*/*.z

... and feed the result into a giant lazy ByteString. Now, the UNIX
command doesn't really cut it, because it complains there are too many
files, but its simplicity still makes the IO monad solution feel
clunky by comparison.

Chad

On 6/22/07, Greg Fitzgerald [EMAIL PROTECTED] wrote:

Here's my shot.  http://hpaste.org/370
Not much different than Tom Moertel's, but grabs the fileSize along the way.
-Greg

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Currying: The Rationale

2007-05-23 Thread Chad Scherrer

Is (^2) really considered currying? As I understand it, this is
syntactic sugar for a section, and might confuse the issue a bit,
since it's distinct from ((^) 2). In this case we would have something
like

Prelude> let pow2 = ((^) 2)
Prelude> map pow2 [1..10]
[2,4,8,16,32,64,128,256,512,1024]

I think it's also worth pointing out that currying can make point-free
programming more natural. So, foldl has type

Prelude> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a

That this could have been defined as
foldlUncurried :: ((a -> b -> a), a, [b]) -> a

but then simple point-free definitions like
sum = foldl (+) 0

would have to be written as
sum xs = foldlUncurried ((+), 0, xs)

OTOH, sometimes an uncurried version is easier to reason about. So, in
some of Richard Bird's work (Algebra of Programming is like this,
and probably at least some of his intro book) examples are uncurried,
as it shows more of the algebraic structure, and allows reasoning in a
product space rather than an exponential. Umm, I guess that qualifies
as jargon - sorry.

Chad


(^) applied to 2, produces a new function, we can map over a list:

   Prelude> let sq = (^2)
   Prelude> map sq [1..10]
   [1,4,9,16,25,36,49,64,81,100]

or more explicitly:

   Prelude> let x `to` y = x ^ y
   Prelude> let sq x = x `to` 2
   Prelude> map sq [1..10]
   [1,4,9,16,25,36,49,64,81,100]

-- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Currying: The Rationale

2007-05-23 Thread Chad Scherrer

On 5/23/07, Philippa Cowderoy [EMAIL PROTECTED] wrote:

On Wed, 23 May 2007, Chad Scherrer wrote:

 Is (^2) really considered currying? As I understand it, this is
 syntactic sugar for a section, and might confuse the issue a bit,
 since it's distinct from ((^) 2).

Sure, but it's (flip (^)) 2.


Well, ok, but you've changed the definition. If it were enough for it
to be equivalent to a curried version, we could as well write

sq x = times (x,x) where times (x,y) = x * y

and argue that this is partial application of a curried function
because it's equivalent to the curried version you gave. But I guess
I'm being a bit pedantic here, and I suspect your definition is
exactly how (^2) is desugared.

Chad



--
[EMAIL PROTECTED]

Sometimes you gotta fight fire with fire. Most
of the time you just get burnt worse though.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Bad let..do syntax

2007-05-17 Thread Chad Scherrer

I've gotten into a habit of preceding most dos in my code with a
$, and indenting the next line. I kind of like this, since it makes
the indentation more uniform. But it seems to have bitten me now. I'd
like to write something like this

s = sum $ do
 x <- [1,2,3]
 let b = sum $ do
   y <- [0..x + 1]
   return y
 return (x + b)

But GHC complains of Empty 'do' construct. It likes the alternative

s' = sum $ do
 x <- [1,2,3]
 let b = sum $ do y <- [0..x + 1]
                  return y
 return (x + b)

just fine, but that looks horrible to me (ok, horrible is a bit
strong, but I don't like it as much).

So I'm wondering, (1) Is this intended to give an error, or is it just
a momentary hiccup, and (2) if others have run into this, is there a
more aesthetic alternative that works?

Thanks,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bad let..do syntax

2007-05-17 Thread Chad Scherrer

Thanks, I had forgotten about multiple let bindings as something it
might be looking for. I guess in this case the curly braces aren't too
bad, given that this situation doesn't come up so much, and it would
let me keep the indentation consistent.

And yes, this is just a boiled-down version of the original code,
where sum [0.. x+1] wasn't an option.

-Chad

On 5/17/07, David House [EMAIL PROTECTED] wrote:

On 17/05/07, Chad Scherrer [EMAIL PROTECTED] wrote:
 But GHC complains of Empty 'do' construct.

Because it takes the indented following lines as being new bindings in
the let-block. The trick is to indent them past the 'sum':

let b = sum $ do
  y <- [0..x + 1]
  return y

Or to bypass layout altogether:

let { b = sum $ do
  y <- [0..x + 1]
  return y }

(Of course, in this specific case I'd write sum [0..x + 1], but I
guess that this is an example of a general case.)

--
-David House, [EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: release plans

2007-04-16 Thread Chad Scherrer

What do you think of this plan?  Are there features/bug-fixes that you really

want to see in 6.8?

I'm most anxious for parallel GC - do you think it will be another
release or two before this is a reality?

Thanks,
Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] operating on a hundred files at once

2007-04-10 Thread Chad Scherrer

Hi Jeff,


I have a series of NxM numeric tables I'm doing a quick
mean/variance/t-test etcetera on.  The cell t1 [i,j] corresponds exactly
to the cells t2..N [i,j], and so it's perfectly possible to read one
item at a time from each of the 100 files and compute the mean/variance
etcetera on all cells that way.


So after mapping openAndProcess, you have a 100xNxM array (really a
list-of-lists-of-lists), right? And then when you take means and
variances, which index are you doing this with respect to? As I read
it, you seem to be trying to eliminate the first axis, and end up with
an NxM array.

If this is the case, let's say we have
mean, variance :: [Double] -> Double
openAndProcess :: String -> IO (Matrix String)

Here, defining
type Matrix a = [[a]]
makes it easier to keep the types straight.

Then you have these building blocks:

(map . map . map) read :: [Matrix String] -> [Matrix Double]

transpose2 :: [Matrix Double] -> Matrix [Double]
(a couple of lines, maybe even a one-liner, if you use the fact that [a] is a monad)

(map . map) mean :: Matrix [Double] -> Matrix Double

Composing these gives a function [Matrix String] -> Matrix Double, so
once we get to [Matrix String], we're effectively done.

you also use
map openAndProcess :: [String] -> [IO (Matrix String)]

You can use sequence to get the IO outside the list, so now you have
IO [Matrix String]. All you have to do now is use liftM on your
function [Matrix String] -> Matrix Double, which turns it into a
function IO [Matrix String] -> IO (Matrix Double).
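
To make that concrete, here's one way the pieces might fit together (transpose2 below is just one possible definition, untested against your actual data):

import Control.Monad (liftM)
import Data.List (transpose)

type Matrix a = [[a]]

-- Move the per-file axis to the inside: one list per cell, with an
-- entry for each input table.
transpose2 :: [Matrix a] -> Matrix [a]
transpose2 = map transpose . transpose

-- Reduce each cell with a statistic such as mean or variance.
cellStat :: ([Double] -> Double) -> [Matrix String] -> Matrix Double
cellStat stat = (map . map) stat . transpose2 . (map . map . map) read

-- At the IO level, assuming openAndProcess :: String -> IO (Matrix String):
--   liftM (cellStat mean) (mapM openAndProcess fileNames)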

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Is ([] -> []) an arrow?

2007-03-21 Thread Chad Scherrer

In John Hughes's Programming With Arrows
(http://www.cs.chalmers.se/~rjmh/afp-arrows.pdf), he discusses a
stream function type
newtype SF a b = SF {runSF :: [a] -> [b]}
and gives
instance Arrow SF where 
He gives some examples using this, and everything seems to go just fine.

But in Ross Patterson's Arrows and Computation
(http://www.soi.city.ac.uk/~ross/papers/fop.html), he says that
newtype ListMap i o = LM ([i] -> [o])
is ALMOST an arrow.

Now, I've heard (but never verified) that IO fails to satisfy some
monad laws, yet here we are, using it as a monad. Is a similar kind of
thing going on here? Has anyone hit any snags in using this instance?
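
For reference, a sketch of roughly the instance from Hughes's paper, written against the current Arrow class (which nowadays factors composition out into Control.Category). The zip in first is presumably where the "almost" comes from: the laws only hold when the wrapped functions are length-preserving.

import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

newtype SF a b = SF { runSF :: [a] -> [b] }

instance Category SF where
  id = SF id
  SF g . SF f = SF (g . f)

instance Arrow SF where
  arr f = SF (map f)
  -- Pair each output with the untouched second components; note that
  -- zip silently truncates if f changes the length of its input.
  first (SF f) = SF (\xys -> let (xs, ys) = unzip xys in zip (f xs) ys)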

Thanks,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Trouble trying to find packages for ubuntu linux

2007-03-15 Thread Chad Scherrer

Brian,

I had this exact problem, and I found this approach to work wonderfully:

http://pupeno.com/2006/12/17/unstable-packages-on-ubuntu/


Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread Chad Scherrer

Does anyone know of any Haskell code for numeric minimization? I was
thinking conjugate gradient would be good, but at this point I'd be
happy with anything.

I've found some code written by Tomasz Cholewo at
http://ci.uofl.edu/tom/software/Haskell/
but it requires importing his Arr.lhs library, which is not publicly
available.

The only other thing I've been able to dig up is this
www.st.cs.ru.nl/papers/1997/serp97-cgfunctional.ps.gz
which suggests Haskell is slow for such problems. I suspect this was
an implementation issue, so I don't think their code would be very
helpful (though it would be nice to tidy it up and demonstrate the
improvement - could it beat the Clean implementation they give?)

The other possibility I was considering was using Alberto Ruiz's
wrapper for the GSL library
http://dis.um.es/~alberto/GSLHaskell/
The only problems with this are (1) requires having GSL available, so
it's not as portable, and (2) does everything in terms of lists, which
requires a lot of translations to and from lists (I'm using mutable
arrays).

If there's nothing already written that works together, one of these
should give me a start, but I'd like to avoid reinventing the wheel if
possible.
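
Purely as a starting point, here is a tiny fixed-step gradient-descent sketch (nothing like conjugate gradient, and the step size and tolerance are placeholders):

-- Minimise an objective by stepping against a user-supplied gradient.
descend :: Double                  -- step size
        -> Int                     -- iteration cap
        -> ([Double] -> [Double])  -- gradient of the objective
        -> [Double]                -- starting point
        -> [Double]
descend step cap grad = go cap
  where
    go 0 x = x
    go k x
      | all (\g -> abs g < 1e-8) gx = x
      | otherwise = go (k - 1) (zipWith (\xi gi -> xi - step * gi) x gx)
      where gx = grad x

-- e.g. descend 0.1 10000 (\[x, y] -> [2 * (x - 1), 2 * (y + 2)]) [0, 0]
-- should approach [1, -2] for f (x, y) = (x - 1)^2 + (y + 2)^2.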

Thanks!

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread Chad Scherrer

On 2/28/07, Dan Weston [EMAIL PROTECTED] wrote:

GSL is written in C, and I don't know any language more portable than
that! gsl_vector and gsl_matrix use a continuous block of doubles, so
you can use the FFI to marshall this however you want for efficiency.

I'd stick with GSLHaskell until you're ready to optimize the data
marshalling though.

I like spending my time on interesting things, not reinventing
pre-debugged and efficient libraries. I use GSLHaskell in my work and
have never had a problem.

Dan


That's my preference, too. Have you ever tried GSLHaskell on MS
Windows? I do most of my work on Linux, but a guy I'm working with
uses MS, and I've heard cygwin can be a huge pain.

I have a big space leak right now I thought might be because of list
laziness in the interface, but that should be squashable with a little
work, and is not as big a deal as having lots of dependencies when
passing code around. I only really need one function from GSL, and the
odds of someone having written a work-alike in Haskell seemed pretty
good.

Of course, in cases where GSL is already installed, or where more of
its functionality is needed, GSLHaskell is the obvious choice.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Leaves of a Tree

2007-02-21 Thread Chad Scherrer

Tom,

I think inserting elements would be a lot faster than multiple unions.
I would try:

leafList :: Tree -> [Int]
leafList (Leaf n) = [n]
leafList (Branch left right) = leafList left ++ leafList right

leaves = fromList . leafList

If you're writing many functions on Trees (or maybe even if you're
not), you might consider writing a fold function and putting leafList
in terms of this. Do you have experience with folds?

-Chad


Hello,

Any recommendations for speeding up extracting the set of leaves from a tree?

data Tree = Branch Tree Tree | Leaf Int deriving (Eq, Ord)

My slow, naive function:

leaves :: Tree -> Set Int
leaves (Leaf n) = singleton n
leaves (Branch left right) = union (leaves left) (leaves right)

In my case, many of the branches in the tree are the same.  I suspect
the fixed point operation mentioned in the thread speeding up
fibonacci with memoizing is applicable, but I'm having a tough time
making the connection.

-Tom

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Leaves of a Tree

2007-02-21 Thread Chad Scherrer

Neil,

I think this idea is better than what I had suggested, but as it
stands it doesn't typecheck. Did you mean something like this?

leaves :: Tree -> [Int]
leaves = f []
  where
  f rest (Leaf n) = n : rest
  f rest (Branch l r) = f (f rest r) l


-Chad
---
(from Neil Mitchell)

Hi Tom


data Tree = Branch Tree Tree | Leaf Int deriving (Eq, Ord)



 leaves :: Tree -> Set Int
leaves (Leaf n) = singleton n
leaves (Branch left right) = union (leaves left) (leaves right)


The standard method for a traversal over leaves with accumulation is:

leaves :: Tree -> Set Int
leaves x = f []
 where
 f (Leaf n) rest = n : rest
 f (Branch l r) rest = f l (f r rest)

This makes the construction of the list quite cheap.

Then you can do the fromList trick, and that might give you a speed up.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Leaves of a Tree

2007-02-21 Thread Chad Scherrer

Hi Tom,

Tom Hawkins wrote:

Folding was my first approach:

leaves :: Tree -> Set Int
leaves tree = accumLeaves Set.empty tree

accumLeaves :: Set Int -> Tree -> Set Int
accumLeaves set (Leaf n) = insert n set
accumLeaves set (Branch l r) = foldl accumLeaves set [l,r]

However, with this approach I quickly ran out of stack space.  I found
this odd, since I thought this program was tail recursive and
shouldn't be using the stack.


This is a problem of tail recursion and lazy evaluation not playing
nicely. See this page:

http://www.haskell.org/haskellwiki/Stack_overflow
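
For what it's worth, the usual fix in this situation is to force the accumulator at each step, for instance by switching to foldl' (a sketch, assuming the Tree type and Data.Set from your post):

import Data.List (foldl')
import Data.Set (Set)
import qualified Data.Set as Set

accumLeaves' :: Set Int -> Tree -> Set Int
accumLeaves' set (Leaf n)     = Set.insert n set
accumLeaves' set (Branch l r) = foldl' accumLeaves' set [l, r]
-- foldl' forces the set to WHNF after every step, which is usually
-- enough to keep the chain of suspended inserts from building up.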


My next attempt was to use some form of memorization.  Since many of
the branches in my trees are equal, memorization should prevent
following branches that have already been calculated.  My question is,
what is the best way to transform the original function to incorporate
memorization?


I think something like this could be done, if there's some invariants
maintained by the data structure. Is there any additional structure
you're imposing beyond that required for the data line?

-Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monolithic module tool

2007-02-08 Thread Chad Scherrer

Maybe a year or so ago, I came across a tool for Haskell that takes a
collection of modules, does some name-mangling, and gives a single
module that only needs to export main. There were wonderful reductions
in the size of the resulting executable, and potentially more
optimizations available to GHC.

Anyway, I can't find the link now, and I'm wondering if anyone knows
of it. Thanks!

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (a -> [b]) vs. [a -> b]

2007-02-02 Thread Chad Scherrer

Oops, I thought I had sent a response to the cafe, but it looks like
it just went to Matthew.

Unfortunately, I was trying to give a simplification of the real
problem, where the monad is STM instead of []. Based on apfelmus's
observation of why they can't be isomorphic, I'm guessing I'm out of
luck.

http://www.haskell.org/pipermail/haskell-cafe/2006-December/020041.html

So in reality, I'm trying to construct something like
f :: (a -> STM b) -> STM (a -> b)

I just figured it was a general monadic kind of problem, more simply
expressed using lists. But the (!!) solution doesn't make sense in
this context.

On 2/2/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Chad Scherrer wrote:
 Are (a -> [b]) and [a -> b] isomorphic? I'm trying to construct a function

 f :: (a -> [b]) -> [a -> b]

 that is the (at least one-sided) inverse of

 f' :: [a -> b] -> a -> [b]
 f' gs x = map ($ x) gs

Anything better than this?

f g = [\x -> g x !! n | n <- [0..]]

-Yitz




--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] (a -> [b]) vs. [a -> b]

2007-02-01 Thread Chad Scherrer

Are (a -> [b]) and [a -> b] isomorphic? I'm trying to construct a function

f :: (a -> [b]) -> [a -> b]

that is the (at least one-sided) inverse of

f' :: [a -> b] -> a -> [b]
f' gs x = map ($ x) gs

It seems like it should be obvious, but I haven't had any luck with it yet.
Any help is greatly appreciated.

Thanks,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] newTArrayIO

2007-01-26 Thread Chad Scherrer

This seems like a natural thing to have around, but it's not in GHC 6.6...

newTArrayIO :: (Enum i, Ix i) => (i, i) -> a -> IO (TArray i a)
newTArrayIO (a,b) = liftM (TArray . listArray (a,b)) . sequence . zipWith
    ignore [a..b] . repeat . newTVarIO
  where ignore = flip const

I haven't done any testing with this beyond type checking, but it seems like
it could be useful for similar cases to newTVarIO. Has anyone else played
with anything similar?

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] STM and random numbers

2007-01-12 Thread Chad Scherrer

Hi,

I'd like to be able to use randomIO, but I'm working within the
context of STM. Is there a way to get these working together happily?

For now, I guess I could kludgingly use unsafePerformIO inside STM
(it's the other way around that's not allowed, right?), but I would
need to be sure it doesn't get inlined.

Thanks,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] STM and random numbers

2007-01-12 Thread Chad Scherrer

Wow, lots of great ideas. Thanks, guys.

Lemmih,
I worry about the uncertainty in the semantics that seems to be
introduced by the unsafe stuff. But I actually hadn't noticed
GHC.Conc.unsafeIOToSTM before, so it's good to know it's there.

Rich,
Even if I use randomIO outside the STM code, I don't know of a (safe)
way to bring it in. Anyway, the number of random values needed depends
on other stuff going on within the STM part.

Christian,
I think setStdGen has the same problem as just using randomIO and
ignoring the initial seed - there's no nice way of moving the (IO a)
value generated into the STM monad.

Rob, Henning,
I think I'll take this approach, or something similar. Thanks for the pointers.
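
For the record, one pattern that fits here (not necessarily the one suggested in the thread) is to keep the generator itself in a TVar and thread it explicitly inside the transaction; since the write is part of the transaction, a retry rolls the generator back along with everything else:

import Control.Concurrent.STM
import System.Random

randomRSTM :: Random a => TVar StdGen -> (a, a) -> STM a
randomRSTM genVar bounds = do
  g <- readTVar genVar
  let (x, g') = randomR bounds g
  writeTVar genVar g'
  return x

-- set up once with: genVar <- newTVarIO =<< getStdGen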

-Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] MVar style question

2007-01-05 Thread Chad Scherrer

Not that I've worked through yet. I really could be using IORef for
now, but I started using MVar because I might start using multiple
threads at some point, and I'd like to get comfortable with MVars for
that time.

On 1/4/07, Mike Gunter [EMAIL PROTECTED] wrote:


Do you need to maintain invariants that span the two?  Put
operationally, do you want different threads to be able to access a
and b concurrently?

-m

Chad Scherrer [EMAIL PROTECTED] writes:

 When using MVars, is there a reason to prefer using MVar (a,b) over
 (MVar a, MVar b), or vice versa? I'm not sure if this is really a
 question of style, or if there are practical implications I'm missing
 one way or another. Thanks!
 Chad Scherrer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] MVar style question

2007-01-04 Thread Chad Scherrer

When using MVars, is there a reason to prefer using MVar (a,b) over
(MVar a, MVar b), or vice versa? I'm not sure if this is really a
question of style, or if there are practical implications I'm missing
one way or another. Thanks!
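
A tiny hypothetical sketch of the practical difference, assuming two counters that must stay consistent with each other:

import Control.Concurrent.MVar

-- With a single MVar holding the pair, the update below is atomic: no
-- other thread can observe a state where n has left one counter but
-- not yet arrived at the other.
transfer :: MVar (Int, Int) -> Int -> IO ()
transfer v n = modifyMVar_ v (\(a, b) -> return (a - n, b + n))

-- With (MVar Int, MVar Int) the two counters can be updated (and locked)
-- independently, which allows more concurrency but pushes any invariant
-- spanning both values onto the programmer.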

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: what are the points in pointsfree?

2006-12-15 Thread Chad Scherrer

so pointsfree is a step beyond leaving the domain unspecified.


Actually, the domain is specified -  a function written as
f = g . h
has the same domain as h has.


my reading knowledge of haskell at this point far exceeds my ability
to write haskell. but so far, it has seemed to me that functions
written in the pf style are the most reuseable.

from what you just told me, it's not an artifact of the pf style, but
that maximally reusable functions will be expressible in a pointsfree
style. that those functions embody a pattern of computation, without
concern for the details.



I don't see where reusability is affected either way. Many (all?)
functions can be written in either style, and the definitions are
equivalent.

There are a few advantages I've seen to pf style:

1. Function definitions are shorter, and sometimes clearer.

2. It saves you from having to give points in the domain a name.

3. It can make reasoning about programs simpler. For example, if we know that
reverse . reverse == id
then anywhere we see reverse . reverse, we can replace it with id,
without having to track any other variables. In particular, Richard
Bird's book and articles use this to great effect for program
transformation and derivation from specification (Bird-Meertens
formalism)

The only disadvantage I know of is that it can lead to obfuscation,
especially if Haskell hasn't twisted your brain yet (in a good way).

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Efficiency of bang patterns

2006-11-07 Thread Chad Scherrer

I'm curious about the implementation of bang patterns, and the
implications for performance. Previously on this list, Lemmih has
pointed out that throwing in an extra `seq` here and there to force
strictness is a bad idea, unless you do it very carefully. He points
out that the strictness analyzer will catch a lot of cases without a
need for seq.

But the approach of compiling without any `seq`s, looking for leaks,
and then adding them in one at a time seems tedious. There should be a
more predictable, more uniform way of achieving strictness.

Is it reasonable to promote a programming style where strictness is
achieved using strictness annotations and bang patterns? I find it
very appealing that the ! syntax translates so nicely from type
declarations to patterns. I had originally thought that every bang
pattern was translated into a seq call, as a sort of preprocessing
step, but from this page...

http://hackage.haskell.org/trac/haskell-prime/wiki/BangPatterns

... it is clear that in many cases a bang pattern can be rewritten as
a case statement, which seems to me an opportunity to avoid some code
ugliness. In cases semantically distinct from a case statement, maybe
something like the following could be done, or maybe is already...

1. Pass the expression to the strictness analyzer, without the bang pattern.
2. If it's already strict, great! We're done.
3. If not, add an extra seq call.
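
To make the bang-pattern/seq correspondence concrete, a small sketch (the standard strict-accumulator idiom, nothing GHC-specific):

{-# LANGUAGE BangPatterns #-}

-- With a bang pattern, the accumulator is forced on every call:
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

-- which is equivalent to forcing it by hand with seq (or a case):
sumSeq :: [Int] -> Int
sumSeq = go 0
  where
    go acc xs = acc `seq` case xs of
      []     -> acc
      y : ys -> go (acc + y) ys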

Now, I'm no compiler expert (obviously, I suspect), and maybe I've
misunderstood the role of the strictness analysis step. But being able
to easily make things really strict seems pretty important, and there
seem to be a lot of subtleties to using seq that make it difficult to
tune for performance.

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell-Cafe Digest, Vol 39, Issue 6

2006-11-02 Thread Chad Scherrer

Lemmih wrote:

Using 'seq' is generally a bad idea. It can worsen the performance if
not used carefully and GHCs strictness analyser is usually good
enough.


Is GHC.Conc.pseq any better? Usually the whole point of making things
more strict is to optimize performance for pieces you know will be
evaluated anyway. It's frustrating if there's not a consistent way to
do this that works well.

Lately, I've been using lots of strictness annotations and bang
patterns - are there non-obvious places this could slow things down?

Would it be possible for the type system to distinguish at compile
time whether something would need to be evaluated, and optimize away
redundant `seq`s? Maybe this is what the strictness analyzer does
already.

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monadic performance

2006-10-30 Thread Chad Scherrer

On 10/28/06, Tomasz Zielonka [EMAIL PROTECTED] wrote:

On Fri, Oct 27, 2006 at 06:28:58AM -0700, Chad Scherrer wrote:
 Should I expect a monadic version to take a performance hit? What if I
 use some SPECIALIZE pragmas or somesuch? Is it more efficient to write
 one from scratch, or do specific type annotations give me the same
 thing anyway?


That's good to hear, thanks!

At this point my biggest concern is whether the strict parts and lazy
parts will play nice. Everything about my data structure is strict,
and thinking about how a Writer monad would interact with that just
makes me confused, so far.

-Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] monadic performance

2006-10-27 Thread Chad Scherrer

Hello,

I've written some code that does a foldl (or scanl, depending on my
mood) kind of thing, and builds a huge tree structure as it goes
along. I've been careful to make inserts as strict (and eager) as
possible, since I know all the pieces will be evaluated eventually
anyway. Now I'd like to be able to write various output as I traverse
the structure, and I was thinking about using the WriterT monad
transformer. For this, I could just make the base monad Identity, or I
think it could be nice (from a code elegance perspective) to use State
and rewrite the insert code to be State-ful. Sometimes when I have
used State before, I have been bitten by laziness. Should I expect a
monadic version to take a performance hit? What if I use some
SPECIALIZE pragmas or somesuch? Is it more efficient to write one from
scratch, or do specific type annotations give me the same thing
anyway?

Thanks,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] deepSeq vs rnf

2006-10-22 Thread Chad Scherrer

Hi,

I had posted this question a while back, but I think it was in the
middle of another discussion, and I never did get a reply. Do we
really need both Control.Parallel.Strategies.rnf and deepSeq? Should
we not always have

x `deepSeq` y == rnf x `seq` y
?

Maybe there's a distinction I'm missing, but it seems to me they're
basically the same.

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] deepSeq vs rnf

2006-10-22 Thread Chad Scherrer

Interesting, I hadn't thought of the SYB approach. I still need to get
through those papers. Actually, I wonder if this idea would help with
something else I was looking into. It seems like it might occasionally
be useful to have a monad that is the identity, except that it forces
evaluation as it goes. Something like:

instance Monad Strict where
   return = Strict
    Strict x >>= f = rnf x `seq` f x

The problem is, this won't typecheck as-is, since not everything is an
instance of class NFData. I had been thinking of making a default
instance, something like

instance NFData a where
   rnf = id

and then using overlapping instances. But maybe boilerplate-scrapping
would make this cleaner? I'm still not sure what it can and can't do.

-Chad


I agree, they are the same. The Strategies library also gives much
more general operations for working with strictness and
parallelisation. That library seems to need more love, I think it's a
great idea, but it doesn't really get noticed all that much. The
Hierarchical libraries documentation for it is a little lacking -- it
doesn't even provide a reference or link to the paper, and many of the
combinators, as well as the general idea of how to use it are
undocumented from there. It also spuriously contains an Assoc
datatype, which if I recall correctly, was an example from the paper,
but doesn't really belong in the library as far as I can tell. It
would also be really nice to see the list of instances for the NFData
class expanded to include other datatypes in the libraries, possibly
also with compiler support for deriving, since it's mostly
boilerplate.

Speaking of boilerplate and the scrapping thereof, Data.Generics could
theoretically also be used to write a relatively generic rnf/deepSeq,
but in my attempts, it seems to be much much slower than using a
specific normal form class. Here's my code from quite a while back. As
I recall, it's semantically correct, but ran about an order of
magnitude slower. There might be a much better way to do it, I don't
really know Data.Generics all that well.

rnf :: (Data a) => a -> ()
rnf x = everything (\x y -> x `seq` y) (\x -> x `seq` ()) x

deepSeq x y = rnf x `seq` y

f $!! x = rnf x `seq` f x

 - Cale

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cumulative sum

2006-10-02 Thread Chad Scherrer

Tamas,

try
scanl (+) 0
for the cumulative sum


From there the zipWith idea you mentioned seems like the way to go.
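
A quick GHCi check, plus the part-2 version (note that scanl (+) 0 keeps the leading 0, while scanl1 (+) gives exactly the shape you wrote out):

Prelude> scanl (+) 0 [1,2,3]
[0,1,3,6]
Prelude> scanl1 (+) [1,2,3]
[1,3,6]
Prelude> scanl1 (+) (zipWith (*) [0.5,0.5,0.5] [1,2,3])
[0.5,1.5,3.0]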


-Chad


Hi,

I have two lists, p and lambda (both are finite).  I would like to
calculate

1) the cumulative sum of lambda, ie if

lambda = [lambda1,lambda2,lambda3,...]

then

csum lambda = [lambda1,lambda1+lambda2,lambda1+lambda2+lambda3,...]

2) the cumulative sum of p*lambda (multiplication elementwise)

Once I know how to do the first, I know how to do the second I guess
(doing the multiplication using zipWith to get the p*lambda list, but
I would be interested in any other suggestions).  Currently I take and
sum, but then I am calculating the same sums many times.

Thanks,

Tamas

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] DiffTime in Data.Time

2006-10-02 Thread Chad Scherrer

I'm trying to use Data.Time, and I'm totally confused. DiffTime is
abstract, and I don't see anything that maps into it. How do I
construct one? I would like to then use the result to create a value
of type UTCTime, but it seems (currently) like this might be easier.

Thanks,

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: DiffTime in Data.Time

2006-10-02 Thread Chad Scherrer

Ok, that's much simpler than I was making it. fromIntegral or
fromRational does the trick. Obvious in hindsight, I guess. Thanks!
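
Spelled out, the construction might look like this (a small sketch; DiffTime's Num instance counts seconds):

import Data.Time (DiffTime, UTCTime (..), fromGregorian)

noon :: DiffTime
noon = fromIntegral (12 * 60 * 60 :: Int)   -- seconds since midnight

t :: UTCTime
t = UTCTime (fromGregorian 2006 10 2) noon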

-Chad

On 10/2/06, Ashley Yakeley [EMAIL PROTECTED] wrote:

Chad Scherrer wrote:
 I'm trying to use Data.Time, and I'm totally confused. DiffTime is
 abstract, and I don't see anything that maps into it. How do I
 construct one? I would like to then use the result to create a value
 of type UTCTime, but it seems (currently) like this might be easier.

It's an instance of Num etc. (as seconds).

--
Ashley Yakeley
Seattle WA




--
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] smallest double eps

2006-09-30 Thread Chad Scherrer


Hang on, hang on, now I'm getting confused.

First you asked for the smallest (positive) x such that
1+x /= x
which is around x=4.5e15.
Then Joachim wondered if you wanted
1+x /= 1
which is around x=2.2e-16.
But now you claim to be looking for the smallest positive number that
a Double can represent.  Which is a totally different beast.  The
smallest possible Double depends on if you want to accept
denormalized numbers or not.  If you don't, then it's about x=4.5e-308.

Now what is the number you are looking for?

-- Lennart



This is the point I was confused about also. Joachim seemed to be
correcting what might have been a misstatement of the problem.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] smallest double eps

2006-09-29 Thread Chad Scherrer

Tamas,

You might want to read Joachim's post more carefully - he's trying to
help you, and I think he makes a good point.

-Chad


 On Friday, 2006-09-29, at 19:30 -0400, Tamas K Papp wrote:
  the smallest positive floating point number x such that 1+x /= x?
 That would be the smallest positive number, wouldn't it?

 Do you mean the smallest positive number x with 1+x /= 1?

Hi Joachim,

Specifically, I would be happy with the smallest Double that makes the
statement true.  I need it as a convergence bound for an iterative
algorithm.  Anyhow, thanks for the clarification, but I would be
happier with an answer.

Tamas


--

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


End of Haskell-Cafe Digest, Vol 37, Issue 92





--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Expressing seq

2006-09-27 Thread Chad Scherrer

I was reading on p. 29 of "A History of Haskell" (a great read, by the
way) about the controversy of adding seq to the language. But other
than for efficiency reasons, is there really any new primitive that
needs to be added to support this?

As long as the compiler doesn't optimize it away, why not just do
something like this (in ghci)?

Prelude> let sq x y = if x == x then y else y
Prelude> 1 `sq` 2
2
Prelude> (length [1..]) `sq` 2
Interrupted.

There must be a subtlety I'm missing, right?
--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Expressing seq

2006-09-27 Thread Chad Scherrer


 There must be a subtlety I'm missing, right?

What if the types are not instances of Eq?

Jason



Thanks, I figured it was something simple. Now I just need to convince
myself there's no way around that. Is there a proof around somewhere?
--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] REALLY simple STRef examples

2006-08-05 Thread Chad Scherrer

Thanks, Simon. I've begun putting together some text describing very
simple STRef examples, as Bulat suggested earlier. I think I know how
to make it work, but I'll still need to work on typing subtleties. I'm
headed to bed now, but I'll go through this in detail when I get a
chance to try to get my head around it.

-Chad

On 8/4/06, Simon Peyton-Jones [EMAIL PROTECTED] wrote:

Chad

| x = runST $ return (1::Int)

This code looks simple, but it isn't. Here are the types:

   runST :: forall a.  (forall s. ST s a) -> a

   ($) :: forall b c. (b->c) -> b -> c

   return 1 :: forall s. ST s Int

To typecheck, we must instantiate
   b   with (forall s. ST s Int)
   c   with Int

In H-M that's impossible, because you can't instantiate a type variable
(b) with a polytype (forall s. ST s Int).  GHC will now let you do that
(a rather recent change), but in this case it's hard to figure out that
it should do so.  Equally plausible is to instantiate b with (ST s'
Int), where s' is a unification variable.

One way to make this work is to look at $'s first argument first. Then
it's clear how to instantiate b.  Then look at the second argument.  But
if you look at the second argument first, matters are much less clear.
GHC uses an algorithm that is insensitive to argument order, so it can't
take advantage of the left-to-right bias of this example.

It's unfortunate that such a simple-looking piece of code actually
embodies a rather tricky typing problem!  Of course there is no problem
if you don't use the higher order function $.  Use parens instead

   x = runST (return 1)

Simon


| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
| Chad Scherrer
| Sent: 19 July 2006 23:02
| To: haskell-cafe@haskell.org
| Subject: [Haskell-cafe] REALLY simple STRef examples
|
| I've looked around at the various STRef examples out there, but still
| nothing I write myself using this will work. I'm trying to figure out
| how the s is escaping in really simple examples like
|
| x = runST $ return 1
|
| y = runST $ do {r <- newSTRef 1; readSTRef r}
|
| Neither of these works in ghci - they both say
|
| interactive:1:0:
| Inferred type is less polymorphic than expected
|   Quantified type variable `s' escapes
| Expected type: ST s a -> b
| Inferred type: (forall s1. ST s1 a) -> a
| In the first argument of `($)', namely `runST'
| In the definition of `it':
|...
|
| I thought maybe I needed to replace 1 with (1 :: Int) so the state
| representation didn't force the type, but it still gives the same
| result.
|
| Can someone point me to the simplest possible runST example that
| actually works? Thanks!




--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[4]: [Haskell-cafe] REALLY simple STRef examples

2006-07-21 Thread Chad Scherrer

The IO monad hasn't given me too much trouble, but I want to be sure
to structure things the way they should be. If I get everything
running using IO first and then have type-checking problems with ST,
it will be tempting to just slap on an unsafePerformIO and call it
good. Sure, it's really doing the same thing anyway, but it just comes
out looking like a hack.

On 7/20/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello Chad,

Friday, July 21, 2006, 12:26:58 AM, you wrote:

 Ok, I see now why the return is necessary.

btw, it may be helpful to read the "IO inside" material. ST monad is not
very different from IO monad - it is only limited to operations on STRef
and STArray, so that it can't have side-effects visible outside of
runST statement used to run ST computation


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] REALLY simple STRef examples

2006-07-20 Thread Chad Scherrer

I've looked around at the various STRef examples out there, but still
nothing I write myself using this will work. I'm trying to figure out
how the s is escaping in really simple examples like

x = runST $ return 1

y = runST $ do {r <- newSTRef 1; readSTRef r}

Neither of these works in ghci - they both say

interactive:1:0:
   Inferred type is less polymorphic than expected
 Quantified type variable `s' escapes
  Expected type: ST s a -> b
  Inferred type: (forall s1. ST s1 a) -> a
   In the first argument of `($)', namely `runST'
   In the definition of `it':
  ...

I thought maybe I needed to replace 1 with (1 :: Int) so the state
representation didn't force the type, but it still gives the same
result.

Can someone point me to the simplest possible runST example that
actually works? Thanks!

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

