On 11/2/07, Petr Hoffmann [EMAIL PROTECTED] wrote:
import System.Cmd
main = do
System.Cmd.system "echo hello > output.txt" -- use the external
-- application to create an output file
o1 <- readFile "output.txt"
System.Cmd.system "echo bye > output.txt" -- the second call to
Sebastian Sylvan [EMAIL PROTECTED] writes:
[LOC vs gz as a program complexity metric]
Obviously no simple measure is going to satisfy everyone, but I think the
gzip measure is more even handed across a range of languages.
It probably more closely approximates the amount of mental effort [..]
On 11/2/07, Petr Hoffmann [EMAIL PROTECTED] wrote:
I'm solving the following problem - I need to use an external
application - give it the input data and receive its output.
Check out: The HSH library:
HSH is designed to let you mix and match shell expressions with
Haskell programs. With HSH,
On Fri, 2 Nov 2007, Petr Hoffmann wrote:
Hi,
I'm solving the following problem - I need to use an external
application - give it the input data and receive its output.
However, when multiple calls are made, the results are not
as expected. The simplified version of the problem is given
On 02/11/2007, Bulat Ziganshin [EMAIL PROTECTED] wrote:
Hello Sebastian,
Thursday, November 1, 2007, 9:58:45 PM, you wrote:
the ideal. Token count would be good, but then we'd need a parser for
each language, which is quite a bit of work to do...
i think that wc (word count) would be
On Nov 2, 2007, at 6:35 , apfelmus wrote:
during function evaluation. Then, we'd need a purity lemma that
states that any function not involving the type *World as in- and
output is indeed pure, which may be a bit tricky to prove in the
presence of higher-order functions and polymorphism.
On 11/2/07, Petr Hoffmann [EMAIL PROTECTED] wrote:
import System.Cmd
main = do
System.Cmd.system "echo hello > output.txt" -- use the external
-- application to create an output file
o1 <- readFile "output.txt"
System.Cmd.system "echo bye > output.txt" -- the second call to
On Fri, 2007-11-02 at 08:35 -0400, Brandon S. Allbery KF8NH wrote:
On Nov 2, 2007, at 6:35 , apfelmus wrote:
during function evaluation. Then, we'd need a purity lemma that
states that any function not involving the type *World as in- and
output is indeed pure, which may be a bit
Brandon S. Allbery KF8NH wrote:
apfelmus wrote:
during function evaluation. Then, we'd need a purity lemma that
states that any function not involving the type *World as in- and
output is indeed pure, which may be a bit tricky to prove in the
presence of higher-order functions and
Can you please give me some hint to solve this problem?
I'm a beginning haskell developer and I'm still a bit confused
by the IO monad.
Other people have explained to the OP why unsafe lazy IO is breaking his
code.
Yet another piece of evidence, in my opinion, that
unsafe-lazy-by-default is
On Nov 2, 2007, at 11:51 , Jonathan Cast wrote:
I will grant that hiding *World / RealWorld# inside IO is cleaner
from a practical standpoint, though. Just not from a semantic one.
On the contrary. GHC's IO newtype isn't an implementation of IO in
Haskell at all. It's an implementation in
Andrew Butterfield [EMAIL PROTECTED] writes:
I'm puzzled - when I run this on GHCi (v6.4, Windows XP) I get the
following outcome:
The process cannot access the file because it is being used by another
process.
Isn't this a difference between Windows and
Hello Petr,
Friday, November 2, 2007, 11:17:23 AM, you wrote:
o1 <- readFile "output.txt"
add return $! length o1 here to evaluate whole list
System.Cmd.system "echo bye > output.txt" -- the second call to
--
Best regards,
Bulat
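Bulat's fix can be sketched as a complete program. This is a sketch of the thread's simplified example (the file name and echo commands come from the original post; the `$!` line is the only addition):

```haskell
import System.Cmd (system)

main :: IO ()
main = do
  _ <- system "echo hello > output.txt"
  o1 <- readFile "output.txt"
  -- Force the whole lazy string now, so the file handle is closed
  -- before the next external command rewrites the file.
  _ <- return $! length o1
  _ <- system "echo bye > output.txt"
  o2 <- readFile "output.txt"
  putStr o1  -- contents from the first echo
  putStr o2  -- contents from the second echo
```

Without the forcing line, `o1` is still an unevaluated thunk holding the file open when the second `system` call runs, which is exactly the failure the original post describes.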
Petr Hoffmann writes:
I'm solving the following problem - I need to use an external
application - give it the input data and receive its output.
However, when multiple calls are made, the results are not
as expected. The simplified version of the problem is given
below:
Hello Sebastian,
Thursday, November 1, 2007, 9:58:45 PM, you wrote:
the ideal. Token count would be good, but then we'd need a parser for
each language, which is quite a bit of work to do...
i think that wc (word count) would be good enough approximation
--
Best regards,
Bulat
Hi,
I'm solving the following problem - I need to use an external
application - give it the input data and receive its output.
However, when multiple calls are made, the results are not
as expected. The simplified version of the problem is given
below:
import System.Cmd
main = do
On 11/1/07, PR Stanley [EMAIL PROTECTED] wrote:
If anyone knows anything about the rules of proof by deduction and
quantifiers I'd be grateful for some assistance.
I'm currently doing a course on Type Theory which includes proving by
natural deduction. See, among other things, the course notes
Massimiliano,
I had to update your code for it to compile (removed sequence from
testpdf'. However, I don't see any significant difference in the
memory profile of either testpdf or testpdf'.
Not sure how you are watching the memory usage, but if you didn't know
the option +RTS -sstderr will
Hello,
Just a bit of minor academic nitpicking...
Yeah. After all, the uniqueness constraint has a theory with an
excellent pedigree (IIUC linear logic, whose proof theory Clean uses
here, goes back at least to the 60s, and Wadler proposed linear types
for IO before anybody had heard of
{-# OPTIONS_GHC -fglasgow-exts -fno-monomorphism-restriction #-}
-- Many people ask if GHC will evaluate toplevel constants at compile
-- time, you know, since Haskell is pure it'd be great if those
-- computations could be done once and not use up cycles during
-- runtime. Not an entirely bad
On 11/2/07, Stuart Cook [EMAIL PROTECTED] wrote:
The solution would be to use a version of readFile that works in a
stricter way, by reading the file when it's told to, but I don't have
an implementation handy.
I guess this does the job:
readFile' fp = do
  contents <- readFile fp
  let ret = length contents
  ret `seq` return contents
On Fri, 2 Nov 2007, Felipe Lessa wrote:
On 11/2/07, Stuart Cook [EMAIL PROTECTED] wrote:
The solution would be to use a version of readFile that works in a
stricter way, by reading the file when it's told to, but I don't have
an implementation handy.
I guess this does the job:
( these two lines are just to fool the gmane post algorithm which
complains for top-posting)
Hi,
i'm learning Haskell and trying to use the HPDF 1.2 library I've come
across some large memory consumption for which I do not understand
the origin. I've tried heap profiling but without
Paul Hudak wrote:
loop, loop' :: *World -> ((),*World)
loop w = loop w
loop' w = let (_,w') = print x w in loop' w'
both have denotation _|_ but are clearly different in terms of side effects.
One can certainly use an operational semantics such as bisimulation,
but you don't have
On Fri, 2 Nov 2007 05:11:53 -0500
Nicholas Messenger [EMAIL PROTECTED] wrote:
-- Many people ask if GHC will evaluate toplevel constants at compile
-- time, you know, since Haskell is pure it'd be great if those
-- computations could be done once and not use up cycles during
-- runtime. Not
On 11/2/07, Andrew Butterfield [EMAIL PROTECTED] wrote:
I'm puzzled - when I run this on GHCi (v6.4, Windows XP) I get the
following outcome
*Main> main
The process cannot access the file because it is being used by another
process.
hello
*Main>
Under GHCi 6.6 I get this:
*Main> main
bye
On Fri, 2007-11-02 at 11:56 -0400, Brandon S. Allbery KF8NH wrote:
On Nov 2, 2007, at 11:51 , Jonathan Cast wrote:
I will grant that hiding *World / RealWorld# inside IO is cleaner
from a practical standpoint, though. Just not from a semantic one.
On the contrary. GHC's IO newtype
lemming:
On Fri, 2 Nov 2007, Felipe Lessa wrote:
On 11/2/07, Stuart Cook [EMAIL PROTECTED] wrote:
The solution would be to use a version of readFile that works in a
stricter way, by reading the file when it's told to, but I don't have
an implementation handy.
I guess this does
Ketil Malde wrote:
[LOC vs gz as a program complexity metric]
Do either of those make sense as a program /complexity/ metric?
Seems to me that's reading a lot more into those measurements than we
should.
It's slightly interesting that, while we're happily opining about LOCs
and gz, no one
Cale Gibbard [EMAIL PROTECTED] writes:
On 21/10/2007, Jon Fairbairn [EMAIL PROTECTED] wrote:
No, they (or at least links to them) typically are that bad!
Mind you, as far as fragment identification is concerned, so
are a lot of html pages. But even if the links do have
fragment ids, pdfs
On Fri, 2007-11-02 at 15:43 -0400, Jeff Polakow wrote:
Hello,
Just a bit of minor academic nitpicking...
Yeah. After all, the uniqueness constraint has a theory with
an
excellent pedigree (IIUC linear logic, whose proof theory Clean
uses
here, goes back at least to
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote:
Ketil Malde wrote:
[LOC vs gz as a program complexity metric]
Do either of those make sense as a program /complexity/ metric?
You're right! We should be using Kolmogorov complexity instead!
I'll go write a program to calculate it for the
On Friday 02 November 2007 19:03, Isaac Gouy wrote:
It's slightly interesting that, while we're happily opining about LOCs
and gz, no one has even tried to show that switching from LOCs to gz
made a big difference in those program bulk rankings, or even
provided a specific example that they
Somewhat related to the discussions about Haskell's performance...
String. ByteString. Do we really need both? Can one replace the other?
Why is one faster? Can't we make *all* lists this fast? [insert further
variations here]
Thoughts?
Hello,
Just to continue the academic nitpicking.. :-)
Linear logic/typing does not quite capture uniqueness types since a
term
with a unique type can always be copied to become non-unique, but a
linear
type cannot become unrestricted.
Actually, that isn't quite accurate. In
Hello,
I think you mean
!U -o U
is a theorem. The converse is not provable.
Oops... I should read more carefully before hitting send.
This is of course completely wrong.
Sorry for the noise,
Jeff
Hello,
I think you mean
!U -o U
is a theorem. The converse is not provable.
Oops... I should read more carefully before hitting send.
This is of course completely wrong.
This is embarrassing... I was right the first time.
!U -o U
is a theorem in linear logic.
--- Jon Harrop [EMAIL PROTECTED] wrote:
On Friday 02 November 2007 19:03, Isaac Gouy wrote:
It's slightly interesting that, while we're happily opining about
LOCs
and gz, no one has even tried to show that switching from LOCs to
gz
made a big difference in those program bulk rankings,
Anybody know of an ARM back end for any of the Haskell compilers?
Thanks,
Greg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
type Pkg = (Pkgtype,Address,Payload)
type Table = [(Address,Port)]
update_table1 :: Table -> Pkg -> Table
update_table1 [] (t,d,y) = [(t,d,y)]
The problem is that your function's type signature says it's returning
a Table, which is a [(Address,Port)], but it's actually returning a
On 11/2/07, karle [EMAIL PROTECTED] wrote:
type Address = Int
data Port = C | D deriving(Eq,Show)
data Payload = UP[Char] | RTDP(Address,Port) deriving(Eq,Show)
data Pkgtype = RTD | U deriving(Eq,Show)
type Pkg = (Pkgtype,Address,Payload)
type Table = [(Address,Port)]
garious:
Anybody know of an ARM back end for any of the Haskell compilers?
nhc98 compiles to ARM,
http://www.haskell.org/nhc98/
however it's lightly maintained, and many hackage libraries don't work
with nhc. Then there's GHC, which with some effort can be made to work,
Karle,
The expression (t,d,y) must have type Pkg, by your type annotation for
update_table1, so [ (t,d,y) ] has type [Pkg]. Also by your type
annotation, the result of update_table1 should be of type Table. Is
the type [Pkg] compatible with type Table? In other words, is the type
[
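The mismatch becomes concrete if you write a version whose base case actually builds Table entries. The routing semantics below are an assumption (the thread never shows the intended behaviour); the point is only that the function must produce (Address, Port) pairs, not return the Pkg itself:

```haskell
type Address = Int
data Port = C | D deriving (Eq, Show)
data Payload = UP [Char] | RTDP (Address, Port) deriving (Eq, Show)
data Pkgtype = RTD | U deriving (Eq, Show)
type Pkg = (Pkgtype, Address, Payload)
type Table = [(Address, Port)]

-- Hypothetical fix: a routing packet contributes the (Address, Port)
-- pair from its payload to the table; anything else leaves it unchanged.
update_table1 :: Table -> Pkg -> Table
update_table1 tbl (RTD, _, RTDP (a, p)) = (a, p) : tbl
update_table1 tbl _                     = tbl
```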
On 11/2/07, Greg Fitzgerald [EMAIL PROTECTED] wrote:
Anybody know of an ARM back end for any of the Haskell compilers?
This version of hugs worked on my (ARM based) NSLU2:
http://ipkgfind.nslu2-linux.org/details.php?package=hugsofficial=format=
-
Dan
On 03/11/2007, Greg Fitzgerald [EMAIL PROTECTED] wrote:
Anybody know of an ARM back end for any of the Haskell compilers?
If there's an arm-eabi port somewhere, I might be able to get Haskell
code running on the Nintendo DS...
--
- Jeremy
Tim Chevalier wrote:
On 11/2/07, Andrew Coppin [EMAIL PROTECTED] wrote:
Somewhat related to the discussions about Haskell's performance...
String. ByteString. Do we really need both? Can one replace the other?
You can't get rid of String because a String is just a [Char].
Requiring
On 11/2/07, Andrew Coppin [EMAIL PROTECTED] wrote:
1. Why do I have to type ByteString in my code? Why isn't the compiler
automatically performing this optimisation for me? (I.e., is there some
observable property that is changed? Currently the answer is yes: the
ByteString interface only
Tim Chevalier wrote:
I don't think there's a deep theoretical reason why this doesn't
exist, but I also don't think it's necessarily *just* a matter of no
one having had time yet. As always, there are trade-offs involved, and
people try to avoid introducing *too* many special cases into the
Andrew Coppin wrote:
1. Why do I have to type ByteString in my code? Why isn't the compiler
automatically performing this optimisation for me?
One reason is that ByteString is stricter than String. Even lazy
ByteString operates on 64KB chunks. You can see how this might lead to
problems
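The representational difference shows up as soon as you process a file. A minimal sketch of the two styles (assumes the bytestring package; the strictness contrast is the point, not the file name, which is just an example):

```haskell
import qualified Data.ByteString.Char8 as B

-- String: a lazy linked list of Char, with per-cons-cell overhead.
countString :: FilePath -> IO Int
countString fp = do
  s <- readFile fp
  return (length (lines s))

-- ByteString: a packed byte buffer; no list cells are built to scan it.
countByteString :: FilePath -> IO Int
countByteString fp = do
  b <- B.readFile fp
  return (length (B.lines b))
```

Both compute the same answer; the ByteString version touches far less memory per character, which is the performance gap the thread is discussing.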
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote:
How strange that you've snipped out the source code shape comment that
would undermine what you say - obviously LOC doesn't tell you anything
about how much stuff is on each line, so it doesn't tell you about the
amount of code that was written
On Nov 2, 2007, at 17:35 , Andrew Coppin wrote:
These are the things I'm thinking about. Is there some deep
theoretical reason why things are the way they are? Or is it merely
that nobody has yet had time to make something better? ByteString
solves the problem of text strings (and raw
On 11/2/07, Robin Green [EMAIL PROTECTED] wrote:
snip ...since
there is a Template Haskell class for the concept of translating actual
values into TH representations of those values called Lift... snip
There's a WHAT?!
*checks docs*
You're telling me all that horrendous pain in implementing
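For reference, the Lift class mentioned above lets a splice embed a value computed while the module compiles. A minimal sketch (template-haskell package; `answer` is a made-up example name):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH.Syntax (lift)

-- The splice runs at compile time; 'lift' converts the already-computed
-- Int into a TH expression, so no arithmetic happens at runtime.
answer :: Int
answer = $(lift (product [1 .. 10] :: Int))

main :: IO ()
main = print answer  -- 3628800
```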
type Address = Int
data Port = C | D deriving(Eq,Show)
data Payload = UP[Char] | RTDP(Address,Port) deriving(Eq,Show)
data Pkgtype = RTD | U deriving(Eq,Show)
type Pkg = (Pkgtype,Address,Payload)
type Table = Signal (Address,Port)
system inA inB = (outC,outD)
where
route =
On Fri, 2007-11-02 at 21:35 +, Andrew Coppin wrote:
Well OK, maybe I was a little vague. Let me be a bit more specific...
If you do text processing using ByteString rather than String, you get
dramatically better performance in time and space. For me, this raises a
number of
Hi,
I understand that many people like using
layout in their code, and 99% of all
Haskell examples use some kind of layout
rule. However, sometimes, I would like
not to use layout, so I can find errors
easier (and maybe convert it to layout for
presentation after all problems are solved).
So, I
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
-snip-
It still tells you how much content you can see on a given amount of
vertical space.
And why would we care about that? :-)
I think the point, however, is that while LOC is not perfect, gzip is
worse.
How do you know?
Best case
igouy2:
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
-snip-
It still tells you how much content you can see on a given amount of
vertical space.
And why would we care about that? :-)
I think the point, however, is that while LOC is not perfect, gzip is
worse.
How do you
briqueabraque:
Hi,
I understand that many people like using
layout in their code, and 99% of all
Haskell examples use some kind of layout
rule. However, sometimes, I would like
not to use layout, so I can find errors
easier (and maybe convert it to layout for
presentation after all
On Friday 02 November 2007 20:29, Isaac Gouy wrote:
...obviously LOC doesn't tell you anything
about how much stuff is on each line, so it doesn't tell you about the
amount of code that was written or the amount of code the developer can
see whilst reading code.
Code is almost ubiquitously
while LOC is not perfect, gzip is worse.
the gzip change didn't significantly alter the rankings
Currently the gzip ratio of C++ to Python is 2.0, which, at a glance,
wouldn't sell me on a "less code" argument. Although the rank stayed the
same, did the change reduce the magnitude of the victory?
On Friday 02 November 2007 23:53, Isaac Gouy wrote:
Best case you'll end up concluding that the added complexity had
no adverse effect on the results.
Best case would be seeing that the results were corrected against bias
in favour of long-lines, and ranked programs in a way that
--- Greg Fitzgerald [EMAIL PROTECTED] wrote:
while LOC is not perfect, gzip is worse.
the gzip change didn't significantly alter the rankings
Currently the gzip ratio of C++ to Python is 2.0, which at a glance,
wouldn't sell me on a less code argument.
a) you're looking at an average,
On 11/2/07, Sterling Clover [EMAIL PROTECTED] wrote:
As I understand it, the question is what you want to measure for.
gzip is actually pretty good at, precisely because it removes
boilerplate, reducing programs to something approximating their
complexity. So a higher gzipped size means, at
Hi,
(...)
So, I wonder: would it be possible to implement
a feature in, say, ghc, that would take code
from input and output the same code with layout
replaced by delimiting characters? (...)
ghc -ddump-parsed does this, iirc.
So does the Language.Haskell library. See this
wiki
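For the record, the translation the layout rule performs can also be written out by hand; these two definitions parse identically (`greet` is just an example name):

```haskell
-- Layout version: indentation delimits the do-block.
greet :: IO ()
greet = do
  putStrLn "hello"
  putStrLn "world"

-- Explicit version: braces and semicolons spell out the
-- delimiters the layout algorithm would insert.
greet' :: IO ()
greet' = do { putStrLn "hello"
            ; putStrLn "world"
            }
```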