In C++ it is perfectly normal to have overloaded functions like
f : Int -> Int -> Int
f : Int -> Char -> Int
Something that may not be obvious about Haskell is that
Haskell does NOT have overloaded functions/operators at all.
More precisely, for any identifier and any point in a
Haskell
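To make the contrast concrete, here is a minimal sketch (the class name and instance bodies are mine, purely illustrative) of how those two C++ overloads would be expressed in Haskell with a type class, dispatched on the type of the second argument:

```haskell
-- One class method instead of two overloads; the instance chosen
-- depends on the type of the second argument.
class SecondArg b where
  f :: Int -> b -> Int

instance SecondArg Int where
  f x y = x + y            -- hypothetical body

instance SecondArg Char where
  f x c = x + fromEnum c   -- hypothetical body
```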
On 13.02.2013 21:41, Brandon Allbery wrote:
The native solution is a parser like parsec/attoparsec.
Aleksey Khudyakov alexey.sklad...@gmail.com replied
Regexps only have this problem if they are compiled from strings. Nothing
prevents building them using combinators. regex-applicative[1]
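For readers who have not seen combinator-built regexps, here is a tiny self-contained sketch of the idea (all names are mine, not regex-applicative's API): a matcher consumes a prefix of the input and returns every (result, remainder) pair.

```haskell
import Data.Char (isDigit)

-- A "regexp" is a function from input to all possible parses.
newtype RE a = RE { runRE :: String -> [(a, String)] }

-- Match one character satisfying a predicate.
psym :: (Char -> Bool) -> RE Char
psym p = RE $ \s -> case s of
  (c : cs) | p c -> [(c, cs)]
  _              -> []

-- Zero or more repetitions.
many0 :: RE a -> RE [a]
many0 r = RE go
  where
    go s = [ (x : xs, s'') | (x, s')   <- runRE r s
                           , (xs, s'') <- go s' ]
           ++ [([], s)]

-- One or more repetitions.
many1 :: RE a -> RE [a]
many1 r = RE $ \s -> [ (x : xs, s'') | (x, s')   <- runRE r s
                                     , (xs, s'') <- runRE (many0 r) s' ]

-- Accept only if some parse consumes the whole input.
match :: RE a -> String -> Maybe a
match r s = case [ a | (a, "") <- runRE r s ] of
  (a : _) -> Just a
  []      -> Nothing
```

Because the matcher is built from ordinary functions, the type checker rules out the malformed patterns that a string-compiled regexp would only reject at run time.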
As a software developer, who typically inherits code to work on rather
than simply writing new, I see a potential of aggressive compiler
optimizations causing trouble.
I would be grateful if someone could explain the
difference between aggressive optimisation and
obviously sensible compilation
The difference is what's called dynamic programming (an utterly
non-intuitive and un-insightful name).
It was meant to be. The name was chosen to be truthful while
not revealing too much to a US Secretary of Defense of whom
Bellman wrote:
His face would suffuse, he would turn red, and he
By the way, not all databases supported by Persistent have the ability to
represent NUMERIC with perfect precision. I'm fairly certain that SQLite
will just cast to 8-byte reals, though it's possible that it will keep the
data as strings in some circumstances.
According to the documentation,
I've perhaps been trying everyone's patience with my noobish CT
questions, but if you'll bear with me a little longer: I happened to
notice that there is in fact a Category class in Haskell base, in
Control.Category:
quote:
class Category cat where
A class for categories. id and
Any search tree implementation will do add and purge in O(log n) time.
Add's obvious, but could you explain to me about purge?
All of the explanations of search trees I'm familiar with,
if they bother to explain deletion at all, talk about how
to repair the balance of a tree after deleting
An ordering does not typically induce a computable enumeration. For
example, there are infinitely many rationals between any pair of
rationals.
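The density claim is a one-liner in Haskell (the function name is mine): the midpoint of two rationals is a rational strictly between them, so any pair bounds infinitely many rationals, and no computable enumeration can follow the order.

```haskell
import Data.Ratio ((%))

-- The midpoint of two distinct rationals lies strictly between them.
between :: Rational -> Rational -> Rational
between a b = (a + b) / 2

-- iterate (between 1) 2 descends forever into the interval (1,2).
```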
I didn't say it was odd that Ords weren't Enums,
I said that it's odd that Enums aren't Ords.
It makes little or no sense to treat rationals
There was and is no claim that method 2 is much harder
to implement in C or C++. In fact both methods *were* implemented
easily in C.
OK, got that now. So Haskell doesn't have a *big* advantage over C w/r
to the ease of implementation of both algorithms?
In the case of these specific
Why is toRational a method of Real? I thought that real numbers need not
be rational, such as the square root of two. Wouldn't it make more sense
to have some sort of Rational typeclass with this method?
I think everyone has problems with the Haskell numeric typeclasses.
The answer in this
On 11 Oct 2007, at 1:00 pm, [EMAIL PROTECTED] wrote:
An anonymous called ok writes:
I am not anonymous. That is my login and has been since 1979.
jerzy.karczmarczuk wrote [about R]:
... This is not a functional language.
There is some laziness (which looks a bit like macro-
processing
On 11 Oct 2007, at 4:06 pm, Tom Davies basically asked for
something equivalent to Ada's
type T is new Old_T;
which introduces a *distinct* type T that has all the operations and
literals of Old_T. In functional terms, suppose there is a function
f :: ... Old_T ... Old_T ...
On 10 Oct 2007, at 12:49 pm, [EMAIL PROTECTED] wrote:
No, I am sorry, I know a little bit R. This is not a functional
language.
There is some laziness (which looks a bit like macro-processing),
sure.
There is no macro processing in R (or S).
The manual speaks about promises and about
Let's be clear what we are talking about, because I for one am
getting very confused. Talk of putting pi into Floating as
a class member serves nobody when it already IS there.
From the report:
class (Fractional a) => Floating a where
    pi :: a
    exp, log, sqrt :: a -> a
    (**), logBase :: a -> a -> a
Someone wrote about pi:
| But it is just a numerical constant, no need to put it into a
class, and
nothing to do with the type_classing of related functions. e is not
std. defined, and it doesn't kill people who use exponentials.
But it *isn't* A numerical constant.
It is a *different*
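The point can be checked directly: pi is a class method, so each Floating instance supplies its own value, and the Float and Double versions genuinely differ (names below are mine).

```haskell
-- Two different constants behind one overloaded name.
piF :: Float
piF = pi

piD :: Double
piD = pi
```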
On 11 Oct 2007, at 4:49 am, Dan Piponi wrote:
Maybe this is the wrong point of view, but I think of defaults as
implementations that are meant to be correct, but not necessarily the
best way of doing things, leaving you the option to provide something
better.
The example of tanh in the report
On 11 Oct 2007, at 1:34 pm, Dan Weston wrote:
Actually, [pi] is a constant: piDecimalExpansion :: String.
No, that's another constant.
A translation from piDecimalExpansion :: String to pi :: Floating a
=> a is already well defined via read :: Read a => String -> a
Wrong.
On 9 Oct 2007, at 9:10 am, [EMAIL PROTECTED] wrote:
* Scheme is very different from what we practice (C++, Fortran,
etc., you
know the song...) It may slow down the *adaptation* of students. They
*will need* all that imperative stuff you hate. But, as a first
language,
the FLs condition the
UltraSPARC II, Solaris 2.10, gcc 4.0.4 (gccfss),
Haskell GHC 6.6.1 binary release.
Trying to compile a simple file gives me oodles of errors because
ghc is generating something that makes gcc generate lots of these:
sethi %hi(some register),another register
For people unfamiliar with
On 3 Oct 2007, at 1:42 pm, PR Stanley wrote:
When a function is declared in C the argument variable has an
address somewhere in the memory:
int f ( int x ) {
return x * x;
}
Wrong. On the machines I use, x will be passed in registers and will
never ever have an address in memory. In fact,
I have often found myself wishing for a small extension to the syntax of
Haskell 'data' declarations. It goes like this:
data as usual
= as usual
| ...
| as usual
+++where type tvar = type
type tvar = type
...
On 26 Sep 2007, at 7:05 pm, Johan Tibell wrote:
If UTF-16 is what's used by everyone else (how about Java? Python?) I
think that's a strong reason to use it. I don't know Unicode well
enough to say otherwise.
Java uses 16-bit variables to hold characters.
This is SOLELY for historical reasons,
On 28 Sep 2007, at 10:01 am, Thomas Conway wrote:
data Tree key val
= Leaf key val
| Node BST key val BST
where
type BST = Tree key val
data RelaxedTree key val
= Leaf Bal [(key,val)]
| Node Bal [(key,RelaxedTree key val)]
where
data Bal = Balanced | Unbalanced
On 26 Sep 2007, at 8:32 am, Brian Hulley wrote:
Aha! but this is using section syntax which is yet another
complication. Hypothesis: section syntax would not be needed if the
desugaring order was reversed.
Binary operators have two arguments. That's why sections are needed.
This is one of
[Concerning the fact that fmod(x,y) = -fmod(-x,y)]
I wrote:
Interesting, perhaps. Surprising, no. fmod() is basically there for
the sake of sin(), cos(), and tan() (or any other periodic and
either symmetric or antisymmetric function).
On 25 Sep 2007, at 8:58 pm, Henning Thielemann wrote:
There are a number of interesting issues raised by mbeddoe's
Math.Statistics.
0. Coding issues.
Why use foldr1 (*) instead of product?
covm xs = split' (length xs) cs
  where
    cs = [ cov a b | a <- xs, b <- xs ]
    split' n = unfoldr (\y -> if null y then Nothing
On 25 Sep 2007, at 10:55 am, Thomas Conway wrote:
This old chestnut! It's a common problem in practice. As I recall, the
behaviour of C's % operator allows implementations to yield either
behaviour. I just checked ISO 9899:1999 which defines fmod. It
specifies that the result of fmod(x,y)
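For comparison on the Haskell side: C's fmod takes the sign of the dividend, while Data.Fixed.mod' (a real function in base) takes the sign of the divisor, like integer mod. A small illustration (the binding name is mine):

```haskell
import Data.Fixed (mod')

-- fmod(5.5, 2) and fmod(-5.5, 2) in C give 1.5 and -1.5;
-- mod' instead yields a result with the divisor's sign.
examples :: (Double, Double)
examples = (5.5 `mod'` 2, (-5.5) `mod'` 2)  -- (1.5, 0.5)
```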
I wrote:
Since not all Turing machines halt, and since the halting problem is
undecidable, this means not only that some Haskell programs will make
the type checker loop forever, but that there is no possible meta-
checker that could warn us when that would happen.
On 13 Sep 2007, at 4:27 pm,
In Monad.Reader 8, Conrad Parker shows how to solve the Instant Insanity
puzzle in the Haskell type system. Along the way he demonstrates very
clearly something that was implicit in Mark Jones' Type Classes with
Functional Dependencies paper if you read it very very carefully (which
I hadn't,
On 12 Sep 2007, at 8:08 pm, [EMAIL PROTECTED] wrote:
take 1000 [1..3] still yields [1,2,3]
You can think about take n as: Take as much as possible, but at
most n elements. This behavior has some nice properties as turned
out by others, but there are some pitfalls.
One of the very nice
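The quoted behaviour is easy to see in a sketch (the binding names are mine): take returns at most n elements, all of them if the list is shorter, and it works fine on infinite lists.

```haskell
-- take never fails on a too-short list, and is productive on an
-- infinite one.
short, lazy :: [Int]
short = take 1000 [1 .. 3]  -- no error, just [1,2,3]
lazy  = take 3 [1 ..]       -- terminates despite the infinite input
```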
On 7 Sep 2007, at 11:22 pm, Chaddaï Fouché wrote:
From what I can see of your program, it would greatly benefit from
using Data.ByteString, is there an obvious reason not to use it ?
I am writing a a set of tools to process a legacy programming
language, as I said. Speed is not, in fact, a
On 10 Sep 2007, at 11:49 am, Neil Mitchell wrote:
Buffering, blocks and locks.
Buffering: getChar demands to get a character now, which pretty much
means you can't buffer.
Eh what? getchar() in C demands to get a character now, which is
fully compatible with as much buffering as you want.
I wanted to use the standard name for the function
pair :: (a -> b) -> (a -> c) -> (a -> (b,c))
pair f g x = (f x, g x)
but I can find no such function in the Report or its Libraries.
Is there a recommended name for this?
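For what it's worth, a function with exactly this behaviour exists in base as (&&&) from Control.Arrow, which on plain functions specialises to the definition asked for:

```haskell
import Control.Arrow ((&&&))

-- The asked-for function; on the function Arrow, (f &&& g) x == (f x, g x).
pair :: (a -> b) -> (a -> c) -> a -> (b, c)
pair f g x = (f x, g x)
```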
___
Haskell-Cafe
On 9 Sep 2007, at 10:05 pm, Axel Gerstenberger wrote:
I am used to work with map, zip and zipWith, when working with
lists, however, I could not find such functions for Arrays.
They aren't there for at least two reasons.
(1) They are easy to implement on top of the operations that are
I'm writing a tokeniser for a legacy programming language in Haskell.
I started with the obvious
main = getContents >>= print . tokenise
where tokenise maps its way down a list of characters. This is very
simple, very pleasant, and worked like a charm.
However, the language has an INCLUDE
On 4 Sep 2007, at 6:47 am, Vimal wrote:
In my Paradigms of Programming course, my professor presents this
piece of code:
while E do
S
if F then
break
end
T
end
He then asked us to *prove* that the above programming fragment cannot
be implemented just using if and while
On 5 Sep 2007, at 6:16 pm, Henning Thielemann wrote:
I think it is very sensible to define the generalized function in
terms of the specific one, not vice versa.
The specific point at issue is that I would rather use ++ than
`mplus`. In every case where both are defined, they agree, so
it is
I've been thinking about making a data type an instance of MonadPlus.
From the Haddock documentation at haskell.org, I see that any such
instance should satisfy
mzero `mplus` x = x
x `mplus` mzero = x
mzero >>= f = mzero
v >> mzero = mzero
but is that all
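As a sanity check (my illustration, not from the thread), the quoted laws can be evaluated at the list instance, where mzero = [] and mplus = (++):

```haskell
import Control.Monad (MonadPlus (..))

-- All four laws, checked at arbitrary sample values for the list monad.
lawsHold :: Bool
lawsHold = and
  [ mzero `mplus` xs == xs
  , xs `mplus` mzero == xs
  , (mzero >>= f) == ([] :: [Int])
  , (xs >> mzero) == ([] :: [Int])
  ]
  where
    xs = [1, 2, 3] :: [Int]
    f x = [x, x + 10]
```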
What is so bad about
f x = g x''
where x'' = x' + transform
x' = x * scale
(if you really hate inventing temporary names, that is).
On 8/21/07, Andrew Coppin [EMAIL PROTECTED] wrote:
I highly doubt that automatic threading will happen any time this
decade
Hm. I happen to have an old Sun C compiler on my old UltraSPARC box.
cc -V = Sun Workshop 6 update 2 C 5.3 2001/05/15.
One of its options is '-xautopar'. I'll let you
Someone mentioned the Blow your mind page.
One example there really caught my attention.
1234567 -> (1357,246)
foldr (\a ~(x,y) -> (a:y,x)) ([],[])
I've known about lazy match since an early version of the Haskell
report, but have never actually used it. Last night, looking at
that example, the
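With the lambda arrow restored, the example runs as written (the function name is mine); the lazy pattern ~(x,y) is what keeps it productive even on infinite input:

```haskell
-- Split a list into its odd- and even-position elements.
unriffle :: [a] -> ([a], [a])
unriffle = foldr (\a ~(x, y) -> (a : y, x)) ([], [])
```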
Let's start by reminding ourselves what foldr does.
foldr f z [x1,x2,...,xn] = f x1 (f x2 ... (f xn z) ...)
Now let's ask about last:
last [] = error ...
last [x1,...,xn] = xn
We're going to have to keep track of whether we have a last element
or not. The obvious candidate for this is Maybe
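One way to sketch the Maybe-based idea (my code, not the post's): fold from the left, letting every new element overwrite the running answer, so an empty list yields Nothing.

```haskell
import Data.List (foldl')

-- Nothing until the first element; thereafter Just the latest one.
safeLast :: [a] -> Maybe a
safeLast = foldl' (\_ x -> Just x) Nothing
```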
Trying to install GHC 6.6 on a Solaris 2.9 box (do I have the last
2.9 box in captivity? I've asked for 2.10, really and truly I have)
I ran into two problems.
(1) Somewhere it was assumed that only FreeBSD didn't have stdint.h.
Solaris 2.9 doesn't have stdint.h. That was an easy patch.
(2)
On 10 Aug 2007, at 6:42 am, David Roundy wrote:
do x1 <- e1
   if x1 then do x2 <- e2
                 xx <- if x2 then e3
                             else do x4 <- e4
                                     x5 <- e5
                                     e6 x4 x5
                 e7 xx x1
else
On 10 Aug 2007, at 9:37 am, Stefan O'Rear wrote:
http://www.haskell.org/haskellwiki/Library_submissions
I'd like to ask if it's possible to add expm1 and log1p to
the Floating class:
class ... Floating a where
    ...
    exp, log, sqrt :: a -> a
    expm1, log1p   :: a -> a
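The numerical motivation is easy to demonstrate (binding names are mine); recent versions of base do provide expm1 and log1p, exported from GHC.Float. For tiny x, exp x - 1 loses nearly all its significant digits to cancellation, while expm1 keeps them:

```haskell
import GHC.Float (expm1)

-- exp 1e-10 is 1 + 1e-10 to machine precision, so subtracting 1
-- leaves mostly rounding error; expm1 computes the difference directly.
naive, careful :: Double
naive   = exp 1.0e-10 - 1
careful = expm1 1.0e-10
```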
On 9 Aug 2007, at 8:41 am, David Roundy wrote:
I may be stating the obvious here, but I strongly prefer the do
syntax.
It's nice to know the other also, but the combination of do
+indenting makes
complicated code much clearer than the nested parentheses that
would be
required with purely >>=
On 4 Aug 2007, at 12:41 am, Mirko Rahn wrote:
rewrite *p++=*q++ in haskell?
it's one of C idioms. probably, you don't have enough C experience to
understand it :)
Maybe, but how can *you* understand it, when the standard is vague
about it?
It could be
A: *p=*q; p+=1; q+=1;
B:
On 5 Aug 2007, at 5:26 am, Andrew Coppin wrote:
Infinity times any positive quantity gives positive infinity.
Infinity times any negative quantity gives negative infinity.
Infinity times zero gives zero.
What's the problem?
That in IEEE arithmetic, infinity times zero is *NOT* zero.
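This is easy to check from Haskell, whose Double follows IEEE 754 (the binding name is mine):

```haskell
-- IEEE 754: infinity times zero is NaN, not zero.
infTimesZero :: Double
infTimesZero = (1 / 0) * 0
```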
I wrote:
But please, let's keep one foot in the real world if possible.
Monads were invented to solve the how do I do imperative programming
in a pure functional language problem.
On 2 Aug 2007, at 7:05 pm, Greg Meredith wrote:
This is more than a little revisionist. Monads have been the
I asked How is IO a functor?
On 3 Aug 2007, at 11:50 am, Dan Piponi wrote:
IO is a fully paid up Monad in the categorical sense. The category is
the category whose objects are types and whose arrows are functions
between those types. IO is a functor. The object a maps to IO a. An
arrow f :: a -> b
Someone asked about comparing monads to loops.
If you are chiefly familiar with the i/o and state monads, that doesn't
really make a lot of sense, but there IS a use of monads which IS a kind
of loop.
Only yesterday I was trying to read someone else's Haskell code where
they had imported
On 2 Aug 2007, at 2:28 am, Andy Gimblett wrote:
Is this a reasonable way to compute the cartesian product of a Set?
cartesian :: Ord a => S.Set a -> S.Set (a,a)
cartesian x = S.fromList [(i,j) | i <- xs, j <- xs]
where xs = S.toList x
Following up on my recent message about (ab)use of
On 2 Aug 2007, at 5:13 am, alpheccar wrote:
I think the problem is due to a few bad tutorial still available on
the web and which are making two mistakes:
1 - Focusing on the IO monad which is very special ;
2 - Detailing the implementation. As a newie we don't care and we
would prefer to
On 2 Aug 2007, at 1:20 pm, Alexis Hazell wrote:
Category theorists can define monads concisely using the language
of their
discipline - surely we can settle on a definition of Haskell Monads
that
would make sense to any programmer who has mastered basic programming
concepts?
It all depends
Concerning
function argument argument2
| guard = body
| guard = body
I feel that anything that prevents that kind of horror is
a great benefit of the current rules and that this benefit
must not be lost by any revision of the rules.
The Fundamental Law of Indentation is
If major syntactic
On 25 Jul 2007, at 6:50 pm, Melissa O'Neill wrote:
[section 23.4.2 of Simon's 1987 book].
The really scary thing about this example is that so much depends
on the order in which the subsets are returned, which in many cases
does not matter. Here's code that I tried with GHC on a 500MHz SPARC.
On 18 Jul 2007, at 8:52 pm, Bjorn Bringert wrote:
Well, the original poster wanted advice on how to improve his
Haskell style, not algorithmic complexity. I think that the
appropriate response to that is to show different ways to write the
same program in idiomatic Haskell.
(a) I gave
On Jul 17, 2007, at 22:26 , James Hunt wrote:
As a struggling newbie, I've started to try various exercises in
order to improve. I decided to try the latest Ruby Quiz (http://
www.rubyquiz.com/quiz131.html) in Haskell.
Haskell guru level: I am comfortable with higher order functions, but
I wrote [student code in Java twice the size of C code, 150 times
slower].
On 12 Jul 2007, at 7:04 pm, Bulat Ziganshin wrote:
using students' work, it's easy to prove that Basic is faster than
assembler (and haskell is as fast and memory-efficient as C,
citing haskell-cafe)
This completely
On 13 Jul 2007, at 2:58 am, apfelmus wrote:
What I wanted to do is to capture common patterns
x - y <= epsilon
abs (x - y) <= epsilon
for comparing floating point numbers in nice functions
x < y = x - y <= epsilon
x ≈ y = abs (x - y) <= epsilon
See Knuth, The Art of Computer
On 11 Jul 2007, at 9:56 pm, Bulat Ziganshin wrote:
Java comes close to being competition, but it's slow and eats memory
never checked it myself, but people say that modern Java
implementations are as fast as C# and close to C++ speeds
People will say anything, but before believing this
On 11 Jul 2007, at 8:02 am, Sebastian Sylvan wrote:
On 10/07/07, Alex Queiroz [EMAIL PROTECTED] wrote:
20 years from now people will still be saying this...
I highly doubt that. For two reasons:
1. People can only cling to unproductive and clumsy tools for so long
(we don't write much