Re: The importance and relevance of FP

2000-08-17 Thread Craig Dickson

Doug Ransom wrote:

 I think you are mistaking ignorance for stupidity.  It
 is true that C/C++ programmers like to write OO and few
 have any idea about functional programming, but very few
 will miss the ability to constantly shoot themselves in
 the foot with uninitialized random pointers, weird memory
 access errors, and none will miss spending a couple of
 weeks at the end of the development cycle trying to find
 a memory leak.

With such optimism about programmers, I'm astounded that you're writing from
a .com rather than a .edu address. :-) My experience in industry has led me
to quite different conclusions. Many C/C++ programmers seem not to recognize
pointer issues as a problem, or at least, one with a viable solution; when
offered languages with built-in GC, they say things like, "I like doing my
own memory management; it gives me more control and makes it clearer what's
really going on in the program." As I pointed out in a previous message,
most of these people can't really handle the power that this "more control"
gives them, but they have a blind spot that prevents them from recognizing
that this is a problem.

Craig






Re: The importance and relevance of FP

2000-08-16 Thread Craig Dickson

I see that the discussion has progressed considerably during the (for me, in
California) night, so I'll just make a couple of comments...

Ketil Z. Malde wrote:

 : functions, while pretty first class objects, reside in
   their own namespace, and need special operators.
 : iteration and side effects are not particularly discouraged,
   and is probably as common as recursion and purity in actual
   code.

I agree, and would add (if I'm recalling correctly) that LISP lacks lexical
scoping, which may not be an absolute requirement for functional programming
but certainly enhances it, and is a common feature of functional languages.

Friedrich Dominicus wrote:

 But saying Lisp isn't functional (to some extend) is
 simply not true.

Well, it's that "to some exten[t]" that is the crux here, I think. If you
are going to say that LISP "is" or "is not" a functional language (an overly
simplistic, binary distinction), then you have to decide where to draw the
line between "functional languages" and other languages that may, to some
degree, include support for functional programming. LISP has much more
support for FP than C++ or Perl, but not as much as Scheme, which in turn is
less "pure" than Haskell or Erlang. So whether you want to say that it "is"
functional or not is just a matter of where you draw the line. Scheme, which
is undeniably more "functional" than LISP, is about the least purely
functional language I would call "functional". So that's where I draw the
line. Your mileage may vary.

Craig






Re: The importance and relevance of FP

2000-08-15 Thread Craig Dickson

Jacques Lemire wrote:

 On the contrary,  languages like C++ (and Java) and
 C#  are full of concepts and ideas coming from FP
 languages. For example, the catch/try/throw construct
 is coming directly from Common Lisp (Lisp is a
 (although impure) FP language).

This is, needless to say, something of a matter of opinion and historical
interpretation. I wouldn't call LISP an FP language, though it is surely
ancestral to many FP languages, and many FP concepts have their basis in
LISP features.

FP has had some influence on non-FP languages, surely, but usually the
effect of this influence has been not to add core FP capabilities or allow
FP thinking, but instead to add some isolated feature that is more or less
orthogonal to FP, even if it originated in an FP language. Exceptions are a
good example of this; they really have nothing to do with FP as such (in the
sense of "FP is sugared lambda calculus"). How many non-FP languages have
real higher-order functions? Certainly not C++, despite the STL's "function
objects".

 I would not be a bit suprise to see tuples, lists and
 cons(es) introduce in the next generation of languages.

Which "next generation"? If you mean the descendants of C++ and Java, I
would be quite surprised to see LISP-style lists built into them.

 If you what to see the limitations of OO and C++, take
 a look at the MFC, MFC architects had to introduce macros.

Well, I'm not sure that's a fair target. I don't think anyone knowledgeable
about OO would consider MFC (or C++ itself) a very good example of it.

Craig






Re: Haskell and the NGWS Runtime

2000-08-14 Thread Craig Dickson

Benjamin Leon Russell wrote:

  Tyson Dowd [EMAIL PROTECTED] wrote:
  On 11-Aug-2000, Craig Dickson [EMAIL PROTECTED] wrote:
 
   [stuff deleted]
 
   will be coming from C and C++ where it is perfectly
   normal to do all sorts of things that the compiler
   cannot guarantee to be safe. This leads to all sorts
   of bugs such as buffer overflows, stack corruption,
   page faults accessing unmapped memory, etc. By making
   it so convenient to write unsafe code in C#, Microsoft
   has essentially given these C/C++ coders an excuse not
   to bother learning how to write safe code, and many
   programmers will take that excuse.
 
  I don't believe you can teach programmers anything by
  trying to take tools away from them.
 
  I believe you can only teach programmers by showing them
  a better tool.

 Aha, but *which* programmers?  The C/C++ programmers who
 will bother learning how to write safe code, or those who
 won't?

[...]

 The problem is that many programmers will not focus on the
 safe features if the unsafe ones remain.

Exactly. For example, many C programmers use "goto" habitually, even though
there are better solutions even in C.

I, personally, have no problem with learning new paradigms; in fact, I like
it. I regard learning a new programming language as essentially getting an
inside view of someone else's ideas of how programming should be done, and
so I go out of my way to learn the "correct" style for that language.

&lt;rant&gt;

But I work with a number of people who I don't think have bothered learning
any really different ideas about programming since graduating back in the
mid-80s. They write C++ as if it were C, after ten years of working
exclusively in C++. Sure, they call their structs "classes" now, and the
classes have methods, but all their data members are public, and if they
need a list of objects, they just give each object a "next" pointer instead
of using a separate list class. The company's C++ style guide, when I was
hired, contained pearls of wisdom such as "Declare all methods virtual" and
"Never use static members". The company's chief engineer (now chief
architect) habitually violates Microsoft's Windows coding guidelines left
and right because she thinks it makes the code "easier to read" -- never
mind that it doesn't work. I'm sorry, but I just don't think these people
will, of their own volition, learn how to write "safe" C#. They'll just
write unsafe C# because it's more convenient. "After all," they'll say, "my
code is valid C# too, so it's just as good as yours. It's just a matter of
style."

&lt;/rant&gt;

Now, on the other hand, if they need to work in C#, and if C# didn't have
"unsafe" features, instead merely having a decent FFI, these folks might get
the idea that safe code is the way to go.

 It takes time to learn a new skill, and if the programmers
 already know how to write unsafe code that they are
 convinced will work (even if it also potentially introduces
 bugs), they are not likely suddenly to change their style
 and to write everything using safe code whenever possible,
 even if it happens to be better in each situation.

&lt;rant&gt;

One thing I've noticed with C/C++ programmers, particularly (which is,
again, the pool from which most C# programmers will be drawn), is that many
of them are convinced that they can handle dangerous techniques which
experience shows they can't handle. They say things such as, "I like doing
my own memory management, because it gives me more control," but their code
continually suffers from memory leaks and other pointer-related problems
that show quite clearly that they are not to be trusted with these things
that give them "more control". This, in my view, is just one more reason why
"unsafe" features should not be built into mass-market languages like C#.

&lt;/rant&gt;

 Ah, a testable hypothesis!  If you are right, then you
 should be able to provide an example of a language that
 meets the requirements of writing both low-level kernel
 code and most user applications equally well for the
 bulk of the programmers working with the language!

I think this is a better hypothesis to test than Tyson's. For one thing, to
really be able to say what problems exist in C# as a language, and how the
desire to be both a high-level applications language and a low-level kernel
language plays a role in those problems, will take time and experience with
C#. A lot of the things I now dislike about C++ took me a long time to
really understand.

I realize, of course, that it probably seems very convenient for me that I
prefer the hypothesis that puts the burden of proof on the other party. Of
course it is. But I think history, so far, shows that attempts to make a
language that is "universal" or "good for everything" have resulted in
languages that are too complicated. It may be _possible_ t

Re: Haskell and the NGWS Runtime

2000-08-11 Thread Craig Dickson

Antony Courtney wrote:

 But Java also has a way to do "rampant pointer-level
 optimization":  You declare a method as "native" and
 then implement it in C.

That's hardly the same thing, though. Of course an FFI allows you do to all
sorts of things, but at least it's very clear, from the fact that you're
using another language, that you are switching paradigms and potentially
doing something dangerous. In fact, I would generalize this a bit further
and say that if you want to do something that violates the paradigm of the
language you're working in, you _should_ do it in another language,
precisely to make the point (to anyone reading your code) that certain
components aren't following the rules.

In C++, this isn't much of an issue because C++ is the all-time paradigm
whore of languages; there basically aren't any rules, and you can do
whatever you like, which is part of why C++ sucks. With STL and some of the
weirder properties of recursive templates, you even almost have a sort of
half-assed functional language. I'm almost surprised that Stroustrup hasn't
tried to build in a real module system, closures, and backtracking; then
he'd have just about everything.

 Any sensible programmer

Most programmers, in my experience, are not sensible.

 will recognize the loss of portability, safety and
 abstraction when writing a native method in Java, and
 will only do so when absolutely necessary.  The same
 should go for "unsafe" methods in C# [...]

The difference is that C# allows you to fundamentally design an application
in an unsafe way and still claim (to your manager) that your code is 100%
C#. I'm not kidding; this will happen. Remember that many C# programmers
will be coming from C and C++ where it is perfectly normal to do all sorts
of things that the compiler cannot guarantee to be safe. This leads to all
sorts of bugs such as buffer overflows, stack corruption, page faults
accessing unmapped memory, etc. By making it so convenient to write unsafe
code in C#, Microsoft has essentially given these C/C++ coders an excuse not
to bother learning how to write safe code, and many programmers will take
that excuse.

 Remember, too, that not every program is written as an
 application on a PC.  The requirement in Java that native
 methods be implemented in another language caused serious
 problems for the JavaOS and embedded / JVM-on-a-chip
 efforts.

Erlang has the same requirement that code doing unsafe things has to be
written in another language. It is used in a number of embedded systems; in
fact, that's what it was originally intended for. So the argument that you
have to have unsafe features built into the language just doesn't wash.

 How do you write a device driver for a memory-mapped
 device in 100% pure Java? You can't.

And you aren't supposed to. I wouldn't want to write hardware drivers in a
garbage-collected language that allocates all objects on the heap.

Java is not a systems-level language; adding low-level bit-twiddling
features to it would give it some of the same problems that C++ has, with
the low-level features undermining the high-level features and the
requirements of the high-level features interfering with the low-level
features.

Your not-quite-spoken assumption that it should be possible to write
everything in one language is just something I fundamentally disagree with.
The requirements of low-level kernel code are quite different from those of
most user-level applications, and any single language that tries to address
both sets of requirements will do so poorly.

Craig






Re: Haskell and the NGWS Runtime

2000-08-11 Thread Craig Dickson

Sylvan Ravinet wrote:

 Do, or do not. There's no try. -Yoda

Pedantic not to be, but in contractions speak, does Yoda not. Is quote, "Do,
or do not. There is no 'try'."

Craig






Re: Haskell and the NGWS Runtime

2000-08-10 Thread Craig Dickson

Brent Fulgham wrote:

 Thanks for the link!  Unfortunately, its click-through
 license forbids disassembly, reverse engineering, and a
 raft of other endeavors that one should be allowed if
 they were truly interested in global acceptance.

Well, this _is_ Microsoft, after all.

 Of course, a few hops up the chain you might run across Joshua
 Trupin's execrable description of the C# language.  Really one
 of the worst articles I've ever read.  You will get such
 wisdom as:

 "It's [C#] a little like taking all the good stuff in
 Visual Basic (C) and adding it to C++, while trimming off some
 of the more arcane C and C++ traditions."

More like "Microsoft Java", but of course they never mention Java, as if
hoping that people will read all this tripe and not notice the similarities.

I had a most exquisite sense of down-the-rabbit-hole a few weeks ago when I
browsed through a Microsoft book on C#. I kept running into all this stuff
about how C# was a revolutionary next-generation OO language, better than
C++, but all the programming samples looked only trivially different from
Java, which of course they never mentioned, as if the book had arrived from
some parallel universe in which James Gosling was assassinated by Richard
Stallman after inventing Gosling Emacs, and had thus never invented Java. I
had to put the book down after a little while because I felt like I'd lose
my mind if I kept going.

I don't really mind them inventing a variation on Java. Either it will take
off or it won't, and I don't really care either way. But I just wish they'd
admit what they're doing, and stop treating us all as if we're so stupid
that if they don't _tell_ us that C# is a Java derivative, we won't notice.

 If you're like me, you might be wondering what exactly the
 "good stuff" allegedly contained within Visual Basic (C) might
 be.  Well, one such element is apparently the labeled "goto".

My impression is that C#'s use of labeled gotos is extremely restricted. (I
could be wrong.) The only reference to gotos that I found in a Microsoft
Press book on C# is its use in jumping to the start of one case in a switch
from another case in the same switch. This is actually useful, since it is a
better solution than "falling through" from one case to another, as is
sometimes done in C, C++, and Java (but which cannot be done in C#, since
the "break" required in those languages is implicit in C# -- also a good
thing). It's a better solution because (1) it doesn't look like a mistake
when you see it in someone else's code; (2) it is more flexible, since the
case you're going to doesn't have to be positioned directly below the case
you're coming from.

Now, if it turns out that C#'s goto can be used for any other purposes, I
will be much less happy about it.

 Joshua is also quick to highlight another:

 "What's one of the most annoying things about working in C++?
 It's gotta be remembering when to use the -> pointer indicator,
 when to use the :: for a class member, and when to use the dot."

 Hmm.  I guess stating precisely what you mean is a bad feature
 for programming languages.

Well, claiming that the -> vs. . thing is one of the "most annoying things
about working in C++" is pretty silly, but only because it's so trivial, not
because "stating precisely what you mean is a bad feature". C#, like Java,
doesn't need -> because there's no distinction between having a reference to
an object vs. having a pointer to an object. And C++ overloads so many
things that it's sort of silly to complain that getting rid of the
double-colon scoping operator reduces clarity. The double-colon was
unnecessary to begin with.

 The thing that really bothers me is that they claim that ".NET
 will be available on Windows (C) and other systems".  But they
 have no reference implementations available for non-Windows (C)
 environments.  When Sun released Java, we had Unix and Windows
 versions available right away, and the Linux Blackdown port
 shortly thereafter.

True, but then Sun is a Unix vendor, so of course they had to support it,
and Windows is 90% of the market, so they had to support that too.
Microsoft's incentive to support anything other than Windows is unclear to
me, to say the least. I interpret "other systems" to mean Windows CE, and
will believe otherwise only when a less ambiguous announcement is made.

Craig






Re: Haskell and the NGWS Runtime

2000-08-10 Thread Craig Dickson

Benjamin Leon Russell wrote:

 However, according to the C# Language Reference,
 "For developers who are generally content with
 automatic memory management but sometimes need
 fine-grained control or that extra iota of
 performance, C# provides the ability to write
 “unsafe” code. Such code can deal directly with
 pointer types, and fix objects to temporarily
 prevent the garbage collector from moving them."
 [Section 1.2]

I hadn't known that. Guess I didn't read far enough, or perhaps it wasn't
covered in the book I was reading. I agree that this is likely to be abused.

Craig






Re: Haskell and the NGWS Runtime

2000-08-09 Thread Craig Dickson

Nigel Perry wrote:

  NGWS

 An older temporary name for .NET. NGWS? Never Goes Wonderfully Sucks?
 I think somebody shot the marketing guy and replaced him, she then
 came up with ".NET" :-)

Next Generation Windows Services (I think), as opposed to older generations
such as the Win32 APIs and COM.

  C# (which I've discovered is intended to be pronounced C-sharp
  rather than C-hash)

 MS's version of a "better C", "better" is subjective of course ;-)

More of a "better Java", really. I haven't looked at C# in any great detail,
but as a language it does seem to be a bit better than Java from MY
particular subjective viewpoint, aside from its current complete lack of
portability. (Somehow I doubt that Microsoft will actually create, or allow
anyone else to create, a really good Mac or .*n[iu]x version of it. That
would conflict with their desire to have Windows conquer the universe.)

  the .NET virtual machine

 Under .NET compilers compile to IL, this is then JIT'ed and executed.
 "JIT" includes such options as "JIT at install time" (a new defn of
 JIT!). MS are keen to point out that IL code is never interpreted.

Yes, this is nice. Compiling as part of installation is a cool option. Of
course, JIT at runtime is probably preferable when you're downloading web
app(let)s, which I assume the .NET infrastructure is meant to support.

  COM

 I'm too young to know about COM, but I hear it was less than wonderful

COM is/was the Component Object Model, a language-independent (though somewhat
C/C++-biased) binary-compatibility standard for object interfaces. Related
terms include OLE (Object Linking and Embedding) and ActiveX, both of which
are particular subclasses/extensions of COM. Working with COM is sort of a
pain, and the extra levels of indirection and data transformations it
requires degrade performance to a noticeable extent. There are tons of COM
objects built into Windows, by the way, which may partially explain why
Windows is such a pig.

 Why does the world need C# when it already has Java and C++?

 Who invented Java & C++?

Good answer, but let's consider the history and purpose of these things a
bit...

C++ was invented (by Stroustrup at Bell Labs) because he wanted to add
objects to C, while retaining near-perfect backward-compatibility at the
source level (i.e., nearly all legal C programs should be legal C++ as well,
aside from conflicts caused by new keywords in C++ that are legal variable
names in C). It has since grown to include exceptions, templates, and other
capabilities that are orthogonal to object-orientation but were considered
good things to have. The resulting language, in some people's opinions (mine
included) is a chaotic mess of conflicting features that is hard to learn
and use.

Java was Gosling's attempt to make a more pure OO language than C++ while
still retaining C-like syntax. He discarded what he considered "mistakes" in
C++, such as multiple inheritance, backward-compatibility with C, and the
"const" keyword; added some mistakes of his own; and neglected to fix a
number of syntactic uglinesses inherited from C, such as the need to put
"break;" at the end of every case in a switch statement. His superiors at
Sun then marketed Java as a revolution in language design, which it was not,
and as a Windows-killer, which it was not. What it was and is, IMHO, is a
mediocre language created with minimal ingenuity and no really new ideas.

C# is Microsoft's attempt at a Java-killer. It isn't really all that
different from Java, but as Nigel says below, its virtual machine is
intended to support more languages than just C#, whereas the JVM was
designed just for Java, and getting other languages to compile to JVM can be
a bit of a struggle. Syntactically, C# seems a bit cleaner than Java, but
the differences don't really add up to much.

 Why does the world need a .NET virtual machine when it has
 dozens of Java Virtual Machines?  Don't COM and Corba already

 The argument here is that .NET is designed from the ground up to
 support multiple languages, the JVM was not. So languages can
 interwork, share libraries, and even extend each others classes.

And CORBA isn't a Microsoft product, nor is it supported by any Microsoft
product, so Microsoft prefers to ignore it. This is not to say whether CORBA
is good or bad, as I haven't worked with it. I do think, though, that
Microsoft's idea of using a VM-based intermediate language, rather than a
low-level binary compatibility standard such as CORBA or COM, is a better
way of getting different languages to work together.

Craig






Re: Haskell and the NGWS Runtime

2000-08-03 Thread Craig Dickson

Fergus wrote:

 I guess one could argue that the costs of most other things pale
 in comparison to the costs of having lazy evaluation as the default ;-)

Of course, if you're the sort of person who likes to write "head (sort lst)"
to get the least member of a list, then lazy evaluation is incredibly
efficient. Getting used to lazy evaluation, and really learning how to use
it properly, is probably really the hardest thing about Haskell for someone
who has already learned how to program using strict languages. Possibly even
harder than monads.
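
For instance, the lazy minimum can be sketched in Haskell (the name minViaSort is mine, not from the discussion):

```haskell
import Data.List (sort)

-- Because sort produces its result lazily, head demands only enough
-- of the sorted list to yield the first (least) element; the rest of
-- the sorting work is never forced.
minViaSort :: Ord a => [a] -> a
minViaSort = head . sort

main :: IO ()
main = print (minViaSort [5, 1, 4, 2, 3])  -- prints 1
```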

Funny/sad anecdote: I once saw a senior software engineer with over 15 years
of C experience and a master's degree in CS, write the following code in C
to locate the least member of an array:

qsort(array,
  sizeof(array) / sizeof(array[0]),
  sizeof(array[0]),
  comparator);
least = array[0];

Essentially, this is just "head (sort lst)" in C, which, as I daresay we all
know, is NOT a lazy language! I actually had to explain to her why it was
inefficient. Depending how you look at it, this is either strong evidence
that lazy evaluation is a more natural way to think than strict evaluation,
or proof that a master's in CS doesn't necessarily mean you really know
anything about programming.

Craig






Re: mail delivery

2000-02-22 Thread Craig Dickson

S.D.Mechveliani [EMAIL PROTECTED] wrote:

 Today, there came the letter by  Joe English  on space leak etc.,
 dated of  February 06.
 And today is  February 22.
 I wonder, what is the matter with the mail lists.

The delay, in this case, appears to have been on the sending server's side,
so I don't see any reason to think the list is at fault.

Craig





Re: drop take [was: fixing typos in Haskell-98]

2000-01-25 Thread Craig Dickson

Brian Boutel [EMAIL PROTECTED] wrote:

 We have seen various proposals about what laws should hold wrt
 take and drop. I think there is a reasonable presumption that the
 following  very simple laws should hold first:

 length (take n xs) === n
 length (drop n xs) === length xs -n

Does that not imply that "take n xs" when n > length xs should be an
error? I would support that for Haskell 2000, but not for Haskell 98; it's
too big a change, and goes far beyond the original goal of resolving the
problem of "take n xs | n < 0".

For Haskell 98, I still favor the proposal:

take n xs | n < 0 = []
drop n xs | n < 0 = xs
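
As runnable Haskell, the proposed behaviour might be sketched like this (the primed names are mine, used only to avoid shadowing the Prelude; the negative case folds into the usual n <= 0 base case):

```haskell
-- Sketch of the proposed Haskell-98 semantics: a negative count is
-- treated like zero, so take' yields [] and drop' yields the whole list.
take' :: Int -> [a] -> [a]
take' n _ | n <= 0 = []
take' _ []         = []
take' n (x:xs)     = x : take' (n - 1) xs

drop' :: Int -> [a] -> [a]
drop' n xs | n <= 0 = xs
drop' _ []          = []
drop' n (_:xs)      = drop' (n - 1) xs
```

With these definitions, take' (-2) [1,2,3] is [] and drop' (-2) [1,2,3] is [1,2,3], so neither is an error.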

For Haskell 2000, I feel that the list functions should be consistent in
their treatment of empty lists. If "head []" is an error, then "take 2 [1]"
should also be an error. And I like having "head []" be an error, because if
it returned [], then it seems to me that that would have nasty implications
for pattern-matching. I don't want a pattern like "(x:xs)" to match the
empty list, which it presumably would if "head []" and "tail []" did not
fail (x and xs would both be bound to []).

So, if "head []" and "tail []" are going to fail, then other things that
imply looking at the head or tail of [] should also fail, including "take 2
[1]" and "drop 2 [1]".

Craig





Re: drop take [was: fixing typos in Haskell-98]

2000-01-25 Thread Craig Dickson

Tom Pledger [EMAIL PROTECTED] wrote:

 Craig Dickson writes:
   [...]
   I don't want a pattern like "(x:xs)" to match the empty list, which
   it presumably would if "head []" and "tail []" did not fail (x and
   xs would both be bound to []).

 I don't think it would.  Patterns involve data constructors like []
 and (:), but not functions like head and tail, which may happen to
 obey all sorts of rules, but aren't part of the data type definition.

True, but I think the standard functions, especially those in the prelude,
ought to make sense in terms of the data type's definition, for the sake of
presenting a consistent view of that data type to the programmer. If
"(x:xs)" does not match [], then the reason for this should be that [] has
no head to bind to x, nor tail to bind to xs; and if this is so, then "head
[]", "tail []", and "take 1 []" should also fail. Conversely, if "head []"
and "tail []" succeed, then "(x:xs)" should match the empty list. If this
consistency is not maintained, then the language and its core functions are
presenting a confusing and somewhat contradictory view of what a list is and
how you interact with it.
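
The consistency argument can be made concrete: if head is defined by the same constructor patterns as the data type (which is essentially how the Prelude defines it), the two behaviours fall out of a single definition. A sketch (the name head' is mine):

```haskell
-- [] and (:) are the list type's only constructors, so the pattern
-- (x:_) can never match []. Defining head by that pattern means
-- head [] fails for exactly the same reason that (x:xs) fails to
-- match the empty list: [] has no head to bind.
head' :: [a] -> a
head' (x:_) = x
head' []    = error "head': empty list"
```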

Craig





Re: fixing typos in Haskell-98

2000-01-24 Thread Craig Dickson

Brian Boutel wrote:


 take -2 [1,2,3,4] ++ drop -2 [1,2,3,4] -> [3,4,1,2]

But [1,2,3,4] is NOT the same as [3,4,1,2]. So the equality doesn't hold.

Personally, for reasons I'm not sure I can articulate, I've always strongly
disliked the notion that negative arguments should produce "backwards"
behavior, e.g. "take n xs" == "drop (length xs - n) xs". I think the best
way I can put it is that the simple, easily comprehended definition of,
e.g., "take n xs", is that it gives you the first n items in the list xs,
and that this simple concept is violated, to no significant benefit, by
defining backwards behavior for negative values of n. It is particularly
dangerous in the presence of infinite lists, since the negative value may
result from a bug in the program -- I would rather have the program fail
than have it subtly become non-terminating. (Not that there aren't other
ways of doing this.)

I greatly prefer the suggestion that

take n xs | n < 0 = []
drop n xs | n < 0 = xs

Craig





Re: fixing typos in Haskell-98

2000-01-24 Thread Craig Dickson

I would like to take this opportunity to thank Microsoft Outlook Express for
trashing the format of Brian Boutel's message so that I couldn't tell what
part of it was quotation (from Jacobsen) and what part was Boutel's reply.
Of course, I now realize that Brian was pointing out that Jacobsen's
definitions for take and drop didn't work. Nevertheless, I stand by my
argument against allowing negative arguments to take/drop to produce
"backwards" behavior.

Craig







Re: The Haskell mailing list

1999-10-08 Thread Craig Dickson

Colin Runciman [EMAIL PROTECTED] wrote:

 I also agree with Simon that simply making this a moderated list is
 not the solution.  Perhaps splitting is best.  How about

 haskell-info
 haskell-talk

 where info carries *brief* announcements, requests for information
 and responses to such requests, and talk carries anything and
 everything else appropriate to a Haskell list.

Of the suggestions so far, this is the one I like best. Another writer had
suggested "haskell-announce", but I prefer Colin's idea of "haskell-info",
combining announcements with some Q&A but no extended discussions. Any
threads lasting beyond a few messages could move to haskell-talk. For those
of us who want everything, there are two possibilities that come to my mind
offhand:

(1) All haskell-info traffic could be automatically copied by the list
server to haskell-talk, effectively making haskell-info just the "low
bandwidth" version of haskell-talk; or

(2) Readers could subscribe to both lists, and optionally use mail filters
to merge them into one folder locally.

I'm slightly inclined towards (1) just because I don't see why anyone
subscribed to haskell-talk wouldn't also want to read the haskell-info
messages, and it would be nice to not need to remember to update two
subscriptions when changing email addresses or unsubscribing.

All this seems somewhat less than ideal, though. I think the real problem is
that most mail clients don't have killfiles the way newsreaders do (unless
your newsreader is also your mail client, perhaps). I would like to be aware
of all topics in all the Haskell lists, but not have to bother with a given
thread once I've decided it doesn't interest me. In a newsreader, I can kill
just the threads I don't like, and not even see subsequent followups. Even
in fairly busy newsgroups, this is an effective tool for controlling
perceived traffic levels. Mail clients, unfortunately, generally don't
support such a capability. So unless moving everything to comp.lang.haskell
is a viable option, I think splitting the list into haskell-info and
haskell-talk is the best option.

Craig








Re: OO in Haskell

1999-10-05 Thread Craig Dickson

Kevin Atkinson [EMAIL PROTECTED] wrote:

 God NO, I like C++ because it is powerful but adding more features on an
 already ugly (but powerful languge) will make matters worse my making it
 more powerful but so ugly with so many pitfalls that no one will want to
 use it.

Some would say this has been true for some time...

Craig








Re: Tail recursion

1999-09-15 Thread Craig Dickson

Paul Moore [EMAIL PROTECTED] wrote:

 Now, I tried this in Hugs98 and found inconclusive results. Both fact1 1
 and fact2 1 failed with a control stack overflow. However, when I was
 experimenting earlier today (I didn't save the results :-() I got a
 variation on fact2 which went well beyond 1, although it failed further
 on with a garbage collection not being able to reclaim enough space (which
 indicates that Hugs may *still* not be fully optimising to an iterative
 form - unless the numbers had got so huge that the space for the bignums
 involved was filling my RAM - but that seems unlikely...)

I think Haskell's lazy evaluation is invalidating your tests. The gc problem
is probably caused by memory being filled up with partially-evaluated (or
not yet evaluated) expressions.

I'm not sure offhand whether Haskell guarantees tail-recursion
optimizations, but I'm sure there are several people on this list who can
tell us...
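
As an illustration of the thunk-buildup point (a sketch, not a claim about what Hugs does internally): even a syntactically tail-recursive factorial can exhaust memory under lazy evaluation, because the accumulator grows as an unevaluated chain of multiplications unless it is forced, e.g. with `seq`:

```haskell
-- Tail-recursive factorial. Without forcing, `acc` would accumulate a
-- chain of suspended (*) thunks; `seq` forces it at each step, so the
-- loop runs in constant stack space.
factStrict :: Integer -> Integer
factStrict n = go n 1
  where
    go k acc
      | k <= 1    = acc
      | otherwise = let acc' = acc * k
                    in acc' `seq` go (k - 1) acc'
```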

Craig







Re: opposite of (:)

1999-08-20 Thread Craig Dickson

xander [EMAIL PROTECTED] wrote:

 is there an opposite (:) available in haskell?
 like, i can toss an element at the back of a list without
 O(n) fuss recursing bla??

You can cons an element onto the front of a list without modifying it, but
adding an item at the end would mean modifying the existing list (or copying
it). That's why you can't do it in O(1) time.

To illustrate, here's a simple list:

lst = [1,2,3]

Internally, most functional languages will implement this as a singly-linked
list, with the name "lst" pointing to the "1" element, as follows:

"lst" -- 1 -- 2 -- 3

Now, if you cons an item onto "lst", like this:

lst2 = 0:lst

the name "lst2" points to a newly-created "0" element, which in turn is
linked to the same "1" element that "lst" points to. Note that the original
list is NOT duplicated; there is still only one copy of the list [1,2,3],
and it can be referenced both as the list "lst" and also as the tail of the
list "lst2". This is what makes it possible to cons an element onto a list
in O(1) time.

But to add an element to the _end_ of a list, you have to duplicate the
entire list, or else you'll not only create a new list, but also modify any
other lists that happen to share elements with the list you're adding to.
For example (using -: as the "add element at tail in O(1) time" operator
you're looking for):

lst  = [1,2,3]     (internally: lst  --> 1 --> 2 --> 3)
lst2 = lst -: 4    (internally: lst2 --> 1 --> 2 --> 3 --> 4)

This might seem okay at first glance, but remember that you haven't
duplicated the original list (because we want O(1) performance); we've just
added a new element to its tail. This means that "lst2" is referencing the
same 1 --> 2 --> 3 nodes that "lst" is using. The problem is that the "3"
node is now pointing to a "4" node, which means that in defining "lst2",
we've modified "lst", which is obviously not acceptable. This is why the
operator you want doesn't exist, and why instead you have to say

lst2 = lst ++ [4]

which actually copies the elements of "lst", giving O(n) performance.

I'm not sure if this is in a FAQ somewhere, but it ought to be. I think
every newcomer to functional programming (or LISP) wonders about this.
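
One common workaround, for the record (a sketch of the standard difference-list idea, not something from the original question): represent a list under construction as a function `[a] -> [a]`, so that appending at the back is O(1) function composition, and the O(n) cost is paid only once when converting back to an ordinary list:

```haskell
-- A "difference list": a list represented as a prepend function.
-- snocD appends at the back in O(1); toList pays the full cost once.
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList xs = (xs ++)

snocD :: DList a -> a -> DList a
snocD f x = f . (x :)

toList :: DList a -> [a]
toList f = f []
```

For example, `toList (fromList [1,2,3] `snocD` 4)` yields `[1,2,3,4]` without ever mutating a shared tail.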

Craig







Re: ANNOUNCEMENT: The Glasgow Haskell Compiler, version 4.04

1999-07-29 Thread Craig Dickson

Now that you're an (ahem) Microsoft employee, is there any intention of
allowing ghc to use Visual C++ instead of gcc, or supporting the Win32
platform without cygwin?

Thanks,

Craig

- Original Message -
From: Simon Marlow [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, 29 July 1999 10:37 am
Subject: ANNOUNCEMENT: The Glasgow Haskell Compiler, version 4.04


  The Glasgow Haskell Compiler -- version 4.04
 ==============================================

[etc.]







Re: How to murder a cat

1999-06-14 Thread Craig Dickson

Jeff Dalton [EMAIL PROTECTED] wrote:

 Sure, cat in itself isn't very interesting.  But cat is just a simple
 case of a more interesting problem, that of writing what Unix calls
 "filters": programs that take some input from a file or pipe or other
 similar source and transform it into some output.

Yes, but cat *doesn't* transform anything. The problem is not that cat is a
filter, but that it is such a trivial one.

I would think that if one wishes to learn functional programming, one would
be best advised to start out solving problems that are well-suited to the
functional paradigm -- where most of the solution involves manipulating the
data in memory, rather than getting the data in or out of the program. The
simplest Unix filters, such as cat and tee, don't fit this description. More
complex filters such as grep (or even wc) would make much better learning
projects.
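
To make that suggestion concrete (a minimal sketch in the classic `interact` style, not from the original post): a bare-bones `wc` is a natural first filter, since nearly all the work is pure list manipulation:

```haskell
-- A minimal wc: counts lines, words, and characters. The counting is
-- a pure function; interact handles all the I/O.
wc :: String -> String
wc s = unwords (map show [length (lines s), length (words s), length s])

main :: IO ()
main = interact ((++ "\n") . wc)
```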

Craig







Re: Haskell conventions (was: RE: how to write a simple cat)

1999-06-11 Thread Craig Dickson

Jan Skibinski [EMAIL PROTECTED] wrote:

 But there are some stylistic camps, such as Eiffel's, that
 prefer names with underscores rather than Hungarian notation
 - claiming exactly the same reason: better readability. :-)

I don't see that underscores serve readability in the same way as Hungarian
notation purports to (unless the Eiffel people claim that underscores
somehow convey type information?), so I don't see a conflict here. One could
easily use both, e.g. n_widget_count for an integer value.

Whether underscores are better than mixed case, or whether Hungarian
notation is useful, seem to be matters of personal taste, not of fact. I
personally don't see much advantage to either underscores or mixed case
(except in C++, where many programmers tend towards such lengthy names that
the use of mixed case instead of underscores is actually helpful in keeping
identifier lengths under control). I use Hungarian notation only in C/C++,
and only when writing specifically for MS Windows, simply because that's the
convention on that platform (all of MS's documentation and samples use it).
I tend to think (getting off topic here) that Hungarian notation is fairly
useless; I'd rather know something about the scope of a variable (e.g. is it
a global? file-static? class member? static class member? local? static
local? In C++ there are so many possibilities! And that's not even
considering "const", "mutable", "volatile"...) so that I can see what the
variable relates to and how widely-felt the effects of changing it might be.
One company I worked at a few years back had a cute prefix scheme for this:
non-static member variables were prefixed "my" (i.e. owned by a single
object), static members "our" (i.e. shared by a class of objects), globals
and file-statics "the" (i.e. there can only be one), and locals "a" or "an"
(depending, of course, on whether the variable name began with a consonant
or a vowel). In practice, this seemed to my then-co-workers and me to be far
more helpful than Hungarian notation.

Craig







Re: How to murder a cat

1999-06-10 Thread Craig Dickson

Jerzy Karczmarczuk [EMAIL PROTECTED] wrote:

 More seriously, I jumped into this FP paradise from a good, Fortran
 loving milieu, when I found that there were problems very awkward to
 solve using imperative programming. I wouldn't have started to use FP
 just to test that it is possible to repeat the standard imperative
 archetypic codes functionally, because it is not very rewarding.

I have to agree. I've been mostly ignoring this whole "how to write 'cat' in
Haskell" thread, because I don't find it to be in any regard an interesting
problem, nor a very appropriate one for learning about functional
programming, especially lazy functional programming. If it seems desireable
to re-implement a standard Unix utility in Haskell, I suggest 'make'. One
could even design and implement a 'make' that would know all about Haskell
modules, and parse them to generate dependencies automatically.
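
A toy sketch of that dependency-extraction idea (hypothetical helper, and deliberately naive: real module syntax has qualified imports, comments, aliases, and so on, which a real tool would parse properly):

```haskell
-- Extract imported module names from Haskell source, naively: any line
-- whose first word is "import", taking the next word and skipping an
-- optional "qualified". Pattern-match failure in the comprehension
-- simply skips non-import lines.
importsOf :: String -> [String]
importsOf src =
  [ m | l <- lines src
      , ("import" : rest) <- [words l]
      , m <- take 1 (dropWhile (== "qualified") rest) ]
```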

Craig







Re: Type casting??

1999-03-11 Thread Craig Dickson

Steve Frampton wrote:

 Okay, I'm [damn] confused regarding type-casting in Haskell.

Because there isn't any?

 I'm trying
 to write a function that accepts an integer and then returns a set (in
 this case, a set of characters).

 I'm having a lot of problems with "Declared type too general", "Type error
 in application", etc. depending on what I try.

 My function looks sort of like this:

   foo :: Int -> [a]
   foo 0 = []
   foo x = ['1'] ++ foo(x - 1)

 I would expect that I'd end up with a set of ''s depending on how
 large a value I pass to 'foo'.  However, I can't seem to make the
 interpreter happy, no matter what I try (sounds like an ex-gf of mine).

Well, the example you give doesn't make much sense, since the returned list
will always be a list of characters; thus the supplied signature Int -> [a]
is indeed too general. Int -> [Char] would be correct.

If the signature Int -> [a] represents what you really want, then the code
is wrong, and I'm not sure what you're trying to do. You want a list N
elements long, obviously, but what are the members of the list? If you want
the type of the list members to be different depending on how you use the
function, then the function needs an argument that tells it what type (and
what value of that type) to use, like this:

foo :: Int -> a -> [a]
foo 0 _ = []
foo x a = a : foo (x - 1) a

With this, evaluating foo 3 '1' results in "111" :: [Char], and evaluating
foo 3 1 results in [1, 1, 1] :: [Int], which makes sense and sounds like it
might approach what you're looking for.

This function could be further improved by making it tail-recursive, but
that's not quite germane to your question.
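
For completeness, the tail-recursive improvement might look like this (a sketch with an accumulator; since every element is the same, the usual "accumulator reverses the list" caveat doesn't matter here, and in practice the Prelude's `replicate` already does this job):

```haskell
-- Tail-recursive variant: the growing list is passed as an accumulator,
-- so the recursive call is the last thing the function does.
fooTR :: Int -> a -> [a]
fooTR n x = go n []
  where
    go k acc
      | k <= 0    = acc
      | otherwise = go (k - 1) (x : acc)
```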

 I thought maybe I need some kind of type-cast ala C, but I'm not really
 sure what needs to be put there.

Absolutely not. It sounds like you're thinking you can call foo, have it
generate a list of unknown type but known length, and then assign a type
afterwards. You can't do that.

Craig







Re: why I hate n+k

1998-11-30 Thread Craig Dickson

Brian Boutel wrote:

 n+k patterns make sense for a type of Natural Numbers (including 0),
 but not for general Integral types.

 They are also dangerous because they are defined in terms of < and -,
 which, in a user-defined type, need not obey the usual laws, e.g. you
 cannot assume that 0 < 1 holds.

 The problem is that dropping them would break lots of stuff - but probably
 more textbooks than programs.

I wonder. It probably depends on how much mindshare (n+k) patterns have.
Personally, I have never used one, but since I'm disinclined to do so I may
not have noticed situations in which they come in handy -- if indeed there
are such things.

I imagine it would be possible to write a Haskell program to translate
functions using (n+k) patterns to other forms. Distributing such a program
along with the new (n+k)-less compiler would make for an easy transition for
those who have made significant use of such patterns.
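
For illustration, the kind of mechanical rewrite such a translator would perform (a hypothetical sketch; the Report's actual desugaring of n+k involves a comparison and a subtraction on the matched value):

```haskell
-- An (n+k)-style definition like
--   fib (n+2) = fib (n+1) + fib n
-- translates mechanically into a guard plus explicit subtraction:
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n | n >= 2 = fib (n - 1) + fib (n - 2)
```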

Craig






Re: why I hate n+k

1998-11-30 Thread Craig Dickson

Johannes Waldmann wrote:

 i'd like to support Ralf's opinion: n+k patterns have advantages
 (when used in a certain manner) so it would be good to keep them.

 personal reason: right now i'm busy giving tutorials on recursive functions
 and it's really nice if you can write f(...,y+1) = ... (... y)
 instead of f(...,y) = ... (... y-1)

Why do you find this makes a significant difference? Personally, I find

f x = ... f (x - 1)

much more intuitive than

f (x + 1) = ... f x

I see no advantage in the n+k version.

Craig