Re: [Haskell-cafe] meaning of referential transparency

2013-04-06 Thread Eli Frey
Links

SO:
http://stackoverflow.com/questions/210835/what-is-referential-transparency

Reddit discussions of said SO question.

http://www.reddit.com/r/haskell/comments/x8rr6/uday_reddy_on_referential_transparency_and_fp/

http://www.reddit.com/r/haskell/comments/xgq27/uday_reddy_sharpens_up_referential_transparency/

This was a fascinating exchange and I'm glad to be reminded to revisit it
:).


On Sat, Apr 6, 2013 at 11:13 AM, Kim-Ee Yeoh k...@atamo.com wrote:

 On Sun, Apr 7, 2013 at 12:43 AM, Henning Thielemann
 lemm...@henning-thielemann.de wrote:
  Can someone enlighten me about the origin of the term "referential
  transparency"? I can look up the definition of referential transparency in
  the functional programming sense in the Haskell Wiki, and I can look up the
  meaning of "reference" and "transparency" in a dictionary, but I don't know
  why these words were chosen as the name for this defined property.

 Instead of an immaculately precise definition, may I suggest going
 about it from the practical-benefits POV? RT matters so much in
 Haskell because of the engineering leverage it gives us. Bird's Pearls
 are a good source of "Why Equational Reasoning Matters".

 -- Kim-Ee
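
Since RT keeps coming up abstractly in this thread, here is a tiny, made-up illustration of the substitution it licenses (the names `double`, `a`, `b` are invented for the example):

```haskell
-- Referential transparency: an expression may be replaced by its value
-- (and a call by its definition) without changing the program's meaning.
double :: Int -> Int
double x = x + x

a, b :: Int
a = double (3 + 1)
b = (3 + 1) + (3 + 1)  -- `double` inlined by pure substitution

main :: IO ()
main = print (a == b)
```

This substitution step is exactly the equational reasoning Bird's Pearls lean on.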

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: [Haskell-cafe] [extension]syntactic sugar for maps

2013-03-27 Thread Eli Frey
 Sorry, I forgot to explain (probably because I'm too used to it). I am
referring to a syntax for easy creation of maps. Something equivalent to
lists:

 to build a list: [ 1, 2, 3 ]
 to build a map: { 1, "one", 2, "two", 3, "three" }

 Without it I am always forced to use fromList.

This looks like something to use records for, or in any case something
where association list performance is not an issue.

If you just want to store some configuration-like structure and pass it
around, a record is great for this.  Where in other languages you would
simply leave a key null, in Haskell you can just fill it with a Nothing.
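
A minimal sketch of that record approach (the `Config` type and its fields are invented for illustration):

```haskell
-- A configuration-like structure as a record.  A field another language
-- would leave null is a Maybe here.
data Config = Config
  { host    :: String
  , port    :: Int
  , logFile :: Maybe FilePath  -- optional, like a nullable key
  } deriving Show

defaultConfig :: Config
defaultConfig = Config { host = "localhost", port = 8080, logFile = Nothing }

main :: IO ()
main = print defaultConfig
```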

Maps (hash or binary-tree) really pay off when they are filled dynamically
with massive numbers of associations.  I find that when I end up in this
scenario, I am generating my map programmatically, not writing it as a
literal.

Sometimes people even write maps simply as functions rather than as a
data structure.

 myMap char = case char of
   'a' -> 1
   'b' -> 2
   'c' -> 3

Perhaps you could describe a situation you are in where you want this,
and we could see if there is something you can already do that is
satisfying and solves your problem.
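
For reference, the `fromList` spelling under discussion, with Data.Map from the containers package, is only slightly heavier than a literal:

```haskell
import qualified Data.Map as M

-- The association-list form makes the key/value pairing explicit.
numbers :: M.Map Int String
numbers = M.fromList [(1, "one"), (2, "two"), (3, "three")]

main :: IO ()
main = print (M.lookup 2 numbers)
```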



On Wed, Mar 27, 2013 at 12:59 PM, Eli Frey eli.lee.f...@gmail.com wrote:

  http://hackage.haskell.org/trac/ghc/wiki/OverloadedLists comes to mind.

 This assumes you can turn ANY list into a thing.  Maps only make sense to
 be constructed from an association list.  If I've got a [Char], how do I
 make a map from it?


 On Wed, Mar 27, 2013 at 12:56 PM, Nicolas Trangez nico...@incubaid.comwrote:

 On Wed, 2013-03-27 at 21:30 +0200, Răzvan Rotaru wrote:
  I am terribly missing some syntactic sugar for maps (associative data
  structures) in Haskell. I find myself using them more than any other
  data
  structure, and I think there is no big deal in adding some sugar for
  this
  to the language. I could not find out whether such an extension is
  being discussed. If not, I would like to propose an extension. Any help and
  suggestions are very welcome here. Thanks.

 http://hackage.haskell.org/trac/ghc/wiki/OverloadedLists comes to mind.

 Nicolas







[Haskell-cafe] Fwd: A Thought: Backus, FP, and Brute Force Learning

2013-03-22 Thread Eli Frey
-- Forwarded message --
From: Eli Frey eli.lee.f...@gmail.com
Date: Wed, Mar 20, 2013 at 4:56 PM
Subject: Re: [Haskell-cafe] A Thought: Backus, FP, and Brute Force Learning
To: OWP owpmail...@gmail.com


I have not read Backus' paper, so I might be off the mark here.

Functional code is just as simple (if not more so) to puzzle apart and
understand as imperative code.  You might find that instead of stepping
through the process of code, you end up walking the call graph more
often.  FPers tend to break their problem into ever smaller parts before
re-assembling them, often building their own vocabulary as
they go.  Not to say this is not done in imperative languages, but it is
not so heavily encouraged and embraced.  One result of this is that you can
easily understand a piece of code in isolation, without considering its
place in some process.  It sounds as though you are not yet comfortable
with this.

So yes, you might have to learn more vocabulary to understand a piece of
functional code.  This is not because the inner workings are obfuscated,
but because there are so many more nodes in the call graph that are given
names.  You can still go and scrutinize each of those new pieces of
vocabulary by themselves and understand them without asking for the author
to come down from on high with his explanation.

Let's take iteration for example.  In some imperative languages, people
spend an awful lot of time writing iteration in terms of language
primitives.  You see a for loop.  "What is this for loop doing?" you say to
yourself.  So you step through the loop, imagining how it behaves as it
goes, and you say "Oh, this guy is walking through the array until he finds
an element that matches this predicate."  In a functional style, you would
reuse some iterating function and give it functions to use as it is
iterating.  The method of iteration is still there; it has just nested into
the call graph, and if you've never seen that function name before you've
got to go look it up.  Again, I don't mean to suggest that this isn't
happening in an imperative language, just not to the same degree.
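
That for-loop example has a one-line functional counterpart; a sketch using Data.List.find (the predicate and the lists are made up):

```haskell
import Data.List (find)

-- The imperative version walks the array until an element satisfies the
-- predicate.  Functionally, the iteration pattern is named once (`find`)
-- and we only supply the predicate.
firstBig :: [Int] -> Maybe Int
firstBig = find (> 10)

main :: IO ()
main = print (firstBig [3, 7, 12, 5])
```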

As well, there is a bit of a learning curve in seeing what a function does
when there is no state or "doing" to observe in it.  Once you get used to
it, I believe you will find it quite nice.  You have probably heard
FPers extolling the virtues of declarative code.  When there is no state
or process to describe, you end up describing what a thing is.  I for one
think this greatly increases readability.

Good Luck!
Eli


On Wed, Mar 20, 2013 at 3:59 PM, OWP owpmail...@gmail.com wrote:

 I made an error.  I meant FP to stand for Functional Programming, the
 concept not the language.

 On Wed, Mar 20, 2013 at 6:54 PM, OWP owpmail...@gmail.com wrote:

 This thought isn't really related to Haskell specifically but it's more
 towards FP ideal in general.

 I'm new to the FP world and to get me started, I began reading a few
 papers.  One paper is by John Backus, called "Can Programming Be Liberated
 from the von Neumann Style? A Functional Style and Its Algebra of
 Programs".

 While I like the premise which notes the limitation of the von Neumann
 Architecture, his solution to this problem makes me feel queasy when I read
 it.

 For me personally, one thing I enjoy about a typical procedural program
 is that it allows me to "Brute Force Learn".  This means I stare at a
 particular section of the code for a while until I figure out what it
 does.  I may not know the reasoning behind it, but I can have a pretty
 decent idea of what it does.  If I'm lucky, later on someone may tell me
 "oh, that just did a gradient of such and such matrix".  In a way, I feel
 happy I learned something highly complex without knowing I learned
 something highly complex.

 Backus seems to throw that out the window.  He introduces major new terms
 which require me to break out the math book, which then requires me to
 break out a few other books, which base things on archaic
 symbols, which then requires me to break out pen and paper to mentally
 expand what in the world that does.  It makes me feel CISCish, except
 without a definition book nearby.  It's nice if I already knew what a
 gradient of such and such matrix is, but what happens if I don't?

 For the most part, I like the idea that I have the option of Brute Force
 Learning my way towards something.  I also like the declarative aspect of
 languages such as SQL, which lets me ask the computer for things once I
 know the meaning of what I'm asking.  I like the ability to play and learn,
 but I also like the ability to declare this or that once I do learn.  From
 Backus' paper, if his world becomes a reality, it seems like I should know
 what I'm doing before I even start.  The ability to learn while coding
 seems to have disappeared.  In a way, if the von Neumann bottleneck wasn't
 there, I'm not sure programming would be as popular as it is today.

[Haskell-cafe] Fwd: A Thought: Backus, FP, and Brute Force Learning

2013-03-22 Thread Eli Frey
I always forget to reply-all :(

-- Forwarded message --
From: Eli Frey eli.lee.f...@gmail.com
Date: Thu, Mar 21, 2013 at 5:04 PM
Subject: Re: [Haskell-cafe] A Thought: Backus, FP, and Brute Force Learning
To: OWP owpmail...@gmail.com


Ah, ye olde point-free programming [1].  Yes, when you first see this style
it's a bit alarming.  I think it's valid to say you shouldn't take it too
far, but IMHO it is a good thing.

If it is any more enlightening, it is good to think of this style like
building shell pipelines.  Depending on your disposition, that might make
you more dismayed though :).

Personally I like this style because it allows me to very rapidly prototype
my ideas.  When I am fleshing some solution out I will write it nearly
entirely in this style.  As I am throwing code around and refactoring, I
will reevaluate things and name my pipes and their inputs and outputs where
it makes sense to increase legibility.

I am sure that Backus is serious about this, but have no fear.  Just
because you can write this way does not mean you have to.  You are still
free to name your inputs and outputs as much as you please; no-one is
forcing you either way.
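
For a concrete taste, Backus' Innerproduct definition discussed in this thread translates fairly directly into Haskell's point-free style. A sketch (it works on a pair of lists rather than FP's sequence of sequences):

```haskell
-- Backus's FP program
--   Def Innerproduct = (Insert +) o (ApplyToAll x) o Transpose
-- rendered as a pipeline of composed functions:
innerProduct :: Num a => ([a], [a]) -> a
innerProduct = foldr (+) 0 . map (uncurry (*)) . uncurry zip

-- The same thing with everything named, for comparison:
innerProduct' :: Num a => [a] -> [a] -> a
innerProduct' xs ys = sum [ x * y | (x, y) <- zip xs ys ]

main :: IO ()
main = print (innerProduct ([1, 2, 3], [4, 5, 6]))
```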

There is a parallel in languages that have syntactic support for OOP.

obj.dothis.dothat.andthis.andthat.andanotherthing

There is just as much debate about how far to go with method chaining, and
when to name intermediate values.

[1] https://en.wikipedia.org/wiki/Tacit_programming


On Thu, Mar 21, 2013 at 4:21 PM, OWP owpmail...@gmail.com wrote:

 Thank you for this reply.  This thought is more about Backus (he is cited
 quite a lot) and how much of his theory made it into Haskell and other
 commonly used functional programming.

 In Backus' paper (
 http://www.thocp.net/biographies/papers/backus_turingaward_lecture.pdf),
 there is a comparison between FP and Imperative, as seen in sections 5.1 &
 5.2, "Programs for Inner Product".

 When I start seeing that program described as:

 Def Innerproduct = (Insert +) o (ApplyToAll x) o Transpose

 I start getting queasy.  When he later describes how functional
 programming's main purpose is to expand on that, I get worried.  When he
 later explains how I don't need to use variables and just need to use the
 elementary substitution rule for everything, I'm wondering if he's really
 serious about this.

 From what you say, it doesn't sound as bad and I hope it isn't as I learn
 more.

 Thanks for the reply.

 On Wed, Mar 20, 2013 at 7:56 PM, Eli Frey eli.lee.f...@gmail.com wrote:

 I have not read Backus' paper, so I might be off the mark here.

 Functional code is just as simple (if not more so) to puzzle apart and
 understand as imperative code.  You might find that instead of stepping
 through the process of code, you end up walking the call graph more
 often.  FPers tend to break their problem into ever smaller parts before
 re-assembling them, often building their own vocabulary as
 they go.  Not to say this is not done in imperative languages, but it is
 not so heavily encouraged and embraced.  One result of this is that you can
 easily understand a piece of code in isolation, without considering its
 place in some process.  It sounds as though you are not yet comfortable
 with this.

 So yes, you might have to learn more vocabulary to understand a piece of
 functional code.  This is not because the inner workings are obfuscated,
 but because there are so many more nodes in the call graph that are given
 names.  You can still go and scrutinize each of those new pieces of
 vocabulary by themselves and understand them without asking for the author
 to come down from on high with his explanation.

 Let's take iteration for example.  In some imperative languages, people
 spend an awful lot of time writing iteration in terms of language
 primitives.  You see a for loop.  "What is this for loop doing?" you say to
 yourself.  So you step through the loop, imagining how it behaves as it
 goes, and you say "Oh, this guy is walking through the array until he finds
 an element that matches this predicate."  In a functional style, you would
 reuse some iterating function and give it functions to use as it is
 iterating.  The method of iteration is still there; it has just nested into
 the call graph, and if you've never seen that function name before you've
 got to go look it up.  Again, I don't mean to suggest that this isn't
 happening in an imperative language, just not to the same degree.

 As well, there is a bit of a learning curve in seeing what a function
 does when there is no state or "doing" to observe in it.  Once you get
 used to it, I believe you will find it quite nice.  You have
 probably heard FPers extolling the virtues of declarative code.  When
 there is no state or process to describe, you end up describing what a
 thing is.  I for one think this greatly increases readability.

 Good Luck!
 Eli


 On Wed, Mar 20, 2013 at 3:59 PM, OWP owpmail...@gmail.com wrote:

 I made an error.  I meant FP

[Haskell-cafe] Shake, Shelly, FilePath

2013-03-08 Thread Eli Frey
I began converting an unwieldy Makefile into a Haskell program via Shake,
hoping that I could increase both its readability and modularity.  The
modularity has increased greatly, and I have found it exhilarating how much
more I can express about my dependencies.  However, readability has
suffered.

I quickly found that heavy shell interaction became unwieldy, and I came
out with code that was much more difficult to scan with my eyes than what I
had in my Makefile.

I attempted to fix this by using the Shelly library for my shell
interactions, and was well pleased until I attempted to compile and
discovered Shelly.FilePath is NOT Prelude.FilePath.  Now I am left
sprinkling coercions all over the place and am again shaking my head at how
difficult to scan my code has become.

I have been considering writing a shim over Shelly, but the prospect makes
me uneasy.

Has anyone else walked down this path before, and if so what did you bring
away from the experience?  I find this situation such a shame, as all my
other experiences with both libraries have been quite wonderful.

- Eli


Re: [Haskell-cafe] Object Oriented programming for Functional Programmers

2012-12-30 Thread Eli Frey
I think it is always a good idea to learn languages that make
your favorite paradigm hard.  There are a lot of "aha" moments to be had
from forcing your brain to come at a problem from another angle.

As for things to watch out for.

There is a very strong duality between TypeClasses and existential
polymorphism in OO. Both require a way to dynamically look up the correct
implementation for the type you are operating upon.  In Haskell we use
Typeclasses which place this lookup table on the functions that have
existential constraints on them.

 mconcat :: Monoid m => [m] -> m
 mconcat = foldl mappend mempty

We can think of `mconcat` as having a little lookup table inside of itself:
whenever we pass it a concrete `[m]`, `mappend` gets looked up and we
get the implementation for `m`.  Typeclasses are just mappings from types
to functions.
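
That "lookup table" intuition can be made concrete by desugaring a class by hand. A sketch (GHC's real dictionary machinery differs in detail, and the names here are invented):

```haskell
-- A typeclass is, roughly, a record of functions...
data MonoidDict m = MonoidDict
  { dEmpty  :: m
  , dAppend :: m -> m -> m
  }

-- ...and a constrained function takes that record as an extra argument,
-- which the compiler normally passes for you.
mconcatD :: MonoidDict m -> [m] -> m
mconcatD d = foldr (dAppend d) (dEmpty d)

-- The "instance" for lists is just one such record.
listDict :: MonoidDict [a]
listDict = MonoidDict { dEmpty = [], dAppend = (++) }

main :: IO ()
main = print (mconcatD listDict [[1, 2], [3], [4 :: Int]])
```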

In OO on the other hand, the lookup table is attached to the
datastructure.  We can think of the Object as a mapping from function names
to functions that operate on that Object.  Python, Javascript, Ruby, and of
course Smalltalk make this quite explicit.

Aside from Object Orientation, it is probably a good idea to learn some C
for a bit too.  C is a good language to play in and try to implement more
advanced language features.  Once you realize that objects are just lookup
tables of functions bound with a data structure, you can implement your own
in C, or you can make closures as functions bundled with (some of) their
arguments, or you can implement interesting data structures, or so many
other fun things.  A good understanding of tagged unions has helped me in
many a convo with an OO head.


On Sun, Dec 30, 2012 at 11:58 AM, Daniel Díaz Casanueva 
dhelta.d...@gmail.com wrote:

 Hello, Haskell Cafe folks.

 My programming life (which has started about 3-4 years ago) has always
 been in the functional paradigm. Eventually, I had to program in Pascal and
 Prolog for my University (where I learned Haskell). I also did some PHP,
 SQL and HTML while building some web sites, languages that I taught to
 myself. I have never had any contact with JavaScript though.

 But all these languages were in my life as secondary languages, with
 Haskell my predominant preference. Haskell was the first programming
 language I learned, and subsequent languages never seemed so natural and
 worthwhile to me. In fact, every time I had to use another language, I
 created a combinator library in Haskell to write it (this was the reason
 that brought me to start the HaTeX library). Of course, this practice
 wasn't always the best approach.

 But why am I writing this to you, haskellers?

 Well, my curiosity is bringing me to learn a new general-purpose
 programming language. Haskellers are frequently comparing Object-Oriented
 languages with Haskell itself, but I have never programmed in any
 OO-language! (perhaps this is an uncommon case) I thought it could be good
 for me (as a programmer) to learn C/C++. Many interesting courses (most of
 them) use these languages, and I feel limited as a Haskell programmer. It
 looks like I have to learn imperative programming (with side effects all
 around) at some point of my programming life.

 So my questions for you all are:

 * Is it really worthwhile for me to learn OO-programming?

 * If so, where should I start? There are plenty of "functional programming
 for OO programmers" guides, but I have never seen "OO programming for
 functional programmers".

 * Is it true that learning other programming languages leads to a better
 use of your favorite programming language?

 * Will I learn new programming strategies that I can use back in the
 Haskell world?

 Thanks in advance for your kind responses,
 Daniel Díaz.





[Haskell-cafe] Fwd: Object Oriented programming for Functional Programmers

2012-12-30 Thread Eli Frey
sorry, forgot to reply-all

-- Forwarded message --
From: Eli Frey eli.lee.f...@gmail.com
Date: Sun, Dec 30, 2012 at 1:56 PM
Subject: Re: [Haskell-cafe] Object Oriented programming for Functional
Programmers
To: Brandon Allbery allber...@gmail.com


 Except not quite... [...] It's not a built in table, it's a hidden
parameter.

I'll admit this doesn't HAVE to be the implementation.  Often the
compiler can monomorphise the types and perform the lookup at compile
time.

But what's going on here, though?

 {-# LANGUAGE ExistentialQuantification #-}
 data Showable = forall a. Show a => Showable a

 printShowable :: Showable -> IO ()
 printShowable (Showable x) = print x

 main = mapM printShowable
   [ Showable "bob", Showable (3 :: Int), Showable (Nothing :: Maybe ()) ]

If we take `mapM printShowable` and just give it an arbitrary list, it has
to look up the correct implementation of `print` as it walks down that list
at run-time.  I believe similar things motivate vtables in C++/Java.  I
don't have a strong intuition about how dynamically typed OO langs deal
with this, but I'm sure structural typing has similar issues.


On Sun, Dec 30, 2012 at 1:27 PM, Brandon Allbery allber...@gmail.comwrote:

 On Sun, Dec 30, 2012 at 3:45 PM, Eli Frey eli.lee.f...@gmail.com wrote:

  mconcat :: Monoid m => [m] -> m
  mconcat = foldl mappend mempty

 We can think of `mconcat` as having a little lookup table inside of itself:
 whenever we pass it a concrete `[m]`, `mappend` gets looked up and we
 get the implementation for `m`.  Typeclasses are just mappings from types
 to functions.


 Except not quite... the `Monoid m =>` in the signature really means "hey,
 compiler, pass me the appropriate implementation of Monoid so I can figure
 out what I'm doing with this type m".  It's not a built-in table, it's a
 hidden parameter.

 Aside from Object Orientation, it is probably a good idea to learn some C
 for a bit too.  C is a good language to play in and try to implement more
 advanced language features.  Once you realize that objects are just lookup
 tables of functions bound with a data structure, you can implement your own
 in C, or you can make closures as functions bundled with (some of) their
 arguments, or you can implement interesting data structures, or so many
 other fun things.  A good understanding of tagged unions has helped me in
 many a convo with an OO head.


 A perhaps strange suggestion in this vein:  dig up the source code for Xt,
 the old X11 Toolkit, and the Xaw widget library that is built atop it.
  (It's part of the X11 source tree, since most of the basic X11 utilities
 and xterm are based on it.)  It implements a primitive object system in C.
  Gtk+ does the same, but hides much of the implementation behind macros and
 relies on tricky casting etc. behind the scenes for performance; in Xt, the
 basic machinery is more easily visible for inspection and much easier to
 understand even if you're not all that familiar with C.  If you go this
 way, once you've figured out what Xt is doing you might go on to see the
 more advanced concepts in how Gtk+ does it.

 And once you've done this, you'll have a good idea of what Objective-C and
 C++ (minus templates) are doing under the covers.  (Mostly C++, since ObjC
 is more or less Smalltalk's OO on top of C, whereas the core concepts of
 C++ are not so very different from what Xt does.)  If you really want to
 dig in further, you might want to try to find the source to cfront, the
 original C++ implementation, which was a preprocessor for the C compiler.
 It'll be missing a lot of modern C++ features, but the core is there.

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net



[Haskell-cafe] Call for Seattle Area Haskell Users Group

2012-11-22 Thread Eli Frey
Greetings,

(x-posted from /r/haskell)

I am calling for any interested parties to come together and help me get a
SEA HUG going. My boss has offered to provide facilities and foods. I don't
think he's expecting much, maybe we can make him second guess the offer :P.

I have put together a meetup group: http://www.meetup.com/SEAHUG. I don't
know what all Seattle-area Haskellers want from this kind of thing, so I
thought our first meeting would be low key: some time to shoot the bull and
introduce ourselves, some time to help each other with our stumbling
blocks, then talking about what we would want from further meetups.

We'll need a central place for discussion and coordination, so aside from
voicing your support, it would be great if you could take further
commentary over to the meetup page. If that doesn't work for some, let me
know.

Hope to hear from y'all soon.

- Eli