Re: [Haskell-cafe] on finding abstractions ...

2010-02-14 Thread Alexander Solla


On Feb 14, 2010, at 4:38 AM, Günther Schmidt wrote:

I've got a problem; in short, my Haskell code sucks. While it does  
work, and I do manage to use higher-orderish aspects quite  
extensively to make my code more concise, it is still nowhere near  
abstract, always concrete and thus always with lots of boilerplate.


There is such a thing as too much higher-ordered-ness.  Every time  
you introduce a new higher order dispatch mechanism, you go through  
many of the same steps, and produce "slightly incompatible"  
interfaces.  You're basically defining plumbing, which you might have  
to plumb into your existing plumbing.  Boilerplate city.


Instead of looking for a higher-order solution to your problem, look  
for a normal form to express abstract terms that solve your problem.   
The higher-order solution will become "obvious" when you have the  
right normal forms (because now you know what you're quantifying  
over).  For example, algebraic data types are a natural encoding for  
tree-like structures.  They can even contain things.  Monads are an  
encoding for tree-like structures with an interface to the things the  
nodes contain.  I'm not saying you need to use monads.  I'm saying you  
should use them when they are the right normal form.
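
For instance, here is a minimal sketch of what I mean (the names are
made up for illustration): once the tree-like structure is written down
as an algebraic data type, the Monad instance is just grafting at the
leaves.

import Control.Monad (ap, liftM)

data Tree a = Leaf a | Branch (Tree a) (Tree a)

instance Functor Tree where
    fmap = liftM

instance Applicative Tree where
    pure  = Leaf
    (<*>) = ap

instance Monad Tree where
    return = pure
    Leaf a     >>= f = f a                        -- graft a subtree in place of the leaf
    Branch l r >>= f = Branch (l >>= f) (r >>= f) -- recurse structurally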


Unfortunately, boilerplate is kind of a fact of life in Haskell,  
unless you use things like TemplateHaskell or SYB.  After making all  
our nice orthogonal classes and data types in sensible normal forms,  
we need to join these classes and types, with functions and class  
instances.  Taking big types down a chain of type classes is no fun.   
But at least all that complexity is kept in one sensible place.


One thing I wish GHC would do better is deal with cyclical module  
dependencies more flexibly.  More flexibility would mean letting us  
organize our code by the normal form it embodies, as opposed to the  
modules/types it uses.  That makes code look a lot less boilerplate-ish,  
since the "irrelevant" but important bits can all go together.



Re: [Haskell-cafe] Category Theory woes

2010-02-16 Thread Alexander Solla

On Feb 16, 2010, at 9:43 AM, Gregg Reynolds wrote:

I've looked through at least a dozen.  For neophytes, the best of  
the bunch BY FAR is Goldblatt, Topoi: the categorial analysis of  
logic.  Don't be put off by the title.  He not only explains the  
stuff, but he explains the problems that motivated the invention of  
the stuff.  He doesn't cover monads, but he covers all the basics  
very clearly, so once you've got that down you can move to another  
author for monads.


He does cover monads, briefly.  They're called "triples" in this  
context, and the chapter on interpretations of intuitionistic  
logic depends on functorial/monadic techniques.  If I remember  
correctly, he uses the techniques and abstracts from them.


Re: [Haskell-cafe] Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-16 Thread Alexander Solla


On Feb 16, 2010, at 12:14 PM, Günther Schmidt wrote:


Let's say there was some clever monad ...

someMonad = do
    h1 <- add "twenty"
    h2 <- add False
    h3 <- add 16
    ...
    modify h2 True

and get a ("twenty", (True, 16)) back.  And while *in* the monad, have  
some accessors available.


Your return value will be wrapped a bit more strongly if you use  
monads and try to combine types.  Basically, build a monad with lots  
of values with no free monadic variables.  For example, in Maybe, Just  
has a free variable, whereas Nothing doesn't.  You can create values  
that are polymorphic in the monadic variable to carry any SPECIFIC  
type you want.  (Or even type class instances, if you use existential types.)


> data Accumulator value = AtomicAccumulator value
>                         | StringAccumulator String
>                         | IntAccumulator Int
>                         | ConcatAccumulators (Accumulator value) (Accumulator value)


(Accumulator String)s are (Accumulator value)s for any value.  So you  
can build things like:


> ConcatAccumulators (IntAccumulator 10) (StringAccumulator "Hi")

I would build up accessors to these values using record syntax.   
Here's a meaty example I've been working on:


-- Left, Middle, and Right are presumably type synonyms for View view,
-- defined elsewhere.
data View view = (Left view) `ConcatViews` (Right view)
               | NestViews (Left view) (Middle view) (Right view)
               | EmptyView
               | AtomicView view
               | ReturnView view
               -- View Nodes:
               | DocumentView { document_title_view :: View view
                              , document_toc        :: View view
                              , document_footer     :: View view
                              , document_header     :: View view
                              , document_contents   :: View view
                              }
               | PageView { page_title    :: View view
                          , page_contents :: View view
                          }
               | TableView { table_title_view        :: View view
                           , table_heading_view      :: View view
                           , table_row_view          :: View view
                           , table_subtotal_row_view :: View view
                           , table_total_row_view    :: View view
                           }
               | SectionView { section_title        :: View view
                             , section_heading_view :: View view
                             , section_contents     :: View view
                             }
               | SidebarView { sidebar_title    :: View view
                             , sidebar_heading  :: View view
                             , sidebar_contents :: View view
                             }
               | FieldView
               | HeadingView String
               | ListView [ View view ]   -- View of a list, not a list of views.
               | PageFooterView
               | PageHeaderView
               | PageHeadingView String
               | ParagraphView String
               | RowView
               | TableTitleView String
               | TextView String
               deriving Show

instance Monoid (View view) where
    mempty = EmptyView
    EmptyView `mappend` right     = right
    left      `mappend` EmptyView = left
    left      `mappend` right     = ConcatViews left right


data Field = Field { field_name :: String
   , field_description  :: String
   }

data Row = HeadingRow [Field]

instance Monad View where
    return = ReturnView

    (AtomicView view)        >>= f = f view
    (ConcatViews left right) >>= f = ConcatViews (left >>= f) (right >>= f)

    (DocumentView title toc footer header content) >>= f =
        DocumentView (title   >>= f)
                     (toc     >>= f)
                     (footer  >>= f)
                     (header  >>= f)
                     (content >>= f)

    (NestViews l m r) >>= f = NestViews (l >>= f) (m >>= f) (r >>= f)

    (EmptyView)          >>= f = EmptyView
    (HeadingView string) >>= f = HeadingView string
    (ReturnView view)    >>= f = f view
    (PageView t c)       >>= f = PageView (t >>= f) (c >>= f)

    (TableView title heading row subtotal total) >>= f =
        TableView (title    >>= f)
                  (heading  >>= f)
                  (row      >>= f)
                  (subtotal >>= f)
                  (total    >>= f)

    (SectionView title heading contents) >>= f =
        SectionView (title    >>= f)
                    (heading  >>= f)
                    (contents >>= f)

    (ListView views) >>= f = ListView (fmap (>>= f) views)

    (SidebarView title heading contents) >>= f =
        SidebarView (title    >>= f)
                    (heading  >>= f)
                    (contents >>= f)

(TextView   

Re: [Haskell-cafe] Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-16 Thread Alexander Solla


On Feb 16, 2010, at 12:48 PM, Alexander Solla wrote:

(Accumulator String)s are (Accumulator value)s for any value.  So  
you can build things like:




Sorry, I made a typo.  I meant "StringAccumulator String"s are  
"Accumulator value"s for any value.



Re: [Haskell-cafe] Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-16 Thread Alexander Solla


On Feb 16, 2010, at 2:11 PM, Stephen Tetley wrote:


Your monad looks equivalent to the Identity monad but over a much
bigger syntax. What advantages do you get from it being a monad,
rather than just a functor?



I replied to Stephen, but forgot to include the list.  I took the  
liberty of making some changes.


I mostly use this construct functorially.  Defining a monad instance  
can be done in O(n) lines, but an applicative functor instance needs  
O(n^2) lines, where n is the number of type constructors.  The monadic  
structure doesn't interfere with the semantics I want, so I went with  
that.


As you said, this is basically an identity monad, but it's not too  
hard to turn it into a sequencing monad.  For example, instead of  
defining (NestViews l m r) >>= f as (NestViews (l >>= f) (m >>= f) (r  
>>= f)), we can do it in terms of arbitrary constructors, as long as  
>>= induces a partial order.  This approach has some interesting  
potential.



Re: [Haskell-cafe] Re: Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-18 Thread Alexander Solla




Sorry for being a bit thick, but how would this code be used?

I'm unable to figure out the application yet. Could you give some  
examples of how you use it?



Günther


So, the type (View view) -- ignoring class instances -- is basically  
isomorphic to this (slightly simpler) type:


data View = EmptyView | TextView String | ConcatView View View | NestViews View View View | ...

instance Monoid View where ...

Now, consider the problem of "generic programming" on the simpler  
type:   you quantify over the data constructors "generically", and in  
doing so you gain "traversals" for the type.[1]  You gain the same  
things by turning View into (View view) --  a functor, a foldable  
functor, and so on.  When it comes time to "render" a format for a  
View (for example, a bit of Html from Text.XHtml.Strict), I use some  
higher order functions I'm already familiar with.  Something like


renderXHtml :: (View view) -> Html
renderXHtml (ConcatViews l r) = fold $ renderXHtml (ConcatViews l r)
renderXHtml (NestViews l m r) = fold $ renderXHtml (NestViews l m r)
renderXHtml (TextView string) = stringToHtml string
renderXHtml (PageView v_title v_heading v_header v_footer v_contents) =
    (the_title << (renderXHtml v_title)) +++  -- (We assume v_title is a TextView String)
    (body << (renderXHtml v_header)  +++
     (render_page_contents v_contents v_heading) +++
     (renderXHtml v_footer))
  where render_page_contents contents heading = undefined
        -- takes a View and uses the page's heading View, so I guess we
        -- assume v_heading is a function into a TextView

...

You could potentially use (>>=) for this, directly.

And if you were using the simpler type, you could do the same thing  
with Uniplate, for example.  It's going to construct an automorphism  
for you.
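
For concreteness, here is a rough sketch of the Functor and Foldable
instances that the (View view) version buys you, cut down to just a few
constructors (illustrative only, not the full type from my earlier
message):

import Data.Foldable (Foldable(..))
import Data.Monoid (Monoid(..))

data View view = EmptyView
               | AtomicView view
               | TextView String
               | ConcatViews (View view) (View view)

instance Functor View where
    fmap _ EmptyView         = EmptyView
    fmap f (AtomicView v)    = AtomicView (f v)
    fmap _ (TextView s)      = TextView s
    fmap f (ConcatViews l r) = ConcatViews (fmap f l) (fmap f r)

instance Foldable View where
    foldMap _ EmptyView         = mempty
    foldMap f (AtomicView v)    = f v
    foldMap _ (TextView _)      = mempty
    foldMap f (ConcatViews l r) = foldMap f l `mappend` foldMap f r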


[1]  Actually, it's "the other way around".  But the container/contained  
adjunction makes them equivalent.


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Alexander Solla


On Feb 18, 2010, at 10:19 AM, Nick Rudnick wrote:

Back to the case of open/closed, given we have an idea about sets --  
we in most cases are able to derive the concept of two disjunct sets  
facing each other ourselves, don't we? The only lore missing is just  
a Bool: Which term fits which idea? With a reliable terminology  
using «bordered/unbordered», there is no ambiguity, and we can pass  
on reading, without any additional effort.



There are sets that only partially contain their boundary.  They are  
neither open nor closed, in the usual topology.  Consider (0,1] in the  
Real number line.  It contains 1, a boundary point.  It does not  
contain 0.  It is not an open set OR a closed set in the usual  
topology for R.


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Alexander Solla


On Feb 18, 2010, at 1:28 PM, Hans Aberg wrote:

It is a powerful concept. I think of a function closure as what one  
gets when adding all an expression binds to, though I'm not sure  
that is why it is called a closure.


It's because a monadic morphism into the same type carrying around data  
is a closure operator on the type.  It's basically a direct sum of the  
"inner" type, and the "data" type.



Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Alexander Solla


On Feb 18, 2010, at 2:08 PM, Nick Rudnick wrote:

my actual posting was about rename refactoring category theory;  
closed/open was just presented as an example for suboptimal  
terminology in maths. But of course, bordered/unbordered would be  
extended by e.g. «partially bordered» and the same holds.


And my point was that your terminology was suboptimal for just the  
same reasons.  The difficulty of mathematics is hardly the funny names.


Perhaps you're not familiar with the development of Category theory.   
Hans Aberg gave a brief development.  Basically, Category theory is  
the RESULT of the refactoring you're asking about.  Category theory's  
beginnings are found in work on differential topology (where functors  
and higher order constructs took on a life of their own), and the  
unification of topology, lattice theory, and universal algebra (in  
order to ground that higher order stuff).  Distinct models and notions  
of computation were unified, using arrows and objects.


Now, you could have a legitimate gripe about current category theory  
terminology.  But I am not so sure.  We can "simplify" lots of  
things.  Morphisms can become arrows or functions.  Auto- can become  
"self-".  "Homo-" can become "same-".  Functors can become "Category  
arrows".  Does it help?  You tell me.


But if we're ever going to do anything interesting with Category  
theory, we're going to have to go into the realm of dealing with SOME  
kind of algebra.  We need examples, and the mathematically tractable  
ones have names like "group", "monoid", "ring", "field", "sigma- 
algebras", "lattices", "logics", "topologies", "geometries".  They are  
arbitrary names, grounded in history.  Any other choice is just as  
arbitrary, if not more so.  The closest thing algebras have to a  
unique name is their signature -- basically their axiomatization -- or  
a long descriptive name in terms of arbitrary names and adjectives  
("the Cartesian product of a Cartesian closed category and a groupoid  
with groupoid addition induced by").  The Pareto-efficiency question  
is this: is changing the name of these kinds of structures  
wholesale a win for efficiency?  The answer is "no".  Everybody would  
have to learn the new, arbitrary names, instead of just some people  
having to learn the old arbitrary names.


Let's compare this to the "monad fallacy".  It is said every beginner  
Haskell programmer writes a monad tutorial, and often falls into the  
"monad fallacy" of thinking that there is only one interpretation for  
monadism.  Monads are relatively straightforward.  Their power comes  
from the fact that many different kinds of things are "monadic" --  
sequencing, state, function application.  What name should we use for  
monads instead?  Which interpretation must we favor, despite the fact  
that others will find it counter-intuitive?  Or should we choose to  
not favor one, and just pick a new arbitrary name?


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Alexander Solla


On Feb 18, 2010, at 4:49 PM, Nick Rudnick wrote:


Why does the opposite work well for computing science?


Does it?  I remember a peer trying to convince me to use "the factory  
pattern" in a language that supports functors.  I told him I would do  
my task my way, and he could change it later if he wanted.  He told me  
an hour later he tried a trivial implementation, and found that the  
source was twice as long as my REAL implementation, split across  
multiple files in an unattractive way, all while losing conceptual  
clarity.  He immediately switched to using functors too.  He didn't  
even know he wanted a functor, because the name "factory" clouded his  
interpretation.


Software development is full of people inventing creative new ways to  
use the wrong tool for the job.



Re: [Haskell-cafe] Restricted categories

2010-02-20 Thread Alexander Solla


On Feb 20, 2010, at 10:29 AM, Sjoerd Visscher wrote:

I don't understand this, as I thought the constraints the error is  
complaining about is just what withConstraintsOf g should provide.
I guess there's something about the Suitable trick that I don't  
understand, or possibly the type families Fst and Snd are biting me.


Who can help me out? Thanks.


You specifically ask withConstraintsOf to accept only Suitable2's when  
you say


withConstraintsOf :: Suitable2 m a b => m a b -> (Constraints m a b -> k) -> k


But you aren't saying that the argument of withConstraintsOf IS a  
Suitable2, when you say:



instance (RCategory c1, RCategory c2) => RCategory (c1 :***: c2) where
id = withResConstraints $ \ProdConstraints -> id :***: id
-- f@(f1 :***: f2) . g@(g1 :***: g2) =
--   withResConstraints $ \ProdConstraints ->
--   withConstraintsOf f $ \ProdConstraints ->
--   withConstraintsOf g $ \ProdConstraints ->
--   (f1 . g1) :***: (f2 . g2)



You need to make a type class instance for Suitable2 for whatever type  
\ProdConstraints -> ... represents.


For comparison, try:

> data Unordered = A | B | C

> A <= B
   No instance for (Ord Unordered)
 arising from a use of `<=' at :1:0-5
   Possible fix: add an instance declaration for (Ord Unordered)
   In the expression: A <= B
   In the definition of `it': it = A <= B

and now:

> data Ordered = A | B | C deriving (Show, Eq, Ord) -- (easier than writing an instance for Eq, Ord)

> A <= B
True


Re: [Haskell-cafe] Re: Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-20 Thread Alexander Solla


On Feb 20, 2010, at 10:25 AM, Heinrich Apfelmus wrote:


But isn't the line

   renderXHtml (ConcatView l r) = fold $ renderXHtml (ConcatViews l r)

a type error? I'm assuming

   Data.Foldable.fold :: (Foldable m, Monoid t) => m t -> t

being applied to the result type of  renderXHtml  which is  Html  and
not of the form  m t .


Yup, that's a type error.  I mean to fold the View (in this case a  
ConcatView) into a monoid.  I think I meant


> foldMap renderXHtml (ConcatViews l r)


Your intention reminds me of the use of type variables to get
functor-like behavior for free, like in

   data RGB' a = RGB a a a   -- auxiliary type constructor
   type RGB = RGB' Int   -- what we're interested in

but I don't quite see what you're doing with the free monad here,  
Alexander?


As you noticed, I am seeking that functorial behavior in order to gain  
some genericity.  bind and return do encode some logic about the  
nature of monadic adjunction, which I am relying on theoretically.  I  
could have used a Functor instance just as easily, but I would have  
lost my "intention" of defining co-equalizers implicitly.
(http://en.wikipedia.org/wiki/Beck%27s_monadicity_theorem)


Also, it was easier to write a monad instance than an Applicative  
instance, at least on my first try. ;-)



Re: [Haskell-cafe] Restricted categories

2010-02-20 Thread Alexander Solla


On Feb 20, 2010, at 6:32 PM, Nick Rudnick wrote:


A simple example:
class Show el => ExceptionNote el where
  comment :: Show exception => exception -> el -> String

instance ExceptionNote Int where
  comment exception refId = show refId ++ ": " ++ show exception

Here you don't need to constrain «exception» to be of «Show» at the  
instance declaration. So it does not appear wrong for Sjoerd to  
expect f and g to already be of Suitable2...


  There really are instances missing.  The context is coming from here:

The instance for Suitable2 restricts objects to IsProduct, and  
requires the Fst and Snd parts of the objects to be suitable in c1  
resp. c2:


instance (IsProduct a, IsProduct b, Suitable2 c1 (Fst a) (Fst b), Suitable2 c2 (Snd a) (Snd b))
      => Suitable2 (c1 :***: c2) a b where
  data Constraints (c1 :***: c2) a b =
      (IsProduct a, IsProduct b, Suitable2 c1 (Fst a) (Fst b), Suitable2 c2 (Snd a) (Snd b))
      => ProdConstraints

  constraints _ = ProdConstraints



Given types A and B, the product A x B has a type Fst which is A, and  
a type Snd which is B -- by construction:



class IsProduct p where
 type Fst p :: *
 type Snd p :: *
 fst :: p -> Fst p
 snd :: p -> Snd p



So, if we have two products named a and b, with a = A + B and b = C + D  
(letting the first type in the addition be Fst, and the latter be Snd  
in each), then a x b has four "components": (A x C) + (B x C) + (A x D) + (B x D).

  a x b = (A | B) x (C | D)
        = (Fst a | Snd a) x (Fst b | Snd b)
        = (Fst a, Fst b) + (Snd a, Fst b) + (Fst a, Snd b) + (Snd a, Snd b)


This really comes down to the semantics of Suitable2, but I think  
Sjoerd made a logic mistake, and is trying to make a product out of a  
pair of products, instead of a pair of categories.


Re: [Haskell-cafe] Re: Heterogeneous Data Structures - Nested Pairs and functional references

2010-02-25 Thread Alexander Solla


On Feb 24, 2010, at 10:15 AM, Heinrich Apfelmus wrote:

Namely, let's assume we are already given a "magic" type  
constructor  |-
(so magic that it's not even legal Haskell syntax) with the property  
that


   A |- B

somehow denotes an expression of type  B  with free variables of  
type  A .  In other words, the result of type  B  is somehow allowed to  
depend on values of type  A .



In the context of this post, when I say "functor", I don't mean a  
Functor instance.  I mean the mathematical object which maps between  
categories.


As you noted, functions satisfy your "magic".  A function from a type  
A to B is a functor from A to B from a different point of view.  (One  
is "in" some kind of algebra in A and B, and the other is in the  
category of small categories).  But you lose something important by  
using functions.


Consider the parallel type specification of (my) Views with types,  
guided by your example code:


concat_views :: a -> v -> v -> (a, (v, v))  -- v's are meant to be the result of calling a *_view function
nest_views   :: a -> v -> v -> v -> (a, (v, v, v))
page_view    :: a -> v -> v -> v -> v -> (a, [v])
text_view    :: String -> v -> (String, v)

As you can see from the types, these are functions that take values  
and "wrap them up".  These functions trivially induce functors of the  
form View a :: v -> (a, [v])  (let's call them lists for the purposes  
of "form" since there can be any number of v's).


What are we gaining?  What are we losing?  My  Functor-based  
implementation had a uniform interface to all the View's innards,  
which we have lost.  And if we want to turn these functions into an  
algebra, we need to define a fair amount of plumbing.  If you take the  
time to do that, you'll see that the implementation encodes a  
traversal for each of the result types: an fmap equivalent for this  
implementation.


In short, before you can do anything with your construct, you will  
need to normalize the return result.  You can do that by reifying them:


> data View view = EmptyView | TextView String | ConcatViews (View view) (View view) | ...


or by doing algebra on things of the form (a, [v]).  Notice, also,  
that the View view data constructors are (basically) functions of the  
form [View] -> (a, [View]) or a -> (a, [View]) for some a.  (The  
tricky bit is the "some a" part.  Consider why EmptyView and (TextView  
String) are (View view)s no matter what type view is).  The parallel  
for EmptyView is:


> empty_view :: a -> v -> (a, v)
> empty_view a v = (a, ())  -- ?  I guess that works.  Dummy arguments for empty things.  Ikky.


My example code had some problems, but I really think it's a superior  
solution for the problem of making reusable renderable fragments.   
Indeed, this post described how I refactored your approach into mine,  
when I wrote my View code in the first place.  Then again, I also got  
tired of wrestling with Data.Foldable and moved on to using a plain  
initial algebra and Uniplate.



Re: [Haskell-cafe] Re: [Haskell] Recursive definition of fibonacci with Data.Vector

2010-03-07 Thread Alexander Solla


On Mar 7, 2010, at 12:56 PM, Don Stewart wrote:


In fact, infinite vectors make no sense, as far as I can tell -- these
are fundamentally bounded structures.


Fourier analysis?  Functional analysis?  Hamel bases in Real  
analysis?  There are lots of infinite dimensional vector spaces out  
there.



GHC even optimizes it to:

   fib = fib



Sounds like an implementation bug, not an infinite dimensional vector  
space bug.  My guess is that strictness is getting in the way, and  
forcing what would be a lazy call to fib in the corresponding list  
code -- fib = 0 : 1 : (zipWith (+) fib (tail fib)) -- into a strict one.
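
For reference, that list version (which does work, because (:) is lazy
in its tail):

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take 10 fibs  ==>  [0,1,1,2,3,5,8,13,21,34]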


In fact, I'm pretty sure that's what the problem is:

data Vector a = Vector {-# UNPACK #-} !Int
   {-# UNPACK #-} !Int
   {-# UNPACK #-} !(Array a)

The !'s mean "strict" right?


Re: [Haskell-cafe] Re: [Haskell] Recursive definition of fibonacci with Data.Vector

2010-03-07 Thread Alexander Solla


On Mar 7, 2010, at 5:22 PM, Don Stewart wrote:

Sorry for the overloading, I mean 'vector' in the sense of  
Data.Vector.
Being strict in the length, its unclear  to me that you can do much  
with

infinite ones :-)


Yeah, fair enough.  I studied mathematics, not Haskell's Data.*  
hierarchy.  The potential for confusion is pretty strong, especially  
since Haskell uses other mathematical terms (functor, monad, etc) with  
the mathematical meaning in mind.


Then again, Haskell type classes are not proper classes -- they're  
constructible sets of constructible types, at best.  Integrals are  
integer-like, and Haskell derivatives are programming languages like  
Agda. ;-)



Re: [Haskell-cafe] how to listen on a specific IP using the network library

2010-03-16 Thread Alexander Solla


On Mar 15, 2010, at 12:09 PM, Jeremy Shaw wrote:

In happstack we use a really horrible trick involving template  
haskell where we see if the  SockAddrInet6 constructor exists at  
compile time and conditionally compile different versions of the  
code that way. But it is really ugly.


Maybe a simpler code generator would be a better fit, since this task  
isn't really parametrized over types?  I'm not familiar with cabal  
really -- can you drop down to bash?  Can you run an arbitrary  
program?  Or at least one in the cwd?  Or at the very least, one you  
just built?  If so, a simple bash script (or Haskell program) can emit  
valid Haskell to compile.  It can (potentially) be as simple as  
concatenating your IPv6 instances/data to the end of the "default"  
IPv4 code.
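
A minimal sketch of the "emit valid Haskell" idea (the file names here
are made up):

main :: IO ()
main = do
    base <- readFile "IPv4Base.hs.in"    -- the "default" IPv4-only code
    v6   <- readFile "IPv6Extras.hs.in"  -- extra instances/data for IPv6
    -- a real script would decide here whether the platform supports IPv6
    writeFile "Generated.hs" (base ++ "\n" ++ v6)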



Re: [Haskell-cafe] How to improve its performance ?

2010-03-17 Thread Alexander Solla


On Mar 17, 2010, at 6:14 PM, Daniel Fischer wrote:


I found it surprisingly not-slow (code compiled with -O2, as usual).
There are two points where you waste time.



I found one big point where time is wasted:  in computing the powerset  
of a list.  He's making 2^n elements, and then iterating through them  
all and filtering, but only needs n^2 or n `choose` 2 of them  
(depending on the semantics for his "groups").


The answer is to do something like:

allPairs list = [(x,y) | x <- list, y <- list]

to get it done in n^2 time.
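
If the intended semantics is each unordered pair of distinct elements
(the n `choose` 2 reading), a sketch:

choose2 :: [a] -> [(a, a)]
choose2 []     = []
choose2 (x:xs) = [ (x, y) | y <- xs ] ++ choose2 xs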


Re: [Haskell-cafe] How to improve its performance ?

2010-03-17 Thread Alexander Solla


On Mar 17, 2010, at 8:33 PM, zaxis wrote:



`allPairs list = [(x,y) | x <- list, y <- list]  ` is not what  
`combination`

does !

let allPairs list = [(x,y) | x <- list, y <- list]
allPairs [1,2,3]

[(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)]


Yeah, I know that.  I said so specifically.  combination computes the  
power set of a list.  You said your goal was to compute a set of "two  
groups".  You don't need the power set in order to compute a set of  
pairs.  Moreover, computing the power set is a slow operation.   
Indeed, it is the source of your slowness.



Re: [Haskell-cafe] Abstraction in data types

2010-03-17 Thread Alexander Solla

I wrote this to Darrin, but didn't CC cafe:

On Mar 17, 2010, at 9:20 PM, Darrin Chandler wrote:


type Cartesian_coord = Float

type Latitude  = Float
type Longitude = Float

data Point  = Cartesian (Cartesian_coord, Cartesian_coord)
| Spherical (Latitude, Longitude)

type Center = Point
type Radius = Float

data Shape  = Circle Center Radius
| Polygon [Point]

This obviously stinks since a Polygon could contain mixed Cartesian  
and

Spherical points. Polygon needs to be one or the other, but not mixed.


My suggestion would be to use an alternate representation of  
"spherical" points in terms of polar coordinates, and then to  
normalize and mix at will:


type Theta  = Float
type Radius = Float

data Point = Cartesian (Cartesian_coord, Cartesian_coord)
           | Polar (Theta, Radius)

normalize_point :: Point -> Point
normalize_point (Cartesian (x, y)) = Cartesian (x, y)
normalize_point (Polar (t, r))     = Cartesian (x, y)
  where x = r * cos t
        y = r * sin t


It really depends on what you want to do with your points.  If you  
want to do linear algebra, you might want your points to depend on a  
basis, for example.  But your "spherical" points don't really form a  
basis in three-space, or even over all of two-space.


Re: [Haskell-cafe] Abstraction in data types

2010-03-17 Thread Alexander Solla


On Mar 17, 2010, at 9:56 PM, Alexander Solla wrote:

But your "spherical" points don't really form a basis in  
three-space, or even over all of two-space.


I'll take this back.  Latitude and longitude are enough to "form a  
basis" on R^2, by taking a basis for the surface of the sphere in  
terms of latitude and longitude and projecting it stereographically.   
So if you wanted to use the normalization idea, you could use the  
stereographic projection formulas to turn a spherical point into a  
Cartesian point.


http://en.wikipedia.org/wiki/Stereographic_projection


Re: [Haskell-cafe] Abstraction in data types

2010-03-17 Thread Alexander Solla


On Mar 17, 2010, at 10:27 PM, Darrin Chandler wrote:

Let's go back to your original code:

data Point  = Cartesian (Cartesian_coord, Cartesian_coord)
| Spherical (Latitude, Longitude)

type Center = Point
type Radius = Float

data Shape  = Circle Center Radius
| Polygon [Point]


normalize_shape :: Shape -> Shape
normalize_shape (Circle c r) = Circle c r
normalize_shape (Polygon ps) = Polygon $ fmap normalize_point ps
  where normalize_point = ...  -- something appropriate for the function

In fact, you could lift this into a higher order function, that takes  
a normalize_point function as an argument:


normalize_shape :: (Point -> Point) -> Shape -> Shape
normalize_shape f (Circle c r) = Circle (f c) r
normalize_shape f (Polygon ps) = Polygon $ fmap f ps

Now, I'm not suggesting that you should always normalize shapes, as I  
had with normalize_point before.  But this combinator captures some  
nice, generic logic. For example, you can do stuff like:


cartesian_shape :: Shape -> Shape
cartesian_shape = normalize_shape cartesian_point where ...

normalize_shape is the sort of function you would use while defining a  
function, and possibly provide function specific behavior in the  
function's where clause.


double_shape :: Shape -> Shape
double_shape (Circle c r)  = Circle c (2 * r)
double_shape (Polygon ps)  = normalize_shape (double_point . cartesian_point) (Polygon ps)
  where double_point (Cartesian (x, y)) = Cartesian (sqrt 2 * x, sqrt 2 * y)


Re: [Haskell-cafe] Re: instance Eq (a -> b)

2010-04-14 Thread Alexander Solla


On Apr 14, 2010, at 12:16 PM, Ashley Yakeley wrote:

They are distinct Haskell functions, but they represent the same  
moral function.


If you're willing to accept that distinct functions can represent the  
same "moral function", you should be willing to accept that different  
"bottoms" represent the same "moral value".  You're quantifying over  
equivalence classes either way.  And one of them is much simpler  
conceptually.



[Haskell-cafe] Re: instance Eq (a -> b)

2010-04-14 Thread Alexander Solla


On Apr 14, 2010, at 1:24 PM, Ashley Yakeley wrote:

Bottoms should not be considered values. They are failures to  
calculate values, because your calculation would never terminate (or  
similar condition).


And yet you are trying to recover the semantics of comparing bottoms. 
 


Re: [Haskell-cafe] Re: instance Eq (a -> b)

2010-04-14 Thread Alexander Solla


On Apr 14, 2010, at 5:10 PM, Ashley Yakeley wrote:


Worse, this rules out values of types that are not Eq.


In principle, every type is an instance of Eq, because every type  
satisfies the identity function.  Unfortunately, you can't DERIVE  
instances in general.  As you are finding out...  On the other hand,  
if you're not comparing things by equality, it hardly matters that you  
haven't defined the function (==) :: (Eq a) => a -> a -> Bool for  
whatever your a is.


Put it another way:  the existence of the identity function defines --  
conceptually, not in code -- instances for Eq.  In particular, note  
that the extension of the identity function is a set of the form  
(value, value) for EVERY value in the type.  A proof that (id x) is x  
is a proof that x = x.



Re: [Haskell-cafe] Re: instance Eq (a -> b)

2010-04-15 Thread Alexander Solla


On Apr 15, 2010, at 12:53 AM, rocon...@theorem.ca wrote:

I'd call them disrespectful functions, or maybe nowadays I might  
call them

improper functions.  The "good" functions are respectful functions or
proper functions.


There's no need to put these into a different class.  The IEEE defined  
this behavior in 1985, in order to help with rounding error.  Floats  
and doubles are NOT a field, let alone an ordered field.  0.0 and -0.0  
are distinct values by design for floats and doubles, but one and the  
same value for integers, exact  
computable reals, etc.  The problem isn't the functions, or the Eq  
instance.  It's the semantics of the underlying data type -- or  
equivalently, expecting that floats and doubles form an ordered field.
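
A quick GHCi illustration of the point, for Double:

Prelude> (0/0 :: Double) == (0/0)
False
Prelude> isNegativeZero (-0.0 :: Double)
True
Prelude> (-0.0 :: Double) == 0.0
True

NaN is not even equal to itself, and -0.0 is a distinct value which (==)
nonetheless identifies with 0.0.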



Re: [Haskell-cafe] Bulk Synchronous Parallel

2010-04-20 Thread Alexander Solla


On Apr 20, 2010, at 11:05 AM, Jason Dusek wrote:

Thanks for the link;  my ultimate interest, though, is in an  
architecture
that could scale to multiple machines rather than multiple cores  
with shared
memory on a single machine.  Has there been any interest and/or  
progress in

making DPH run on multiple machines and other NUMA architectures?


 I wonder what it would take to do this.


Erlang.  ;0)


[Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-04-22 Thread Alexander Solla

Consider the following bash session:

[ a...@kizaru:~/ ]$ which trhsx
/home/ajs/.cabal/bin/trhsx

[ a...@kizaru:~/ ]$ trhsx
Usage: trhsx  []

[ a...@kizaru:~/ ]$ cabal install hsp
Resolving dependencies...
Configuring hsp-0.4.5...
Preprocessing library hsp-0.4.5...
Building hsp-0.4.5...
ghc: could not execute: trhsx
cabal: Error: some packages failed to install:
hsp-0.4.5 failed during the building phase. The exception was:
ExitFailure 1

Does anybody have any suggestions?  I'm trying to install Happstack on  
Arch Linux (GHC 6.12.1).   ~/.cabal/bin/ is in my path and trhsx runs.


I've had this problem before, and I think I ran cabal as root for the  
HSP package.  That's not working this time.  I get the same failure.



Re: [Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-04-26 Thread Alexander Solla


On Apr 23, 2010, at 2:59 PM, Thomas Hartman wrote:


So, you might need to
-- upgrade hsx
-- make sure that the upgraded trhsx executable is the one being
executed by cabal install hsx (maybe deleting/temporarily moving other
trhsx exes)


Unfortunately, this suggestion didn't work out for me.  I read through  
all the relevant GHC documentation I could find -- I didn't see a way  
to set a specific path for GHC to search through for binaries.   
Whether I moved trhsx or not, the error was the same.  But it's the  
only trhsx on my system.


I've upgraded through every version of hsx that is compatible with  
Happstack, and tried every hsp that is compatible with Happstack.   
I've built Cabal-install, removed it, used the Arch package; tried  
global/user installs for Happstack;  did a darcs get on the official  
Happstack repo, and tried to build using its build script, on a fresh  
GHC install; and other stuff that I guess didn't make much sense.


It always broke in the same place:

[ a...@kizaru:~/ ]$ cabal install hsp-0.4.5
Resolving dependencies...
Configuring hsp-0.4.5...
Preprocessing library hsp-0.4.5...
Building hsp-0.4.5...
ghc: could not execute: trhsx
cabal: Error: some packages failed to install:
hsp-0.4.5 failed during the building phase. The exception was:
ExitFailure 1

The strange thing about it (that I noticed) is that GHC reports  
failing during the building phase instead of the preprocessing phase.   
Maybe this is just an artifact of the logging system, but it seems  
like GHC is doing something strange.  If I increase the verbosity,  
(near the end) I get:


*** Haskell pre-processor:
trhsx src/HSP/HJScript.hs /tmp/ghc6345_0/ghc6345_6.hscpp /tmp/ghc6345_0/ghc6345_0.hspp

*** Deleting temp files:
Deleting: /tmp/ghc6345_0/ghc6345_0.hspp /tmp/ghc6345_0/ghc6345_6.hscpp /tmp/ghc6345_0/ghc6345_5.hscpp /tmp/ghc6345_0/ghc6345_4.hscpp /tmp/ghc6345_0/ghc6345_3.hscpp /tmp/ghc6345_0/ghc6345_2.hscpp /tmp/ghc6345_0/ghc6345_1.hscpp /tmp/ghc6345_0/ghc6345_0.hscpp

Warning: deleting non-existent /tmp/ghc6345_0/ghc6345_0.hspp
*** Deleting temp dirs:
Deleting: /tmp/ghc6345_0
ghc: could not execute: trhsx
/usr/bin/ghc returned ExitFailure 1
cabal: Error: some packages failed to install:
hsp-0.4.5 failed during the building phase. The exception was:
ExitFailure 1




Re: [Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-04-26 Thread Alexander Solla


On Apr 26, 2010, at 12:30 PM, Jeremy Shaw wrote:



Does trying to install hsp-0.5.1 work any better?


I hadn't tried it, since it forces hsx-0.7 to install.  But I gave it  
a shot, and it fails the same way:


[ a...@kizaru:~/ ]$ cabal install hsp-0.5.1
Resolving dependencies...
Configuring hsp-0.5.1...
Preprocessing library hsp-0.5.1...
Building hsp-0.5.1...
ghc: could not execute: trhsx
cabal: Error: some packages failed to install:
hsp-0.5.1 failed during the building phase. The exception was:
ExitFailure 1



Re: [Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-04-26 Thread Alexander Solla


On Apr 26, 2010, at 1:23 PM, Gregory Collins wrote:


Is "$HOME/.cabal/bin" on your $PATH?



Argh.

I had "~/.cabal/bin" in my $PATH instead of "$HOME/.cabal/bin".  I'm  
still not sure what the semantic difference is in this use case, but  
one ($HOME) works and the other (~/) doesn't.


Thanks everybody


Re: [Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-05-01 Thread Alexander Solla


On Apr 30, 2010, at 11:28 PM, Warren Harris wrote:


$ cabal install happstack
Resolving dependencies...
cabal: cannot configure HJScript-0.5.0. It requires hsx >=0.7.0
For the dependency on hsx >=0.7.0 there are these packages:  
hsx-0.7.0. However

none of them are available.
hsx-0.7.0 was excluded because happstack-0.4.1 requires hsx >=0.5.5  
&& <0.6



I think that if you run this, you will satisfy all the dependencies:

cabal install "hjscript<0.5.0" happstack




Re: [Haskell-cafe] Build problems (hsp, trhsx, ultimately Happstack)

2010-05-01 Thread Alexander Solla


On May 1, 2010, at 10:54 AM, Warren Harris wrote:


I think I'll have to wait for Jeremy's update.


On the plus side, the comments/source in the tutorial are pretty good  
to follow the source along as an example, even if you don't compile  
and run them.  I didn't bother installing happstack-tutorial, I just  
read it and picked and chose what I liked.  If you do install it,  
you'll just be running the same site/content as http://tutorial.happstack.com/





Re: [Haskell-cafe] ANNOUNCE: happstack 0.5.0

2010-05-03 Thread Alexander Solla


On May 3, 2010, at 10:57 AM, Jeremy Shaw wrote:


- hide IxSet constructor. use ixSet instead.
- improved efficiency of gteTLE, getGTE, and getRange
- get rid of Dynamic, just use Data.Typeable (internal change)
- added deleteIx
- Eq and Ord instances for IxSet
- removed a bunch of cruft
- greatly improved documentation
- added stats function
- Protect user from using unindexed keys in searches in IxSet
- Runtime safeguard for badly formed inferIxSet indexes
- Fixed IxSet Default instance
- More detailed error messages in IxSet


Quite nice!  Good job guys, I'll be upgrading and using these features  
today (especially the Eq instance for IxSet)


-Alex


Re: [Haskell-cafe] forall (What does it do)

2010-05-06 Thread Alexander Solla


On May 5, 2010, at 9:52 PM, John Creighton wrote:


I've seen forall used in a few places related to Haskell. I know there
is a type extension called "explicit forall" but, by the way it is
documented in some places, the documentation makes it sound like it
does nothing useful.

However on Page 27 of Haskell's overlooked object system:



We define an existential envelope for shape data.
data OpaqueShape = forall x. Shape x => HideShape x



...
(presumably:)
class Shape x where area :: x -> Float ... etc...


It seems that forall can be used as a form of up casting.


Yes, but you can do this without the type class magic, and have the  
same degree of safety.


> data Shape = Square Int | Rectangle Int Int | Circle Float Float | etc...

> data OpaqueShape = HideShape Shape
> area :: OpaqueShape -> Float
> area = ...

These constructs are structurally equivalent.  The extra layer of  
abstraction doesn't really add anything in either case.  HideShape (as  
a function) is an isomorphism onto Shapes in both cases.
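
A side-by-side sketch of the two encodings, with made-up names, just to
make the structural equivalence concrete:

{-# LANGUAGE ExistentialQuantification #-}

-- Existential envelope, in the style of the paper:
class HasArea x where
    areaOf :: x -> Float

data OpaqueShape' = forall x. HasArea x => HideShape' x

opaqueArea :: OpaqueShape' -> Float
opaqueArea (HideShape' x) = areaOf x

-- Plain sum type, no type class machinery:
data Shape = Square Float | Rectangle Float Float | Circle Float

area :: Shape -> Float
area (Square s)      = s * s
area (Rectangle w h) = w * h
area (Circle r)      = pi * r * r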



Re: [Haskell-cafe] Re: GADTs and Scrap your Boilerplate

2010-05-18 Thread Alexander Solla


On May 18, 2010, at 3:27 PM, John Creighton wrote:


I looked again at the paper (page 27):
Haskell's Overlooked object system.
http://homepages.cwi.nl/~ralf/OOHaskell/paper.pdf


Is there any particular reason why you like that paper so much?   
Object orientation is nice, when you're dealing with "object-like"  
normal forms (that's actually quite rare, for most models of OO.  The  
actor model is the only one I can think of off-hand with a "natural"  
interpretation).  Otherwise, it's just an extra, unnecessary layer of  
abstraction.




Re: [Haskell-cafe] Re: Proof question -- (==) over Bool

2010-05-22 Thread Alexander Solla


On May 22, 2010, at 1:32 AM, Jon Fairbairn wrote:


Since Bool is a type, and all Haskell types include ⊥, you need
to add conditions in your proofs to exclude it.


Not really.  Bottom isn't a value, so much as an expression for  
computations that don't refer to "real" values.  It's close enough to  
be treated as a value in many contexts, but this isn't one of them.


Proof by pattern matching (i.e., proof by truth table) is sufficient  
to ensure that bottom (as a value) isn't included.  After all, the  
Bool type is enumerable.  At least in principle.


So perhaps the constructive Haskell proof would go something like:

-- Claim to prove
transitivity :: Bool -> Bool -> Bool -> Bool
transitivity x y z = if (x == y) && (y == z) then x == z else True

-- "The" proof
unifier :: Bool
unifier = all (True ==) [ transitivity x y z | x <- [ True, False ]
                                             , y <- [ True, False ]
                                             , z <- [ True, False ] ]

This includes some syntactic sugar R J might not be "entitled" to, but  
the intent is pretty clear.  We are programmatically validating that  
every assignment of truth values to the sentence "if (x == y) && (y ==  
z) then x == z" is true.  (The theorem is "vacuously true" for  
assignments where the antecedent of the conditional is false.)


Re: [Haskell-cafe] Re: Proof question -- (==) over Bool

2010-05-23 Thread Alexander Solla


On May 23, 2010, at 1:35 AM, Jon Fairbairn wrote:


It seems to me relevant here, because one of the uses to which
one might put the symmetry rule is to replace an expression “e1
== e2” with “e2 == e1”, which can turn a programme that
terminates into a programme that does not.


I don't see how that can be (but if you have a counter example, please  
show us).  Even if we extend == to apply to equivalence classes of  
bottom values, we would have to evaluate both e1 and e2 to determine  
the value of e1 == e2 or e2 == e1.


Prelude> undefined == True
*** Exception: Prelude.undefined
Prelude> True == undefined
*** Exception: Prelude.undefined
Prelude> undefined == undefined
*** Exception: Prelude.undefined

That is, if one case is exceptional, so is the other.

You can't really even quantify over bottoms in Haskell, as a  
language.  The language runtime is able to do some evaluation and  
sometimes figure out that a bottom is undefined.  Sometimes.  But the  
runtime isn't a part of the language.  The runtime is an  
implementation of the language's interpretation function.  Bottoms are  
equivalent by conceptual fiat (in other words, vacuously) since not  
even the id :: a -> a function applies to them.


Re: [Haskell-cafe] Re: Proof question -- (==) over Bool

2010-05-24 Thread Alexander Solla


On May 23, 2010, at 2:53 AM, Lennart Augustsson wrote:


BTW, the id function works fine on bottom, both from a semantic and
implementation point of view.


Yes, but only because it doesn't work at all.  Consider that calling

> id undefined

requires evaluating undefined before you can call id.  The program  
will "crash" before you ever call id.  Of course, the identity  
function "should" have produced a value that crashed in exactly the  
same way.  But we never got there.


It works in same way (==) works on bottoms.  Vacuously.  Indeed,  
functions only work on values.  There is no value of type (forall a .  
a).  Ergo, id does not work on undefined, as a function applying to a  
value.  It is happy coincidence that the behavior of the runtime can  
be made mostly compatible with our familiar semantics.  But not  
entirely.


Consider:

*Main> let super_undefined = super_undefined
*Main> undefined == super_undefined
*** Exception: Prelude.undefined
*Main> super_undefined == super_undefined
(wait forever)
*Main> id super_undefined
(wait forever)
*Main> id undefined
*** Exception: Prelude.undefined

From a mathematical perspective, super_undefined is  
Prelude.undefined.  But it takes some runtime level hackery to make  
undefined an exceptional "value".  Without it, super_undefined loops,  
despite the fact that their definitions are exactly the same  (The  
"value" defined by itself).


I did not suggest those are the wrong semantics.  Only that the study  
of their semantics doesn't belong to the study of Haskell as a  
logic/language.



Re: [Haskell-cafe] Difference between div and /

2010-06-01 Thread Alexander Solla


On Jun 1, 2010, at 12:20 PM, Aaron D. Ball wrote:


The underlying object here is a Unix file descriptor, which is just a
number.  In that sense, stdin is 0, stdout is 1, and stderr is 2, so
this would be (0 + 2) (mod 1) = 0


Every integer is 0 (mod 1).


Re: [Haskell-cafe] ANN: random-fu 0.1.0.0

2010-06-03 Thread Alexander Solla


On Jun 3, 2010, at 6:34 AM, mo...@deepbondi.net wrote:


Announcing the 0.1.0.0 release of the "random-fu" library for random
number generation[1].   This release hopefully stabilizes the core
interfaces (those exported from the base module "Data.Random").


Great work, I'm upgrading now.

The only feature I can suggest is the addition of a  
convolution operator to combine distributions (reified as RVar's in  
this implementation, though of course the difference between a random  
variable over a distribution and the distribution itself is rather thin).



Re: [Haskell-cafe] ANN: random-fu 0.1.0.0

2010-06-03 Thread Alexander Solla


On Jun 3, 2010, at 4:19 PM, mo...@deepbondi.net wrote:


I don't think I understand.  My familiarity with probability theory is
fairly light.  Are you referring to the fact that the PDF of the sum  
of

random variables is the convolution of their PDFs?  If so, the sum of
random variables can already be computed as "liftA2 (+) :: Num a =>  
RVar a
-> RVar a -> RVar a" since RVar is an applicative functor (or using  
liftM2

since it's also a monad).

Or perhaps you mean an operator that would take, say, 2 values of the
'Uniform' data type and return an instance of the 'Triangular' type
corresponding to the convolution of the distributions?


I think I had something like the former in mind.  I didn't realize  
liftA2/M2 would do it.When I did this last, I just wrote a monadic  
action to sample values from different RVars.  I should learn the  
higher order monad functions.
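
A tiny example of the liftA2 version, assuming random-fu's Data.Random
exports uniform with the usual lower/upper bound arguments:

import Control.Applicative (liftA2)
import Data.Random (RVar, uniform)

-- The sum of two independent dice; its distribution is the convolution
-- of the two uniform distributions.
twoDice :: RVar Int
twoDice = liftA2 (+) (uniform 1 6) (uniform 1 6)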


On the other hand, it might be kind of nice if RVar's knew which PDF  
they are over.  It's hard for me to see how that would be done with  
Haskell.



Re: [Haskell-cafe] ANN: random-fu 0.1.0.0

2010-06-03 Thread Alexander Solla


On Jun 3, 2010, at 6:40 PM, mo...@deepbondi.net wrote:


If anyone knows a way this could be done while still allowing general
functions to be mapped over RVars, I'd love to hear about it.  My
suspicion though is that it is not possible.  It would be a very  
similar
problem to computing the inverse of a function since the PDF is a  
measure
of the size of the preimage of an event in the probability space (if  
I'm

putting all those words together correctly ;)).



We don't necessarily have to compute the inverse of the distribution  
via sampling to do it.  It can be done algebraically, in terms of the  
convolution operator.  Since the types are enumerated, wouldn't  
something like... work?


-- A set and a binary operation: we have an algebra.  I like the J for 'join'.
-- With this algebra, we can use the real/complex analytical methods to
-- interpret the terms later, if we want to actually reify a Distribution
-- instance as a "Real" (Float, Double) function.

> data DistributionJ a = UniformDistribution (Uniform a)
>                      | ...
>                      | ExponentialDistribution (Exponential a)
>                      | (DistributionJ a) `Convolve` (DistributionJ a)


-- I hope I understand the semantics for the PromptT monad.
> newtype RVarT m a = RVarT { unRVarT :: PromptT (Prim, DistributionJ) m a }


I guess threading fst and snd in all the low level computations is  
inelegant, but it's a step closer.


Re: [Haskell-cafe] How to name a mapped function?

2010-06-07 Thread Alexander Solla


On Jun 6, 2010, at 11:22 AM, Martin Drautzburg wrote:

If I have a function, say "compute" whose last parameter is some  
value ...
and I create another function, which applies "compute" to a list of  
values,

how would I call this function?


computeF is my natural inclination.  F is for Functor.


Re: [Haskell-cafe] is there a way to prove the equivalence of these two implementations of (Prelude) break function?

2010-06-07 Thread Alexander Solla


On Jun 5, 2010, at 8:10 PM, Thomas Hartman wrote:


Is there a way to prove they are identical mathematically? What are
the techniques involved? Or to transform one to the other?


Typically, the easiest way to prove that two functions f and g are  
equivalent is to (1) show that their domains are the same, and (2) show  
that for every x in the domain, f x = g x.


Usually, this amounts to comparing and understanding the function  
definitions, since each definition is a proof that the function  
relates a value x to f x, its image under f.


So a proof that f = g is a proof that a "characterization" of f is the  
same as the "characterization" for g.  For exposition, I'll do the  
analysis for the Prelude function.  You might note how much like  
evaluating the function Generating the characterization for a  
"caseful" or "stateful" function requires quantifying over cases or  
states.  Exercise left to the reader.


>  prelbreak p xs = (takeWhile (not . p) xs, dropWhile (not . p) xs)

We seek a characterization of prelbreak in the following terms:
"prelbreak is a function that accepts a proposition p and a list of  
x's, and returns a ..."


To that end, note that prelbreak is a "product function".  It returns  
a pair of images of p and xs, under the functions (\p xs -> takeWhile  
(not . p) xs) and (\p xs -> dropWhile (not . p) xs).


takeWhile is kind of tricky to describe in English, since it depends  
on the intrinsic order of the xs.  But in this case it returns the  
largest prefix of xs for which no x satisfies p.


Dually, (\p xs -> dropWhile (not . p) xs) returns the list complement  
(taken in xs) of the image of (\p xs -> takeWhile (not . p) xs).  (I  
guess this is a very high level characterization, especially since I  
didn't derive it from the function's definition.  The level of detail  
you want in your proof is up to you)


So, finally, prelbreak accepts a proposition p and a list xs, and  
returns a pair whose first element is the largest prefix of xs for  
which no x satisfies p, and whose second element is the complement of  
the first, taken in xs.


To do it for mybreak, you would have to pick apart the semantics of  
evalState and brk, and recharacterize them.  That could be a little  
trickier mathematically, or straight forward.  It depends on how you  
end up quantifying over monadic actions.  Either way, it will be a LOT  
like evaluating the code.  (After all, the function definition is a  
proof that the function relates each x to f x)
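
Not a proof, but a QuickCheck property is a cheap way to check this kind of
extensional equality empirically before sitting down to prove it.  Here is a
sketch against the Prelude's break (the thread's mybreak isn't reproduced
here, so I'm comparing against the standard function instead):

import Test.QuickCheck

prelbreak :: (a -> Bool) -> [a] -> ([a], [a])
prelbreak p xs = (takeWhile (not . p) xs, dropWhile (not . p) xs)

prop_break_agree :: [Int] -> Bool
prop_break_agree xs = prelbreak even xs == break even xs

-- quickCheck prop_break_agree   ===>   +++ OK, passed 100 tests.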

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is there a way to prove the equivalence of these two implementations of (Prelude) break function?

2010-06-07 Thread Alexander Solla


On Jun 7, 2010, at 4:10 PM, Alexander Solla wrote:

For exposition, I'll do the analysis for the Prelude function.  You  
might note how much like evaluating the function


Correction:

You might note how much like evaluating the function generating the  
analysis is.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] is there a way to prove the equivalence of these two implementations of (Prelude) break function?

2010-06-08 Thread Alexander Solla


On Jun 8, 2010, at 2:38 AM, Alberto G. Corona wrote:


This isn't a manifestation of the Curry-Howard isomorphism?



Yes, basically.

If we rephrase the isomorphism as "a proof is a program, the formula  
it proves is a type for the program" (as Wikipedia states it), we can  
see the connection.  The "characterization" of prelBreak I gave is a  
"type" for prelBreak.  The type is richer than we can express in the  
Haskell type system ("prelbreak accepts a proposition p and a list xs,  
and returns a pair whose first element is the largest prefix of xs for  
which no x satisfies p, and whose second element is the complement of  
the first, taken in xs."), but we can still reason about the richer  
type mathematically, and the Curry-Howard isomorphism applies to it.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Vague: Assembly line process

2010-06-14 Thread Alexander Solla


On Jun 14, 2010, at 4:40 PM, Luke Palmer wrote:


So hang on, what is the problem?  You have described something like a
vague model, but what information are you trying to get?  Say,
perhaps, a set of possible output lists from a given input list?


I think he's trying to construct a production possibility curve for a  
given set of inputs, and (probably) the most efficient production plan  
given the costs of his inputs.  This is a problem I am going to have  
to solve programmatically, too.  I intend on solving it by finding the  
input in a given category of necessary inputs with the lowest average  
cost per unit.  I'm not concerned about "hard" cost allocation limits  
-- i.e. it's okay for the firm to buy more of an input than might be  
necessary for another output as long as the average unit cost is the  
lowest (since all the inputs will be used eventually anyway).  Hard  
allocation complicates the problem, since you have an upper bound on  
what you can spend, and you want to spend it most effectively,  
presumably with as little "waste" of available cash as possible.  Bin
packing.


Martin:  You need to find a nice "normal form" for inputs and  
outputs.  If there aren't going to be many different types  of inputs,  
you can deal with interchangeability by making a type for each type of  
input.  (For example, if you were a jeweler, you could make:)



type Price = Integer  -- In cents, or a suitable denomination
type Quantity = Integer
data Mint   = Perth | CreditSuisse | Sunshine | Kitco deriving (Eq, Show)

data Gold   = Gold Mint Quantity Price deriving (Eq, Show)
data Silver = Silver Mint Quantity Price deriving (Eq, Show)

class AverageCost commodity where average_cost_per_unit :: commodity -> Price

instance (AverageCost commodity) => Ord commodity where -- I hope this isn't an undecidable instance
    left <= right = (average_cost_per_unit left) <= (average_cost_per_unit right)


data UnitOutput = UnitOutput { product :: Product, requires :: Requirement }

data Requirement = Requirement { gold :: Gold, silver :: Silver }
data Product = GoldWatch { style :: ... } | GoldChain { style :: ... } | SilverWatch { style :: ... }


etc.  How you encode the style/product/output relationship is up to  
you (maybe products should know their necessary inputs.  Maybe not).   
If you have hard allocation limits, you can use a bin packing strategy.
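
For what it's worth, a sketch of an instance for the class above, using the
Gold type (integer division as a crude average cost -- purely illustrative):

instance AverageCost Gold where
    average_cost_per_unit (Gold _ quantity price) = price `div` max 1 quantity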
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Terminology

2010-06-15 Thread Alexander Solla


On Jun 15, 2010, at 1:42 PM, wren ng thornton wrote:

Generally these sorts of things are called homomorphisms. It's a  
terribly general term, but that's the one I've always seen to  
describe that pattern.


g is a "list homomorphism", if you want to get specific.   
Equivalently, it is the "list functor induced by f". (A functor is a  
morphism between categories.  Lists form a category, and g is a  
morphism from lists to lists, since it is a homomorphism).  I guess  
you can call the homomorphism characterization "the list homomorphism  
induced by f"


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mapping a list of functions

2010-06-17 Thread Alexander Solla


On Jun 17, 2010, at 12:02 PM, Martin Drautzburg wrote:

The standard map function applies a single function to a list of  
arguments.
But what if I want to apply a list of functions to a single  
argument. I can
of course write such a function, but I wonder if there is a standard  
way of

doing this,


Use Control.Applicative, and the <*> operator.  Something like [f, g,  
h, i, j] <*> pure x (which equals [f x, g x, h x, i x , j x])


Related to that is the problem, that the function in map may require  
more than
one argument, so I need to apply it partially to some value(s)  
first. But

what if I want to fix its second and third argument and not its first
argument? Can flip help?


I think the <$> operator can help here.  Also, don't forget plain old  
lambda abstraction to abstract over a free variable.
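
A small, self-contained sketch of both points (the names applyAll and
fixLater are made up for the example):

import Control.Applicative ((<*>), pure)

applyAll :: [a -> b] -> a -> [b]
applyAll fs x = fs <*> pure x
-- applyAll [(+ 1), (* 2), subtract 3] 10  ==  [11, 20, 7]

-- Fixing the second and third arguments with a plain lambda
-- (flip only swaps the first two):
fixLater :: (a -> b -> c -> d) -> b -> c -> a -> d
fixLater f y z = \x -> f x y z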

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is Haskell unsuitable for?

2010-06-18 Thread Alexander Solla


On Jun 17, 2010, at 9:44 PM, Michael Snoyman wrote:

While we're on the topic, does anyone else get funny looks when they  
say "monads"?


Yes, almost every time.  They seem to catch on if I say "monadic" when  
I mean "able to syntactically transformed so as to be put in a  
sequence".

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The functional-object style seems to be gaining momentum.

2010-06-18 Thread Alexander Solla


On Jun 17, 2010, at 10:47 AM, cas...@istar.ca wrote:


The functional-object style seems to be gaining momentum.
Is there any way to convert monads into objects, so that beginners  
have an easier time with the syntax and thus we can attract more  
people to the language?


I think you're a little bit confused about monads.  When you define a  
monad instance, all you are doing is defining the syntax that lets you  
chain properly typed computations together.  (>>=) does almost exactly  
what (->) does in object oriented Perl, and what (.) does in Ruby and  
Java.


Consider something like:

1.plus(10).minus(5).square.log
return 1 >>= return . (+10) >>= return . subtract 5 >>= return . square >>= return . log


They both bind the "return value" of a computation (or chain of  
computations) to the next function in the chain.  return kind of does  
the opposite.  It takes a value that can be "in" the monadic action/ 
object and gives/returns a monadic action/object that contains it.


So what are the differences between the two examples?  First, and most  
obvious, Haskell's is a bit more verbose, because it explicitly uses  
return to set the right context, whereas the ruby/python/whatever  
version does it implicitly.  On the other hand, we can abstract the
(return .) compositions away with liftM, which maps a pure function over
the monadic value (trading the >>= chain for plain composition):


liftM log . liftM square . liftM (subtract 5) . liftM (+10) $ return 1

Indeed, if we really really want, we can define "methods" for the  
monadic action, just to make it look like Python/Ruby.  (Actually,  
this is probably the preferred way to build up complex computations,  
though it is kind of silly to do for the arithmetic operators)


plusM :: (Monad m, Num n) => n -> n -> m n
plusM k = return . (+ k)
...
squareM :: (Monad m, Num n) => n -> m n
squareM = return . (^ 2)

logM :: (Monad m, Floating n) => n -> m n   -- log lives in the Floating class, not Num

logM = return . log

Suddenly, we have something that looks a LOT like the object oriented  
version:


return 1 >>= plusM 10 >>= minusM 5 >>= squareM >>= logM
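
As a quick check, the chain runs in any monad.  Specializing to Maybe, and
filling in the elided minusM as minusM k = return . subtract k (my guess at
the "..." above):

check :: Maybe Double
check = return 1 >>= plusM 10 >>= minusM 5 >>= squareM >>= logM
-- check == Just (log 36)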

In fact, this is (almost) exactly like how Perl defines its object  
methods.  The first argument of any function/procedure is assumed to  
be the object on which the method acts, and the dereferencing operator  
(->) "knows" to bind the last returned value to the first free  
variable.  The difference is that Perl objects "know" to which class  
they belong, so they can resolve compile time ambiguity at runtime (if  
the program is properly typed, which Perl can't validate at "compile  
time"...).  Since 1 is a literal in Perl, we would have to make a  
Number class and instantiate a Number object to get this to run:


Package Number;
sub new { my $class = shift; my $self = shift; bless $self, $class;  
return $self; }

sub plus arg { return $self + arg; }
sub minus arg { return $self - arg }
...

so that the computation we have been comparing turns out as:

(new Number 1) -> plus 10 -> minus 5 -> square -> log;

Indeed, we are "stuck" in the number class, and (in principle) can't  
get out without a properly typed function.   (It's also kind of  
interesting that the constructor method literally includes the snippet  
"return $self", which is almost equivalent to return 1, when you bind  
1 to (new Number).  That is, the Number constructor takes a "regular"  
value and turns it into a Number, just as return takes a value and  
wraps it in a monadic type.)


The difference is that 1 isn't "blessed" into the Number class if you  
just do


return 1 -> plus 10 -> minus 5 -> square -> log

in Perl, so the dereferencing operator (->) will fail.  You need the  
"bless $self, $class" machinery that the constructor runs.  In  
comparison, Haskell's return operator polymorphically "blesses" into  
any monadic type, in virtue of its type signature return :: (Monad m)  
=> a -> m a), so that the bind operator (>>=) will never fail.


The thing that makes object orientation special isn't the syntax.   
It's the relationship objects and classes have to one another -- and  
how classes relate to each other.  Depending on your perspective,  
there is a unique largest or smallest class to which an object  
belongs.  That relationship is a monadic adjunction.  So, in  
particular, you can make a type-unsafe object system in Haskell by  
relating a "class" (for example: a named list of methods) to a type or  
value.  If you want type safety, you need to use type arithmetic to  
implement this monadic adjunction at compile time.  Somebody else  
mentioned OO Haskell.


Finally, if you want to put this all very abstractly, but in nice  
common language, an object and its corresponding monadic action are  
both "servers".  Bind, dereferencing, etc are the interfaces that can  
make a request from "ANY" server.  So the State Monad is a server that  
serves/changes the current state.  The list monad is the server that  
serves up each element of a list

Re: [Haskell-cafe] The functional-object style seems to be gaining momentum.

2010-06-18 Thread Alexander Solla


On Jun 18, 2010, at 2:36 PM, Alexander Solla wrote:


Package Number;
sub new { my $class = shift; my $self = shift; bless $self, $class;  
return $self; }

sub plus arg { return $self + arg; }
sub minus arg { return $self - arg }


Syntax errors in my Perl (sorry, the last time I did Perl seriously,  
we used an extension for named arguments)


sub new   { my $class = shift; my $self = shift; bless $self, $class;  
return $self; }

sub plus  { my ($self, $arg) = @_; return $self + $arg; }
sub minus { my ($self, $arg) = @_; return $self - $arg; }

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] new recursive do notation (ghc 6.12.x) spoils layout

2010-06-20 Thread Alexander Solla


On Jun 21, 2010, at 10:18 AM, John Lask wrote:


 do rec
   a <- getChar
   b <- f c
   c <- g b
   putChar c
   return b

I don't particularly care that the only recursive statements are  
#2,#3 - I just want my nice neat layout back. I have just spent an  
inordinate amount of time updating code when if the parser  
recognised "do rec" as a recursive group it would have been a drop  
in replacement and taken me one tenth of the time.


Why can't we have this?


Why can't you just use let notation to deal with the recursion?  I
thought "lets" in do blocks were just a little bit of syntactic sugar  
for "regular" let expressions, which do allow mutual recursion.  I  
could be totally wrong though.  I'm thinking of something like:


do a <- getChar
   let b = c >>= return . f
       c = b >>= return . g
   c >>= putChar
   b
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] new recursive do notation (ghc 6.12.x) spoils layout

2010-06-20 Thread Alexander Solla


On Jun 20, 2010, at 6:24 PM, Alexander Solla wrote:


do a <- getChar
   let b = c >>= return . f
   let c = b >>= return . g
   c >>= putChar
   b



Correction:  by your construction, f and g are already in the Kleisli
category, so you don't need the return compositions.  I still don't
know if the construction is admissible though.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When the unknown is unknown

2010-06-23 Thread Alexander Solla


On Jun 23, 2010, at 1:50 PM, Martin Drautzburg wrote:

I said that a rhythm is a series of Moments (or Beats), each  
expressed as
fractions of a bar. But each Moment also has volume. So I could  
model rhythm
as Pairs of (Moment, Volume). However I certanly do not want to  
specify both

the Moments and the Volume, but rather compute one from the other.


How about something like:

type RhythmScheme = [(Maybe Moment, Maybe Volume)]
type Rhythm   = [(Moment, Volume)]

-- The resolution function will then be a function with type:

rhythm_from_scheme :: RhythmScheme -> Rhythm

-- Though you might want something like 
-- rhythm_from_scheme :: RhythmScheme -> IO Rhythm

-- or
-- rhythm_from_scheme :: Seed -> RhythmScheme -> Rhythm
-- so that you can get and use random numbers, for example.


I guess the point of my suggestion is to let pattern matching in  
function definitions deal with unification of constraints.  Beta  
reduction and unification are two sides of a coin.
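
Here is a minimal sketch of what the resolution function could look like,
assuming Moment and Volume are plain numeric synonyms and that fixed defaults
are an acceptable stand-in for "computing one from the other" (defaultMoment
and defaultVolume are hypothetical placeholders):

type Moment = Rational
type Volume = Double

type RhythmScheme = [(Maybe Moment, Maybe Volume)]
type Rhythm       = [(Moment, Volume)]

rhythm_from_scheme :: RhythmScheme -> Rhythm
rhythm_from_scheme = map resolve
  where
    resolve (Just m,  Just v ) = (m, v)
    resolve (Just m,  Nothing) = (m, defaultVolume)
    resolve (Nothing, Just v ) = (defaultMoment, v)
    resolve (Nothing, Nothing) = (defaultMoment, defaultVolume)

    defaultMoment = 0
    defaultVolume = 0.5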
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] When the unknown is unknown

2010-06-24 Thread Alexander Solla


On Jun 24, 2010, at 11:14 AM, Martin Drautzburg wrote:

Another question is: how much past and future knowledge do I need.  
(I believe
the fundamental property of music is that things are ordered).  In  
order to
compute Volumes from Moments I can get pretty much away without the  
past, but
computing Moments from Volumes definitely requires knowing "where I  
am",

because each new Moment has to be placed after a preceding Moment.



You can use pattern matching against lists.  For example:

process_rhythm_scheme :: RhythmScheme -> Rhythm
process_rhythm_scheme ( (Just foo,  Just bar )
  : (Just foo', Just bar')
  : (Just foo'', Just bar'')
  : rest ) = undefined

will match a RhythmScheme where the first three entries are wholly  
defined, and will bind its values to foo, bar, foo', bar', rest, and  
so on.


I think "View Patterns" could help control the complexity of these  
patterns.  http://hackage.haskell.org/trac/ghc/wiki/ViewPatterns.   
That page has quite a few nice constructs that could apply to your  
problem.  ("Both patterns", "iterator style")___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-Level Programming

2010-06-26 Thread Alexander Solla


On Jun 26, 2010, at 4:33 AM, Andrew Coppin wrote:

It's a bit like trying to learn Prolog from somebody who thinks that  
the difference between first-order and second-order logic is somehow  
"common knowledge".



A first order logic quantifies over values, and a second order logic  
quantifies over values and sets of values (i.e., types, predicates,  
etc).  The latter lets you express things like "For every property P,  
P x".  Notice that this expression "is equivalent" to Haskell's bottom  
type "a".  Indeed, Haskell is a weak second-order language.  Haskell's  
language of values, functions, and function application is a first-order
language.
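
If it helps, that last remark can be sketched in two lines of Haskell
(RankNTypes is just there so the quantifier can be written out; absurd is my
name for it):

{-# LANGUAGE RankNTypes #-}

-- The type "forall a. a" promises a value of every type at once;
-- its only inhabitant is bottom.
absurd :: forall a. a
absurd = error "no total value has this type"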

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-Level Programming

2010-06-26 Thread Alexander Solla


On Jun 26, 2010, at 11:21 AM, Andrew Coppin wrote:

A first order logic quantifies over values, and a second order  
logic quantifies over values and sets of values (i.e., types,  
predicates, etc).  The latter lets you express things like "For  
every property P, P x".  Notice that this expression "is  
equivalent" to Haskell's bottom type "a".  Indeed, Haskell is a  
weak second-order language. Haskell's language of values,  
functions, and function application is a first-order language.


I have literally no idea what you just said.

It's like, I have C. J. Date on the shelf, and the whole chapter on  
the Relational Calculus just made absolutely no sense to me because  
I don't understand the vocabulary.


Let's break it down then.  A language is a set of "terms" or  
"expressions", together with rules for putting terms together  
(resulting in "sentences", in the logic vocabulary).  A "logic" is a  
language with "rules of inference" that let you transform sets of  
sentences into new sentences.


Quantification is a tricky thing to describe.  In short, if a language  
can "quantify over" something, it means that you can have variables  
"of that kind".  So, Haskell can quantify over integers, since we can  
have variables like "x :: Integer".  More generally, Haskell's run-time
language quantifies over "values".


From this point of view, Haskell is TWO languages that interact  
nicely (or rather, a second-order language).  First, there is the  
"term-level" run-time language.  This is the stuff that gets evaluated  
when you actually run a program.  It can quantify over values and  
functions.  And we can express function application (a rule of  
inference to combine a function and a value, resulting in a new value).


Second, there is the type language, which relies on specific keywords:

data, class, instance, newtype, type, (::)

Using these keywords, we can quantify over types.  That is, the  
constructs they enable take types as variables.


However, notice that a type is "really" a collection of values.  For  
example, as the Gentle Introduction to Haskell says, we should think  
of the type Integer as being:


data Integer = 0 | 1 | -1 | 2 | -2 | ...

despite the fact that this isn't how it's really implemented.  The  
Integer type is "just" an enumeration of the integers.



Putting this all together and generalizing a bit, a second-order  
language is a language with two distinct, unmixable kinds of variables,
where one kind of variable represents a collection of things that can  
fill in the other kind of variable.
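
A two-line sketch of the "two kinds of variables" point, in ordinary Haskell:

-- 'a' is a type-level variable: it ranges over types (collections of values).
-- 'x' is a term-level variable: it ranges over the values themselves.
identity :: a -> a
identity x = x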


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Construction of short vectors

2010-06-27 Thread Alexander Solla


On Jun 27, 2010, at 12:29 PM, Alexey Khudyakov wrote:


This is of course faster but what I really want is vectors with length
parametrized by type. This way I can write generic code. Uniform
representation is requirement for that.


You're going to need dependent types, or a similar construction, for  
that.


"Do We Need Dependent Types?"
http://www.brics.dk/RS/01/10/index.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How easy is it to hire Haskell programmers

2010-07-02 Thread Alexander Solla


On Jul 2, 2010, at 7:08 PM, Ivan Lazar Miljenovic wrote:


Knowing about something /= knowing how to use it.  I own and have read
RWH, but I've never had to use hsc2hs, or Applicative, etc.


Applicative is nice.  I had to Google for hsc2hs. This is what I get  
for learning Haskell from the Haskell 98 Report and the Gentle Intro,  
and by writing programs.  I read and commented on RWH while it was in  
beta.  But I don't even know C.  As a kid, I got tripped up by =
meaning "binds to, for now" instead of being an equivalence relation  
(though I didn't know that at the time).  As an adult, I just didn't  
care.  I studied Mathematics and Philosophy and learned how to write  
proofs in maybe 2 dozen logics and maybe 100 algebras.


This is a common problem in "rich" languages: There is more than one  
way to solve a problem, even if the algorithmic content is essentially  
the same.  The only sane way I have found to deal with this is to just  
ignore unfamiliar combinators and focus on a function definition's  
mathematical content, as much as possible.  In particular, every  
function is a many-to-one relation made from a "join" of types (in the  
lattice of types).  Haskell's rich type theory makes this easier than  
Perl or Lisp, for example.  You can often guess the right  
interpretation for a combinator merely by looking at its type, and  
mentally deriving its free theorem (or equivalently, characterizing  
its free function), which is easy with some practice.  Put another  
way, free theorems/functions are plumbing to be ignored.  The  
mathematically interesting parts of a function definition are the  
parts that aren't free, and have to be explicitly defined, often in  
terms of pairs of values.


This mathematical approach makes these sorts of litmus tests trivial,  
in the worst sense possible.  I'm not interested in memorizing  
hundreds or thousands of combinators.  That's what Google/Haddock/text  
books are for.  I had to Google to remember what FizzBuzz was, and I  
had to try GHCi to figure out if mod was named mod or %.  And with the
right tools, I was able to solve it in under a minute.  It took about  
10 seconds to get the logic down on paper, using arrows of the  
Categorical variety.  This does not make me a Haskell n00b.


These kinds of ideas are rather subversive in a business environment.   
In my experience, businesses want to balance this sort of flexibility  
with maintaining a common body of knowledge.  This leads to those  
brutish patterns OO people seem to love.  At least functional patterns  
are expressive and often written in terms of free functions for some  
type or type class.


I remember an incident at a previous job, where a slightly more senior  
employee told me that I should be using the factory pattern instead of  
a map to traverse a list of "logic objects".  I gave him a little  
challenge.  I told him to just write up the supporting code (i.e.,  
just the class names, imports, interfaces.  No implementations.) for  
the class hierarchy while I solved the problem my way.  It would be a  
race.  10 minutes later, I was done.  Half an hour later, he came by  
my desk, looked at the logic for traversing a list of hashes and  
writing a functor into the solution space, and agreed I was right.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bug in ghci ?

2010-07-09 Thread Alexander Solla


On Jul 9, 2010, at 5:46 PM, Kevin Quick wrote:

That's probably an interesting assertion that one of the category  
theorists around here could prove or disprove.  ;-)


It's not too hard.  I don't like thinking about it in terms of  
category theory, though.  It's easier to think about it in terms of  
universal quantification.  The assertion is equivalent to the claim  
that (forall x, forall y, P x y) iff (forall y, forall x, P x y),  
though you have to do quite a bit of packing and unpacking to get there.


Another way to see it is in terms of recursion on initial algebras.   
Given an initial algebra A, and an initial algebra B, we'll say that A  
-> B represents the construction of attaching a copy of B to every  
element of A.  We can assume that A and B are disjoint, because we can  
find a normal form A' -> B' for which A' and B' are disjoint, and such  
that A -> B is isomorphic to A' -> B'.  (To see that, assume that some  
subalgebra C is contained in both A and B.  Attaching a copy of B to  
every element of A means attaching a copy of C to each element, and  
also B \ C.  But A already contains C.  So A -> B is isomorphic to
A -> B \ C).  Note that since we can assume A and B are disjoint, we can
also assume A and B are NOT mutually recursive.  We can always find a  
way to break that mutual recursion up.


I'm not sure how to prove that A -> B and B -> A, as I defined them,  
are isomorphic.  But they are.  I guess we can re-interpret A and B as  
meet semi-lattices, and A -> B and B -> A as their products.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Handling absent maintainers

2010-07-15 Thread Alexander Solla


On Jul 15, 2010, at 6:49 PM, Jason Dagit wrote:

 Everyone has their own branch of everything they contribute to,  
listed right on the website?  This is inline with another idea I've  
heard where we'd have a 'stable' hackage and 'unstable/dev'  
versions.  But, how does this work for resolving and communicating  
dependencies?


One thing cabal is missing is a... cabal to pick and choose APIs.   
Imagine:


1. Hackage-stable
2. Multiple Hackage-dev's, each based on current Hackage-stable (if  
they're too old, they're out of the running for inclusion in stable)


Hackage-dev packages are submitted to the Hackage-stable Cabal, which  
picks and chooses the nicest code.  Hackage-stable incorporates that  
code.  The Cabal would be responsible for coordinating dependencies.   
(This would potentially involve some programming from members of the  
Cabal)


This is kind of in line with how Haskell Platform is being built up.   
They take popular/useful packages with mature API's and promote them  
to the platform if there's enough interest.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On documentation

2010-07-21 Thread Alexander Solla


On Jul 20, 2010, at 10:28 PM, Richard O'Keefe wrote:


What I don't see is "HOW DO I USE THIS STUFF?"


I think tutorials are the best way to do that (i.e., example normal  
forms for the computations the library intends to expose).  Perl's  
package archive (the cpan) traditionally uses a "Synopsis" section  
that exposes "representative" functions at each layer/step:  http://search.cpan.org/~lds/GD-2.45/GD.pm 
 for example.


After all, the source is always structured in more-or-less the same  
way.  Fragments of text with opaque -- unless/until you understand  
them -- combinators "join" two distinct concepts/types into  
functions.  A chain of functions (potentially at various levels of  
abstraction) is a computation.  You "use" these things by finding a  
chain of types (Start -> A), (A -> B), (B -> C), ... (N -> Goal) and  
composing, filling in additional details as necessary.  Building that  
chain means doing depth first searches on a tree/graph of  
possibilities, and usually isn't so much fun.  The library developer  
is in the best position to do exactly that, having already done it  
while constructing the library.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Heavy lift-ing

2010-07-23 Thread Alexander Solla


On Jul 23, 2010, at 4:35 PM, michael rice wrote:


Why is it called "lift"-ing?


Basically, because mathematicians like enlightening metaphors.  It is  
a mathematical term.


A "monadic value" has an "underlying" value.  To turn a function that  
works on the underlying value into one that works on a monadic value,  
you have to lift it.
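
A one-liner to make the metaphor concrete, using Maybe as the "monadic value"
(liftM comes from Control.Monad):

import Control.Monad (liftM)

-- length works on the underlying list; lifting it lets it work on a
-- monadic value that merely contains a list.
lifted :: Maybe Int
lifted = liftM length (Just "abc")   -- Just 3; fmap length (Just "abc") is the same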

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] default function definitions

2010-07-24 Thread Alexander Solla


On Jul 24, 2010, at 10:59 AM, Patrick Browne wrote:


class C1 c1 where
 age :: c1 -> Integer
-- add default impl, can this be defined only once at class level?
-- Can this function be redefined in a *class* lower down the  
heirarchy?

age(c1) = 1



Yes, but keep in mind that the hierarchy is only two levels tall.   
This mechanism isn't meant for OO-style inheritance.





-- Is it true that instances must exists before we can run function or
make subclasses?
instance C1  Person where
instance C1  Employee where



Yes, absolutely.



-- Is it true that C2 can inherit age, provided an instance of C1  
exists

class C1 c2 =>  C2 c2 where
 name :: c2 -> String
 name(c2) = "C2"



instance C2  Person where
instance C2  Employee where


There's no notion of "inheritance" here.   If Person belongs to C2,  
then it "must" belong to C1, because you have specifically said that a  
C2 needs to be a C1 (presumably because you need a person's age to  
compute their name).  So Person will be using C1's "age" function, in  
virtue of having a C1 instance.


Compare this to:

class C4 c4 where name' :: c4 -> String

instance C1 Person             -- gives Person an age function, either default or overridden
instance C1 thing => C4 thing  -- gives every C1 thing a name, needs Haskell extensions.




-- Is it true that C3 cannot override C1 or C2 existing defaults?
-- Is it true that this must be done at instance level?
-- class Cx c3 =>  C3 c3 where
--age(c3) = 3


Yes, as I said, the hierarchy is two levels tall.
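
A compilable sketch of the points above, with Person and Employee as stand-in
types (my names, not your original code):

class C1 a where
  age :: a -> Integer
  age _ = 1                             -- the default, written once at the class level

data Person   = Person
data Employee = Employee

instance C1 Person                      -- uses the default age
instance C1 Employee where age _ = 65   -- overrides at the instance level

class C1 a => C2 a where
  name :: a -> String
  name _ = "C2"

instance C2 Person                      -- has a name because it already has a C1 instance
instance C2 Employee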

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Suggestions For An Intro To Monads Talk.

2010-08-03 Thread Alexander Solla


On Aug 3, 2010, at 2:51 PM, aditya siram wrote:

I am looking for suggestions on how to introduce the concept and its  
implications. I'd also like to include a section on why monads exist  
and why we don't really see them outside of Haskell.


Start with functors (things that attach values/functions/functors to  
values in an algebra).  Move on to applicative functors (functors that  
can interpret the thing that is getting things attached to it).  Move  
on to monads (applicative functors where you can explicitly control  
the order of evaluation/interpretation).


Monads exist because every adjunction generates a monad, every monad  
generates an adjunction, and adjunctions are everywhere.  Any time you  
can put things "next to" each other, you can create a monad that  
captures the notion.  A monad corresponds to putting things to the  
left (at least syntactically) of the "main" object.  A comonad  
corresponds to putting things to the right of the main object  
(assuming we observe the "monad = left" convention).


There are monads every where.  They typically carry around extra  
structure in other programming languages.  That is, you can't quantify  
over them, because they have been specialized.  For example, any  
language that has a "map" function automatically supports monadic  
computation, in virtue of the fact that map accepts one argument  
functions.  (I.e., functions where the function's name goes on the  
left, and the argument goes on the right)


sub minus_one { my $x = shift; return $x * (-1); }

return map { minus_one($_) }
       map { double($_) }
       map { square($_) } (1 .. 10);

(I hardly remember Perl now though...)  Notice that you can't change  
the order of the operations without changing the semantics.  This is  
strictly a monadic computation, in Perl, PHP, etc.  Ruby is a little  
nicer than Perl, since it allows functors other than lists.  If you  
make a class a member of the Enumerable mixin, you can call .map on
the class's instances, give it a monadic (one argument) block, and
it returns a new functor over the relevant type of object.


funky_tree.map { |node| node.value.square.double.minus_one }
(in the function composition case, or ... in the "structurally
monadic" case:)


funky_tree.map { |node| node.value }
          .map { |value|   square(value) }
          .map { |squared| double(squared) }
          .map { |doubled| minus_one(doubled) }

I doubt this is legal Ruby either.  It's been a few years since I  
touched it.
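
For comparison, the same pipeline in Haskell, where the list functor/monad
plays the role of the map chain (square, double and minus_one are written out
so the snippet stands alone):

square, double, minus_one :: Int -> Int
square    x = x * x
double    x = 2 * x
minus_one x = x - 1

pipeline :: [Int]
pipeline = map (minus_one . double . square) [1 .. 10]

-- or, spelled monadically, keeping the stage-by-stage order:
pipeline' :: [Int]
pipeline' = [1 .. 10] >>= return . square >>= return . double >>= return . minus_one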

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is toRational a method of Real?

2010-08-04 Thread Alexander Solla


On Aug 4, 2010, at 11:30 AM, Omari Norman wrote:

Why is toRational a method of Real? I thought that real numbers need  
not
be rational, such as the square root of two. Wouldn't it make more  
sense

to have some sort of Rational typeclass with this method? Thanks.


You can't build the real number field using a computer.  So you have  
to turn what "should" be a real into something you can express on a  
computer.  You can either choose to use the field of "computable real  
numbers", which is slow, or you can just go with a "near enough"  
rational approximation.
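
For example (just a GHCi check):

nearPi :: Rational
nearPi = toRational (pi :: Double)
-- the exact rational value of the Double approximation, not pi itself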

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ATs vs FDs

2010-08-14 Thread Alexander Solla


On Aug 14, 2010, at 9:01 AM, Antoine Latter wrote:

What's wrong with fun-deps? The associated type synonym syntax is  
prettier, but I didn't tknow that fun-deps were evil.


Do you have any links?

They're not "evil", they are "tricky" and can lead to non-termination,  
inconsistency, etc.


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.6217&rep=rep1&type=pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage on Linux

2010-08-24 Thread Alexander Solla


On Aug 22, 2010, at 3:41 AM, Andrew Coppin wrote:

It looks as if it's automated for Arch, however. Either that or  
somebody is spending an absurd amount of time keeping it manually up  
to date.


It probably is automated.  There's a tool out there called  
"cabal2arch", which turns a cabal file into a PKGBUILD file.  They are  
similar enough that translation can be done automatically, assuming  
there aren't any special needs.  (For example, I doubt the gtk2  
haskell package would work with cabal2arch)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On to applicative

2010-08-26 Thread Alexander Solla


On Aug 26, 2010, at 12:34 AM, michael rice wrote:

A lot of stuff to get one's head around. Was aware of liftM2,  
liftM3, etc., but not liftA2, liftA3, etc.




liftM and liftA are essentially equivalent (and are both essentially  
equivalent to fmap).  Same for the liftAn = liftMn functions (where n
is an integer).  Applicative functors are more general than monads, so  
it makes sense for them to have their own functions.  It is a matter  
of history that liftM was defined before liftA.


So, the statement was true, but not the way that was shown in the  
example, i.e., with fmap2, fmap3, etc., which required different  
functions for each of the fmaps.


Strictly speaking, fmap will work with a function in more than one  
argument, as long as it is properly typed.  This is what makes  
applicative functors work.


Consider that a function f :: a -> b -> c also has the type f :: a ->  
(b -> c).   If you feed it an "a", (resulting in a value of the form f  
a), you get a function g :: (b -> c).  In other words, every function  
is a function in one argument.  Some functions just happen to map to  
other functions.


<$> is infix fmap:  f <$> functor = fmap f functor

Consider what happens if f :: a -> b.  (f <$> functor) means "pull an  
a out of the functor, apply f, and return a functor "over" some b.   
That is to say, "lift" f into the functor and apply it.


Now consider what happens if f :: a -> (b -> c).  By analogy, this  
means "pull an a out of the functor object, apply f, and return a  
functor object (f g)  :: f (b -> c)"  (In other words, a functor  
object that "contains" a function g :: b -> c).  In order to get a c  
value out of this, you need to apply g to "something".  But note that  
we're not just dealing with g.  It is "in" the functor already, and so  
doesn't need lifting.  So some smart guy wrote a function called


<*> :: (Applicative f) => f (b -> c) -> f b -> f c

that does just that.  This is one of the defining functions for an  
applicative functor.  (And part of the reason for the name.  If the  
functor contains a function, you can "apply the functor" to properly  
typed functor objects.)


The other function is pure :: (a -> b) -> f (a -> b).  It takes a  
function and lifts it into the functor, without applying it to  
anything.  In other words, given an f :: a -> b,


pure f <*> functor = f <$> functor

If f has a bigger type (say, a -> b -> c -> d), you can do things like:

f <$> functor_on_a <*> functor_on_b <*> functor_on_c

Every monad is an applicative functor.  If we have a monad action  
m_f :: m (a -> b), and another one m_a :: (m a), we can get a monad  
action in type (m b) by pulling the function f :: a -> b out of the  
first one and applying it to the a in the second one:


m_f >>= (\f -> liftM f m_a)

or... m_f >>= (flip liftM) m_a

In fact, there is a function called ap :: m (a -> b) -> m a -> m b  
which does just that, and is "essentially equivalent" to <*>.  Of  
course, running return on a function f is equivalent to running pure  
on f.
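
A small sketch of those equivalences, using Maybe as the functor/monad:

import Control.Applicative ((<$>), (<*>), pure)
import Control.Monad (ap, liftM)

exampleA, exampleB, exampleC, exampleD :: Maybe Int
exampleA = (+) <$> Just 2 <*> Just 3               -- Just 5
exampleB = pure (+) <*> Just 2 <*> Just 3          -- Just 5
exampleC = return (+) `ap` Just 2 `ap` Just 3      -- Just 5
exampleD = Just (+ 2) >>= \f -> liftM f (Just 3)   -- Just 5, the >>=/liftM spelling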

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On to applicative

2010-08-26 Thread Alexander Solla


On Aug 26, 2010, at 1:29 AM, Alexander Solla wrote:

The other function is pure :: (a -> b) -> f (a -> b).  It takes a  
function and lifts it into the functor, without applying it to  
anything.  In other words, given an f :: a -> b,


My mistake, though if you got the rest of it, it should come as no  
surprise that pure :: a -> f a, and that it is essentially equivalent to a
monad's return.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On to applicative

2010-08-26 Thread Alexander Solla


On Aug 26, 2010, at 9:27 AM, michael rice wrote:

Some functions just happen to map to other functions.

<$> is flip fmap.  f <$> functor = fmap f functor   Brent  
Yorgey's post noted.


 "map to"? Take as arguments?


"maps to" as in "outputs".


pure f <*> functor = f <$> functor

 Prelude Control.Applicative> pure double <*> (Just 5)
 Just 10

 Not so, the f got applied or where did we get the 10? Not sure,  
is this the  "mistake" you point out in your second post?



double is getting applied in that expression, but it isn't because of  
pure.  The <*> operator is pulling double out of pure double (which  
equals Just double in this case), and applying it to Just 5.


My correction was to point out that pure's type is more general than I  
had said.   Instead of
pure :: (a -> b) -> f (a -> b), it is pure :: a -> f a -- which  
includes (a -> b) -> f (a -> b) as a special case.  In fact, pure and  
return are "essentially equivalent" in form.  So you could write out  
your verification case as


(pure double <*> pure 5) :: Maybe Int

to further decouple your code from the particular functor you're  
working with.  (You need the type annotation to run it in GHCi)




 Two ways of doing the same thing?

 1)
 Prelude Control.Applicative> (double <$> (Just 7))
 Just 14

 2)
 Prelude Control.Applicative> pure double <*> (Just 7)
 Just 14


Indeed. 
 
___

Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On to applicative

2010-08-31 Thread Alexander Solla


On Aug 31, 2010, at 12:03 PM, michael rice wrote:


I tried creating an instance earlier but

*Main> :t (->) Int Char

:1:1: parse error on input `->'


Try:

Prelude> :info (->)
data (->) a b    -- Defined in GHC.Prim

If you want type-information about values, use :t.  If you want  
information about types (and the "type-level language"), use :info.  
This includes stuff like class definitions and instances in scope.   
For example, if I include Control.Monad:


Prelude Control.Monad.Instances> :info (->)
data (->) a b    -- Defined in GHC.Prim
instance Monad ((->) r) -- Defined in Control.Monad.Instances
instance Functor ((->) r) -- Defined in Control.Monad.Instances

:info is pretty cool:

Prelude Control.Monad.Instances> :info Monad
class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a
-- Defined in GHC.Base
instance Monad ((->) r) -- Defined in Control.Monad.Instances
instance Monad Maybe -- Defined in Data.Maybe
instance Monad [] -- Defined in GHC.Base
instance Monad IO -- Defined in GHC.Base

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Projects that could use student contributions?

2010-08-31 Thread Alexander Solla


On Aug 31, 2010, at 1:52 PM, Brent Yorgey wrote:


This fall I'll be teaching a half-credit introduction to Haskell to
some undergrads.  As a final project I am thinking of giving them the
option of (instead of developing some program/project of their own)
contributing to an existing open-source Haskell project.  Of course,
this requires the existence of projects they could contribute to.  I'm
sure they exist, but need your help to figure out what they are.  So,
do you maintain, or know of, any projects with the following
characteristics?


Maybe the Haskell-related Google Summer of Code proposals would be  
useful to you?


http://hackage.haskell.org/trac/summer-of-code/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] On to applicative

2010-09-02 Thread Alexander Solla


On Sep 2, 2010, at 11:30 AM, michael rice wrote:


In each case, what does the notation

show:: ...

and

undefined:: ...

accomplish?


They're type annotations.  show is a function in "many" types:

Prelude> :t show
show :: (Show a) => a -> String

If you want to see the type of a "specific" show function, you need to  
find a way to "determine" its type.  This is a slightly different  
function, but it's equivalent in types and semantics:


Prelude> :t \x -> show x
\x -> show x :: (Show a) => a -> String

Now we have a named argument, and we can constrain its type with an
annotation:


Prelude> :t \x -> show (x :: Int)
\x -> show (x :: Int) :: Int -> String
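
The same trick answers the undefined :: ... half of your question: an
annotated undefined is a value-free way to pin down a type, as long as you
never evaluate it.

Prelude> :t show (undefined :: Int)
show (undefined :: Int) :: String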

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unnecessarily strict implementations

2010-09-02 Thread Alexander Solla


On Sep 2, 2010, at 9:10 AM, Stephen Sinclair wrote:


Sorry to go a bit off topic, but I find it funny that I never really
noticed you could perform less-than or greater-than comparisons on
Bool values.  What's the semantic reasoning behind allowing relative
comparisons on booleans?  In what context would you use it?


The Boolean values form a Boolean lattice.  That's reason enough.


It seems
to me a throwback to C's somewhat arbitrary assumption that False=0
and True=1.


That's not arbitrary at all. 0 and 1 are very special numbers, because  
of the roles they play in addition and multiplication.  They "absorb"  
and "identify" things.


http://en.wikipedia.org/wiki/Boolean_algebra_(structure)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unnecessarily strict implementations

2010-09-03 Thread Alexander Solla


On Sep 2, 2010, at 11:35 PM, Henning Thielemann wrote:


But in the lattice example the roles of 0 and 1 are interchangeable,
aren't they?


Sort of.  If you try to interchange the roles of 0 and 1, you are  
interchanging the roles of the meet and join operations.  In short,  
you are constructing the dual lattice.  A mirror image is a good  
analogy for this construction.


So, 0 and 1 are interchangeable.  In the same way that "True" and  
"False" are.  And "Top" and "Bottom".  And "Left" and "Right".   
Arbitrary structurally, but with a history of consistency across  
domains.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] help me evangelize haskell.

2010-09-05 Thread Alexander Solla


On Sep 5, 2010, at 7:46 PM, Mathew de Detrich wrote:

Another thing you can say is that Perl is a very extreme language in  
design where as Haskell is more "general". This means the one thing  
Perl does, it does very well (expressing programming problems in the  
most concise/short possible way) but it has to sacrifice for it  
massively in other areas which end up costing much more in the long  
run. Most 'real' world problems do not require that amount of  
brevity, considering the massive cost that Perl brings for such a  
thing.


That doesn't sound right to me.  Perl's biggest weaknesses are  
traditionally:  (i) the syntax:  but those $'s and @'s are actually  
type annotations;  and (ii) "There's More Than One Way to Do It":  the
existence of multiple approaches to solving a problem, instead of an  
"official" obvious choice.  This means that every programmer on the  
team either has to KNOW all the possible ways to solve a problem with  
Perl, or the programming team has to CHOOSE one and make it "policy"  
-- effectively picking out the nicest bits and sticking to that
sub-language.


Depending on your point of view, Haskell does not compare particularly  
favorably with respect to "TMTOWTDI".  The whole Control.* hierarchy  
is the construction of custom control structures.  That's the whole  
point of "glue" languages.  You write custom control structures to  
support the chosen normal forms for expressing data and computations.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ultra-newbie Question

2010-09-18 Thread Alexander Solla


On Sep 18, 2010, at 12:51 AM, Christopher Tauss wrote:



I am a professional programmer with 11 years experience, yet I just  
do not seem to be able to get the hang of even simple things in  
Haskell.  I am trying to write a function that takes a list and  
returns the last n elements.




Note that keeping just the suffix is the same as dropping the prefix.   
Consider your data:


A finite list.
The length of the suffix to keep.
An algebraic relationship between lengths of lists,  suffixes, and  
their prefixes:


length(prefix) + length(suffix) = length(list)

Putting all of this together:

n_tail n list = drop prefix_length list
  where prefix_length = length list - n
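
A quick check in GHCi (not part of the definition):

*Main> n_tail 3 [1..10]
[8,9,10]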

You may be interested in how drop is carrying around some state:

drop n xs | n <= 0 =  xs
drop _ []  =  []
drop n (_:xs)  =  drop (n-1) xs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A model theory question

2010-09-26 Thread Alexander Solla

 On 09/26/2010 01:32 PM, Patrick Browne wrote:

Hi,
Below is an assumption (which could be wrong) and two questions.

ASSUMPTION 1
Only smaller models can be specified using the sub-class mechanism.
For example if we directly make a subclass A =>  B then every instance of
B must also be an instance of A (B implies A or B is a subset of A).
Hence, type classes cannot be combined using sub-classing to provide
specifications of bigger models.

I'm not sure what you mean by "models" and "subclass mechanism".  For 
example, any set is a model of the first order logic.  Presumably, we 
can "subclass" (i.e., add axioms to) from FOL into Peano Arithmetic.  
That cuts down on models that satisfy the axioms from "any set" to... 
well, the models of arithmetic (that is, the natural numbers, the class 
of ordinals less than the first uncountable ordinal, etc).  A finite set 
is a model of FOL.  But no finite set is a model of PA.

QUESTIONS
1) If assumption 1 is correct, is there a mechanism whereby the module
system can be used to construct bigger models?


Bigger how?  Under logical implication and its computational analogue?  
That is to say, do you want the model to be more SPECIFIC, describing a 
smaller class of objects more richly (i.e, with "more" logical 
implications to be made) or do you want the model to be more GENERAL, 
and contain the less specific submodels?  This is how "forcing" works.



2) If there is such a mechanism does it involve type classes or is it a
module only solution?



Regards,
Pat


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Coding conventions for Haskell?

2010-09-26 Thread Alexander Solla

 On 09/25/2010 02:24 AM, Petr Pudlak wrote:

Hi,

sometimes I have doubts how to structure my Haskell code - where to 
break lines, how much to indent, how to name functions and variables 
etc. Are there any suggested/recommended coding conventions? I 
searched a bit and I found a few articles and discussions:


I used a modified version of the "best practices" described by the Perl 
people for Perl code.  "Like things go under like things" is the most 
important rule to follow.  This rule, in other words, is a convention to 
make your code as "tabular" as possible.  Also, most expressions have an 
"outermost" connective.  I tend to align them:


Consider:

data Foo a b = Foo    a
             | Bar      b
             | Foobar a b

That's not so nice looking now, but consider what happens when you have 
four or five arguments:


type Label = String
type Address = String

data Foo a b = Foo    (Maybe Label) Address a
             | Bar    Label                   b
             | Foobar Label         Address a b

This is rather neat.  Instead of focusing effort on parsing the source, 
we can merely compare lines for the differences in logic they embody.  
Compare it with an "un-normalized" definition:


data Foo a b = Foo (Maybe Label) Address a
             | Bar Label b
             | Foobar Label Address a b

Quick, which one of those has a b in it?  Moreover, it makes 
editing/refactoring easier with tools like Vim (with its visual block 
mode) and TextMate.  I'm sure Emacs, etc have block editing modes too.


To that end, I try to keep to this style.  For let-in 
pairs, I do:


expression a = let exp  = blah_blah  a
                   exp' = blah_blah' a
               in  (exp ++ exp')

I usually keep parentheses aligned as well, if a pair won't fit in a 
line.  That's one of the more obvious consequences of my convention.  
Monadic operators are worth keeping together, for similar reasons as 
keeping parentheses aligned.


action = first >>= second >>= third  -- fits on one line

-- Steps added:
action = first
     >>= second
     >>= third
     >>= fourth
     >>= fifth

You can jump into do notation pretty easily from there, by naming each
intermediate result in place of the >>= operators, and sticking a 'do'
before first.
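
For the record, here is that translation spelled out, naming each
intermediate result so the >>= plumbing is preserved:

action = do
    a <- first
    b <- second a
    c <- third  b
    d <- fourth c
    fifth d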


Remember to treat values, functions, and monadic actions as "servers" 
that respond to your requests.  This is the easiest way to maximize the 
value of Haskell's laziness.


Also, and finally, remember that a function is a special kind of join on 
data types.  (A many-to-one join, in terms of the relational algebra as 
spoken of by database people).  My approach makes it easy to "abstract 
operators out" of the act of reading.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A model theory question

2010-09-27 Thread Alexander Solla

 On 09/27/2010 12:25 AM, Patrick Browne wrote:

Alexander Solla wrote:

  On 09/26/2010 01:32 PM, Patrick Browne wrote:
Bigger how?  Under logical implication and its computational analogue?
That is to say, do you want the model to be more SPECIFIC, describing a
smaller class of objects more richly (i.e, with "more" logical
implications to be made) or do you want the model to be more GENERAL,
and contain the less specific submodels?  This is how "forcing" works.

My idea of bigger is based on the import modes and parameterized modules
of the Maude/CafeOBJ languages, where smaller theories are used to
construct larger theories (and/or objects). In these languages I guess
theories (loose specifications or interfaces) correspond to your
*logical implication* and objects (or tight specification) correspond to
*computation* at the ordinary programming level. The axioms of the
theories must hold at the programming level.

Well, my question was more along the lines of "do you want bigger (more 
specific) theories (and so more "specific" models to interpret them)?" 
or do you want to generalize from a given theory?  To do the latter with 
Haskell, you might create a module that allows exporting your 
"axiom/interface" functions.  And maybe create a wrapper that drops 
axioms/interfaces/constraints.  This is assuming you've organized your 
modules to embody specific theories (an assumption not usually true of 
my code, at least under this strict interpretation).


To become more specific, you would just import the a module and add new 
axiom/interface functions.


Doing similar constructions with type classes is possible.  I think you 
might have to use witness types (or even a nice functorial wrapper 
around your target value in the original algebra, or both) to do 
generalizations of type classes.  For example:


class Uneditable obj where
  a :: a -> obj
  b :: b -> obj

class Refactored obj witness where
  a' :: Maybe (a -> obj)
  b' :: Maybe (b -> obj)

data EmptyFilter -- I think the name of the extension needed for this is 'EmptyDataDecls'

data NoA
data NoB

instance Uneditable obj => Refactored obj EmptyFilter where a' = Just a; b' = Just b
instance Uneditable obj => Refactored obj NoA where a' = Nothing; b' = Just b


etc

You would be using the Maybe part to "switch" functions/axioms on and 
off.  The witness type links the obj type to the intended generalization 
of Uneditable.


By the way, this is a decent example of forcing for types:  given a type 
that satisfies a theory (that is, whose values are models for the 
theory), you generate a set of "names" which links the type to "new" 
theories that are related in a fairly obvious way (in this case, via 
Maybe).  This represents an embedding of the new theory in the old 
theory.  (And, dually, it represents an embedding of the old model in 
the new one)  There's more to it than that, insofar as there is stuff to 
be proved.  But Haskell does it for you.




What does the term *forcing* mean?


Forcing is a technique to create models from axiomatizations.  It is the 
countable (and beyond) extension of creating a model by adding elements 
piecewise, assuming they satisfy the theory.  Indeed, you end up using 
"filters" (the dual to an ideal) to ensure you get rid of all the 
elements that don't satisfy a model.  The wikipedia page goes over some 
of this in terms of constructing a model in terms of a language over a 
theory for the model, and reinterpreting the new language in terms of 
the old one.


http://en.wikipedia.org/wiki/Forcing_(mathematics)


See:
http://books.google.ie/books?id=Q0H-n4Wz2ssC&pg=PA41&lpg=PA41&dq=model+expansion+cafeobj&source=bl&ots=_vCFynLmca&sig=07V6machOANGM0FTgPF5pcKRrrE&hl=en&ei=YkSgTPn0OoqOjAfb0tWVDQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBgQ6AEwAA#v=onepage&q=model%20expansion%20cafeobj&f=false

From what I looked at, their logical notions/terminology are pretty 
standard.  In general, if you want the class of models to decrease, you 
must make the class of theories for them increase (under inclusion 
order), and vice-versa.



Re: [Haskell-cafe] A model theory question

2010-09-28 Thread Alexander Solla

 On 09/28/2010 03:03 AM, Patrick Browne wrote:

I had a look at the types of a and a’.
*Main>  :t a
a :: forall a obj. (Uneditable obj) =>  a ->  obj
*Main>  :t a'
a' :: forall witness a obj. (Refactored obj witness) =>  Maybe (a ->  obj)

Could you explain the example a little more.
This is going to be a long email.  Logic is a bit of a rambling 
subject.  And then I went out and bought some beer. ;0)


First thing to note:  the importance of the types here is that the stuff 
contained inside the Maybe HAS to be the same type as the corresponding 
"Uneditable" function.  On a conceptual level, we're trying to quantify 
over the axioms embodied by "Uneditable".  It isn't important to my 
point what the axioms types are.  What matters is that the new axiom 
scheme as the same "inner" type, wrapped in a Maybe.  If you're looking 
to do object orientation or similar things, you would probably prefer 
something like:


class Uneditable obj where a :: obj -> a; b :: obj -> a

and

class Refactored obj witness where a' :: Maybe (obj -> a); b' :: Maybe (obj -> a)


You can do slick things with existential quantification too.  (To a 
mathematician, "slick" means "so elegant it looks trivial at first 
glance, but deep and unifying upon further inspection".  Compare to the 
mathematician joke about obviousness.[1]  Category theory is the 
slickest of mathematics, under an informal, subjective ordering of 
mine.  Mathematical logic is pretty damn slick too, but it has more 
primitives and so is less elegant.  In different words, it is more 
computational than category theory, which is compositional by design)


I am not sure what you don't understand, so I will start at the 
beginning, at least in broad strokes.


So, going back to the "basic" theory of logic, you have axiomatizations, 
theories, and models.  An axiomatization is a "chosen" finite set of 
sentences (it's up to you to pick axioms that encode truths about what 
you want to study/program).  A theory is a set of sentences which is 
closed under logical implication, and a model is a set of objects[1] 
which "satisfies" an axiomatization (equivalently, the theory generated 
by an axiomatization, because of the compactness theorem)


In order to make an axiomatization "more specific", you add axioms.  In 
order to make it "more general", you drop axioms.  So, every group is a 
monoid, but not vice-versa:  the non-associative monoid of subtraction 
over the integers is not a group.  The monoid does not satisfy the axiom 
of associativity, and so must be excluded from the class of models for 
the group axioms.  Again, theories are "the same way", but you have to 
deal with logical closure.  I mean, if I was to remove a sentence from a 
theory, it *might* still generate the same theory, because the sentence 
I removed was a logical consequence of the others.  For example, we can 
have a theory of arithmetic where an axiom states that 1 + 1 = 2, in 
addition to the Peano axioms.  If we drop the 1 + 1 = 2 axiom and 
generate the closure of the new theory, we will see that 1 + 1 = 2 anyway.
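
A quick Haskell rendering of that add-an-axiom picture, with type classes 
standing in for axiomatizations (my own sketch; MyMonoid, MyGroup, and Sum 
are made-up names, chosen to avoid clashing with the Prelude):

class MyMonoid a where
  unit  :: a
  (<+>) :: a -> a -> a      -- assumed associative, with 'unit' as identity

-- Adding an axiom (inverses) makes the theory more specific,
-- and therefore shrinks the class of models:
class MyMonoid a => MyGroup a where
  inverse :: a -> a

newtype Sum = Sum Integer deriving Show

instance MyMonoid Sum where
  unit            = Sum 0
  Sum x <+> Sum y = Sum (x + y)

instance MyGroup Sum where
  inverse (Sum x) = Sum (negate x)

-- The natural numbers under addition can satisfy MyMonoid but not MyGroup:
-- there is no total 'inverse' that stays within the naturals.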


The notion of "satisfaction" is important.  As I said, a model for an 
axiomatization is (conceptually) a set of objects which satisfies an 
axiomatization or theory.  In short, to show that a model satisfies an 
axiomatization, you need to provide an interpretation function for the 
model.  (This is what type class instances are for, under the 
Howard-Curry correspondence) An interpretation is a function which takes 
a sentence from the axiomatization and an object from the model and maps 
them to a truth value, under certain consistency constraints.  For 
example, I(a 'and' b) = I(a) "and" I(b), where 'and' is a symbolic 
notion of conjunction and also "and" is a semantic notion of 
conjunction.  (In particular, an interpretation I is a "proof-to-truth 
homomorphism", and can potentially be reified as a functor, among other 
things.


Now it gets kind of tricky.  There are a few correspondences and facts 
to put together.  First is the Howard-Curry Correspondence theorem, 
which tells us that in a typed programming language, functions can be 
re-interpreted as proofs of type-level theorems in a corresponding typed 
logic.  That is, typed programming languages are typed constructive 
logics.  So all the "basic" mathematical logic stuff I have talked about 
applies.  But, I never said where axiomatizations and their models 
"live" on the programming side.  And in fact, models are particularly 
tricky, because they "live" in "shifting" grounds.


For example, any logically consistent theory is its own model!  The 
objects of the model (which the interpretation function interprets) are 
sentences from the theory the model models.  The axioms for the theory 
are to be declared "true" (with respect to the model), and it is 
immediately obvious that the "identity functor" offers an 
interpretation, since I(A and B) exactly equals I(A) and I(B).  This is 
called the "free mode

Re: [Haskell-cafe] Re: Monad instance for partially applied type constructor?

2010-09-29 Thread Alexander Solla

 On 09/29/2010 02:15 PM, DavidA wrote:

instance Monad (\v ->  Vect k (Monomial v))
>  

Yes, that is exactly what I am trying to say. And since I'm not allowed to say
it like that, I was trying to say it using a type synonym parameterised over v
instead.


Why not:

instance Monad ((->) Vect k (Monomial v))


Re: [Haskell-cafe] Re: Monad instance for partially applied type constructor?

2010-09-29 Thread Alexander Solla

 On 09/29/2010 09:13 PM, Alexander Solla wrote:

 On 09/29/2010 02:15 PM, DavidA wrote:

instance Monad (\v ->  Vect k (Monomial v))
> 
Yes, that is exactly what I am trying to say. And since I'm not 
allowed to say
it like that, I was trying to say it using a type synonym 
parameterised over v

instead.


Why not:

instance Monad ((->) Vect k (Monomial v))


Sorry, I guess this is a bit off.  I don't think you "really" want a 
monad.  I think you want something like the dual to the reader monad 
(i.e., a comonad).
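
For what it's worth, the usual way to hand GHC a monad over a partially 
applied constructor is to wrap it in a newtype whose last parameter is the 
one being quantified over.  A self-contained sketch, with Either standing 
in for Vect k (Monomial -), since I don't have that library in front of me:

newtype FlipEither b a = FlipEither { runFlipEither :: Either a b }

instance Functor (FlipEither b) where
  fmap f (FlipEither (Left a))  = FlipEither (Left (f a))
  fmap _ (FlipEither (Right b)) = FlipEither (Right b)

instance Applicative (FlipEither b) where
  pure = FlipEither . Left
  FlipEither (Left f)  <*> x = fmap f x
  FlipEither (Right b) <*> _ = FlipEither (Right b)

instance Monad (FlipEither b) where
  return = pure
  FlipEither (Left a)  >>= f = f a
  FlipEither (Right b) >>= _ = FlipEither (Right b)

-- The same trick works for any (\v -> T ... v ...): wrap it so the varying
-- parameter comes last, and write the instances against the wrapper.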



Re: [Haskell-cafe] Ordering vs. Order

2010-10-07 Thread Alexander Solla


On Oct 7, 2010, at 1:02 AM, Christian Sternagel wrote:


Hi all,

I'm not a native English speaker and recently I was wondering about  
the two words "order" and "ordering" (the main reason why I write  
this to the Haskell mailing list, is that the type class "Ordering"  
does exist).


My dictionaries tell me that "order" (besides other meanings)  
denotes an ordered structure on elements and "ordering" (as only  
meaning) denotes some request that I made at some entity. So, to me  
it seems that calling the type class "Ordering" is wrong ;)


However, I do know that there are many publications about "ordered  
structures" which use the word "ordering" (most of which I'm aware  
of, not by native speakers).


What do native speakers have to say about that?


They're pretty much synonymous.  Given a specific context, an order is  
"the" relation that orders a set, whereas an ordering is "a" relation  
that orders a set.  For example, a set with three elements can be  
ordered in three different ways.  Each of them is an ordering.  But  
none is "THE" order.  (If the elements are integers, then they can  
inherit THE integer order, if you wanted the set to inherit that  
notion of an order)
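
A small Haskell illustration of the distinction (my own example, not part 
of the original exchange): the same elements, one inherited order, and 
several other orderings.

import Data.List (sort, sortBy)
import Data.Ord  (comparing)

xs :: [Int]
xs = [3, 1, 2]

inherited :: [Int]
inherited = sort xs                            -- [1,2,3]: "the" integer order

reversedOrdering :: [Int]
reversedOrdering = sortBy (flip compare) xs    -- [3,2,1]: a different ordering

byRemainder :: [Int]
byRemainder = sortBy (comparing (`mod` 3)) xs  -- [3,1,2]: yet another ordering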



Re: [Haskell-cafe] A model theory question

2010-10-07 Thread Alexander Solla


On Sep 30, 2010, at 1:39 AM, Patrick Browne wrote:


I think my original question can be rephrased as:
Can type classes preserve satisfaction under the expansion of sentences
(signature/theory morphisms inducing a model morphism)?

According to [1] expansion requires further measures (programming?)
which you demonstrated. But this raises are further question. Does
Haskell’s  multiple inheritance represent a model expansion where the
classes in the context of an instance are combined in the new bigger  
model?


In principle, the answer is yes (I think).  But I kept running into  
walls when I tried.  It is almost as if there is "too big" a gap to be  
filled between compile-time and run-time.  At least for the approaches  
I tried.


I have two suggestions though.  First, monadism is a great way to  
approach this problem from a run-time level.  Indeed, a monad is an  
interpreter, which comes with the associated notions of a free algebra/ 
model and the like.  Injecting a new axiom amounts to creating a new  
monadic action.  Monad transformers can do lifting and lowering in a 
fairly straightforward way.
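
To make the "a monad is an interpreter" point concrete, here is a minimal 
free-monad sketch (entirely my own toy example; CounterF and its actions 
are made up).  Each constructor of the functor plays the role of an 
axiom/action, and injecting a new axiom amounts to adding a constructor 
and extending the interpreter (the model):

data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fa) = Roll (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  return = pure
  Pure a  >>= k = k a
  Roll fa >>= k = Roll (fmap (>>= k) fa)

-- The "axioms" of a tiny theory of counters:
data CounterF next = Increment next | Get (Int -> next)

instance Functor CounterF where
  fmap g (Increment next) = Increment (g next)
  fmap g (Get k)          = Get (g . k)

increment :: Free CounterF ()
increment = Roll (Increment (Pure ()))

getCount :: Free CounterF Int
getCount = Roll (Get Pure)

-- One model: interpret the theory into state-passing functions.
runCounter :: Free CounterF a -> Int -> (a, Int)
runCounter (Pure a)             n = (a, n)
runCounter (Roll (Increment k)) n = runCounter k (n + 1)
runCounter (Roll (Get k))       n = runCounter (k n) n

-- runCounter (increment >> increment >> getCount) 0  ==>  (2, 2)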


Another suggestion is to check out the OOHaskell paper.  I know they  
use type level forcing to get stuff done, but I guess they used a  
different cluster of extensions than I tried.  http://homepages.cwi.nl/~ralf/OOHaskell/


Sorry for the delay in responding.

-Alex


Re: [Haskell-cafe] Ordering vs. Order

2010-10-08 Thread Alexander Solla


On Oct 7, 2010, at 1:15 AM, Alexander Solla wrote:

For example, a set with three elements can be ordered in three  
different ways.


Six ways.  I hate making such basic math mistakes.


Re: [Haskell-cafe] Re: Re-order type

2010-10-09 Thread Alexander Solla


On Oct 9, 2010, at 4:17 PM, André Batista Martins wrote:

If I want to compose f and f1, I need to produce a correct input to f1 
from the output of f.
So I want one function to convert the output of "f" to the input of "f1".
In this case, we do  f1 (fst (snd (t,(t1,t2)))) (snd (snd (t,(t1,t2)))).
But I want to do this automatically, for the types of any two functions. I 
search for the "glue".



You have to write the glue.  There is no way to do it automatically,  
for all cases.  If there was a way to derive the glue, they wouldn't  
be separate cases.


Type classes are a common approach for writing glue functions.  The  
Functor and Monad type classes are good examples.  (And there is a  
nice algebra of functors in the small category of functors, too)   I  
wrote a weird little functorial type class yesterday, specifically to  
handle newtypes as functors over their underlying datatype/category.


{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Transform obj functor where
  transform :: (obj -> obj) -> functor -> functor

instance Functor f => Transform obj (f obj) where transform = fmap
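
A hypothetical usage sketch (Age and the example values are mine; the 
pragmas above are what the class needs to compile):

newtype Age = Age Int deriving Show

instance Transform Int Age where
  transform f (Age n) = Age (f n)

-- transform (+1) (Age 41)          ==>  Age 42   (newtype-specific instance)
-- transform (+1) [1, 2, 3 :: Int]  ==>  [2,3,4]  (via the generic Functor-based instance)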



[Haskell-cafe] Static computation/inlining

2010-10-10 Thread Alexander Solla

Hi everybody,

I'm working on a module that encodes "static" facts about "the real  
world".  For now, I'm working on an ISO 3166 compliant list of  
countries, country names, and country codes.  I've run into a bit of  
an optimization issue.


There is a static bijective correspondence between countries and their  
codes.  In order to keep just one "large" data structure 
representation as Haskell code, I encoded this bijection using a  
list.  I'm looking to write queries against this list, but it is  
rather tedious.  I figured I could make some Data.Maps to handle it  
for me.




-- Country and ISOCountryCodes derive (Data, Eq, Ord, Show, Typeable)
countries_and_iso_country_codes :: [ (Country, ISOCountryCode) ]
countries_and_iso_country_codes =
    [ ( Afghanistan  , ISOCountryCode AF AFG (isoNumericCode 004) )
    , ( AlandIslands , ISOCountryCode AX ALA (isoNumericCode 248) )
    , ( Albania      , ISOCountryCode AL ALB (isoNumericCode 008) )
    ...
    , ( Zimbabwe     , ISOCountryCode ZW ZWE (isoNumericCode 716) )
    ]

map_country_to_country_code :: Map Country ISOCountryCode
map_country_to_country_code = fromList countries_and_iso_country_codes

map_country_code_to_country :: Map ISOCountryCode Country
map_country_code_to_country = fromList . fmap (\(a,b) -> (b, a)) $ countries_and_iso_country_codes


Is there any way to instruct GHC (and maybe other compilers) to compute 
these maps statically?  Are GHC and the other compilers smart enough 
to do it automatically?  Although the list isn't huge, I would still 
rather get rid of the O(2*n) operation of turning it into maps at 
run-time.  (Especially since some later list encodings like these might be 
enormous.)  What should I be looking into?


Thanks


Re: [Haskell-cafe] Client-extensible heterogeneous types

2010-10-12 Thread Alexander Solla


On Oct 12, 2010, at 4:24 AM, Jacek Generowicz wrote:


I can't see a Haskell solution which combines both of these orthogonal
features without losing the benefits of the type system. (For example,
I could create my own, weak, type system with tags to identify the
type and maps to do the dispatch.)


Is there any particular reason why you actually want to mirror 
Python code?  I think that letting the programmer design domain-specific 
control structures is rather the point of Haskell.  Instead 
of relying on a one-size-fits-all solution (which only really fits 
one kind of problem), you write your own.
to write the control structure than it is to implement it using the OO  
patterns, because of the notion of irreducible complexity.  For  
example, the Factory pattern constructs a functor.  You can write the  
essential semantics of doing this with a single Functor instance,  
instead of writing multiple classes which implement the semantics,  
while relying on implicit, and possibly ill-fitting semantics of  
method dispatch.  The other OO patterns make this objection stronger.   
If you can write a UML diagram, you can turn it into a commutative  
diagram, and write less code by implementing its arrows.


An OO class hierarchy is a very specific functor over objects (which  
attaches methods to objects).  Haskell provides the Functor type  
class.  Write your generic functions for specific functors:



-- The varying "input" types.  Will be attached to arbitrary values by  
the Functor instance.


data A = A  -- Variant 1
data B = B  -- Variant 2

-- Some normalized Output type.
data Output = Output

-- The new control structure.  
data Attaches a = AttachesA A a

| AttachesB B a

-- Stick your conditional (varying) semantics in here.  Corresponds to  
heterogeneousProcessor.  
-- The output presumably depends on whether A or B is attached, so  
this function is not equivalent 
-- to something of the form fmap (f :: a -> Output) (attaches ::  
Attaches a) 


runAttaches :: Attaches a -> Attaches Output
runAttaches = undefined

-- This corresponds roughly to  
heterogeneousProcessor(heterogeneousContainer):

processedOutputs :: [Attaches a] -> [(Attaches Output)]
processedOutputs as = fmap runAttaches as


-- Functor instance.  Now you have a way to treat an (Attaches a)  
value just like you would an a. (modulo calling fmap)

instance Functor Attaches where
 fmap f (AttachesA A a) = (AttachesA A (f a))
 fmap f (AttachesB B a) = (AttachesB B (f a))





Re: [Haskell-cafe] Client-extensible heterogeneous types

2010-10-13 Thread Alexander Solla


On Oct 13, 2010, at 2:18 PM, Jacek Generowicz wrote:

Is there any particular reason why you actually want to mirror 
Python code?


I don't want to: I merely have a situation in which an OO solution  
(not necessarily a good one) immediately springs to mind, while I  
didn't see any obvious way to do it in Haskell.


Fair enough. :0)




Instead of relying on a one-size-fits-all solution (which only 
really fits one kind of problem), you write your own.  And it is  
typically easier to write the control structure than it is to  
implement it using the OO patterns, because of the notion of  
irreducible complexity.  For example, the Factory pattern  
constructs a functor.  You can write the essential semantics of  
doing this with a single Functor instance, instead of writing  
multiple classes which implement the semantics, while relying on  
implicit, and possibly ill-fitting semantics of method dispatch.   
The other OO patterns make this objection stronger.  If you can  
write a UML diagram, you can turn it into a commutative diagram,  
and write less code by implementing its arrows.


Lots of stuff that sounds fascinating, but whose detailed meaning  
is, at the moment, beyond my grasp. So let me start off by getting 
my teeth into your example code:




An OO class hierarchy is a very specific functor over objects  
(which attaches methods to objects).


This sounds very interesting, but, again, I'm having difficulty  
understanding *exactly* how that is.


At a high level, a functor is a "thing" which attaches "things" to the  
elements of an algebra, in an algebraically compatible way.  The  
functor laws express the compatibility conditions.


Let's think about how non-duck typed OO systems are used (internally)  
at run-time.  First, we have an algebra of objects.  If we don't  
consider how the class hierarchy interacts with the objects, the  
objects are a lot like Haskell values.  Basically, just locations in  
memory or another similar abstraction.


Every object has a "principle class".  We can model this by creating a  
functor that attaches a "class" to each location in memory.


Some classes inherit from others.  We can model this by creating a  
functor that attaches a list (or tree) of classes to each class (that  
we have attached to an object).  Interpreting this model means  
searching for a class that has the method with the right name.


With these constructs, we can recreate dynamic method dispatch.  In  
particular, a functor over a functor is a functor over the underlying  
functor's algebra.  We can use "functor combinators" to make going  
'up' and 'down' easier.





Haskell provides the Functor type class.  Write your generic  
functions for specific functors:



-- The varying "input" types.  Will be attached to arbitrary values  
by the Functor instance.


data A = A  -- Variant 1
data B = B  -- Variant 2

-- Some normalized Output type.
data Output = Output

-- The new control structure. data Attaches a = AttachesA A a
   | AttachesB B a

-- Stick your conditional (varying) semantics in here.  Corresponds  
to heterogeneousProcessor.


Could you explain this a bit more? heterogeneousProcessor was  
extremely boring: its only interesting feature was the dot between 
"datum" and "method()".  Here it is again:


def heterogeneousProcessor(data):
  return [datum.method() for datum in data]

I suspect that runAttaches is (potentially) a lot more interesting  
than that!


It is as interesting as you want it to be.  That's where you put the  
semantics for interpreting a in terms of the types A or B.  For  
example, if A contained a list of named methods of the form (a ->  
Output), your runAttaches could search through the list, find the  
right one, and apply it.
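
As a hedged sketch of that idea (my own elaboration, not code from earlier 
in the thread: here the attached tags carry named methods, unlike the bare 
A and B above):

data Output = Output String deriving Show

newtype MethodsA a = MethodsA [(String, a -> Output)]   -- a "class" as a method table
newtype MethodsB a = MethodsB (a -> Output)             -- a "class" with one method

data Attaches a = AttachesA (MethodsA a) a
                | AttachesB (MethodsB a) a

-- Dispatch: look the method up in whatever table is attached, then apply it.
dispatch :: String -> Attaches a -> Maybe Output
dispatch name (AttachesA (MethodsA ms) x) = fmap ($ x) (lookup name ms)
dispatch _    (AttachesB (MethodsB f)  x) = Just (f x)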




-- The output presumably depends on whether A or B is attached, so this function is not
-- equivalent to something of the form fmap (f :: a -> Output) (attaches :: Attaches a)

runAttaches :: Attaches a -> Attaches Output
runAttaches = undefined

-- This corresponds roughly to heterogeneousProcessor(heterogeneousContainer):

processedOutputs :: [Attaches a] -> [(Attaches Output)]
processedOutputs as = fmap runAttaches as


Would it be correct to say that runAttaches replaces Python's  
(Java's, C++'s etc.) dynamically dispatching dot, but also allows  
for a greater variety of behaviour?


Yes, that's right.



Alternatively, would it be interesting to compare and contrast  
runAttach to CLOS' generic functions, or even Clojure's arbitrary  
method selection mechanism?


I don't know, I'm not familiar with either.  On the other hand, method  
dispatch is always pretty similar.  The difference is the shape of the  
structure traversed to find the right method.




-- Functor instance.  Now you have a way to treat an (Attaches a)  
value just like you would an a. (modulo calling fmap)

instance Functor Attaches where
  fmap f (AttachesA A a) = (AttachesA A (f a))
  fmap f (AttachesB B a) = (AttachesB B (f a))

[Haskell-cafe] ANNOUNCE: Facts

2010-10-13 Thread Alexander Solla
The Facts hierarchy is meant to contain commonly used, relatively  
static facts about the "real world". The facts are meant to be encoded  
using relatively simple Haskell constructs. However, we do make some  
promises: every data type our modules export will have instances of  
Data, Eq, Ord, Show, and Typeable. We will use explicit module export 
lists to control access to internal data structures.


As much of the data we are encoding is tabular, we use simple  
structures like lists and maps to encode the relations. This has two  
practical ramifications: the textual representation of the data can be  
very wide, but is also very easy to edit with "block editing" tools 
like Vi's visual block mode. The other consequence is that the naive  
approach to writing queries can be tedious, and the resulting naive  
queries are slower than they could be. Template Haskell can eliminate  
much of this drudgery. Felipe Lessa has graciously donated some  
Template Haskell code which we have adapted.


The Facts.* hierarchy currently contains modules with geographical  
information, such as a data type of countries, cross references to  
various ISO-3166-1 names for each, a list of states in the United  
States, and the United States address format. Please see the module  
hierarchy for more specifics. Patches are welcome, though prospective 
contributors are encouraged to encode bijections using lists of pairs, 
to make all exposed data types instances of Data, Eq, Ord, Show, and 
Typeable, and to use explicit exports so that only queries and their 
atoms are exported. For now, we will add facts to the hierarchy lazily, 
as our projects need them.


Re: Who is afraid of arrows, was Re: [Haskell-cafe] ANNOUNCE: Haskell XML Toolbox Version 9.0.0

2010-10-14 Thread Alexander Solla


On Oct 11, 2010, at 11:48 AM, Gregory Crosswhite wrote:

No, but there is no point in using a formalism that adds complexity  
without adding functionality.  Arrows are more awkward to use than  
monads because they were intentionally designed to be less powerful  
than monads in order to cover situations in which one could not use  
a monad.  When your problem is solved by a monad there is no point  
in using arrows since an arrow require you to jump through extra  
hoops to accomplish the same goal.




But they do "add" functionality.  An arrow is something like a monad/ 
co-monad pair.  Half of the arrow is defined to parse input.  The  
other half is defined to process the parse tree and produce output.   
An arrow is "just" a functor from one named type to another.


No, that is not at all the problem with arrows.  The problem with  
arrows is that they are more restrictive than monads in two  
respects.  First, unlike monads, in general they do not let you  
perform an arbitrary action in response to an input.  Second, they  
place restrictions on how you define the input arguments of the  
arrow because you can't feed the output of one arrow into to input  
of the next unless said input is captured in the arrows type.


They aren't more restrictive.  Just use an identity comonad for the  
input half if you want to deal with monads.  You can even make monad  
instances for arrows of this form.  The latter objection is rather the  
point of using an arrow.  "Parser -> Process", in a single construct.   
There is an analogy here.  A monad, in general, corresponds to a 
catamorphism on a functor algebra.  A co-monad corresponds to an 
anamorphism.  The composition of an anamorphism and a catamorphism is a 
hylomorphism:  a thing that unwraps and re-wraps.


I do tend to use the functor type class (or category-extras, or even  
data.category) much more than arrows, though.  It's all more-or-less  
the same.


Re: [Haskell-cafe] ANNOUNCE: Facts

2010-10-14 Thread Alexander Solla


On Oct 14, 2010, at 3:00 PM, Henk-Jan van Tuyl wrote:

The list of countries is maybe less static then you would think;  
since last Sunday, the Netherlands Antilles does not exist anymore;  
instead there are two new countries: Curaçao and St. Maarten [0]


Thanks for the update.  I'll put it in the errata.  However, I'm 
following the U.N. recommended list of country names, which hasn't  
been updated since August 2009.  I emailed the Working Group on  
Country Names about it a few minutes ago.


-Alex


Re: [Haskell-cafe] In what language...?

2010-10-15 Thread Alexander Solla


On Oct 15, 2010, at 1:36 PM, Andrew Coppin wrote:

Does anybody have any idea which particular dialect of pure math  
this paper is speaking? (And where I can go read about it...)


It's some kind of typed logic with lambda abstraction and some notion  
of witnessing, using Gentzen (I think!) style derivation notation.  
Those A |- B things mean A "derives" B.  The "|-" is also called a  
Tee.  If your mail client can see underlining, formulas like


A, B
----
   |
   A

mean:

A, B |- A

That's where the Tee gets its name.  It's a T under A, B.  The former  
notation is better for some uses, since it "meshes" with the method of  
"truth trees".



Re: [Haskell-cafe] In what language...?

2010-10-25 Thread Alexander Solla


On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:

Hypothesis: The fact that the average Haskeller thinks that this  
kind of dense cryptic material is "pretty garden-variety" notation  
possibly explains why normal people think Haskell is scary.


Maybe, but the notation is still clearer than most programming  
notations for expressing the same ideas.  Most "normal" people don't  
realize that mathematicians discovered the constructs for expressing  
structures and computations 50 to hundreds of years before the  
invention of the semiconductor, or that the mathematician's goal of  
"elegant" expression is pragmatic.  Elegant expressions of a problem  
and its solutions are the easiest to work with.



Re: [Haskell-cafe] In what language...?

2010-10-25 Thread Alexander Solla


On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:

Type theory doesn't actually interest me, I just wandered what the  
hell all the notation means.


Sorry for the double email.

I recommend "Language, Proof, and Logic", by Barwise and 
Etchemendy.  It doesn't go into type theory (directly).  It is a book  
about logic for technical people (it was in use at the Reed College  
philosophy department for many years -- maybe it still is).  It is  
very approachable.  The last part of the book is about "advanced"  
logic, including model theory, some aspects of forcing (there's a  
forcing proof of Löwenheim-Skolem), Gödel's theorems (they called 
Gödel the world's first hacker, with good reason), and some sections 
on higher order logics.


It doesn't include all of the notation you asked about.  But  
understanding the basics will be very helpful, especially if you  
understand the underlying concepts.  For example, set theory is a  
first order logic that can be recast as a second order logic.  This is  
more-or-less the origin of type theory (types were originally  
witnesses to the "level" at which a term is defined -- this will make  
sense in context).  The paper you asked about has a richer "ontology"  
than ZF -- it promotes more things to "named" "kinds" of terms than  
plain old ZF.  But the promotion is straightforward, and the same  
logical rules apply.


You can get it cheap ($1.70 for the cheapest copy) through alibris.com  
(I am not too happy with them at the moment, but their prices are  
low.  Especially if you can wait a few weeks for the book)



Re: [Haskell-cafe] In what language...?

2010-10-26 Thread Alexander Solla


On Oct 26, 2010, at 12:43 PM, Andrew Coppin wrote:

Propositional logic is quite a simple logic, where the building  
blocks
are atomic formulae and the usual logical connectives. An example  
of a
well-formed formula might be "P → Q". It tends to be the first  
system

taught to undergraduates, while the second is usually the first-order
predicate calculus, which introduces predicates and quantifiers.


Already I'm feeling slightly lost. (What does the arrow denote?  
What's are "the usual logcal connectives"?)


The arrow is notation for "If P, then Q".  The other "usual" logical 
connectives are "not" (denoted by ~, !, the funky little sideways L, 
and probably others); "or" (denoted by \/ or v -- both pronounced 
"or", "vee", or even "join" -- as well as | and ||, and probably others); 
and "and" (denoted by /\, a smaller upside-down v (pronounced "wedge", 
"and", or even "meet"), &, &&, and probably others).





Predicates are usually interpreted as properties; we might write
"P(x)" or "Px" to indicate that object x has the property P.


Right. So a proposition is a statement which may or may not be true,  
while a predicate is some property that an object may or may not  
possess?


Yes.  For any given object a (which is not a "variable" -- we usually  
reserve x, y, z to denote variables, and objects are denoted by  a, b,  
c), P(a) is a proposition "about" a.  Something like "forall x P(x)"  
means that P(x) is true for every object in the domain you are  
considering.




I also don't know exactly what "discrete mathematics" actually  
covers.

Discrete mathematics is concerned with mathematical structures which
are discrete, rather than continuous.


Right... so its domain is simply *everything* that is discrete? From  
graph theory to cellular automata to finite fields to difference 
equations to number theory? That would seem to cover approximately  
50% of all of mathematics. (The other 50% being the continuous  
mathematics, presumably...)


Basically, yes.  There are some nuances, in that continuous structures  
might be studied in terms of discrete structures, and vice-versa.


