[Haskell] announce: NetSNMP 0.2 and 0.3

2013-04-25 Thread Pavlo Kerestey
Hello everyone,

I have just published an upgrade to the NetSNMP library. It was previously
maintained by John Dorsey, and we have taken over maintenance for now.

We have simultaneously published two versions:
- The 0.2.* branch introduces snmpBulkWalk and changes the maintainer.
- The 0.3.* branch additionally changes the data representation from
String to ByteString.

As I write this, I realize that I should have mentioned the original
author of the code in the cabal file. I promise to do so in the next
version :)

We also plan to add QuickCheck-based tests soon.

If you run into any issues, feel free to contact me here:
https://github.com/ptek/netsnmp

I hope this will be useful,
Pavlo Kerestey.
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Haskell training in San Francisco Bay Area and New York

2013-04-25 Thread Duncan Coutts
Well-Typed are offering Haskell courses in the San Francisco Bay Area
and New York in early June.

They are for professional developers who want to learn Haskell or
improve their skills. There is a 2-day introductory course and a 2-day
advanced course.

Full course and registration details:
http://www.well-typed.com/services_training

Well-Typed are running these courses in partnership with FP Complete and
Skills Matter.

Locations, dates


San Francisco Bay Area

* Introductory Course: June 4-5th, 2013
* Advanced Course: June 6-7th, 2013


New York

* Introductory Course: June 10-11th, 2013
* Advanced Course: June 12-13th, 2013
* Early bird discount before April 29th


-- 
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/




[Haskell] MEMOCODE 2013: Call for papers

2013-04-25 Thread Emil Axelsson

CALL FOR PAPERS - MEMOCODE 2013


Eleventh ACM/IEEE
International Conference on Formal Methods and Models for Codesign
http://www.memocode-conference.com

18-20 October 2013, Portland, Oregon, USA
Co-located with DIFTS and FMCAD


SCOPE

The eleventh ACM/IEEE MEMOCODE conference focuses on research and
developments in methods, tools, and architectures for the design of
hardware/software systems. MEMOCODE seeks submissions that present
novel formal methods and design techniques to create, refine, and
verify complex hardware/software systems and to tackle the tight
constraints on timing, power, costs, reliability and security that
these systems face.

We also invite application-oriented papers, and especially encourage
submissions that highlight the tools and design perspective of formal
methods and models, including success as well as failure stories,
constructive analysis thereof, and demonstrations of hardware/software
codesign.

Techniques may range from formal verification to simulation-based
verification technologies, and from languages to design paradigms that
unify hardware and software codesign. Architectures may range from
cloud computing and multi-core platforms to networks on chip.
Applications and demonstrators may address values ranging from
productivity and reuse to performance and quality.


PAPER SUBMISSIONS

Paper submissions are accepted through the EasyChair review system
on our web site. Papers must be 10 pages or less, and must be formatted
following IEEE Computer Society guidelines. Submissions must be written
in English, describe original work, and not substantially overlap
papers that have been published or are being submitted to a journal or
another conference with published proceedings.


  Abstract submission deadline: May 8
  Paper submission deadline: May 15
  Notification of acceptance: June 19
  Final version for Papers: July 17


DESIGN CONTEST

MEMOCODE 2013 is proud to organize its traditional Design Contest.
The conference will sponsor at least one prize with a significant
monetary award. Each team delivering a complete and working solution
will be invited to prepare a 2-page abstract to be published in the
proceedings and to present it during a dedicated poster session at
the conference. The winning teams may contribute a 4-page short paper
for presentation in the conference program. Further information
will be made available on our website.


  Design Contest start: May 15
  Design submission deadline: June 12
  Notification of design results: June 19
  Final version for abstracts: July 17


PUBLICATION

Conference proceedings will be published by the IEEE Computer Society.
Besides conference proceedings, we are planning a special journal issue
with the very best 2013 papers of MEMOCODE, DIFTS, and FMCAD.



Re: [Haskell-cafe] Fwd: How to do automatic reinstall of all dependencies?

2013-04-25 Thread Erik Hesselink
I think --reinstall only reinstalls the package you are actually
installing, not the dependencies. You could try using a sandboxing
tool, like cabal-dev. Then you just do 'cabal-dev install', and when
you want to reinstall everything, you do 'rm -r cabal-dev/' to wipe the
sandbox and start over.

Regards,

Erik

On Thu, Apr 25, 2013 at 12:29 AM, capn.fre...@gmail.com
capn.fre...@gmail.com wrote:


 -db

 - Forwarded message -
 From: Captain Freako capn.fre...@gmail.com
 Date: Tue, Apr 23, 2013 9:21 pm
 Subject: How to do automatic reinstall of all dependencies?
 To: haskell-cafe@haskell.org

 Hi all,

 Does anyone know why the following is not working, as an automatic way of
 reinstalling all dependencies?:

 dbanas@dbanas-lap:~/prj/AMI-Tool$ cabal install --only-dependencies
 --reinstall --force-reinstalls parsec
 Resolving dependencies...
 All the requested packages are already installed:
 Use --reinstall if you want to reinstall anyway.
 dbanas@dbanas-lap:~/prj/AMI-Tool$

 Thanks,
 -db




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: How to do automatic reinstall of all dependencies?

2013-04-25 Thread Alexander Kjeldaas
This is not what you asked for, but reinstalling *all dependencies*
probably isn't such a good idea, because ultimately some dependencies
are shipped with GHC and you might not be able to reinstall them.

Here is a useful formula I developed to avoid cabal hell and always
*upgrade* dependencies whenever a new package is installed.

$ cabal install --upgrade-dependencies `eval echo $(ghc-global-constraints)` package-name

What this does is pin the versions of all global packages.  These are by
default the ones that are shipped with GHC, so the above command explicitly
excludes those from being upgraded.

The ghc-global-constraints function is something I have in my .bashrc file,
and it looks like this:

function ghc-global-constraints() {
    ghc-pkg list --global | tail -n+2 | head -n-1 | grep -v '(' | while read a; do
        VER=${a##*-}
        PKG=${a%-*}
        echo -n "--constraint='$PKG==$VER' "
    done
}
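
In case the parameter expansions in that loop are unfamiliar, here is the transformation in isolation (the package line is a made-up sample of what `ghc-pkg list --global` prints):

```shell
# A sample line, as ghc-pkg would print it (hypothetical version number):
a="base-4.5.1.0"

VER=${a##*-}   # strip the longest prefix matching '*-': leaves "4.5.1.0"
PKG=${a%-*}    # strip the shortest suffix matching '-*': leaves "base"

echo "--constraint=$PKG==$VER"
```

which prints `--constraint=base==4.5.1.0`.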

This technique depends on actually fixing broken package dependencies, but
today that's usually just a github fork away, and often easier than dealing
with multiple cabal-dev installations IMO.

Alexander



On Thu, Apr 25, 2013 at 12:29 AM, capn.fre...@gmail.com 
capn.fre...@gmail.com wrote:



 -db

 - Forwarded message -
 From: Captain Freako capn.fre...@gmail.com
 Date: Tue, Apr 23, 2013 9:21 pm
 Subject: How to do automatic reinstall of all dependencies?
 To: haskell-cafe@haskell.org

 Hi all,

 Does anyone know why the following is not working, as an automatic way of
 reinstalling all dependencies?:

 dbanas@dbanas-lap:~/prj/AMI-Tool$ cabal install --only-dependencies
 --reinstall --force-reinstalls parsec
 Resolving dependencies...
 All the requested packages are already installed:
 Use --reinstall if you want to reinstall anyway.
 dbanas@dbanas-lap:~/prj/AMI-Tool$

 Thanks,
 -db






[Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread harry
If I understand correctly, the problem with datatype contexts is that if we
have e.g.
  data Eq a => Foo a = Foo a
the constraint Eq a is thrown away after a Foo is constructed, and any
method using Foos must repeat Eq a in its type signature.

Why were these contexts removed from the language, instead of fixing them?

PS This is following up on a discussion on haskell-beginners, "How to avoid
repeating a type restriction from a data constructor". I'm interested in
knowing whether there's a good reason not to allow this, or if it's just a
consequence of the way type classes are implemented by compilers.
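
For concreteness, here is a small, self-contained sketch of the repetition being described (the function name is made up; on GHC 7.4 and later the feature survives only as the deprecated DatatypeContexts extension):

```haskell
{-# LANGUAGE DatatypeContexts #-} -- deprecated; emits a warning on modern GHC

data Eq a => Foo a = Foo a

-- Even though a Foo can only ever be constructed at an Eq type,
-- the constraint is discarded again after construction, so it has
-- to be repeated here; dropping 'Eq a =>' makes this fail to typecheck.
isSame :: Eq a => Foo a -> a -> Bool
isSame (Foo x) y = x == y
```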




Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Joe Quinn
From what I have heard, they are completely subsumed by GADTs, which is
a stable enough extension that datatype contexts were considered not worth
saving.


Your Foo would be something like this:

data Foo a where
  Foo :: Eq a => a -> Foo a


On 4/25/2013 6:38 AM, harry wrote:

If I understand correctly, the problem with datatype contexts is that if we
have e.g.
   data Eq a => Foo a = Foo a
the constraint Eq a is thrown away after a Foo is constructed, and any
method using Foos must repeat Eq a in its type signature.

Why were these contexts removed from the language, instead of fixing them?

PS This is following up on a discussion on haskell-beginners, How to avoid
repeating a type restriction from a data constructor. I'm interested in
knowing whether there's a good reason not to allow this, or if it's just a
consequence of the way type classes are implemented by compilers.







Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Kim-Ee Yeoh
On Thu, Apr 25, 2013 at 6:36 PM, Joe Quinn headprogrammingc...@gmail.com wrote:

 data Foo a where
   Foo :: Eq a => a -> Foo a


is equivalent to

data Foo a = Eq a => Foo a

but is different from

data Eq a => Foo a = Foo a

(Yup, tripped up a few of us already!)

-- Kim-Ee


Re: [Haskell-cafe] Fwd: How to do automatic reinstall of all dependencies?

2013-04-25 Thread Johannes Waldmann
Alexander Kjeldaas alexander.kjeldaas at gmail.com writes:

 cabal install --upgrade-dependencies `eval echo $(ghc-global-constraints)` package-name

for a moment I was reading ghc --global-constraints there ... - J.W.






Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread harry
Kim-Ee Yeoh ky3 at atamo.com writes:

 data Foo a where
   Foo :: Eq a => a -> Foo a
 
 is equivalent to
 
 data Foo a = Eq a => Foo a
 
 but is different from
 
 data Eq a => Foo a = Foo a

... and nothing in GADTs does what one would naively expect the last
declaration to do.




[Haskell-cafe] Why the `-ghc7.4.2' suffix on *.SO base names?

2013-04-25 Thread Captain Freako
In trying to compile a two-year-old, previously working project on a new
machine with a fresh Haskell Platform installation, I bumped into this:

ghc -o libami.so -shared -package parsec -lHSrts -lm -lffi -lrt AMIParse.o
AMIModel.o ami_model.o ExmplUsrModel.o Filter.o
/usr/bin/ld: /usr/lib/ghc-7.4.2/libHSrts.a(RtsAPI.o): relocation
R_X86_64_32S against `ghczmprim_GHCziTypes_Czh_con_info' can not be used
when making a shared object; recompile with -fPIC
/usr/lib/ghc-7.4.2/libHSrts.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status

When I investigated why the *.A, instead of the *.SO, was being picked up,
I found that the *.SO had a `-ghc7.4.2' appended to its base name.

Is this new? What is its purpose?
Do I need to add a `LIB_SUFFIX' variable to my makefile and append it to
all of my `-l...'s, or is there a more elegant solution?

Thanks!
-db


Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Brandon Allbery
On Thu, Apr 25, 2013 at 6:38 AM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that if we
 have e.g.
  data Eq a => Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?


As I understand it, it's because fixing them involves passing around a
dictionary along with the data, and you can't do that with a standard
declaration (it amounts to an extra chunk of data that's only *sometimes*
wanted, and that sometimes complicates things). GADTs already have to
pass around extra data in order to support their constructors and
destructors; and, being new and not part of the standard, they don't have
backward compatibility or standards compatibility issues, so they can get
away with including the extra dictionary without breaking existing programs.
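
A minimal sketch of what that stored dictionary buys you (names here are made up, not from the thread):

```haskell
{-# LANGUAGE GADTs #-}

data Foo a where
  Foo :: Eq a => a -> Foo a   -- the Eq dictionary is packed into the constructor

-- Note: no Eq constraint in this signature. Pattern matching on Foo
-- unpacks the stored dictionary and brings it back into scope.
sameAs :: Foo a -> a -> Bool
sameAs (Foo x) y = x == y
```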

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonadhttp://sinenomine.net


Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Gábor Lehel
I've wondered this too. What would have been wrong with a simple
source-to-source translation, where a constraint on the datatype itself
translates to the same constraint on each of its constructors? Perhaps it
would be unintuitive that you would have to pattern match before gaining
access to the constraint? On a superficial examination it would have been
backwards-compatible, allowing strictly more programs than the previous
handling.

On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that if we
 have e.g.
   data Eq a => Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, How to avoid
 repeating a type restriction from a data constructor. I'm interested in
 knowing whether there's a good reason not to allow this, or if it's just a
 consequence of the way type classes are implemented by compilers.






-- 
Your ship was destroyed in a monadic eruption.


Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread harry
Brandon Allbery allbery.b at gmail.com writes:

 As I understand it, it's because fixing them involves passing around a
dictionary along with the data, and you can't do that with a standard
declaration (it amounts to an extra chunk of data that's only *sometimes*
wanted, and that sometimes complicates things). GADTs already have to pass
around extra data in order to support their constructors and destructors;
and, being new and not part of the standard, they don't have backward
compatibility or standards compatibility issues, so they can get away with
including the extra dictionary without breaking existing programs.

But you can't do this with GADTs either?






[Haskell-cafe] Munich Haskell Meeting

2013-04-25 Thread Heinrich Hördegen



Dear all,

our next regular meeting will take place in Munich on the 29th of April 
at 19h30 at Cafe Puck.


If you plan to join us, please go to

http://www.haskell-munich.de/dates

and click the button.

Until then, have a nice time,
Heinrich



Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Dan Doel
It is not completely backwards compatible, because (for instance) the
declaration:

newtype C a => Foo a = Foo a

was allowed, but:

newtype Foo a where
  Foo :: C a => a -> Foo a

is an illegal definition. It can only be translated to a non-newtype data
declaration, which changes the semantics.
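
The semantic difference Dan alludes to shows up in pattern matching (a sketch with made-up names): newtype patterns are free, while data patterns force the scrutinee.

```haskell
newtype N a = N a
data    D a = D a

matchN :: N a -> String
matchN (N _) = "matched"   -- a newtype pattern never forces its argument

matchD :: D a -> String
matchD (D _) = "matched"   -- a data pattern forces the argument to WHNF
```

Here `matchN undefined` evaluates to "matched", while `matchD undefined` raises the exception, which is why translating a newtype declaration into a data declaration changes which programs terminate.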


On Thu, Apr 25, 2013 at 10:35 AM, Gábor Lehel illiss...@gmail.com wrote:

 I've wondered this too. What would have been wrong with a simple
 source-to-source translation, where a constraint on the datatype itself
 translates to the same constraint on each of its constructors? Perhaps it
 would be unintuitive that you would have to pattern match before gaining
 access to the constraint? On a superficial examination it would have been
 backwards-compatible, allowing strictly more programs than the previous
 handling.

 On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that if
 we
 have e.g.
   data Eq a => Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, How to
 avoid
 repeating a type restriction from a data constructor. I'm interested in
 knowing whether there's a good reason not to allow this, or if it's just a
 consequence of the way type classes are implemented by compilers.






 --
 Your ship was destroyed in a monadic eruption.




[Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Alexey Egorov

Hi,
suppose that there is the following data family:
 data family D a
 data instance D Int = DInt Int
 data instance D Bool = DBool Bool
it is not possible to match on constructors from different instances:
 -- type error
 a :: D a -> a
 a (DInt x) = x
 a (DBool x) = x
however, following works:
 data G :: * -> * where
 GInt :: G Int
 GBool :: G Bool

 b :: G a -> D a -> a
 b GInt (DInt x) = x
 b GBool (DBool x) = x
The reason the second example works is the equality constraints (a ~ Int) and
(a ~ Bool) introduced by the GADT's constructors, I suppose.
I'm curious - why don't data family constructors (such as DInt and DBool)
imply such constraints while typechecking pattern matches?
Thanks.


Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Francesco Mazzoli
At Thu, 25 Apr 2013 20:29:16 +0400,
Alexey Egorov wrote:
 I'm curious - why data families constructors (such as DInt and DBool) doesn't
 imply such constraints while typechecking pattern matching?

I think you are misunderstanding what data families do.  ‘DInt :: DInt -> D Int’
and ‘DBool :: DBool -> D Bool’ are two data constructors for *different* data
types (namely, ‘D Int’ and ‘D Bool’).  The type family ‘D :: * -> *’ relates
types to said distinct data types.

On the other hand the type constructor ‘D :: * -> *’ parametrises a *single*
data type over another type—the fact that the parameter can be constrained
depending on the data constructor doesn’t really matter here.

Would you expect this to work?

 newtype DInt a = DInt a
 newtype DBool a = DBool a
 
 type family D a
 type instance D Int = DInt Int
 type instance D Bool = DBool Bool
 
 a :: D a -> a
 a (DInt x) = x
 a (DBool x) = x

Francesco



Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Francesco Mazzoli
At Thu, 25 Apr 2013 19:08:17 +0100,
Francesco Mazzoli wrote:
 ... ‘DInt :: DInt -> D Int’ and ‘DBool :: DBool -> D Bool’ ...

This should read ‘DInt :: Int -> D Int’ and ‘DBool :: Bool -> D Bool’.

Francesco



Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Gábor Lehel
Good point, again. Is that the only problem with it?

On Thu, Apr 25, 2013 at 5:57 PM, Dan Doel dan.d...@gmail.com wrote:

 It is not completely backwards compatible, because (for instance) the
 declaration:

 newtype C a => Foo a = Foo a

 was allowed, but:

 newtype Foo a where
   Foo :: C a => a -> Foo a

 is an illegal definition. It can only be translated to a non-newtype data
 declaration, which changes the semantics.


 On Thu, Apr 25, 2013 at 10:35 AM, Gábor Lehel illiss...@gmail.com wrote:

 I've wondered this too. What would have been wrong with a simple
 source-to-source translation, where a constraint on the datatype itself
 translates to the same constraint on each of its constructors? Perhaps it
 would be unintuitive that you would have to pattern match before gaining
 access to the constraint? On a superficial examination it would have been
 backwards-compatible, allowing strictly more programs than the previous
 handling.

 On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that if
 we
 have e.g.
  data Eq a => Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, How to
 avoid
 repeating a type restriction from a data constructor. I'm interested in
 knowing whether there's a good reason not to allow this, or if it's just
 a
 consequence of the way type classes are implemented by compilers.






 --
 Your ship was destroyed in a monadic eruption.





-- 
Your ship was destroyed in a monadic eruption.


Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Francesco Mazzoli
At Thu, 25 Apr 2013 19:08:17 +0100,
Francesco Mazzoli wrote:
 Would you expect this to work?
 
  newtype DInt a = DInt a
  newtype DBool a = DBool a
  
  type family D a
  type instance D Int = DInt Int
  type instance D Bool = DBool Bool
  
  a :: D a -> a
  a (DInt x) = x
  a (DBool x) = x

Or even better:

 data family D a
 data instance D Int = DInt1 Int | DInt2 Int
 data instance D Bool = DBool Bool
 
 a :: D a -> a
 a (DInt1 x) = x
 a (DInt2 x) = x
 a (DBool x) = x

Francesco



Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Alexey Egorov
 Would you expect this to work?
 
 newtype DInt a = DInt a
 newtype DBool a = DBool a

 type family D a
 type instance D Int = DInt Int
 type instance D Bool = DBool Bool
 
 a :: D a -> a
 a (DInt x) = x
 a (DBool x) = x
 
 Or even better:
 
 data family D a
 data instance D Int = DInt1 Int | DInt2 Int
 data instance D Bool = DBool Bool
 
 a :: D a -> a
 a (DInt1 x) = x
 a (DInt2 x) = x
 a (DBool x) = x

Yes, my question is about why different instances are different types even if
they have the same type constructor (D).
I just find it confusing that, using the GADTs trick, it is possible to match
on different constructors.

Another confusing thing is that following works:

 data G :: * -> * where
 { GInt :: G Int
 ; GBool :: G Bool }

 b :: G a -> D a -> a
 b GInt (DInt x) = x
 b GBool (DBool x) = x

while quite similar doesn't:

 c :: D a -> G a -> a
 c (DInt x) GInt = x
 c (DBool x) GBool = x

However, viewing data families as type families + per-instance newtype/data 
declaration is helpful, thank you.


Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Francesco Mazzoli
At Fri, 26 Apr 2013 00:20:36 +0400,
Alexey Egorov wrote:
 Yes, my question is about why different instances are different types even if
 they have the same type constructor (D).
 I'm just find it confusing that using GADTs trick it is possible to match on
 different constructors.

See it this way: the two ‘D’s (the GADT one and the data family one) are both
type functions, taking a type and giving you back another type.

In standard Haskell all such type functions return instances of the same data
type (with a set of data constructors you can pattern match on), much like the
‘D’ of the GADTs.  With type/data families the situation changes, and the
returned type can be different depending on the provided type, which is what’s
happening here.

Now the thing you want to do is ultimately write your ‘a’, which as you said
relies on type coercions, but this fact (the ‘GADTs trick’) has nothing to do
with the fact that type families will relate types to different data types.

Francesco



Re: [Haskell-cafe] pattern matching on data families constructors

2013-04-25 Thread Roman Cheplyaka
Let's look at it from the operational perspective.

In the GADT case, the set of possibilities is fixed in advance (closed).

Every GADT constructor has a corresponding tag (a small integer) which,
when pattern-matching, tells us which branch to take.

In the data family case, the set of possibilities is open. It is harder
to do robust tagging over all the instances, given that new instances
can be added after the module is compiled.

The right way to do what you want is to use a type class and associate
your data family with that class:

  class C a where
    data D a
    a :: D a -> a

  instance C Int where
    data D Int = DInt Int
    a (DInt x) = x

  instance C Bool where
    data D Bool = DBool Bool
    a (DBool x) = x
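
Filled out as a complete module, that suggestion looks like this (the method is renamed from 'a' to 'unD' here purely for readability):

```haskell
{-# LANGUAGE TypeFamilies #-}

class C a where
  data D a
  unD :: D a -> a   -- the eliminator lives in the class, next to the family

instance C Int where
  data D Int = DInt Int
  unD (DInt x) = x

instance C Bool where
  data D Bool = DBool Bool
  unD (DBool x) = x

-- unD dispatches through the class dictionary, so one name now
-- covers every instance:
pair :: (Int, Bool)
pair = (unD (DInt 3), unD (DBool True))
```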

Roman

* Alexey Egorov elect...@list.ru [2013-04-25 20:29:16+0400]
 
 Hi,
 suppose that there is following data family:
  data family D a
  data instance D Int = DInt Int
  data instance D Bool = DBool Bool
 it is not possible to match on constructors from different instances:
  -- type error
  a :: D a -> a
  a (DInt x) = x
  a (DBool x) = x
 however, following works:
  data G :: * -> * where
  GInt :: G Int
  GBool :: G Bool
 
  b :: G a -> D a -> a
  b GInt (DInt x) = x
  b GBool (DBool x) = x
 The reason why second example works is equality constraints (a ~ Int) and (a 
 ~ Bool) introduced by GADT's constructors, I suppose.
 I'm curious - why data families constructors (such as DInt and DBool) doesn't 
 imply such constraints while typechecking pattern matching?
 Thanks.





Re: [Haskell-cafe] Instances for continuation-based FRP

2013-04-25 Thread Ertugrul Söylemez
Conal Elliott co...@conal.net wrote:

 I first tried an imperative push-based FRP in 1998, and I had exactly
 the same experience as Heinrich mentions. The toughest two aspects of
 imperative implementation were sharing and event merge/union/mappend.

This is exactly why I chose not to follow the imperative path from the
very beginning and followed Yampa's example instead.  Currently the
denotational semantics of Netwire are only in my head, but the following
is planned for the future:

  * Take inspiration from 'pipes' and find a way to add push/pull
without giving up ArrowLoop.  This has the highest priority, but
it's also the hardest part.

  * Write down the denotational semantics as a specification.
Optionally try to prove them in a theorem prover.

  * Engage more with you guys.  We all have brilliant ideas and more
communication could help us bring FRP to the masses.

I also plan to expose an opaque subset of Netwire which strictly
enforces the traditional notion of FRP, e.g. continuous time.  Netwire
itself is really a stream processing abstraction and doesn't force you
to program in a reactive style.  This is both a strength and a weakness.
There is too much potential for abuse in this general setting.


Greets,
Ertugrul

-- 
Not to be or to be and (not to be or to be and (not to be or to be and
(not to be or to be and ... that is the list monad.




Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Dan Doel
I can't think of any at the moment that are still in force. However, one
that might have been relevant at the time is:

data C a => Foo a = Foo a a

foo :: Foo a -> (a, a)
foo ~(Foo x y) = (x, y)

Irrefutable matches used to be disallowed for GADT-like things, which would
break the above if it were translated to GADTs. Now they just don't
introduce their constraints.

However, another thing to consider is that getting rid of data type
contexts was accepted into the language standard. It's not really possible
to fix them by translation to GADTs in the report, because GADTs aren't in
the report, and probably won't be for some time, if ever. And putting a
fixed version natively into the report would require nailing down a lot of
details. For instance, are the contexts simply invalid on newtypes, or do
they just work the old way?

I don't really think they're worth saving in general, though. I haven't
missed them, at least.
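(For reference, the GADT translation discussed above can be sketched
concretely; this uses Eq in place of the abstract class C, and a strict
match, since an irrefutable one no longer introduces the constraint:)

```haskell
{-# LANGUAGE GADTs #-}

-- Old-style datatype context:  data Eq a => Foo a = Foo a a
-- The GADT translation packs the constraint into the constructor,
-- so it is available again after a (strict) pattern match.
data Foo a where
  Foo :: Eq a => a -> a -> Foo a

fooPair :: Foo a -> (a, a)
fooPair (Foo x y) = (x, y)
```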

-- Dan


On Thu, Apr 25, 2013 at 3:19 PM, Gábor Lehel illiss...@gmail.com wrote:

 Good point, again. Is that the only problem with it?


 On Thu, Apr 25, 2013 at 5:57 PM, Dan Doel dan.d...@gmail.com wrote:

 It is not completely backwards compatible, because (for instance) the
 declaration:

newtype C a => Foo a = Foo a

 was allowed, but:

 newtype Foo a where
   Foo :: C a => a -> Foo a

 is an illegal definition. It can only be translated to a non-newtype data
 declaration, which changes the semantics.


 On Thu, Apr 25, 2013 at 10:35 AM, Gábor Lehel illiss...@gmail.comwrote:

 I've wondered this too. What would have been wrong with a simple
 source-to-source translation, where a constraint on the datatype itself
 translates to the same constraint on each of its constructors? Perhaps it
 would be unintuitive that you would have to pattern match before gaining
 access to the constraint? On a superficial examination it would have been
 backwards-compatible, allowing strictly more programs than the previous
 handling.

 On Thu, Apr 25, 2013 at 12:38 PM, harry volderm...@hotmail.com wrote:

 If I understand correctly, the problem with datatype contexts is that
 if we
 have e.g.
   data Eq a => Foo a = Foo a
 the constraint Eq a is thrown away after a Foo is constructed, and any
 method using Foos must repeat Eq a in its type signature.

 Why were these contexts removed from the language, instead of fixing
 them?

 PS This is following up on a discussion on haskell-beginners, "How to
 avoid repeating a type restriction from a data constructor". I'm
 interested in knowing whether there's a good reason not to allow this,
 or if it's just a consequence of the way type classes are implemented
 by compilers.






 --
 Your ship was destroyed in a monadic eruption.





 --
 Your ship was destroyed in a monadic eruption.



Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread wren ng thornton
On 4/25/13 9:49 PM, Dan Doel wrote:
 I don't really think they're worth saving in general, though. I haven't
 missed them, at least.

The thing I've missed them for (and what I believe they were originally
designed for) is adding constraints to derived instances. That is, if I
have:

data Bar a => Foo a = ... deriving Baz

Then this is equivalent to:

data Foo a = ...
instance Bar a => Baz (Foo a) where ...

where the second ellipsis is filled in by the compiler. Now that these
constraints have been removed from the language, I've had to either (a)
allow instances of derived classes which do not enforce sanity
constraints, or (b) implement the instances by hand even though they're
entirely boilerplate.

The behavior of these constraints is certainly unintuitive for beginners,
but the constraints themselves are very helpful when programming with
phantom types and type-level functions for constraints.

-- 
Live well,
~wren




Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 25/04/2013, at 3:47 AM, Duncan Coutts wrote:

 It looks like fold and unfold fusion systems have dual limitations:
 fold-based fusion cannot handle zip style functions, while unfold-based
 fusion cannot handle unzip style functions. That is fold-based cannot
 consume multiple inputs, while unfold-based cannot produce multiple
 outputs.

Yes. This is a general property of data-flow programs and not just compilation 
via Data.Vector style co-inductive stream fusion, or a property of 
fold/unfold/hylo fusion. 


Consider these general definitions of streams and costreams.

-- A stream must always produce an element.
type Stream a   = IO a

-- A costream must always consume an element.
type CoStream a = a -> IO ()


And operators on them (writing S for Stream and C for CoStream).

-- Versions of map.
map     :: (a -> b) -> S a -> S b    (ok)
comap   :: (a -> b) -> C b -> C a    (ok)

-- Versions of unzip.
unzip   :: S (a, b) -> (S a, S b)    (bad)
counzip :: C a -> C b -> C (a, b)    (ok)
unzipc  :: S (a, b) -> C b -> S a    (ok)

-- Versions of zip.
zip     :: S a -> S b -> S (a, b)    (ok)
cozip   :: C (a, b) -> (C a, C b)    (bad)
zipc    :: C (a, b) -> S a -> C b    (ok)



The operators marked (ok) can be implemented without buffering data, while the 
combinators marked (bad) may need an arbitrary sized buffer.

Starting with 'unzip', suppose we pull elements from the first component of the 
result (the (S a)) but not the second component (the (S b)). To provide these 
'a' elements, 'unzip' must pull tuples from its source stream (S (a, b)) and 
buffer the 'b' part until someone pulls from the (S b).

Dually, with 'cozip', suppose we push elements into the first component of the 
result (the (C a)). The implementation must buffer them until someone pushes 
the corresponding element into the (C b), only then can it push the whole tuple 
into the source (C (a, b)) costream.


The two combinators unzipc and zipc are hybrids:

For 'unzipc', if we pull an element from the (S a), then the implementation can 
pull a whole (a, b) tuple from the source (S (a, b)) and then get rid of the 
'b' part by pushing it into the (C b). The fact that it can get rid of the 'b' 
part means it doesn't need a buffer.

Similarly, for 'zipc', if we push a 'b' into the (C b) then the implementation 
can pull the corresponding 'a' part from the (S a) and then push the whole (a, 
b) tuple into the C (a, b). The fact that it can get the corresponding 'a' 
means it doesn't need a buffer.
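(Under these definitions the buffer-free (ok) operators are one-liners.
A sketch, with map/zip renamed mapS/zipS to avoid Prelude clashes:)

```haskell
-- The definitions from above: a stream always produces an element,
-- a costream always consumes one.
type Stream a   = IO a
type CoStream a = a -> IO ()

-- Each pull on the result maps to exactly one pull on the source,
-- so no buffering is needed.
mapS :: (a -> b) -> Stream a -> Stream b
mapS = fmap

-- Dually: each push into the result is one push into the source.
comap :: (a -> b) -> CoStream b -> CoStream a
comap f c = c . f

-- Pull one element from each source per pull on the result.
zipS :: Stream a -> Stream b -> Stream (a, b)
zipS sa sb = do { a <- sa; b <- sb; return (a, b) }

-- Push both components onward immediately.
counzip :: CoStream a -> CoStream b -> CoStream (a, b)
counzip ca cb (a, b) = ca a >> cb b

-- The hybrid: pulling an 'a' disposes of the 'b' by pushing it
-- into the costream, so again no buffer is needed.
unzipc :: Stream (a, b) -> CoStream b -> Stream a
unzipc s cb = do { (a, b) <- s; cb b; return a }
```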

I've got some hand drawn diagrams of this if anyone wants them (mail me), but 
none of it helps implement 'unzip' for streams or 'cozip' for costreams. 



 I'll be interested to see in more detail the approach that Ben is
 talking about. As Ben says, intuitively the problem is that when you've
 got multiple outputs so you need to make sure that someone is consuming
 them and that that consumption is appropriately synchronised so that you
 don't have to buffer (buffering would almost certainly eliminate the
 gains from fusion). That might be possible if ultimately the multiple
 outputs are combined again in some way, so that overall you still have a
 single consumer, that can be turned into a single lazy or eager loop.


At least for high performance applications, I think we've reached the limit of 
what short-cut fusion approaches can provide. By short cut fusion, I mean 
crafting a special source program so that the inliner + simplifier + 
constructor specialisation transform can crunch down the intermediate code into 
a nice loop. Geoff Mainland's recent paper extended stream fusion with support 
for SIMD operations, but I don't think stream fusion can ever be made to fuse 
programs with unzip/cozip-like operators properly. This is a serious problem 
for DPH, because the DPH vectoriser naturally produces code that contains these 
operators.

I'm currently working on Repa 4, which will include a GHC plugin that hijacks 
the intermediate GHC core code and performs the transformation described in 
Richard Waters's paper "Automatic transformation of series expressions into 
loops". The plugin will apply to stream programs, but not affect the existing 
fusion mechanism via delayed arrays. I'm using a cut down 'clock calculus' from 
work on synchronous data-flow languages to guarantee that all outputs from an 
unzip operation are consumed in lock-step. Programs that don't do this won't be 
well typed. Forcing synchronicity guarantees that Waters's transform will apply 
to the program.

The Repa plugin will also do proper SIMD vectorisation for stream programs, 
producing the SIMD primops that Geoff recently added. Along the way it will 
brutally convert all operations on boxed/lifted numeric data to their unboxed 
equivalents, because I am sick of adding bang patterns to every single function 
parameter in Repa programs. 
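(For anyone unfamiliar, the annotation burden in question looks like
this; a made-up example, not from Repa itself:)

```haskell
{-# LANGUAGE BangPatterns #-}

-- Without the bangs, both parameters arrive as boxed thunks and the
-- inner loop of a numeric computation allocates on every iteration.
sumSq :: Int -> Int -> Int
sumSq !x !y = x * x + y * y
```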

Ben.




Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Johan Tibell
Hi Ben,

On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote:
 The Repa plugin will also do proper SIMD vectorisation for stream programs, 
 producing the SIMD primops that Geoff recently added. Along the way it will 
 brutally convert all operations on boxed/lifted numeric data to their unboxed 
 equivalents, because I am sick of adding bang patterns to every single 
 function parameter in Repa programs.

How far is this plugin from being usable to implement a

{-# LANGUAGE Strict #-}

pragma for treating a single module as if Haskell was strict?

Cheers,
Johan



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 26/04/2013, at 2:15 PM, Johan Tibell wrote:

 Hi Ben,
 
 On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote:
 The Repa plugin will also do proper SIMD vectorisation for stream programs, 
 producing the SIMD primops that Geoff recently added. Along the way it will 
 brutally convert all operations on boxed/lifted numeric data to their 
 unboxed equivalents, because I am sick of adding bang patterns to every 
 single function parameter in Repa programs.
 
 How far is this plugin from being usable to implement a
 
 {-# LANGUAGE Strict #-}
 
 pragma for treating a single module as if Haskell was strict?

There is already one that does this, but I haven't used it.

http://hackage.haskell.org/package/strict-ghc-plugin

It's one of the demo plugins, though you need to mark individual functions 
rather than the whole module (which would be straightforward to add).

The Repa plugin is only supposed to munge functions using the Repa library, 
rather than the whole module.

Ben.





Re: [Haskell-cafe] Why were datatype contexts removed instead of fixing them?

2013-04-25 Thread Emil Axelsson

2013-04-26 04:31, wren ng thornton wrote:

On 4/25/13 9:49 PM, Dan Doel wrote:

I don't really think they're worth saving in general, though. I haven't
missed them, at least.


The thing I've missed them for (and what I believe they were originally
designed for) is adding constraints to derived instances. That is, if I
have:

 data Bar a => Foo a = ... deriving Baz

Then this is equivalent to:

 data Foo a = ...
 instance Bar a => Baz (Foo a) where ...

where the second ellipsis is filled in by the compiler. Now that these
constraints have been removed from the language, I've had to either (a)
allow instances of derived classes which do not enforce sanity
constraints, or (b) implement the instances by hand even though they're
entirely boilerplate.


I think standalone deriving solves this:

deriving instance Bar a => Baz (Foo a)
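(A minimal compilable version of this suggestion; the Foo type and the
derived classes here are illustrative, not from the thread:)

```haskell
{-# LANGUAGE StandaloneDeriving #-}

data Foo a = Foo a a

-- The constraint lives on the instance, not on the datatype,
-- which is exactly what the old datatype context was emulating.
deriving instance Show a => Show (Foo a)
deriving instance Eq a   => Eq   (Foo a)
```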

/ Emil



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Andrew Cowie
On Thu, 2013-04-25 at 21:15 -0700, Johan Tibell wrote:

 {-# LANGUAGE Strict #-}

God, I would love this. Obviously the plugin approach could do it, but
could not GHC itself just _not create thunks_ for things unless told to
be lazy in the presence of such a pragma?

[at which point, we need an annotation for laziness, instead of the
annotation for strictness. We're not using ampersand for anything, are
we?

func :: Int -> Thing -> WorldPeace
func a b = ...

Ah, bikeshed, how we love thee]

AfC
Sydney





Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Johan Tibell
On Thu, Apr 25, 2013 at 10:30 PM, Andrew Cowie
and...@operationaldynamics.com wrote:
 On Thu, 2013-04-25 at 21:15 -0700, Johan Tibell wrote:

 {-# LANGUAGE Strict #-}

 God, I would love this. Obviously the plugin approach could do it, but
 could not GHC itself just _not create thunks_ for things unless told to
 be lazy in the presence of such a pragma?

 [at which point, we need an annotation for laziness, instead of the
 annotation for strictness. We're not using ampersand for anything, are
 we?

 func :: Int -> Thing -> WorldPeace
 func a b = ...

 Ah, bikeshed, how we love thee]

We already have ~ that's used to make lazy patterns. :)
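(For illustration, a tilde in pattern position defers matching until a
component is actually demanded:)

```haskell
-- An irrefutable (lazy) pattern: the tuple is never forced unless one
-- of its components is, so the error below is never evaluated.
lazySnd :: (Int, Int) -> Int
lazySnd ~(_, y) = y
```

For example, `lazySnd (error "never forced", 3)` returns 3, whereas a
strict match on the pair would hit the error.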



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Johan Tibell
On Thu, Apr 25, 2013 at 9:20 PM, Ben Lippmeier b...@ouroborus.net wrote:

 On 26/04/2013, at 2:15 PM, Johan Tibell wrote:

 Hi Ben,

 On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote:
 The Repa plugin will also do proper SIMD vectorisation for stream programs, 
 producing the SIMD primops that Geoff recently added. Along the way it will 
 brutally convert all operations on boxed/lifted numeric data to their 
 unboxed equivalents, because I am sick of adding bang patterns to every 
 single function parameter in Repa programs.

 How far is this plugin from being usable to implement a

 {-# LANGUAGE Strict #-}

 pragma for treating a single module as if Haskell was strict?

 There is already one that does this, but I haven't used it.

 http://hackage.haskell.org/package/strict-ghc-plugin

 It's one of the demo plugins, though you need to mark individual functions 
 rather than the whole module (which would be straightforward to add).

 The Repa plugin is only supposed to munge functions using the Repa library, 
 rather than the whole module.

I guess what I was really hoping for was a plugin that rigorously
defined what it meant to make the code strict at a source language
level, rather than at a "make all lets into cases" Core level. :)
