[Haskell] FLOPS 2016: Call for Participation and Posters/Demos

2015-12-10 Thread Oleg

FLOPS 2016: 13th International Symposium on Functional and Logic Programming
March 4-6, 2016, Kochi, Japan
http://www.info.kochi-tech.ac.jp/FLOPS2016/

Call for Participation and Posters/Demos

   Registration will be open on Monday, Dec 21, 2015.
   Early registration deadline is Monday, Feb 8, 2016.
   Poster/Demo abstract submission deadline is Monday, Jan 11, 2016.

FLOPS aims to bring together practitioners, researchers, and
implementers of declarative programming to discuss mutually
interesting results and common problems: theoretical advances, their
implementations in language systems and tools, and applications of
these systems in practice. The scope includes all aspects of the
design, semantics, theory, applications, implementations, and teaching
of declarative programming. FLOPS specifically aims to promote
cross-fertilization between theory and practice and among different
styles of declarative programming.

In addition to presentations of regular research papers, the FLOPS
program includes tutorials, as well as a poster/demo session for
demonstrating the tools and systems described in the talks and for
presenting work in progress and getting feedback.

FLOPS has established a Best Paper award. The winner will be 
announced at the symposium.


CALL FOR POSTERS AND DEMONSTRATIONS

If you wish to present a poster at FLOPS, please send a plain-text
abstract by e-mail to  -- by January 11, 2016.
The abstract should include the title, the authors' names and
affiliations, and enough detail to judge its scope and relevance. We
will announce the accepted submissions on January 25, 2016. The
format of the poster will be announced at that time.

Important Dates
  * Submission due:  January 11, 2016 (Monday), any time zone
  * Notification:    January 25, 2016 (Monday)


INVITED TALKS

Kazunori UEDA (Waseda University)
The exciting time and hard-won lessons of the Fifth Generation
Computer Project

Atze Dijkstra (Utrecht University)
UHC: Coping with Compiler Complexity


TUTORIALS

Andreas Abel, on Agda
Atze Dijkstra, on Attribute Grammars
Neng-Fa Zhou, on programming in Picat


ACCEPTED PAPERS

Ki Yung Ahn and Andrea Vezzosi.
Executable Relational Specifications of Polymorphic Type Systems using Prolog

Markus Triska.
The Boolean Constraint Solver of SWI-Prolog: System Description

Peng Fu, Ekaterina Komendantskaya, Tom Schrijvers and Andrew Pond.
Proof Relevant Corecursive Resolution

Jay McCarthy, Burke Fetscher, Max New and Robert Bruce Findler.
A Coq Library For Internal Verification of Running-Times

Akimasa Morihata.
Incremental Computing with Abstract Data Structures

Wouter Swierstra and Joao Alpuim.
From proposition to program: embedding the refinement calculus in Coq

Andre Van Delft and Anatoliy Kmetyuk.
Declarative Programming with Algebra

Ian Mackie and Shinya Sato.
An interaction net encoding of Gödel's System T

Arthur Blot, Pierre-Evariste Dagand and Julia Lawall.
From Sets to Bits in Coq

Jeremy Yallop, David Sheets and Anil Madhavapeddy.
Declarative foreign function binding through generic programming

Praveen Narayanan, Jacques Carette, Wren Romano, 
Chung-Chieh Shan and Robert Zinkov.
Probabilistic inference by program transformation in Hakaru: System description

Francisco Javier Lopez-Fraguas, Manuel Montenegro and Juan Rodriguez-Hortala.
Polymorphic Types in Erlang Function Specifications

Remy Haemmerle, Pedro Lopez-Garcia, Umer Liqat, Maximiliano Klemen, 
John Gallagher and Manuel V. Hermenegildo.
A Transformational Approach to Parametric Accumulated-cost Static Profiling

Taus Brock-Nannestad. 
Space-efficient Planar Acyclicity Constraints: A Declarative Pearl
___
Haskell mailing list
Haskell@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell


[Haskell] Final CFP: FLOPS 2016, International Symposium on Functional and Logic Programming

2015-08-30 Thread Oleg

NEW: revised submission deadlines (Sep 21 for abstracts, Sep 25 for papers)

FLOPS 2016: March 3-6, 2016, Kochi, Japan

Final Call For Papers http://www.info.kochi-tech.ac.jp/FLOPS2016/

Writing down detailed computational steps is not the only way of
programming. The alternative, being used increasingly in practice, is
to start by writing down the desired properties of the result. The
computational steps are then (semi-)automatically derived from these
higher-level specifications. Examples of this declarative style
include functional and logic programming, program transformation and
rewriting, and extracting programs from proofs of their correctness.
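As a toy illustration of the declarative style the call describes (this example is ours, not part of the CFP), here is sorting in Haskell written in terms of the property the result must satisfy, an ordered rearrangement of the input, rather than as explicit step-by-step index manipulation:

```haskell
-- A sketch, not from the CFP: sorting defined by the invariant that
-- 'insert' preserves orderedness, rather than by mutating indices.
insert :: Ord a => a -> [a] -> [a]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x : y : ys          -- x belongs before y: done
  | otherwise = y : insert x ys     -- keep y, place x further right

sort :: Ord a => [a] -> [a]
sort = foldr insert []
```

The computational steps (where each element ends up) are derived by the runtime from the higher-level description, which is the point of the paragraph above.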

FLOPS aims to bring together practitioners, researchers, and
implementers of declarative programming to discuss mutually
interesting results and common problems: theoretical advances, their
implementations in language systems and tools, and applications of
these systems in practice. The scope includes all aspects of the
design, semantics, theory, applications, implementations, and teaching
of declarative programming. FLOPS specifically aims to promote
cross-fertilization between theory and practice and among different
styles of declarative programming.

Scope

FLOPS solicits original papers in all areas of declarative
programming:
 * functional, logic, functional-logic programming, rewriting
   systems, formal methods and model checking, program transformations
   and program refinements, developing programs with the help of theorem
   provers or SAT/SMT solvers;
 * foundations, language design, implementation issues (compilation
   techniques, memory management, run-time systems), applications and
   case studies.

FLOPS promotes cross-fertilization among different styles of
declarative programming. Therefore, submissions must be written to be
understandable by a wide audience of declarative programmers and
researchers. Submissions of system descriptions and declarative pearls
are especially encouraged.

Submissions should fall into one of the following categories:
 * Regular research papers: they should describe new results and will
   be judged on originality, correctness, and significance.
 * System descriptions: they should contain a link to a working
   system and will be judged on originality, usefulness, and design.
 * Declarative pearls: new and excellent declarative programs or
   theories with illustrative applications.
System descriptions and declarative pearls must be explicitly marked
as such in the title.

Submissions must be unpublished and not submitted for publication
elsewhere. Work that already appeared in unpublished or informally
published workshop proceedings may be submitted. See also the ACM SIGPLAN
Republication Policy.

The proceedings will be published by Springer International Publishing
in the Lecture Notes in Computer Science (LNCS) series, as a printed
volume as well as online in the digital library SpringerLink. 

Post-proceedings: The authors of the 4-7 best papers will be invited
to submit extended versions of their FLOPS papers to a special issue
of the journal Science of Computer Programming (SCP).


Important dates

Monday, September 21, 2015 (any time zone): Abstract submission
Friday, September 25, 2015 (any time zone): Submission deadline (FIRM)
Monday, November 16, 2015:                  Author notification
March 3-6, 2016:                            FLOPS Symposium
March 7-9, 2016:                            PPL Workshop

Invited Talks
- Kazunori UEDA (Waseda University)
  The exciting time and hard-won lessons of the Fifth Generation
  Computer Project

- Atze Dijkstra (Utrecht University)
  UHC: Coping with Compiler Complexity


Submission

Submissions must be written in English and may be up to 15 pages long,
including references; pearls are typically shorter. The formatting
must conform to Springer's guidelines. Regular research papers should
be supported by proofs and/or experimental results. If space runs out,
this supporting information should be made accessible elsewhere
(e.g., via a link to a web page or in an appendix).

Papers should be submitted electronically at
https://easychair.org/conferences/?conf=flops2016



Program Committee

Andreas Abel         Gothenburg University, Sweden
Lindsay Errington    USA
Makoto Hamana        Gunma University, Japan
Michael Hanus        CAU Kiel, Germany
Jacob Howe           City University London, UK
Makoto Kanazawa      National Institute of Informatics, Japan
Andy King            University of Kent, UK (PC Co-Chair)
Oleg Kiselyov        Tohoku University, Japan (PC Co-Chair)
Hsiang-Shang Ko      National Institute of Informatics, Japan
Julia Lawall         Inria-Whisper, France
Andres Loeh          Well-Typed LLP, UK
Anil Madhavapeddy    Cambridge University, UK
Jeff Polakow         USA
Marc Pouzet          Ecole normale superieure, France
Vitor Santos Costa   Universidade do Porto, Portugal
Tom Schrijvers       KU Leuven, Belgium

[Haskell] FLOPS 2016, Second CFP

2015-07-15 Thread Oleg

FLOPS 2016: 13th International Symposium on Functional and Logic Programming
March 3-6, 2016, Kochi, Japan

Call For Papers http://www.info.kochi-tech.ac.jp/FLOPS2016/

New: best paper award; in-cooperation with ACM SIGPLAN

Writing down detailed computational steps is not the only way of
programming. The alternative, being used increasingly in practice, is
to start by writing down the desired properties of the result. The
computational steps are then (semi-)automatically derived from these
higher-level specifications. Examples of this declarative style
include functional and logic programming, program transformation and
rewriting, and extracting programs from proofs of their correctness.

FLOPS aims to bring together practitioners, researchers, and
implementers of declarative programming to discuss mutually
interesting results and common problems: theoretical advances, their
implementations in language systems and tools, and applications of
these systems in practice. The scope includes all aspects of the
design, semantics, theory, applications, implementations, and teaching
of declarative programming. FLOPS specifically aims to promote
cross-fertilization between theory and practice and among different
styles of declarative programming.

Scope

FLOPS solicits original papers in all areas of declarative
programming:
 * functional, logic, functional-logic programming, rewriting
   systems, formal methods and model checking, program transformations
   and program refinements, developing programs with the help of theorem
   provers or SAT/SMT solvers;
 * foundations, language design, implementation issues (compilation
   techniques, memory management, run-time systems), applications and
   case studies.

FLOPS promotes cross-fertilization among different styles of
declarative programming. Therefore, submissions must be written to be
understandable by a wide audience of declarative programmers and
researchers. Submissions of system descriptions and declarative pearls
are especially encouraged.

Submissions should fall into one of the following categories:
 * Regular research papers: they should describe new results and will
   be judged on originality, correctness, and significance.
 * System descriptions: they should contain a link to a working
   system and will be judged on originality, usefulness, and design.
 * Declarative pearls: new and excellent declarative programs or
   theories with illustrative applications.
System descriptions and declarative pearls must be explicitly marked
as such in the title.

Submissions must be unpublished and not submitted for publication
elsewhere. Work that already appeared in unpublished or informally
published workshop proceedings may be submitted. See also the ACM SIGPLAN
Republication Policy.

The proceedings will be published by Springer International Publishing
in the Lecture Notes in Computer Science (LNCS) series, as a printed
volume as well as online in the digital library SpringerLink. 

Post-proceedings: The authors of the 4-7 best papers will be invited
to submit extended versions of their FLOPS papers to a special issue
of the journal Science of Computer Programming (SCP).


Important dates

Monday, September 14, 2015 (any time zone): Submission deadline
Monday, November 16, 2015:                  Author notification
March 3-6, 2016:                            FLOPS Symposium
March 7-9, 2016:                            PPL Workshop


Submission

Submissions must be written in English and may be up to 15 pages long,
including references; pearls are typically shorter. The formatting
must conform to Springer's guidelines. Regular research papers should
be supported by proofs and/or experimental results. If space runs out,
this supporting information should be made accessible elsewhere
(e.g., via a link to a web page or in an appendix).

Papers should be submitted electronically at
https://easychair.org/conferences/?conf=flops2016


Program Committee

Andreas Abel         Gothenburg University, Sweden
Lindsay Errington    USA
Makoto Hamana        Gunma University, Japan
Michael Hanus        CAU Kiel, Germany
Jacob Howe           City University London, UK
Makoto Kanazawa      National Institute of Informatics, Japan
Andy King            University of Kent, UK (PC Co-Chair)
Oleg Kiselyov        Tohoku University, Japan (PC Co-Chair)
Hsiang-Shang Ko      National Institute of Informatics, Japan
Julia Lawall         Inria-Whisper, France
Andres Loeh          Well-Typed LLP, UK
Anil Madhavapeddy    Cambridge University, UK
Jeff Polakow         PivotCloud, USA
Marc Pouzet          Ecole normale superieure, France
Vitor Santos Costa   Universidade do Porto, Portugal
Tom Schrijvers       KU Leuven, Belgium
Zoltan Somogyi       Australia
Alwen Tiu            Nanyang Technological University, Singapore
Sam Tobin-Hochstadt  Indiana University, USA
Hongwei Xi           Boston University, USA
Neng-Fa Zhou         CUNY Brooklyn College

[Haskell] [ANN] FLOPS CFP 2016

2015-04-27 Thread oleg

FLOPS 2016: 13th International Symposium on Functional and Logic Programming
March 3-6, 2016, Kochi, Japan

Call For Papers http://www.info.kochi-tech.ac.jp/FLOPS2016/

Writing down detailed computational steps is not the only way of
programming. The alternative, being used increasingly in practice, is
to start by writing down the desired properties of the result. The
computational steps are then (semi-)automatically derived from these
higher-level specifications. Examples of this declarative style
include functional and logic programming, program transformation and
rewriting, and extracting programs from proofs of their correctness.

FLOPS aims to bring together practitioners, researchers, and
implementers of declarative programming to discuss mutually
interesting results and common problems: theoretical advances, their
implementations in language systems and tools, and applications of
these systems in practice. The scope includes all aspects of the
design, semantics, theory, applications, implementations, and teaching
of declarative programming. FLOPS specifically aims to promote
cross-fertilization between theory and practice and among different
styles of declarative programming.

Scope

FLOPS solicits original papers in all areas of declarative
programming:
 * functional, logic, functional-logic programming, rewriting
   systems, formal methods and model checking, program transformations
   and program refinements, developing programs with the help of theorem
   provers or SAT/SMT solvers;
 * foundations, language design, implementation issues (compilation
   techniques, memory management, run-time systems), applications and
   case studies.

FLOPS promotes cross-fertilization among different styles of
declarative programming. Therefore, submissions must be written to be
understandable by a wide audience of declarative programmers and
researchers. Submissions of system descriptions and declarative pearls
are especially encouraged.

Submissions should fall into one of the following categories:
 * Regular research papers: they should describe new results and will
   be judged on originality, correctness, and significance.
 * System descriptions: they should contain a link to a working
   system and will be judged on originality, usefulness, and design.
 * Declarative pearls: new and excellent declarative programs or
   theories with illustrative applications.
System descriptions and declarative pearls must be explicitly marked
as such in the title.

Submissions must be unpublished and not submitted for publication
elsewhere. Work that already appeared in unpublished or informally
published workshop proceedings may be submitted. See also the ACM SIGPLAN
Republication Policy.

The proceedings will be published by Springer International Publishing
in the Lecture Notes in Computer Science (LNCS) series, as a printed
volume as well as online in the digital library SpringerLink. 

Post-proceedings: The authors of the 4-7 best papers will be invited
to submit extended versions of their FLOPS papers to a special issue
of the journal Science of Computer Programming (SCP).


Important dates

Monday, September 14, 2015 (any time zone): Submission deadline
Monday, November 16, 2015:                  Author notification
March 3-6, 2016:                            FLOPS Symposium
March 7-9, 2016:                            PPL Workshop


Submission

Submissions must be written in English and may be up to 15 pages long,
including references; pearls are typically shorter. The formatting
must conform to Springer's guidelines. Regular research papers should
be supported by proofs and/or experimental results. If space runs out,
this supporting information should be made accessible elsewhere
(e.g., via a link to a web page or in an appendix).

Papers should be submitted electronically at
https://easychair.org/conferences/?conf=flops2016


Program Committee

Andreas Abel         Gothenburg University, Sweden
Lindsay Errington    USA
Makoto Hamana        Gunma University, Japan
Michael Hanus        CAU Kiel, Germany
Jacob Howe           City University London, UK
Makoto Kanazawa      National Institute of Informatics, Japan
Andy King            University of Kent, UK (PC Co-Chair)
Oleg Kiselyov        Tohoku University, Japan (PC Co-Chair)
Hsiang-Shang Ko      National Institute of Informatics, Japan
Julia Lawall         Inria-Whisper, France
Andres Löh           Well-Typed LLP, UK
Anil Madhavapeddy    Cambridge University, UK
Jeff Polakow         PivotCloud, USA
Marc Pouzet          École normale supérieure, France
Vítor Santos Costa   Universidade do Porto, Portugal
Tom Schrijvers       KU Leuven, Belgium
Zoltan Somogyi       Australia
Alwen Tiu            Nanyang Technological University, Singapore
Sam Tobin-Hochstadt  Indiana University, USA
Hongwei Xi           Boston University, USA
Neng-Fa Zhou         CUNY Brooklyn College and Graduate Center, USA


Organizers

Andy King

[Haskell] The ML Family workshop: program and the 2nd call for participation

2014-07-30 Thread oleg

Implicits in Practice (Demo)
Nada Amin; Tiark Rompf
  Popularized by Scala, implicits are a versatile language feature
  that is receiving attention from the wider PL community. This demo
  will present common use cases and programming patterns with
  implicits in Scala.

Modular implicits
Leo White; Frédéric Bour
  We propose a system for ad-hoc polymorphism in OCaml based on using
  modules as type-directed implicit parameters.
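For readers coming from Haskell, the feature proposed here is roughly the module-level counterpart of type classes: the compiler selects an implementation from the types at the use site. The following comparison point is ours, not the authors', and only illustrates the kind of ad-hoc polymorphism in question:

```haskell
-- Ad-hoc polymorphism via type classes: the compiler picks the
-- 'describe' implementation from the argument's type, much as the
-- modular-implicits proposal would pick an implicit module in OCaml.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = "bool: " ++ show b

instance Describe Int where
  describe n = "int: " ++ show n
```

In the OCaml proposal, the role of an instance is played by an ordinary module passed as an implicit, type-directed parameter.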

* Session 5: To the bare metal 15:10 - 16:00

Metaprogramming with ML modules in the MirageOS (Experience report)
Anil Madhavapeddy; Thomas Gazagnaire; David Scott; Richard Mortier
  In this talk, we will go through how MirageOS lets the programmer
  build modular operating system components using a combination of
  functors and metaprogramming to ensure portability across both Unix
  and Xen, while preserving a usable developer workflow.

Compiling SML# with LLVM: a Challenge of Implementing ML on a Common
Compiler Infrastructure
Katsuhiro Ueno; Atsushi Ohori
  We report on an LLVM backend of SML#. This development provides
  detailed accounts of implementing functional language
  functionalities in a common compiler infrastructure developed mainly
  for imperative languages. We also describe techniques to compile
  SML#'s elaborated features including separate compilation with
  polymorphism, and SML#'s unboxed data representation.

* Session 6: No longer foreign 16:30 - 18:00

A Simple and Practical Linear Algebra Library Interface with Static
Size Checking (Experience report)
Akinori Abe; Eijiro Sumii
  While advanced type systems--specifically, dependent types on
  natural numbers--can statically ensure consistency among the sizes
  of collections such as lists and arrays, such type systems generally
  require non-trivial changes to existing languages and application
  programs, or tricky type-level programming. We have developed a
  linear algebra library interface that guarantees consistency (with
  respect to dimensions) of matrix (and vector) operations by using
  generative phantom types as fresh identifiers for statically
  checking the equality of sizes (i.e., dimensions).  This interface
  has three attractive features in particular.  (i) It can be
  implemented only using fairly standard ML types and its module
  system. Indeed, we implemented the interface in OCaml (without
  significant extensions like GADTs) as a wrapper for an existing
  library.  (ii) For most high-level operations on matrices (e.g.,
  addition and multiplication), the consistency of sizes is verified
  statically.  (Certain low-level operations, like accesses to
  elements by indices, need dynamic checks.)  (iii) Application
  programs in a traditional linear algebra library can be easily
  migrated to our interface. Most of the required changes can be made
  mechanically.  To evaluate the usability of our interface, we ported
  to it a practical machine learning library (OCaml-GPR) from an
  existing linear algebra library (Lacaml), thereby ensuring the
  consistency of sizes.
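The technique the abstract describes can be approximated in Haskell as well. The sketch below is ours (it uses a rank-2 "brand" where the paper uses OCaml's generative modules) and shows how a phantom parameter lets the type checker accept only additions of vectors known to share a size:

```haskell
{-# LANGUAGE RankNTypes #-}
-- Sketch of static size checking with a phantom brand 's': vectors
-- produced inside the same 'withVec' scope share a brand, so 'add'
-- type-checks only when both arguments provably have the same size.
newtype Vec s = Vec [Double]

-- Introduce a fresh brand for a runtime-sized vector; the rank-2 type
-- prevents 's' from escaping or unifying with any other brand.
withVec :: [Double] -> (forall s. Vec s -> r) -> r
withVec xs k = k (Vec xs)

add :: Vec s -> Vec s -> Vec s          -- same brand => same size
add (Vec a) (Vec b) = Vec (zipWith (+) a b)

toList :: Vec s -> [Double]
toList (Vec xs) = xs
```

Here `withVec [1,2] $ \v -> toList (add v v)` type-checks, while adding vectors from two different `withVec` scopes is rejected at compile time, mirroring the dimension-consistency guarantee (high-level operations checked statically, element access still dynamic).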

SML3d: 3D Graphics for Standard ML (Demo)
John Reppy
  The SML3d system is a collection of libraries designed to support
  real-time 3D graphics programming in Standard ML (SML). This paper
  gives an overview of the system and briefly highlights some of the
  more interesting aspects of its design and implementation.

* Poster presentation, at the joint poster session with the OCaml workshop
Nullable Type Inference
Michel Mauny; Benoit Vaugon
  We present a type system for nullable types in an ML-like
  programming language. We start with a simple system, presented as an
  algorithm, whose interest is to introduce the formalism that we
  use. We then extend it to a system using subtyping constraints,
  which accepts more programs. We state the usual properties for both
  systems. This is work in progress.


Program Committee

Kenichi Asai             Ochanomizu University, Japan
Matthew Fluet            Rochester Institute of Technology, USA
Jacques Garrigue         Nagoya University, Japan
Dave Herman              Mozilla, USA
Stefan Holdermans        Vector Fabrics, Netherlands
Oleg Kiselyov (Chair)    University of Tsukuba, Japan
Keiko Nakata             Tallinn University of Technology, Estonia
Didier Remy              INRIA Paris-Rocquencourt, France
Zhong Shao               Yale University, USA
Hongwei Xi               Boston University, USA



[Haskell] ML Family workshop: First Call for Participation

2014-07-06 Thread oleg

Higher-order, Typed, Inferred, Strict: ACM SIGPLAN ML Family Workshop
Thursday September 4, 2014, Gothenburg, Sweden

Call For Participation http://okmij.org/ftp/ML/ML14.html

Early registration deadline is August 3. Please register at
https://regmaster4.com/2014conf/ICFP14/register.php

This workshop specifically aims to recognize the entire extended ML
family and to provide a forum to present and discuss common issues,
both practical (compilation techniques, implementations of concurrency
and parallelism, programming for the Web) and theoretical (fancy
types, module systems, metaprogramming). We also encourage
presentations from related languages (such as Scala, Rust, Nemerle,
ATS, etc.), to exchange experience of further developing ML ideas.

The workshop is conducted in close cooperation with the
OCaml Users and Developers Workshop  http://ocaml.org/meetings/ocaml/2014/
taking place on September 5.


Program

* Andreas Rossberg
1ML -- core and modules as one (Or: F-ing first-class modules)

* Jacques Garrigue and Leo White
Type-level module aliases: independent and equal

* Felix Klock and Nicholas Matsakis
Demo: The Rust Language and Type System

* Tomas Petricek and Don Syme
Doing web-based data analytics with F#

* Thomas Braibant, Jonathan Protzenko and Gabriel Scherer
Well-typed generic smart-fuzzing for APIs

* Ramana Kumar, Magnus O. Myreen, Michael Norrish and Scott Owens
Improving the CakeML Verified ML Compiler

* Leo White and Frederic Bour
Modular implicits

* Nada Amin and Tiark Rompf
Implicits in Practice

* Anil Madhavapeddy, Thomas Gazagnaire, David Scott and Richard Mortier
Metaprogramming with ML modules in the MirageOS

* Katsuhiro Ueno and Atsushi Ohori
Compiling SML# with LLVM: a Challenge of Implementing ML on a Common
Compiler Infrastructure

* Akinori Abe and Eijiro Sumii
A Simple and Practical Linear Algebra Library Interface
with Static Size Checking

* John Reppy
SML3d: 3D Graphics for Standard ML


In addition, the joint poster session with the OCaml workshop will take
place in the afternoon on September 5. The session will include
posters:

* Nicolas Oury
Core.Sequence: a unified interface for sequences

* Thomas Gazagnaire, Amir Chaudhry, Anil Madhavapeddy, Richard Mortier, 
  David Scott, David Sheets, Gregory Tsipenyuk, Jon Crowcroft
Irminsule: a branch-consistent distributed library database

* Michel Mauny and Benoit Vaugon
Nullable Type Inference

* Edwin Toeroek
LibreS3: design, challenges, and steps toward reusable libraries

* Fabrice Le Fessant
A Case for Multi-Switch Constraints in OPAM


Program Committee

Kenichi Asai             Ochanomizu University, Japan
Matthew Fluet            Rochester Institute of Technology, USA
Jacques Garrigue         Nagoya University, Japan
Dave Herman              Mozilla, USA
Stefan Holdermans        Vector Fabrics, Netherlands
Oleg Kiselyov (Chair)    University of Tsukuba, Japan
Keiko Nakata             Tallinn University of Technology, Estonia
Didier Remy              INRIA Paris-Rocquencourt, France
Zhong Shao               Yale University, USA
Hongwei Xi               Boston University, USA



[Haskell] ML Family workshop - Extended deadline

2014-05-20 Thread oleg
: A justified argument for or against a language feature.
The argument must be substantiated, either theoretically (e.g., by a
demonstration of (un)soundness, an inference algorithm, a complexity
analysis), empirically or by a substantial experience. Personal experience
is accepted as justification so long as it is extensive and illustrated
with concrete examples.
  * Research Presentations: Research presentations should describe new ideas,
experimental results, or significant advances in ML-related projects. We
especially encourage presentations that describe work in progress, that
outline a future research agenda, or that encourage lively discussion.
These presentations should be structured so that they are, at least in
part, of interest to (advanced) users.
  * Experience Reports: Users are invited to submit Experience Reports about
their use of ML and related languages. These presentations do not need to
contain original research but they should tell an interesting story to
researchers or other advanced users, such as an innovative or unexpected
use of advanced features or a description of the challenges they are facing
or attempting to solve.
  * Demos: Live demonstrations or short tutorials should show new developments,
interesting prototypes, or work in progress, in the form of tools,
libraries, or applications built on or related to ML. (You will need to
provide all the hardware and software required for your demo; the workshop
organizers are only able to provide a projector.)


Important dates

Friday May 23 (any time zone):   Abstract submission
Monday June 30:  Author notification
Thursday September 4, 2014:  ML Family Workshop

Submission

Submissions should be at most two pages, in PDF format, and printable on US
Letter or A4 sized paper. A submission should have a synopsis (2-3 lines) and a
body between 1 and 2 pages, in one- or two-column layout. The synopsis should
be suitable for inclusion in the workshop program.

Submissions must be uploaded to the workshop submission website before the
submission deadline (Monday May 19, 2014).
For any question concerning the scope of the workshop or the submission
process, please contact the program chair.


Program Committee

Kenichi Asai             Ochanomizu University, Japan
Matthew Fluet            Rochester Institute of Technology, USA
Jacques Garrigue         Nagoya University, Japan
Dave Herman              Mozilla, USA
Stefan Holdermans        Vector Fabrics, Netherlands
Oleg Kiselyov (Chair)    Monterey, CA, USA
Keiko Nakata             Tallinn University of Technology, Estonia
Didier Remy              INRIA Paris-Rocquencourt, France
Zhong Shao               Yale University, USA
Hongwei Xi               Boston University, USA




[Haskell] ML Family Workshop -- Last call for presentations

2014-05-08 Thread oleg
 argument for or against a language feature.
The argument must be substantiated, either theoretically (e.g., by a
demonstration of (un)soundness, an inference algorithm, a complexity
analysis), empirically or by a substantial experience. Personal experience
is accepted as justification so long as it is extensive and illustrated
with concrete examples.
  * Research Presentations: Research presentations should describe new ideas,
experimental results, or significant advances in ML-related projects. We
especially encourage presentations that describe work in progress, that
outline a future research agenda, or that encourage lively discussion.
These presentations should be structured so that they are, at least in
part, of interest to (advanced) users.
  * Experience Reports: Users are invited to submit Experience Reports about
their use of ML and related languages. These presentations do not need to
contain original research but they should tell an interesting story to
researchers or other advanced users, such as an innovative or unexpected
use of advanced features or a description of the challenges they are facing
or attempting to solve.
  * Demos: Live demonstrations or short tutorials should show new developments,
interesting prototypes, or work in progress, in the form of tools,
libraries, or applications built on or related to ML. (You will need to
provide all the hardware and software required for your demo; the workshop
organizers are only able to provide a projector.)



Submission

Submissions should be at most two pages, in PDF format, and printable on US
Letter or A4 sized paper. A submission should have a synopsis (2-3 lines) and a
body between 1 and 2 pages, in one- or two-column layout. The synopsis should
be suitable for inclusion in the workshop program.

Submissions must be uploaded to the workshop submission website before the
submission deadline (Monday May 19, 2014).
For any question concerning the scope of the workshop or the submission
process, please contact the program chair.


Program Committee

Kenichi Asai             Ochanomizu University, Japan
Matthew Fluet            Rochester Institute of Technology, USA
Jacques Garrigue         Nagoya University, Japan
Dave Herman              Mozilla, USA
Stefan Holdermans        Vector Fabrics, Netherlands
Oleg Kiselyov (Chair)    Monterey, CA, USA
Keiko Nakata             Tallinn University of Technology, Estonia
Didier Remy              INRIA Paris-Rocquencourt, France
Zhong Shao               Yale University, USA
Hongwei Xi               Boston University, USA



[Haskell] Second CFP: Higher-order, Typed, Inferred, Strict: ML Family Workshop

2014-04-07 Thread oleg
., by a
demonstration of (un)soundness, an inference algorithm, a complexity
analysis), empirically or by a substantial experience. Personal experience
is accepted as justification so long as it is extensive and illustrated
with concrete examples.
  * Research Presentations: Research presentations should describe new ideas,
experimental results, or significant advances in ML-related projects. We
especially encourage presentations that describe work in progress, that
outline a future research agenda, or that encourage lively discussion.
These presentations should be structured in a way which can be, at least in
part, of interest to (advanced) users.
  * Experience Reports: Users are invited to submit Experience Reports about
their use of ML and related languages. These presentations do not need to
contain original research but they should tell an interesting story to
researchers or other advanced users, such as an innovative or unexpected
use of advanced features or a description of the challenges they are facing
or attempting to solve.
  * Demos: Live demonstrations or short tutorials should show new developments,
interesting prototypes, or work in progress, in the form of tools,
libraries, or applications built on or related to ML. (You will need to
provide all the hardware and software required for your demo; the workshop
organizers are only able to provide a projector.)


Important dates

Monday May 19 (any time zone):   Abstract submission
Monday June 30:  Author notification
Thursday September 4, 2014:  ML Family Workshop

Submission

Submissions should be at most two pages, in PDF format, and printable on US
Letter or A4 sized paper. A submission should have a synopsis (2-3 lines) and a
body between 1 and 2 pages, in one- or two-column layout. The synopsis should
be suitable for inclusion in the workshop program.

Submissions must be uploaded to the workshop submission website before the
submission deadline (Monday May 19, 2014).
For any question concerning the scope of the workshop or the submission
process, please contact the program chair.


Program Committee

Kenichi Asai          Ochanomizu University, Japan
Matthew Fluet         Rochester Institute of Technology, USA
Jacques Garrigue      Nagoya University, Japan
Dave Herman           Mozilla, USA
Stefan Holdermans     Vector Fabrics, Netherlands
Oleg Kiselyov (Chair) Monterey, CA, USA
Keiko Nakata          Tallinn University of Technology, Estonia
Didier Remy           INRIA Paris-Rocquencourt, France
Zhong Shao            Yale University, USA
Hongwei Xi            Boston University, USA


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Higher-order, Typed, Inferred, Strict: ACM SIGPLAN ML Family Workshop

2014-02-10 Thread oleg
 be substantiated, either theoretically (e.g., by a
demonstration of (un)soundness, an inference algorithm, a complexity
analysis), empirically or by a substantial experience. Personal experience
is accepted as justification so long as it is extensive and illustrated
with concrete examples.
  * Research Presentations: Research presentations should describe new ideas,
experimental results, or significant advances in ML-related projects. We
especially encourage presentations that describe work in progress, that
outline a future research agenda, or that encourage lively discussion.
These presentations should be structured in a way which can be, at least in
part, of interest to (advanced) users.
  * Experience Reports: Users are invited to submit Experience Reports about
their use of ML and related languages. These presentations do not need to
contain original research but they should tell an interesting story to
researchers or other advanced users, such as an innovative or unexpected
use of advanced features or a description of the challenges they are facing
or attempting to solve.
  * Demos: Live demonstrations or short tutorials should show new developments,
interesting prototypes, or work in progress, in the form of tools,
libraries, or applications built on or related to ML. (You will need to
provide all the hardware and software required for your demo; the workshop
organizers are only able to provide a projector.)


Important dates

Monday May 19 (any time zone):   Abstract submission
Monday June 30:  Author notification
Thursday September 4, 2014:  ML Family Workshop

Submission

Submissions should be at most two pages, in PDF format, and printable on US
Letter or A4 sized paper. A submission should have a synopsis (2-3 lines) and a
body between 1 and 2 pages, in one- or two-column layout. The synopsis should
be suitable for inclusion in the workshop program.

Submissions must be uploaded to the workshop submission website before the
submission deadline (Monday May 19, 2014).
For any question concerning the scope of the workshop or the submission
process, please contact the program chair.


Program Committee

Kenichi Asai          Ochanomizu University, Japan
Matthew Fluet         Rochester Institute of Technology, USA
Jacques Garrigue      Nagoya University, Japan
Dave Herman           Mozilla, USA
Stefan Holdermans     Vector Fabrics, Netherlands
Oleg Kiselyov (Chair) Monterey, CA, USA
Keiko Nakata          Tallinn University of Technology, Estonia
Didier Remy           INRIA Paris-Rocquencourt, France
Zhong Shao            Yale University, USA
Hongwei Xi            Boston University, USA


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] RankNTypes + ConstraintKinds to use Either as a union

2013-10-09 Thread oleg

Thiago Negri wrote:
 Why type inference can't resolve this code?

 {-# LANGUAGE RankNTypes, ConstraintKinds #-}

 bar :: (Num a, Num b) => (forall c. Num c => c -> c) -> Either a b -> Either a b
 bar f (Left a) = Left (f a)
 bar f (Right b) = Right (f b)

 bar' = bar (+ 2) -- This compiles ok

 foo :: (tc a, tc b) => (forall c. tc c => c -> c) -> Either a b -> Either a b
 foo f (Left a) = Left (f a)
 foo f (Right b) = Right (f b)

 foo' = foo (+ 2) -- This doesn't compile because foo' does not typecheck

The type inference of the constraint fails because it is
ambiguous. Observe that not only bar (+2) compiles OK, but also
bar id. The function id :: c -> c has no constraints attached, but
still fits for (forall c. Num c => c -> c).

Let's look at the problematic foo'. What constraint would you think
GHC should infer for tc? Num? Why not the composition of Num and Read,
or Num and Show, or Num and any other set of constraints? Or perhaps
Fractional (because Fractional implies Num)? For
constraints, we get the implicit subtyping (a term well-typed with
constraints C is also well-typed with any bigger constraint set C',
or any constraint set C'' which implies C).
Synonyms and superclass constraints break the principal types. 
So, inference is hopeless.

We have to help type inference and tell it which constraint we want.
For example,

 newtype C ctx = C (forall c. ctx c => c -> c)

 foo :: (ctx a, ctx b) => C ctx -> (forall c. ctx c => c -> c) ->
        Either a b -> Either a b
 foo _ f (Left a) = Left (f a)
 foo _ f (Right b) = Right (f b)

 foo' = foo (undefined :: C Num) (+2)

Or, better

 xyz :: (ctx a, ctx b) => C ctx -> Either a b -> Either a b
 xyz (C f) (Left a) = Left (f a)
 xyz (C f) (Right b) = Right (f b)

 xyz' = xyz ((C (+2)) :: C Num)
 xyz'' = xyz ((C (+2)) :: C Fractional)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] PEPM 2014: Final call for papers

2013-09-18 Thread oleg
We are happy to announce the two invited speakers of PEPM 2014:

Manuel Fahndrich (Microsoft Research, USA)
Sven-Bodo Scholz (Heriot-Watt University, Scotland)

PROGRAM CHAIRS

Wei-Ngan Chin (National University of Singapore, Singapore)
Jurriaan Hage (Utrecht University, Netherlands)

PROGRAM COMMITTEE

Evelyne Contejean (LRI, CNRS, Universite Paris-Sud, France)
Cristina David (University of Oxford, UK)
Alain Frisch (LexiFi, France)
Ronald Garcia (University of British Columbia, Canada)
Zhenjiang Hu (National Institute of Informatics, Japan)
Paul H J Kelly (Imperial College, UK)
Oleg Kiselyov (Monterey, USA)
Naoki Kobayashi (University of Tokyo, Japan)
Jens Krinke (University College London, UK)
Ryan Newton (University of Indiana, USA)
Alberto Pardo (Universidad de la Republica, Uruguay)
Sungwoo Park (Pohang University of Science and Technology, South Korea)
Tiark Rompf (Oracle Labs & EPFL, Switzerland)
Sukyoung Ryu (KAIST, South Korea)
Kostis Sagonas (Uppsala University, Sweden)
Max Schaefer (Nanyang Technological University, Singapore)
Harald Sondergaard (The University of Melbourne, Australia)
Eijiro Sumii (Tohoku University, Japan)
Eric Van Wyk (University of Minnesota, USA)
Jeremy Yallop (University of Cambridge, UK)

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] reifying typeclasses

2013-09-17 Thread oleg

 I've been toying with using Data Types a la Carte to get type
 representations, a `Typeable` class and dynamic types parameterized by a
 possibly open universe:

If the universe is potentially open, and if we don't care about
exhaustive pattern-matching check (which is one of the principal
benefits of GADTs -- pronounced in dependently-typed languages), the
problem can be solved far more simply. No type classes, no instances,
no a la Carte, no packages other than the base.

{-# LANGUAGE ScopedTypeVariables #-}

module GG where

import Data.Typeable

argument :: Typeable a => a -> Int
argument a
 | Just (x::Int)  <- cast a = x
 | Just (x::Char) <- cast a = fromEnum x

result :: Typeable a => Int -> a
result x
  | Just r  <- cast (id x)  = r
  | Just r  <- cast ((toEnum x)::Char)  = r

t1 = argument 'a'
t2 = show (result 98 :: Char)


That is it, quite symmetrically. This is essentially how type classes
are implemented on some systems (Chameleon, for sure, and probably
JHC). By this solution we also gave up on parametricity. Which is why
such a solution is probably better kept `under the hood', as an
implementation of type classes.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] reifying typeclasses

2013-09-15 Thread oleg

Evan Laforge wrote:
 I have a typeclass which is instantiated across a closed set of 3
 types.  It has an ad-hoc set of methods, and I'm not too happy with
 them because being a typeclass forces them to all be defined in one
 place, breaking modularity.  A sum type, of course, wouldn't have that
 problem.  But in other places I want the type-safety that separate
 types provide, and packing everything into a sum type would destroy
 that.  So, expression problem-like, I guess.

 It seems to me like I should be able to replace a typeclass with
 arbitrary methods with just two, to reify the type and back.  This
 seems to work when the typeclass dispatches on an argument, but not on
 a return value.  E.g.:


If the universe (the set of types of interest to instantiate the type
class to) is closed, GADTs spring to mind immediately. See, for
example, the enclosed code. It is totally unproblematic (one should
remember to always write type signatures when programming with
GADTs. Weird error messages otherwise ensue.)

One of the most notable differences between GADT and type-class--based
programming is that GADTs are closed and type classes are open (that
is, new instances can be added at will). In fact, a less popular
technique of implementing type classes (which has been used in some Haskell
systems -- but not GHC) is intensional type analysis, or typecase.
It is quite similar to the GADT solution.


The main drawback of the intensional type analysis as shown in the
enclosed code is that it breaks parametricity. The constraint Eq a
does not let one find out what the type 'a' is and so what other
operations it may support. (Eq a) says that the type a supports (==),
and does not say any more than that. OTH, Representable a tells quite
a lot about type a, essentially, everything.

 types.  It has an ad-hoc set of methods, and I'm not too happy with
 them because being a typeclass forces them to all be defined in one
 place, breaking modularity.  A sum type, of course, wouldn't have that
Why not introduce several type classes, even a type class for each
method if necessary. Grouping methods under one type class is
appropriate when such a grouping makes sense. Otherwise, Haskell won't
lose in expressiveness if a type class could have only one method.
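The point about one-method classes can be sketched in a few lines (the class and instance names below are illustrative, not from the thread): each operation lives in its own class, so instances for each operation can be defined in separate modules.

```haskell
-- One class per method: operations can be defined and instantiated
-- independently, restoring modularity.
class ToInt a where
  toInt :: a -> Int        -- the "argument" direction

class FromInt a where
  fromInt :: Int -> a      -- the "result" direction

instance ToInt Int   where toInt = id
instance ToInt Char  where toInt = fromEnum

instance FromInt Int  where fromInt = id
instance FromInt Char where fromInt = toEnum
```

A type that supports only one direction simply omits the other instance; nothing forces all methods into one place.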

{-# LANGUAGE GADTs #-}

module G where

data TRep a where
TInt  :: TRep Int
TChar :: TRep Char

class Representable a where
repr :: TRep a

instance Representable Int where
repr = TInt

instance Representable Char where
repr = TChar

argument :: Representable a => a -> Int
argument x = go repr x
 where
 -- For GADTs, signatures are important!
 go :: TRep a -> a -> Int
 go TInt  x = x
 go TChar x = fromEnum x

-- just the `mirror inverse'
result :: Representable a => Int -> a
result x = go repr x
 where
 -- For GADTs, signatures are important!
 go :: TRep a -> Int -> a
 go TInt  x = x
 go TChar x = toEnum x

t1 = argument 'a'
t2 = show (result 98 :: Char)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: [Haskell-cafe] stream interface vs string interface: references

2013-09-03 Thread oleg

 For lazy I/O, using shows in Haskell is a good analogue of using
 #printOn: in Smalltalk.  The basic form is include this as PART of
 a stream, with convert this to a whole string as a derived form.

 What the equivalent of this would be for Iteratees I don't yet
 understand.

Why not try simple generators first, which are truly simpler. It seems
generators reproduce the Smalltalk printing patterns pretty well --
even simpler since we don't have to specify any stream. The printing
takes linear time in input size. The same `printing' generator can be
used even if we don't actually want to see any output -- rather, we
only want the statistics (e.g., number of characters printed, or
number of lines, etc). Likewise, the same printing generator
print_yield can be used if we are to encode the output somehow (e.g.,
compress it). The entire pipeline can run in constant space (if
encoding is in constant space).

Here is the code

module PrintYield where

-- http://okmij.org/ftp/continuations/PPYield/
import GenT

import Data.Set as S
import Data.Foldable
import Control.Monad.State.Strict

type Producer m e = GenT e m ()

class PrintYield a where
    print_yield :: Monad m => a -> Producer m String

instance PrintYield Int where
    print_yield = yield . show

instance (PrintYield a, PrintYield b) => PrintYield (Either a b) where
    print_yield (Left x)  = yield "Left "  >> print_yield x
    print_yield (Right x) = yield "Right " >> print_yield x

instance (PrintYield a) => PrintYield (Set a) where
    print_yield x = do
      yield "{"
      let f True  x = print_yield x >> return False
          f False x = yield ", " >> print_yield x >> return False
      _ <- foldlM f True x
      yield "}"

instance PrintYield ISet where
    print_yield (ISet x) = print_yield x

newtype ISet = ISet (Either Int (Set ISet))
deriving (Eq, Ord)

set1 :: Set ISet
set1 = Prelude.foldr
   (\e s -> S.fromList [ISet (Left e), ISet (Right s)]) S.empty [1..20]

-- Real printing
print_set :: Set ISet - IO ()
print_set s = print_yield s `runGenT` putStr

t1 = print_set set1

-- Counting the number of characters
-- Could use Writer but the Writer is too lazy, may leak memory

count_printed :: Set ISet - Integer
count_printed s = (print_yield s `runGenT` counter) `execState` 0
 where
 counter _ = get >>= put . succ_strict
 succ_strict x = x `seq` succ x

-- According to GHCi statistics, counting is linear in time
-- (space is harder to estimate: it is not clear what GHCi prints
-- for memory statistics; we need max bytes allocated rather than
-- total bytes allocated)
t2 = count_printed set1

-- Doesn't do anything but ensures the set is constructed
t3 :: IO ()
t3 = print_yield set1 `runGenT` (\_ -> return ())




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet Another Forkable Class

2013-08-23 Thread oleg

I must stress that OpenUnion1.hs described (briefly) in the paper
is only one implementation of open unions, out of many possible.
For example, I have two more implementations. A year-old version of
the code implemented open unions *WITHOUT* overlapping instances or
Typeable.
http://okmij.org/ftp/Haskell/extensible/TList.hs

The implementation in the paper is essentially the one described in
the full HList paper, Appendix C. The one difference is that the HList
version precluded duplicate summands. Adding the duplication check to
OpenUnion1 takes three lines of code. I didn't add them because it
didn't seem necessary, or even desired.

I should further stress, OverlappingInstances are enabled only
within one module, OpenUnion1.hs. The latter is an internal, closed
module, not meant to be modified by a user. No user program needs to
declare OverlappingInstances in its LANGUAGES pragma. Second,
OverlappingInstances are used only within the closed type class
Member. This type class is not intended to be user-extensible; the
programmer need not and should not define any more instances for
it. The type class is meant to be closed. So Member emulates closed
type families implemented in the recent version of GHC. With the
closed type families, no overlapping instances are needed.

 Simply the fact that the Member class needs -XOverlappingInstances
 means that we cannot have duplicate or polymorphic effects. It will
 arbitrarily pick the first match in the former and fail to compile in
 the latter case.
Of course we can have duplicate layers. In that case, the dynamically closest
handler wins -- which sounds about right (think of reset in delimited
control). The file Eff.hs even has a test case for that, tdup.
BTW, I'm not sure of the word 'pick' -- the Member class is
a purely compile-time constraint. It doesn't do any picking -- it doesn't
do anything at all at run-time. 

 For example we should be able to project the open sum equivalent of
 Either String String into the second String but we cannot with the
 implementation in the paper.
You inject a String or a String, and you will certainly 
project a String (the one your have injected). What is the problem
then? You can always project what you have injected. Member merely
keeps track of what types could possibly be injected/projected. 
So, String + String indeed should be String.


By polymorphic effects you must mean first-class polymorphism (because
the already implemented Reader effect is polymorphic in the
environment). First of all, there are workarounds. Second, I'm not
sure what would be a good example of polymorphic effect (aside from 
ST-monad-like).

 To be honest I'm not so sure about these effects...
Haskell Symposium will have a panel on effect libraries in Haskell.
It seems plausible that effects, one way or the other, will end up in
Haskell. Come to Haskell Symposium, tell us your doubts and
concerns. We want to hear them.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet Another Forkable Class

2013-08-21 Thread oleg

Perhaps effect libraries (there are several to choose from) could be a
better answer to Fork effects than monad transformers. One lesson from
the recent research in effects is that we should start thinking what
effect we want to achieve rather than which monad transformer to
use. Using ReaderT or StateT or something else is an implementation
detail. Once we know what effect to achieve we can write a handler, or
interpreter, to implement the desired operation on the World, obeying
the desired equations. And we are done.

For example, with the ExtEff library, with which I'm more familiar, the
Fork effect would take as an argument a computation that cannot throw
any requests. That means that the parent has to provide interpreters
for all child effects. It becomes trivial to implement:

 Another example would be a child that should not be able to throw errors as
 opposed to the parent thread.
It is possible to specify which errors will be allowed for the child
thread (the ones that the parent will be willing to reflect and
interpret). The rest of errors will be statically prohibited then.

 instance (Protocol p) => Forkable (WebSockets p) (ReaderT (Sink p) IO) where
 fork (ReaderT f) = liftIO . forkIO . f =<< getSink

This is a good illustration of too much implementation detail. Why do we
need to know of (Sink p) as a Reader layer? Would it be clearer to
define an Effect of sending to the socket? Computation's type will
make it patent the computation is sending to the socket.
The parent thread, before forking, has to provide a handler for that
effect (and the handler will probably need a socket). 

Defining a new class for each effect is possible but not needed at
all. With monad transformers, a class per effect is meant to hide the
ordering of transformer layers in a monad transformer stack. Effect
libraries abstract over the implementation details out of the
box. Crutches -- extra classes -- are unnecessary. We can start by
writing handlers on a case-by-case basis. Generalization, if any,
will be easier to see. From my experience, generalizing from concrete
cases is easier than trying to write a (too) general code at the
outset. Way too often, as I read and saw, code that is meant to be
reusable ends up hardly usable.
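The "effect first, implementation second" idea can be sketched with a toy (names invented here, not the ExtEff API): the effect is a request datatype, a computation is a tree of requests, and the parent supplies a handler that interprets every request the child can make.

```haskell
-- A computation that may issue requests of type e, each answered
-- with an Int, before producing a result of type a.
data Comp e a = Done a | Req e (Int -> Comp e a)

instance Functor (Comp e) where
  fmap f (Done a)  = Done (f a)
  fmap f (Req e k) = Req e (fmap f . k)

-- A Reader-like effect: the only request is to ask for the environment.
data Ask = Ask

ask :: Comp Ask Int
ask = Req Ask Done

-- The handler (interpreter): supplies the environment, eliminating
-- the Ask effect entirely.
runAsk :: Int -> Comp Ask a -> a
runAsk _   (Done a)    = a
runAsk env (Req Ask k) = runAsk env (k env)

child :: Comp Ask Int
child = fmap (+1) ask
```

Whether the environment comes from a ReaderT, a socket, or a constant is the handler's business; the child's type only says which requests it may issue.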




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: Non-recursive let

2013-07-25 Thread oleg

ivan.chollet wrote:
 let's consider the following:

 let fd = Unix.open ...
 let fd = Unix.open ...

 At this point one file descriptor cannot be closed. Static analysis will
 have trouble catching these bugs, so do humans.

Both sentences express false propositions.

The given code, if Haskell, does not open any file descriptors, so
there is nothing to close. In the following OCaml code

let fd = open_in "/tmp/a" in
let fd = open_in "/tmp/v" in
...

the first open channel becomes unreachable. When GC collects it (which
will happen fairly soon, on a minor collection, because the channel
died young), GC will finalize the channel and close its file
descriptor.

The corresponding Haskell code
do
  h <- openFile ...
  h <- openFile ...

works similarly to OCaml. Closing file handles upon GC is codified in
the Haskell report because Lazy IO crucially depends on such behavior.

If one is interested in statically tracking open file descriptors and
making sure they are closed promptly, one could read large literature
on this topic. Google search for monadic regions should be a good
start. Some of the approaches are implemented and used in Haskell.
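For the common case, prompt closing needs nothing as heavy as monadic regions: bracketed allocation with the standard System.IO.withFile closes the handle when the action ends, shadowed binding or not. A small sketch (file names are placeholders):

```haskell
import System.IO

-- Copy the first line of src to dst; both handles are closed
-- promptly when the brackets exit, even on an exception.
copyFirstLine :: FilePath -> FilePath -> IO ()
copyFirstLine src dst =
  withFile src ReadMode  $ \hIn  ->
  withFile dst WriteMode $ \hOut -> do
    line <- hGetLine hIn
    hPutStrLn hOut line
```

Monadic regions go further, statically ruling out use of a handle after its region has closed.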


Now about static analysis. Liveness analysis has no problem whatsoever
determining that a variable fd in our examples has been shadowed and
the corresponding value is dead. We are all familiar with liveness
analysis -- it's the one responsible for `unused variable'
warnings. The analysis is useful for many other things (e.g., if it
determines that a created value dies within the function activation,
the value could be allocated on stack rather than on heap.). Here is
example from C:

#include <stdio.h>

void foo(void) {
  char x[4]  = "abc"; /* Intentional copying! */
  {
  char x[4]  = "cde"; /* Intentional copying and shadowing */
  x[0] = 'x';
  printf("result %s\n",x);
  }
}

Pretty old GCC (4.2.1) had no trouble detecting the shadowing. With
the optimization flag -O4, GCC acted on this knowledge. The generated
assembly code reveals no traces of the string "abc", not even in the
.rodata section of the code. The compiler determined the string is
really unused and did not bother even compiling it in.


 Disallowing variable shadowing prevents this.
 The two fd occur in different contexts and should have different names.
 Usage of shadowing is generally bad practice. It is error-prone. Hides
 obnoxious bugs like file descriptors leaks.
 The correct way is to give different variables that appear in different
 contexts a different name, although this is arguably less convenient and
 more verbose.

CS would be better as science if we refrain from passing our
personal opinions and prejudices as ``the correct way''.

I can't say better than the user Kranar in a recent discussion on a
similar `hot topic':

The issue is that much of what we do as developers is simply based on
anecdotal evidence, or statements made by so called evangelicals who
blog about best practices and people believe them because of how
articulate they are or the cache and prestige that the person carries.
...
It's unfortunate that computer science is still advancing the same way
medicine advanced with witch doctors, by simply trusting the wisest
and oldest of the witch doctors without any actual empirical data,
without any evidence, just based on the reputation and overall
charisma or influence of certain bloggers or big names in the field.

http://www.reddit.com/r/programming/comments/1iyp6v/is_there_a_really_an_empirical_difference_between/cb9mf6f



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Proposal: Non-recursive let

2013-07-25 Thread oleg

Here is a snippet from a real code that could benefit from
non-recursive let. The example is notable because it incrementally
constructs not one but two structures (both maps), ast and headers.
The maps are constructed in a bit interleaved fashion, and stuffing
them into the same State would be ungainly. In my real code

-- Add an update record to a buffer file
do_update :: String - Handle - Parsed - [Parsed] - IO ()
do_update TAF h ast asts@(_:_) = do
  rv - reflect $ get_trange TRange ast
  headers0 - return . M.fromList = sequence 
 (map (\ (n,fld) - reflect (fld ast) = \v - return (n,v)) 
  fields_header)
  let headers = M.insert _report_ranges (format_two_tstamps rv) headers0
  foldM write_period (rv,headers,(snd rv,snd rv)) asts
  return ()
 where
 write_period (rv,headers,mv) ast = do
  pv@(p_valid_from,p_valid_until) - reflect $ get_trange TRange ast
  check_inside pv rv
  let prevailing = M.lookup PREVAILING ast
  (mv,pv) - case prevailing of
Just _  - return (pv,pv)   -- set the major valid period
 -- Make sure each VAR period occurs later than the prevailing 
 -- period. If exactly at the same time add 1 min
Nothing - case () of
 _ | fst mv  p_valid_from  - return (mv,pv)
 _ | fst mv == p_valid_from - return (mv,(p_valid_from + 60,
  p_valid_until))
 _  - gthrow . InvalidData . unwords $ [
  VAR period begins before prevailing:,
  show ast, ; prevailing TRange, show mv]
  let token  = maybe (M.findWithDefault  VAR ast) id prevailing
  let ast1 = M.insert _token token .
   M.insert _period_valid (format_two_tstamps pv) .
 M.unionWith (\_ x - x) headers $ ast
  let title  = M.member Title ast
  let headers1 = if title then headers else
M.delete _limit_to  . M.delete _limit_recd $ headers

  write_fields h ast1 fields  

  return (rv,headers1,mv)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Expression problem in the database?

2013-07-23 Thread oleg

Here is one possible approach. First, convert the propositional
formula into the conjunctive normal form (disjunctive will work just
as well). Recall, the conjunctive normal form (CNF) is

type CNF  = [Clause]
type Clause   = [Literal]
data Literal  = Pos PropLetter | Neg PropLetter
type PropLetter -- String or other representation for atomic propositions

We assume that clauses in CNF are ordered and can be identified by
natural indices.

A CNF can be stored in the following table:

CREATE DOMAIN PropLetter ...

CREATE TYPE occurrence AS (
    clause_number integer,  -- index of a clause
    clause_card   integer,  -- number of literals in that clause
    positive      boolean   -- whether a positive or negative occurrence
);

CREATE TABLE Formula (
  prop_letter PropLetter references (table_of_properties),
  occurrences occurrence[]
);


That is, for each prop letter we indicate which clause it occurs in
(as a positive or a negative literal) and how many literals in that
clause. The latter number (clause cardinality) can be factored out if
we are orthodox. Since a letter may occur in many clauses, we use
PostgreSQL arrays (which are now found in many DBMS). 

The formula can be evaluated incrementally: by fetching the rows one
by one, keeping track of not yet decided clauses. We can stop as soon
as we found a clause that evaluates to FALSE.
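In the Haskell types given above, the stop-at-the-first-false-clause evaluation is just short-circuiting `all`/`any` (PropLetter is fixed to String here for concreteness):

```haskell
type PropLetter = String
data Literal    = Pos PropLetter | Neg PropLetter
type Clause     = [Literal]
type CNF        = [Clause]

evalLit :: (PropLetter -> Bool) -> Literal -> Bool
evalLit v (Pos p) = v p
evalLit v (Neg p) = not (v p)

-- A CNF holds iff every clause has some true literal; `all` stops
-- as soon as a clause evaluates to False.
evalCNF :: (PropLetter -> Bool) -> CNF -> Bool
evalCNF v = all (any (evalLit v))
```

The row-by-row database evaluation is the same computation with the valuation coming from fetched rows instead of a function.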

BTW, `expression problem' typically refers to something else entirely
(how to embed a language and be able to add more syntactic forms to
the language and more evaluators without breaking previously written
code).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-11 Thread oleg

I'd like to emphasize that there is a precedent to non-recursive let
in the world of (relatively pure) lazy functional programming.
The programming language Clean has such a non-recursive let and uses
it, and the shadowing, extensively. They consider shadowing a virtue
for uniquely typed data.

Richard A. O'Keefe wrote
  let (x,s) = foo 1 [] in
  let (y,s) = bar x s in
  let (z,s) = baz x y s in ...
 I really wish you wouldn't do that.
 ...
 I find that that when the same name gets reused like
 that I get very confused indeed about which one I am
 looking at right now.
 ...
 If each instance of the variable is labelled with a
 sequence number, I don't get confused because each
 variable has a different name and I can *see* which
 one this is.

 Yes, sequence numbering variable states is a chore for
 the person writing the code, but it's a boon for the
 person reading the code.

Let me point out the latest Report on the programming language Clean
http://clean.cs.ru.nl/download/doc/CleanLangRep.2.2.pdf
specifically PDF pages 38-40 (Sec 3.5.4 Let-Before Expression). Let me
quote the relevant part:

Many of the functions for input and output in the CLEAN I/O library
are state transition functions. Such a state is often passed from one
function to another in a single threaded way (see Chapter 9) to force
a specific order of evaluation. This is certainly the case when the
state is of unique type. The threading parameter has to be renamed to
distinguish its different versions. The following shows a typical
example. (Use of state transition functions: the uniquely typed
state file is passed from one function to another, involving a number
of renamings: file, file1, file2.)

readchars :: *File -> ([Char], *File)
readchars file
| not ok    = ([], file1)
| otherwise = ([char:chars], file2)
where
  (ok,char,file1) = freadc file
  (chars,file2)   = readchars file1

This explicit renaming of threaded parameters not only looks very
ugly, these kind of definitions are sometimes also hard to read as
well (in which order do things happen? which state is passed in which
situation?). We have to admit: an imperative style of programming is
much easier to read when things have to happen in a certain order such
as is the case when doing I/O. That is why we have introduced
let-before expressions.

It seems the designers of Clean have the opposite view on the explicit
renaming (that is, sequential numbering of unique variables).

Let-before expressions have a special scope rule to obtain an
imperative programming look. The variables in the left- hand side of
these definitions do not appear in the scope of the right-hand side of
that definition, but they do appear in the scope of the other
definitions that follow (including the root expression, excluding
local definitions in where blocks).

Notice that a variable defined in a let-before expression cannot be
used in a where expression. The reverse is true however: definitions
in the where expression can be used in the let-before expression.
(Use of let-before expressions, short notation, re-using names by
taking advantage of the special scope of the let-before.)

readchars :: *File -> ([Char], *File)
readchars file
# (ok,char,file)  = freadc file
| not ok          = ([], file)
# (chars,file)    = readchars file
= ([char:chars], file)

The code uses the same name 'file' all throughout, shadowing it
appropriately. Clean programmers truly do all IO in this style, see
the examples in
http://clean.cs.ru.nl/download/supported/ObjectIO.1.2/doc/tutorial.pdf

[To be sure I do not advocate using Clean notation '#' for
non-recursive let in Haskell. Clean is well-known for its somewhat
Spartan notation.]

The State monad is frequently mentioned as an alternative. But monads
are a poor alternative to uniqueness typing. Granted, if a function has
one unique argument, e.g., World, then it is equivalent to the ST (or
IO) monad. However, a function may have several unique arguments. For
example, Arrays in Clean are uniquely typed so they can be updated
destructively. A function may have several argument arrays. Operations
on one array have to be serialized (which is what uniqueness typing
accomplishes) but the relative order among operations on distinct
arrays may be left unspecified, for the compiler to determine.

Monads, typical of imperative programs, overspecify the order. For
example,
do
  x <- readSTRef ref1
  y <- readSTRef ref2
  writeSTRef ref2 (x+y)

the write to ref2 must happen after the read of ref2, but ref1 could
be read either before or after ref2. (Assuming ref2 and ref1 are
distinct -- the uniqueness typing will make sure of it.)  Alas, in a
monad we cannot leave the order of reading ref1 and ref2 unspecified.
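For reference, here is the fragment completed into a self-contained ST
program (the ref initializations and the name sumIntoRef2 are mine):

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- The do-notation fully sequences the two reads, even though only
-- the final write genuinely depends on both of them.
sumIntoRef2 :: ST s Int
sumIntoRef2 = do
  ref1 <- newSTRef 1
  ref2 <- newSTRef 2
  x <- readSTRef ref1
  y <- readSTRef ref2
  writeSTRef ref2 (x + y)
  readSTRef ref2
```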

Proposal: Non-recursive let

2013-07-10 Thread oleg

[reposting to Haskell-prime]

Jon Fairbairn wrote:
 It just changes forgetting to use different variable names because of
 recursion (which is currently uniform throughout the language) to
 forgetting to use non recursive let instead of let.

Let me bring to the record the message I just wrote on Haskell-cafe
http://www.haskell.org/pipermail/haskell-cafe/2013-July/109116.html

and repeat the example:

In OCaml, I can (and often do) write

let (x,s) = foo 1 [] in
let (y,s) = bar x s in
let (z,s) = baz x y s in ...

In Haskell I'll have to uniquely number the s's:

let (x,s1)  = foo 1 [] in
let (y,s2)  = bar x s1 in
let (z,s3)  = baz x y s2 in ...

and re-number them if I insert a new statement. 

I once wrote about 50-100 lines of code with fragments like the
above, and the only problem was my messing up the numbering (at one
place I used s2 where I should've used s3). In a chain of lets, it
becomes quite a chore to use different variable names -- especially as
one edits the code and adds new let statements.

I have also had problems with non-termination, unintended recursion. 
The problem is not caught statically and leads to looping, which may
be quite difficult to debug. Andreas should tell his story.
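A minimal illustration of the pitfall (the definitions are my own): in
Haskell a re-used name does not shadow, it recurses.

```haskell
-- Haskell's let is recursive: the 'x' on the right-hand side is the
-- very 'x' being defined, so this binding denotes bottom and
-- evaluating it loops forever.
loopy :: Int
loopy = let x = x + 1 in x      -- do not evaluate: diverges

-- What the programmer meant requires fresh names:
stepped :: Int
stepped = let x1 = 0
              x2 = x1 + 1
              x3 = x2 + 1
          in x3
```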

In my OCaml experience, I don't ever remember writing let rec by
mistake. Occasionally I write let where let rec is meant, and the type
checker very quickly points out a problem (an unbound identifier).
No need to debug anything.

Incidentally, time and again people ask on the Caml list why 'let' in
OCaml is by default non-recursive. The common answer is that the
practitioners find in their experience the non-recursive let to be a
better default. Recursion should be intended and explicit -- more
errors are caught that way.

Let me finally disagree with the uniformity principle. It may be
uniform to have equi-recursive types. OCaml has equi-recursive types;
internally the type checker treats _all_ types as (potentially)
equi-recursive. At one point OCaml allowed equi-recursive types in
user programs as well. They were introduced for the sake of objects,
so the designers felt uniformity warranted offering them in all
circumstances. The users vocally disagreed. Equi-recursive types mask
many common type errors, making them much more difficult to find. As
a result, OCaml developers broke the uniformity. Now, equi-recursive
types may only appear in surface programs in very specific
circumstances (where objects or their duals are involved). Basically,
the programmer must really intend to use them.

Here is an example from natural language, English. Some verbs go from
regular (uniform conjugation) to irregular:
http://en.wiktionary.org/wiki/dive


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


[Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-10 Thread oleg

Andreas wrote: 
 The greater evil is that Haskell does not have a non-recursive let.
 This is source of many non-termination bugs, including this one here.
 let should be non-recursive by default, and for recursion we could have
 the good old let rec.

Hear, hear! In OCaml, I can (and often do) write

let (x,s) = foo 1 [] in
let (y,s) = bar x s in
let (z,s) = baz x y s in ...

In Haskell I'll have to uniquely number the s's:

let (x,s1)  = foo 1 [] in
let (y,s2)  = bar x s1 in
let (z,s3)  = baz x y s2 in ...

and re-number them if I insert a new statement. BASIC comes to mind. I
tried to lobby Simon Peyton-Jones for the non-recursive let a couple
of years ago. He said, write a proposal. It's still being
written... Perhaps you might want to write it now.

In the meanwhile, there is a very ugly workaround:

test = runIdentity $ do
 (x,s) <- return $ foo 1 []
 (y,s) <- return $ bar x s
 (z,s) <- return $ baz x y s
 return (z,s)

After all, bind is non-recursive let.






[Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-10 Thread oleg

 If you would like to write

 let (x,s) = foo 1 [] in
 let (y,s) = bar x s in
 let (z,s) = baz x y s in

 instead, use a state monad.

Incidentally I did write almost exactly this code once. Ironically, it
was meant as a lead-on to the State monad. 

But there have been other cases where the State monad was better
avoided. For instance, functions like foo and bar are already written
and they are not in the state monad. For example, foo may take a
non-empty Set and return the minimal element and the set without the
minimal element. There are several such handy functions in Data.Set
and Data.Map. Injecting such functions into a State monad for the
sake of three lines seems overkill.
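Data.Set.deleteFindMin has exactly that shape; a sketch of the chained
lets it invites (the helper twoSmallest is my own):

```haskell
import qualified Data.Set as Set

-- deleteFindMin returns the minimal element together with the
-- remaining set, so chained lets naturally want to re-use the
-- name of the threaded set.
twoSmallest :: Ord a => Set.Set a -> (a, a, Set.Set a)
twoSmallest s0 =
  let (x, s1) = Set.deleteFindMin s0 in
  let (y, s2) = Set.deleteFindMin s1 in
  (x, y, s2)
```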

Also, in the code above s's don't have to have the same type.

I particularly like repeated lets when I am writing code that I will
later transform. Being explicit with state passing improves
confidence. It is simpler to reason about pure code.





Re: [Haskell-cafe] Non-recursive let [Was: GHC bug? Let with guards loops]

2013-07-10 Thread oleg

Alberto G. Corona wrote:
 I think that a non-non recursive let could be not compatible with the pure
 nature of Haskell.

I have seen this sentiment before. It is quite a misunderstanding. In
fact, the opposite is true. One may say that Haskell is not quite pure
_because_ it has recursive let.

Let's take the pure simply-typed lambda-calculus, or System F, or
System Fomega. Or the Calculus of Inductive Constructions. These
calculi are pure in the sense that the result of evaluation of each
expression does not depend on the evaluation strategy. One can use
call-by-name, call-by-need, call-by-value, pick the next redex at
random or some other evaluation strategy -- and the result will be
just the same. Although the simply-typed lambda-calculus is quite
limited in its expressiveness, already System F is quite powerful
(e.g., allowing for the list library), to say nothing of CIC. In all
these systems, the non-recursive let

let x = e1 in e2
is merely the syntactic sugar for
(\x. e2) e1

OTOH, the recursive let is not expressible. (Incidentally, although
System F and above express self-application (\x. x x), a fix-point
combinator is not typeable.) Adding the recursive let introduces
general recursion and hence the dependence on the evaluation
strategy. There are a few people who say non-termination is an
effect. The language with non-termination is no longer pure.




Re: [Haskell-cafe] some questions about Template Haskell

2013-06-30 Thread oleg

TP wrote:
 pr :: Name -> ExpQ
 pr n = [| putStrLn $ (nameBase n) ++ " = " ++ show $(varE n) |]

The example is indeed problematic. Let's consider a simpler one:

 foo :: Int -> ExpQ
 foo n = [|n + 1|]

The function foo, when applied to an Int (some bit pattern in a
machine register), produces _code_. It helps to think of the code as
a text string containing source code. That text string cannot include
the binary value that is n. The binary value has to be converted to a
numeric text string and inserted into the code. That conversion is
called `lifting' (or quoting). The function foo is accepted because
Int is a liftable type, an instance of Lift. Name isn't.
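One way to repair the original pr, then, is to lift the String that
nameBase produces instead of the Name itself. A sketch, assuming GHC's
template-haskell package:

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH
import Language.Haskell.TH.Syntax (lift)

-- nameBase n :: String is an instance of Lift, while Name is not,
-- so we splice the lifted string into the generated code.
pr :: Name -> ExpQ
pr n = [| putStrLn ($(lift (nameBase n)) ++ " = " ++ show $(varE n)) |]
```

Used from another module, $(pr 'x) then prints the variable's name and
its shown value.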

BTW, the value from the heap of the running program inserted into the
generated code is called `cross-stage persistent'. The constraint Lift
is implicitly generated by TH when it comes across a cross-stage
persistent identifier.  You can read more about it at
http://okmij.org/ftp/ML/MetaOCaml.html#CSP

Incidentally, MetaOCaml would've accepted your example, for now. There
are good reasons to make the behavior match that of Haskell.





[Haskell-cafe] Geometric Algebra [Was: writing a function to make a correspondance between type-level integers and value-level integers]

2013-06-26 Thread oleg

 It seems very interesting, but I have not currently the time to make a 
 detailed comparison with vector/tensor algebra. Moreover I have not 

I would suggest the freely available
Oersted Medal Lecture 2002
by David Hestenes, the person who discovered and developed Geometric
Algebra:
http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf

In particular, see Section V of the above paper. It talks about
vectors, geometric products, the coordinate-free representation for
`vector product' and the geometric meaning of the imaginary unit i.
Section 1 gives a good motivation for the Geometric Algebra. Other
sections of the paper develop physical applications, from classical
mechanics to electrodynamics to non-relativistic and relativistic
quantum mechanics.

Computer Scientists might then like
http://www.geometricalgebra.net/
see the good and free introduction
http://www.geometricalgebra.net/downloads/ga4cs_chapter1.pdf

Incidentally, David Hestenes said in the lecture that he has applied
for an NSF grant to work on Geometric Algebra TWELVE times in a row,
and was rejected every single time.






[Haskell-cafe] writing a function to make a correspondance between type-level integers and value-level integers

2013-06-25 Thread oleg

Well, I guess you might be interested in geometric algebra then
http://dl.acm.org/citation.cfm?id=1173728
because Geometric Algebra is a much more principled way of doing
component-free calculations. See also the web page of the author
http://staff.science.uva.nl/~fontijne/

Gaigen seems like a nice DSL that could well be embedded in Haskell.

Anyway, the reason I pointed out Vectro is that it answers your
question about reifying and reflecting type-level integers (by means
of a type class).







[Haskell-cafe] writing a function to make a correspondance between type-level integers and value-level integers

2013-06-17 Thread oleg

I'm late to this discussion and must have missed the motivation for
it. Specifically, how does your goal -- vector/tensor operations that
are statically assured to be well-dimensioned -- differ from general
well-dimensioned linear algebra, for which several solutions have
already been offered? For example, the Vectro library was described
back in 2006:
http://ofb.net/~frederik/vectro/draft-r2.pdf
http://ofb.net/~frederik/vectro/

The paper also talks about reifying type-level integers to value-level
integers, and reflecting them back. Recent GHC extensions (like
DataKinds) make the code much prettier but don't fundamentally change
the game.
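With today's GHC the reify/reflect pair is indeed built in; a small
sketch using DataKinds and GHC.TypeLits rather than the paper's
encoding (the names five and roundTrip are my own):

```haskell
{-# LANGUAGE DataKinds #-}

import GHC.TypeLits (natVal, someNatVal, SomeNat(..))
import Data.Proxy (Proxy(..))

-- Reflection: bring a type-level natural down to a value.
five :: Integer
five = natVal (Proxy :: Proxy 5)

-- Reification: take a run-time Integer back up to the type level;
-- the result is usable only under the existential wrapper SomeNat.
roundTrip :: Integer -> Maybe Integer
roundTrip n = case someNatVal n of
  Just (SomeNat p) -> Just (natVal p)
  Nothing          -> Nothing   -- negative numbers are not Nats
```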






Re: [Haskell-cafe] rip in the class-abstraction continuum

2013-05-21 Thread oleg

Type classes are the approach to constraining type variables: to
bound polymorphism and limit the set of types the variables can be
instantiated with. If we have two type variables to constrain,
multi-parameter type classes are the natural answer. Let's take
this solution and see where it leads.

Here is the original type class
 class XyConv a where
   toXy :: a b -> [Xy b]

and the problematic instance
 data CircAppr a b = CircAppr a b b -- number of points, rotation angle, radius
 deriving (Show)

 instance Integral a => XyConv (CircAppr a) where
   toXy (CircAppr divns ang rad) =
   let dAng = 2 * pi / (fromIntegral divns) in
   let angles = map ((+ ang) . (* dAng) . fromIntegral) [0..divns] in
   map (\a -> am2xy a rad) angles

To be more explicit, the type class declaration has the form

 class XyConv a where
   toXy :: forall b. a b -> [Xy b]

with the type variable 'b' universally quantified without any
constraints. That means the user of (toXy x) is free to choose any type
for 'b' whatsoever. Obviously that can't be true for 
(toXy (CircAppr x y)) since we can't instantiate pi to any type. It
has to be a Floating type. Hence we have to constrain b. As I said, the
obvious solution is to make it a parameter of the type class.

We get the first solution:

 class XYConv1 a b where
 toXy1 :: a b -> [Xy b]

 instance (Floating b, Integral a) => XYConv1 (CircAppr a) b where
   toXy1 (CircAppr divns ang rad) =
   let dAng = 2 * pi / (fromIntegral divns) in
   let angles = map ((+ ang) . (* dAng) . fromIntegral) [0..divns] in
   map (\a -> am2xy a rad) angles

The type class declaration proclaims that only certain combinations
of 'a' and 'b' are admitted to the class XYConv1. In particular, 'a'
is (CircAppr a) and 'b' is a Floating type.

This reminds us of collections (with Int keys, for simplicity)

 class Coll c where
   empty :: c b
   insert :: Int -> b -> c b -> c b

 instance Coll M.IntMap where
   empty  = M.empty
   insert = M.insert

The Coll declaration assumes that a collection is suitable for elements of
any type. Later on one notices that if elements are Bools,
they can be stuffed quite efficiently into an Integer. If we wish to
add ad hoc, efficient collections to the framework, we have to
restrict the element type as well:

 class Coll1 c b where
   empty1 :: c
   insert1 :: Int -> b -> c -> c

Coll1 is deficient since there is no way to specify the type
of elements for the empty collection. When the type checker sees
'empty1', how can it figure out which instance for Coll1 (with the
same c but different element types) to choose?
We can help the type-checker by declaring (by adding the functional
dependency c -> b) that for each collection
type c, there can be only one instance of Coll1. In other words, the
collection type determines the element type.
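A sketch of the fundep version, with the Integer-as-bit-set instance
for Bool elements mentioned above (the instance details are my own):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

import Data.Bits (setBit, clearBit)

class Coll1 c b | c -> b where
  empty1  :: c
  insert1 :: Int -> b -> c -> c

-- Bools packed into an Integer bit set: the collection type
-- (Integer) determines the element type (Bool), as c -> b demands.
instance Coll1 Integer Bool where
  empty1 = 0
  insert1 i True  s = setBit s i
  insert1 i False s = clearBit s i
```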

Exactly the same principle works for XYConv.

 class XYConv2 a b | a -> b where
 toXy2 :: a -> [Xy b]

 instance (Floating b, Integral a) => XYConv2 (CircAppr a b) b where
   toXy2 (CircAppr divns ang rad) =
   let dAng = 2 * pi / (fromIntegral divns) in
   let angles = map ((+ ang) . (* dAng) . fromIntegral) [0..divns] in
   map (\a -> am2xy a rad) angles

The third step is to move to associated types. At this stage you can
consider them just as a different syntax of writing functional
dependencies:

 class XYConv3 a where
 type XYT a :: *
 toXy3 :: a -> [Xy (XYT a)]

 instance (Floating b, Integral a) => XYConv3 (CircAppr a b) where
   type XYT (CircAppr a b) = b
   toXy3 (CircAppr divns ang rad) =
   let dAng = 2 * pi / (fromIntegral divns) in
   let angles = map ((+ ang) . (* dAng) . fromIntegral) [0..divns] in
   map (\a -> am2xy a rad) angles

The step from XYConv2 to XYConv3 is mechanical. The class XYConv3
assumes that for each convertible 'a' there is one and only one Xy
type 'b' to which it can be converted. This was the case for (CircAppr
a b). It may not be the case in general. But we can say that for each
convertible 'a' there is a _class_ of Xy types 'b' to which it may be
converted. This final step brings Tillmann Rendel's solution.
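The associated-type version, made self-contained with minimal
stand-ins for the thread's Xy and am2xy (those two definitions are my
assumptions):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Minimal stand-ins for the types used in the thread.
data Xy b = Xy b b deriving (Show, Eq)
data CircAppr a b = CircAppr a b b deriving Show

am2xy :: Floating b => b -> b -> Xy b
am2xy ang rad = Xy (rad * cos ang) (rad * sin ang)

class XYConv3 a where
  type XYT a
  toXy3 :: a -> [Xy (XYT a)]

instance (Floating b, Integral a) => XYConv3 (CircAppr a b) where
  type XYT (CircAppr a b) = b
  toXy3 (CircAppr divns ang rad) =
    let dAng   = 2 * pi / fromIntegral divns
        angles = map ((+ ang) . (* dAng) . fromIntegral) [0 .. divns]
    in map (\a -> am2xy a rad) angles
```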





Re: [Haskell-cafe] A use case for *real* existential types

2013-05-18 Thread oleg

  I must say though that I'd rather prefer Andres' solution because his
  init
   init :: (forall a. Inotify a -> IO b) -> IO b
  
  ensures that Inotify does not leak, and so can be disposed of at the
  end. So his init enforces the region discipline and could, after a

 It's probably not easy to do this by accident, but I think ensures is
 too strong a word here.

Indeed. I should've been more precise and said that 'init' -- like
Exception.bracket or System.IO.withFile -- is a good step towards
ensuring the region discipline.  One may wish that true regions were
used for this project (as they have been for similar ones, like
usb-safe).




[Haskell-cafe] Typeclass with an `or' restriction.

2013-05-10 Thread oleg

Mateusz Kowalczyk wrote:
 Is there a way however to do something along the lines of:
  class Eq a => Foo a where bar :: a -> a -> Bool; bar = (==)
 
  class Num a => Foo a where bar :: a -> a -> Bool; bar _ _ = False
 This would allow us to make an instance of Num be an instance of Foo
 or an instance of Eq to be an instance of Foo.

GADTs are a particularly good way to express constraint disjunction,
if you can live with the closed universe. (In the following example I
took the liberty of replacing Int with Ord, to make the example
crisper.)

 {-# LANGUAGE GADTs #-}

 data OrdEq a where
     Ord :: Ord a => OrdEq a -- representation of Ord dict
     Eq  :: Eq a  => OrdEq a -- representation of Eq dict

 bar :: OrdEq a -> a -> a -> Bool
 bar Ord x y = x < y
 bar Eq  x y = x == y

The function bar has almost the desired signature, only (OrdEq a ->)
has the ordinary arrow rather than the double arrow. We can fix that:

 class Dict a where
 repr :: OrdEq a

 -- We state that for Int, we prefer Ord
 instance Dict Int where
 repr = Ord

 bar' :: Dict a => a -> a -> Bool
 bar' = bar repr

 test = bar' (1::Int) 2

I can see the utility of this: something like C++ STL iterators and
algorithms? An algorithm could test if a bidirectional iterator is
available, or whether it has to make do, less efficiently, with a
unidirectional one. Of course we can use ordinary type classes, at
the cost of significant repetition. In the OrdEq example above, there
are only two choices of the algorithm for bar: either the type
supports Ord, or the type supports Eq. So the choice depends on wide
sets of types rather than on the types themselves.





Re: [Haskell-cafe] Reinventing a solution to configuration problem

2013-05-10 Thread oleg

I guess you might like then
http://okmij.org/ftp/Haskell/types.html#Prepose
which discusses implicit parameters and their drawbacks (see Sec 6.2).







[Haskell-cafe] A use case for *real* existential types

2013-05-10 Thread oleg

But Haskell (and GHC) have existential types, and your prototype code
works with GHC after a couple of trivial changes:

 main = do
   W nd0 <- init
   wd0 <- addWatch nd0 foo
   wd1 <- addWatch nd0 bar
   W nd1 <- init
   wd3 <- addWatch nd1 baz
   printInotifyDesc nd0
   printInotifyDesc nd1
   rmWatch nd0 wd0
   rmWatch nd1 wd3
 -- These lines cause type errors:
 --  rmWatch nd1 wd0
 --  rmWatch nd0 wd3
   printInotifyDesc nd0
   printInotifyDesc nd1

The only change is that you have to write
  W nd <- init
and that's all. The type-checker won't let the user forget about the
W. The commented out lines do lead to type errors as desired.

Here is what W is

 data W where
     W :: Inotify a -> W
 init :: IO W
  [trivial modification to init's code]

I must say though that I'd rather prefer Andres' solution because his
init
 init :: (forall a. Inotify a -> IO b) -> IO b

ensures that Inotify does not leak, and so can be disposed of at the
end. So his init enforces the region discipline and could, after a
trivial modification to the code, automatically do a clean-up of
notify descriptors -- something you'd probably want to do.
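The rank-2 init in miniature, with a stubbed, hypothetical Inotify
handle (the Int payload and the name initRegion are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Hypothetical handle indexed by a phantom type a.
newtype Inotify a = Inotify Int

-- Region-style init: the callback must be parametric in 'a', so a
-- handle from one region cannot be mixed into, or escape to, another.
initRegion :: (forall a. Inotify a -> IO b) -> IO b
initRegion body = do
  r <- body (Inotify 0)   -- acquire the (stubbed) descriptor
  -- a real implementation would release the descriptor here
  return r
```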





[Haskell-cafe] Stream processing

2013-05-10 Thread oleg

I'm a bit curious
   * be reliable in the presence of async exceptions (solved by conduit,
 pipes-safe),
  
   * hold on to resources only as long as necessary (solved by conduit
 and to some degree by pipes-safe),

Are you aware of
http://okmij.org/ftp/Streams.html#regions
which describes both resource deallocation and async signals. Could
you tell me what you think is deficient in that code?

   * ideally also allow upstream communication (solved by pipes and to
 some degree by conduit).

Are you aware of an (admittedly old) message whose title was
specifically ``Sending messages up-and-down the iteratee-enumerator
chain''?
http://www.haskell.org/pipermail/haskell-cafe/2011-May/091870.html
(there were several messages in that thread). Here is the code for
those messages
http://okmij.org/ftp/Haskell/Iteratee/UpDown.hs





Re: Is it time to start deprecating FunDeps?

2013-05-02 Thread oleg

 In your class Sum example,

 class Sum x y z | x y -> z, x z -> y

 your own solution has a bunch of helper classes

First of all, on top of other issues, I haven't actually shown an
implementation in the message on Haskell'. I posed this as a general
issue.

In special cases like below

 class Sum2 a b c | a b -> c, a c -> b
 instance Sum2 Z b b
 instance (Sum2 a' b c') => Sum2 (S a') b (S c')

 -- note that in the FunDeps, var a is not a target
 -- so the instances discriminate on var a
I didn't doubt the translation would go through because there is a
case analysis on a. But the general case can be more complex. For
example,

class Sum2 a b c | a b -> c, a c -> b
instance Sum2 Z Z Z
instance Sum2 O Z O
instance Sum2 Z O O
instance Sum2 O O Z


 In your overlap example you introduce a definition that won't compile!
 
  {- -- correctly prohibited!
  instance x ~ Bool => C1 [Char]  x where
  foo = const True
  -}
 You expect too much if you think a mechanical translation will 'magic' a
 non-compiling program into something that will compile.

 I do expect equality constraints to not play well with overlaps. Combining
 FunDeps with overlaps is also hazardous. I'm only claiming that EC's will
 be at least as good.

I don't understand the remark. The code marked `correctly prohibited'
is in the comments. No claims were made about that code. If you found
that comment disturbing, please delete it. It won't affect the
example: the types were improved in t11 but were not improved in t21.
Therefore, ECs are not just as good. Functional dependencies seem to
do better.





Re: Is it time to start deprecating FunDeps?

2013-04-30 Thread oleg

Anthony Clayden wrote:
 Better still, given that there is a mechanical way to convert FunDeps to
 equalities, we could start treating the FunDep on a class declaration as
 documentation, and validate that the instances observe the mechanical
 translation.

I think this mechanical way is not complete. First of all, how do you
mechanically convert something like

class Sum x y z | x y -> z, x z -> y

Second, in the presence of overlapping, the translation gives
different results: compare the inferred types for t11 and t21. There
is no type improvement in t21.
(The code also exhibits the coherence problem for overlapping instances:
the inferred type of t2 changes when we remove the last instance of
C2, from Bool to [Char].)

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances, TypeFamilies #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE OverlappingInstances #-}

module FD where

class C1 a b | a -> b where
  foo :: a -> b

instance C1 [a] [a] where
foo = id

instance C1 (Maybe a) (Maybe a) where
foo = id

{- -- correctly prohibited!
instance x ~ Bool => C1 [Char] x where
foo = const True
-}

t1 = foo "a"
t11 = \x -> foo [x]
-- t11 :: t -> [t]

-- Anthony Clayden's translation 
class C2 a b where
  foo2 :: a - b

instance x ~ [a] => C2 [a] x where
foo2 = id

instance x ~ (Maybe a) => C2 (Maybe a) x where
foo2 = id


instance x ~ Bool => C2 [Char] x where
foo2 = const True

t2 = foo2 "a"
t21 = \x -> foo2 [x]
-- t21 :: C2 [t] b => t -> b


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Space leak in hexpat-0.20.3/List-0.5.1

2013-04-30 Thread oleg

Wren Thornton wrote:
 So I'm processing a large XML file which is a database of about 170k
 entries, each of which is a reasonable enough size on its own, and I only
 need streaming access to the database (basically printing out summary data
 for each entry). Excellent, sounds like a job for SAX.

Indeed a good job for a SAX-like parser. XMLIter is exactly such a
parser, and it generates an event stream quite like that of Expat. Also,
your application is somewhat similar to the following:
http://okmij.org/ftp/Haskell/Iteratee/XMLookup.hs

So, superficially, XMLIter seems up to the task. Can you
explain which elements you are counting? BTW, xml_enum already checks
the well-formedness of the XML (including the start-end tag
balance, and many more criteria). One can assume that the XMLStream
corresponds to a well-formed document and only count the desired
start tags (or end tags, for that matter).
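A sketch of the "count only the desired start tags" idea, on a hypothetical SAX-like event type of my own (XMLIter's actual event type differs):

```haskell
-- Hypothetical SAX-like event stream; assuming well-formedness has
-- already been checked, counting elements reduces to counting the
-- matching start tags.
data XMLEvent = StartTag String | EndTag String | Text String

countElems :: String -> [XMLEvent] -> Int
countElems name evs = length [ () | StartTag t <- evs, t == name ]
```

For instance, countElems "entry" applied to the event stream of a database document would count one occurrence per entry element, without ever building a tree.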






[Haskell-cafe] HList with DataKinds [Was: Diving into the records swamp (possible GSoC project)]

2013-04-28 Thread oleg

Aleksandar Dimitrov wrote:
 I've been kicking around the idea of re-implementing HList on the basis of the
 new DataKinds [1] extension.

The current HList already uses DataKinds (and GADTs), to the extent
possible with GHC 7.4 (GHC 7.6 supports kind polymorphism better, but
it had a bug critical for me, since fixed). See, for example,
http://code.haskell.org/HList/Data/HList/HListPrelude.hs
(especially the top of the file).
or, better
http://code.haskell.org/HList/Data/HList/HList.hs
HList now supports both type-level folds and unfolds.






[Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe

2013-04-12 Thread oleg

Timon Gehr wrote:
 I am not sure that the two statements are equivalent. Above you say that
 the context distinguishes x == y from y == x and below you say that it
 distinguishes them in one possible run.

I guess this is a terminological problem. The phrase `context
distinguishes e1 and e2' is the standard phrase in the theory of
contextual equivalence. Here are some nice slides:
http://www.cl.cam.ac.uk/teaching/0910/L16/semhl-15-ann.pdf

Please see adequacy on slide 17. An expression relation between two
boolean expressions M1 and M2 is adequate if for all program runs (for
all initial states of the program s), M1
evaluates to true just in case M2 does. If in some circumstances M1
evaluates to true but M2 (with the same initial state) evaluates to
false, the expressions are not related or the expression relation is
inadequate.

See also the classic
http://www.ccs.neu.edu/racket/pubs/scp91-felleisen.ps.gz
(p11 for definition and Theorem 3.8 for an example of a
distinguishing, or witnessing context).

 In essence, lazy IO provides unsafe constructs that are not named
 accordingly. (But IO is problematic in any case, partly because it
 depends on an ideal program being run on a real machine which is based
 on a less general model of computation.)

I'd agree with the first sentence. As for the second sentence: all
real programs are programs executing on real machines. We may
equationally prove (at type Integer) that
1 + 2^10 == 2^10 + 1
but we may have trouble verifying it in Haskell (or any other
language). That does not mean equational reasoning is useless: we just
have to precisely specify the abstraction boundaries. BTW, the
equality above is still useful even in Haskell: it says that if the
program managed to compute 1 + 2^10 and it also managed to compute
2^10 + 1, the results must be the same. (Of course in the above
example, the program will probably crash in both cases).  What is not
adequate is when equational theory predicts one finite result, and the
program gives another finite result -- even if the conditions of
abstractions are satisfied (e.g., there is no IO, the expression in
question has a pure type, etc).

 I think this context cannot be used to reliably distinguish x == y and y
 == x. Rather, the outcomes would be arbitrary/implementation
 defined/undefined in both cases.

My example uses the ST monad for a reason: there is a formal semantics
of ST (denotational in Launchbury and Peyton-Jones and operational in
Moggi and Sabry). Please look up ``State in Haskell'' by Launchbury
and Peyton-Jones. The semantics is explained in Sec 6. Please see Sec
10.2 Unique supply trees -- you might see some familiar code. Although
my example was derived independently, it has the same kernel of
badness as the example in Launchbury and Peyton-Jones. The authors
point out a subtlety in the code, admitting that they fell into the
trap themselves. So, unsafeInterleaveST is really bad -- and the
people who introduced it know that, all too well.




Re: [Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe [was: meaning of referential transparency]

2013-04-12 Thread oleg

 Lazy I/O *sounds* safe.
 And most of the alternatives (like conduits) hurt my head,
 so it is really *really* tempting to stay with lazy I/O and
 think I'm doing something safe.

Well, conduit was created for the sake of a web framework. I think all
web frameworks, in whatever language, are quite complex, with a steep
learning curve. As to alternatives -- this may be an issue of
familiarity or the availability of a nice library of combinators.

Here is the example from my FLOPS talk: count the number of
occurrences of the word "the" in a file.

Lazy IO:

run_countTHEL fname =
  readFile fname >>= print . length . filter (== "the") . words

Iteratee IO:

run_countTHEI fname =
  print =<< (fileL fname $ wordsL $ filterL (== "the") $ count_i)

The same structure of computation and the same size (and the same
incrementality). But there is an even simpler way (when it applies):
generators. All languages that have tried generators so far (starting
from CLU and Icon) have used them to great success.
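Whatever the IO style, the counting logic itself can stay a pure function (a trivial sketch; the name countTHE is mine, factored out of the two versions above):

```haskell
-- Pure word counting: only the file access differs between the lazy-IO
-- and the iteratee versions; the counting pipeline is the same.
countTHE :: String -> Int
countTHE = length . filter (== "the") . words
```

Either run_countTHE variant is then just a way of feeding file contents to countTHE.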

 Derek Lowe has a list of Things I Won't Work With.
 http://pipeline.corante.com/archives/things_i_wont_work_with/
This is a really fun site indeed.





Re: [Haskell-cafe] Set monad

2013-04-12 Thread oleg

 One problem with such monad implementations is efficiency. Let's define

 step :: (MonadPlus m) => Int -> m Int
 step i = choose [i, i + 1]

 -- repeated application of step on 0:
 stepN :: (Monad m) => Int -> m (S.Set Int)
 stepN = runSet . f
   where
 f 0 = return 0
 f n = f (n-1) >>= step

 Then `stepN`'s time complexity is exponential in its argument. This is
 because `ContT` reorders the chain of computations to right-associative,
 which is correct, but changes the time complexity in this unfortunate way.
 If we used Set directly, constructing a left-associative chain, it produces
 the result immediately:

The example is excellent. And yet, the efficient genuine Set monad is
possible.

BTW, a simpler example to see the problem with the original CPS monad is to
repeat
choose [1,1] >> choose [1,1] >> choose [1,1] >> return 1

and observe exponential behavior. But your example is much more
subtle.

Enclosed is the efficient genuine Set monad. I wrote it in direct
style (it seems to be faster, anyway). The key is to use the optimized
choose function when we can.

{-# LANGUAGE GADTs, TypeSynonymInstances, FlexibleInstances #-}

module SetMonadOpt where

import qualified Data.Set as S
import Control.Monad

data SetMonad a where
    SMOrd :: Ord a => S.Set a -> SetMonad a
    SMAny :: [a] -> SetMonad a

instance Monad SetMonad where
    return x = SMAny [x]

    m >>= f = collect . map f $ toList m

toList :: SetMonad a -> [a]
toList (SMOrd x) = S.toList x
toList (SMAny x) = x

collect :: [SetMonad a] -> SetMonad a
collect []  = SMAny []
collect [x] = x
collect ((SMOrd x):t) = case collect t of
                          SMOrd y -> SMOrd (S.union x y)
                          SMAny y -> SMOrd (S.union x (S.fromList y))
collect ((SMAny x):t) = case collect t of
                          SMOrd y -> SMOrd (S.union y (S.fromList x))
                          SMAny y -> SMAny (x ++ y)

runSet :: Ord a => SetMonad a -> S.Set a
runSet (SMOrd x) = x
runSet (SMAny x) = S.fromList x

instance MonadPlus SetMonad where
    mzero = SMAny []
    mplus (SMAny x) (SMAny y) = SMAny (x ++ y)
    mplus (SMAny x) (SMOrd y) = SMOrd (S.union y (S.fromList x))
    mplus (SMOrd x) (SMAny y) = SMOrd (S.union x (S.fromList y))
    mplus (SMOrd x) (SMOrd y) = SMOrd (S.union x y)

choose :: MonadPlus m => [a] -> m a
choose = msum . map return


test1 = runSet (do
  n1 <- choose [1..5]
  n2 <- choose [1..5]
  let n = n1 + n2
  guard $ n < 7
  return n)
-- fromList [2,3,4,5,6]

-- Values to choose from might be higher-order or actions
test1' = runSet (do
  n1 <- choose . map return $ [1..5]
  n2 <- choose . map return $ [1..5]
  n  <- liftM2 (+) n1 n2
  guard $ n < 7
  return n)
-- fromList [2,3,4,5,6]

test2 = runSet (do
  i <- choose [1..10]
  j <- choose [1..10]
  k <- choose [1..10]
  guard $ i*i + j*j == k * k
  return (i,j,k))
-- fromList [(3,4,5),(4,3,5),(6,8,10),(8,6,10)]

test3 = runSet (do
  i <- choose [1..10]
  j <- choose [1..10]
  k <- choose [1..10]
  guard $ i*i + j*j == k * k
  return k)
-- fromList [5,10]

-- Test by Petr Pudlak

-- First, general, unoptimal case
step :: (MonadPlus m) => Int -> m Int
step i = choose [i, i + 1]

-- repeated application of step on 0:
stepN :: Int -> S.Set Int
stepN = runSet . f
  where
  f 0 = return 0
  f n = f (n-1) >>= step

-- it works, but clearly exponential
{-
*SetMonad> stepN 14
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]
(0.09 secs, 31465384 bytes)
*SetMonad> stepN 15
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
(0.18 secs, 62421208 bytes)
*SetMonad> stepN 16
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
(0.35 secs, 124876704 bytes)
-}

-- And now the optimization
chooseOrd :: Ord a => [a] -> SetMonad a
chooseOrd x = SMOrd (S.fromList x)

stepOpt :: Int -> SetMonad Int
stepOpt i = chooseOrd [i, i + 1]

-- repeated application of step on 0:
stepNOpt :: Int -> S.Set Int
stepNOpt = runSet . f
  where
  f 0 = return 0
  f n = f (n-1) >>= stepOpt

{-
stepNOpt 14
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]
(0.00 secs, 515792 bytes)
stepNOpt 15
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
(0.00 secs, 515680 bytes)
stepNOpt 16
fromList [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
(0.00 secs, 515656 bytes)

stepNOpt 30
fromList 
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]
(0.00 secs, 1068856 bytes)
-}





[Haskell-cafe] Set monad

2013-04-11 Thread oleg

The question of Set monad comes up quite regularly, most recently at
http://www.ittc.ku.edu/csdlblog/?p=134

Indeed, we cannot make Data.Set.Set an instance of the Monad type
class -- not immediately, that is. That does not mean that there is no
Set monad, a non-determinism monad that returns the set of answers
rather than a list. I mean a genuine *monad*, rather than a restricted,
generalized, etc. monad.

It is surprising that the question of the Set monad still comes up
given how simple it is to implement. Footnote 4 in the ICFP
2009 paper ``Purely Functional Lazy Non-deterministic Programming''
described the idea, which is probably folklore. Just in case, here is
the elaboration, a Set monad transformer.

{-# LANGUAGE TypeSynonymInstances, FlexibleInstances #-}

module SetMonad where

import qualified Data.Set as S
import Control.Monad.Cont

-- Since ContT is a bona fide monad transformer, so is SetMonadT r.
type SetMonadT r = ContT (S.Set r)

-- These are the only two places the Ord constraint shows up

instance (Ord r, Monad m) => MonadPlus (SetMonadT r m) where
    mzero = ContT $ \k -> return S.empty
    mplus m1 m2 = ContT $ \k -> liftM2 S.union (runContT m1 k) (runContT m2 k)

runSet :: (Monad m, Ord r) => SetMonadT r m r -> m (S.Set r)
runSet m = runContT m (return . S.singleton)

choose :: MonadPlus m => [a] -> m a
choose = msum . map return

test1 = print =<< runSet (do
  n1 <- choose [1..5]
  n2 <- choose [1..5]
  let n = n1 + n2
  guard $ n < 7
  return n)
-- fromList [2,3,4,5,6]

-- Values to choose from might be higher-order or actions
test1' = print =<< runSet (do
  n1 <- choose . map return $ [1..5]
  n2 <- choose . map return $ [1..5]
  n  <- liftM2 (+) n1 n2
  guard $ n < 7
  return n)
-- fromList [2,3,4,5,6]

test2 = print =<< runSet (do
  i <- choose [1..10]
  j <- choose [1..10]
  k <- choose [1..10]
  guard $ i*i + j*j == k * k
  return (i,j,k))
-- fromList [(3,4,5),(4,3,5),(6,8,10),(8,6,10)]

test3 = print =<< runSet (do
  i <- choose [1..10]
  j <- choose [1..10]
  k <- choose [1..10]
  guard $ i*i + j*j == k * k
  return k)
-- fromList [5,10]





[Haskell-cafe] unsafeInterleaveST (and IO) is really unsafe [was: meaning of referential transparency]

2013-04-10 Thread oleg

One may read this message as proving True === False without resorting
to IO. In other words, referential transparency, or the substitution
of equals for equals, may fail even in expressions of type Bool.

This message is intended as an indirect stab at lazy IO. 
Unfortunately, Lazy IO and even the raw unsafeInterleaveIO, on which
it is based, are sometimes recommended, without warnings that usually
accompany unsafePerformIO.
http://www.haskell.org/pipermail/haskell-cafe/2013-March/107027.html

UnsafePerformIO is known to be unsafe, breaking equational reasoning;
unsafeInterleaveIO gets a free pass because any computation with it
has to be embedded in the IO context in order to be evaluated -- and
we can expect anything from IO. But unsafeInterleaveIO has essentially
the same code as unsafeInterleaveST: compare unsafeInterleaveST from
GHC/ST.lhs with unsafeDupableInterleaveIO from GHC/IO.hs keeping in
mind that IO and ST have the same representation, as described in
GHC/IO.hs. And unsafeInterleaveST is really unsafe -- not just mildly
or somewhat or vaguely unsafe. It breaks equational reasoning, in pure
contexts.

Here is the evidence. I hope most people believe that the equality on
Booleans is symmetric. I mean the function

(==) :: Bool -> Bool -> Bool
True  == True  = True
False == False = True
_ == _ = False

or its Prelude implementation.
The equality of x == y and y == x holds even if either x or y (or both)
are undefined.

And yet there exists a context that distinguishes x == y from y == x.
That is, there exists
bad_ctx :: ((Bool,Bool) -> Bool) -> Bool
such that

*R> bad_ctx $ \(x,y) -> x == y
True
*R> bad_ctx $ \(x,y) -> y == x
False

The function unsafeInterleaveST ought to bear the same stigma as does
unsafePerformIO. After all, both masquerade side-effecting
computations as pure. Hopefully even lazy IO will get
recommended less, and with more caveats. (Lazy IO may be
superficially convenient -- so are the absence of typing discipline and
the presence of unrestricted mutation, as many people in 
Python/Ruby/Scheme etc worlds would like to argue.)

The complete code:

module R where

import Control.Monad.ST.Lazy (runST)
import Control.Monad.ST.Lazy.Unsafe (unsafeInterleaveST)
import Data.STRef.Lazy

bad_ctx :: ((Bool,Bool) -> Bool) -> Bool
bad_ctx body = body $ runST (do
   r <- newSTRef False
   x <- unsafeInterleaveST (writeSTRef r True >> return True)
   y <- readSTRef r
   return (x,y))


t1 = bad_ctx $ \(x,y) -> x == y
t2 = bad_ctx $ \(x,y) -> y == x





Re: [Haskell-cafe] closed world instances, closed type families

2013-04-02 Thread oleg

Henning Thielemann wrote:
 However the interesting part of a complete case analysis on type level
 peano numbers was only sketched in section 8.4 Closed type
 families. Thus I tried again and finally found a solution that works
 with existing GHC extensions:

You might like the following message posted in January 2005
http://www.haskell.org/pipermail/haskell-cafe/2005-January/008239.html
(the whole discussion thread is relevant).

In particular, the following excerpt

-- data OpenExpression env r where
--   Lambda :: OpenExpression (a,env) r -> OpenExpression env (a -> r);
--   Symbol :: Sym env r -> OpenExpression env r;
--   Constant :: r -> OpenExpression env r;
--   Application :: OpenExpression env (a -> r) -> OpenExpression env a ->
--                  OpenExpression env r

-- Note that the types of the efold alternatives are essentially
-- the inversion of arrows in the GADT declaration above
class OpenExpression t env r | t env -> r where
    efold :: t -> env
          -> (forall r. r -> c r)                  -- Constant
          -> (forall r. r -> c r)                  -- Symbol
          -> (forall a r. (a -> c r) -> c (a->r))  -- Lambda
          -> (forall a r. c (a->r) -> c a -> c r)  -- Application
          -> c r


shows the idea of the switch, but on a more complex example (using a
higher-order rather than first-order language). That message
anticipated the tagless-final approach.
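The same inversion-of-arrows idea can be seen on a first-order example (a minimal sketch of mine, with Peano numerals in place of the higher-order language above):

```haskell
-- Type-level Peano numerals
data Z   = Z
data S n = S n

-- The fold takes one argument per constructor, with the constructor's
-- arrows inverted: a complete, closed case analysis over numerals.
class Nat n where
  nfold :: n -> r -> (r -> r) -> r   -- zero case, successor case

instance Nat Z where
  nfold Z z _ = z
instance Nat n => Nat (S n) where
  nfold (S n) z s = s (nfold n z s)

-- A sample interpretation of the fold
toInt :: Nat n => n -> Int
toInt n = nfold n 0 (+ 1)
```

Any function defined by complete case analysis on numerals is an instance of nfold, just as efold above covers the complete case analysis on open expressions.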




Re: [Haskell-cafe] attoparsec and backtracking

2013-03-19 Thread oleg

Wren Thornton wrote:
 I had some similar issues recently. The trick is figuring out how to
 convince attoparsec to commit to a particular alternative. For example,
 consider the grammar: A (B A)* C; where if the B succeeds then we want to
 commit to parsing an A (and if it fails then return A's error, not C's).

Indeed. Consider the following (greatly simplified) fragment from the
OCaml grammar

| "let"; r = opt_rec; bi = binding; "in";
   x = expr LEVEL ";" -> ...
| "function"; a = match_case -> ...
| "if"; e1 = SELF; "then"; e2 = expr LEVEL "top";
  "else"; e3 = expr LEVEL "top" -> ...
...
| "false" -> ...
| "true"  -> ...

It would be bizarre if the parser -- upon seeing "if" but not finding
"then" -- reported the error that `found "if" when "true" was
expected'. Many people would think that when the parser comes across
"if", it should commit to parsing the conditional. And if it fails later, it
should report the error with the conditional, rather than trying to
test how else the conditional cannot be parsed. This is exactly the
intuition of pattern matching. For example, given

 foo ("if":t) = case t of
   (e:"then":_) -> e
 foo _ = ""

we expect that 
foo ["if","false","false"]
will throw an exception rather than return the empty string. If the
pattern has matched, we are committed to the corresponding
branch. Such an intuition ought to apply to parsing -- and indeed it
does. The OCaml grammar above was taken from the camlp4 code. Camlp4
parsers

http://caml.inria.fr/pub/docs/tutorial-camlp4/tutorial002.html#toc6

do pattern-matching on a stream, for example
 # let rec expr =
   parser
     [< 'If; x = expr; 'Then; y = expr; 'Else; z = expr >] -> "if"
   | [< 'Let; 'Ident x; 'Equal; x = expr; 'In; y = expr >] -> "let"

and raise two different sorts of exceptions. A parser raises
Stream.Failure if it failed on the first element of the stream (in the
above case, if the stream contains neither If nor Let). If the parser
successfully consumed the first element but failed later, a different
Stream.Error is thrown. Although Camlp4 has many detractors, even they
admit that the parsing technology by itself is surprisingly powerful,
and produces error messages that are oftentimes better than those by
the yacc-like, native OCaml parser. Camlp4 parsers are used
extensively in Coq.

The idea of two different failures may help in the case of attoparsec
or parsec. A regular parser failure initiates backtracking. If we wish
to terminate the parser, we should raise the exception (and cut the
rest of the choice points). Perhaps there could be a combinator `commit'
that converts a failure into the exception. In the original example
A (B A)* C we would use it as A (B (commit A))* C.
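A toy sketch of the two failure modes (my own types, not attoparsec's actual API): soft failures backtrack through alternation, while hard failures propagate past the remaining choice points, exactly like Stream.Failure versus Stream.Error above.

```haskell
-- Soft = recoverable failure (backtrack); Hard = committed failure (abort).
data Res a = Ok a String | Soft | Hard String
  deriving (Show, Eq)

newtype P a = P { runP :: String -> Res a }

-- Alternation recovers only from soft failures.
orElse :: P a -> P a -> P a
orElse (P p) (P q) = P $ \s -> case p s of
  Soft -> q s
  r    -> r

-- commit converts a soft failure into a hard error, cutting backtracking.
commit :: P a -> P a
commit (P p) = P $ \s -> case p s of
  Soft -> Hard "committed parse failed"
  r    -> r

-- The one primitive: match a single character, failing softly.
char :: Char -> P Char
char c = P $ \s -> case s of
  (x:xs) | x == c -> Ok x xs
  _               -> Soft
```

With these, runP (char 'a' `orElse` char 'b') backtracks into the second branch, whereas runP (commit (char 'a') `orElse` char 'b') does not: the hard error wins.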





Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-13 Thread oleg

Jeremy Shaw wrote:
 It would be pretty damn cool if you could create a data type for
 generically describing a monadic parser, and then use template haskell
 to generate a concrete parser from that data type. That would allow
 you to create your specification in a generic way and then target
 different parsers like parsec, attoparsec, etc. There is some code
 coming up in a few paragraphs that should describe this idea more
 clearly.

After rather mild and practical restrictions, the problem is
solvable. (BTW, even the problem of lifting arbitrary functional
values, let alone handles and other stuff, is solvable in realistic
settings -- or even completely, although less practically.) 

Rather than starting from the top -- implementing monadic or
applicative parser, let's start from the bottom and figure out what we
really need. It seems that many real-life parsers aren't using the
full power of Applicative, let alone Monad. So why pay, a whole
lot, for what we don't use?

Any parser combinator library has to be able to combine parsers. It
seems the applicative rule
    (<*>) :: Parser (a->b) -> Parser a -> Parser b
is very popular. It is indeed very useful -- although not the only
thing possible. One can come up with a set of combinators that are
used for realistic parsing. For example,
    (*>) :: Parser a -> Parser b -> Parser b
for sequential composition, although expressible via <*>, could be
defined as primitive. Many other such combinators can be defined as
primitives.

In other words: the great advantage of Applicative parser combinators
is letting the user supply semantic actions, and executing those
actions as parsing progresses. There is also a traditional approach:
the parser produces an AST or a stream of parsing events, which the
user consumes and semantically processes any way they wish.  Think of
XML parsing: often people parse XML and get a DOM tree, and process it
afterwards. An XML parser can be incremental: SAX.  Parsers that
produce AST need only a small fixed set of combinators. We never need
to lift arbitrary functions since those parsers don't accept arbitrary
semantic actions from the user. For that reason, these parsers are
also much easier to analyze.

Let's take the high road, however: applicative parsers. The <*> rule
is not problematic: it neatly maps to code. Consider

newtype Code a = Code Exp
which is the type-annotated TH Code. We can easily define

app_code :: Code (a->b) -> Code a -> Code b
app_code (Code f) (Code x) = Code $ AppE f x

So, Code is almost applicative. Almost -- because we only have a
restricted pure:
pureR :: Lift a => a -> Code a
with a Lift constraint. Alas, this is not sufficient for realistic
parsers, because often we have to lift functions, as in the example of
parsing a pair of characters:

pure (\x y -> (x,y)) <*> anyChar <*> anyChar

But aren't functions really unliftable? They are unliftable by value,
but we can also lift by reference. 

Here is an example, using the tagless-final framework, since it is
extensible. We define the basic, minimal Applicative-like class

 class Sym repr where
     pureR :: Lift a => a -> repr a
     app   :: repr (a->b) -> repr a -> repr b

 infixl 4 `app`


And a parser class, with only one primitive parser.

 class Sym repr => Parser repr where
     anychar :: repr Char

For our example, parsing two characters and returning them as a pair,
we need pairs. So, we extend our parser with three higher-order
_constants_.

 class Sym repr => Pair repr where
     pair :: repr (a -> b -> (a,b))
     prj1 :: repr ((a,b) -> a)
     prj2 :: repr ((a,b) -> b)


And here is the example.

 test1 = pair `app` anychar `app` anychar

One interpretation of Sym is to generate code (another one could
analyze the parsers)

 data C a = C{unC :: Q Exp} 

Most interesting is the instance of pairs. Actually, it is not that
interesting: we just lift functions by reference.

 pair0 x y = (x,y)
 
 instance Pair C where
 pair = C [e| pure pair0 |]
 prj1 = C [e| pure fst |]
 prj2 = C [e| pure snd |]

Because tagless-final is so extensible, any time we need a new
functional constant, we can easily introduce it and define its code,
either by building a code expression or by referring to a global name
that is bound to the desired value. The latter is `lift by reference'
(which is what dynamic linking does).


The obvious limitation of this approach is that all functions to
lift must be named -- because we lift by reference. We can also build
anonymous functions, if we just add lambda to our language. If we go
this way we obtain something like

http://okmij.org/ftp/meta-programming/index.html#meta-haskell

(which has lam, let, arrays, loops, etc.)

Sample code, for reference

{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE NoMonomorphismRestriction #-}

module P where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import Language.Haskell.TH.Ppr

import Control.Applicative

import Text.ParserCombinators.ReadP


class 

[Haskell-cafe] Help to write type-level function

2013-02-27 Thread oleg

Dmitry Kulagin wrote:
 I try to implement typed C-like structures in my little dsl.

HList essentially had those
http://code.haskell.org/HList/

 I was unable to implement the required type function:
 type family Find (s :: Symbol) (xs :: [(Symbol,Ty)]) :: Ty
 which just finds a type in an associative list.

HList also implemented records with named fields. Indeed, you need a
type-level lookup in an associative list, and for that you need type
equality. (The ordinary List.lookup has an Eq constraint, doesn't
it?)

Type equality can be implemented with type functions, right now.
http://okmij.org/ftp/Haskell/typeEQ.html#TTypeable

(That page also defined a type-level list membership function).
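For reference, with closed type families (added to GHC after this message was written) the lookup can be stated directly; the Ty type below is a hypothetical stand-in for the poster's DSL types:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures #-}
import GHC.TypeLits (Symbol)

-- Hypothetical DSL types, standing in for the poster's Ty
data Ty = TInt | TBool

-- Closed type family: the non-linear first equation uses type equality
-- implicitly; the second fires only when the keys are apart.
type family Find (s :: Symbol) (xs :: [(Symbol, Ty)]) :: Ty where
  Find s ('(s,  t) ': xs) = t
  Find s ('(s', t) ': xs) = Find s xs

data Proxy (t :: Ty) = Proxy

-- Compile-time check: "x" is found in the second slot of the list.
found :: Proxy (Find "x" '[ '("y", 'TBool), '("x", 'TInt) ])
found = Proxy :: Proxy 'TInt
```

The type annotation on found only type-checks if Find reduces to 'TInt, so the lookup is verified by the compiler itself.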








[Haskell-cafe] Thunks and GHC pessimisation

2013-02-26 Thread oleg

Tom Ellis wrote:
 To avoid retaining a large lazy data structure in memory it is useful to
 hide it behind a function call.  Below, many is used twice.  It is hidden
 behind a function call so it can be garbage collected between uses. 

As you discovered, it is quite challenging to ``go against the grain''
and force recomputation. GHC is quite good at avoiding
recomputation. This is a trade-off of time vs space. For a large
search tree, it is space that is at a premium, and laziness and similar
strategies are exactly the wrong trade-off.

The solution (which I've seen in some of the internal library code) is
to confuse GHC with extra functions:
http://okmij.org/ftp/Haskell/misc.html#memo-off

So, eventually it is possible to force recomputation. But the solution
leaves a poor taste -- fighting a compiler is never a good idea. So,
this is a bug of sorts -- not a bug of GHC, but of lazy
evaluation. Lazy evaluation is not the best evaluation strategy. It is
a trade-off, which suits a large class of problems and punishes
another large class of problems.
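The idiom behind the linked trick, in a minimal sketch (my names; the page above has the robust version): hide the large value behind a function with a dummy argument, and discourage GHC from floating it out.

```haskell
-- The dummy () argument plus NOINLINE discourage GHC from sharing the
-- list between the two uses, so (at least without full-laziness) it can
-- be rebuilt on each call and garbage-collected in between.
{-# NOINLINE many #-}
many :: () -> [Int]
many () = [1 .. 1000000]

twoPasses :: IO ()
twoPasses = do
  print (length (many ()))  -- first traversal; the list may be GC'd after
  print (sum    (many ()))  -- second traversal recomputes the list
```

With optimizations on, GHC's full-laziness pass can still defeat this naive version, which is exactly why the linked code resorts to further confusion.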





[Haskell-cafe] parser combinator for left-recursive grammars

2013-02-20 Thread oleg

It is indeed true that a grammar with left-recursion can be
transformed to an equivalent grammar without left recursion --
equivalent in terms of the language recognized -- but _not_ in the
parse trees. Linguists in particular care about parses. Therefore, it was
linguists who developed the parser combinator library for general
grammar with left recursion and eps-loops:

Frost, Richard, Hafiz, Rahmatullah, and Callaghan, Paul. 
Parser Combinators for Ambiguous Left-Recursive Grammars. PADL2008.
http://cs.uwindsor.ca/~richard/PUBLICATIONS/PADL_08.pdf

I have tried dealing with left-recursive grammars too, and wrote a
parser combinator library:

http://okmij.org/ftp/Haskell/LeftRecursion.hs

It can handle eps-cycles, ambiguity and other pathology. Here is a
sample bad grammar:

   S -> S A C | C
   A -> B | aCa
   B -> B
   C -> b | C A

For more details, see the December 2009 Haskell-Cafe thread
``Parsec-like parser combinator that handles left recursion?''
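To see the parse-tree point concretely (a standard textbook illustration of mine, not code from the library above): eliminating left recursion from E -> E '+' T | T gives E -> T ('+' T)*, which recognizes the same strings but naturally delivers a head and a list of continuations; the left-associative tree must then be rebuilt by hand, e.g. with a left fold.

```haskell
-- A tiny expression type for the illustration
data E = Num Int | Add E E
  deriving (Show, Eq)

-- A parser for the transformed grammar E -> T ('+' T)* returns the
-- first T and the list of T's following each '+'; the left-associative
-- tree the original grammar would have produced is rebuilt by foldl.
rebuildLeft :: E -> [E] -> E
rebuildLeft = foldl Add
```

For example, rebuildLeft (Num 1) [Num 2, Num 3] recovers Add (Add (Num 1) (Num 2)) (Num 3), the tree the left-recursive rule would have built directly.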




Re: [Haskell-cafe] generalized, tail-recursive left fold that can

2013-02-19 Thread oleg

  That said, to express foldl via foldr, we need a higher-order
  fold. There are various problems with higher-order folds, related to
  the cost of building closures. The problems are especially severe 
  in strict languages or strict contexts. Indeed,
  
  foldl_via_foldr f z l = foldr (\e a z -> a (f z e)) id l z
  
  first constructs the closure and then applies it to z. The closure has
  the same structure as the list -- it is isomorphic to the
  list. However, the closure representation of a list takes typically
  quite more space than the list. So, in strict languages, expressing
  foldl via foldr is a really bad idea. It won't work for big lists.

 If we unroll foldr once (assuming l is not empty), we'll get

   \z -> foldr (\e a z -> a (f z e)) id (tail l) (f z (head l))

 which is a (shallow) closure. In order to observe what you describe (a
 closure isomorphic to the list) we'd need a language which does
 reductions inside closures.

I should've elaborated this point.

Let us consider monadic versions of foldr and foldl. First, monads
sort of emulate strict contexts, making it easier to see when
closures are constructed. Second, we can easily add tracing.


import Control.Monad.Trans

-- The following is just the ordinary foldr, with a specialized
-- type for the seed: m z
foldrM :: Monad m =>
  (a -> m z -> m z) -> m z -> [a] -> m z
-- The code below is identical to that of foldr
foldrM f z [] = z
foldrM f z (h:t) = f h (foldrM f z t)

-- foldlM is identical to Control.Monad.foldM
-- Its code is shown below for reference.
foldlM, foldlM' :: Monad m =>
  (z -> a -> m z) -> z -> [a] -> m z
foldlM f z []    = return z
foldlM f z (h:t) = f z h >>= \z' -> foldlM f z' t

t1 = foldlM (\z a -> putStrLn ("foldlM: " ++ show a) >>
             return (a:z)) [] [1,2,3]

{-
foldlM: 1
foldlM: 2
foldlM: 3
[3,2,1]
-}

-- foldlM' is foldlM expressed via foldrM
foldlM' f z l =
    foldrM (\e am -> am >>= \k -> return $ \z -> f z e >>= k)
           (return return) l >>= \f -> f z

-- foldlM'' is foldlM' with trace printing
foldlM'' :: (MonadIO m, Show a) =>
  (z -> a -> m z) -> z -> [a] -> m z
foldlM'' f z l =
    foldrM (\e am -> liftIO (putStrLn $ "foldR: " ++ show e) >>
                     am >>= \k -> return $ \z -> f z e >>= k)
           (return return) l >>= \f -> f z


t2 = foldlM'' (\z a -> putStrLn ("foldlM: " ++ show a) >>
                       return (a:z)) [] [1,2,3]

{-
foldR: 1
foldR: 2
foldR: 3
foldlM: 1
foldlM: 2
foldlM: 3
[3,2,1]
-}


As we can see from the trace printing, first the whole list is
traversed by foldR and the closure is constructed. Only after foldr
has finished, the closure is applied to z ([] in our case), and
foldl's function f gets a chance to work. The list is effectively
traversed twice, which means the `copy' of the list has to be
allocated -- that is, the closure that incorporates the calls to
f e1, f e2, etc. 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] generalized, tail-recursive left fold that can finish the computation prematurely

2013-02-18 Thread oleg

As others have pointed out, _in principle_, foldr is not at all
deficient. We can, for example, express foldl via foldr. Moreover, we
can express head, tail, take, drop and even zipWith through
foldr. That is, the entire list processing library can be written in
terms of foldr:

http://okmij.org/ftp/Algorithms.html#zip-folds

That said, to express foldl via foldr, we need a higher-order
fold. There are various problems with higher-order folds, related to
the cost of building closures. The problems are especially severe
in strict languages or strict contexts. Indeed,

foldl_via_foldr f z l = foldr (\e a z -> a (f z e)) id l z

first constructs the closure and then applies it to z. The closure has
the same structure as the list -- it is isomorphic to the
list. However, the closure representation of a list typically takes
considerably more space than the list. So, in strict languages,
expressing foldl via foldr is a really bad idea. It won't work for big
lists. BTW, this is why foldM is a _left_ fold.

The arguments against higher-order folds as a `big hammer' were made
back in 1998 by Gibbons and Jones
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.1735

So, the left fold with early termination has a good
justification. In fact, this is how Iteratees were first presented, at
the DEFUN08 tutorial (part of the ICFP 2008 conference). The idea of a
left fold with early termination is much older though. For example, Takusen
(a database access framework) has been using it since 2003 or so. For
a bit of history, see

http://okmij.org/ftp/Streams.html#fold-stream

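The essence of a left fold with early termination can be sketched in a few
lines. This is only a minimal illustration, not the Iteratee or Takusen
interface; the names foldlEarly and sumTo10 are made up here:

```haskell
-- A minimal sketch of a tail-recursive left fold that can stop early.
-- The step function returns either a final result (Left) or the next
-- accumulator (Right). Names are illustrative, not from any library.
foldlEarly :: (z -> a -> Either z z) -> z -> [a] -> z
foldlEarly f z []    = z
foldlEarly f z (h:t) = case f z h of
  Left  z' -> z'                  -- finish the computation prematurely
  Right z' -> foldlEarly f z' t   -- keep folding, tail-recursively

-- Example: sum the list, but stop as soon as the running sum exceeds 10.
sumTo10 :: [Int] -> Int
sumTo10 = foldlEarly (\z a -> let z' = z + a in
                          if z' > 10 then Left z' else Right z') 0
```

Because the recursion is a tail call and no closure chain is built, the fold
runs in constant stack space and never traverses more of the list than needed.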



[Haskell-cafe] why GHC cannot infer type in this case?

2013-02-01 Thread oleg

Dmitry Kulagin wrote:
 I try to implement little typed DSL with functions, but there is a problem:
 compiler is unable to infer type for my functions. 

One way to avoid the problem is to start with the tagless final
representation. It imposes fewer requirements on the type system, and
is a quite mechanical way of embedding DSL. The enclosed code
re-implements your example in the tagless final style. If later you
must have a GADT representation, one can easily write an interpreter
that interprets your terms as GADTs (this is mechanical). For more
examples, see the (relatively recent) lecture notes

http://okmij.org/ftp/tagless-final/course/lecture.pdf

{-# LANGUAGE TypeOperators, KindSignatures, DataKinds #-}
{-# LANGUAGE NoMonomorphismRestriction, TypeFamilies #-}
module TestExp where

-- types of DSL terms
data Ty = Int16 | TFun [Ty] Ty | TSeq [Ty]

-- DSL terms
class Exp (repr :: Ty -> *) where
    int16 :: Int -> repr Int16
    add   :: repr Int16 -> repr Int16 -> repr Int16

    decl :: (repr (TSeq ts) -> repr t) -> repr (TFun ts t)
    call :: repr (TFun ts t) -> repr (TSeq ts) -> repr t

    -- building and deconstructing sequences
    unit :: repr (TSeq '[])
    cons :: repr t -> repr (TSeq ts) -> repr (TSeq (t ': ts))
    deco :: (repr t -> repr (TSeq ts) -> repr w) -> repr (TSeq (t ': ts))
         -> repr w

-- A few convenience functions
deun :: repr (TSeq '[]) -> repr w -> repr w
deun _ x = x

singleton :: Exp repr => (repr t -> repr w) -> repr (TSeq '[t]) -> repr w
singleton body = deco (\x _ -> body x)

-- sample terms
fun =  decl $ singleton (\x -> add (int16 2) x)
-- Inferred type
-- fun :: Exp repr => repr (TFun ((:) Ty 'Int16 ([] Ty)) 'Int16)

test = call fun (cons (int16 3) unit)
-- Inferred type
-- test :: Exp repr => repr 'Int16

fun2 =  decl $ deco (\x -> singleton (\y -> add (int16 2) (add x y)))
{- inferred
fun2
  :: Exp repr =>
     repr (TFun ((:) Ty 'Int16 ((:) Ty 'Int16 ([] Ty))) 'Int16)
-}

test2 = call fun2 (int16 3 `cons` (int16 4 `cons` unit))



-- Simple evaluator

-- Interpret the object type Ty as Haskell type *
type family TInterp (t :: Ty) :: *
type instance TInterp Int16 = Int
type instance TInterp (TFun ts t) = TISeq ts -> TInterp t
type instance TInterp (TSeq ts)   = TISeq ts

type family TISeq (t :: [Ty]) :: *
type instance TISeq '[]   = ()
type instance TISeq (t1 ': ts) = (TInterp t1, TISeq ts)

newtype R t = R{unR:: TInterp t}

instance Exp R where
    int16 = R
    add (R x) (R y) = R $ x + y

    decl f = R $ \args -> unR . f . R $ args
    call (R f) (R args) = R $ f args

    unit = R ()
    cons (R x) (R y) = R (x,y)
    deco f (R (x,y)) = f (R x) (R y)


testv = unR test
-- 5

test2v = unR test2
-- 9





[Haskell-cafe] resources for learning Hindley-Milner type inference for undergraduate students

2013-01-18 Thread oleg

Petr Pudlak wrote:
 could somebody recommend me study materials for learning Hindley-Milner
 type inference algorithm I could recommend to undergraduate students? 

Perhaps you might like a two-lecture course for undergraduates, which
uses Haskell throughout

http://okmij.org/ftp/Haskell/AlgorithmsH.html#teval

It explains HM type inference in a slightly different way, strongly
emphasizing type inference as a non-standard interpretation. The
second main idea is relating polymorphism and `inlining' (copying).
Type schemas are then introduced as an optimization of inlining. It
becomes clear why it is unsound to infer a polymorphic type for ref
[]: expressions of polymorphic types are always inlined (conceptually,
at least), which goes against the dynamic semantics of reference
cells.

The lectures also show how to infer not only the type but also the
type environment. This inference helps to make the point that `tests'
cannot replace typing. We can type check open expressions (and infer
the minimal environment they make sense to use in). We cannot run
(that is, dynamically check) open expressions.




Re: [Haskell-cafe] How to fold on types?

2012-12-25 Thread oleg

Magiclouds asked how to build values of data types with many
components from a list of components. For example, suppose we have

data D3 = D3 Int Int Int deriving Show
v3 = [1::Int,2,3]

How can we build the value D3 1 2 3 using the list v3 as the source
for D3's fields? We can't use (foldl ($) D3 values) since the type
changes throughout the iteration: D3 and D3 1 have different types.

The enclosed code shows the solution. It defines the function fcurry
such that

t1 = fcurry D3 v3
-- D3 1 2 3
gives the expected result (D3 1 2 3).

The code is the instance of the general folding over heterogeneous
lists, search for HFoldr in 
http://code.haskell.org/HList/Data/HList/HList.hs

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts  #-}
{-# LANGUAGE TypeFamilies, DataKinds, PolyKinds, ScopedTypeVariables  #-}
{-# LANGUAGE UndecidableInstances  #-}

-- `Folding' over the data type: creating values of data types
-- with many components from a list of components
-- UndecidableInstances is a bit surprising since everything is decidable,
-- but GHC can't see it.
-- Extensions DataKinds, PolyKinds aren't strictly needed, but
-- they make the code a bit nicer. If we already have them, 
-- why suffer avoiding them. 

module P where

-- The example from MagicCloud's message

data D3 = D3 Int Int Int deriving Show
v3 = [1::Int,2,3]

type family IsArrow a :: Bool
type instance IsArrow (a->b) = True
type instance IsArrow D3 = False
-- add more instances as needed for other non-arrow types

data Proxy a = Proxy

class FarCurry a r t where
    fcurry :: (a->t) -> [a] -> r

instance ((IsArrow t) ~ f, FarCurry' f a r t) => FarCurry a r t where
    fcurry = fcurry' (Proxy::Proxy f)

class FarCurry' f a r t where
    fcurry' :: Proxy f -> (a->t) -> [a] -> r

instance r ~ r' => FarCurry' False a r' r where
    fcurry' _ cons (x:_) = cons x

instance FarCurry a r t => FarCurry' True a r (a->t) where
    fcurry' _ cons (x:t) = fcurry (cons x) t

-- Example
t1 = fcurry D3 v3
-- D3 1 2 3

-- Let's add another data type
data D4 = D4 Int Int Int Int deriving Show
type instance IsArrow D4 = False

t2 = fcurry D4 [1::Int,2,3,4]
-- D4 1 2 3 4





Re: [Haskell-cafe] Is it possible to have constant-space JSON decoding?

2012-12-15 Thread oleg

Johan Tibell posed an interesting problem of incremental XML parsing
while still detecting and reporting ill-formedness errors.
 What you can't have (I think) is a function:

 decode :: FromJSON a = ByteString - Maybe a

 and constant-memory parsing at the same time. The return type here
 says that we will return Nothing if parsing fails. We can only do so
 after looking at the whole input (otherwise how would we know if it's
 malformed).

The problem is very common: suppose we receive an XML document over a
network (e.g., in an HTTP stream). Network connections are inherently
unreliable and can be dropped at any time (e.g., because someone
tripped over a power cord). The XML document therefore can come
truncated, for example, missing the end tag for the root
element. According to the XML Recommendation, such a document is
ill-formed, and hence does not have an Infoset (In contrast, invalid
but well-formed documents do have the Infoset). Strictly speaking, we
should not be processing an XML document until we have verified that it
is well-formed, that is, until we have parsed it in full and checked
that all end tags are in place. It seems we can't do incremental XML
processing in principle.

I should mention first that sometimes people avoid such a strict
interpretation. If we process telemetry data encoded in XML, we may
consider a document meaningful even if the root end tag is missing. We
process as far as we can.

Even if we take the strict interpretation, it is still possible
to handle a document incrementally so long as the processing is
functional or the side effects can be backed out (e.g., in a
transaction). This message illustrates exactly such an incremental
processing that always detects ill-formed XML, and, optionally,
invalid XML. It is possible after all to detect ill-formedness
errors and process the document without loading it all in memory 
first -- using as little memory as needed to hold the state of the
processor -- just a short string in our example.

Our running example is an XML document representing a finite map:
a collection of key-value pairs where key is an integer:

  <map>
   <kv><key>1</key><value>v1</value></kv>
   <kv><key>2</key><value>v2</value></kv>
   <kv><key>bad</key><value>v3</value></kv>

The above document is both ill-formed (missing the end tag)
and invalid (one key is bad: non-read-able). We would like to write
a lookup function for a key in such an encoded map. We should report
an ill-formedness error always. The reporting of validation
errors may vary. The function

xml_lookup :: Monad m => Key -> Iteratee Char m (Maybe Value)
xml_lookup key = id .| xml_enum default_handlers .| xml_normalize 
 .| kv_enum (lkup key)

always reports the validation errors. The function is built
by composition from smaller, separately developed and tested
pipeline components: parsing of a
document to the XMLStream, normalization, converting the XMLStream to
a stream of (Key,Value) pairs and finally searching the stream.
A similar function that replaces kv_enum with kv_enum_pterm
terminates the (Key,Value) conversion as soon as its client iteratee
finishes. In that case, if the lookup succeeds before we encounter the
bad key, no error is reported. Ill-formedness errors are always raised.

The code
http://okmij.org/ftp/Haskell/Iteratee/XMLookup.hs

shows the examples of both methods of validation error reporting.
This code also illustrates the library of parsing combinators, which
represent the element structure (`DTD').

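The essence of constant-space well-formedness checking can be illustrated
with a toy stream checker. The Event type and the wf function below are made
up for this illustration; the real code linked above uses iteratees and
handles actual XML, but the key point is the same: the only state we must
retain is the stack of currently open elements, not the document itself.

```haskell
-- Toy illustration (not the linked iteratee code): checking the
-- well-formedness of a stream of markup events in constant space.
-- The checker's only state is the stack of currently open element names.
data Event = Open String | Close String | Text String

wf :: [Event] -> Either String ()
wf = go []
  where
    go []    []             = Right ()   -- everything closed: well-formed
    go _     []             = Left "ill-formed: truncated document"
    go stack (Open t : es)  = go (t:stack) es
    go (t:ts) (Close t' : es)
      | t == t'             = go ts es
      | otherwise           = Left ("mismatched end tag: " ++ t')
    go []    (Close t : _)  = Left ("unexpected end tag: " ++ t)
    go stack (Text _ : es)  = go stack es
```

Feeding the events one chunk at a time (rather than from a list) gives the
incremental version; the memory used is bounded by the nesting depth.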



[Haskell-cafe] Is it possible to have constant-space JSON decoding?

2012-12-04 Thread oleg

I am doing, for several months, constant-space processing of large XML
files using iteratees. The file contains many XML elements (which are
a bit more complex than a number). An element can be processed
independently. After the parser finished with one element, and dumped
the related data, the processing of the next element starts anew, so
to speak. No significant state is accumulated for the overall parsing
sans the counters of processed and bad elements, for statistics. XML
is somewhat like JSON, only more complex: an XML parser has to deal
with namespaces, parsed entities, CDATA sections and the other
interesting stuff. Therefore, I'm quite sure there should not be
fundamental problems in constant-space parsing of JSON.

BTW, the parser itself is described there
http://okmij.org/ftp/Streams.html#xml





Re: [Haskell-cafe] Strange behavior with listArray

2012-11-13 Thread oleg

Alex Stangl posed a problem of trying to efficiently memoize a
function without causing divergence:
 solve = let a :: Array Int Int
             a = listArray (0, 3) (0 : f 0)
             f k = if k > 0
                     then f (a!0)
                     else 0 : f 1
         in (intercalate " " . map show . elems) a

Daniel Fischer explained the reason for divergence:
 The problem, Alex, is that

 f k = if k > 0
         then f (a!0)
         else 0 : f 1

 is strict, it needs to know the value of (a!0) to decide which branch to take.
 But the construction of the array a needs to know how long the list (0 : f 0)
 is (well, if it's at least four elements long) before it can return the array.
 So there the cat eats its own tail, f needs to know (a part of) a before it
 can proceed, but a needs to know more of f to return than it does.

But the problem can be fixed: after all, f k is a list of integers. A
list is an indexable collection. Let us introduce the explicit index:

 import Data.Array((!),Array,elems,listArray)
 import Data.List(intercalate)

 solve = (intercalate " " . map show . elems) a
  where
  a :: Array Int Int
  a = listArray (0, 3) (0 : [f 0 i | i - [0..]])

  f 0 0 = 0
  f 0 i = f 1 (i-1)
  f k i = f (a!0) i

This converges. Here is a bit related problem:
http://okmij.org/ftp/Haskell/AlgorithmsH.html#controlled-memoization





Re: [Haskell-cafe] Strange behavior with listArray

2012-11-13 Thread oleg

Alex Stangl wrote:
 To make this concrete, here is the real solve function, which computes
 a border array (Knuth-Morris-Pratt failure function) for a specified
 string, before the broken memoization modification is made:

 solve :: String -> String
 solve w = let h = length w - 1
               wa = listArray (0, h) w
               pI = 0 : solveR (tail w) 0
               solveR :: String -> Int -> [Int]
               solveR [] _ = []
               solveR cl@(c:cs) k =
                 if k > 0 && wa!k /= c
                   then solveR cl (pI!!(k-1))
                   else let k' = if wa!k == c then k + 1 else k
                        in k' : solveR cs k'
           in (intercalate " " . map show) pI

 Here solveR corresponds to f in the toy program and pI is the list
 I would like to memoize since references to earlier elements could
 get expensive for high values of k. 

Ok, let's apply a few program transformations. First we notice that
the list pI must have the same length as the string w. Since we have
already converted the string w to an array, wa, we could index into
that array. We obtain the following version.

 solve1 :: String -> String
 solve1 w = (intercalate " " . map show) pI
  where
  h = length w - 1
  wa = listArray (0, h) w
  pI = 0 : solveR 1 0
  solveR :: Int -> Int -> [Int]
  solveR i k | i > h = []
  solveR i k =
    let c = wa!i in
    if k > 0 && wa!k /= c
       then solveR i (pI!!(k-1))
       else let k' = if wa!k == c then k + 1 else k
            in k' : solveR (i+1) k'

 t1s1 = solve1 "the rain in spain"
 t1s2 = solve1 ""
 t1s3 = solve1 "abbaabba"

We don't need to invent an index: it is already in the problem.
The unit tests confirm the semantics is preserved. The _general_ next
step is to use the pair of indices (i,k) as the key to the two
dimensional memo table. Luckily, our case is much less general. We do
have a very nice dynamic programming problem. The key is the
observation
k' : solveR (i+1) k'
After a new element, k', is produced, it is used as an argument to the
solveR to produce the next element. This leads to a significant
simplification:


 solve2 :: String -> Array Int Int
 solve2 w = pI
  where
  h = length w - 1
  wa = listArray (0, h) w
  pI = listArray (0,h) $ 0 : [ solveR i (pI!(i-1)) | i <- [1..] ]
  solveR :: Int -> Int -> Int
  solveR i k =
    let c = wa!i in
    if k > 0 && wa!k /= c
       then solveR i (pI!(k-1))
       else let k' = if wa!k == c then k + 1 else k
            in k'

 t2s1 = solve2 "the rain in spain"
 t2s2 = solve2 ""
 t2s3 = solve2 "abbaabba"


The unit tests pass.




[Haskell-cafe] forall disappear from type signature

2012-11-03 Thread oleg

Takayuki Muranushi wrote:
 Today, I encountered a strange trouble with higher-rank polymorphism. It
 was finally solved by nominal typing. Was it a bug in type checker? lack of
 power in type inference? 

 runDB :: Lens NetworkState RIB
 runDB = lens (f::NetworkState -> RIB) (\x s -> s { _runDB = x })
  where f :: NetworkState -> RIB

As it becomes apparent (eventually), RIB is a polymorhic type,
something like

type RIB = forall a. DB.DBMT (Maybe Int) IO a
           -> IO (StM (DB.DBMT (Maybe Int) IO) a)

The field _runDB has this polymorphic type. Hence the argument x 
in (\x s -> s { _runDB = x }) is supposed to have a polymorphic type.
But that can't be: absent a type signature, all arguments of a
function are monomorphic. One solution is to give lens an explicit,
higher-rank signature (with explicit forall, that is). A better
approach is to wrap the polymorphic type in a newtype, like you did:

 newtype RIB = RIB (RunInBase (DB.DBMT (Maybe Int) IO) IO)

The newtype RIB is monomorphic and hence first-class, it can be freely
passed around. In contrast, polymorphic values are not first-class,
in Haskell. There are many restrictions on their use. That is not a
bug in the type checker. You may call it lack of power in type
inference: in calculi with first-class polymorphism (such as System F
and System Fw), type inference is not decidable. Even type checking is
not decidable.

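The point about newtype wrapping can be shown with a minimal, self-contained
example (PolyId and applyTwice are made-up names for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A newtype wrapper turns a polymorphic value into a first-class,
-- monomorphic one that can be freely passed around and stored.
newtype PolyId = PolyId (forall a. a -> a)

-- Inside the pattern match, f regains its polymorphic type and can be
-- used at two different types. A plain lambda-bound argument could not
-- be used this way absent a higher-rank type signature.
applyTwice :: PolyId -> (Int, Bool)
applyTwice (PolyId f) = (f 1, f True)
```

For example, `applyTwice (PolyId id)` evaluates to `(1, True)`.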




Re: [Haskell-cafe] A yet another question about subtyping and heterogeneous collections

2012-10-23 Thread oleg

 And HList paper left me with two questions. The first one is how much
 such an encoding costs both in terms of speed and space. And the
 second one is can I conveniently define a Storable instance for
 hlists. As I said before, I need all this machinery to parse a great
 number of serialized nested C structs from a file.

I'm afraid I've overlooked the part about the great serialized C
structs. Serializing HList is easy -- it's de-serialization that is
difficult. Essentially, we need to write a
mini-type-checker. Sometimes, Template Haskell can help, and we can
use GHC's own type-checker.

Since the approach you outlined relies on Haskell type-classes to
express hierarchies, you'll have the same type-checking
problem. You'll have to somehow deduce those type-class constraints
during the de-serialization, and convince GHC of them. If you assume
a fixed number of classes (C struct types), things become simpler. The
HList-based solution becomes just as simple if you assume a fixed
number of record types.





[Haskell-cafe] Why Kleisli composition is not in the Monad signature?

2012-10-23 Thread oleg

Andreas Abel wrote:
 I tell them that monads are for sequencing effects; and the
 sequencing is visible clearly in

(>>)  :: IO a -> IO b -> IO b
(>>=) :: IO a -> (a -> IO b) -> IO b

 but not in

fmap :: (a -> b) -> IO a -> IO b
join :: IO (IO a) -> IO a

Indeed! I'd like to point out an old academic paper that was written
exactly on the subject at hand: how Category Theory monads relate to
monads in Haskell. Here is the relevant quotation:

Monads are typically equated with single-threadedness, and are
therefore used as a technique for incorporating imperative features
into a purely functional language. Category theory monads have little
to do with single-threadedness; it is the sequencing imposed by
composition that ensures single-threadedness. In a Wadler-ised monad
this is a consequence of bundling the Kleisli star and flipped compose
into the bind operator. There is nothing new in this connection. Peter
Landin in his Algol 60 used functional composition to model
semi-colon. Semi-colon can be thought of as a state transforming
operator that threads the state of the machine throughout a program.
The work of Peyton-Jones and Wadler has turned full circle back to
Landin's earlier work as their use of Moggi's sequencing monad enables
real side-effects to be incorporated into monad operations such as
print.

Quoted from: Sec 3: An historical aside
Jonathan M. D. Hill and Keith Clarke:
An Introduction to Category Theory, Category Theory Monads,
and Their Relationship to Functional Programming. 1994.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.6497


I'd like to stress: the single-threadedness, the trick that lets us
embed an imperative language into a pure one, has *little to do* with
category-theoretic monads with their Kleisli star.

The web page
http://okmij.org/ftp/Computation/IO-monad-history.html
describes the work of Landin in detail, contrasting Landin's and
Peyton-Jones' papers.

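The "bundling" the quotation speaks of can be made concrete: bind is the
Kleisli star combined with (flipped) composition, and conversely bind is
definable from Kleisli composition alone. A small illustration, where
bindViaKleisli is a made-up name:

```haskell
import Control.Monad ((>=>))

-- Recovering bind from Kleisli composition: a computation m can be seen
-- as a Kleisli arrow from () that ignores its argument; composing that
-- arrow with f and applying the result to () sequences the two effects.
bindViaKleisli :: Monad m => m a -> (a -> m b) -> m b
bindViaKleisli m f = (const m >=> f) ()
```

For example, `bindViaKleisli (Just 3) (\x -> Just (x + 1))` is `Just 4`. It is
exactly the sequencing imposed by the composition `>=>` that threads one
computation's result into the next.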




[Haskell-cafe] A clarification about what happens under the hood with foldMap

2012-10-23 Thread oleg

 I was playing with the classic example of a Foldable structure: Trees.
 So the code you can find both on Haskell Wiki and LYAH is the following:

 data Tree a = Empty | Node (Tree a) a (Tree a) deriving (Show, Eq)

 instance Foldable Tree where
     foldMap f Empty = mempty
     foldMap f (Node l p r) = foldMap f l `mappend` f p `mappend` foldMap f r

 treeSum :: Tree Int -> Int
 treeSum = Data.Foldable.foldr (+) 0


 What this code does is straighforward. I was struck from the following
 sentences in LYAH:

 Notice that we didn't have to provide the function that takes a value and
  returns a monoid value.
  We receive that function as a parameter to foldMap and all we have to
  decide is where to apply
  that function and how to join up the resulting monoids from it.

 This is obvious, so in case of (+) f = Sum, so f 3 = Sum 3 which is a
 Monoid.
 What I was wondering about is how Haskell knows that it has to pass, for
 example, Sum in case of (+) and Product in case of (*)?

Hopefully the following paradox helps:

 treeDiff :: Tree Int -> Int
 treeDiff = Data.Foldable.foldr (-) 0

 t1 = treeDiff (Node (Node Empty 1 Empty) 2 (Node Empty 3 Empty))

This works just as well. You might be surprised: after all, there is
no Diff monoid that corresponds to (-). In fact, there cannot be since
(-) is not associative. And yet treeDiff type checks and even produces
some integer when applied to a tree. What gives?

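What resolves the paradox is that the monoid Data.Foldable.foldr works in is
not a monoid on Int at all: it is Endo, the monoid of endofunctions under
composition, which is associative no matter what f is. A self-contained
sketch (foldMapT and foldrT are local names standing in for the Foldable
methods):

```haskell
import Data.Monoid (Endo(..))

data Tree a = Empty | Node (Tree a) a (Tree a)

foldMapT :: Monoid m => (a -> m) -> Tree a -> m
foldMapT f Empty        = mempty
foldMapT f (Node l p r) = foldMapT f l `mappend` f p `mappend` foldMapT f r

-- foldr via foldMap, essentially as in Data.Foldable: each element x
-- becomes the endofunction (f x), and the results are composed.
-- Function composition is the monoid operation; no "Diff monoid" needed.
foldrT :: (a -> b -> b) -> b -> Tree a -> b
foldrT f z t = appEndo (foldMapT (Endo . f) t) z

td = foldrT (-) 0 (Node (Node Empty 1 Empty) 2 (Node Empty 3 Empty))
-- 1 - (2 - (3 - 0)) = 2
```

So Haskell never "chooses" Sum or Product for (+) or (*): foldr always uses
the Endo monoid, and the element function is only applied when the composed
endofunction is finally applied to the seed.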




[Haskell-cafe] A yet another question about subtyping and heterogeneous collections

2012-10-19 Thread oleg

First of all, MigMit has probably suggested the parameterization of
Like by the constraint, something like the following:

data Like ctx = forall a. (ctx a, Typeable a) => Like a

instance ALike (Like ALike) where
   toA (Like x) = toA x

instance CLike (Like CLike) where
   toC (Like x) = toC x

get_mono :: Typeable b => [Like ALike] -> [b]
get_mono = catMaybes . map (\(Like x) -> cast x)

lst_a :: [Like ALike]
lst_a = [Like a1, Like b1, Like c1, Like d1]

lst_c :: [Like CLike]
lst_c = [Like c1, Like d1]

t1 = map print_a lst_a
t2 = map print_a lst_c

(The rest of the code is the same as in your first message). 
You need the flag ConstraintKinds. 

Second, all your examples so far used structural subtyping (objects
with the same fields have the same type) rather than nominal
subtyping of C++ (distinct classes have distinct types even if they
have the same fields; the subtyping must be declared in the class
declaration). For the structural subtyping, upcasts and downcasts can
be done mostly automatically. See the OOHaskell paper or the code

http://code.haskell.org/OOHaskell
(see the files in the samples directory).





Re: [Haskell-cafe] Type of scramblings

2012-10-12 Thread oleg

Sorry for a late reply.

 There are of course more total functions of type `[a]^n -> [a]` than of
 type `[a] -> [a]`, in the sense that any term of the latter type can be
 assigned the former type. But, on the other hand, any total function
 `f :: [a]^n -> [a]` has an equivalent total function

   g :: [a] -> [a]
   g xs | length xs == n = f xs
        | otherwise      = xs

That is correct. It is also correct that f has another equivalent
total function

   g2 :: [a] -> [a]
   g2 xs | length xs == n = f xs
         | otherwise      = xs ++ xs

and another, and another... That is the problem. The point of the
original article on eliminating translucent existentials was to
characterize scramblings of a list of a given length -- to develop an
encoding of a scrambling which uses only simple types.  Since the
article talks about isomorphisms, the encoding should be tight.

Suppose we have an encoding of [a] -> [a] functions, which represents
each [a] -> [a] function as a first-order data type, say. The encoding
should be injective, mapping g and g2 above to different
encodings. But for the purposes of characterizing scramblings of a
list of size n, the two encodings should be regarded as equivalent. So,
we have to take a quotient. One may say that writing the type as
[a]^n -> [a] was taking the quotient.




Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-25 Thread oleg

 Wouldn't you say then that Church encoding is still the more appropriate
 reference given that Boehm-Berarducci's algorithm is rarely used? 

 When I need to encode pattern matching it's goodbye Church and hello Scott.
 Aside from your projects, where else is the B-B procedure used?

First of all, the Boehm-Berarducci encoding is inefficient only when
doing an operation that is not easily representable as a fold. A great
many problems can be tackled efficiently by a fold.

Second, I must stress the foundational advantage of the
Boehm-Berarducci encoding: plain System F. Boehm-Berarducci encoding
uses _no_ recursion: not at the term level, not at the type level.  In
contrast, the encodings that are efficient for pattern matching need
general recursive types. With such types, a fix-point combinator becomes
expressible, and the system, as a logic, becomes inconsistent.

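The contrast can be sketched in Haskell, where newtype supplies the recursive
types. The names BBList and SList below are local to this illustration:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Boehm-Berarducci encoding: plain System F. No recursion anywhere --
-- the type does not mention itself.
newtype BBList a = BBList (forall r. (a -> r -> r) -> r -> r)

-- Scott encoding: the type is recursive (SList occurs in its own
-- definition). This is what makes constant-time pattern matching
-- possible, at the cost of leaving plain System F.
newtype SList a = SList (forall r. (a -> SList a -> r) -> r -> r)

snil :: SList a
snil = SList (\_ n -> n)

scons :: a -> SList a -> SList a
scons x xs = SList (\c _ -> c x xs)

-- Pattern matching (head and tail at once) is a single application:
-- no refolding of the rest of the list, unlike tail on a BB list.
suncons :: SList a -> Maybe (a, SList a)
suncons (SList m) = m (\x xs -> Just (x, xs)) Nothing
```

With the recursive SList type in scope, a fix-point combinator can be written
along the same lines, which is exactly the foundational price mentioned above.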



Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-22 Thread oleg

 do you have any references for the extension of lambda-encoding of
 data into dependently typed systems?

 Is there a way out of this quagmire?  Or are we stuck defining actual
 datatypes if we want dependent types?

Although not directly answering your question, the following paper

 Inductive Data Types: Well-ordering Types Revisited
 Healfdene Goguen Zhaohui Luo
 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.8970&rep=rep1&type=pdf

might still be relevant. Sec 2 reviews the major approaches to
inductive data types in Type Theory.





Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-20 Thread oleg

Dan Doel wrote:
  P.S. It is actually possible to write zip function using Boehm-Berarducci
  encoding:
  http://okmij.org/ftp/Algorithms.html#zip-folds

 If you do, you might want to consider not using the above method, as I
 seem to recall it doing an undesirable amount of extra work (repeated
 O(n) tail). 
It is correct. The Boehm-Berarducci web page discusses at some length
the general inefficiency of the encoding, the need for repeated
reflections and reifications for some (but not all) operations. That
is why arithmetic on Church numerals is generally a bad idea.

A much better encoding of numerals is what I call P-numerals
http://okmij.org/ftp/Computation/lambda-calc.html#p-numerals
It turns out I re-discovered them after Michel Parigot (so my name
P-numerals is actually meaningful). Not only are they faster; one
can _syntactically_ prove that PRED . SUCC is the identity.

The general idea of course is Goedel's recursor R.

   R b a 0 = a
   R b a (Succ n) = b n (R b a n)

which easily generalizes to lists. The enclosed code shows the list
encoding that has constant-time cons, head, tail and trivially
expressible fold and zip.


Kim-Ee Yeoh wrote:
 So properly speaking, tail and pred for Church-encoded lists and nats
 are trial-and-error affairs. But the point is they need not be if we
 use B-B encoding, which looks _exactly_ the same, except one gets a
 citation link to a systematic procedure.

 So it looks like you're trying to set the record straight on who actually
 did what.

Exactly. Incidentally, there is more than one way to build a
predecessor of Church numerals. Kleene's solution is not the only
one. Many years ago I was thinking on this problem and designed a
different predecessor:

excerpted from http://okmij.org/ftp/Haskell/LC_neg.lhs


One ad hoc way of defining a predecessor of a positive numeral
predp cn+1 == cn
is to represent predp cn as cn f v
where f and v are so chosen that (f z) acts as
if z == v then c0 else (succ z)
We know that z can be either a numeral cn or a special value v. All
Church numerals have a property that (cn combI) is combI: the identity
combinator is a fixpoint of every numeral. Therefore, ((cn combI) (succ
cn)) reduces to (succ cn). We only need to choose the value v in such
a way that ((v I) (succ v)) yields c0.

 predp = eval $
   c ^ c 
# (z ^ (z # combI # (succ # z)))   -- function f(z)
# (a ^ x ^ c0) -- value v


{-# LANGUAGE Rank2Types #-}

-- List represented with R

newtype R x = R{unR :: forall w.
  -- b
  (x -> R x -> w -> w)
  -- a
  -> w
  -- result
  -> w}

nil :: R x
nil = R (\b a -> a)

-- constant time
cons :: x -> R x -> R x
cons x r = R(\b a -> b x r (unR r b a))

-- constant time
rhead :: R x -> x
rhead (R fr) = fr (\x _ _ -> x) (error "head of the empty list")

-- constant time
rtail :: R x -> R x
rtail (R fr) = fr (\_ r _ -> r) (error "tail of the empty list")

-- fold
rfold :: (x -> w -> w) -> w -> R x -> w
rfold f z (R fr) = fr (\x _ w -> f x w) z

-- zip is expressed via fold
rzipWith :: (x -> y -> z) -> R x -> R y -> R z
rzipWith f r1 r2 = rfold f' z r1 r2
 where f' x tD = \r2 -> cons (f x (rhead r2)) (tD (rtail r2))
       z       = \_  -> nil

-- tests

toR :: [a] -> R a
toR = foldr cons nil

toL :: R a -> [a]
toL = rfold (:) []


l1 = toR [1..10]
l2 = toR "abcde"


t1 = toL $ rtail l2
-- "bcde"

t2 = toL $ rzipWith (,) l2 l1
-- [('a',1),('b',2),('c',3),('d',4),('e',5)]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-18 Thread oleg

There has been a recent discussion of ``Church encoding'' of lists and
the comparison with Scott encoding.

I'd like to point out that what is often called Church encoding is
actually Boehm-Berarducci encoding. That is, often seen

 newtype ChurchList a =
 CL { cataCL :: forall r. (a -> r -> r) -> r -> r }

(in http://community.haskell.org/%7Ewren/list-extras/Data/List/Church.hs )

is _not_ Church encoding. First of all, Church encoding is not typed
and it is not tight. The following article explains the other
difference between the encodings

http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html

Boehm-Berarducci encoding is very insightful and influential. The
authors truly deserve credit.

P.S. It is actually possible to write zip function using Boehm-Berarducci
encoding:
http://okmij.org/ftp/Algorithms.html#zip-folds
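To make the point concrete, here is one way such a zip can look. This is a sketch written from scratch (not the linked code): uncons is itself expressed as a fold that rebuilds the tail, so it is O(n) per step rather than the constant-time version discussed elsewhere in this thread:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype ChurchList a = CL { cataCL :: forall r. (a -> r -> r) -> r -> r }

nilCL :: ChurchList a
nilCL = CL $ \_c n -> n

consCL :: a -> ChurchList a -> ChurchList a
consCL x xs = CL $ \c n -> c x (cataCL xs c n)

fromList :: [a] -> ChurchList a
fromList = foldr consCL nilCL

toList :: ChurchList a -> [a]
toList xs = cataCL xs (:) []

-- uncons, expressed as a fold: the accumulator carries the uncons of
-- the tail, from which the tail itself is rebuilt (hence O(n))
unconsCL :: ChurchList a -> Maybe (a, ChurchList a)
unconsCL xs = cataCL xs step Nothing
  where step x acc = Just (x, maybe nilCL (uncurry consCL) acc)

-- zip via fold: fold the first list into a consumer of the second
zipCL :: ChurchList a -> ChurchList b -> ChurchList (a, b)
zipCL xs = cataCL xs step (const nilCL)
  where
    step x k ys = case unconsCL ys of
      Nothing       -> nilCL
      Just (y, ys') -> consCL (x, y) (k ys')
```

For example, toList (zipCL (fromList "ab") (fromList [1,2,3])) yields [('a',1),('b',2)].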






Re: [Haskell-cafe] Transforming a ADT to a GADT

2012-09-15 Thread oleg

Florian Lorenzen wrote:
 Now, I'd like to have a function
  typecheck :: Exp -> Maybe (Term t) 
  typecheck exp = ...
 that returns the GADT value corresponding to `exp' if `exp' is type
 correct.

Let us examine that type:
typecheck :: forall t. Exp -> Maybe (Term t) 

Do you really mean that given expression exp, (typecheck exp) should
return a (Maybe (Term t)) value for any t whatsoever? In other words,
you are interested in a type-*checking* problem: given an expression _and_
given a type, return Just (Term t) if the given expression has the given
type. Both the expression and the type are inputs. Incidentally, this
problem is like `read': we give the read function a string
and we must tell it which type to parse to. If that is what you
mean, then the solution using Typeable will work (although you may
prefer avoiding Typeable -- in which case you should build type
witnesses on your own).


 that returns the GADT value corresponding to `exp' if `exp' is type
 correct.
Your comment suggests that you mean typecheck exp should return
(Just term) just in case `exp' is well-typed, that is, has _some_
type. The English formulation of the problem already points us towards 
an existential. The typechecking function should return (Just term)
and a witness of its type:

typecheck :: Exp -> Sigma (t:Type) (Term t)

Then my
 data TypedTerm = forall ty. TypedTerm (Typ ty) (Term ty)

is an approximation of the Sigma type. As Erik Hesselink rightfully
pointed out, this type does not preclude type transformation
functions. Indeed you have to unpack and then repack. If you 
want to know the concrete type, you can pattern-match on the type
witness (Typ ty), to see if the inferred type is an Int, for example.
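To make the existential route concrete, here is a self-contained sketch with a minimal expression language; the types Exp, Typ and Term below are my stand-ins, not Florian's actual definitions:

```haskell
{-# LANGUAGE GADTs, ExistentialQuantification #-}

data Exp = EInt Int | EBool Bool | EAdd Exp Exp

data Typ t where             -- type witnesses
  TInt  :: Typ Int
  TBool :: Typ Bool

data Term t where            -- the typed representation
  IntL  :: Int  -> Term Int
  BoolL :: Bool -> Term Bool
  Add   :: Term Int -> Term Int -> Term Int

data TypedTerm = forall t. TypedTerm (Typ t) (Term t)

-- the typechecker pairs each term with the witness of its type
typecheck :: Exp -> Maybe TypedTerm
typecheck (EInt n)   = Just (TypedTerm TInt  (IntL n))
typecheck (EBool b)  = Just (TypedTerm TBool (BoolL b))
typecheck (EAdd a b) = do
  TypedTerm TInt a' <- typecheck a   -- matching TInt refines t to Int;
  TypedTerm TInt b' <- typecheck b   -- a mismatch fails to Nothing
  Just (TypedTerm TInt (Add a' b'))

eval :: Term t -> t
eval (IntL n)  = n
eval (BoolL b) = b
eval (Add a b) = eval a + eval b
```

Pattern-matching on the witness, as in `Just (TypedTerm TInt t) -> eval t`, is exactly the "unpack and repack" step described above.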


Chances are that you wanted the following

typecheck :: {e:Exp} -> Result e
where Result e has the type (Just (Term t)) (with some t dependent on
e) if e is typeable, and Nothing otherwise. But this is a dependent
function type (Pi-type). No wonder we have to go through contortions
like Template Haskell to emulate dependent types in Haskell. Haskell
was not designed to be a dependently-typed language.




[Haskell-cafe] Either Monad and Laziness

2012-09-12 Thread oleg

 I am currently trying to rewrite the Graphics.Pgm library from hackage
 to parse the PGM to a lazy array. 

Laziness and IO really do not mix. 

 The problem is that even using a lazy array structure, because the
 parser returns an Either structure it is only possible to know if the
 parser was successful or not after the whole file is read, 

That is one of the problems. Unexpected memory blowups could be
another problem. The drawbacks of lazy IO are well documented by now.

 The behaviour I want to achieve is like this: I want the program when
 compiled to read from a file, parsing the PGM and at the same time
 apply transformations to the entries as they are read and write them
 back to another PGM file.

Such problems are the main motivation for iteratees, conduits, pipes,
etc. Every such library contains procedures for doing exactly what you
want. Please check Hackage. John Lato's iteratee library, for example,
has a procedure for handling sound (AIFF) files -- which may be very
big. IterateeM has the TIFF decoder -- which is incremental and
strict. TIFF is much harder to parse than PGM.





Re: [Haskell-cafe] type variable in class instance

2012-09-11 Thread oleg

Corentin Dupon wrote about essentially the read-show problem:
 class (Typeable e) => Event e

 data Player = Player Int deriving (Typeable)
 data Message m  = Message String deriving (Typeable)

 instance  Event Player

 instance (Typeable m) => Event (Message m)

 viewEvent :: (Event e) => e -> IO ()
 viewEvent event = do
   case cast event of
     Just (Player a) -> putStrLn $ show a
     Nothing -> return ()
   case cast event of
     Just (Message s) -> putStrLn $ show s
     Nothing -> return ()

Indeed the overloaded function cast needs to know the target type --
the type to cast to. In case of Player, the pattern
(Player a) uniquely determines the type of the desired value: Player.
This is not so for Message: the pattern (Message s) may correspond to
the type Message (), Message Int, etc. 

To avoid the problem, just specify the desired type explicitly

 case cast event of
   Just (Message s :: Message ()) -> putStrLn $ show s
   Nothing -> return ()

(ScopedTypeVariables extension is needed). The exact type of the
message doesn't matter, so I chose Message ().

BTW, if the set of events is closed, GADTs seem a better fit

 data Player
 data Message s

 data Event e where
   Player  :: Int    -> Event Player
   Message :: String -> Event (Message s)

 viewEvent :: Event e -> IO ()
 viewEvent (Player a)  = putStrLn $ show a
 viewEvent (Message s) = putStrLn $ show s





Re: [Haskell-cafe] type variable in class instance

2012-09-11 Thread oleg

Let me see if I understand. You have events of different sorts: events
about players, events about timeouts, events about various
messages. Associated with each sort of event is a (potentially open)
set of data types: messages can carry payload of various types. A
handler specifies behavior of a system upon the reception of an
event. A game entity (player, monster, etc) is a collection of
behaviors. The typing problem is building the heterogeneous collection
of behaviors and routing an event to the appropriate handler. Is this
right?

There seem to be two main implementations, with explicit types and latent
(dynamic) types. The explicit-type representation is essentially HList
(a Type-indexed Record, TIR, to be precise). Let's start with the
latent-type representation. Now that I understand your problem better,
I think your original approach was the right one. GADT was a
distraction, sorry. Hopefully the code below better reflects
your intentions.

{-# LANGUAGE ExistentialQuantification, DeriveDataTypeable #-}
{-# LANGUAGE StandaloneDeriving #-}

import Data.Typeable

-- Event sorts

data Player = Player PlayerN PlayerStatus
deriving (Eq, Show, Typeable)

type PlayerN = Int
data PlayerStatus = Entering | Leaving
deriving (Eq, Show)

newtype Message m = Message m
deriving (Eq, Show)

deriving instance Typeable1 Message

newtype Time = Time Int
deriving (Eq, Show, Typeable)

data SomeEvent = forall e. Typeable e => SomeEvent e
deriving (Typeable)

-- They are all events

class Typeable e => Event e where   -- the Event predicate
  what_event :: SomeEvent - Maybe e  
  what_event (SomeEvent e) = cast e


instance Event Player
instance Event Time
instance Typeable m = Event (Message m)

instance Event SomeEvent where
  what_event = Just

-- A handler is a reaction on an event
-- Given an event, a handler may decline to handle it
type Handler e = e -> Maybe (IO ())

inj_handler :: Event e => Handler e -> Handler SomeEvent
inj_handler h se | Just e <- what_event se = h e
inj_handler _ _ = Nothing


type Handlers = [Handler SomeEvent]

trigger :: Event e => e -> Handlers -> IO ()
trigger e [] = fail "Not handled"
trigger e (h:rest) 
  | Just rh <- h (SomeEvent e) = rh
  | otherwise  = trigger e rest

-- Sample behaviors

-- viewing behavior (although viewing is better with Show since all
-- particular events implement it anyway)

view_player :: Handler Player
view_player (Player x s) = Just . putStrLn . unwords $ 
  ["Player", show x, show s]

-- View a particular message
view_msg_str :: Handler (Message String)
view_msg_str (Message s) = Just . putStrLn . unwords $ 
 ["Message", s]

-- View any message
view_msg_any :: Handler SomeEvent
view_msg_any (SomeEvent e) 
  | (tc1,[tr]) <- splitTyConApp (typeOf e),
    (tc2,_)    <- splitTyConApp (typeOf (undefined::Message ())),
    tc1 == tc2 =
    Just . putStrLn . unwords $ ["Some message of the type", show tr]
view_msg_any _ = Nothing

viewers = [inj_handler view_player, inj_handler view_msg_str, view_msg_any]


test1 = trigger (Player 1 Leaving) viewers
-- Player 1 Leaving

test2 = trigger (Message "str1") viewers
-- Message str1

test3 = trigger (Message (2::Int)) viewers
-- Some message of the type Int





[Haskell-cafe] Rigid skolem type variable escaping scope

2012-08-23 Thread oleg

Matthew Steele asked why foo type-checks and bar doesn't:
   class FooClass a where ...
 
   foo :: (forall a. (FooClass a) => a -> Int) -> Bool
   foo fn = ...
 
   newtype IntFn a = IntFn (a -> Int)
 
   bar :: (forall a. (FooClass a) => IntFn a) -> Bool
   bar (IntFn fn) = foo fn

This and further questions become much simpler if we get a bit more
explicit. Imagine we cannot write type signatures like those of foo
and bar (no higher-ranked type signatures). But we can define
higher-rank newtypes. Since we can't give foo the higher-rank
signature, we must re-write it, introducing the auxiliary
newtype FaI.

 {-# LANGUAGE Rank2Types #-}

 class FooClass a where m :: a

 instance FooClass Int where m = 10

 newtype FaI = FaI{unFaI :: forall a. (FooClass a) => a -> Int}

 foo :: FaI -> Bool
 foo fn = ((unFaI fn) :: Int -> Int) m > 0

We cannot apply fn to a value: we must first remove the wrapper FaI
(and instantiate the type using the explicit annotation -- otherwise
the type-checker has no information how to select the FooClass
instance).

Let's try writing bar in this style. The first attempt

 newtype IntFn a = IntFn (a -> Int)
 newtype FaIntFn = FaIntFn{unFaIntFn :: forall a. FooClass a => IntFn a}

 bar :: FaIntFn -> Bool
 bar (IntFn fn) = foo fn

does not work: 
Couldn't match expected type `FaIntFn' with actual type `IntFn t0'
In the pattern: IntFn fn

Indeed, the argument of bar has the type FaIntFn rather than IntFn, so
we cannot pattern-match on IntFn. We must first remove the IntFn
wrapper. For example:

 bar :: FaIntFn -> Bool
 bar x = case unFaIntFn x of
           IntFn fn -> foo fn

That doesn't work for another reason: 
Couldn't match expected type `FaI' with actual type `a0 -> Int'
In the first argument of `foo', namely `fn'
In the expression: foo fn

foo requires the argument of the type FaI, but fn is of the type
a0 -> Int. To get the desired FaI, we have to apply the wrapper FaI:

 bar :: FaIntFn -> Bool
 bar x = case unFaIntFn x of
           IntFn fn -> foo (FaI fn)

And now we get the desired error message, which should become clear:

Couldn't match type `a0' with `a'
  because type variable `a' would escape its scope
This (rigid, skolem) type variable is bound by
  a type expected by the context: FooClass a => a -> Int
The following variables have types that mention a0
  fn :: a0 -> Int (bound at /tmp/h.hs:16:16)
In the first argument of `FaI', namely `fn'
In the first argument of `foo', namely `(FaI fn)'

When we apply FaI to fn, the type-checker must ensure that fn is
really polymorphic. That is, the free type variables in fn's type must
not occur elsewhere in the type environment: see the generalization rule in the
HM type system. When we removed the wrapper
unFaIntFn, we instantiated the quantified type variable a with some
specific (but not yet determined) type a0. The variable fn receives
the type fn:: a0 - Int. To type-check FaI fn, the type checker should
apply this rule

G |- fn :: FooClass a0 => a0 -> Int
-----------------------------------
G |- FaI fn :: FaI

provided a0 does not occur in G. But it does occur: G has the binding
for fn, with the type a0 - Int, with the undesirable occurrence of
a0.

To solve the problem then, we somehow need to move this problematic binding fn
out of G. That binding is introduced by the pattern-match. So, we
should move the pattern-match under the application of FaI:

 bar x = foo (FaI (case unFaIntFn x of IntFn fn -> fn))

giving us the solution already pointed out.

 So my next question is: why does unpacking the newtype via pattern
 matching suddenly limit it to a single monomorphic type?

Because that's what removing wrappers like FaI does. There is no way
around it. That monomorphic type can be later generalized again, if
the side-condition for the generalization rule permits it.

You might have already noticed that `FaI' is like big Lambda of System
F, and unFaI is like System F type application. That's exactly what
they are:
 http://okmij.org/ftp/Haskell/types.html#some-impredicativity

My explanation is a restatement of the Simon Peyton-Jones explanation,
in more concrete, Haskell terms.





[Haskell-cafe] Can pipes solve this problem? How?

2012-08-16 Thread oleg

 Consider code that takes input from a handle until a special substring is matched:

  matchInf a res s | a `isPrefixOf` s = reverse res
  matchInf a res (c:cs)   = matchInf a (c:res) cs
  hTakeWhileNotFound str hdl = hGetContents hdl >>= return . matchInf str []

 It is simple, but the handle is closed after running. That is not good,
 because I want to reuse this function.

This example is part of one of Iteratee demonstrations
http://okmij.org/ftp/Haskell/Iteratee/IterDemo1.hs

Please search for 
-- Early termination:
-- Counting the occurrences of the word ``the'' and the white space
-- up to the occurrence of the terminating string ``the end''

The iteratee solution is a bit more general because it creates an
inner stream with the part of the outer stream until the match is
found. Here is a sample application:

run_bterm2I fname = 
  print =<< run =<< enum_file fname .| take_until_match "the end"
   (countWS_iter `en_pair` countTHE_iter)

It reads the file until "the end" is found, and counts white space and
occurrences of a specific word, in parallel. All this processing
happens in constant space and we never need to accumulate anything
into a string. If you do need to accumulate into a string, there is
an iteratee stream2list that does that.

The enumeratee take_until_match, like take and take_while, stops when the
terminating condition is satisfied or when EOF is detected. In the
former case, the stream may contain more data and remains usable. 

A part of IterDemo1 is explained in the paper
http://okmij.org/ftp/Haskell/Iteratee/describe.pdf

I am not sure though if I answered your question since you were
looking for pipes. I wouldn't call Iteratee pipes.
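For reference, the original complaint -- that hGetContents closes the handle -- can also be sidestepped without any streaming library by reading strictly, one character at a time. This is a sketch; hTakeUntil is a hypothetical name, not a library function:

```haskell
import System.IO
import Data.List (isPrefixOf)

-- Read from the handle until the terminator string has been seen (or EOF),
-- returning the text before it. The handle stays open, positioned just
-- after the terminator, so further reads can continue from there.
hTakeUntil :: String -> Handle -> IO String
hTakeUntil term h = go []
  where
    rterm = reverse term
    go acc
        -- the accumulator is kept reversed, so a prefix check on it
        -- detects the terminator as a suffix of the input read so far
      | rterm `isPrefixOf` acc =
          return (reverse (drop (length term) acc))
      | otherwise = do
          eof <- hIsEOF h
          if eof
            then return (reverse acc)
            else do c <- hGetChar h
                    go (c : acc)
```

Unlike the hGetContents version, this reads no further than the terminator and keeps the handle reusable, at the cost of character-at-a-time IO.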




Re: [Haskell-cafe] Data structure containing elements which are instances of the

2012-08-14 Thread oleg

 It's only a test case. The real thing is for a game and will be
 something like:

 class EntityT e where
    update      :: e -> e
    render      :: e -> IO ()
    handleEvent :: e -> Event -> e
    getBound    :: e -> Maybe Bound

 data Entity = forall e. (EntityT e) => Entity e

 data Level = Level {
    entities :: [Entity],
    ...
 }

I suspected the real case would look like that. It is also covered on
the same web page on eliminating existentials. Here is your example
without existentials, in pure Haskell98 (I took the liberty of filling
in missing parts to make the example run)


data Event = Event Int  -- Stubs
type Bound = Pos
type Pos  = (Float,Float)

data EntityO = EntityO{
  update  :: EntityO,
  render  :: IO (),
  handleEvent :: Event -> EntityO,
  getBound:: Maybe Bound
  }

type Name = String

new_entity :: Name -> Pos -> EntityO
new_entity name pos@(posx,posy) =
  EntityO{update = update,
  render = render,
  handleEvent = handleEvent,
  getBound = getBound}
 where
 update = new_entity name (posx+1,posy+1)
 render = putStrLn $ name ++ " at " ++ show pos
 handleEvent (Event x) = new_entity name (posx + fromIntegral x,posy)
 getBound = if abs posx + abs posy < 1.0 then Just pos else Nothing


data Level = Level {
entities :: [EntityO]
}

levels = Level {
  entities = [new_entity "e1" (0,0),
              new_entity "e2" (1,1)]
  }


evolve :: Level -> Level
evolve l = l{entities = map update (entities l)}

renderl :: Level -> IO ()
renderl l = mapM_ render (entities l)

main = renderl . evolve $ levels


I think the overall pattern should look quite familiar. The
code shows off the encoding of objects as records of closures. See
http://okmij.org/ftp/Scheme/oop-in-fp.txt
for the references and modern reconstruction of Ken Dickey's
``Scheming with Objects''.

It is this encoding that gave birth to Scheme -- after Steele and
Sussman realized that closures emulate actors (reactive
objects). Incidentally, existentials do enter the picture: they
emerge upon closure conversion:

  Yasuhiko Minamide, J. Gregory Morrisett and Robert Harper
  Typed Closure Conversion. POPL1996
  http://www.cs.cmu.edu/~rwh/papers/closures/popl96.ps

This message demonstrates the inverse of the closure conversion,
eliminating existentials and introducing closures.





Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-11 Thread oleg

Anthony Clayden wrote:
 So three questions in light of the approach of abandoning FunDeps and
 therefore not getting interference with overlapping:
 A. Does TTypeable need to be so complicated?
 B. Is TTypeable needed at all?
 C. Does the 'simplistic' version of type equality testing suffer possible
 IncoherentInstances?

It is important to keep in mind that Type Families (and Data Families)
are _strictly_ more expressive than Functional dependencies. For
example, there does not seem to be a way to achieve the injectivity of
Leibniz equality
http://okmij.org/ftp/Computation/extra-polymorphism.html#injectivity
without type families (relying instead on functional
dependencies, implemented with TypeCast or directly).

I'd like to be able to write
    data Foo a = Foo (SeqT a)
where 
    SeqT Bool = Integer
    SeqT a    = [a]   -- otherwise
(A sequence of Booleans is often better represented as an Integer). A
more practical example is
http://okmij.org/ftp/Haskell/GMapSpec.hs
http://okmij.org/ftp/Haskell/TTypeable/GMapSpecT.hs
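For reference, the wished-for overlapping SeqT can be written directly with a closed type family, a GHC feature (the "New Axioms" work, which landed in GHC 7.8) that postdates this message; a sketch:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Closed type family: equations are tried in order, so overlap is fine
type family SeqT a where
  SeqT Bool = Integer   -- a sequence of Booleans as an Integer
  SeqT a    = [a]       -- otherwise, a plain list

newtype Foo a = Foo (SeqT a)

fooBool :: Foo Bool
fooBool = Foo 42        -- SeqT Bool reduces to Integer

fooChar :: Foo Char
fooChar = Foo "abc"     -- SeqT Char reduces to [Char]
```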

It is possible to sort of combine overlapping with associated types,
but it is quite ungainly. It is not possible to have overlapping
associated types _at present_. Therefore, at present, TTypeable is
necessary and it has to be implemented as it is.

You point out New Axioms. They will change things. I have to repeat my
position however, which I have already stated several times. TTypeable
needs no new features from the language and it is available now. There
is no problem of figuring out how TTypeable interacts with existing
features of Haskell since TTypeable is implemented with what we
already have. New Axioms add to the burden of checking how this new
feature interacts with others. There have been many examples when one
feature, excellent by itself, badly interacts with others. (I recall
GADT and irrefutable pattern matching.)




Re: [Haskell-cafe] Data.Data and OverlappingInstances

2012-08-11 Thread oleg

Timo von Holtz wrote:

 class Test a where
   foo :: Monad m => m a

 instance Num a => Test a where
   foo = return 1

 instance Test Int where
   foo = return 2

 test constr = fromConstrM foo constr

I'm afraid the type checker is right. From the type of fromConstrM
  fromConstrM :: forall m a. (Monad m, Data a) => (forall d. Data d => m d)
              -> Constr -> m a

we see its first argument has the type
(forall d. Data d => m d)

If instead it had the type 
(forall d. Test d => m d)

we would have no problem. As it is, when you pass 'foo' of the type
(Test d, Monad m) => m d
as the first argument of fromConstrM, which only assures the Data d
constraint on 'd' and _nothing_ else, the compiler checks if it can get
rid of (discharge) the constraint Test d. That is, the compiler is
forced to choose an instance for Test. But there is no information to
do that.

Overlapping here is irrelevant. If you had non-overlapping instances

 class Test a where
   foo :: Monad m => m a

 instance Num a => Test [a] where
   foo = return [1]

 instance Test Int where
   foo = return 2

 test constr = fromConstrM foo constr

'test' still causes the problem. The error message now describes the
real problem:

Could not deduce (Test d) arising from a use of `foo'
from the context (Monad m, Data a)
  bound by the inferred type of
           test :: (Monad m, Data a) => Constr -> m a
  at /tmp/d.hs:16:1-36
or from (Data d)
  bound by a type expected by the context: Data d => m d
  at /tmp/d.hs:16:15-36
Possible fix:
  add (Test d) to the context of
    a type expected by the context: Data d => m d
    or the inferred type of test :: (Monad m, Data a) => Constr -> m a
In the first argument of `fromConstrM', namely `foo'

and it recommends the right fix: change the type of fromConstrM to be

  fromConstrM :: forall m a. (Monad m, Data a) => 
    (forall d. (Test d, Data d) => m d) -> Constr -> m a

That will solve the problem. Alas, fromConstrM is a library function
and we are not at liberty to change its type.

 Right now I use a ``case (typeOf x) of'' kind of construct
That is precisely the right way to use Data. SYB provides good
combinators for building functions (generic producers) of that sort.
But you never need unsafeCoerce: gcast is sufficient.







[Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-11 Thread oleg

 data A = A deriving Show
 data B = B deriving Show
 data C = C deriving Show

 data Foo = forall a. Show a => MkFoo a (Int -> Bool)

 instance Show Foo where
show (MkFoo a f) = show a

I'd like to point out that the only operation we can do on the first
argument of MkFoo is to show it. This is all we can ever do:
we have no idea of its type but we know we can show it and get a
String. Why not apply show to start with (it won't be evaluated
until required anyway)? Therefore, the data type Foo above is in all
respects equivalent to

 data Foo = MkFoo String (Int -> Bool)

and no existentials are ever needed. The following article explains
elimination of existentials in more detail, touching upon the original
problem, of bringing different types into union.

http://okmij.org/ftp/Computation/Existentials.html





[Haskell-cafe] unix package licensing

2012-08-10 Thread Oleg Sidorkin
System/Posix/DynamicLinker/Module/ByteString.hsc and
System/Posix/DynamicLinker/Prim.hsc in the unix-2.5.1.0 package
contain the following reference to the GPL-2 c2hs package:

--  Derived from GModule.chs by M.Weber & M.Chakravarty which is part of c2hs
--  I left the API more or less the same, mostly the flags are different.

Does it mean that the GPL should apply to the unix package and all
dependent packages?
The unix package is included in the Haskell Platform, so does it mean
that the Haskell Platform should be GPL also?


Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-03 Thread oleg

 I think instead you should have:
 - abandoned FunDeps
 - embraced Overlapping more!

Well, using TypeCast to emulate all FunDeps was demonstrated three
years after HList (or even sooner -- I don't remember when exactly
the code was written):

http://okmij.org/ftp/Haskell/TypeClass.html#Haskell1

We demonstrate that removing from Haskell the ability to define typeclasses
leads to no loss of expressivity. Haskell restricted to a single,
pre-defined typeclass with only one method can express all of Haskell98
typeclass programming idioms including constructor classes, as well as
multi-parameter type classes and even some functional dependencies. The
addition of TypeCast as a pre-defined constraint gives us all functional
dependencies, bounded existentials, and even associated data types. Besides
clarifying the role of typeclasses as method bundles, we propose a simpler
model of overloading resolution than that of Hall et al.


 So here's my conjecture:
 1. We don't need FunDeps, except to define TypeCast.
(And GHC now has equality constraints, which are prettier.)
 2. Without FunDeps, overlapping works fine.
 ...

I agree on point 1 but not on point 2. The example of incoherence
described in Sec `7.6.3.4. Overlapping instances' of the GHC User
Guide has nothing to do with functional dependencies.






Re: [Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-08-02 Thread oleg

 did you see this, and the discussion around that time?
 http://www.haskell.org/pipermail/haskell-prime/2012-May/003688.html

 I implemented hDeleteMany without FunDeps -- and it works in Hugs (using 
 TypeCast -- but looks prettier in GHC with equality constraints).

I'm afraid I didn't see that -- I was on travel during that time.

It is nice that hDeleteMany works on Hugs.  I forgot if we tried it
back in 2004. To be sure some HList code did work on Hugs. We even had
a special subset of HList specifically for Hugs (which I removed from
the distribution some time ago). The problem was that Hugs
inexplicably would fail in some other HList code. Perhaps the 2006
version is better in that respect, and more or all of HList would work on it.

Thank you.




[Haskell-cafe] CRIP: the Curiously Reoccuring Instance Pattern

2012-07-31 Thread oleg

Ryan Ingram wrote:
 I've been seeing this pattern in a surprising number of instance
 definitions lately:

 instance (a ~ ar, b ~ br) => Mcomp a ar b br [1]
 instance (b ~ c, CanFilterFunc b a) => CanFilter (b -> c) a [2]

And here are a few more earlier instances of the same occurrence:

http://okmij.org/ftp/Haskell/typecast.html

 What I'm wondering is--are there many cases where you really want the
 non-constraint-generating behavior?  It seems like (aside from contrived,
 ahem, instances) whenever you have instance CLASS A B where A and B share
 some type variables, that there aren't any valid instances of the same
 class where they don't share the types in that way.

Instances of the form
class C a b
instance C a a
instance C a b
are very characteristic of (emulation) of disequality
constraints. Such instances occur, in a hidden form, all the time in
HList -- when checking for membership in a type-level list,
for example. There are naturally two cases: when the sought type is at
the head of the list, or it is (probably) at the tail of the list. 

class Member (h :: *) (t :: List *)
instance Member h (Cons h t)
instance Member h t => Member h (Cons h' t)

Of course the instances above are overlapping. And when we add functional
dependencies (since we really want type functions rather than type
relations), they stop working at all. We had to employ work-arounds,
which are described in detail in the HList paper (which is 8 years old
already).
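With today's closed type families and promoted lists -- both of which postdate this message -- the Member test can be written without overlapping instances at all; a sketch:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, PolyKinds, TypeOperators #-}

import Data.Type.Equality ((:~:)(Refl))

-- Equations are tried top to bottom, giving the two natural cases
-- (head match, or recurse on the tail) without overlap pragmas.
type family Member (h :: k) (t :: [k]) :: Bool where
  Member h '[]       = 'False
  Member h (h ': t)  = 'True
  Member h (h' ': t) = Member h t

-- compile-time checks: these only typecheck if Member reduces as claimed
_inList :: Member Int '[Bool, Int] :~: 'True
_inList = Refl

_notInList :: Member Char '[Bool, Int] :~: 'False
_notInList = Refl
```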





Re: [Haskell-cafe] Haskell's type inference considered harmful

2012-07-21 Thread oleg

 However, if you are using ExtendedDefaultRules then you are likely to 
 know you are leaving the clean sound world of type inference.

First of all, ExtendedDefaultRules is enabled by default in
GHCi. Second, my example will work without ExtendedDefaultRules, in
pure Haskell98. It is even shorter:

instance Num Char
main = do
 x <- return []
 let y = x
 print . fst $ (x, abs $ head x)
 -- let dead = if False then y == "" else True
 return ()
The printed result is either [] or "".

Mainly, if the point is to demonstrate the non-compositionality of type
inference and the effect of the dead code, one can give many many
examples, in Haskell98 or even in SML.

Here is a short one (which does not rely on defaulting. It uses
ExistentialQuantification, which I think is in the new standard or is
about to be.).

{-# LANGUAGE ExistentialQuantification #-}

data Foo = forall a. Show a => Foo [a]
main = do
 x <- return []
 let z = Foo x
 let dead = if False then x == "" else True
 return ()

The code type checks. If you _remove_ the dead code, it won't. As you
can see, the dead can have a profound and beneficial influence on the
living, constraining them. (I guess this example is well-timed for Obon.)


For another example, take type classes. Haskell98 prohibits overlapping of
instances. Checking for overlapping requires the global analysis of the
whole program and is clearly non-compositional. Whether you may define
instance Num (Int,Int)
depends on whether somebody else, in a library you use indirectly,
has already introduced that instance. Perhaps that library is imported
for a function that has nothing to do with treating a pair of Ints as
a Num -- that is, the instance is dead code for your
program. Nevertheless, instances are always imported, implicitly.

The non-compositionality of type inference occurs even in SML (or
other languages with the value restriction). For example,

   let x = ref [];;

   (* let z = if false then x := [1] else ();; *)

   x := [true];;

This code type checks. If we uncomment the dead definition, it
won't. So, the type of x cannot be fully determined from its
definition; we need to know the context of its use -- which is
precisely what seems to upset you about Haskell.

 To stir action, mails on haskell-cafe seem useless.

What made you think that? Your questions weren't well answered? What
other venue would you propose?




Re: [Haskell-cafe] Monads with The contexts?

2012-07-19 Thread oleg

 http://en.pk.paraiso-lang.org/Haskell/Monad-Gaussian
 What do you think? Will this be a good approach or bad?

I don't think it is a Monad (or even a restricted monad, see
below). Suppose G a is a `Gaussian' monad and n :: G Double is a
random number with the Gaussian (normal) distribution.  Then 
(\x -> x * x) `fmap` n 
is a random number with the chi-square distribution (of
the degree of freedom 1). Chi-square is _not_ a normal
distribution. Perhaps a different example is clearer:

(\x -> if x > 0 then 1.0 else 0.0) `fmap` n

also has the type G Double but obviously does not have a normal
distribution (since that random variable is discrete).
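To make the closure failure concrete, here is a hypothetical sketch (the G constructor and field names are invented for illustration): if a Gaussian value is represented by just its mean and standard deviation, the exact moments of X*X already show that the image distribution is chi-square-like rather than normal -- for one, X*X is never negative, while any genuine Gaussian with nonzero sigma assigns positive probability to negative values.

```haskell
-- Hypothetical representation of a "Gaussian value", as the proposed
-- G monad would carry it: just a mean and a standard deviation.
data G = G { gMean :: Double, gStdDev :: Double } deriving Show

-- Exact moments of X*X when X ~ N(mu, sigma):
--   E[X^2]   = mu^2 + sigma^2
--   Var[X^2] = 2*sigma^4 + 4*mu^2*sigma^2
-- No choice of (mean, stddev) describes X*X as a genuine Gaussian.
squareMoments :: G -> (Double, Double)
squareMoments (G mu s) = (mu*mu + s*s, 2*s^4 + 4*mu*mu*s*s)

main :: IO ()
main = print (squareMoments (G 0 1))  -- standard normal: mean 1, variance 2
```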

There are other problems

 Let's start with some limitation; we restrict ourselves to Gaussian
 distributions and assume that the standard deviations are small
 compared to the scales we deal with.

That assumption is not stated in the types and it is hard to see how
we can enforce it. Nothing prevents us from writing
liftM2 n n
in which case the variance will no longer be small compared with the
mean.

Just a technical remark: The way G a is written, it is a so-called
restricted monad, which is not a monad (the adjective `restricted' is
restrictive here). 
http://okmij.org/ftp/Haskell/types.html#restricted-datatypes
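For a standalone illustration of the restricted-monad phenomenon (not from the linked article; Data.Set is merely a convenient example): Set supports return/bind-like operations, but its bind needs an Ord constraint on the result element type, so Set cannot be a Prelude Monad, whose bind must work at every type.

```haskell
import qualified Data.Set as S

-- Monad-like operations for Set. The Ord constraint on sbind's result
-- type is what makes this only a *restricted* monad: Prelude's (>>=)
-- admits no such constraint.
sreturn :: a -> S.Set a
sreturn = S.singleton

sbind :: Ord b => S.Set a -> (a -> S.Set b) -> S.Set b
sbind s f = S.unions (map f (S.toList s))

main :: IO ()
main = print (S.toList (sbind (S.fromList [1, 2 :: Int])
                              (\x -> S.fromList [x, x + 1])))
```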





[Haskell-cafe] Probabilistic programming [Was: Monads with The contexts?]

2012-07-19 Thread oleg

  Exercise: how does the approach in the code relate to the approaches
  to sharing explained in
  http://okmij.org/ftp/tagless-final/sharing/sharing.html
 
 Chapter 3 introduces an  implicit impure counter, and Chapter 4 uses a
 database that is passed around.
 let_ in Chapter 5 of sharing.pdf realizes the sharing with a sort of
 continuation-passing style. The unsafe counter works across the module
 (c.f. counter.zip) but is generally unpredictable...

The reason sharing has the type m a -> m (m a) rather than
m a -> m a is that new computations to share may be created
dynamically. Therefore, we need a supply of keys (gensym). We count on
the monad m to provide the supply. However, top-level computations
(bound to top-level identifiers) are created only once, at the
initialization time. Therefore, a static assignment of identifiers
will suffice. The static assignment is similar to the manual label
assignment technique -- the first technique described in Sec 3 of the
sharing.pdf paper. John T. O'Donnell has automated this manual
assignment using TH.

 Now I'm on to the next task; how we represent continuous probability
 distributions? The existing libraries:

 Seemingly have restricted themselves to discrete distributions, or at
 least providing Random support for Monte-Carlo simulations. 

I must warn that although it is ridiculously easy to implement
MonadProb in Haskell, the result is not usable. Try to implement HMM
with any of the available MonadProb and see how well it scales. (Hint: we
want the linear time scaling as we evolve the model -- not
exponential). There is a *huge* gap between a naive MonadProb and
something that could be used even for passingly interesting
problems. We need support for so-called `variable elimination'. We
need better sampling procedures: rejection sampling is better
forgotten. Finally, GHC is actually not a very good language system
for probabilistic programming of the generative-model variety. See
http://okmij.org/ftp/Haskell/index.html#memo-off
for details. 

To give you the flavor of difficulties, consider a passingly
interesting target tracking problem: looking at a radar screen and
figuring out how many planes are there and where they are going:
http://okmij.org/ftp/kakuritu/index.html#blip
Since the equipment is imperfect, there could be a random blip on the radar
that corresponds to no target. If we have a 10x10 screen and 2%
probability of a noise blip in each of the 100 `pixels', we get a
search space of 2^100 -- even before we get to the planes and their
stochastic equations of motion. Hansei can deal with the problem --
and even scale linearly with time. 

I don't think you can make a monad out of Gaussian distributions. That
is not to say that monads are useless in these problems -- monads are
useful, but at a different level, to build code for, say, MCMC
(Markov Chain Monte Carlo). It is this meta-programming approach that
underlies Infer.NET
http://research.microsoft.com/en-us/um/cambridge/projects/infernet/
and Avi Pfeffer's Figaro
http://www.cs.tufts.edu/~nr/cs257/archive/avi-pfeffer/figaro.pdf


I should point out Professor Sato of Tokyo Tech, who is a pioneer
of probabilistic programming
http://sato-www.cs.titech.ac.jp/sato/
http://sato-www.cs.titech.ac.jp/en/publication/
You can learn from him all there is to know about probabilistic
programming.






Re: [Haskell-cafe] Haskell's type inference considered harmful

2012-07-17 Thread oleg

1. Haskell's type inference is NON-COMPOSITIONAL!

Yes, it is -- and there are many examples of it. Here is an example
which has nothing to do with MonomorphismRestriction or numeric
literals

{-# LANGUAGE ExtendedDefaultRules #-}

class C a where
m :: a -> Int

instance C () where
m _ = 1

instance C Bool where
m _ = 2

main = do
   x <- return undefined
   let y = x
   print . fst $ (m x, show x)
   -- let dead = if False then not y else True
   return ()

The program prints 1. If you uncomment the (really) dead code, it will
print 2. Why? The dead code doesn't even mention x, and it appears
after the printing! What is the role of show x, which doesn't do anything?

Exercises: what is the significance of the monadic bind to x? Why
can't we replace it with let x = undefined?

[Significant hint, don't look at it]

Such behavior always occurs when we have the HM polymorphism restriction
and some sort of non-parametricity -- coupled with defaulting rules or
overlapping instances or some other way of resolving overloading. All
these features are essential (type classes are significant;
defaulting is part of the standard and is being used more and more).




Re: [Haskell-cafe] Monads with The contexts?

2012-07-14 Thread oleg

The bad news is that indeed you don't seem to be able to do what you
want. The good news: yes, you can. The enclosed code does exactly what
you wanted:

 sunPerMars :: NonDet Double
 sunPerMars = (/) <$> sunMass <*> marsMass

 sunPerMars_run = runShare sunPerMars
 sunPerMars_run_len = length sunPerMars_run
 -- 27

where earthMass, sunMass, marsMass are all top-level bindings, which
can be defined in separate and separately-compiled modules.

Let's start with the bad news however. Recall the original problem:

 earthMass, sunMass, marsMass :: [Double]
 earthMass = [5.96e24, 5.97e24, 5.98e24]
 sunMass = (*) <$> [2.5e5, 3e5, 4e5] <*> earthMass
 marsMass = (*) <$> [0.01, 0.1, 1.0] <*> earthMass

The problem was that the computation 
sunPerMars = (/) <$> sunMass <*> marsMass
produces too many answers, because earthMass in sunMass and earthMass
in marsMass were independent non-deterministic computations. Thus the
code says: we measure the earthMass to compute sunMass, and we measure
earthMass again to compute marsMass. Each earthMass measurement is
independent and gives us, in general, a different value. 

However, we wanted the code to behave differently. We wanted to
measure earthMass only once, and use the same measured value to
compute masses of other bodies. There does not seem to be a way to do
that in Haskell. Haskell is pure, so we can substitute equals for
equals. earthMass is equal to [5.96e24, 5.97e24, 5.98e24]. Thus the
meaning of the program should not change if we write

 sunMass = (*) <$> [2.5e5, 3e5, 4e5] <*> [5.96e24, 5.97e24, 5.98e24]
 marsMass = (*) <$> [0.01, 0.1, 1.0] <*> [5.96e24, 5.97e24, 5.98e24]

which gives exactly the wrong behavior (and 81 answers for sunPerMars,
as is easy to see). Thus there is no hope that the original code should
behave any differently.
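The 81-answer count is easy to confirm with the plain list monad -- this is exactly the original code, with the applicative operators spelled out:

```haskell
import Control.Applicative ((<$>), (<*>))

earthMass, sunMass, marsMass, sunPerMars :: [Double]
earthMass  = [5.96e24, 5.97e24, 5.98e24]
sunMass    = (*) <$> [2.5e5, 3e5, 4e5] <*> earthMass  -- 9 samples
marsMass   = (*) <$> [0.01, 0.1, 1.0]  <*> earthMass  -- 9 samples
sunPerMars = (/) <$> sunMass <*> marsMass

main :: IO ()
main = print (length sunPerMars)  -- 81: earthMass is sampled twice
```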

 I don't know if memo can solve this problem. I have to test. Is the
 `memo` in your JFP paper section 4.2 Memoization, a pseudo-code? (What
 is type `Thunk` ?) and seems like it's not in explicit-sharing
 hackage.

BTW, the memo in Hansei is different from the memo in the JFP paper.
In JFP, memo is a restricted version of share:
memo_jfp :: m a -> m (m a)

In Hansei, memo is a generalization of share:
memo_hansei :: (a -> m b) -> m (a -> m b)

You will soon need that generalization (which is not mentioned in the
JFP paper).


Given such a let-down, is there any hope at all? Recall, if Haskell
doesn't do what you want, embed a language that does. The solution becomes
straightforward then. (Please see the enclosed code).

Exercise: how does the approach in the code relate to the approaches
to sharing explained in
http://okmij.org/ftp/tagless-final/sharing/sharing.html

Good luck with the contest!

{-# LANGUAGE FlexibleContexts, DeriveDataTypeable #-}

-- Sharing of top-level bindings
-- Solving Takayuki Muranushi's problem
-- http://www.haskell.org/pipermail/haskell-cafe/2012-July/102287.html

module TopSharing where

import qualified Data.Map as M
import Control.Monad.State
import Data.Dynamic
import Control.Applicative

-- Let's pretend this is one separate module.
-- It exports earthMass, the mass of the Earth, which
-- is a non-deterministic computation producing Double.
-- The non-determinism reflects our uncertainty about the mass.

-- Exercise: why do we need the seemingly redundant EarthMass
-- and deriving Typeable?
-- Could we use TemplateHaskell instead?
data EarthMass deriving Typeable
earthMass :: NonDet Double
earthMass = memoSta (typeOf (undefined::EarthMass)) $
            msum $ map return [5.96e24, 5.97e24, 5.98e24]


-- Let's pretend this is another separate module
-- It imports earthMass and exports sunMass
-- Muranushi: ``Let's also pretend that we can measure the other 
-- bodies' masses only by their ratio to the Earth mass, and 
-- the measurements have large uncertainties.''

data SunMass deriving Typeable
sunMass :: NonDet Double
sunMass = memoSta (typeOf (undefined::SunMass)) mass
 where mass       = (*) <$> proportion <*> earthMass
       proportion = msum $ map return [2.5e5, 3e5, 4e5]

-- Let's pretend this is yet another separate module
-- It imports earthMass and exports marsMass

data MarsMass deriving Typeable
marsMass :: NonDet Double
marsMass = memoSta (typeOf (undefined::MarsMass)) mass
 where mass       = (*) <$> proportion <*> earthMass
       proportion = msum $ map return [0.01, 0.1, 1.0]

-- This is the main module, importing the masses of the three bodies
-- It computes ``how many Mars mass object can we create 
-- by taking the sun apart?''
-- This code is exactly the same as in Takayuki Muranushi's message
-- His question: ``Is there a way to represent this?
-- For example, can we define earthMass'' , sunMass'' , marsMass'' all
-- in separate modules, and yet have (length $ sunPerMars'' == 27)?''

sunPerMars :: NonDet Double
sunPerMars = (/) <$> sunMass <*> marsMass

sunPerMars_run = runShare sunPerMars
sunPerMars_run_len = length sunPerMars_run
-- 27


-- The following is essentially 

[Haskell-cafe] Monads with The contexts?

2012-07-13 Thread oleg

Tillmann Rendel has correctly noted that the source of the problem is
the correlation among the random variables. Specifically, our
measurements of the Sun's mass and of Mars's mass used the same, rather
than independently drawn, samples of the Earth's mass. Sharing (which
supports what Functional-Logic programming calls ``call-time choice'')
is indeed the solution. Sharing has very clear physical intuition: it
corresponds to the collapse of the wave function.

Incidentally, a better reference than our ICFP09 paper is the
greatly expanded JFP paper
http://okmij.org/ftp/Computation/monads.html#lazy-sharing-nondet

You would also need a generalization of sharing -- memoization -- to
build stochastic functions. The emphasis is on _function_: when
applied to a value, the function may give an arbitrary sample from a
distribution. However, when applied to the same value again, the
function should return the same sample. The general memo combinator is
implemented in Hansei and is used all the time. The following article
talks about stochastic functions (and correlated variables):

http://okmij.org/ftp/kakuritu/index.html#all-different

and the following two articles show just two examples of using memo:

http://okmij.org/ftp/kakuritu/index.html#noisy-or
http://okmij.org/ftp/kakuritu/index.html#colored-balls

The noisy-or example is quite close to your problem. It deals with the
inference in causal graphs (DAG): finding out the distribution of
conclusions from the distribution of causes (perhaps given
measurements of some other conclusions). Since a cause may contribute
to several conclusions, we have to mind sharing. Since the code works
by back-propagation (so we don't have to sample causes that don't
contribute to the conclusions of interest), we have to use memoization
(actually, memoization of recursively defined functions).






Re: [Haskell-cafe] Interest in typed relational algebra library?

2012-07-10 Thread oleg

 And yes to first order predicate calculus too!

Just two weeks ago Chung-chieh Shan and I were explaining at NASSLLI
the embedding in Haskell of the higher-order predicate logic with two
base types (so-called Ty2). The embedding supports type-safe
simplification of formulas (which was really needed for our
applications). The embedding is extensible: you can add models and
more constants.

http://okmij.org/ftp/gengo/NASSLLI10/course.html#semantics




Re: [Haskell-cafe] translation between two flavors of lexically-scoped type variables

2012-07-07 Thread oleg

 Do you know why they switched over in GHC 6.6?

If I were to speculate, I'd say it is related to GADTs. Before GADTs,
we can keep conflating quantified type variables with schematic type
variables. GADTs seem to force us to make the distinction.

Consider this code:

data G a where
    GI :: Int  -> G Int
    GB :: Bool -> G Bool


evG :: G a -> a
evG (GI x) = x
evG (GB x) = x

To type check the first clause of evG, we assume the equality 
(a ~ Int) for the duration of type checking the clause. To type-check the
second clause, we assume the equality (a ~ Bool) for the clause. We
sort of assumed that a is universally quantified, so we may indeed
think that it could reasonably be an Int or a Bool. Now consider that
evG above was actually a part of a larger function


foo = ...
 where
  evG :: G a -> a
  evG (GI x) = x
  evG (GB x) = x

  bar x = ... x :: Int ... x::a ...

We were happily typechecking evG thinking that a was universally
quantified when in fact it wasn't. And somewhat later in the code it is
revealed that a is actually an Int. Now, one of our assumptions,
a ~ Bool (which we used to typecheck the second clause of evG) no
longer makes sense.

One can say that logically, from the point of view of _material
implication_, this is not a big deal. If a is Int, the GB clause of evG
can never be executed. So, there is no problem here. This argument
is akin to saying that in the code
let x = any garbage in 1
any garbage will never be evaluated, so we just let it be garbage.
People don't buy this argument. For the same reason, GHC refuses to type
check the following

evG1 :: G Int - Int
evG1 (GI x) = x
evG1 (GB x) = x


Thus, modular type checking of GADTs requires us to differentiate
between schematic variables (which are akin to `logical' variables,
free at one time and bound some time later) and quantified variables,
which GHC calls `rigid' variables, which can't become bound (within
the scope of the quantifier). For simplicity, GHC just banned
schematic variables.
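For contrast, the polymorphic evG above stands on its own: compiled in isolation, each clause refines the rigid variable a independently, and the function runs fine. A minimal runnable check:

```haskell
{-# LANGUAGE GADTs #-}

data G a where
  GI :: Int  -> G Int
  GB :: Bool -> G Bool

-- Each clause is checked under its own local equality (a ~ Int for
-- the first, a ~ Bool for the second); the rigid a never leaks out.
evG :: G a -> a
evG (GI x) = x
evG (GB x) = x

main :: IO ()
main = print (evG (GI 41) + 1, evG (GB False))  -- (42,False)
```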

The same story is now playing out in OCaml, except that banning the old
type variables was out of the question, to avoid breaking a lot of code.
GADTs forced the introduction of rigid variables, which are
syntactically distinguished from the old, schematic type variables. 
We thus have two type variables: schematic 'a and rigid a
(the latter unfortunately look exactly like type constants, but they
are bound by the `type' keyword). There are interesting and sometimes
confusing interactions between the two. OCaml 4 will be released any
hour now. It will be interesting to see how the interaction of the two
type variables plays out in practice.







Re: [Haskell-cafe] translation between two flavors of lexically-scoped type variables

2012-07-06 Thread oleg

Kangyuan Niu wrote:
 Aren't both Haskell and SML translatable into System F, from which
 type-lambda is directly taken?

The fact that both Haskell and SML are translatable to System F does
not imply that Haskell and SML are just as expressive as System
F. Although SML (and now OCaml) does have what looks like a
type-lambda, the occurrences of that type lambda are greatly
restricted. It may only come at the beginning of a polymorphic
definition (it cannot occur in an argument, for example).

 data Ap = forall a. Ap [a] ([a] -> Int)
 Why isn't it possible to write something like:

 fun 'a revap (Ap (xs : 'a list) f) = f ys
   where
 ys :: 'a list
 ys = reverse xs

 in SML?

This looks like a polymorphic function: an expression of the form
/\a.body has the type forall a. type. However, the Haskell function

 revap :: Ap -> Int
 revap (Ap (xs :: [a]) f) = f ys
   where
 ys :: [a]
 ys = reverse xs

you meant to emulate is not polymorphic. Both Ap and Int are concrete
types. Therefore, your SML code cannot correspond to the Haskell code.

That does not mean we can't use SML-style type variables (which must
be forall-bound) with existentials. We have to write the
elimination principle for existentials explicitly

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}

data Ap = forall a. Ap [a] ([a] -> Int)

-- Elimination principle
deconAp :: Ap -> (forall a. [a] -> ([a] -> Int) -> w) -> w
deconAp (Ap xs f) k = k xs f


revap :: Ap -> Int
revap ap = deconAp ap revap'

revap' :: forall a. [a] -> ([a] -> Int) -> Int
revap' xs f = f ys
  where
  ys :: [a]
  ys = reverse xs


Incidentally, GHC now uses an SML-like design for type
variables. However, there is a special exception for
existentials. Please see
7.8.7.4. Pattern type signatures
of the GHC user manual. The entire section 7.8.7 is worth reading.





[Haskell-cafe] Long-running request/response protocol server ...

2012-06-28 Thread oleg

Nicolas Trangez wrote

 The protocol I'd like to implement is different: it's long-running using
 repeated requests & responses on a single client connection. Basically,
 a client connects and sends some data to the server (where the length of
 this data is encoded in the header). Now the server reads & parses this
 (binary) data, sets up some initial state for this client connection
 (e.g. opening a file handle), and returns a reply. Now the client can
 send another request, server parses/interprets it using the connection
 state, sends a reply, and so on.''

That is very simple to implement in any Iteratee library; I will use
IterateeM for concreteness. The desired functionality is already
implemented, in decoding of chunk-decoded inputs. Your protocol is
almost the same: read a chunk of data (tagged with its size), and do
something about it. After the chunk is handled, read another
chunk. The iteratee library takes care of errors. In particular, if
the request handler finished (normally or with errors) without reading
all of the chunk, the rest of the chunk is read nevertheless and
disregarded. Otherwise, we deadlock. 

The complete code with a simple test is included. The test reads three
requests, the middle of which causes the request handler to report an
error without reading the rest of the request.

module SeveralRequests where

import IterateeM
import Prelude hiding (head, drop, dropWhile, take, break, catch)
import Data.Char (isHexDigit, digitToInt, isSpace)
import Control.Exception
import Control.Monad.Trans


-- Tell the iteratee the stream is finished and write the result
-- as the reply to the client
-- If the iteratee harbors the error, write that too.
reply :: MonadIO m => Iteratee el m String -> Iteratee el m ()
reply r = en_handle show (runI r) >>= check
 where
 check (Right x) = liftIO . putStrLn $ "REPLY: " ++ x
 check (Left  x) = liftIO . putStrLn $ "ERROR: " ++ x


-- Read several requests and get iter to handle them
-- Each request is formatted as a single chunk
-- The code is almost identical to IterateeM.enum_chunk_decoded
-- The only difference is in the internal function
-- read_chunk below.
-- After a chunk is handled, the inner iteratee is terminated
-- and we process the new chunk with a `fresh' iter.
-- If iter can throw async errors, we have to wrap it 
-- accordingly to convert async errors into Iteratee errors.
-- That is trivial.
reply_chunk_decoded :: MonadIO m => Enumeratee Char Char m String
reply_chunk_decoded iter = read_size
 where
 read_size = break (== '\r') >>= checkCRLF iter . check_size
 checkCRLF iter m = do
   n <- heads "\r\n"
   if n == 2 then m else frame_err (exc "Bad Chunk: no CRLF") iter
 check_size "0" = checkCRLF iter (return iter)
 check_size str@(_:_) =
     maybe (frame_err (exc ("Bad chunk size: " ++ str)) iter) read_chunk $
     read_hex 0 str
 check_size _ = frame_err (exc "Error reading chunk size") iter

 read_chunk size = take size iter >>= \r -> checkCRLF r $
    reply r >> reply_chunk_decoded iter

 read_hex acc "" = Just acc
 read_hex acc (d:rest) | isHexDigit d = read_hex (16*acc + digitToInt d) rest
 read_hex acc _ = Nothing

 exc msg = toException (ErrorCall $ "Chunk decoding exc: " ++ msg)
 -- If the processing is restarted, we report the frame error to the inner
 -- Iteratee, and exit
 frame_err e iter = throwRecoverableErr (exc "Frame error")
    (\s -> enum_err e iter >>= \i -> return (return i,s))


-- Test
-- A simple request_handler iter for handling requests
-- If the input starts with 'abc' it reads and returns the rest
-- Otherwise, it throws an error, without reading the rest of the input.
request_handler :: Monad m => Iteratee Char m String
request_handler = do
  n <- heads "abc"
  if n == 3 then stream2list
     else throwErrStr "expected abc"

test_request_handler :: IO ()
test_request_handler = run =<< enum_pure_1chunk input
  (reply_chunk_decoded request_handler >> return ())
 where
 input =
   -- first request
  "6"++crlf++
  "abcdef" ++ crlf++
   -- second request
  "8"++crlf++
  "xxxdefgh" ++ crlf++
   -- third request
  "5"++crlf++
  "abcde" ++ crlf++
  "0"++crlf++ crlf
 crlf = "\r\n"

{-
*SeveralRequests> test_request_handler
REPLY: def
ERROR: expected abc
REPLY: de
-}




[Haskell-cafe] Using promoted lists

2012-06-08 Thread oleg

Yves Pare`s wrote:
 So I'm trying to make a type level function to test if a type list contains
 a type. Unless I'm wrong, that calls to the use of a type family.

More crucially, list membership also calls for an equality
predicate. Recall that List.elem has an Eq constraint; so the type-level
membership test needs something similar: a type-level equality
predicate. As you have discovered, type equality is quite non-trivial.

All is not lost however. If we `register' all the types or type
constructors, we can indeed write a type-level membership function. In
fact,
http://okmij.org/ftp/Haskell/TTypeable/ShowListNO.hs

does exactly that. It uses the type-level (and higher-order) 
function Member to test if a given type is one of the chosen types. If it
is, a special instance is selected. Please see

http://okmij.org/ftp/Haskell/typeEQ.html#TTypeable
http://okmij.org/ftp/Haskell/typeEQ.html#without-over

for explanations.
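For the record, closed type families (added in GHC 7.8, after this message was written) offer an alternative to the registration approach: the family's top-to-bottom matching decides type equality for free. A sketch, with made-up check functions that compile only if Member computes as expected:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

-- Closed type family: clauses are tried top to bottom, and the
-- nonlinear second clause matches only when the head type equals x.
type family Member x (xs :: [*]) :: Bool where
  Member x '[]      = 'False
  Member x (x ': t) = 'True
  Member x (y ': t) = Member x t

-- These typecheck only if the memberships reduce as claimed.
foundInt :: Member Int '[Bool, Int] ~ 'True => String
foundInt = "Int is a member"

noChar :: Member Char '[Bool, Int] ~ 'False => String
noChar = "Char is not"

main :: IO ()
main = putStrLn (foundInt ++ "; " ++ noChar)
```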






Re: [Haskell-cafe] [Q] multiparam class undecidable types

2012-05-10 Thread oleg

 i think what i will do is to instantiate all table types individually:
 | instance Show c => Show (SimpleTable c) where
 |   showsPrec p t = showParen (p > 10) $ showString "FastTable " .
 | shows (toLists t)

I was going to propose this solution, as well as to define
newtype SlowType a = SlowType [[a]]
for the ordinary table. That would avoid the conflict with the Show [a]
instance. It is also a good idea to differentiate [[a]] intended to be a
table from just any list of lists. (Presumably the table has rows of
the same size.)

Enclosed is a bit spiffed up variation of that idea.

{-# LANGUAGE TypeFamilies, FlexibleInstances, FlexibleContexts,
 UndecidableInstances #-}

class Table t where
   data TName t :: *
   type TCell t :: *
   toLists   :: TName t -> [[TCell t]]
   fromLists :: [[TCell t]] -> TName t

instance Table [[a]] where
newtype TName [[a]] = SlowTable [[a]]
type TCell [[a]] = a
toLists  (SlowTable x) = x
fromLists = SlowTable

data FastMapRep a -- = ...

instance Table (FastMapRep a) where
newtype TName (FastMapRep a) = FastTable [[a]]
type TCell (FastMapRep a) = a
toLists   = undefined
fromLists = undefined

instance Table Int where
newtype TName Int = FastBoolTable Int
type TCell Int = Bool
toLists   = undefined
fromLists = undefined

instance (Table t, Show (TCell t)) => Show (TName t) where
  showsPrec p t = showParen (p > 10) $
  showString "fromLists " . shows (toLists t)


t1 :: TName [[Int]]
t1 = fromLists [[1..10],[2..20]]
-- fromLists [[1,2,3,4,5,6,7,8,9,10],
-- [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]]





Re: [Haskell-cafe] [Q] multiparam class undecidable types

2012-05-09 Thread oleg

 | instance (Table a c, Show c) => Show a where
 I would have thought that there is on overlap: the instance in my code
 above defines how to show a table if the cell is showable;

No, the instance defines how to show values of any type; that type
must be an instance of Table. There is no `if' here: instances are
selected regardless of the context such as (Table a c, Show c)
above. The constraints in the context apply after the selection, not
during.

Please see 
``Choosing a type-class instance based on the context''
http://okmij.org/ftp/Haskell/TypeClass.html#class-based-overloading

for the explanation and the solution to essentially the same problem.

There are other problems with the instance:

| instance (Table a c, Show c) => Show a where

For example, there is no information for the type checker to determine
the type c. Presumably there could be instances
Table [[Int]] Int
Table [[Int]] Bool
So, when the a in (Table a c) is instantiated to [[Int]] 
there could be two possibilities for c: Int and Bool. You can argue
that the type of the table uniquely determines the type of its
cells. You should tell the type checker that, using functional
dependencies or associated types:

class Table table where
  type Cell table :: *
  toLists :: table -> [[Cell table]]
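A runnable instance of that sketch (treating [[Int]] itself as the table type is a made-up example, just to show the associated type at work):

```haskell
{-# LANGUAGE TypeFamilies, FlexibleInstances #-}

class Table table where
  type Cell table :: *
  toLists :: table -> [[Cell table]]

-- The associated type tells the checker that the cells of a [[Int]]
-- table are Ints -- there is no second class parameter to guess.
instance Table [[Int]] where
  type Cell [[Int]] = Int
  toLists = id

main :: IO ()
main = print (toLists ([[1, 2], [3, 4]] :: [[Int]]))
```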






Re: [Haskell-cafe] Un-memoization

2012-05-09 Thread oleg

Victor Miller wrote:
 I was writing a Haskell program which builds a large labeled binary tree
 and then does some processing of it, which is fold-like.  In the actual
 application that I have in mind the tree will be *huge*.  If the whole tree
 is kept in memory it would probably take up 100's of gigabytes.  Because of
 the pattern of processing the tree, it occurred to me that it might be
 better (cause much less paging) if some large subtrees could be replaced by
 thunks which can either recalculate the subtree as needed, or write out the
 subtree, get rid of the references to it (so it can be garbage collected)
 and then read back in (perhaps in pieces) as needed.  This could be fairly
 cleanly expressed monadically.  So does anyone know if someone has created
 something like this?

Yes. I had faced essentially the same problem. It is indeed tricky to
prevent memoization: GHC is very good at it. The following article
explains the solution:

``Preventing memoization in (AI) search problems''
http://okmij.org/ftp/Haskell/index.html#memo-off


 



