[Haskell] CFP for Certified Programs and Proofs (CPP 2020)

2019-09-06 Thread Dominique DEVRIESE
## CFP for Certified Programs and Proofs (CPP 2020) ##

Certified Programs and Proofs (CPP) is an international conference on
practical and theoretical topics in all areas that consider
certification as an essential paradigm for their work. Certification
here means formal, mechanized verification of some sort, preferably
with the production of independently checkable certificates.
CPP spans areas of computer science, mathematics, logic, and education.

CPP 2020 will be held on 20-21 January 2020 in New Orleans, Louisiana,
United States and will be co-located with POPL 2020. CPP 2020 is
sponsored by ACM SIGPLAN, in cooperation with ACM SIGLOG.

For more information about this edition and the CPP series please visit:
https://popl20.sigplan.org/home/CPP-2020

### News

- Submission guideline news: **lightweight double-blind reviewing process**
  and **unrestricted appendices** that don't count against the page limit

- Delighted to announce that the **invited speakers** for CPP 2020 will be:
  Adam Chlipala (MIT CSAIL) and Grigore Rosu (UIUC and Runtime Verification)

- CPP 2020 will also host the **POPLmark 15 Year Retrospective Panel**

### Important Dates

   - Abstract Deadline: 16 October 2019 at 23:59 AoE (UTC-12h)
   - Paper Submission Deadline: 21 October 2019 at 23:59 AoE (UTC-12h)
   - Notification: 27 November 2019
   - Camera Ready Deadline: 20 December 2019
   - Conference: 20 - 21 January 2020

Deadlines expire at the end of the day, anywhere on earth. Abstract
and submission deadlines are tight and there will be **no extensions**.

### Topics of Interest

We welcome submissions in research areas related to formal
certification of programs and proofs. The following is a
non-exhaustive list of topics of interests to CPP:

   - certified or certifying programming, compilation, linking, OS
 kernels, runtime systems, and security monitors;
   - certified mathematical libraries and mathematical theorems;
   - proof assistants (e.g., ACL2, Agda, Coq, Dafny, F*, HOL,
 HOL-Light, Idris, Isabelle, Lean, Mizar, Nuprl, and PVS);
   - new languages and tools for certified programming;
   - program analysis, program verification, and program synthesis;
   - program logics, type systems, and semantics for certified code;
   - logics for certifying concurrent and distributed systems;
   - mechanized metatheory, formalized programming language semantics,
 and logical frameworks;
   - higher-order logics, dependent type theory, proof theory,
 logical systems, separation logics, and logics for security;
   - verification of correctness and security properties;
   - formally verified blockchains and smart contracts;
   - certificates for decision procedures, including linear algebra,
 polynomial systems, SAT, SMT, and unification in algebras of interest;
   - certificates for semi-decision procedures, including equality,
 first-order logic, and higher-order unification;
   - certificates for program termination;
   - formal models of computation;
   - mechanized (un)decidability and computational complexity proofs;
   - user interfaces for proof assistants and theorem provers;
   - original formal proofs of known results in math or computer science;
   - teaching mathematics and computer science with proof assistants.

### Program Committee Members

   - Jasmin Christian Blanchette (VU Amsterdam, Netherlands -- co-chair)
   - Catalin Hritcu (Inria Paris, France -- co-chair)
   - Nada Amin (Harvard University - USA)
   - Jesús María Aransay Azofra (Universidad de La Rioja - Spain)
   - Mauricio Ayala-Rincon (Universidade de Brasilia - Brazil)
   - Liron Cohen (Cornell University - USA)
   - Dominique Devriese (Vrije Universiteit Brussel - Belgium)
   - Jean-Christophe Filliâtre (CNRS - France)
   - Adam Grabowski (University of Bialystok - Poland)
   - Warren Hunt (University of Texas - USA)
   - Ori Lahav (Tel Aviv University - Israel)
   - Peter Lammich (The University of Manchester - UK)
   - Dominique Larchey-Wendling (Univ. de Lorraine, CNRS, LORIA - France)
   - Hongjin Liang (Nanjing University - China)
   - Assia Mahboubi (Inria and VU Amsterdam - France)
   - Cesar Munoz (NASA - USA)
   - Vivek Nigam (fortiss GmbH - Germany)
   - Benjamin Pierce (University of Pennsylvania - USA)
   - Vincent Rahli (University of Luxembourg, SnT - Luxembourg)
   - Christine Rizkallah (UNSW Sydney - Australia)
   - Ilya Sergey (Yale-NUS College and National University of Singapore)
   - Kathrin Stark (Saarland University - Germany)
   - Nikhil Swamy (Microsoft Research - USA)
   - Nicolas Tabareau (Inria - France)
   - Dmitriy Traytel (ETH Zürich - Switzerland)
   - Floris van Doorn (University of Pittsburgh - USA)
   - Akihisa Yamada (National Institute of Informatics - Japan)
   - Roberto Zunino (University of Trento - Italy)

### Submission Guidelines

Prior to the paper submission deadline, the authors should upload their
**anonymized** paper in PDF format through the HotCRP system at
https

[Haskell] Call for Short Talks on Secure Compilation (PriSC Workshop @ POPL'18)

2017-12-12 Thread Dominique Devriese
===
Call for Short Talks on Secure Compilation (PriSC Workshop @ POPL'18)
===

Do not miss the chance to submit short talks on your cutting-edge
secure compilation research. Submission deadline is 14 December 2017.
More information below.


==
Important Dates
==

  Short talk submission deadline:  14 December 2017, AoE
  Short talk notification:         18 December 2017
  PriSC Workshop takes place:      13 January 2018


==
Scope of PriSC Short Talks Session
==

In the short talks session of PriSC, participants get 5 minutes to
present intriguing ideas, advertise ongoing work, etc.  Anyone
interested in giving a short 5-minute talk should submit an
abstract. Any topic that could be of interest to the emerging secure
compilation community is in scope. Presentations that provide a useful
outside view or challenge the community are also welcome.

Topics of interest include but are **not** limited to:

- attacker models for secure compiler chains

- secure compilation properties: full abstraction, memory safety,
control-flow integrity, preserving non-interference or
(hyper-)properties against adversarial contexts,
secure multi-language interoperability

- enforcement mechanisms: static checking, program verification,
reference monitoring, program rewriting, software fault isolation,
system-level protection, secure hardware, crypto, randomization

- experimental evaluation and applications of secure compilation

- proof methods: (bi)simulation, logical relations, game semantics,
multi-language semantics, embedded interpreters

- formal verification of secure compilation chain (protection
mechanisms, compilers, linkers, loaders), machine-checked proofs,
translation validation, property-based testing


==
Guidelines for Submitting Short Talk Abstracts
==

Abstracts should be submitted in text format and are not anonymous.

Giving a talk at the workshop does not preclude publication elsewhere.

Please submit your abstracts at https://prisc18short.hotcrp.com

For questions about the short talks please contact the Program Chair.


==
2nd Workshop on Principles of Secure Compilation (PriSC 2018)
==

The Workshop on Principles of Secure Compilation (PriSC) is a new
informal 1-day workshop without any proceedings. The goal is to
identify interesting research directions and open challenges and to
bring together researchers interested in secure compilation.

The 2nd PriSC edition will be held on Saturday, 13 January 2018, in
Los Angeles, together with the ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL).

More information including the workshop program available at
http://popl18.sigplan.org/track/prisc-2018


==
Participation and Registration
==

PriSC will be held at the POPL'18 venue (Omni Hotel LA).
To participate, please register through the POPL registration system:
https://popl18.sigplan.org/attending/Registration


==
Program Committee
==

Program Chair
  Catalin Hritcu   Inria Paris

Members
  Amal Ahmed   Inria Paris and Northeastern University
  Lars Birkedal    Aarhus University
  Dominique Devriese   KU Leuven
  Cédric Fournet   Microsoft Research
  Deepak Garg  MPI-SWS
  Xavier Leroy Inria Paris
  David Naumann    Stevens Institute of Technology
  Marco Patrignani MPI-SWS
  Frank Piessens   KU Leuven
  Tamara Rezk  Inria Sophia Antipolis
  Nikhil Swamy Microsoft Research


==
Organizing Committee
==

  Amal Ahmed   Inria Paris and Northeastern University
  Dominique Devriese   KU Leuven
  Deepak Garg  MPI-SWS
  Catalin Hritcu   Inria Paris
  Marco Patrignani MPI-SWS
  Tamara Rezk  Inria Sophia Antipolis


==
Contact and More Information
==

More

[Haskell] Call for Participation for Secure Compilation Workshop (PriSC @ POPL'18)

2017-11-17 Thread Dominique Devriese
Quick updates (more details in the Call for Participation below):

- Do not miss the chance to submit short talks on your cutting-edge
research until 14 December 2017, AoE
https://popl18.sigplan.org/track/prisc-2018#Call-for-Short-Talks

- POPL/PriSC registration is open; early rate ends on 10 December 2017

- List of accepted PriSC presentations:
https://popl18.sigplan.org/track/prisc-2018#event-overview

- The PriSC invited talk will be given by Mathias Payer (Purdue University)
on Challenges For Compiler-backed Security: From Sanitizer to Mitigation


=
Call for Participation for Secure Compilation Workshop (PriSC @ POPL'18)
=

Secure compilation is an emerging field that puts together advances in
programming languages, security, verification, systems, compilers, and
hardware architectures in order to devise secure compiler chains that
eliminate many of today's low-level vulnerabilities. Secure
compilation aims to protect high-level language abstractions in
compiled code, even against adversarial low-level contexts, and to
allow sound reasoning about security in the source language. The
emerging secure compilation community aims to achieve this by:
identifying and formalizing properties that secure compilers must
possess; devising efficient enforcement mechanisms; and developing
effective verification and proof techniques.


===
2nd Workshop on Principles of Secure Compilation (PriSC 2018)
===

The Workshop on Principles of Secure Compilation (PriSC) is a new
informal 1-day workshop without any proceedings. The goal is to
identify interesting research directions and open challenges and to
bring together researchers interested in secure compilation.

The 2nd PriSC edition will be held on Saturday, 13 January 2018, in
Los Angeles, together with the ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL).

More information at http://popl18.sigplan.org/track/prisc-2018


==
Important Dates
==

  POPL early registration deadline:  10 December 2017
  Short talk submission deadline:  14 December 2017, AoE
  Short talk notification:         18 December 2017
  PriSC Workshop takes place:      13 January 2018

Do not miss the chance to submit short talks on your cutting-edge
research. More information below.



Invited Talk


Challenges For Compiler-backed Security: From Sanitizer to Mitigation.
  Mathias Payer (Purdue University, https://nebelwelt.net/)


===
Accepted Presentations
===

Building Secure SGX Enclaves using F*, C/C++ and X64.
  Anitha Gollamudi, Cédric Fournet.

Constant-time WebAssembly.  John Renner, Sunjay Cauligi, Deian Stefan.

Enforcing Well-bracketed Control Flow and Stack Encapsulation using
  Linear Capabilities.  Lau Skorstengaard, Dominique Devriese, Lars Birkedal

Formally Secure Compilation of Unsafe Low-Level Components.
  Guglielmo Fachini, Catalin Hritcu, Marco Stronati, Ana Nora Evans,
  Théo Laurent, Arthur Azevedo de Amorim, Benjamin C. Pierce, Andrew Tolmach

Linear capabilities for modular fully-abstract compilation of verified code.
  Thomas Van Strydonck, Dominique Devriese, Frank Piessens.

On Compositional Compiler Correctness and Fully Abstract Compilation.
  Daniel Patterson, Amal Ahmed.

Per-Thread Compositional Compilation for Confidentiality-Preserving
Concurrent Programs.  Robert Sison.

Robust Hyperproperty Preservation for Secure Compilation.  Deepak Garg,
  Catalin Hritcu, Marco Patrignani, Marco Stronati, David Swasey.

Secure Compilation in a Production Environment. Vijay D'Silva.

Type-Theoretic Galois Connections.
  Pierre-Evariste Dagand, Nicolas Tabareau, Éric Tanter


===
Participation and Registration
===

PriSC will be held on Saturday, 13 Jan 2018 at the POPL'18 venue
(Omni Hotel LA). To participate, please register through the POPL
registration system: https://popl18.sigplan.org/attending/Registration

POPL early registration rate ends on 10 December 2017.



Call for Short Talks


We also have a short talks session, where participants get 5 minutes
to present intriguing ideas, advertise ongoing work, etc.  Anyone
interested in giving a short 5-minute talk should submit an
abstract. Any topic that could be of interest to the emerging secure
compilation community is in scope. Presentations that provide a useful
outside view or challenge the community are also welcome.

Topics of interest include but are **not** limited to:

- attacker models for secure compiler chains

- secure compilation properties: full abstraction, memory safety,
control-flow integrity, preserving non-interference or
(hyper-)properties against adversarial contexts,
secure multi-language interoperability

- enforcement mechanisms: static checking, program verification

[Haskell] PriSC'18 deadline extended by 1 week

2017-10-20 Thread Dominique Devriese
===
Call for Presentations on Secure Compilation (PriSC Workshop @ POPL'18)
===

Secure compilation is an emerging field that puts together advances in
programming languages, security, verification, systems, compilers, and
hardware architectures in order to devise secure compiler chains that
eliminate many of today's low-level vulnerabilities. Secure
compilation aims to protect high-level language abstractions in
compiled code, even against adversarial low-level contexts, and to
allow sound reasoning about security in the source language. The
emerging secure compilation community aims to achieve this by:
identifying and formalizing properties that secure compilers must
possess; devising efficient enforcement mechanisms; and developing
effective verification and proof techniques.


==
2nd Workshop on Principles of Secure Compilation (PriSC 2018)
==

The Workshop on Principles of Secure Compilation (PriSC) is a new
informal 1-day workshop without any proceedings. The goal is to
identify interesting research directions and open challenges and to
bring together researchers interested in secure compilation.

The 2nd PriSC edition will be held on Saturday, 13 January 2018, in
Los Angeles, together with the ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL).

More information at http://popl18.sigplan.org/track/prisc-2018


==
Important Dates
==

  Presentation proposal submission deadline:  25 October 2017, AoE
  Presentation proposal notification: 15 November 2017
  PriSC Workshop takes place: 13 January 2018


==
Scope of the Workshop
==

Anyone interested in presenting at the workshop should submit an
extended abstract (up to 2 pages, details below). This can cover past,
ongoing, or future work. Any topic that could be of interest to the
emerging secure compilation community is in scope. Talks that provide
a useful outside view or challenge the community are also welcome.

Topics of interest include but are **not** limited to:

- attacker models for secure compiler chains

- secure compilation properties: full abstraction, memory safety,
control-flow integrity, preserving non-interference or
(hyper-)properties against adversarial contexts,
secure multi-language interoperability

- enforcement mechanisms: static checking, program verification,
reference monitoring, program rewriting, software fault isolation,
system-level protection, secure hardware, crypto, randomization

- experimental evaluation and applications of secure compilation

- proof methods: (bi)simulation, logical relations, game semantics,
multi-language semantics, embedded interpreters

- formal verification of secure compilation chain (protection
mechanisms, compilers, linkers, loaders), machine-checked proofs,
translation validation, property-based testing


==
Guidelines for Submitting Extended Abstracts
==

Extended abstracts should be submitted in PDF format and not exceed 2
pages. They should be formatted in two-column layout, 10pt font, and
be printable on A4 and US Letter sized paper. We recommend using the
new `acmart` LaTeX style in `sigplan` mode:
http://www.sigplan.org/sites/default/files/acmart/current/acmart-sigplanproc.zip

Submissions are not anonymous and should provide sufficient detail to
be assessed by the program committee. Presentation at the workshop
does not preclude publication elsewhere.

Please submit your extended abstracts at https://prisc18.hotcrp.com/


==
Short Talks Session
==

We will also run a short talks session, where participants get five
minutes to present intriguing ideas, advertise ongoing work, etc. You
can expect a call for short talks closer to the event.


==
Program Committee
==

Program Chair
  Catalin Hritcu   Inria Paris

Members
  Amal Ahmed   Inria Paris and Northeastern University
  Lars Birkedal    Aarhus University
  Dominique Devriese   KU Leuven
  Cédric Fournet   Microsoft Research
  Deepak Garg  MPI-SWS
  Xavier Leroy Inria Paris
  David Naumann    Stevens Institute of Technology
  Marco Patrignani MPI-SWS
  Frank Piessens   KU Leuven
  Tamara Rezk      Inria Sophia Antipolis
  Nikhil Swamy     Microsoft Research

[Haskell] Final call for Secure Compilation presentations (PriSC Workshop @ POPL'18)

2017-10-16 Thread Dominique Devriese
===
Call for Presentations on Secure Compilation (PriSC Workshop @ POPL'18)
===

Secure compilation is an emerging field that puts together advances in
programming languages, security, verification, systems, compilers, and
hardware architectures in order to devise secure compiler chains that
eliminate many of today's low-level vulnerabilities. Secure
compilation aims to protect high-level language abstractions in
compiled code, even against adversarial low-level contexts, and to
allow sound reasoning about security in the source language. The
emerging secure compilation community aims to achieve this by:
identifying and formalizing properties that secure compilers must
possess; devising efficient enforcement mechanisms; and developing
effective verification and proof techniques.


==
2nd Workshop on Principles of Secure Compilation (PriSC 2018)
==

The Workshop on Principles of Secure Compilation (PriSC) is a new
informal 1-day workshop without any proceedings. The goal is to
identify interesting research directions and open challenges and to
bring together researchers interested in secure compilation.

The 2nd PriSC edition will be held on Saturday, 13 January 2018, in
Los Angeles, together with the ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL).

More information at http://popl18.sigplan.org/track/prisc-2018


==
Important Dates
==

  Presentation proposal submission deadline:  18 October 2017, AoE
  Presentation proposal notification: 8 November 2017
  PriSC Workshop takes place: 13 January 2018


==
Scope of the Workshop
==

Anyone interested in presenting at the workshop should submit an
extended abstract (up to 2 pages, details below). This can cover past,
ongoing, or future work. Any topic that could be of interest to the
emerging secure compilation community is in scope. Talks that provide
a useful outside view or challenge the community are also welcome.

Topics of interest include but are **not** limited to:

- attacker models for secure compiler chains

- secure compilation properties: full abstraction, memory safety,
control-flow integrity, preserving non-interference or
(hyper-)properties against adversarial contexts,
secure multi-language interoperability

- enforcement mechanisms: static checking, program verification,
reference monitoring, program rewriting, software fault isolation,
system-level protection, secure hardware, crypto, randomization

- experimental evaluation and applications of secure compilation

- proof methods: (bi)simulation, logical relations, game semantics,
multi-language semantics, embedded interpreters

- formal verification of secure compilation chain (protection
mechanisms, compilers, linkers, loaders), machine-checked proofs,
translation validation, property-based testing


==
Guidelines for Submitting Extended Abstracts
==

Extended abstracts should be submitted in PDF format and not exceed 2
pages. They should be formatted in two-column layout, 10pt font, and
be printable on A4 and US Letter sized paper. We recommend using the
new `acmart` LaTeX style in `sigplan` mode:
http://www.sigplan.org/sites/default/files/acmart/current/acmart-sigplanproc.zip

Submissions are not anonymous and should provide sufficient detail to
be assessed by the program committee. Presentation at the workshop
does not preclude publication elsewhere.

Please submit your extended abstracts at https://prisc18.hotcrp.com/


==
Short Talks Session
==

We will also run a short talks session, where participants get five
minutes to present intriguing ideas, advertise ongoing work, etc. You
can expect a call for short talks closer to the event.


==
Program Committee
==

Program Chair
  Catalin Hritcu   Inria Paris

Members
  Amal Ahmed   Inria Paris and Northeastern University
  Lars Birkedal    Aarhus University
  Dominique Devriese   KU Leuven
  Cédric Fournet   Microsoft Research
  Deepak Garg  MPI-SWS
  Xavier Leroy Inria Paris
  David Naumann    Stevens Institute of Technology
  Marco Patrignani MPI-SWS
  Frank Piessens   KU Leuven
  Tamara Rezk      Inria Sophia Antipolis
  Nikhil Swamy     Microsoft Research

[Haskell] Call for Presentations on Secure Compilation (PriSC Workshop @ POPL'18)

2017-09-15 Thread Dominique Devriese
=
Call for Presentations on Secure Compilation (PriSC Workshop @ POPL'18)
=

Secure compilation is an emerging field that puts together advances in
programming languages, security, verification, systems, compilers, and
hardware architectures in order to devise secure compiler chains that
eliminate many of today's low-level vulnerabilities. Secure
compilation aims to protect high-level language abstractions in
compiled code, even against adversarial low-level contexts, and to
allow sound reasoning about security in the source language. The
emerging secure compilation community aims to achieve this by:
identifying and formalizing properties that secure compilers must
possess; devising efficient enforcement mechanisms; and developing
effective verification and proof techniques.


=
2nd Workshop on Principles of Secure Compilation (PriSC 2018)
=

The Workshop on Principles of Secure Compilation (PriSC) is a new
informal 1-day workshop without any proceedings. The goal is to
identify interesting research directions and open challenges and to
bring together researchers interested in secure compilation.

The 2nd PriSC edition will be held on Saturday, 13 January 2018, in
Los Angeles, together with the ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages (POPL).

More information at http://popl18.sigplan.org/track/prisc-2018


=
Important Dates
=

  Presentation proposal submission deadline:  18 October 2017, AoE
  Presentation proposal notification: 8 November 2017
  PriSC Workshop takes place: 13 January 2018


=
Scope of the Workshop
=

Anyone interested in presenting at the workshop should submit an
extended abstract (up to 2 pages, details below). This can cover past,
ongoing, or future work. Any topic that could be of interest to the
emerging secure compilation community is in scope. Talks that provide
a useful outside view or challenge the community are also welcome.

Topics of interest include but are **not** limited to:

- attacker models for secure compiler chains

- secure compilation properties: full abstraction, memory safety,
control-flow integrity, preserving non-interference or
(hyper-)properties against adversarial contexts,
secure multi-language interoperability

- enforcement mechanisms: static checking, program verification,
reference monitoring, program rewriting, software fault isolation,
system-level protection, secure hardware, crypto, randomization

- experimental evaluation and applications of secure compilation

- proof methods: (bi)simulation, logical relations, game semantics,
multi-language semantics, embedded interpreters

- formal verification of secure compilation chain (protection
mechanisms, compilers, linkers, loaders), machine-checked proofs,
translation validation, property-based testing


=
Guidelines for Submitting Extended Abstracts
=

Extended abstracts should be submitted in PDF format and not exceed 2
pages. They should be formatted in two-column layout, 10pt font, and
be printable on A4 and US Letter sized paper. We recommend using the
new `acmart` LaTeX style in `sigplan` mode:
http://www.sigplan.org/sites/default/files/acmart/current/acmart-sigplanproc.zip

Submissions are not anonymous and should provide sufficient detail to
be assessed by the program committee. Presentation at the workshop
does not preclude publication elsewhere.

Please submit your extended abstracts at https://prisc18.hotcrp.com/


=
Short Talks Session
=

We will also run a short talks session, where participants get five
minutes to present intriguing ideas, advertise ongoing work, etc. You
can expect a call for short talks closer to the event.


=
Program Committee
=

Program Chair
  Catalin Hritcu   Inria Paris

Members
  Amal Ahmed   Inria Paris and Northeastern University
  Lars Birkedal    Aarhus University
  Dominique Devriese   KU Leuven
  Cédric Fournet   Microsoft Research
  Deepak Garg  MPI-SWS
  Xavier Leroy Inria Paris
  David Naumann    Stevens Institute of Technology
  Marco Patrignani MPI-SWS
  Frank Piessens   KU Leuven
  Tamara Rezk      Inria Sophia Antipolis
  Nikhil Swamy     Microsoft Research

Re: Are there GHC extensions we'd like to incorporate wholesale?

2016-05-04 Thread Dominique Devriese
As an outsider, I would like to suggest thinking about MonoLocalBinds.  GHC
has a rather convincing story (at least to me) that "(local) let should not
be generalised" (since it becomes problematic in combination with several
other language extensions) and the journal version of the OutsideIn(X)
paper has empirical data that indicates it is not a big problem to remove.
If there is a concern about backwards compatibility, perhaps the committee
could deprecate local let generalisation in Haskell2020 and remove it in a
subsequent iteration of the report?
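For concreteness, the change in inference can be sketched as follows (a hypothetical example of my own, not taken from the thread):

```haskell
{-# LANGUAGE MonoLocalBinds #-}

-- Under MonoLocalBinds, a local binding that mentions a variable bound
-- by the enclosing function is not generalised.  Here 'h' mentions 'x',
-- so it gets a single monomorphic type, inferred from its one use site:
f :: Bool -> (Char, Bool)
f x = let h y = (y, x)      -- h :: Char -> (Char, Bool)
      in (fst (h 'a'), x)

-- With local let generalisation (the Haskell 2010 behaviour), the
-- variant below is accepted, because h :: forall a. a -> (a, Bool);
-- under MonoLocalBinds it is rejected, since the ungeneralised h
-- cannot be used at both Char and Bool:
-- g :: Bool -> (Char, Bool)
-- g x = let h y = (y, x)
--       in (fst (h 'a'), snd (h True))
```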

Regards,
Dominique

On Wed, 4 May 2016 at 07:00, M Farkas-Dyck wrote:

> On 02/05/2016, Cale Gibbard  wrote:
> > This question implicitly has two parts:
> >
> > 1) Are there GHC extensions which the Report ought to describe in their
> > entirety? ...
> >
> > 2) Are there extensions which ought to stop being extensions? ...
>
> I agree here, except as noted in my earlier mail in this thread.
>
> An extension i might like to no longer be an extension is
> NoMonomorphismRestriction, which i haven't yet seen brought up in this
> thread. I recognize it has some rationale for it, but i surely want
> this committee to at least re-evaluate its merits.
> ___
> Haskell-prime mailing list
> Haskell-prime@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime
>
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime


Re: Proposal: ValidateMonoLiterals - Initial bikeshed discussion

2015-02-06 Thread Dominique Devriese
Merijn,

Perhaps only for the sake of discussion: have you considered doing
something at the type level instead of using TH? I mean that you could
change the type of 42 from `forall a. Num a => a` to `forall a.
HasIntegerLiteral a '42 => a`, where HasIntegerLiteral is a type class of
kind `* -> 'Integer -> Constraint` and people can instantiate it for
their types:

class HasIntegerLiteral (a :: *) (k :: 'Integer) where
  literal :: a

The desugarer could then just generate an invocation of literal.

An advantage would be that you don't need TH (although you do need
DataKinds and type-level computation).  Specifically, type-checking
remains decidable and you can do it in safe haskell and so on.  I
haven't thought this through very far, so there may be other
advantages/disadvantages/glaring-holes-in-the-idea that I'm missing.

Regards,
Dominique

2015-02-06 11:07 GMT+01:00 Merijn Verstraaten mer...@inconsistent.nl:
 And none of my proofreaders noticed that.

 I would propose to have the extension replace the 'fromString foo', 
 'fromIntegral 5' and 'fromList [1,2,3]' calls (for monomorphic cases) in the 
 AST with the relevant Typed TH splice.

 I considered quasi-quotation initially too, but there's no quasi quotation 
 syntax for Typed TH. I'm guessing that's just an oversight, but I'd really be 
 in favour of adding a typed quasiquoter too. Similarly, I think we should 
 have an easier way to obtain Lift instances since, to me at least, it seems 
 that the Lift instance for most ADTs should be fairly trivial?

 I'll quickly clarify the proposal on the wiki :)

 Cheers,
 Merijn

 On 5 Feb 2015, at 22:48, Simon Peyton Jones simo...@microsoft.com wrote:

 I'm all for it.  Syntax sounds like the main difficulty.  Today you could 
 use quasiquotation
   [even| 38 |]
 and get the same effect as $$(validate 38).  But it's still noisy.

 So: what is the non-noisy scheme you want to propose?  You don't quite get 
 to that in the wiki page!

 Simon

 | -Original Message-
 | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Merijn
 | Verstraaten
 | Sent: 05 February 2015 14:46
 | To: ghc-d...@haskell.org; GHC users
 | Subject: Proposal: ValidateMonoLiterals - Initial bikeshed discussion
 |
 | I've been repeatedly running into problems with overloaded literals and
 | partial conversion functions, so I wrote up an initial proposal
 | (https://ghc.haskell.org/trac/ghc/wiki/ValidateMonoLiterals) and I'd like
 | to commence with the bikeshedding and hearing other opinions :)
 |
 | Cheers,
 | Merijn


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Proposal: ValidateMonoLiterals - Initial bikeshed discussion

2015-02-06 Thread Dominique Devriese
Hi Merijn,

2015-02-06 13:45 GMT+01:00 Merijn Verstraaten mer...@inconsistent.nl:
 I don't see how that would replace the usecase I describe?

I've written out the Even use case a bit, to hopefully clarify my
suggestion.  The code is a bit cumbersome and inefficient because I
can't use GHC type-lits because some type-level primitives seem to be
missing (modulo specifically).  Type-level Integers (i.e. a kind with
*negative* numbers and literals) would probably also be required for
an actual solution.

{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses, DataKinds,
             KindSignatures, ExplicitForAll, PolyKinds, ScopedTypeVariables,
             ConstraintKinds, TypeFamilies, GADTs, FlexibleContexts #-}

module ValidateMonoLiterals where


data Nat where
  Zero :: Nat
  Suc :: Nat -> Nat

class KnownNat (n :: Nat) where
  natSing :: forall proxy. proxy n -> Integer

instance KnownNat Zero where natSing _ = 0
instance KnownNat k => KnownNat (Suc k) where
  natSing _ = natSing (Proxy :: Proxy k) + 1

data Proxy (t :: k) = Proxy

class HasNatLiteral a (k :: Nat) where
  literal :: Proxy k -> a

data Even = Even Integer

class CheckEven (k :: Nat) where
instance CheckEven Zero
instance CheckEven k => CheckEven (Suc (Suc k)) where

instance (KnownNat k, CheckEven k) => HasNatLiteral Even (k :: Nat) where
  literal _ = Even (fromInteger (natSing (Proxy :: Proxy k)))

instance (KnownNat k) => HasNatLiteral Integer k where
  literal _ = natSing (Proxy :: Proxy k)

four :: HasNatLiteral n (Suc (Suc (Suc (Suc Zero)))) => n
four = literal (Proxy :: Proxy (Suc (Suc (Suc (Suc Zero)))))

three :: HasNatLiteral n (Suc (Suc (Suc Zero))) => n
three = literal (Proxy :: Proxy (Suc (Suc (Suc Zero))))

fourI :: Integer
fourI = four

fourEI :: Even
fourEI = four

-- fails with "No instance for CheckEven (Suc Zero)"
-- threeEI :: Even
-- threeEI = three

 I'll give you a more concrete example from a library I'm working on. I'm 
 working on a Haskell implementation of ZeroMQ, the ZMTP protocol lets sockets 
 be named by a binary identifier with length <= 255 and NOT starting with a 
 NUL byte. As a programmer using this library I would have to write these 
 socket identifiers in my source code. Now I have four options:

 1) The library just doesn't validate identifiers to be compatible with the 
 protocol (awful!)

 2) My library produces a runtime error on every single invocation of the 
 program (if it doesn't satisfy the constraints it will never successfully 
 work)

 3) I require a newtype'd input type with a smart constructor, which means the 
 programmer still has to handle the error case even though it should never 
 happen for literals written in the source.

 4) Using a trick like what I described, the above newtype and smart 
 constructor, and check at compile time that it is correct.

Well, I think my suggestion could be used as another alternative. As I
mentioned, the compiler could translate the literal 42 to an
appropriately typed invocation of HasNatLiteral.literal, so that you
could also just write 42 but get the additional compile-time checking.

 To be honest, I don't even see how your example would generalise to the 
 rather trivial example using Even? For example, suppose we have foo :: Even 
 -> SomeData, how would I write foo 2 using your idea in a way that, at 
 compile time, checks that I'm not passing an invalid literal to foo?

See above: the type of foo doesn't change w.r.t. your approach.

 As a further aside, your "type checking remains decidable" comment seems to 
 imply that you think that type checking becomes undecidable with what I 
 propose? Can you explain how that could be, considering that it already works 
 in GHC, albeit in a very cumbersome way?

What I mean is that meta-programs invoked through TH can always fail
to terminate
(even though the ones you are using in your example are terminating).
Consider what happens if you change the definition of your validate to
this (or someone else implements your validateInteger like this for a
type):
  validate :: forall a . Validate a => Integer -> Q (TExp a)
  validate i = validate (i+1)

Regards,
Dominique
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Proposal: ValidateMonoLiterals - Initial bikeshed discussion

2015-02-06 Thread Dominique Devriese
2015-02-06 14:20 GMT+01:00 Adam Gundry a...@well-typed.com:
 It seems to me that what you would describe would work, and the
 avoidance of TH is a merit, but the downside is the complexity of
 implementing even relatively simple validation at the type level (as
 opposed to just reusing a term-level function). For Merijn's Even
 example I guess one could do something like this in current GHC:

 type family IsEven (n :: Nat) :: Bool where
   IsEven 0 = True
   IsEven 1 = False
   IsEven n = IsEven (n - 2)

 instance (KnownNat n, IsEven n ~ True)
 => HasIntegerLiteral Even n where
   literal = Even (natVal (Proxy :: Proxy n))

 but anything interesting to do with strings (e.g. checking that
 ByteStrings are ASCII) is rather out of reach at present.

Agreed.  For the idea to scale, good support for type-level
programming with Integers/Strings/... is essential.  Something else
that would be useful is an unsatisfiable primitive constraint
constructor `UnsatisfiableConstraint :: String -> Constraint` that can
be used to generate custom error messages. Then one could write
something like

  type family MustBeTrue (t :: Bool) (error :: String) :: Constraint
  type family MustBeTrue True _ = ()
  type family MustBeTrue False error = UnsatisfiableConstraint error

  type family MustBeEven (n :: Nat) :: Constraint
  type family MustBeEven n =
    MustBeTrue (IsEven n) ("Error in Even literal: '" ++ show n ++ "' is not even!")

  instance (KnownNat n, MustBeEven n) => HasIntegerLiteral Even n where ...
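(Editorial note: GHC later gained essentially the facility wished for here, as
GHC.TypeLits.TypeError in GHC 8.0. A sketch of the Even check using it; the
names IsEven/MustBeEven follow the thread, the rest is our own illustration,
not the poster's code:)

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, ConstraintKinds,
    UndecidableInstances, ScopedTypeVariables, TypeApplications,
    AllowAmbiguousTypes #-}

import Data.Kind (Constraint)
import Data.Proxy (Proxy (..))
import GHC.TypeLits

type family IsEven (n :: Nat) :: Bool where
  IsEven 0 = 'True
  IsEven 1 = 'False
  IsEven n = IsEven (n - 2)

-- TypeError plays the role of the UnsatisfiableConstraint proposed above:
type family MustBeEven (n :: Nat) :: Constraint where
  MustBeEven n = MustBeTrue (IsEven n) n

type family MustBeTrue (b :: Bool) (n :: Nat) :: Constraint where
  MustBeTrue 'True  n = ()
  MustBeTrue 'False n =
    TypeError ('Text "Even literal " ':<>: 'ShowType n ':<>: 'Text " is not even!")

newtype Even = Even Integer deriving Show

evenLit :: forall n. (KnownNat n, MustBeEven n) => Even
evenLit = Even (natVal (Proxy @n))

main :: IO ()
main = print (evenLit @4)   -- accepted; evenLit @3 is a custom compile-time error
```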

Regards,
Dominique
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Desugaring do-notation to Applicative

2013-10-02 Thread Dominique Devriese
Perhaps an alternative for this could be extending McBride's idiom brackets:
  https://personal.cis.strath.ac.uk/conor.mcbride/pub/she/idiom.html
with a form of top-level let, something like:

(| let x = expr1
       y = expr2
       z = expr3
   in x*y + y*z + z*x |)
=
pure (\x y z -> x*y + y*z + z*x) <*> expr1 <*> expr2 <*> expr3

This seems like it would nicely commute with the desugaring of
let x = e1 in e2 into (\x -> e2) e1:

(| let x = expr1
       y = expr2
       z = expr3
   in x*y + y*z + z*x |)
=
(| (\x y z -> x*y + y*z + z*x) expr1 expr2 expr3 |)
=
pure (\x y z -> x*y + y*z + z*x) <*> expr1 <*> expr2 <*> expr3
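(Editorial note: the expanded right-hand side is ordinary Haskell. A minimal
check of the expansion, with the placeholder names expr1..expr3 instantiated
at Maybe:)

```haskell
-- The desugared form of the bracket, instantiated at Maybe.
result :: Maybe Integer
result = pure (\x y z -> x*y + y*z + z*x) <*> Just 2 <*> Just 3 <*> Just 5

main :: IO ()
main = print result   -- Just 31, i.e. 2*3 + 3*5 + 5*2
```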

Regards,
Dominique

2013/10/2 Dan Doel dan.d...@gmail.com:
 Unfortunately, in some cases, function application is just worse. For
 instance, when the result is a complex arithmetic expression:

 do x <- expr1; y <- expr2; z <- expr3; return $ x*y + y*z + z*x

 In cases like this, you have pretty much no choice but to name intermediate
 variables, because the alternative is incomprehensible. But applicative
 notation:

 (\x y z -> x*y + y*z + z*x) <$> expr1 <*> expr2 <*> expr3

 moves the variable bindings away from the expressions they're bound to, and
 we require extra parentheses to delimit things, and possibly more.

 Desugaring the above do into applicative is comparable to use of plain let
 in scheme (monad do is let*, mdo was letrec). And sometimes, let is nice,
 even if it has an equivalent lambda form.

 And as Jake mentioned, syntax isn't the only reason for Applicative.
 Otherwise it'd just be some alternate names for functions involving Monad.



 On Wed, Oct 2, 2013 at 5:12 AM, p.k.f.holzensp...@utwente.nl wrote:

 I thought the whole point of Applicative (at least, reading Conor’s
 paper) was to restore some function-application-style to the whole
 effects-thing, i.e. it was the very point *not* to resort to binds or
 do-notation.



 That being said, I’m all for something that will promote the use of the
 name “pure” over “return”.



 +1 for the Opt-In



 Ph.







 From: Glasgow-haskell-users
 [mailto:glasgow-haskell-users-boun...@haskell.org] On Behalf Of Iavor
 Diatchki



 do x1 <- e1

    -- The following part is `Applicative`
    (x2,x3) <- do x2 <- e2 x1
                  x3 <- e3
                  pure (x2,x3)

    f x1 x2 x3
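(Editorial note: this proposal later shipped in GHC 8.0 as the ApplicativeDo
extension. A small sketch, with names of our own choosing: a do-block whose
statements are independent needs only an Applicative constraint.)

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- With ApplicativeDo, GHC desugars this block via <$>/<*>,
-- so `comb` requires only Applicative f, not Monad f.
comb :: Applicative f => f Integer -> f Integer -> f Integer -> f Integer
comb e1 e2 e3 = do
  x <- e1
  y <- e2
  z <- e3
  pure (x*y + y*z + z*x)

main :: IO ()
main = print (comb (Just 2) (Just 3) (Just 5))   -- Just 31
```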




Re: Overloaded record fields

2013-06-28 Thread Dominique Devriese
Simon,

I see your point.  Essentially, the original proposal keeps the
namespace for field names syntactically distinguishable from that of
functions, so that the type given to r.foo doesn't depend on what is
in scope.  (.foo) is always defined and it is always a function of
type (r { foo :: t }) => r -> t. With the orthogonal proposal, it
would only be defined if there is a record with a foo field in scope,
although its definition or type does not actually depend on the
record.   One would then need to define an Unused record with a field
foo, or declare the following
  foo :: r { foo :: t } => r -> t
  foo = getFld
to essentially declare that foo should be treated as a field selector
and I'm not even sure if type inference would work for this
definition... Maybe we could provide syntax like a declaration "field
foo;" as equivalent to the latter, but I have to acknowledge that this
is a downside for the orthogonal proposal.

Regards,
Dominique

2013/6/28 Simon Peyton-Jones simo...@microsoft.com:
 | Folks, I'm keenly aware that GSoC has a limited timespan; and that there
 | has already been much heat generated on the records debate.

 I am also keenly aware of this.  I think the plan Ant outlines below makes 
 sense; I'll work on it with Adam.

 I have, however, realised why I liked the dot idea.  Consider

 f r b = r.foo && b

 With dot-notation baked in (non-orthogonally), f would get the type

 f :: (r { foo :: Bool }) => r -> Bool -> Bool

 With the orthogonal proposal, f is equivalent to
 f r b = foo r && b

 Now it depends.

 * If there is at least one record in scope with a field foo
   and no other foo's, then you get the above type

 * If there are no records in scope with field foo
   and no other foo's, the program is rejected

 * If there are no records in scope with field foo
   but there is a function foo, then the usual thing happens.

 This raises the funny possibility that you might have to define a local type
 data Unused = U { foo :: Int }
 simply so that there *is* at least on foo field in scope.

 I wanted to jot this point down, but I think it's a lesser evil than falling 
 into the dot-notation swamp.  After all, it must be vanishingly rare to write 
 a function manipulating foo fields when there are no such records around. 
 It's just a point to note (NB Adam: design document).

 Simon

 | -Original Message-
 | From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
 | users-boun...@haskell.org] On Behalf Of AntC
 | Sent: 27 June 2013 13:37
 | To: glasgow-haskell-users@haskell.org
 | Subject: Re: Overloaded record fields
 |
 | 
 |  ... the orthogonality is also an important benefit.
 |   It could allow people like Edward and others who dislike ...
 |   to still use ...
 | 
 |
 | Folks, I'm keenly aware that GSoC has a limited timespan; and that there
 | has already been much heat generated on the records debate.
 |
 | Perhaps we could concentrate on giving Adam a 'plan of attack', and help
 | resolving any difficulties he runs into. I suggest:
 |
 | 1. We postpone trying to use postfix dot:
 |It's controversial.
 |The syntax looks weird whichever way you cut it.
 |It's sugar, whereas we'd rather get going on functionality.
 |(This does mean I'm suggesting 'parking' Adam's/Simon's syntax, too.)
 |
 | 2. Implement class Has with method getFld, as per Plan.
 |
 | 3. Implement the Record field constraints new syntax, per Plan.
 |
 | 4. Implicitly generate Has instances for record decls, per Plan.
 |Including generating for imported records,
 |even if they weren't declared with the extension.
 |(Option (2) on-the-fly.)
 |
 | 5. Implement Record update, per Plan.
 |
 | 6. Support an extension to suppress generating field selector functions.
 |This frees the namespace.
 |(This is -XNoMonoRecordFields in the Plan,
 | but Simon M said he didn't like the 'Mono' in that name.)
 |Then lenses could do stuff (via TH?) with the name.
 |
 |[Those who've followed so far, will notice that
 | I've not yet offered a way to select fields.
 | Except with explicit getFld method.
 | So this 'extension' is actually 'do nothing'.]
 |
 | 7. Implement -XPolyRecordFields, not quite per Plan.
 |This generates a poly-record field selector function:
 |
 |    x :: r { x :: t } => r -> t    -- Has r x t => ...
 |    x = getFld
 |
 | And means that H98 syntax still works:
 |
 |x e -- we must know e's type to pick which instance
 |
 | But note that it must generate only one definition
 | for the whole module, even if x is declared in multiple data types.
 | (Or in both a declared and an imported.)
 |
 | But not per the Plan:
 | Do _not_ export the generated field selector functions.
 | (If an importing module wants field selectors,
 |  it must set the extension, and generate them for imported data
 | types.
 |  Otherwise we risk name clash on the import.
 |  

Re: Overloaded record fields

2013-06-27 Thread Dominique Devriese
Simon,

Yes, your summary is exactly what I meant.

2013/6/26 Simon Peyton-Jones simo...@microsoft.com:
 In fact, your observation allows us to regard our proposal as consisting of 
 two entirely orthogonal parts
   * Generalise the type of record field selectors
   * Introduce period as reverse function application

As Anthony points out below, I think the orthogonality is also an
important benefit.  It could allow people like Edward and others who
dislike DotAsPostFixApply to still use OverloadedRecordFields.  I
expect just the OverloadedRecordFields extension would fit reasonably
well into the existing lens libraries somehow.

Regards
Dominique



Re: Repeated variables in type family instances

2013-06-26 Thread Dominique Devriese
Richard,

Thanks for your answers.

2013/6/24 Richard Eisenberg e...@cis.upenn.edu:

 The nub of the difference is that type families can pattern-match on kinds,
 whereas term-level functions cannot pattern-match on types. So, while the @a
 is repeated in the pattern as written above, GHC does not, when matching a
 pattern, check that these are the same. In fact, they have to be the same if
 the function argument is well-typed. On the other hand, type family
 equations *can* branch based solely on kind information, making the repeated
 variable semantically important.

Interesting,  I wasn't aware this was possible.  I agree that if it's
possible to branch solely on kind info, then they should be taken into
account for the linearity check. Do you perhaps have an example of how
to do this sort of branching?

Thanks,
Dominique



Re: Overloaded record fields

2013-06-26 Thread Dominique Devriese
I think it's a good idea to push forward on the records design because
it seems futile to hope for an ideal consensus proposal.

The only thing I dislike though is that dot notation is special-cased to
record projections.  I would prefer to have dot notation for a
general, very tightly-binding reverse application, and the type of the record
selector for a field f changed to forall r t. r { f :: t } => r -> t
instead of SomeRecordType -> t.  Such a general reverse application dot would
allow things like string.toUpper and for me personally, it would
make a Haskell OO library that I'm working on more elegant...

But I guess you've considered such a design and decided against it,
perhaps because of the stronger backward compatibility implications of
changing the selectors' types?

Dominique

2013/6/24 Adam Gundry adam.gun...@strath.ac.uk:
 Hi everyone,

 I am implementing an overloaded record fields extension for GHC as a
 GSoC project. Thanks to all those who gave their feedback on the
 original proposal! I've started to document the plan on the GHC wiki:

 http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan

 If you have any comments on the proposed changes, or anything is unclear
 about the design, I'd like to hear from you.

 Thanks,

 Adam Gundry



Re: Repeated variables in type family instances

2013-06-24 Thread Dominique Devriese
Richard,

I'm interested by your argument at the bottom of the wiki page about
the alternative (non-)solution of disallowing non-linear patterns
altogether.  I'll quote it for reference:

  One way we (Simon, Dimitrios, and Richard) considered proceeding was
to prohibit nonlinear unbranched
  instances entirely. Unfortunately, that doesn't work. Consider this:

  type family H (x :: [k]) :: *
  type instance H '[] = Bool

  Innocent enough, it seems. However, that instance expands to type
instance H k ('[] k) = Bool internally.
  And that expansion contains a repeated variable! Yuck. We Thought
Hard about this and came up with
  various proposals to fix it, but we weren't convinced that any of
them were any good. So, we concluded
  to allow nonlinear unbranched instances, but we linearize them when
checking overlap. This may surprise
  some users, but we will put in a helpful error message in this case.

So in summary, if you extend the pattern with explicit kind info, you
get linearity for patterns that are actually fine.  It's not clear to
me why you would in fact use the expanded form when checking
linearity.  Wouldn't you get the same problem if you try to check a
value-level pattern's linearity after including explicit type info. If
so, why is that a problem for type instance patterns if it isn't a
problem for value-level patterns?

For example, take the following value-level function
  null :: [a] -> Bool
  null [] = True
  null (_:_) = False
After including explicit System F-like type arguments, that would become
  null @a ([] @a) = True
  null @a ((:) @a _ _) = False
So we get the same problem of non-linearity of the expansion even
though the original is fine, right?

Perhaps we could consider adding an internal annotation like the @
above on the arguments inserted by the expansion of a type instance
pattern with kinding info (or sort info if that's more correct?). Then
one could ignore those arguments altogether during the linearity
check.

Note: I'm in favor of avoiding non-linear patterns for type families
if at all possible, in order to keep the type-level computational
model functional and close to the value-level one.  I would hope we
could forbid non-linear patterns for type classes as well at some point
in the future, although that could perhaps be more controversial still...

Thanks!
Dominique

P.S.: I hope you'll be writing a more detailed account of this work (a
research paper?) at some point and I look forward to reading it...

P.S.2: On an unrelated note: will you also do a completeness check on
closed type family definitions?

2013/5/29 Richard Eisenberg e...@cis.upenn.edu:
 (Sorry for the re-send – got the wrong subject line last time.)



 Hello all,



 It’s come to my attention that there is a tiny lurking potential hole in
 GHC’s type safety around type families in the presence of
 UndecidableInstances. Consider the following:



 type family F a b

 type instance F x x = Int

 type instance F [x] x = Bool



 type family G

 type instance G = [G]



 These declarations are all valid in GHC 7.6.3. However, depending on the
 reduction strategy used, it is possible to reduce `F G G` to either Int or
 Bool. I haven’t been able to get this problem to cause a seg-fault in
 practice, but I think that’s more due to type families’ strictness than
 anything more fundamental. Thus, we need to do something about it.



 I have written a proposal on the GHC wiki at
 http://hackage.haskell.org/trac/ghc/wiki/NewAxioms/Nonlinearity



 It proposes a change to the syntax for branched instances (a new form of
 type family instance that allows for ordered matching), but as those haven’t
 yet been officially released (only available in HEAD), I’m not too worried
 about it.



 There are two changes, though, that break code that compiles in released
 versions of GHC:

 ---

 1) Type family instances like the two for F, above, would no longer be
 accepted. In particular, the overlap check for unbranched (regular
 standalone instances like we’ve had for years) would now check for overlap
 if all variables were distinct. (This freshening of variables is called
 ‘linearization’.) Thus, the check for F would look at `F x y` and `F [x] y`,
 which clearly overlap and would be conflicting.



 2) Coincident overlap between unbranched instances would now be prohibited.
 In the new version of GHC, these uses of coincident overlap would have to be
 grouped in a branched instance. This would mean that all equations that
 participate in coincident overlap would have be defined in the same module.
 Cross-module uses of coincident overlap may be hard to refactor.

 ---



 Breaking change #1 is quite necessary, as that’s the part that closes the
 hole in the type system.

 Breaking change #2 is strongly encouraged by the fix to #1, as the
 right-hand side of an instance is ill-defined after the left-hand side is
 linearized. It is conceivable that there is some way to recover coincident
 overlap on unbranched instances, 

[Haskell-cafe] mapFst and mapSnd

2013-05-28 Thread Dominique Devriese
Hi all,

I often find myself needing the following definitions:

  mapPair :: (a -> b) -> (c -> d) -> (a,c) -> (b,d)
  mapPair f g (x,y) = (f x, g y)

  mapFst :: (a -> b) -> (a,c) -> (b,c)
  mapFst f = mapPair f id

  mapSnd :: (b -> c) -> (a,b) -> (a,c)
  mapSnd = mapPair id

But they seem missing from the prelude and Hoogle or Hayoo only turn
up versions of them in packages like scion or fgl.  Has anyone else
felt the need for these functions?  Am I missing some generalisation
of them perhaps?
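(Editorial note: since base 4.8, Data.Bifunctor provides exactly these
functions under the names bimap, first and second, generalised to any
Bifunctor; Control.Arrow's (***), first and second coincide with them at
pairs, as the reply below notes. A quick check against the definitions above:)

```haskell
import Data.Bifunctor (bimap, first, second)

-- The thread's mapPair, for comparison with the library versions.
mapPair :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
mapPair f g (x, y) = (f x, g y)

main :: IO ()
main = do
  print (mapPair (+1) (*2) (3 :: Int, 4 :: Int))  -- (4,8)
  print (bimap   (+1) (*2) (3 :: Int, 4 :: Int))  -- (4,8): same as mapPair
  print (first  (+1) (3 :: Int, 'a'))             -- (4,'a'): mapFst
  print (second (*2) ('a', 4 :: Int))             -- ('a',8): mapSnd
```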

Regards,
Dominique



Re: [Haskell-cafe] mapFst and mapSnd

2013-05-28 Thread Dominique Devriese
2013/5/28 Tikhon Jelvis tik...@jelv.is:
 These are present in Control.Arrow as (***), first and second respectively.

Right, thanks. Strange that neither Hayoo nor Hoogle turned these up..

Dominique



Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-13 Thread Dominique Devriese
All,

2013/3/13  o...@okmij.org:
 So, Code is almost applicative. Almost -- because we only have a
 restricted pure:
  pureR :: Lift a => a -> Code a
 with a Lift constraint. Alas, this is not sufficient for realistic
 parsers, because often we have to lift functions, as in the example of
 parsing a pair of characters:

I've previously used an approach like this in the grammar-combinators
library.  See 
http://hackage.haskell.org/packages/archive/grammar-combinators/0.2.7/doc/html/Text-GrammarCombinators-Base-ProductionRule.html#t:LiftableProductionRule
and 
http://hackage.haskell.org/packages/archive/grammar-combinators/0.2.7/doc/html/Text-GrammarCombinators-Utils-LiftGrammar.html.

The approach uses a restricted pure like this:

class ProductionRule p => LiftableProductionRule p where
  epsilonL :: a -> Q Exp -> p a

and associated
  epsilonLS :: (Lift v, LiftableProductionRule p) => v -> p v
  epsilonLS v = epsilonL v $ lift v

There is a function liftGrammar which lifts a grammar that uses the
type class to a list of declarations using TH.

This allowed me to start from a context-free grammar, transform it to
a non-left-recursive grammar, optimize it and then lift it using TH.
In some tests, I found that this improved performance significantly
over using the transformed grammar directly, even when I try to force
the transformation to happen before the benchmark.  I assume this is
because the lifted grammar is optimised better by the compiler.

Regards,
Dominique



Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-13 Thread Dominique Devriese
2013/3/13 Dominique Devriese dominique.devri...@cs.kuleuven.be:
 class ProductionRule p => LiftableProductionRule p where
   epsilonL :: a -> Q Exp -> p a

 and associated
   epsilonLS :: (Lift v, LiftableProductionRule p) => v -> p v
   epsilonLS v = epsilonL v $ lift v

Note that the point of providing epsilonL as primitive and not just
epsilonLS is that I can then still lift most functions I use:

  epsilonL (,) [| (,) |]

Even though functions are not necessarily liftable. This is an
alternative to Oleg's adding of e.g. pair etc. as DSL primitives.

Dominique



Re: [Haskell-cafe] Parser left recursion

2013-02-26 Thread Dominique Devriese
2013/2/26 Martin Drautzburg martin.drautzb...@web.de:
 I wonder if I can enforce the nonNr property somehow, i.e. enforce the rule
 will not consider the same nonterminal again without having consumed any
 input.

You might be interested in this paper:

  Danielsson, Nils Anders. Total parser combinators. ACM Sigplan
Notices. Vol. 45. No. 9. ACM, 2010.

Regards,
Dominique



Re: [Haskell-cafe] Parser left recursion

2013-02-20 Thread Dominique Devriese
All,

Many (but not all) of the parsing algorithms that support left
recursion cannot be implemented in Haskell using the standard
representation of recursion in parser combinators.  The problem
can be avoided in Scala because it has imperative features like
referential identity and/or mutable references. The most practical
solution currently is probably to manually transform your grammars
to a non-left-recursive form (as suggested above) and then use a
standard parser combinator library with a top-down parsing algorithm
(I suggest uu-parsinglib).

That being said, there is active research into alternative functional
representations of recursion in grammars/parsers that support a wider
range of algorithms. If you want to read up on such research, I
suggest the following papers to get an idea of some of the approaches:

  Baars, Arthur, S. Doaitse Swierstra, and Marcos Viera. Typed
transformations of typed grammars: The left corner transform.
Electronic Notes in Theoretical Computer Science 253.7 (2010): 51-64.
  Devriese, Dominique, et al. Fixing idioms: A recursion primitive
for applicative dsls. Proceedings of the ACM SIGPLAN 2013 workshop on
Partial evaluation and program manipulation. ACM, 2013.
 Oliveira, Bruno CdS, and William R. Cook. Functional programming
with structured graphs. Proceedings of the 17th ACM SIGPLAN
international conference on Functional programming. ACM, 2012.
 Oliveira, Bruno C. D. S., and Andres Löh. Abstract syntax graphs for
domain specific languages. Proceedings of the ACM SIGPLAN 2013
workshop on Partial evaluation and program manipulation. ACM, 2013.
  Devriese, Dominique, and Frank Piessens. Finally tagless observable
recursion for an abstract grammar model. Journal of Functional
Programming 1.1: 1-40.

For the last one, you can check out
http://projects.haskell.org/grammar-combinators/ about the
grammar-combinators library on Hackage. It has a packrat parser that
can deal with left-recursion and a grammar transformation that
transforms it away. There is a tutorial you can checkout.

Dominique

2013/2/20 Tillmann Rendel ren...@informatik.uni-marburg.de:
 Hi,


 Roman Cheplyaka wrote:

 Another workaround is to use memoization of some sort — see e.g. GLL
 (Generalized LL) parsing.


 Is there a GLL parser combinator library for Haskell? I know about the
 gll-combinators for Scala, but havn't seen anything for Haskell.

 Bonus points for providing the graph-structured stack (for maximal sharing
 in the computation) and the shared packed parse forest (for maximal sharing
 in the results) as reusable components.

   Tillmann




Re: [Haskell-cafe] [Agda] How to avoid T3 fonts in pdf generated with lhs2TeX?

2012-11-01 Thread Dominique Devriese
Andreas,

2012/11/1 Andreas Abel andreas.a...@ifi.lmu.de:
 Hello,

 maybe someone has experience in publishing papers that use lhs2TeX and
 unicode characters with ACM, and has been in my situation before...

 Sheridan, who publishes for ACM, does not like T3 fonts. However, lhs2tex
 --agda does make use of T3 fonts via:

   \RequirePackage[utf8x]{inputenc}

 If I remove this, my unicode characters are garbled in the lhs2tex-generated
 code. Does anoyone know a smart workaround besides replacing all the unicode
 characters manually by some math symbols in the .tex file?

Not sure about all this, but perhaps you can try to use utf8 instead
of utf8x and manually define translations for the unicode characters
that you use,e.g.:

\DeclareUnicodeCharacter{2032}{'}
\DeclareUnicodeCharacter{2080}{_0}
\DeclareUnicodeCharacter{2081}{_1}
\DeclareUnicodeCharacter{2082}{_2}
\DeclareUnicodeCharacter{2115}{\mathbb{N}}
\DeclareUnicodeCharacter{2192}{\to}
\DeclareUnicodeCharacter{2200}{\forall\,}

Perhaps you can then somehow avoid translations that use T3 fonts (not
sure what these are though). Note: the numbers are the characters'
unicode hexadecimal representation (AFAIU), which you can find e.g.
using emacs's describe-char.

Dominique



Re: [Haskell-cafe] Applicative functors with branch/choice ?

2012-07-25 Thread Dominique Devriese
Евгений,

 The possible extension may look somehow like this:

 class Applicative a = Branching a where
  branch :: a (Either b c) -> (a b -> a d) -> (a c -> a d) -> a d

What about the following alternative that does not require an extension?

  import Control.Applicative

  eitherA :: Applicative f => f (a -> c) -> f (b -> c) -> f (Either a b) -> f c
  eitherA = liftA3 either

Note by the way that the result of this function will execute the
effects of all of its arguments (as you would expect for an
Applicative functor).
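(Editorial note: a small usage sketch of eitherA at Maybe, illustrating the
point about effects: the effect of the unselected branch still counts.)

```haskell
import Control.Applicative (liftA3)

eitherA :: Applicative f => f (a -> c) -> f (b -> c) -> f (Either a b) -> f c
eitherA = liftA3 either

main :: IO ()
main = do
  print (eitherA (Just (+1)) (Just (*2)) (Just (Left  (10 :: Int))))  -- Just 11
  print (eitherA (Just (+1)) (Just (*2)) (Just (Right (10 :: Int)))) -- Just 20
  -- The Left-branch function is Nothing; even though Right is chosen,
  -- all three effects are combined, so the overall result is Nothing:
  print (eitherA (Nothing :: Maybe (Int -> Int))
                 (Just (*2)) (Just (Right (10 :: Int))))
```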

Dominique



Re: [Haskell-cafe] specifying using type class

2012-07-23 Thread Dominique Devriese
Patrick,

 -- Class with functional dependency
 class QUEUE_SPEC_CLASS2 a q | q -> a where
    newC2    :: q a -- ??
    sizeC2   :: q a -> Int
    restC2   :: q a -> Maybe (q a)
    insertC2 :: q a -> a -> q a

The above is a reasonable type class definition for what you seem to intend.

 -- Without committing to some concrete representation such as list I do not 
 know how to specify constructor for insertC2 ?? =  ??
insertC2  newC2 a = newC2 -- wrong
isEmptyC2  :: q a -> Bool
isEmptyC2 newC2  = True
 --   isEmptyC2 (insertC2 newC2 a) = False wrong

Correct me if I'm wrong, but what I understand you want to do here is
specify axioms on the behaviour of the above interface methods,
similar to how the well-known |Monad| class specifies for example
m >>= return === m.  You seem to want for example an axiom saying

  isEmptyC2 newC2 === True

and similar for possible other equations. With such axioms you don't
need access to actual constructors and you don't want access to them
because concrete implementations may use a different representation
that does not use such constructors. Anyway, in current Haskell, such
type class axioms can not be formally specified or proven but they are
typically formulated as part of the documentation of a type class and
implementations of the type class are required to satisfy them but
this is not automatically verified.
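(Editorial note: to make the "documented but unchecked laws" point concrete,
here is a sketch: a list-backed instance of the class from this thread, with
isEmptyC2 folded into the class, and a main that spot-checks the two axioms at
a few values. The names ListQ etc. are ours, and spot checks are of course not
proofs.)

```haskell
{-# LANGUAGE FunctionalDependencies, FlexibleInstances #-}

class QUEUE_SPEC_CLASS2 a q | q -> a where
  newC2     :: q a
  sizeC2    :: q a -> Int
  restC2    :: q a -> Maybe (q a)
  insertC2  :: q a -> a -> q a
  isEmptyC2 :: q a -> Bool

-- One concrete implementation against which the documented laws are checked.
newtype ListQ a = ListQ [a] deriving (Eq, Show)

instance QUEUE_SPEC_CLASS2 a ListQ where
  newC2                 = ListQ []
  sizeC2 (ListQ xs)     = length xs
  restC2 (ListQ [])     = Nothing
  restC2 (ListQ (_:xs)) = Just (ListQ xs)
  insertC2 (ListQ xs) x = ListQ (xs ++ [x])
  isEmptyC2 (ListQ xs)  = null xs

main :: IO ()
main = do
  -- axiom: isEmptyC2 newC2 === True
  print (isEmptyC2 (newC2 :: ListQ Int))               -- True
  -- axiom: isEmptyC2 (insertC2 q a) === False
  print (isEmptyC2 (insertC2 (newC2 :: ListQ Int) 7))  -- False
```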

Dominique



Re: [Haskell-cafe] TLS 0.9.6, question about session resumption.

2012-07-21 Thread Dominique Devriese
Hi,

2012/7/21 C Gosch ch.go...@googlemail.com:
 I am trying to use the TLS package from hackage, and it works fine so
 far -- except when a client wants to
 do session resumption (note I am not an expert in TLS, so it might be
 something quite simple).
 In that case, I get an alert, unexpected message, during handshake.

 The handshake goes like this:
 ClientHello (with a SessionID)
 ServerHello (with the same SessionID)
 ServerHelloDone

Not an expert either, but section 7.4 of the TLS 1.2 spec (RFC 5246)
does seem to say that this ServerHelloDone should be a Finished
message instead.

 and then the server says
  (AlertLevel_Fatal,UnexpectedMessage)

Do you mean that the client says this? If so, this may obviously be
correct if the server sends the wrong message. Perhaps you can test
with a different server implementation?

 I'm not sure whether the ServerHelloDone should happen when resuming.
 Does anyone have a hint what may be going wrong?
 I am using TLS10 and the tls package with version 0.9.6.

Bye
Dominique



Re: [Haskell-cafe] Martin Odersky on What's wrong with Monads

2012-06-28 Thread Dominique Devriese
2012/6/27 Tillmann Rendel ren...@informatik.uni-marburg.de:
 MightyByte wrote:

 Of course every line of your program that uses a Foo will change if you
 switch
 to IO Foo instead.


 But we often have to also change lines that don't use Foo at all. For
 example, here is the type of binary trees of integers:

  data Tree = Leaf Integer | Branch Tree Tree

 A function to add up all integers in a tree:

  amount :: Tree -> Integer
  amount (Leaf x) = x
  amount (Branch t1 t2) = amount t1 + amount t2

 All fine so far. Now, consider the following additional requirement: If the
 command-line flag --multiply is set, the function amount computes the
 product instead of the sum.

 In a language with implicit side effects, it is easy to implement this. We
 just change the third line of the amount function to check whether to call
 (+) or (*). In particular, we would not touch the other two lines.

 How would you implement this requirement in Haskell without changing the
 line amount (Leaf x) = x?

I may be missing the point here, but having worked on large code bases
with a wide variety of contributors before, I find it very advantageous
that programmers are prevented from writing an amount function whose
behaviour depends on command-line arguments without at least an
indication in the type. The fact that the function cannot perform
stuff like that is precisely the guarantee that the Haskell type gives
me...
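For what it's worth, here is a sketch (my own, not from the quoted message) of how the quoted requirement can be met while keeping the traversal pure: the operation becomes an explicit argument, so only the caller that consults the command line carries IO in its type.

```haskell
data Tree = Leaf Integer | Branch Tree Tree

-- The traversal is unchanged except for the extra operation argument;
-- the Leaf case in particular stays exactly as before.
amountWith :: (Integer -> Integer -> Integer) -> Tree -> Integer
amountWith _  (Leaf x)       = x
amountWith op (Branch t1 t2) = amountWith op t1 `op` amountWith op t2

-- The impure decision is made once, at the program boundary:
--   main = do multiply <- elem "--multiply" <$> getArgs
--             print (amountWith (if multiply then (*) else (+)) tree)
```

The dependency on the command line is then visible in the type of main, not of amountWith.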

Dominique



Re: [Haskell-cafe] lhs2TeX: automatic line wrap within code blocks?

2012-04-05 Thread Dominique Devriese
David,

The easiest solution is probably to use multi-line string literals
(string gaps) and line-wrap manually:

\begin{code}
cyphertext = "rlkmlj, zlnift ekblvke pqc elvm if pzlp gblrk, akrlomk zk zle \
  \lfpiriglpke pzlp, if pzk flpojlb rcojmk cs knkfpm, morz qcobe ak pzk rcfeorp \
  \cs nkjriftkpcjiu, bklnkm pzk ljdv ofekj gjkpkfmk cs jlimift jkrjoipm lfe \
  \rlnlbjv"
\end{code}

Dominique

On 4 April 2012 at 20:14, david.mihola david.mih...@gmail.com
wrote:
 Hello,

 I am currently using lhs2TeX for the first time and have encountered a
 problem which I am unable to solve myself: Some code lines are too long to
 fit into a single line of the output (PDF) file and thus go off the right
 edge of the page.

 Consider the following example:

 -

 \documentclass{article}
 \usepackage[utf8]{inputenc}

 %include polycode.fmt
 %options ghci
 \begin{document}

 Our encrypted message:

 \begin{code}
 cyphertext = rlkmlj, zlnift ekblvke pqc elvm if pzlp gblrk, akrlomk zk zle
 lfpiriglpke pzlp, if pzk flpojlb rcojmk cs knkfpm, morz qcobe ak pzk rcfeorp
 cs nkjriftkpcjiu, bklnkm pzk ljdv ofekj gjkpkfmk cs jlimift jkrjoipm lfe
 rlnlbjv
 \end{code}

 Our decryption function:

 \begin{code}
 decrypt = id
 \end{code}

 The original message was:

 \eval{decrypt cyphertext}

 \end{document}

 -

 Converting this to .tex with lhs2TeX and to .pdf with pdflatex produces a
 PDF in which both instances of the cyphertext go off the right edge of the
 page.

 Is there any way to tell lhs2TeX to allow/force line wrap within code blocks
 and eval-statements?

 Thank you very much for any help!

 David

 P.S.: I have only found one prior discussion of my question
 (http://tex.stackexchange.com/questions/15048/how-to-typeset-a-multiline-text-in-math-environment)
 but to my understanding no real answer came out of that.

 --
 View this message in context: 
 http://haskell.1045720.n5.nabble.com/lhs2TeX-automatic-line-wrap-within-code-blocks-tp5618600p5618600.html
 Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.




Re: [Haskell-cafe] decoupling type classes

2012-01-17 Thread Dominique Devriese
2012/1/16 Yin Wang yinwa...@gmail.com:
 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

 It can handle this case, although it doesn't handle it as a parametric
 instance. I suspect that we don't need the concept of parametric
 instances at all. We just search for instances recursively at the
 call site:

 That seems like it could work, but typically, one would like
 termination guarantees for this search, to avoid the type-checker
 getting stuck...

 Good point. Currently I'm guessing that we need to keep a stack of the
 traced calls. If a recursive call needs an implicit parameter X which
 is matched by one of the functions in the stack, we back up from the
 stack and resolve X to the function found on stack.

You may want to look at Scala's approach for their implicit arguments.
They use a certain technique to conservatively detect infinite loops
during the instance search, but I don't remember the details offhand.
While talking about related work, you may also want to take a look at
GHC's implicit parameters and C++ concepts...


 foo x =
   let overload bar (x :: Int) = x + 1
   in \() -> bar x


 baz = foo (1 :: Int)

 Even if we have only one definition of bar in the program, we should
 not resolve it to the definition of bar inside foo, because that
 bar is not visible at the call site foo (1::Int). We should report
 an error in this case. Thinking of bar as a typed, dynamically scoped
 variable helps to justify this decision.

 So you're saying that any function that calls an overloaded function
 should always allow its own callers to provide this, even if a correct
 instance is in scope. Would that mean all instances have to be
 resolved from main? This also strikes me as strange, since I gather
 you would get something like length :: Monoid Int => [a] -> Int,
 which would break if you happen to have a multiplicative monoid in
 scope at the call site?

 If you already have a correct instance in scope, then you should have
 no way defining another instance with the same name and type in the
 scope as the existing one. This is the case for Haskell.

Yes, but different ones may be in scope at different places in the code, right?

 But it may be useful to allow nested definitions (using let) to shadow
 the existing instances in the outer scope of the overloaded call.

I considered something like this for instance arguments in Agda, but
it was hard to make the instance resolution deterministic when
allowing such a form of prioritisation. The problem occurred if a
shadower and shadowee instance had slightly different types, such that
only the shadowee was actually type-valid for a certain instance
argument. However, the type information which caused the shadower to
become invalid only became available late in the type inference
process. In such a case, it is necessary to somehow ascertain that the
shadower instance is not chosen, but I did not manage to figure out
how to get this right.

Dominique



Re: [Haskell-cafe] decoupling type classes

2012-01-16 Thread Dominique Devriese
Yin,

2012/1/14 Yin Wang yinwa...@gmail.com:
 On Sat, Jan 14, 2012 at 2:38 PM, Dominique Devriese
 dominique.devri...@cs.kuleuven.be wrote:
 I may or may not have thought about it. Maybe you can give an example
 of parametric instances where there could be problems, so that I can
 figure out whether my system works on the example or not.

 The typical example would be

 instance Eq a => Eq [a] where
  [] == [] = True
  (a : as) == (b : bs) = a == b && as == bs
  _ == _ = False

 It can handle this case, although it doesn't handle it as a parametric
 instance. I suspect that we don't need the concept of parametric
 instances at all. We just search for instances recursively at the
 call site:

That seems like it could work, but typically, one would like
termination guarantees for this search, to avoid the type-checker
getting stuck...

 foo x =
   let overload bar (x :: Int) = x + 1
   in \() -> bar x


 baz = foo (1 :: Int)

 Even if we have only one definition of bar in the program, we should
 not resolve it to the definition of bar inside foo, because that
 bar is not visible at the call site foo (1::Int). We should report
 an error in this case. Thinking of bar as a typed, dynamically scoped
 variable helps to justify this decision.

So you're saying that any function that calls an overloaded function
should always allow its own callers to provide this, even if a correct
instance is in scope. Would that mean all instances have to be
resolved from main? This also strikes me as strange, since I gather
you would get something like length :: Monoid Int => [a] -> Int,
which would break if you happen to have a multiplicative monoid in
scope at the call site?

Dominique



Re: [Haskell-cafe] decoupling type classes

2012-01-12 Thread Dominique Devriese
Yin,

2012/1/12 Yin Wang yinwa...@gmail.com:
 I have an idea about type classes that I have been experimenting. It
 appears to be a generalization to Haskell’s type classes and seems to
 be doable. It seems to related the three ideas: type classes, implicit
 parameters, and (typed) dynamic scoping. But I don't know whether it
 is good or not. I hope to get some opinions before going further.

I find your ideas interesting. You may be interested in a related
design which I recently implemented for Agda [2], and an ICFP 2011
paper that presents it [1].

Also, you don't seem to have thought about the question of parametric
instances: do you allow them or not, and if you do, what computational
power do they get, etc.?

 I have an experimental system which “decouples” the dictionary.
 Instead of passing on a dictionary, it passes individual “implicit
 parameters” around. Those implicit parameters are type-inferenced and
 they can contain type parameters just as methods in a type class can.
 Similarly, they are resolved by their types in the call site's scope.

I'm surprised that you propose passing all type class methods
separately. It seems to me that for many type classes, you want to
impose a certain correspondence between the types of the different
methods in a type class (for example, for the Monad class, you would
expect return to be of type (a -> m a) if (>>=) is of type
(m a -> (a -> m b) -> m b)). I would expect that inferring these
relations in each function that uses either of the methods will lead
to overly general inferred types and the need for more guidance to the
type inferencer?

By separating the methods, you would also lose the laws that associate
methods in a type class, right?

An alternative to what you suggest is the approach I recommend for
using instance arguments: wrapping all the methods in a standard data
type (i.e. define the dictionary explicitly), and pass this around as
an implicit argument.
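To make that concrete, here is a sketch (type and function names are mine) of such an explicit dictionary, passed around by hand; an implicit-argument mechanism would simply pass a value like this automatically.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Monad interface, reified as an ordinary record of its methods.
data MonadDict m = MonadDict
  { ret  :: forall a. a -> m a
  , bind :: forall a b. m a -> (a -> m b) -> m b
  }

-- One concrete dictionary, for Maybe.
maybeDict :: MonadDict Maybe
maybeDict = MonadDict
  { ret  = Just
  , bind = \m k -> maybe Nothing k m
  }

-- A generic function takes the dictionary as an explicit argument,
-- the way a type-class-constrained function takes it implicitly.
applyTwice :: MonadDict m -> (a -> m a) -> m a -> m a
applyTwice d k m = bind d (bind d m k) k
```

Keeping the methods bundled in one record also keeps the correspondence between their types (and the associated laws) in one place.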

 The convenience of this approach compared to Haskell’s type classes is
 that we no longer require a user of a type class to define ALL the
 methods in a type class. For example, a user could just define a
 method + without defining other methods in the Num class: -, *, … He
 can use the method + independently. For example, if + is defined on
 the String type to be concatenation, we can use + in another function:

 weirdConcat x y = x + y + y

 This has a utility, because the methods in the Num class don’t “make
 sense” for Strings except +, but the current type class design
 requires us to define them. Note here that weirdConcat will not have
 the type (Num a) => a -> a -> a, since we no longer have the Num
 class; it is decoupled into separate methods.

For this example, one might also argue that the problem is in fact
that the Num type class is too narrow, and + should instead be defined
in a parent type class (Monoid comes to mind) together with 0 (which
also makes sense for strings, by the way)?
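As a sketch of that suggestion: against the wider class (Semigroup/Monoid in modern base, with the list instances that base already provides), the quoted weirdConcat already type-checks, and mempty plays the role of 0 for strings.

```haskell
-- The quoted example, typed against the wider class instead of Num.
-- (<>) is list/string concatenation for the instances used below.
weirdConcat :: Semigroup a => a -> a -> a
weirdConcat x y = x <> y <> y

-- The "0 for strings" mentioned above:
emptyString :: String
emptyString = mempty
```

So no per-method decoupling is needed for this particular example; picking the right (smaller) class suffices.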

 There is another benefit of this decoupling: it can subsume the
 functionality of MPTC. Because the methods are no longer grouped,
 there is no “common” type parameter to the methods. Thus we can easily
 have more than one parameter in the individual methods and
 conveniently use them as MPTC methods.

Could you explain this a bit further?

 Here g is explicitly declared as “overloaded”, although my
 experimental system doesn’t need this. Any undefined variable inside
 function body automatically becomes overloaded. This may cause
 unintended overloading and it catches bugs late. That’s why we need
 the “overload” declarations.

I would definitely argue against treating undefined variables as
overloaded automatically. It seems this will lead to strange errors if
you write typos, for example.

 But the automatic overloading of the undefined may be useful in
 certain situations. For example, if we are going to use Haskell as a
 shell language. Every “command” must be evaluated when we type them.
 If we have mutually recursive definitions, the shell will report
 “undefined variables” either way we order the functions. The automatic
 overloading may solve this problem. The undefined variables will
 temporarily exist as automatic overloaded functions. Once we actually
 define a function with the same name AND satisfies the type
 constraints, they become implicit parameters to the function we
 defined before. If we call a function whose implicit parameters are
 not associated, the shell reports error very similar to Haskell’s
 “type a is not of class Num …”

The design you suggest seems to differ from Haskell's current
treatment, where functions can refer to other functions defined
further down the file, but still have them resolved statically?

 RELATIONSHIP TO DYNAMIC SCOPING

 It seems to be helpful to think of the “method calls” as referencing
 dynamically scoped variables. They are dispatched depending on the
 bindings we have in the call site's scope (and not the scope where the
 method is defined!). So it 

Re: [Haskell-cafe] Is it possible to represent such polymorphism?

2011-10-04 Thread Dominique Devriese
All,

In case anyone is interested, I just want to point out an interesting
article about the relation between Haskell type classes and C++
(overloading + concepts):

http://sms.cs.chalmers.se/publications/papers/2008-WGP.pdf

Dominique


2011/10/3 Ketil Malde ke...@malde.org:
 sdiy...@sjtu.edu.cn writes:

 This has nothing to do with OOP or being imperative. It's just about types.

 Of course, it's not necessarily linked to OOP, but OO languages - to the
 extent they have types - tend towards ad-hoc polymorphism instead of
 parametric polymorphism.  There are different trade-offs, one is the
 lack of return-type overloading in C++.

 -k
 --
 If I haven't seen further, it is by standing in the footprints of giants





Re: [Haskell-cafe] Smarter do notation

2011-09-04 Thread Dominique Devriese
It's not the same as what you propose, but it's related, so for
discussion, I just want to point out idiom brackets (an analog for
do-notation for Applicative functors) which have been introduced in
some Haskell-related languages. Examples are Idris
(http://www.cs.st-andrews.ac.uk/~eb/Idris/donotation.html) and SHE
(http://personal.cis.strath.ac.uk/~conor/pub/she/idiom.html).
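For illustration, the do-block quoted below can already be written by hand in the Applicative style the proposal would desugar to. A sketch, using Maybe as a stand-in for the real actions:

```haskell
-- An arbitrary pure combining function, as in the quoted example.
f :: Int -> Int -> Int -> Int
f x y z = x + y * z

-- The monadic form, exactly as the do-block desugars today.
monadic :: Maybe Int
monadic = do
  x <- Just 1
  y <- Just 2
  z <- Just 3
  return (f x y z)

-- The Applicative form a smarter desugaring could produce:
-- no bound variable is inspected before the final return, so no
-- Monad power is needed.
applicative :: Maybe Int
applicative = f <$> Just 1 <*> Just 2 <*> Just 3
```

Idiom brackets are essentially notation for the second form.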

Dominique

2011/9/4 Daniel Peebles pumpkin...@gmail.com:
 Hi all,
 I was wondering what people thought of a smarter do notation. Currently,
 there's an almost trivial desugaring of do notation into (>>=), (>>), and
 fail (grr!) which seem to naturally imply Monads (although oddly enough,
 return is never used in the desugaring). The simplicity of the desugaring is
 nice, but in many cases people write monadic code that could easily have
 been Applicative.
 For example, if I write in a do block:
 x - action1
 y - action2
 z - action3
 return (f x y z)
 that doesn't require any of the context-sensitivity that Monads give you, and
 could be processed a lot more efficiently by a clever Applicative instance
 (a parser, for instance). Furthermore, if return values are ignored, we
 could use the (<$), (<*), or (*>) operators, which could make the whole thing
 even more efficient in some instances.
 Of course, the fact that the return method is explicitly mentioned in my
 example suggests that unless we do some real voodoo, Applicative would have
 to be a superclass of Monad for this to make sense. But with the new default
 superclass instances people are talking about in GHC, that doesn't seem too
 unlikely in the near future.
 On the implementation side, it seems fairly straightforward to determine
 whether Applicative is enough for a given do block. Does anyone have any
 opinions on whether this would be a worthwhile change? The downsides seem to
 be a more complex desugaring pass (although still something most people
 could perform in their heads), and some instability with making small
 changes to the code in a do block. If you make a small change to use a
 variable before the return, you instantly jump from Applicative to Monad and
 might break types in your program. I'm not convinced that's necessarily a bad
 thing, though.
 Any thoughts?
 Thanks,
 Dan





Re: [Haskell-cafe] SIGPLAN Programming Languages Software Award

2011-06-09 Thread Dominique Devriese
2011/6/9 Stephen Tetley stephen.tet...@gmail.com:
 On 9 June 2011 09:02, Yves Parès limestr...@gmail.com wrote:
 Were templates an original feature of C++ or did they appear in a revision
 of the language?
 Because C++ appeared in 1982 and Haskell in 1990.

 Templates were a later addition to C++. There is a strong tradition of
 generics in OO and related languages that seems rather unrelated to
 typed functional programming - ADA, Eiffel and particularly CLU.

Note that the more recent C++ concepts are related to (and inspired
by?) Haskell type classes. See Bernardy et al.'s interesting paper A
comparison of C++ concepts and Haskell type classes:

  http://portal.acm.org/citation.cfm?id=1411324

 Congratulations to the Simons of course, kudos _and_ escudos from the ACM!

Idem. The Simons, GHC and the associated research very much deserve this award.

Dominique



Re: [Haskell-cafe] Random thoughts about typeclasses

2011-05-18 Thread Dominique Devriese
Robert,

2011/5/16 Robert Clausecker fuz...@gmail.com:
 I found out that GHC implements typeclasses as an extra argument: a
 record that stores all functions of the typeclass. So I was wondering,
 is there a way (apart from using newtype) to pass a custom record as the
 typeclass record, to modify the behavior of the typeclass? I thought
 about something like this:

You may be interested in Agda's upcoming instance arguments
(inspired by Scala implicits and Agda's implicit arguments). These
will be available in Agda 2.2.12 (you may find references to an older
name, non-canonical implicit arguments). This new kind of function
argument is automatically resolved from the call-site scope unless it
is explicitly provided. Type classes are directly (not just under the
hood) modelled as records, and you can do what you suggest. You can
also define local instances, and there are other advantages. We have
chosen a more limited-power instance search, though. More discussion
online.

  
http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.InstanceArguments
  http://people.cs.kuleuven.be/~dominique.devriese/agda-instance-arguments/

I believe a similar Haskell extension (perhaps with a less principled
instance search) would improve and simplify Haskell's type class
system.

By the way, Kahl and Scheffczyk proposed extending Haskell with named
instances in 2001 which allowed something like this to a limited
extent. Look for Named instances for Haskell Type Classes in Google
Scholar.

Dominique



Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Dominique Devriese
2011/5/3 Manuel M T Chakravarty c...@cse.unsw.edu.au:
 Interestingly, today (at least the academic fraction of) the Haskell
 community appears to hold the purity of the language in higher
 regard than its laziness.

I find Greg Morrisett's comment on Lennart Augustsson's article pro
lazy evaluation very interesting:

  
http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#c7969361694724090315

What I find interesting is that he considers (non-)termination an
effect, which Haskell does not manage to control like it does other
types of effects. Dependently typed purely functional languages like
Coq (or Agda if you prefer ;)) do manage to control this (at the cost
of replacing general recursion with structural recursion) and require
you to model non-termination in a monad (or Applicative functor) like
in YNot or Agda's partiality monad (written _⊥) which models just
non-termination.

I have the impression that this separation of the partiality effect
provides a certain independence of evaluation order which neither ML
(because of side-effects) nor Haskell (because of non-strict
semantics) manage to provide. Such an independence seems very useful
for optimization and parallel purposes.

Dominique



Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Dominique Devriese
2011/5/2 Ketil Malde ke...@malde.org:
  There is a particular reason why monads had to arise in Haskell,
   though, which is to defeat the scourge of laziness.

 My own view is/was that monads were so successful in Haskell since it
 allowed writing flexible programs with imperative features, without
 sacrificing referential transparency.  [...]

 Laziness does require referential transparency (or at least, it is
 easier to get away with the lack of RT in a strict language), so I can
 see that he is indirectly correct, but RT is a goal in itself.  Thus, I
 wonder if there are any other rationale for a statement like that?

I agree with your analysis. Throughout his different articles, I think
Harper partly has a point when he says that laziness brings certain
disadvantages (like e.g. complex memory and CPU behaviour) to Haskell
(although I disagree with some of his other  arguments here). However,
like you say, he misses the ball by amalgamating laziness with
referential transparency, where the first probably requires the second
but not vice versa. This allows him to simply dismiss both, which is
convenient for his apparent conclusion that ML is strictly better
than Haskell, since referential transparency and purity are (in my
view) one of the things ML lacks most when compared to Haskell. His
only other argument against referential transparency and purity seems
to be his mentioning of benign effects, which is weak for two
reasons: first, benign effects are clearly not what typical ML
programs use non-purity for and second, benign effects can be
supported much more conservatively using Haskell's unsafePerformIO.
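To illustrate what I mean by a benign effect via unsafePerformIO, here is a deliberately contrived sketch (my own, not from Harper's posts): instrumentation hidden behind a pure-looking function, which is benign precisely because the observable result is unaffected.

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A global call counter, created once at first use.
-- NOINLINE is essential so the IORef is shared, not re-created.
{-# NOINLINE callCount #-}
callCount :: IORef Int
callCount = unsafePerformIO (newIORef 0)

-- Looks pure from the outside; the counting is invisible in the
-- result, which is what makes the effect "benign".
tracedSquare :: Int -> Int
tracedSquare n = unsafePerformIO $ do
  modifyIORef' callCount (+ 1)
  return (n * n)
```

An ML program, by contrast, typically uses mutation for effects that are anything but benign, which is why the comparison doesn't rescue non-purity.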

Dominique



Re: [Haskell-cafe] Proving correctness

2011-02-11 Thread Dominique Devriese
Kashyap,

2011/2/11 C K Kashyap ckkash...@gmail.com:
 I've come across this a few times - In Haskell, one can prove the
 correctness of the code - Is this true?
 I know that static typing and strong typing of Haskell eliminate a whole
 class of problems - is that related to proving correctness?
 Is it about QuickCheck - if so, how is it different from having test suites
 in projects using mainstream languages?

You may be confusing Haskell with dependently typed programming languages such
as Coq or Agda, where formal proofs of correctness properties of
programs can be verified by the type checker.

Dominique



Re: [Haskell] Problems with (projects|community).haskell.org

2011-02-10 Thread Dominique Devriese
Also, is there any news yet on a procedure for community members with
accounts on projects.haskell.org to get access to them again? My ssh
publickey login is no longer being accepted. I had an account mainly
for hosting the darcs repo and the website for my project
grammar-combinators. The website has been down for a couple of weeks
now.

Dominique

P.S.: This is not a complaint, I'm just hoping for a status update.
P.P.S.: Thanks to the people working on fixing this..

2011/2/9 Erik de Castro Lopo mle...@mega-nerd.com:
 Hi all,

 Still a couple of problems with these servers.

 Firstly, community.haskell.org shows the default Apache It works
 page. It would be nice to have something better there.

 Secondly the mailman web interface on projects.haskell.org [0] is
 giving a Service Temporarily Unavailable message (and has been
 for a couple of days).

 Cheers,
 Erik

 [0] http://projects.haskell.org/cgi-bin/mailman/admindb/haskell-llvm
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/

 ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell




Re: [Haskell-cafe] [Haskell] Problems with (projects|community).haskell.org

2011-02-10 Thread Dominique Devriese
Also, is there any news yet on a procedure for community members with
accounts on projects.haskell.org to get access to them again? My ssh
publickey login is no longer being accepted. I had an account mainly
for hosting the darcs repo and the website for my project
grammar-combinators. The website has been down for a couple of weeks
now.

Dominique

P.S.: This is not a complaint, I'm just hoping for a status update.
P.P.S.: Thanks to the people working on fixing this..

2011/2/9 Erik de Castro Lopo mle...@mega-nerd.com:
 Hi all,

 Still a couple of problems with these servers.

 Firstly, community.haskell.org shows the default Apache It works
 page. It would be nice to have something better there.

 Secondly the mailman web interface on projects.haskell.org [0] is
 giving a Service Temporarily Unavailable message (and has been
 for a couple of days).

 Cheers,
 Erik

 [0] http://projects.haskell.org/cgi-bin/mailman/admindb/haskell-llvm
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/





Re: [Haskell-cafe] Haskell for children? Any experience?

2011-01-27 Thread Dominique Devriese
Hi,

I'm also curious about this. Is a pure programming style like
Haskell's less or more natural than an imperative mutable-state based
one to kids without experience. I intuitively expect that for kids
with a high-school background in mathematics would find the first more
natural, but this is not based on any teaching experience. Does anyone
have real-life experience with this or know of any related literature?

Thanks
Dominique

2011/1/27 Vo Minh Thu not...@gmail.com:
 Hi,

 You said Haskell's immutability is good for mathematics but doing anything 
 else
 takes a great deal of up-front patience and perseverance[...]

 I guess it is true for imperative programmers... but are you saying
 that about kids that just know how to use a calculator?

 Cheers,
 Thu

 2011/1/27 aditya siram aditya.si...@gmail.com:
 Ye gods! A B  D [1] language for kids? At least give them a fighting
 chance [2] at becoming future developers.

 Haskell's immutability is good for mathematics but doing anything else
 takes a great deal of up-front patience and perseverance, two very
 rare qualities in that demographic if my own childhood is any
 indication.

 BTW I want to be wrong so if you do succeed with this I will feast on
 crow with gusto.

 -deech

 [1] http://c2.com/cgi/wiki?BondageAndDisciplineLanguage
 [2] http://scratch.mit.edu/

 On Thu, Jan 27, 2011 at 9:04 AM, Chris Smith cdsm...@gmail.com wrote:
 So I find myself being asked to plan Haskell programming classes for one
 hour, once a week, from September through May this coming school year.
 The students will be ages 11 to 13.  I'm wondering if anyone has
 experience in anything similar that they might share with me.  I'm
 trying to decide if this is feasible, or it I should try to do something
 different.

 To be honest, as much as I love Haskell, I tried to push the idea of
 learning a different language; perhaps Python.  So far, the kids will
 have none of it!  This year, I've been teaching a once-a-week
 exploratory mathematics sort of thing, and we've made heavy use of
 GHCi... and they now insist on learning Haskell.

 (By the way, GHCi is truly amazing for exploratory mathematics.  We
 really ought to promote the idea of Haskell for elementary / junior-high
 level math teachers!  It's so easy to just try stuff; and there are so
 many patterns you can just discover and then say Huh, why do you think
 that happens?  Can you write it down precisely? ...)

 --
 Chris Smith










Re: [Haskell-cafe] Template Haskell a Permanent solution?

2011-01-04 Thread Dominique Devriese
All,

2010/12/27 Jonathan Geddes geddes.jonat...@gmail.com:
 I see TH used most for the following tasks:

 #1 Parse a string at compile-time so that a custom syntax for
 representing data can be used. At the extreme, this data might even
 be an EDSL.
 #2 Provide instances automatically.

Just a note that TH is also sometimes used in its generality: as a
general compile time meta-programming facility. For example, in my
experimental grammar-combinators parsing library [1], I am using it to
perform grammar transformations at compile time by simply generating
the definition for the transformed grammar using TH. This could be
extended in the future to provide a low-cost parser generator that
works from within TH, which can reuse the library's infrastructure.

Dominique

[1] http://projects.haskell.org/grammar-combinators/



[Haskell-cafe] Decoupling type classes (e.g. Applicative)?

2010-10-29 Thread Dominique Devriese
Hi all,

I have a problem with the design of the Applicative type class, and
I'm interested to know people's opinion about this.

Currently, the Functor and Applicative type class are defined like this:

  class  Functor f  where
      fmap :: (a -> b) -> f a -> f b

  class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

My problem is that in the grammar-combinators library [1], the pure
combinator is too general for me. I would propose a hierarchy like
the following:

  class  Pointed f  where
      pure :: a -> f a

  class  ApplicativeC f where
    (<*>) :: f (a -> b) -> f a -> f b

The original type class Applicative can then be recovered as follows,
and the applicative laws can be specified informally in this class's
definition.

  class  (Pointed f, ApplicativeC f, Functor f) => Applicative f where

This would allow me to restrict injected values to stuff I can lift
into Template Haskell later on:

  class  LiftablyPointed f where
      pureL  :: (Lift a) => a -> f a
      pureL' :: a -> Q Exp -> f a

  class  (LiftablyPointed f, ApplicativeC f) => LiftablyApplicative f where

This problem currently makes it impossible for me to use the (<*>)
combinator, and I have to redefine it under a different name. To me
the problem seems similar to the well-known example of the inclusion
of the fail primitive in the Monad class, where the general opinion
seems to be that it was a bad idea to include fail in the Monad class
(see e.g. the article on the haskell wiki about handling failure [2]).

I've been thinking about the following type class design principles:

* Only include two functions in the same type class if both can be
  implemented in terms of each other.

* Only introduce a dependency from type class A to type class B if all
  functions in type class B can be implemented in terms of the
  functions in type class A or if type class A is empty.

(Disclaimer: I currently do not follow these principles myself ;))

I would like to know people's opinions about this. Are there any
issues with this advice that I don't see? Have other people
encountered similar problems? Any interesting references?

Thanks,
Dominique

Footnotes:
[1]  http://projects.haskell.org/grammar-combinators/
[2]  http://www.haskell.org/haskellwiki/Failure
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Specification and prover for Haskell

2010-10-25 Thread Dominique Devriese
Romain,

2010/10/25 Romain Demeyer r...@info.fundp.ac.be:
 I'm working on static verification in Haskell, and I search for existing
 works on specification of Haskell programs (such as pre/post conditions, for
 example) or any other functional language. It would be great if there exists
 a prover based on this kind of specifications. I already found the
 ESC/Haskell. Do you know some other works which could be interesting?

I found the paper Verifying Haskell using constructive type theory
[1] interesting...

Dominique

Footnotes:
[1] http://www.mimuw.edu.pl/~ben/Papers/monadic.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] pointers for EDSL design

2010-10-12 Thread Dominique Devriese
2010/10/12  o...@okmij.org:
 An alternative approach to model sharing at the object level is the
 technique I use for modelling context-free grammars in my PADL 2011
 paper Explicitly Recursive Grammar Combinators...  Using ideas from
 the Multirec generic programming library and some recent Haskell type
 system extensions (most importantly GADTs and type families), you can
 do this in a well-typed way for sets of mutually recursive
 object-level expressions.

 I guess you are using what I call `the initial embedding' of an object
 language. Somehow I prefer the final embedding.

No. In the library, I use both embedding styles for different
purposes, but what I was referring to here (the construction of
production rules) is actually implemented using what you call a typed
tagless-final embedding. I see the technique as an encoding of
*recursion* in a typed tagless final object language in such a way
that the recursion is observable in the host language.

Suppose you have the following (logically inconsistent ;)) code (in
Haskell notation):
  term1 :: Int
  term1 = if term2 then 1 else 2
  term2 :: Bool
  term2 = term1 == 2

and you want to model it in the typed tagless final encoding of simply
typed lambda calculus from the examples in your online typed tagless
final lecture notes [1] extended with implicit arbitrary recursion.
Then you could do

  data Term1
  data Term2

  data TermDomain ix where
    Term1 :: TermDomain Term1
    Term2 :: TermDomain Term2

  data family TermVal ix
  newtype instance TermVal Term1 = TV1 {unTV1 :: Int}
  newtype instance TermVal Term2 = TV2 {unTV2 :: Bool}

  class ExtraSymantics repr where
    if_    :: repr h Bool -> repr h a -> repr h a -> repr h a
    eq_int :: repr h Int -> repr h Int -> repr h Bool

  class RecSymantics repr phi | repr -> phi where
    ref :: phi ix -> repr h (TermVal ix)

  terms :: (Functor (repr h), Symantics repr, ExtraSymantics repr,
            RecSymantics repr TermDomain)
        => TermDomain ix -> repr h (TermVal ix)
  terms Term1 = fmap TV1 $ if_ (fmap unTV2 (ref Term2)) (int 1) (int 2)
  terms Term2 = fmap TV2 $ eq_int (fmap unTV1 (ref Term1)) (int 2)

In this way, the embedding models the object language recursion in
such a way that the recursion remains observable in the host language
because you can implement it the way you want in your RecSymantics
instance. Possible needs for this observable recursion could be that
you want to do some form of recursion depth-bounded evaluation or some
form of static analysis or whatever... Such modifications are
fundamentally impossible if you model object language recursion
naively using direct host language recursion.

For my parsing library, I need these techniques to get a good view on
the recursion in the grammar. This allows me perform grammar
transformations and analysis.

Dominique

Footnotes:
[1]  http://okmij.org/ftp/tagless-final/course/#infin1
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] pointers for EDSL design

2010-10-11 Thread Dominique Devriese
John, Oleg,

2010/10/9  o...@okmij.org:
 So here's a very simple expression:

 t1 = let v = sigGen (cnst 1) in outs v v

 which is what led to my question.  I'm binding the sigGen to 'v' to
 introduce sharing at the meta-level.  Would it be better to introduce
 support for this in the dsl?

 Often this is not a question of preference but that of
 necessity. Sharing at the meta-level may help the generator, but it
 does _not_ translate into the sharing at the object level. In the
 generated code, the code for 'sigGen (cnst 1)' shall be
 duplicated. It could be that two csound blocks must share the same
 signal source, to receive samples in parallel. Meta-level sharing
 (Haskell's let) would not do. We need a construct for an object-level
 let, for example

  t1 = let_ (SigGen (cnst 1)) (\v -> outs v v)

An alternative approach to model sharing at the object level is the
technique I use for modelling context-free grammars in my PADL 2011
paper Explicitly Recursive Grammar Combinators [1] (just got
acceptance notification this morning!). The idea is basically that you make the
sharing in the object-level expression explicit by modelling all
your terms as the results of one big recursive function and then
opening up the recursion. Using ideas from the Multirec generic
programming library and some recent Haskell type system extensions
(most importantly GADTs and type families), you can do this in a
well-typed way for sets of mutually recursive object-level
expressions.

In this case, you would get something like the following (written
without any understanding of your problem domain, so sorry if I
interpret stuff wrong here ;) ):

data I1
data T1
data CircuitNode ix where
  I1 :: CircuitNode I1
  T1 :: CircuitNode T1

myCircuit self I1 = sigGen (cnst 1)
myCircuit self T1 = outs (self I1) (self I1)

With a type class such as RecProductionRule in my paper, you can then
even get rid of the self argument and get something like this:

myCircuit I1 = sigGen (cnst 1)
myCircuit T1 = outs (ref I1) (ref I1)

The main advantage is that this approach extends to circuits with
mutually recursive nodes, but contrary to simple meta-level sharing,
allows you to observe and manipulate the recursive structure of the
circuit. Oh, and it stays properly typed. More info in the paper and
the accompanying technical report [2].

cheers
Dominique

Footnotes:
[1]  
http://people.cs.kuleuven.be/~dominique.devriese/permanent/cfgc-submitted-PADL.pdf
[2]  http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW594.abs.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Off-topic]Functional parsing theory

2010-10-06 Thread Dominique Devriese
Mauricio,

2010/10/6 Maurício CA mauricio.antu...@gmail.com:
 I've been working in a tool that reads a grammar with associated
 actions and act on input based on that grammar. I would like to
 rewrite it in a functional style, but I've not been able to find a
 theory that would handle any possible grammar with cyclicity and
 empty productions, and flexibility is more important for this tool
 than performance.

 Do you have a suggestion on that? What I'm using now is this
 (non-functional) article on Earley method:

I'm not sure what you're looking for exactly, but my
grammar-combinators library [1] might be interesting for you. It is
not industry-proof yet, but might benefit from some more real-world
use and comments. It is a novel functional parsing library
using an explicit representation of recursion which allows it to
support many different parsing algorithms and grammar transformations.

Anyway, in the functional world, the parsing algorithms used are often
LL algorithms, typically driven by parser combinators. Other
algorithms can sometimes be emulated in a functional style using a
top-down parsing algorithm on a transformed grammar (e.g. left-corner
transform, but I also suspect you can emulate LR parsing using what I
call the uniform Paull transformation). My library automates two such
important transformations (supporting for example left-recursion).

Dominique

Footnotes:
[1] http://projects.haskell.org/grammar-combinators/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Suggestions for improvement

2010-10-05 Thread Dominique Devriese
2010/10/5 N. Raghavendra ra...@mri.ernet.in:
 At 2010-10-03T22:45:30+02:00, Dominique Devriese wrote:

 comma :: (a -> b) -> (a -> c) -> a -> (b,c)
 comma f g x = (f x, g x)

 comma = liftA2 (,)

 blowup = (uncurry (++)) . liftA2 (,) (blowup . allButLast) lastToTheLength

 I tried both of them, but they don't seem to work:

    -- Pointfree blowup.
    blowup1 :: String - String
    blowup1 = (uncurry (++)) . comma1 (blowup1 . allButLast) lastToTheLength

Sorry, I didn't look at your solution in detail in my answer; I only
checked that it compiled. Your
problem is that both your blowup functions recurse infinitely on the
empty string (blowup{1,2} [] will always call blowup{1,2} [] again).
Instead of fixing it, I recommend you study one of the other solutions
proposed in this thread, since they are superior in many ways
(shorter, more elegant, more lazy, probably more efficient).

cheers
Dominique
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Suggestions for improvement

2010-10-03 Thread Dominique Devriese
 One question I have is whether I can eliminate points in the above
 definition of blowup, and write something like

blowup = (++) . (blowup . allButLast, lastToTheLength)

 thinking of (++) as a function String x String -> String.

Actually (++) is of type String -> String -> String. When you want
something of the type you mean (you normally write that as (String,
String) -> String in Haskell), then you can use (uncurry (++)).

Additionally, you can't combine the functions (blowup . allButLast)
and lastToTheLength into a function that returns a pair like you seem
to attempt. You need a function like the following for that:

comma :: (a -> b) -> (a -> c) -> a -> (b,c)
comma f g x = (f x, g x)

Then you could say:

blowup = (uncurry (++)) . comma (blowup . allButLast) lastToTheLength

Ignore this if you haven't read about Applicative or type classes yet,
but using the Applicative instance for the function type ((->) a), you
can also write

comma = liftA2 (,)

or

blowup = (uncurry (++)) . liftA2 (,) (blowup . allButLast) lastToTheLength
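Putting the pieces of this thread together, a self-contained sketch of the exercise (the definitions of allButLast and lastToTheLength are my guesses at their intended meaning, and an empty-string base case is added to avoid the infinite recursion discussed elsewhere in the thread):

```haskell
import Control.Applicative (liftA2)

-- Guessed helpers assumed by the thread:
allButLast :: String -> String
allButLast = init

-- Repeat the last character as many times as the string is long.
lastToTheLength :: String -> String
lastToTheLength s = replicate (length s) (last s)

comma :: (a -> b) -> (a -> c) -> a -> (b, c)
comma = liftA2 (,)

-- "abc" becomes "abbccc"; the empty-string case terminates the recursion.
blowup :: String -> String
blowup [] = []
blowup s  = uncurry (++) (comma (blowup . allButLast) lastToTheLength s)
```

With these definitions, `blowup "abc"` yields `"abbccc"`.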

 Also, I can't
 figure out whether it is possible to get a shorter solution using fold.
 I have tried Hlint on my file, but it gave no suggestions.

 I am sure there are better ways, and would like some pointers and any
 general suggestions for improvement.

By the way, shorter is not always better. Trying to recognize
abstraction patterns in your code is never a bad thing though.

Dominique
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Suggestions for improvement

2010-10-03 Thread Dominique Devriese
Gregory,

2010/10/3 Gregory Crosswhite gcr...@phys.washington.edu:
  On 10/3/10 1:45 PM, Dominique Devriese wrote:

 Additionally, you can't combine the functions (blowup . allButLast)
 and lastToTheLength into a function that returns a pair like you seem
 to attempt. You need a function like the following for that:

 comma :: (a -> b) -> (a -> c) -> a -> (b,c)
 comma f g x = (f x, g x)

 Then you could say:

 blowup = (uncurry (++)) . comma (blowup . allButLast) lastToTheLength

 It is worth noting that such a function already exists in the standard
 libraries; it is the (&&&) operator in Control.Arrow:

    blowup = uncurry (++) . (blowup . allButLast &&& lastToTheLength)

Or you can write it as (liftA2 (,)) as I noted a few lines further in my mail ;)

Dominique
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fwd: Type families - how to resolve ambiguities?

2010-09-12 Thread Dominique Devriese
Paolo,

 The problem with mult is that k is not specified unambiguously. You either
 need v to determine k (which is probably not what you want, at a guess), or
 mult to take a dummy argument that determines what k is:
 [...]
 or, to make Tensor a data family instead of a type family.
 What is the difference that makes it work?
The problem is that in your definition of mult
  mult :: Tensor k v v -> v
you seem to expect the type v to be determined by the type of Tensor
you apply it to. However, since Tensor is not an injective type
functor, it does not uniquely determine the type v. It is possible
that there is a completely unrelated type instance for Tensor, for
different types k and v that is also equal to the type you apply mult
to.

If you make Tensor a data family, then the types k and v are uniquely
determined, because then it is an injective type functor.
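To make the difference concrete, here is a minimal illustration (with made-up names TF and DF, unrelated to the Tensor code):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A type family is a type-level function: distinct arguments may map
-- to the same result, so GHC cannot recover 'a' from 'TF a'.
type family TF a
type instance TF Int  = Bool
type instance TF Char = Bool   -- TF Int and TF Char are both Bool

-- A data family introduces a fresh type constructor per instance, so
-- 'DF a' uniquely determines 'a': the family is injective.
data family DF a
newtype instance DF Int  = DFInt  Bool
newtype instance DF Char = DFChar Bool

unDFInt :: DF Int -> Bool
unDFInt (DFInt b) = b

main :: IO ()
main = print (unDFInt (DFInt True))
```

A function of type `TF a -> a` is ambiguous for the same reason mult is, while `DF a -> a` is fine.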

 However, it would make more sense to have it be a type family, without
 the overhead of data (both in space and in typing).

You can make Tensor a data family and use newtype instances. As I
understand these, there should not be a space overhead. The only
overhead I would expect this to introduce is the extra newtype
constructor.

 Is there a non-
 hacky approach, without dummies and without making Tensor a data
 family without a semantic need?

Without thinking about your problem domain, I have the impression that
you do have a semantic need, because you seem to expect your Tensor
type to uniquely determine at least the type v. If this is not the
case, you need a different solution.

Anyway, the below compiles fine with the following result:
*Main> mult $ Tensor $ (V [(T [1] [2],3)] :: Vect Integer (TensorBasis
[Int] [Int]))
V [([1,2],3)]

{-# LANGUAGE MultiParamTypeClasses, TypeFamilies #-}
{-# LANGUAGE FlexibleInstances, TypeSynonymInstances #-}

data Vect k b = V [(b,k)] deriving (Eq,Show)

data TensorBasis a b = T a b deriving (Eq, Ord, Show)

data family Tensor k u v :: *

newtype instance Tensor k (Vect k a) (Vect k b) = Tensor (Vect k
(TensorBasis a b))

class Algebra k v where -- v is a k-algebra
   unit :: k - v
   mult :: Tensor k v v - v

instance Algebra Integer (Vect Integer [Int]) where
   unit 0 = V []
   unit x = V [([],x)]
   mult (Tensor (V ts)) = V [(g++h,x) | (T g h, x) - ts]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANNOUNCE: grammar-combinators 0.1 (initial release): A parsing library of context-free grammar combinators

2010-09-08 Thread Dominique Devriese
The grammar-combinators library is a parsing library employing a novel
grammar representation with explicit recursion. The library features
much of the power of a parser generator like Happy or ANTLR, but with
the library approach and most of the benefits of a parser combinator
library. Grammars and grammar algorithms are defined in a
functional style. The library currently has the following features:

* Grammar specified completely in Haskell using an elegant syntax
* Grammar algorithms implemented in a functional style (no fresh
 identifiers), with elegant and meaningful types.
* Multi-backend: use the same grammar with a Packrat, Parsec or
 UUParse parser
* Grammar transformations: use left-recursive grammars directly thanks
 to a powerful grammar transformation library, featuring the
 left-corner left-recursion removal transform, a uniform version of
 the classic Paull left-recursion removal, and various smaller
 transformations (dead-branch removal, dead non-terminal removal,
 consecutive epsilon combination, selective unfolding etc.).
* Grammar utility functions: printing of grammars, FIRST-set
 calculation, reachability analysis of non-terminals, etc.
* Compile-time transformations (using Template Haskell), given a
 suitable definition of the grammar. This is currently limited to a
 certain set of transformations.

The library is currently not intended for mainstream use. Its API is
relatively stable, but performance needs to be looked at further.

We are submitting a paper about the ideas behind this library to PADL
2011. A draft is linked on the project's website.

More information:

* Project website: http://projects.haskell.org/grammar-combinators/
* Tutorial: http://projects.haskell.org/grammar-combinators/tutorial.html
* Hackage: http://hackage.haskell.org/package/grammar-combinators

All comments welcome!

Dominique

PS. The documentation on hackage currently doesn't build because of
(seemingly) a Hackage dependency problem during the build [1].
Compiling and generating the documentation locally should work fine. A
version of the docs is available on the project's webpage as a
temporary replacement [2].

Footnotes:
[1]  http://www.haskell.org/pipermail/libraries/2010-September/014168.html
[2]  http://projects.haskell.org/grammar-combinators/docs/index.html
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] ANNOUNCE: grammar-combinators 0.1 (initial release): A parsing library of context-free grammar combinators

2010-09-08 Thread Dominique Devriese
The grammar-combinators library is a parsing library employing a novel
grammar representation with explicit recursion. The library features
much of the power of a parser generator like Happy or ANTLR, but with
the library approach and most of the benefits of a parser combinator
library. Grammars and grammar algorithms are defined in a
functional style. The library currently has the following features:

* Grammar specified completely in Haskell using an elegant syntax
* Grammar algorithms implemented in a functional style (no fresh
 identifiers), with elegant and meaningful types.
* Multi-backend: use the same grammar with a Packrat, Parsec or
 UUParse parser
* Grammar transformations: use left-recursive grammars directly thanks
 to a powerful grammar transformation library, featuring the
 left-corner left-recursion removal transform, a uniform version of
 the classic Paull left-recursion removal, and various smaller
 transformations (dead-branch removal, dead non-terminal removal,
 consecutive epsilon combination, selective unfolding etc.).
* Grammar utility functions: printing of grammars, FIRST-set
 calculation, reachability analysis of non-terminals, etc.
* Compile-time transformations (using Template Haskell), given a
 suitable definition of the grammar. This is currently limited to a
 certain set of transformations.

The library is currently not intended for mainstream use. Its API is
relatively stable, but performance needs to be looked at further.

We are submitting a paper about the ideas behind this library to PADL
2011. A draft is linked on the project's website.

More information:

* Project website: http://projects.haskell.org/grammar-combinators/
* Tutorial: http://projects.haskell.org/grammar-combinators/tutorial.html
* Hackage: http://hackage.haskell.org/package/grammar-combinators

All comments welcome!

Dominique

PS. The documentation on hackage currently doesn't build because of
(seemingly) a Hackage dependency problem during the build [1].
Compiling and generating the documentation locally should work fine. A
version of the docs is available on the project's webpage as a
temporary replacement [2].

Footnotes:
[1]  http://www.haskell.org/pipermail/libraries/2010-September/014168.html
[2]  http://projects.haskell.org/grammar-combinators/docs/index.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: ANNOUNCE: grammar-combinators 0.1 (initial release): A parsing library of context-free grammar combinators

2010-09-08 Thread Dominique Devriese
Some snippets from the Tutorial [1] to give an idea of the
grammar-combinators library's approach, its functional style and its
additional power (e.g. the transformations used):

Defining a simple expressions grammar:
  grammarArith :: ExtendedContextFreeGrammar ArithDomain Char
  grammarArith Line =
    LineF <$> ref Expr <* endOfInput
  grammarArith Expr =
        SubtractionF <$> ref Expr <* token '-' <*> ref Term
    ||| SumF         <$> ref Expr <* token '+' <*> ref Term
    ||| SingleTermF  <$> ref Term
  grammarArith Term =
        SingleFactorF <$> ref Factor
    ||| QuotientF     <$> ref Term <* token '/' <*> ref Factor
    ||| ProductF      <$> ref Term <* token '*' <*> ref Factor
  grammarArith Factor =
        NumberF        <$> many1Ref Digit
    ||| ParenthesizedF <$  token '(' <*> ref Expr <* token ')'
  grammarArith Digit =
    DigitF <$> tokenRange ['0' .. '9']

A semantic processor:
  data family ArithValue ix
  newtype instance ArithValue Line   = ArithValueL Int deriving (Show)
  newtype instance ArithValue Expr   = ArithValueE Int deriving (Show)
  newtype instance ArithValue Term   = ArithValueT Int deriving (Show)
  newtype instance ArithValue Factor = ArithValueF Int deriving (Show)
  newtype instance ArithValue Digit  = ArithValueD Char deriving (Show)

  calcArith :: Processor ArithDomain ArithValue
  calcArith Line   (LineF (ArithValueE e)) = ArithValueL e
  calcArith Expr   (SumF (ArithValueE e) (ArithValueT t)) =
ArithValueE $ e + t
  calcArith Expr   (SingleTermF (ArithValueT t))  = ArithValueE t
  calcArith Term   (ProductF (ArithValueT e) (ArithValueF t)) =
ArithValueT $ e * t
  calcArith Term   (SingleFactorF (ArithValueF t))= ArithValueT t
  calcArith Factor (ParenthesizedF (ArithValueE e))   = ArithValueF e
  calcArith Factor (NumberF ds)   =
ArithValueF $ read $ map unArithValueD ds
  calcArith Digit  (DigitF c) = ArithValueD c

  unArithValueD :: ArithValue Digit - Char
  unArithValueD (ArithValueD c) = c

Transforming the grammar:
  calcGrammarArith :: ProcessingExtendedContextFreeGrammar ArithDomain
Char ArithValue
  calcGrammarArith = applyProcessorE grammarArith calcArith
  calcGrammarArithTP :: ProcessingExtendedContextFreeGrammar (UPDomain
ArithDomain) Char (UPValue ArithValue)
  calcGrammarArithTP = transformUniformPaullE calcGrammarArith
  calcGrammarArithTPF :: ProcessingExtendedContextFreeGrammar
(UPDomain ArithDomain) Char (UPValue ArithValue)
  calcGrammarArithTPF = filterDiesE (unfoldDeadE calcGrammarArithTP)
  calcGrammarArithTPFF :: ProcessingContextFreeGrammar
(FoldLoopsDomain (UPDomain ArithDomain)) Char (FoldLoopsValue (UPValue
ArithValue))
  calcGrammarArithTPFF = foldAndProcessLoops calcGrammarArithTPF

Parsing:
  *Main> parsePackrat calcGrammarArithTPFF (FLBase (UPBase Line)) "123"
  Parsed FLBV {unFLBV = UPBV {unUPBV = ArithValueL 123}} _
  *Main> parsePackrat calcGrammarArithTPFF (FLBase (UPBase Line)) "123+"
  NoParse
  *Main> parsePackrat calcGrammarArithTPFF (FLBase (UPBase Line)) "123+12"
  Parsed FLBV {unFLBV = UPBV {unUPBV = ArithValueL 135}} _
  *Main> parseParsec calcGrammarArithTPFF (FLBase (UPBase Line)) "" "123+12"
  Right (FLBV {unFLBV = UPBV {unUPBV = ArithValueL 135}})
  *Main> parseUU calcGrammarArithTPFF (FLBase (UPBase Line)) "123+12"
  FLBV {unFLBV = UPBV {unUPBV = ArithValueL 135}}

Dominique

Footnotes:
[1]  http://projects.haskell.org/grammar-combinators/tutorial.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: grammar-combinators 0.1 (initial release): A parsing library of context-free grammar combinators

2010-09-08 Thread Dominique Devriese
Johannes,

(sorry for the double mail)

I will give some short answers below, but you can find more details in
the paper we are submitting to PADL 2011 [1].

2010/9/8 Johannes Waldmann waldm...@imn.htwk-leipzig.de:
 .. grammar-combinator library's approach ..
 am I reading this correctly: in the traditional combinator approach,
 a grammer (a parser) is a Haskell value,
 while in your approach, the grammar is a Haskell (GADT) type?

Not completely. A grammar-combinators grammar is a Haskell value with
a different (more complicated) type than a traditional parser
combinator value. It is actually a function that returns the
production rules for a given non-terminal. Because the non-terminals
are modelled using a GADT and do not have the same type, the grammar's
production rules' types can depend on the non-terminal in question.

 then you'll get more static guarantees (e.g., context-freeness)
 but you need extra (type-level, or even syntax-level) machinery
 to handle grammars. Convince me that it's worth it ...

The advantage of the grammar-combinators approach is that grammar
algorithms have a lot more power, because they can reason explicitly
about the recursion in the grammar, whereas the recursion is not
observable in the traditional parser combinators approach. The parser
combinator model is in fact so limited that something as simple as
pretty-printing a BNF representation of the grammar is fundamentally
impossible. More details in the PADL-submitted draft.

As James says below, a grammar algorithm using grammar-combinators
grammars can observe the recursion in the grammar and can therefore do
stuff for which you would otherwise have to use a parser generator.

 I guess the proper solution (TM) is to blur the distinction
 between types and values by switching to dependent types altogether...

There is actually some very interesting work about dependently typed
parser combinator libraries, I discuss this in the related work
section of the PADL paper.

Dominique

Footnotes:
[1]  http://projects.haskell.org/grammar-combinators/#background
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: grammar-combinators 0.1 (initial release): A parsing library of context-free grammar combinators

2010-09-08 Thread Dominique Devriese
Johannes,

2010/9/8 Johannes Waldmann waldm...@imn.htwk-leipzig.de:

 That compilation process is highly nonlocal
 and would never be possible with, e.g., the Parsec approach.

 Pipe dream: attach such a grammar object to every Parsec parser,
 and include the compiler with the combinators,
 and have them run at (Haskell) compile time (in ghc's specializer).

You can actually use a grammar-combinators parser with Parsec
(although the current implementation will use backtracking on every
branch), keeping the original grammar around for other purposes.

About the compile-time stuff, there is code in the library doing
compile-time transformations using Template-Haskell (but requiring a
grammar with embedded TH splices for injected values). You could also
do a similar compilation to a PDA parser in TH if you want, again
keeping the full grammar available for other stuff.

Additionally, I have noted that passing certain GHC inlining flags, as
has been suggested for generic code [1], produces spectacular speedups
(execution time divided by 16) for a test grammar, but I have not
investigated what resulting code GHC actually produces in this case.
This is also related to what you talk about, since the compiler does
part of the transformation from abstract grammar at compile time.

 Should work for some subset (e.g., just let, not letrec, use
 proper combinators instead) and with some future ghc version ...

 When I teach parsing (in Compiler Construction), for lack of time
 it's either traditional (CFG - PDA) or combinator (not both),
 and I'm not happy with that, since both are important concepts.
 But then, semantics is more important than syntax ...

I actually think of the grammar-combinators approach as an attempt to
bring the power available in parser combinator libraries to the level
of what can be done in parser generators.

Dominique

Footnotes:
[1] http://www.cs.uu.nl/research/techreps/repo/CS-2009/2009-022.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Which Mail editor or mail chain do you use ?

2010-08-17 Thread Dominique Devriese
2010/8/17 Luc TAESCH luc.tae...@googlemail.com:
 May I ask how you compose your answers and which toolchain you are using?

You can use footnote-mode [1] for easily producing the
footnotes/references if you're an emacs user.

Dominique

Footnotes:
[1]  http://www.emacswiki.org/emacs/FootnoteMode
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] datatype contexts

2010-07-26 Thread Dominique Devriese
2010/7/26 Ryan Ingram ryani.s...@gmail.com:
 There are two types of datatype contexts; haskell'98 contexts (which I
 think are terrible), and GHC existential contexts (which I like):

See also GADT-style data type declarations [1] and full GADTs [2],
which both behave like the GHC existential contexts mentioned above:
pattern matching on them makes available the context constraint.

Dominique

Footnotes:
[1]  
http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/data-type-extensions.html#gadt-style
[2]  
http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/data-type-extensions.html#gadt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] DpH/repa cache locality

2010-07-13 Thread Dominique Devriese
Hi,

2010/7/13 Gregory Crosswhite gcr...@phys.washington.edu:
 Just out of curiosity, what work is being done in the data parallel
 haskell / repa projects regarding cache locality?

I'm not knowledgeable at all about this, but for a technical
introduction to DPH, I found the following excellent. Locality
is a recurring theme in the talk, IIRC. (Sorry for the double reply)

http://www.youtube.com/watch?v=NWSZ4c9yqW8

Dominique
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe