Re: Records in Haskell

2012-03-02 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

...
  So under your proposal, a malicious client could guess at the fieldnames in
  your abstraction, then create their own record with those fieldnames as
  SharedFields, and then be able to update your precious hidden record type.
 
 Show me how a malicious client could do that.  Under DORF plus my 
 mini-proposal,
 
 module Abstraction (AbstractData) where
...
 fieldLabel field1 --however it goes
 [code]

Isaac here's a suggestion: please write up your proposal on a wiki.

Don't expect that your readers will have bothered to read any of your posts, 
not even those a few back in the thread.

Don't expect they'll have read the DORF proposal, or the SORF, or TDNR, or 
even the front page of the Records wiki.

Don't expect anybody will believe your claims unless you produce a prototype 
showing how you translate your code into legal Haskell.

Don't expect anybody will believe your prototype unless it has meaningful 
field names and is illustrating a realistic business application.

Once people look at your code or wiki, don't expect they'll get your syntax 
right: you'll have to explain that from scratch.

Don't expect they'll even bother to get this right:
 fieldLabel field1 --however it goes

Don't expect they'll understand the difference between a polymorphic record 
system vs. the narrow namespacing issue - in fact expect them to make all 
sorts of suggestions for polymorphic record systems.

Don't expect they'll try running the prototype code you laboured so hard to 
get working.

Do expect to get a heap of requests for clarifications, which you also put up 
onto the wiki so that it grows and grows -- even to explain things which you 
thought were pretty obvious.

Do expect to explain the standard Haskell behaviour that you have not changed. 
It's not enough to say "This follows standard Haskell behaviour." Do expect to 
find your wiki page growing and growing.

Do expect to get a load of posts starting "I haven't followed everything, ..." 
or "It's a while since I 'tuned out' of the Records thread, ..." and wanting 
you to explain all the bits they could read for themselves on the wiki.

Then expect they'll pick a few words out of your summary and lambast you for 
it, even though you politely requested they read the wiki to get the full 
story (and which they clearly did not do).

Throughout all this do expect to remain patient, civil and polite.

Do not expect to have a social life or get much sleep. Do expect your wife to 
ask who you're writing to, and why.


AntC



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Records in Haskell

2012-03-02 Thread AntC
Gábor Lehel illissius at gmail.com writes:

 ...
 
 ... My main complaint against DORF is
 that having to write fieldLabel declarations for every field you want
 to use is onerous. If that could be solved, I don't think there are
 any others. (But even if it can't be, I still prefer DORF.)
 

Thank you Gábor, I understand that 'complaint'.

I have been trying to keep the design 'clean': either the module is totally 
DORF, or it's totally H98.

But I've also tried to conform to H98 style where possible. So:
* DORF field selectors are just functions, like H98 field selector functions.
* dot syntax is just reverse apply, so could be used for H98 selectors
* pattern syntax still works, and explicit record constructor syntax
   (relying on DisambiguateRecordFields)
* record update syntax is the same (but with a different desugaring)
* record decl syntax is the same
  (but desugars to a Has instance, instead of a function)
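As a sketch of what that desugaring could look like (the names `Has`, `get`, 
`set`, and `Proxy_name` below are illustrative guesses, not the proposal's 
actual internals):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- One proxy type per declared fieldLabel (hypothetical encoding).
data Proxy_name = Proxy_name

class Has r fld t | r fld -> t where
  get :: r -> fld -> t
  set :: fld -> t -> r -> r

-- A record decl like   data Person = Person { name :: String }
-- would desugar to a Has instance instead of a selector function:
data Person = Person { _name :: String }

instance Has Person Proxy_name String where
  get r _   = _name r
  set _ v r = r { _name = v }

-- ...and the overloaded selector that the fieldLabel decl generates:
name :: Has r Proxy_name t => r -> t
name r = get r Proxy_name
```

Any other record declaring a `name` field would get its own 
`Has ... Proxy_name` instance, so the one overloaded `name` works across all 
of them.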

There have been several suggestions amongst the threads to mix H98-style 
fields with DORF-style records (or perhaps I mean vice-versa!):
* We'd need to change the record decl syntax to 'flag' DORF fields (somehow).
* H98 fields desugar to monomorphic field selector functions, as usual.
  So if you have more than one in scope, that's a name clash.
* DORF fields desugar to Has instances.
  (providing you've declared the fieldLabel somewhere)
  Perhaps we could take advantage of knowing it's DORF
   to pick up the field type from the fieldLabel decl?

I think you could then 'mix and match' DORF and H98 fields in your expressions 
and patterns (that was certainly part of my intention in designing DORF).

There's one difficulty I can see:
* record update would have to know which sort of field it was updating in:
r{ fld = expr }
  If `fld` is DORF, this desugars to a call to `set`.
  If H98, this code stands as is.
What about:
r{ fldH98 = expr1, fldDORF = expr2, fldH983 = expr3, fldDORF4 = expr4 }
I think:
* for DORF updates `set` can only go one field at a time,
  so it turns into a bunch of nested `set`s
  (One for fldDORF, inside one for fldDORF4.)
* for H98 it can do simultaneous, so in effect we go:
  let r' = r{ fldDORF = expr2, fldDORF4 = expr4 }   -- desugar to nested
in r'{ fldH98 = expr1, fldH983 = expr3 }
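A small sketch of that mixed desugaring (the class, proxy, and field names 
here are hypothetical): the DORF fields become nested `set`s, wrapped around a 
native H98 update for the rest.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Has r fld t | r fld -> t where
  set :: fld -> t -> r -> r

data FldA = FldA   -- proxy for a DORF field
data FldB = FldB   -- proxy for another DORF field

data R = R { fldA, fldB :: Int   -- imagine these flagged as DORF fields
           , fldH98     :: Int   -- a plain H98 field
           } deriving (Eq, Show)

instance Has R FldA Int where set _ v r = r { fldA = v }
instance Has R FldB Int where set _ v r = r { fldB = v }

-- r{ fldA = 1, fldB = 2, fldH98 = 3 } would desugar to nested `set`s
-- for the DORF fields, around one simultaneous H98 update:
update :: R -> R
update r = (set FldA 1 (set FldB 2 r)) { fldH98 = 3 }
```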

Remaining question: how do we tell a DORF field from a H98,
at the point of the record update expression?
What is the difference? Find the field selector in the environment from the 
name:
- if monomorphic, it's H98
- if overloaded, it's DORF

But! but! we don't know its type until the type inference phase.
Yet we need to desugar the syntax at the syntax phase(!)

Suggestions please!


Also an obfuscation factor: perversely, the record type and field labels might 
have been exported, but not the selector function.



AntC






Re: ghci 7.4.1 no longer loading .o files?

2012-03-02 Thread Simon Marlow

On 02/03/2012 04:21, Evan Laforge wrote:

On Tue, Feb 28, 2012 at 1:53 AM, Simon Marlow marlo...@gmail.com wrote:

I don't see how we could avoid including -D, since it might really affect
the source of the module that GHC eventually sees.  We've never taken -D
into account before, and that was incorrect.  I can't explain the behaviour
you say you saw with older GHCs, unless your CPP flags only affected the
imports of the module.


In fact, that's what I do.  I put system specific stuff or expensive
stuff into a module and then do

#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature
#else
import qualified StubbedOutFeature as ExpensiveFeature
#endif

I think this is a pretty common strategy.  I know it's common for
os-specific stuff, e.g. filepath does this.  Although obviously for OS
stuff we're not interested in saving recompilation :)


Well, one solution would be to take the hash of the source file after
preprocessing.  That would be accurate and would automatically take into
account -D and -I in a robust way.  It could also cause too much
recompilation, if for example a preprocessor injected some funny comments or
strings containing the date/time or detailed version numbers of components
(like the gcc version).


By "take the hash of the source file" do you mean the hash of the
textual contents, or the usual hash of the interface etc.?  I assumed
it was the latter, i.e. that the normal hash was taken after
preprocessing.

But suppose it's the former, I still think it's better than
unconditional recompilation (which is what always including -D in the
hash does, right?).  Unconditionally including -D in the hash either
makes it *always* compile too much--and likely drastically too much,
if you have one module out of 300 that switches out depending on a
compile time flag, you'll still recompile all 300 when you change the
flag.  And there's nothing you can really do about it if you're using
--make.


There is a way around it: create a .h file containing #define 
MY_SETTING, and have the Haskell code #include the .h file.  The 
recompilation checker does track .h files:


http://hackage.haskell.org/trac/ghc/ticket/3589

When you want to change the setting, just modify the .h file.  Make sure 
you don't #include the file in source code that doesn't depend on it.
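For concreteness, the layout Simon describes might look like this (the file, 
macro, and module names are made up for illustration):

```haskell
-- config.h (a file GHC's recompilation checker tracks):
--
--     #define EXPENSIVE_FEATURE 1
--
-- and the Haskell module that depends on it:

{-# LANGUAGE CPP #-}
module Feature where

#include "config.h"

#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature as Impl
#else
import qualified StubbedOutFeature as Impl
#endif
```

Only the modules that #include config.h are recompiled when the setting 
changes; the -D flags stay out of the command line entirely.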


Cheers,
Simon


 If you try to get around that by using a build system that
knows which files it has to recompile, then you get in a situation
where the files have been compiled with different flags, and now ghci
can't cope since it can't switch flags while loading.

If your preprocessor does something like put the date in... well,
firstly I think that's much less common than switching out module
imports, since for the latter as far as I know CPP is the only way to
do it, while for dates or version numbers you'd be better off with a
config file anyway.  And it's still correct, right?  You changed your
gcc version or date or whatever, if you want a module to have the
build date then of course you have to rebuild the module every
time---you got exactly what you asked for.  Even if for some reason
you have a preprocessor that nondeterministically alters comments,
taking the interface hash after preprocessing would handle that.  And
come to think of it, these are CPP flags not some arbitrary pgmF...
can CPP even do something like insert the current date without also
changing its -D flags?





Re: Error while installing new packages with GHC 7.4.1

2012-03-02 Thread Antoras
You are right. When I try to install parsec by myself I get the same 
error message.


But neither "ghc-pkg list" nor "ghc-pkg check" prints any error 
messages. The latter only prints some warnings because of missing 
haddock files.


Complete output of the error message:

$ cabal install parsec -v
Reading available packages...
Resolving dependencies...
In order, the following would be installed:
parsec-3.1.2 (new package)
Extracting
/home/antoras/.cabal/packages/hackage.haskell.org/parsec/3.1.2/parsec-3.1.2.tar.gz
to /tmp/parsec-3.1.25437...
Configuring parsec-3.1.2...
Flags chosen: base4=True
Dependency base ==4.5.0.0: using base-4.5.0.0
Dependency bytestring ==0.9.2.1: using bytestring-0.9.2.1
Dependency mtl ==2.0.1.0: using mtl-2.0.1.0
Dependency text ==0.11.1.13: using text-0.11.1.13
Using Cabal-1.10.1.0 compiled by ghc-7.0
Using compiler: ghc-7.4.1
Using install prefix: /home/antoras/.cabal
Binaries installed in: /home/antoras/.cabal/bin
Libraries installed in: /home/antoras/.cabal/lib/parsec-3.1.2/ghc-7.4.1
Private binaries installed in: /home/antoras/.cabal/libexec
Data files installed in: /home/antoras/.cabal/share/parsec-3.1.2
Documentation installed in: /home/antoras/.cabal/share/doc/parsec-3.1.2
Using alex version 2.3.5 found on system at: /usr/bin/alex
Using ar found on system at: /usr/bin/ar
No c2hs found
Using cpphs version 1.12 found on system at: /home/antoras/.cabal/bin/cpphs
No ffihugs found
Using gcc version 4.6.2 found on system at: /usr/bin/gcc
Using ghc version 7.4.1 found on system at: /usr/bin/ghc
Using ghc-pkg version 7.4.1 found on system at: /usr/bin/ghc-pkg
No greencard found
Using haddock version 2.10.0 found on system at: /usr/bin/haddock
Using happy version 1.18.6 found on system at: /usr/bin/happy
No hmake found
Using hsc2hs version 0.67 found on system at: /usr/bin/hsc2hs
Using hscolour version 1.19 found on system at:
/home/antoras/.cabal/bin/HsColour
No hugs found
No jhc found
Using ld found on system at: /usr/bin/ld
No lhc found
No lhc-pkg found
No nhc98 found
Using pkg-config version 0.26 found on system at: /usr/bin/pkg-config
Using ranlib found on system at: /usr/bin/ranlib
Using strip found on system at: /usr/bin/strip
Using tar found on system at: /bin/tar
No uhc found
Creating dist/build (and its parents)
Creating dist/build/autogen (and its parents)
Preprocessing library parsec-3.1.2...
Building parsec-3.1.2...
Building library...
Creating dist/build (and its parents)
/usr/bin/ghc --make -package-name parsec-3.1.2 -hide-all-packages 
-fbuilding-cabal-package -i -idist/build -i. -idist/build/autogen 
-Idist/build/autogen -Idist/build -optP-include 
-optPdist/build/autogen/cabal_macros.h -odir dist/build -hidir 
dist/build -stubdir dist/build -package-id 
base-4.5.0.0-6db966b4cf8c1a91188e66d354ba065e -package-id 
bytestring-0.9.2.1-18f26186028d7c0e92e78edc9071d376 -package-id 
mtl-2.0.1.0-db19dd8a7700e3d3adda8aa8fe5bf53d -package-id 
text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d -O -O2 -XHaskell98 
-XExistentialQuantification -XPolymorphicComponents 
-XMultiParamTypeClasses -XFlexibleInstances -XFlexibleContexts 
-XDeriveDataTypeable -XCPP Text.Parsec Text.Parsec.String 
Text.Parsec.ByteString Text.Parsec.ByteString.Lazy Text.Parsec.Text 
Text.Parsec.Text.Lazy Text.Parsec.Pos Text.Parsec.Error Text.Parsec.Prim 
Text.Parsec.Char Text.Parsec.Combinator Text.Parsec.Token 
Text.Parsec.Expr Text.Parsec.Language Text.Parsec.Perm 
Text.ParserCombinators.Parsec Text.ParserCombinators.Parsec.Char 
Text.ParserCombinators.Parsec.Combinator 
Text.ParserCombinators.Parsec.Error Text.ParserCombinators.Parsec.Expr 
Text.ParserCombinators.Parsec.Language 
Text.ParserCombinators.Parsec.Perm Text.ParserCombinators.Parsec.Pos 
Text.ParserCombinators.Parsec.Prim Text.ParserCombinators.Parsec.Token
command line: cannot satisfy -package-id 
text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d:
text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d is unusable due to 
missing or recursive dependencies:

  deepseq-1.3.0.0-a73ec930018135e0dc0a1a3d29c74c88
(use -v for more information)
World file is already up to date.
cabal: Error: some packages failed to install:
parsec-3.1.2 failed during the building phase. The exception was:
ExitFailure 1


On Fri 02 Mar 2012 07:02:26 AM CET, Neil Mitchell wrote:


Hi Antoras,

My suspicion is you've ended up with corrupted packages in your
package database - nothing to do with Hoogle. I suspect trying to
install parsec-3.1.2 directly would give the same error message. Can
you try ghc-pkg list, and at the bottom it will probably say something
like:

The following packages are broken, either because they have a problem
listed above, or because they depend on a broken package.
warp-1.1.0

I often find ghc-pkg unregister warp --force on all the packages
cleans them up enough, but someone else may have a better suggestion.

Thanks, Neil

On Fri, Mar 2, 2012 at 12:02 AM, Antoras m...@antoras.de wrote:


Hi Neil,

thanks for your effort. But it still does not 

Re: Error while installing new packages with GHC 7.4.1

2012-03-02 Thread Antoras
Ok, I got Hoogle to work with GHC 7.0.3. I abandoned my attempt to update to 
the newest version of GHC, due to some other things which don't work. 
For example I can't import Data.Map any more.


As long as these errors occur I see no reason to switch to a newer version.

With Hoogle 4.2.9 I could successfully create a database and work with it.

Neil, thanks for your update, once again.


On 03/02/2012 07:02 AM, Neil Mitchell wrote:

Hi Antoras,

My suspicion is you've ended up with corrupted packages in your
package database - nothing to do with Hoogle. I suspect trying to
install parsec-3.1.2 directly would give the same error message. Can
you try ghc-pkg list, and at the bottom it will probably say something
like:

The following packages are broken, either because they have a problem
listed above, or because they depend on a broken package.
warp-1.1.0

I often find ghc-pkg unregister warp --force on all the packages
cleans them up enough, but someone else may have a better suggestion.

Thanks, Neil

On Fri, Mar 2, 2012 at 12:02 AM, Antoras m...@antoras.de wrote:

Hi Neil,

thanks for your effort. But it still does not work. The old errors
disappeared, but new ones occur.

Maybe I don't yet have the most current versions:


$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.4.1

$ cabal --version
cabal-install version 0.10.2
using version 1.10.1.0 of the Cabal library

This seems to be the most current version of Cabal. The command 'cabal info
cabal' reports "Versions installed: 1.14.0", but not 1.15.

An extract of the error messages:

[...]
Configuring parsec-3.1.2...
Preprocessing library parsec-3.1.2...
Building parsec-3.1.2...
command line: cannot satisfy -package-id
text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d:
text-0.11.1.13-9b63b6813ed4eef16b7793151cdbba4d is unusable due to missing
or recursive dependencies:
deepseq-1.3.0.0-a73ec930018135e0dc0a1a3d29c74c88

(use -v for more information)
command line: cannot satisfy -package Cabal-1.14.0:
Cabal-1.14.0-5875475606fe70ef919bbc055077d744 is unusable due to missing or
recursive dependencies:
array-0.4.0.0-59d1cc0e7979167b002f021942d60f46
containers-0.4.2.1-cfc6420ecc2194c9ed977b06bdfd9e69
directory-1.1.0.2-07820857642f1427d8b3bb49f93f97b0
process-1.1.0.1-18dadd8ad5fc640f55a7afdc7aace500
(use -v for more information)
[...]


On Thu 01 Mar 2012 11:06:43 PM CET, Neil Mitchell wrote:

Hi Antoras,

I've just released Hoogle 4.2.9, which allows Cabal 1.15, so hopefully
will install correctly for you.

Thanks, Neil

On Thu, Mar 1, 2012 at 5:02 PM, Neil Mitchell ndmitch...@gmail.com wrote:

Hi Antoras,

The darcs version of Hoogle has had a more permissive dependency for a
few
weeks. Had I realised the dependency caused problems I'd have released a
new
version immediately! As it stands, I'll release a new version in about 4
hours. If you can't wait that long, try darcs get
http://code.haskell.org/hoogle

Thanks, Neil


On Thursday, March 1, 2012, Antoras wrote:


Ok, interesting info. But how to solve the problem now? Should I contact
the author of Hoogle and ask him how to solve this?


On 03/01/2012 02:02 AM, Albert Y. C. Lai wrote:


On 12-02-29 06:04 AM, Antoras wrote:


I don't know where the dependency to array-0.3.0.3 comes from. Is it
possible to get more info from cabal than -v?



hoogle-4.2.8 has Cabal >= 1.8 && < 1.13; this brings in Cabal-1.12.0.

Cabal-1.12.0 has array >= 0.1 && < 0.4; this brings in array-0.3.0.3.


It is a mess to have second instances of libraries that already come with
GHC, unless you are an expert in knowing and avoiding the treacherous
consequences. See my
http://www.vex.net/~trebla/haskell/sicp.xhtml

It is possible to fish the output of "cabal install --dry-run -v3 hoogle"
for why array-0.3.0.3 is brought in. It really is fishing, since the output
is copious and of low information density. Chinese idiom: needle in ocean
(haystack is too easy). Example:

selecting hoogle-4.2.8 (hackage) and discarding Cabal-1.1.6, 1.2.1,
1.2.2.0,
1.2.3.0, 1.2.4.0, 1.4.0.0, 1.4.0.1, 1.4.0.2, 1.6.0.1, 1.6.0.2, 1.6.0.3,
1.14.0, blaze-builder-0.1, case-insensitive-0.1,

We see that selecting hoogle-4.2.8 causes ruling out Cabal 1.14.0

Similarly, the line for selecting Cabal-1.12.0 mentions ruling out
array-0.4.0.0













Do something with TypecheckedSource during build?

2012-03-02 Thread JP Moresmau
Hello,
I know that in 7.4 I can add a Core transformation plugin, but I
didn't find in the doc if there was a way to do what I'd like to. I
don't really want to go as far as Core, I think. What I do at the
moment is that I use the GHC API to get to the point where I have a
TypecheckedSource and then dump information about it in a file. I was
wondering if I could do the same thing by plugging something into
GHC via the command line. So I could do ghc --make test.hs -c
-fsourceplugin=myplugin, and my plugin would be given the
TypecheckedSource. That means I could do my transformation directly by
calling Cabal and not have to worry about launching the GHC API with
all the proper flags, but without even actually generating something.
Is what I want achievable with today's tools or shall I stick to using the API?

Thanks!

-- 
JP Moresmau
http://jpmoresmau.blogspot.com/



Re: Do something with TypecheckedSource during build?

2012-03-02 Thread Simon Marlow

On 02/03/2012 10:48, JP Moresmau wrote:

Hello,
I know that in 7.4 I can add a Core transformation plugin, but I
didn't find in the doc if there was a way to do what I'd like to. I
don't really want to go as far as Core, I think. What I do at the
moment is that I use the GHC API to get to the point where I have a
TypecheckedSource and then dump information about it in a file. I was
wondering if I could do the same thing by plugging something into
GHC via the command line. So I could do ghc --make test.hs -c
-fsourceplugin=myplugin, and my plugin would be given the
TypecheckedSource. That means I could do my transformation directly by
calling Cabal and not have to worry about launching the GHC API with
all the proper flags, but without even actually generating something.
Is what I want achievable with today's tools or shall I stick to using the API?


We don't have a way of doing that from the command line I'm afraid.  It 
wouldn't be hard to add though - just look at how core-to-core plugins 
are done.


Cheers,
Simon






Re: ghci 7.4.1 no longer loading .o files?

2012-03-02 Thread Evan Laforge
 There is a way around it: create a .h file containing #define MY_SETTING,
 and have the Haskell code #include the .h file.  The recompilation checker
 does track .h files:

 http://hackage.haskell.org/trac/ghc/ticket/3589

 When you want to change the setting, just modify the .h file.  Make sure you
 don't #include the file in source code that doesn't depend on it.

Ahh, I do believe that would work.  Actually, I'm not using --make but
the build system I am using (shake) can easily track those
dependencies.  It would fix the inconsistent-flags problem because now
I'm not passing any -D flags at all.

It's more awkward, though: I'm using make flags or env vars to control
the defines, so I would have to either change to editing a config.h file,
or have the build system go rewrite config.h on each run, making sure
to preserve the timestamp if it hasn't changed.  But that's not really
all that bad, and you could argue config.h is more common practice
than passing -D, probably because it already cooperates with
file-based make systems.

I'll give it a try, thanks!



Re: Glasgow-haskell-users Digest, Vol 103, Issue 4

2012-03-02 Thread Iain Alexander
On 1 Mar 2012 at 14:15, Simon Marlow wrote:
 does anyone have some 
 .ghci magic for doing conditional compilation?

Do you mean something like the attached?

HTH,
Iain.
-- 
i...@stryx.demon.co.uk

[Attachment: .ghci, 358 bytes, dated 5 Mar 2011]


Abstracting over things that can be unpacked

2012-03-02 Thread Johan Tibell
Hi all,

These ideas are still in very early stages. I present them here in hope of
starting a discussion. (We discussed this quite a bit at last year's ICFP,
I hope this slightly different take on the problem might lead to new ideas.)

I think the next big step in Haskell performance is going to come from
using better data representations in common types such as lists, sets, and
maps. Today these polymorphic data structures both use more memory and have
more indirections than necessary, due to boxing of values. This boxing is
due to the values being stored in fields of polymorphic type.

First idea: instead of rejecting unpack pragmas on polymorphic fields, have
them require a class constraint on the field types. Example:

data UnboxPair a b = (Unbox a, Unbox b) => UP {-# UNPACK #-} !a {-#
UNPACK #-} !b

The Unbox type class would be similar in spirit to the class of the same
name in the vector package, but be implemented internally by GHC. To a
first approximation, instances would only exist for fields that unpack to
non-pointer types (e.g. Int).
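A user-level approximation of such an Unbox class, in the spirit of the 
vector package's (the names `Rep`, `toRep`, and `fromRep` are invented for 
illustration; GHC's internal version would look different):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Each instance fixes a flat, pointer-free representation for the type.
class Unbox a where
  type Rep a
  toRep   :: a -> Rep a
  fromRep :: Rep a -> a

instance Unbox Int where
  type Rep Int = Int
  toRep   = id
  fromRep = id

-- Pairs of unboxable things are unboxable: concatenate the component
-- representations (this anticipates the second idea below).
instance (Unbox a, Unbox b) => Unbox (a, b) where
  type Rep (a, b) = (Rep a, Rep b)
  toRep   (a, b) = (toRep a, toRep b)
  fromRep (a, b) = (fromRep a, fromRep b)

-- What the constrained UNPACK would mean: the constructor stores the
-- representations directly, with no pointers to boxed values.
data UP a b = UP !(Rep a) !(Rep b)
```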

Second idea: Introduce a new pragma, that has similar effect on
representations as DPH's [::] vector type. This new pragma does deep
unpacking, allowing for more types to be instances of the Unbox type e.g.
pairs. Example:

data T a b = C {-# UNWRAP #-} (a, b)

If you squint a bit this pragma does the same as [: (a, b) :], except no
vectors are involved. The final representation would be the
unpacked representations of a and b, concatenated together (e.g. (Int, Int)
would result in the field above being 128 bits wide on a 64-bit machine).

The meta-idea tying these two ideas together is to allow for some
abstraction over representation transforming pragmas, such as UNPACK.

P.S. Before someone suggests using type families: please read my email
titled "Avoiding O(n^2) instances when using associated data types to
unpack values into constructors".

Cheers,
  Johan


Re: Records in Haskell

2012-03-02 Thread AntC
Isaac Dupree ml at isaac.cedarswampstudios.org writes:

 
 
  In the meantime, I had an idea (that could work with SORF or DORF) :
 
  data Foo = Foo { name :: String } deriving (SharedFields)
 
  The effect is: without that deriving, the declaration behaves just
  like H98.
 
  Thanks Isaac, hmm: that proposal would work against what DORF is trying to 
do.
 
  What you're not getting is that DORF quite intentionally helps you hide the
  field names if you don't want your client to break your abstraction.
 
  So under your proposal, a malicious client could guess at the fieldnames in
  your abstraction, then create their own record with those fieldnames as
  SharedFields, and then be able to update your precious hidden record type.
 
 Show me how a malicious client could do that.  Under DORF plus my 
 mini-proposal,
 
 module Abstraction (AbstractData) where
 data AbstractData = Something { field1 :: Int, field2 :: Int }
 ...
 --break abstraction how? let's try...
 
 module Client1 where
 import Abstraction
 data Breaker = Something { field1 :: Int } deriving (SharedFields)
 -- compile fails because there are no field-labels in scope

Correct that the fieldLabel is not in scope, so that compile will fail; but 
what price did you pay?

Hint: what did you import with `Abstraction`?
Answer: you did not import `field1` selector function, nor the mechanism 
behind it.

So in module Client1 you can't access the `field1` content of a record type 
AbstractData. 

OK, that's sometimes something you want: to be able to pass around records of 
a specific type without allowing the client to look inside them at all.

But I was talking about the more common requirement for encapsulation. I want 
to control access to my record type: the client can read (certain) fields, but 
not update them. Other fields I don't want the client to even know about. 
(You've achieved the last part with your Client1, for all of the fields.)
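In plain Haskell, the read-but-not-update half of that requirement looks 
something like the following sketch (the module and field names are invented); 
the point of DORF is to get the same effect while the selector stays 
overloaded:

```haskell
-- In a real module the export list would be
--   module Account (Account, mkAccount, balance) where
-- so clients see the type and the reader, but not the constructor,
-- record-update syntax, or the hidden _auditLog field.

data Account = Account { _balance :: Int, _auditLog :: [String] }

mkAccount :: Int -> Account
mkAccount n = Account n []

-- the only field access exported: read-only
balance :: Account -> Int
balance = _balance
```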

(FYI: that's why wiki pages turn out so long: specifying exactly all the ins 
and outs at that level of subtle detail.)

AntC






Re: Records in Haskell

2012-03-02 Thread AntC
AntC anthony_clayden at clear.net.nz writes:

 
 Gábor Lehel illissius at gmail.com writes:
 
  ...
  
  ... My main complaint against DORF is
  that having to write fieldLabel declarations for every field you want
  to use is onerous. If that could be solved, I don't think there are
  any others. (But even if it can't be, I still prefer DORF.)
  
 
 Thank you Gábor, I understand that 'complaint'.
 
 I have been trying to keep the design 'clean': either the module is totally 
 DORF, or it's totally H98.
 
 ...
 There have been several suggestions amongst the threads to mix H98-style 
 fields with DORF-style records (or perhaps I mean vice-versa!):
 * We'd need to change the record decl syntax to 'flag' DORF fields (somehow).
 ...
 There's one difficulty I can see:
 ...
 
 Suggestions please!
 

Wow! well thank you for all that hard thought going into my question.

I've put up a tweak to the proposal as "Option Three: Mixed In-situ and 
Declared ORF".

This does _not_ re-introduce H98 style fields, but does simulate them in a way 
that fits better with DORF.

Do I dub this MIDORF? How will the cat with the hairballs pronounce it ;-)?

[Oh, and sorry Isaac: the word count on the wiki has gone up some more.]

AntC


