This doesn't sound like the right explanation to me. Untouchable variables
don't have anything (necessarily) to do with existential quantification.
What they have to do with is GHC's (equality) constraint solving.
I don't completely understand the algorithm. However, from what I've read
and seen
On Thu, May 26, 2016 at 5:14 AM, Peter wrote:
> Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in function position (applied to something).
> Solving for everything but f, we get f :: T -> Int.
So TDNR happens for things in argument
As a supplement, here's a series of definitions to think about. The
question is: what should happen in each section, and why? The more
detailed the answer, the better. Definitions from previous sections
are in scope in subsequent ones, for convenience. The examples are
arranged in a slippery
On Tue, May 10, 2016 at 4:45 AM, Harendra Kumar
wrote:
> Thanks Dan, that helped. I did notice and suspect the update frame and the
> unboxed tuple but given my limited knowledge about ghc/core/stg/cmm I was
> not sure what is going on. In fact I thought that the
I'm no expert on reading GHC's generated assembly. However, there may
be a line you've overlooked in explaining the difference, namely:
movq $stg_upd_frame_info,-16(%rbp)
This appears only in the IO code, according to what you've pasted, and
it appears to be pushing an update frame (I
It seems to me the problem is that there's no way to define classes by
consecutive cases to match the family definitions. I don't know what a good
syntax for that would be, since 'where' syntax is taken for those. But it
seems like it would correspond to filling the hole here.
On Sun, Jun 7, 2015 at
vector generates a considerable amount of code using CPP macros.
And with regard to other mails, I'm not too eager (personally) to port that
to template Haskell, even though I'm no fan of CPP. The code generation
being done is so dumb that CPP is pretty much perfect for it, and TH would
probably
You aren't the only one. The vector test suite also has these kinds of
issues. In its case, it's hard for me to tell how big the code is, because
template haskell is being used to generate it. However, I don't think the
template haskell is what's costing the additional performance, because the
test
Assuming a separate syntax, I believe that the criterion would be as simple
as ensuring that no ValidateFoo constraints are left outstanding. The
syntax would add the relevant validate call, and type variables involved in
a ValidateFoo constraint would not be generalizable, and would have to be
On Mon, Aug 11, 2014 at 11:36 AM, Twan van Laarhoven twa...@gmail.com
wrote:
To me, perhaps naively, IncoherentInstances is way more scary than
OverlappingInstances.
It might be a bit naive. Most things that incoherent instances would allow
are allowed with overlapping instances so long as
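To illustrate the kind of program overlap covers (my own sketch, using today's per-instance pragmas rather than the old module-wide flag):

```haskell
{-# LANGUAGE FlexibleInstances #-}

class Describe a where
  describe :: a -> String

-- Catch-all instance, marked overlappable.
instance {-# OVERLAPPABLE #-} Describe a where
  describe _ = "something"

-- More specific instance; chosen whenever the type is concretely Int.
instance Describe Int where
  describe n = "an Int: " ++ show n

main :: IO ()
main = putStrLn (describe (3 :: Int))  -- prints "an Int: 3"
```

Resolution picks the most specific matching instance at a concrete type; incoherence only enters when GHC must commit while a type variable could still match either instance.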
Filed. Bug #8952.
On Wed, Apr 2, 2014 at 3:41 PM, wren romano winterkonin...@gmail.com wrote:
On Tue, Apr 1, 2014 at 3:02 PM, Dan Doel dan.d...@gmail.com wrote:
Specifically, consider:
case Nothing of
  !(~(Just x)) -> 5
  Nothing -> 12
Now, the way I'd expect
In the past year or two, there have been multiple performance problems in
various areas related to the fact that lambda abstraction is not free,
though we
tend to think of it as free. A major example of this was deriving Functor.
If we
were to derive Functor for lists, we would end up with
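The quoted message is cut off here; as I understand the issue, the derived code wrapped the mapped function in a fresh lambda at recursive occurrences. A hedged reconstruction of the contrast:

```haskell
-- Hand-written map: the function is passed through unchanged.
mapPlain :: (a -> b) -> [a] -> [b]
mapPlain f []     = []
mapPlain f (x:xs) = f x : mapPlain f xs

-- The shape derived Functor code tended toward: an extra lambda is
-- allocated at every recursive call, and lambda abstraction is not free.
mapEta :: (a -> b) -> [a] -> [b]
mapEta f []     = []
mapEta f (x:xs) = f x : mapEta (\y -> f y) xs

main :: IO ()
main = print (mapEta (+1) [1,2,3 :: Int])  -- prints [2,3,4]
```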
Greetings,
I've been thinking about bang patterns as part of implementing our own
Haskell-like compiler here, and have been testing out GHC's implementation
to see how it works. I've come to one case that seems like it doesn't work
how I think it should, or how it is described, and wanted to ask
Unfortunately, in some cases, function application is just worse. For
instance, when the result is a complex arithmetic expression:
do x <- expr1; y <- expr2; z <- expr3; return $ x*y + y*z + z*x
In cases like this, you have pretty much no choice but to name intermediate
variables, because the
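For comparison, a self-contained sketch (expr1, expr2, expr3 are hypothetical stand-ins, here Maybe values): even the applicative-style alternative ends up naming x, y, and z, just inside a lambda instead of in binds:

```haskell
import Control.Applicative (liftA3)

-- Hypothetical stand-ins for the effectful expressions in the example.
expr1, expr2, expr3 :: Maybe Int
expr1 = Just 2
expr2 = Just 3
expr3 = Just 5

-- do-notation: intermediates named by binds.
viaDo :: Maybe Int
viaDo = do x <- expr1; y <- expr2; z <- expr3; return (x*y + y*z + z*x)

-- Applicative style: the intermediates still need names, in the lambda.
viaAp :: Maybe Int
viaAp = liftA3 (\x y z -> x*y + y*z + z*x) expr1 expr2 expr3

main :: IO ()
main = print (viaDo, viaAp)  -- prints (Just 31,Just 31)
```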
This is already a separate extension: PatternSignatures. However, that
extension is deprecated for some reason.
On Tue, Aug 6, 2013 at 2:46 PM, Evan Laforge qdun...@gmail.com wrote:
Occasionally I have to explicitly add a type annotation, either for
clarity or to help choose a typeclass
There's something strange going on in this example. For instance, setting
(-M) heap limits as low as 40K seems to have no effect, even though the
program easily uses more than 8G. Except, interrupting the program in such
a case does seem to give a message about heap limits being exceeded (it
won't
On Sun, Sep 16, 2012 at 11:49 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
I don't really want to eagerly eta-expand every type variable, because (a)
we'll bloat the constraints and (b) we might get silly error messages. For
(b) consider the insoluble constraint
[W] a~b
where a
On Fri, Aug 31, 2012 at 9:06 AM, Edward Kmett ekm...@gmail.com wrote:
I know Agda has to jump through some hoops to make the refinement work on
pairs when they do eta expansion. I wonder if this could be made less
painful.
To flesh this out slightly, there are two options for defining pairs in
On Aug 3, 2012 11:13 PM, Brandon Simmons brandon.m.simm...@gmail.com
wrote:
In particular I don't fully understand why these sorts of contortions...
http://hackage.haskell.org/packages/archive/base/latest/doc/html/src/GHC-List.html#foldl
...are required. It seems like a programmer has to
If we're voting
I think \of is all right, and multi-argument case could be handy,
which rules out using 'case of' for lambda case, because it's the
syntax for a 0-argument case:
case of
| guard1 -> ...
| guard2 -> ...
Then multi-argument lambda case could use the comma syntax
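As a present-day footnote (my aside, not part of the message): GHC eventually gained close relatives of both forms, MultiWayIf for the zero-argument, guards-only case and LambdaCase for \of. A sketch:

```haskell
{-# LANGUAGE MultiWayIf, LambdaCase #-}

-- MultiWayIf plays the role of the 0-argument, guards-only case.
classify :: Int -> String
classify n = if | n == 0    -> "zero"
                | otherwise -> "nonzero"

-- LambdaCase plays the role of \of.
sign :: Int -> String
sign = \case
  0 -> "zero"
  _ -> "nonzero"

main :: IO ()
main = putStrLn (classify 0 ++ "," ++ sign 1)  -- prints "zero,nonzero"
```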
On Thu, Jan 26, 2012 at 12:45 PM, Thijs Alkemade
thijsalkem...@gmail.com wrote:
Let me try to describe the goal better. The intended users are people
new to Haskell or people working with existing code they are not
familiar with.
Also me. I want this feature. It pretty much single-handedly
On Thu, Jan 26, 2012 at 2:36 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
| Let me try to describe the goal better. The intended users are people
| new to Haskell or people working with existing code they are not
| familiar with.
|
| Also me. I want this feature.
My question
On Wed, Jan 11, 2012 at 8:41 AM, Simon Marlow marlo...@gmail.com wrote:
On 10/01/2012 16:18, Dan Doel wrote:
Copying the list, sorry. I have a lot of trouble replying correctly
with gmail's interface for some reason. :)
On Tue, Jan 10, 2012 at 11:14 AM, Dan Doel dan.d...@gmail.com wrote:
On Tue, Jan 10, 2012 at 5:01 AM, Simon Marlow marlo...@gmail.com wrote:
On 09/01/2012 04:46, wren ng thornton wrote
Greetings,
In the process of working on a Haskell-alike language recently, Ed
Kmett and I realized that we had (without really thinking about it)
implemented type synonyms that are a bit more liberal than GHC's. With
LiberalTypeSynonyms enabled, GHC allows:
type Foo a b = b -> a
type Bar
type family Bar a :: *
type instance Bar () = String
data Foo a = Bar a | Baz a a
ghci> Bar ()
What happens?
There is a lot of ambiguity between term and type levels in Haskell.
(,); []; etc. It's only the overall structure of the language that
disambiguates them; you can't necessarily tell
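A small self-contained illustration of that ambiguity (my example): (,) and [] are legal at both levels, and only position tells them apart:

```haskell
-- (,) and [] each name both a type constructor and a term-level form;
-- only the position (type signature vs expression) disambiguates.
pair :: (,) Int Bool      -- (,) used as a type
pair = (,) 1 True         -- (,) used as a data constructor

list :: [] Int            -- [] used as a type
list = 1 : []             -- [] used as a value

main :: IO ()
main = print (pair, list)  -- prints ((1,True),[1])
```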
Classes are not always exported from a module. Only instances are. It
is even possible to export methods of a class that isn't itself
exported, making it impossible to write the types for them explicitly
(GHC will infer qualified types that you can't legally write given the
imports).
I don't
2011/7/22 Gábor Lehel illiss...@gmail.com:
Yeah, this is pretty much what I ended up doing. As I said, I don't
think I lose anything in expressiveness by going the MPTC route, I
just think the two separate but linked classes way reads better. So
it's just a would be nice thing. Do recursive
This isn't a GHC limitation. The report specifies that the class
hierarchy must be a DAG. So C cannot require itself as a prerequisite,
even if it's on a 'different' type.
Practically, in the implementation strategy that GHC (and doubtless
other compilers) use, the declaration:
class C (A x)
On Wed, Apr 20, 2011 at 3:01 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
I'm sure it's not criterion, because after I've found that NaNs were
introduced to the resamples vectors during sorting (checking the entire
vectors for NaNs before and after sorting, tracing the count; before:
On Monday 14 February 2011 5:51:55 PM Daniel Peebles wrote:
I think what you want is closed type families, as do I and many others :)
Unfortunately we don't have those.
Closed type families wouldn't necessarily be injective, either. What he wants
is the facility to prove (or assert) to the
On Tuesday 28 September 2010 11:10:58 pm David Fox wrote:
I'm seeing errors like this in various places, which I guess are
coming from the new type checker:
Data/Array/Vector/Prim/BUArr.hs:663:3:
Couldn't match type `s' with `s3'
because this skolem type variable would escape:
On Sunday 11 July 2010 1:31:23 pm Simon Peyton-Jones wrote:
This is the first I've heard of this. Do you have a test case that shows
up the problem? Then we can put it in the regression tests so it won't go
wrong again.
That depends on what dependencies you're willing to accept. I think all
On Saturday 10 July 2010 2:09:48 pm Bryan O'Sullivan wrote:
Recently, I switched the mwc-random package (
http://hackage.haskell.org/package/mwc-random) over from running in the ST
monad to using your primitive package. I didn't notice initially, but this
caused a huge performance regression.
On Thursday 15 April 2010 8:10:42 am Sebastian Fischer wrote:
Dear GHC experts,
Certain behaviour when using
{-# LANGUAGE GADTs, TypeFamilies #-}
surprises me. The following is accepted by GHC 6.12.1:
data GADT a where
BoolGADT :: GADT Bool
foo :: GADT a -> a
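The quoted code is truncated here; a minimal completion in the same spirit (the body of foo is my guess, labeled hypothetical) shows the refinement being discussed:

```haskell
{-# LANGUAGE GADTs #-}

data GADT a where
  BoolGADT :: GADT Bool

-- Hypothetical body: matching on BoolGADT refines the type variable a
-- to Bool, so returning a Bool literal is accepted.
foo :: GADT a -> a
foo BoolGADT = True

main :: IO ()
main = print (foo BoolGADT)  -- prints True
```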
On Wednesday 14 April 2010 1:07:05 pm Roman Leshchinskiy wrote:
On 15/04/2010, at 02:55, John Lato wrote:
The problem isn't with criterion itself, but with vector-algorithms.
The vector library relies heavily on type families, which have dodgy
support in ghc-6.10.
As a matter of fact,
On Wednesday 03 February 2010 11:34:27 am Stefan Holdermans wrote:
I don't think it's the same thing. The whole point of the existential
is that at the creation site of any value of type Ex the type of the
value being packaged is hidden. At the use site, therefore, the only
suitable instance
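For concreteness, a hedged guess at the shape of the Ex type under discussion (the name and the Show constraint are assumptions on my part):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- At the creation site the packaged type is hidden; at the use site
-- only the Show constraint survives.
data Ex = forall a. Show a => Ex a

useEx :: Ex -> String
useEx (Ex x) = show x  -- the only thing we can do with x is Show it

main :: IO ()
main = putStrLn (useEx (Ex (42 :: Int)))  -- prints "42"
```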
Greetings,
I've actually known about this for a while, but while discussing it, it
occurred to me that perhaps it's something I should report to the proper
authorities, as I've never seen a discussion of it. But, I thought I'd start
here rather than file a bug, since I'm not sure it isn't
On Friday 30 October 2009 5:51:37 am Simon Peyton-Jones wrote:
One more update about GHC 6.12, concerning impredicative polymorphism.
GHC has had an experimental implementation of impredicative polymorphism
for a year or two now (flag -XImpredicativePolymorphism). But
a) The
On Friday 10 July 2009 5:03:00 am Wolfgang Jeltsch wrote:
Isn’t ExistentialQuantification more powerful than using GADTs for
emulating existential quantification? To my knowledge, it is possible to
use lazy patterns with existential types but not with GADTs.
6.10.4 doesn't allow you to use ~
On Monday 09 March 2009 11:56:14 am Simon Peyton-Jones wrote:
For what it's worth, here's why. Suppose we have
type family N a :: *
f :: forall a. N a -> Int
f = blah
g :: forall b. N b -> Int
g x = 1 + f x
The defn of 'g' fails with a very similar
Greetings,
Someone on comp.lang.functional was asking how to map through arbitrary
nestings of lists, so I thought I'd demonstrate how his non-working ML
function could actually be typed in GHC, like so:
--- snip ---
{-# LANGUAGE TypeFamilies, GADTs, EmptyDataDecls,
Rank2Types,
On Wednesday 17 December 2008 1:25:26 pm Jorge Marques Pelizzoni wrote:
Hi,
While playing with type families in GHC 6.10.1, I guess I bumped into
the no-overlap restriction. As I couldn't find any examples on that, I
include the following (non-compiling) code so as to check with you if
On Wednesday 19 November 2008 11:38:07 pm David Menendez wrote:
One possibility would be to add minimum and maximum to Ord with the
appropriate default definitions, similar to Monoid's mconcat.
This is probably the most sensible way. However, on first seeing this, I wanted to
see if I could do it
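A sketch of what that refactoring might look like (the names MyOrd, minimumOf, and maximumOf are my own stand-ins, used to avoid clashing with the Prelude): the list-wise extrema become class methods with default definitions, overridable per instance the way mconcat is in Monoid:

```haskell
-- Stand-in for Ord, with list-wise minimum/maximum as overridable
-- methods defaulted via foldr1.
class Eq a => MyOrd a where
  cmp :: a -> a -> Ordering
  minimumOf, maximumOf :: [a] -> a
  minimumOf = foldr1 (\x y -> if cmp x y == GT then y else x)
  maximumOf = foldr1 (\x y -> if cmp x y == LT then y else x)

instance MyOrd Int where
  cmp = compare

main :: IO ()
main = print (minimumOf [3,1,2 :: Int], maximumOf [3,1,2 :: Int])  -- prints (1,3)
```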
On Monday 23 June 2008, Isaac Dupree wrote:
there's no chance for the lower-level near code generation to
reverse-SAT to eliminate the heap usage? (which would obviously be a
different optimization that might be useful in other ways too, if it
could be made to work) (did someone say that
On Friday 30 May 2008, Duncan Coutts wrote:
This is for two reasons. One is because your second foldl' is directly
recursive so does not get inlined. The static argument transformation is
what you're doing manually to turn the latter into the former. The SAT
is implemented in ghc 6.9 (though
On Friday 20 June 2008, Max Bolingbroke wrote:
Of course, if you have any suggestions for good heuristics based on
your benchmarking experience then we would like to hear them! There
was some discussion of this in the original ticket,
http://hackage.haskell.org/trac/ghc/ticket/888, but when
On Wednesday 18 June 2008, Daniel Fischer wrote:
Am Dienstag, 17. Juni 2008 22:37 schrieb Dan Doel:
I'll attach new, hopefully bug-free versions of the benchmark to this
message.
With -O2 -fvia-C -optc-O3, the difference is small (less than 1%), but
today, ByteArr is faster more often
On Tuesday 17 June 2008, Simon Marlow wrote:
So I tried your examples and the Addr# version looks slower than the MBA#
version:
Hmm...
I tried with 6.8.2 and 6.8.3, using -O2 in both cases. I tried the Ptr
version with and without -fvia-C -optc-O2, no difference.
I had forgotten about the
On Tuesday 17 June 2008, Daniel Fischer wrote:
I've experimented a bit and found that Ptr is faster for small arrays (only
very slightly so if compiled with -fvia-C -optc-O3), but ByteArr performs
much better for larger arrays
...
The GC time for the Addr# version is frightening
I had an
On Tuesday 17 June 2008, [EMAIL PROTECTED] wrote:
I see that Dan Doel's post favoring Ptr/Addr#
has the same allocation amounts (from +RTS -sstderr) for Ptr/Addr# and the
MutableByteArray#
Everyone else sees more allocation for Ptr/Addr# than MBA# and sees MBA# as
faster in these cases.
I
Greetings,
Recently, due to scattered complaints I'd seen on the internet, I set about to
rewrite the fannkuch [1] benchmark on the Great Computer Language Shootout.
The current entry uses Ptr/Addr#, malloc, etc. so it's not particularly
representative of code one would actually write in
On Thursday 29 May 2008, Tyson Whitehead wrote:
I thought this was interesting. Is it to be expected? Am I right in
interpreting this to mean it was just too much for the strictness
analyzer? I believe the first ultimately produces significantly
superior code, so should one always write