On 2/11/13 11:47 AM, Johan Holmquist wrote:
I was about to leave this topic so as not to swamp the list with
something that appears to go nowhere. But now I feel that I must answer
the comments, so here goes.
By aggressive optimisation I mean an optimisation that drastically
reduces the run time of (some part of) the program. So I guess
automatic vectorisation could fall under this term. Even something like
running the [--snip--]
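As an illustration (mine, not from the thread) of the kind of loop
automatic vectorisation targets: elementwise arithmetic with no
dependencies between iterations, here written with the vector package.
Whether a given backend actually emits SIMD code for it is
implementation-dependent.

    import qualified Data.Vector.Unboxed as V

    -- a*x + y, elementwise: each output element depends only on the
    -- inputs at the same index, which is what makes the loop a
    -- vectorisation candidate.
    saxpy :: Double -> V.Vector Double -> V.Vector Double -> V.Vector Double
    saxpy a = V.zipWith (\x y -> a * x + y)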
I would be grateful if someone could explain the difference between
aggressive optimisation and obviously sensible compilation.

On Sat, Feb 09, 2013 at 09:56:12AM +0100, Johan Holmquist wrote:

As a software developer, who typically inherits code to work on rather
than simply writing new code, I see a potential for aggressive compiler
optimizations causing trouble. It goes like this:

Programmer P inherits some application/system to improve upon. Someday
he spots some piece of rather badly [--snip--]
On 02/09/2013 09:56 AM, Johan Holmquist wrote:
[--snip--]
It just so happened that the old code triggered some aggressive
optimization unbeknownst to everyone, **including the original
developer**, while the new code did not. (This optimization may even
have been triggered only on a certain [--snip--]
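A hypothetical sketch (mine, not from the thread) of how this can play
out with GHC's foldr/build fusion at -O: the old pipeline fuses into a
tight loop that never builds the list, while a refactoring that routes
the map through a helper the compiler will not inline silently blocks
fusion and allocates the whole intermediate list.

    -- Old code: at -O, foldr/build fusion removes the intermediate list.
    oldVersion :: Int -> Int
    oldVersion n = sum (map (* 2) [1 .. n])

    -- "Improved" code: doubleAll stands in for any helper GHC cannot
    -- inline (say, one exported from another module without an INLINE
    -- pragma); producer and consumer no longer fuse.
    {-# NOINLINE doubleAll #-}
    doubleAll :: [Int] -> [Int]
    doubleAll = map (* 2)

    newVersion :: Int -> Int
    newVersion n = sum (doubleAll [1 .. n])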
I guess I fall more to the "reason about code" side of the scale than
to the "testing the code" side. Testing seems to induce false hopes of
finding all defects, even to the point where the tester is blamed for
not finding a bug rather than the developer for introducing it.

[Bardur]

It's definitely [--snip--]
On Sat, Feb 9, 2013 at 3:56 AM, Johan Holmquist holmi...@gmail.com wrote:
The code goes into production and disaster strikes: the new, improved
version runs three times slower than the old one, making it practically
unusable. The new version has to be rolled back, with loss of uptime
and functionality, and [--snip--]
Hi all,
some time ago a friend and I had a discussion about optimizations
performed by GHC. We wrote a bunch of nested list comprehensions that
selected objects based on their properties (e.g. finding products with
a price higher than 50). Then we rewrote our queries in a more
efficient [--snip--]
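The code itself wasn't preserved in the archive; here is a minimal
sketch (my reconstruction, with hypothetical names) of the kind of
rewrite being described: the naive comprehension rescans the whole
price list for every query, while the second version swaps the data
structure for a Map and the linear scan for an O(log n) lookup.

    import qualified Data.Map.Strict as M

    type ProductId = Int

    -- Naive: O(length wanted * length prices).
    expensive :: [ProductId] -> [(ProductId, Double)] -> [ProductId]
    expensive wanted prices =
      [ i | i <- wanted, (j, price) <- prices, i == j, price > 50 ]

    -- Rewritten: a different data structure and, in effect, a
    -- different algorithm.
    expensive' :: [ProductId] -> [(ProductId, Double)] -> [ProductId]
    expensive' wanted prices =
      let m = M.fromList prices
      in [ i | i <- wanted, Just price <- [M.lookup i m], price > 50 ]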
You don't reason about the bits churned out by a compiler but about the
actual code you write. If you want to preserve such information during
the compilation process, you probably want to run the compiler without
any optimization flags at all.
At the moment, with the way you are thinking about [--snip--]
Ouch, forgot the Cafe.
Would you object to this particular optimisation (replacing an algorithm
with an entirely different one) if you were guaranteed that the space
behaviour would not change?
This is pretty much a core idea behind Data Parallel Haskell: it
transforms nested data-parallel programs into flat ones. That's crucial
to actually making it perform well, and it is an algorithmic change to
your program. If you can reason about your program, and perhaps have an
effective cost model [--snip--]
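A schematic of the flattening idea, with plain lists standing in for
DPH's parallel arrays (my illustration, not DPH's actual
representation): the nested structure becomes one flat payload plus a
segment descriptor, so ragged nested work turns into a single uniform
pass that is easy to balance.

    -- Nested: ragged inner lists mean uneven chunks of parallel work.
    nestedIncr :: [[Int]] -> [[Int]]
    nestedIncr = map (map (+ 1))

    -- Flattened: segment lengths plus one flat list; the same job is
    -- now one uniform traversal.
    type Seg a = ([Int], [a])

    flatIncr :: Seg Int -> Seg Int
    flatIncr (lens, xs) = (lens, map (+ 1) xs)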
On Wed, Feb 6, 2013 at 1:18 PM, Austin Seipp mad@gmail.com wrote:
Now, on a slight tangent, in practice, I guess it depends on your
target market. C programs don't necessarily expose the details to make
such rich optimizations possible. And Haskell programmers generally
rely on [--snip--]
On Wed, Feb 6, 2013 at 6:45 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:
nevertheless I objected to his opinion, claiming that if the compiler
performed such a high-level optimization - replacing the underlying
data structure with a different one and turning one algorithm into a
completely different [--snip--]
Would you object to this particular optimisation (replacing an algorithm
with an entirely different one) if you were guaranteed that the space
behaviour would not change?
No, I wouldn't.
Janek
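For what it's worth, GHC already gives users a mechanism for exactly
this kind of semantics-preserving replacement: rewrite rules. The
classic map/map example from the GHC user's guide replaces two
traversals by one, changing run-time behaviour but not results:

    {-# RULES
      "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}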
You're right, somehow I hadn't thought of DPH as doing exactly the same
thing. Well, I think this is a convincing argument.

Janek