On 30/05/15 03:57, Steven Schveighoffer wrote:

> I saw the slide from Liran that shows your compiler requirements :) I
> can see why it's important to you.

Then you misunderstood Liran's slides.

Our compile resources problem isn't with GDC. It's with DMD. Single-object compilation requires more RAM than most developers' machines have, resulting in a complicated "rsync to AWS, run script there, compile, fetch results" cycle that adds quite a bit of time to each build.

Conversely, our problem with GDC is that IT !@$#%&?!@# PRODUCES ASSEMBLY THAT DOES NOT MATCH THE SOURCE.

I have not seen LDC myself, but according to Liran, the situation there is even worse: the compiler simply does not finish compilation without crashing.

> But compiled code outlives the compiler execution. It's the wart that
> persists.
So does algorithmic code that, due to compiler bugs, produces assembly that does not implement the correct algorithm.

When doing RAID parity calculation, it is imperative that the correct bit gets to the correct location with the correct value. If that doesn't happen, compilation speed is the least of your problems.
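
To make it concrete: single parity is essentially a wide XOR across data blocks. Here is a minimal D sketch (hypothetical names, nothing like our actual code):

    import std.exception : enforce;

    // Hypothetical illustration: single parity is a wide XOR across
    // data blocks. A single flipped or misplaced bit here silently
    // corrupts the data we would later reconstruct from this parity.
    void computeParity(const(ubyte)[][] dataBlocks, ubyte[] parity)
    {
        parity[] = 0;
        foreach (block; dataBlocks)
        {
            enforce(block.length == parity.length, "block size mismatch");
            parity[] ^= block[]; // element-wise XOR over the whole block
        }
    }

If the compiler miscompiles even a loop as simple as this, the parity is silently wrong, and anything reconstructed from it later is garbage.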

Like Liran said in the lecture, we are currently faster than all of our competition. Still, in a correctly functioning storage system, the RAID part needs to take a considerable share of the total processing time under load (say, 30%). If we're losing 3x speed there because we don't have compiler optimizations, the system as a whole is losing about half of its performance.
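
Back of the envelope, taking that 30% figure: the other 70% of the work is unchanged while the RAID share triples, so total processing time becomes 0.7 + 3 * 0.3 = 1.6 of the optimized baseline, leaving the system at roughly 60% of its potential throughput.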

> But I don't see how speed of compiler should sacrifice runtime performance.
Our plan was to compile with DMD during development and then switch to GDC for code intended for deployment. This plan simply cannot work if, every time we try to make that switch, Liran has to spend two months, yanking a different developer away from the work that developer should be doing, in order to figure out which line of source gets compiled incorrectly.


Shachar