On Tue, Aug 27, 2013 at 2:30 AM, Jonathan S. Shapiro <[email protected]> wrote:
> On Mon, Aug 26, 2013 at 9:12 AM, Bennie Kloosteman <[email protected]> wrote:
>
>> If you think you will create or await a new JIT runtime, then any design
>> decision that requires a feature not supported by the CLR can wait till
>> the new runtime is created / built first ... which should be a huge time
>> saving ;-p
>
> Quite the opposite. I know of a number of groups that are starting to look
> at next-generation runtimes and compiler infrastructures. The designs of
> these things are very conservative. Because of the scale of work involved,
> it tends to be the case that they consider only proven concepts and
> technologies. Proven doesn't have to mean that it's running in millions of
> user sites. Proven means that we know how to do it technically and there is
> enough demonstrated benefit to justify inclusion of the technology or
> concept.

Proven, to me, means that it has been done at sufficient scale that you see
the impact of the bad decisions: not necessarily in terms of users, but with
enough combined design elements for a mostly complete system. For example, a
decent relocating GC can make a lot of things difficult; many projects use
Boehm, it all works great, and they conclude that all we need is a decent GC.
That said, a lot of things proven (and perfectly valid) in the sixties and
seventies were never built.

>> I'm not really convinced by the JIT model anymore ... the promised
>> optimizations seem to have stalled, or require so much time to work out
>> that it's impractical to do them in a JIT.
>
> I'm not sure what promised optimizations you refer to. The original JITs
> merely promised that compiling was better than interpreting, which it
> generally is. Later ones played with trace-driven inlining and
> optimization. The payoff for that was less clear. Then you get to HotSpot,
> which is trying to do *value* driven optimizations. That's a line whose
> benefit has always seemed dubious to me, and my opinion on that goes all
> the way back to Ungar's early work on Self. It seemed to me then that the
> desire for value-driven optimization was motivated by Self's refusal of
> static typing, and that this was a case of one bad decision mandating a
> second one to justify the first one. Robert McNamara would have approved.
>
> But on a less snarky note, I'm not aware of any work on static
> metacompilation in such languages. There are definitely cases where
> value-driven optimization is a big win, but I haven't seen any work that
> would clearly tell me *which* cases those are. I'm inclined to wonder
> whether these are the cases that metacompilation handles well.
>
> Just to be clear, I *do* know that the HotSpot technology can do things
> that metacompilation cannot. The bit I'm not clear about is what the real
> payoff is. Metrics, when published, talk about program speedup, but less
> often about the program slowdown while resources are devoted to
> optimization. As we showed pretty clearly in the HDTrans project, it can
> easily be the case that improving the performance of the underlying dynamic
> translation system erases the benefit that subsequent optimization gives
> you. There are a lot of moving parts, so these things aren't easy to
> analyze.

Agree ... except I don't know what static metacompilation is and need to look
it up :-) Java with a JIT made promises and did amazingly well, especially
with HotSpot and inlined virtual calls. In fact I think Java is probably
faster than C++ for real programs, e.g. cross-library code written to be
maintainable, not basic C optimized to the bone.

That said, the optimizations we're finding now tend to be even more
computationally expensive, and SIMD algorithms are coming that blow everything
away. David even posted a paper where such instructions can be compiler
generated, though it is expensive to analyse (and a human can still do better,
e.g. translation between UTF-8, UTF-16 and ASCII). You can get gains of
several factors for heavily used code.
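To give a concrete flavour (this is not from David's paper, just the standard
SSE2 ASCII fast path; the function name and the scalar fallback it assumes are
my own):

    #include <emmintrin.h>   // SSE2 intrinsics
    #include <cstddef>
    #include <cstdint>

    // ASCII fast path for UTF-8 -> UTF-16: while every byte has its top bit
    // clear, 16 bytes can be widened to 16 code units per iteration.
    // Returns how many bytes were consumed; the caller finishes the rest
    // (and any real multi-byte sequences) with an ordinary scalar converter.
    size_t utf8_to_utf16_ascii_fast(const uint8_t *src, size_t len,
                                    uint16_t *dst)
    {
        size_t i = 0;
        const __m128i zero = _mm_setzero_si128();
        for (; i + 16 <= len; i += 16) {
            __m128i chunk =
                _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + i));
            // movemask gathers the top bit of each byte; non-zero means a
            // non-ASCII byte, so bail out to the scalar path.
            if (_mm_movemask_epi8(chunk) != 0)
                break;
            // Interleave with zero to zero-extend each byte to 16 bits.
            _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + i),
                             _mm_unpacklo_epi8(chunk, zero));
            _mm_storeu_si128(reinterpret_cast<__m128i *>(dst + i + 8),
                             _mm_unpackhi_epi8(chunk, zero));
        }
        return i;
    }

On ASCII-heavy text that is 16 code units per loop iteration with a single
branch, which is where the "several factors" comes from; the moment a high bit
shows up you fall back to the usual byte-at-a-time conversion.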
>> That said, the existing C++ could go onto the CLR via managed C++ very
>> quickly ...
>
> That hasn't been the experience of people trying to port it. :-)

It's quick compared to a rewrite! Also, a lot of people have issues with a
limited stdlib, MVC and GUIs, but you need to address that anyway, and I would
think a compiler such as BitC would have fewer dependencies ...

>> Regarding LLVM ... not tracking type information for registers - does any
>> decent compiler do that after it goes through all the optimization stages?
>
> All of the good compilers for managed languages do, and there is no reason
> not to. At any point where a compiler is introducing a temporary it is
> sitting there with an expression tree that it is trying to compute. That
> expression tree, of necessity, has a known result type. It's very little
> trouble to add a type argument to the "make a temporary" procedure.

Why can't we add the type as metadata on the IR instruction that produces the
result? (Rough sketch of what I mean below my sig.)

>> Mono uses LLVM, so how critical is it?
>
> Since Mono performance sucks, I'd say it's pretty critical.

It's not great with LLVM either for real programs (e.g. virtual calls,
interfaces, exceptions) ... due to limitations in how they can use it, they
just default to the Mono JIT, and you lose a lot because the two can't work
that well together. LLVM is better for microbenchmarks (if you turn off
bounds checking): arrays, no virtual calls, etc.

Ben
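P.S. On the type-metadata question above, here is roughly what I had in mind,
as a minimal sketch against LLVM's C++ API. The "frontend.type" metadata kind
is made up for the example; the point is just that custom metadata can ride
along on an instruction, the way TBAA and debug info already do, with the
caveat that passes are free to drop metadata they don't understand.

    #include "llvm/ADT/StringRef.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Metadata.h"

    // Tag a freshly created temporary with the front end's (managed) type,
    // e.g. "System.String", without changing its LLVM-level type at all.
    void tagWithSourceType(llvm::Instruction *I, llvm::StringRef TypeName) {
      llvm::LLVMContext &Ctx = I->getContext();
      llvm::Metadata *Name = llvm::MDString::get(Ctx, TypeName);
      I->setMetadata("frontend.type", llvm::MDNode::get(Ctx, Name));
    }

A GC or devirtualization pass that knows about "frontend.type" can read it
back with getMetadata; everything else just ignores it.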
