Re: series of switchpoints or better

2016-10-05 Thread Chris Seaton
Hi Jochen,

I’m not an expert on the implementation of switch points, but my understanding
is that they don’t appear in the dynamically compiled machine code at all. They
use the VM’s safepoint mechanism (the same thing that implements stop-the-world
pauses for the garbage collector), for which polling instructions are already
emitted anyway.

http://chrisseaton.com/rubytruffle/icooolps15-safepoints/safepoints.pdf 


See figure 6 (no significant difference in runtime whether switch points are
present or not) and figure 9 (the machine code contains no trace of them). So
switch points aren’t just fast - they don’t take any time at all in compiled
code. (Ignore the references to Truffle if you aren’t using it.)
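
If it helps to see the shape of it, here is a minimal, made-up sketch of how a
switch point sits in front of a method handle (nothing here is from your code or
from the paper):

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.lang.invoke.SwitchPoint;

    public class SwitchPointSketch {
        static String fast()     { return "fast path"; }
        static String reselect() { return "reselected"; }

        public static void main(String[] args) throws Throwable {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            MethodType mt = MethodType.methodType(String.class);
            MethodHandle fastPath = lookup.findStatic(SwitchPointSketch.class, "fast", mt);
            MethodHandle slowPath = lookup.findStatic(SwitchPointSketch.class, "reselect", mt);

            SwitchPoint sp = new SwitchPoint();
            // Until sp is invalidated, calls go to fastPath; afterwards to slowPath.
            MethodHandle guarded = sp.guardWithTest(fastPath, slowPath);

            System.out.println(guarded.invoke());  // "fast path"
            SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
            System.out.println(guarded.invoke());  // "reselected"
        }
    }

As above, the guard produced by guardWithTest should cost nothing in the compiled
code until invalidateAll is actually called.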

I don’t think any of this would change no matter how many of them you have.

I’m sure they do have an impact on interpreter performance, of course, where 
they can’t be optimised away.

I suppose it is conceivable that a great many switch points could start to upset
the compiler in terms of things like inlining budgets? I’m not sure, but it seems
unlikely.

Chris

> On 5 Oct 2016, at 13:37, Jochen Theodorou  wrote:
> 
> Hi all,
> 
> I am constructing a new meta class system for Groovy (ok, I say that for 
> several years already, but bear with me) and I was wondering about the actual 
> performance of switchpoints.
> 
> In my current scenario I would need a way to say that a certain group of meta
> classes got updated and that the method for this call site potentially needs
> to be reselected.
> 
> So say I have class A and class B, and then I have a meta class for Object and
> one for A.
> 
> If the meta class for A is changed, all handles operating on instances of A
> may have to reselect; the handles for B and Object need not be affected.
> If the meta class for Object changes, I need to invalidate all the handles
> for A, B and Object.
> 
> Doing this with switchpoints probably means one switchpoint per meta class and
> a small number of meta classes per class (three in total in my example). This
> would mean my MethodHandle would have to get through a bunch of switchpoints
> before it can do the actual method invocation. And while switchpoints might
> be fast, that does not sound good to me.
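> 
> Roughly what I have in mind, as a self-contained sketch (all names are made up,
> this is not the actual Groovy code):
> 
>     import java.lang.invoke.*;
> 
>     public class MetaClassGuards {
>         // stand-ins for the selected method and for re-doing the selection
>         static String callA()    { return "A.method"; }
>         static String reselect() { return "reselect"; }
> 
>         public static void main(String[] args) throws Throwable {
>             MethodHandles.Lookup lookup = MethodHandles.lookup();
>             MethodType mt = MethodType.methodType(String.class);
>             MethodHandle target   = lookup.findStatic(MetaClassGuards.class, "callA", mt);
>             MethodHandle fallback = lookup.findStatic(MetaClassGuards.class, "reselect", mt);
> 
>             // one switchpoint per meta class on the path: Object's and A's
>             SwitchPoint objectMeta = new SwitchPoint();
>             SwitchPoint aMeta      = new SwitchPoint();
> 
>             // the call site handle has to pass both guards before the invocation
>             MethodHandle guarded = aMeta.guardWithTest(target, fallback);
>             guarded = objectMeta.guardWithTest(guarded, fallback);
> 
>             System.out.println(guarded.invoke());   // "A.method"
> 
>             // changing A's meta class invalidates only handles guarded by aMeta
>             SwitchPoint.invalidateAll(new SwitchPoint[] { aMeta });
>             System.out.println(guarded.invoke());   // "reselect"
>         }
>     }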
> 
> Or I could use one switchpoint for all method handles in the system, which makes
> me wonder whether, after a meta class change, the call site ever gets JITted
> again. That later performance penalty is also not very attractive to me.
> 
> So what is the way to go here? Or is there an even better way?
> 
> bye Jochen
> 



CFP - Truffle/Graal Languages Workshop

2015-03-13 Thread Chris Seaton
**
CALL FOR PARTICIPATION

Truffle 2015

Truffle/Graal Languages Workshop

July 7, 2015
Prague, Czech Republic

Co-located with ECOOP 2015

http://2015.ecoop.org/track/Truffle-2015-papers
**

In recent years we have observed a change in the way people think
about implementing programming languages. In the past, an implementation
of a given language was monolithic, with all the components, such as
the runtime, compiler or memory management subsystem, developed from
scratch. With the appearance of Java, developers of other languages started
targeting its bytecode format in order to re-use high-performance services
provided by Java virtual machines. Evolution of these ideas has resulted in
the creation of a specialized open-source Java-based language implementation
toolkit, consisting of the Graal optimizing compiler and Graal’s multi-language
framework, Truffle. The toolkit facilitates the creation of high-performance
language implementations using partial evaluation of self-specializing
interpreters and attempts to rectify some of the limitations of previous
approaches. In particular, it circumvents possible mismatches between “guest”
language semantics and “host” bytecodes. It is rapidly gaining popularity
in both industry and academia as a foundation for guest languages (e.g.,
JavaScript, Ruby, Python, R and others).

*** Workshop Goal ***

The goal of this full day workshop is to attract programming language developers
interested in using Truffle and Graal for creating programming language
implementations and tools, as well as, more broadly, developers interested in
discussing language implementation approaches heavily relying on dynamic
profiling feedback and specialization. The workshop is meant to be a forum
where language developers can learn about Truffle and Graal, share their
experience using the toolkit, identify potential limitations and discuss methods
of rectifying them, as well as propose future directions for the development
of Truffle languages tooling support and of the toolkit itself. We are 
especially
interested in attracting participation of language developers that are not yet
familiar with Truffle or Graal but are interested in exploring how they can 
simplify
development of their own current or future projects. 

*** Workshop Format ***

The workshop will be divided into two segments. The morning segment will
consist of a number of short talks and discussions led by experienced language
developers, and is aimed at introducing Truffle and Graal as well as sharing
experience implementing Truffle languages. The afternoon segment is
aimed at providing support for developers planning to jump-start their own 
projects
using Truffle or contributing to one of the existing Truffle-based 
implementations,
as well as discussing how the Truffle platform can be used for programming 
language
research. This segment will start with a hands-on tutorial, and experienced 
Truffle
language developers as well as members of the Truffle/Graal core team will also
be available for individual/group mentoring and/or coding sessions.

*** Call for Submissions ***

We solicit discussion topic proposals, describing both ongoing and future 
projects,
in the form of extended (1-3 page) abstracts. The discussion topics include but
are not limited to the following areas:

- Case studies of existing Truffle language implementations.
- Comparing alternative language implementation techniques to Truffle.
- Performance analysis and/or optimizations for Truffle language 
implementations.
- Tooling support for Truffle languages.
- Infrastructure-level optimizations and extensions that can benefit languages
built with Truffle.
- New research project proposals utilizing Truffle and/or Graal.

Depending on the number of accepted submissions, we expect topics to be allotted
time slots of between 30 and 60 minutes at the workshop. All proposals should be
submitted by email to Adam Welc (adam.w...@oracle.com).

- deadline for proposal submissions: April 23, 2015 (by 11:59 PM AoE)
- notification: May 1, 2015

Participants with accepted proposals may ask for financial support to cover 
travel
costs. The financial support is optional and its total amount, if any, will be
determined by the organizing committee. Please indicate if financial support
is being requested as part of the submission.


Re: Truffle and mlvm

2014-08-31 Thread Chris Seaton
 also make it even more
 object-heavy during interpretation, aggravating startup time further.
 
 4. Limited availability
 
 This is the chicken-and-egg issue. Truffle is just a library, so we
 can ignore that for the moment (given any JVM, you can run a Truffle
 language).
 
 Graal is required for Truffle to perform well at all. The Truffle
 interpreter is without a doubt the slowest interpreter we've ever had
 for JRuby, and that's saying something (there could be startup/warmup
 effects in play here too). In order for us to go 100% Truffle, we'd
 need a Graal VM. That limits us to either pre-release or hand-made
 builds of Graal/OpenJDK. Even if Graal somehow did get into Java 9,
 we'd still have legions of users on 8, 7, ... even 6 in some cases,
 though we're probably leaving them behind with JRuby 9000. Ignoring
 other platforms (non-OpenJDK, Android) and assuming Graal in Java 9,
 I'd conservatively estimate JRuby could still not go 100% Truffle
 until 2017 or later.
 
 And it gets worse. Graal will probably never exist on other JVMs.
 Graal will probably never exist in an Android VM. Graal may not even
 be available in other non-Oracle OpenJDK derivatives for a very long
 time. We have users on dozens of different platform/JVM combinations,
 so there's really no practical way for us to abandon our JVM bytecode
 runtimes in the near future.
 
 Now of course if Graal became essential to users, it would be
 available in more places. We recognize the potential of Truffle and
 Graal, which is why we've been thrilled to work with Oracle on a
 RubyTruffle that's part of JRuby. We also recognize that the
 Truffle/Graal approach has some very compelling features for our
 users, and that our users may often be comfortable running custom
 JVMs. We're allowing all flowers to bloom and our users will pick the
 ones that work for them.
 
 5. Unclear benefits for real-world applications
 
 There have been many published microbenchmarks for Truffle-based
 languages, but very few benchmarks of real-world applications
 performing significantly better than custom-made VMs (JS versus V8).
 There have been practically no studies of a Truffle-based language
 running a large application for a long period of time...and by long I
 mean server-scale.
 
 Chris Seaton has pushed this forward recently for Ruby, getting
 general-purpose, numeric-heavy libraries to run and optimize very well
 (a PNG library and a PSD library). Going deeper requires more of the
 language's standard libraries to be available, and I believe this is
 where Chris has spent much of his time (RubyTruffle currently requires
 mostly-custom versions of JRuby's core classes...versions that Truffle
 can recognize, specialize, and escape-analyze away).
 
 * Conclusion
 
 I again want to emphasize that we think Truffle and Graal are really
 awesome technology. I spent years with my nose smooshed against the
 glass, watching the PyPy guys add optimizations I wanted and make good
 on their promise of "just implement an interpreter... we'll do the
 rest." Finally we have what I wanted: a PyPy for the JVM (in Truffle) and
 an LLVM for the JVM (in Graal). These are exciting times indeed.
 
 But reality steps in. There's a long road ahead.
 
 I think we need to separate the questions about Truffle from questions
 about Graal. Truffle is ultimately just a library that uses Graal.
 
 Graal is promising JIT technology. Graal is simpler than C2 and may be
 able to match or beat its performance. Graal provides a better way to
 communicate intent to the JIT. These facts are not in question.
 
 However, Graal is not (other than when used as the JVM's JIT) a JVM.
 Targeting Graal directly acts against the promise of a standard,
 platform-and-VM-agnostic bytecode -- and that's the promise that
 brought most of us here. Graal is not yet ready to replace C2, which
 would mean adding to the size and complexity of Java 9. And Graal is
 almost completely untested in large production settings.
 
 I personally would love to see Graal get into a Java release soon as
 an experimental feature, but Java 9 seems ambitious by any standard.
 It *might* be possible/reasonable to include Graal as experimental in
 9. Java 10 is certainly feasible for experimental, and may be feasible
 for product. But even if Graal got into mainstream OpenJDK and Java,
 there's a very long adoption tail ahead.
 
 I'd like to hear more from folks on the Graal and Truffle teams. Prove
 me wrong :-)
 
 - Charlie
