Hi,
thanks for the pointer... :-)
However, while it's true that SVM (in its current form) imposes severe
limits on dynamicity, it'd be a bug if jaotc, jlink or any other of the
startup efforts (such as AppCDS) in the OpenJDK project changed
semantics or imposed limits on indy-style
JRuby loads about 4000 of its own classes (plus over 1000 system classes)
during execution of just '-e 1'. That is a lot of data to load, parse, and verify.
I played with CDS (Class Data Sharing) including the JRuby classes. We
can do that since jruby.jar is on the boot class path, but it requires some
manual
Jochen,
N frames per chain of N method handles looks reasonable to me, but it
depends on the average number of transformations users apply. If the case of
deep method handle chains is common in practice, we need to optimize for
it as well, and a linear dependency on stack space may be too much.
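To make the linear-stack point concrete, here is a minimal sketch (my own illustration, not code from the thread) that builds an N-deep combinator chain with `MethodHandles.filterReturnValue`; with each compiled LambdaForm executing as its own method, invoking such a chain costs roughly one stack frame per combinator:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class ChainDepth {
    static int inc(int x) { return x + 1; }

    // Build a chain of n filterReturnValue transformations; when each
    // compiled LambdaForm runs as its own method, invoking the chain
    // costs roughly n stack frames.
    static int runChain(int n, int start) {
        try {
            MethodHandle inc = MethodHandles.lookup().findStatic(
                    ChainDepth.class, "inc",
                    MethodType.methodType(int.class, int.class));
            MethodHandle chain = inc;
            for (int i = 1; i < n; i++) {
                chain = MethodHandles.filterReturnValue(chain, inc);
            }
            return (int) chain.invokeExact(start);
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(runChain(10, 0)); // 10
    }
}
```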
On 02.09.2014 16:38, Vladimir Ivanov wrote:
[...]
It's possible to optimize some shapes of method handle chains (like
nested GWTs) and tailor special LambdaForm shape or do some inlining
during bytecode translation. Though such specialization contradicts LF
sharing goal, probable benefits may
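For reference, the GWT shapes mentioned above are built from `MethodHandles.guardWithTest`; a hypothetical single-entry inline cache (the guard and target methods are invented for illustration) looks like the sketch below, and a nested-GWT polymorphic inline cache is just a stack of such nodes in the fallback position:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class PicSketch {
    static boolean isString(Object o) { return o instanceof String; }
    static int lenString(Object o) { return ((String) o).length(); }
    static int miss(Object o) { return -1; }

    static final MethodHandle SITE;
    static {
        try {
            MethodHandles.Lookup l = MethodHandles.lookup();
            // One inline-cache entry: test the receiver type, dispatch to the
            // cached target on a hit, fall through otherwise. A PIC nests
            // further guardWithTest nodes in the fallback position.
            SITE = MethodHandles.guardWithTest(
                    l.findStatic(PicSketch.class, "isString",
                            MethodType.methodType(boolean.class, Object.class)),
                    l.findStatic(PicSketch.class, "lenString",
                            MethodType.methodType(int.class, Object.class)),
                    l.findStatic(PicSketch.class, "miss",
                            MethodType.methodType(int.class, Object.class)));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static int call(Object receiver) {
        try { return (int) SITE.invokeExact(receiver); }
        catch (Throwable t) { throw new AssertionError(t); }
    }

    public static void main(String[] args) {
        System.out.println(call("hello")); // 5
        System.out.println(call(42));      // -1
    }
}
```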
Charlie,
Is it acceptable, and does it solve the problem for you?
This is acceptable for JRuby. Our worst-case Ruby method handle chain
will include at most:
* Two CatchExceptions for pre/post logic (heap frames, etc). Perf of
CatchException compared to literal Java try/catch is important here.
* Up
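A minimal sketch of the `MethodHandles.catchException` combinator Charlie refers to (the body/handler methods are invented for illustration); note that the handler receives the exception followed by the original arguments:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class CatchExample {
    static int body(int x) {
        if (x < 0) throw new IllegalArgumentException("negative");
        return x * 2;
    }
    // Handler signature: the caught exception, then the leading arguments.
    static int handler(IllegalArgumentException e, int x) { return 0; }

    static final MethodHandle GUARDED;
    static {
        try {
            MethodHandles.Lookup l = MethodHandles.lookup();
            MethodHandle body = l.findStatic(CatchExample.class, "body",
                    MethodType.methodType(int.class, int.class));
            MethodHandle handler = l.findStatic(CatchExample.class, "handler",
                    MethodType.methodType(int.class,
                            IllegalArgumentException.class, int.class));
            // Equivalent to a literal try/catch around body's invocation.
            GUARDED = MethodHandles.catchException(
                    body, IllegalArgumentException.class, handler);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static int call(int x) {
        try { return (int) GUARDED.invokeExact(x); }
        catch (Throwable t) { throw new AssertionError(t); }
    }

    public static void main(String[] args) {
        System.out.println(call(21)); // 42
        System.out.println(call(-1)); // 0
    }
}
```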
Jochen,
The stack traces you provide are so long due to LambdaForm
interpretation. Most of the stack frames are the following:
java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1147)
java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
On 01.09.2014 09:07, Vladimir Ivanov wrote:
Jochen,
The stack traces you provide are so long due to LambdaForm
interpretation. Most of the stack frames are the following:
java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1147)
I'd like to focus on reducing the number of LambdaForm instances.
It benefits both dynamic memory footprint (fewer LambdaForms = less
heap/metaspace used) and warmup (fewer LambdaForms = less LF
instantiation/interpretation/bytecode translation).
After JVMLS we had a discussion on that topic and
On Mon, Sep 1, 2014 at 2:07 AM, Vladimir Ivanov
vladimir.x.iva...@oracle.com wrote:
Stack usage won't be constant, though. Each compiled LF being executed
consumes one stack frame, so for a method handle chain of N elements, its
invocation consumes ~N stack frames.
Is it acceptable and solves
On 01.09.2014 15:24, Vladimir Ivanov wrote:
[...]
N frames per chain of N method handles looks reasonable to me, but it
depends on the average number of transformations users apply. If the case of
deep method handle chains is common in practice, we need to optimize for
it as well and linear
Ah yes- this was a bad example given that we cache the lambda forms. Sorry. I
do see lambda form execution time for other things that aren’t inlined, though.
Let me get back to you with profiles.
When it comes to generating call site specific typed invokers as discussed in
this thread, I think
On 29.08.2014 21:19, Jochen Theodorou wrote:
[...]
Maybe the situation would
already improve if I made callID, safeNavigation, thisCall, and
spreadCall into one int
As an addendum to this... I actually already have those as a single int. I
tried moving all that information into the callsite
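A hedged sketch of that kind of flag packing (the field names come from Jochen's mail, but the bit layout here is entirely invented for illustration):

```java
// Pack four per-call-site fields into one int: low 16 bits hold the
// callID, individual high bits hold the boolean flags.
public class PackedFlags {
    static final int SAFE_NAVIGATION = 1 << 16;
    static final int THIS_CALL       = 1 << 17;
    static final int SPREAD_CALL     = 1 << 18;

    static int pack(int callID, boolean safe, boolean thisCall, boolean spread) {
        int flags = callID & 0xFFFF;           // low 16 bits: callID
        if (safe)     flags |= SAFE_NAVIGATION;
        if (thisCall) flags |= THIS_CALL;
        if (spread)   flags |= SPREAD_CALL;
        return flags;
    }

    static int callID(int packed) { return packed & 0xFFFF; }

    public static void main(String[] args) {
        int packed = pack(7, true, false, true);
        System.out.println(callID(packed));                  // 7
        System.out.println((packed & SAFE_NAVIGATION) != 0); // true
        System.out.println((packed & THIS_CALL) != 0);       // false
    }
}
```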
Comment on Jochen's long stack traces.
The difference must be in how our languages expect the call site to
resolve. In my case I compile all of the target methods to match the
callsite stack structure, so the fast path adds no additional
manipulations (binds etc.) between the callsite and the
Thanks John, it does bring up a topic I have wanted to ask about:
Hotspot's specialization for Java and how I could take advantage of it,
particularly in the area of PIC optimization.
You mention:
And we expect to get better at capturing the call-site specific types,
values,
John,
Thanks for this detailed analysis on the current status and proposed future
work for invokedynamic. Can you also add some comments on what you believe the
advantages and disadvantages of using Truffle instead of invokedynamic for
implementing dynamic languages on top of the JVM are?
-
Thomas,
Thanks for this detailed analysis on the current status and proposed
future work for invokedynamic. Can you also add some comments on what you
believe the advantages and disadvantages of using Truffle instead of
invokedynamic for implementing dynamic languages on top of the JVM are?
Yes. Truffle aims to become a production-quality system. A successful research
project should ultimately also advance the state of the art of what is used in
production. We are well beyond the initial exploration phase for Truffle and
focusing currently on stabilisation. There is a Truffle
On 29.08.2014 11:03, Thomas Wuerthinger wrote:
John,
Thanks for this detailed analysis on the current status and proposed
future work for invokedynamic. Can you also add some comments on what
you believe the advantages and disadvantages of using Truffle instead of
invokedynamic for
We are happy to no longer discuss Truffle in this thread if you are looking for
more short-term solutions and keeping the usage of invokedynamic as an
invariant. I am confident that Truffle can reach production quality within 12
months. People interested in Truffle can take a look at the
Thomas stated:
A successful research project should ultimately also advance the state
of the art of what is used in production.
Thomas, one of the reasons many of us are building on the JVM is to take
advantage of the entire universe of Java code available. Truffle, to me at
Thanks for your comment, Mark. Truffle is not at all meant as a replacement for
Java or the JVM. We fully rely on regular and unmodified Java bytecodes for the
definition of the Truffle guest language interpreters and on regular Java
objects for the Truffle guest language object model. We
I think I said two things at the same time and I apologize. I am
interested in startup time and also warmup time. Eventual performance
looks great on Chris's blogs...
-Tom
On Fri, Aug 29, 2014 at 1:29 PM, Thomas E Enebo tom.en...@gmail.com wrote:
Thomas,
I am very excited about
Truffle is an architecture based on AST interpreters. This means that immediate
startup is as fast as it can get: load an AST interpreter written in Java
into your VM and start executing it. Obviously, this heavily depends on fine
engineering of the guest language runtime and interpreter
Thanks for the stack traces Jochen, interesting.
I really have no place to complain but I can see your point.
regards
mark
mlvm-dev mailing list
mlvm-dev@openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/mlvm-dev
On Aug 22, 2014, at 1:08 PM, Charles Oliver Nutter head...@headius.com wrote:
Marcus coaxed me into making a post about our indy issues. Our indy
issues mostly surround startup and warmup time, so I'm making this a
general post about startup and warmup.
This is a vigorous and interesting
On 24.08.2014 20:33, Charles Oliver Nutter wrote:
On Sun, Aug 24, 2014 at 12:55 PM, Jochen Theodorou blackd...@gmx.org wrote:
afaik you can set how many times a lambda form has to be executed before it
is compiled... what happens if you set that very low... like 1 and disable
tiered
Regarding indy-dense code:
It is certainly a problem both for JRuby with indy and Nashorn with indy that
indy scalability is so bad in JDK 9 builds with the current JITs. I suspect that as
Java 8 grows as a code base and as a language, it will turn into a problem with
Java 8 lambdas too. Nashorn
As I said (many times) before, the methodhandles/indy should be built using
a minimal interpreter-supported calling convention:
invoke (which can box/unbox at runtime)
All methodhandles should then be written as static bytecode working with
objects (no dynamic generation of specialized bytecode)
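A rough sketch of what such a boxed, statically-compiled combinator could look like (my own illustration of the proposal, not an actual implementation): combinators are ordinary Java code over `Object[]` arguments, so no bytecode is ever generated at runtime:

```java
import java.util.Arrays;

public class BoxedCombinators {
    // A minimal "method handle" that always boxes: one erased entry point.
    interface Handle { Object invoke(Object... args); }

    // A dropArguments-style combinator written as plain static Java code:
    // discard the first argument, delegate the rest to the target.
    static Handle dropFirst(Handle target) {
        return args -> target.invoke(Arrays.copyOfRange(args, 1, args.length));
    }

    public static void main(String[] args) {
        Handle add = a -> (Integer) a[0] + (Integer) a[1];
        Handle dropped = dropFirst(add);
        System.out.println(dropped.invoke("ignored", 20, 22)); // 42
    }
}
```

The trade-off Rémi's proposal accepts is visible here: boxing and varargs arrays on every call, in exchange for zero class generation and a shape the interpreter and JIT can handle with ordinary inlining.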
On Mon, Aug 25, 2014 at 4:32 AM, Marcus Lagergren
marcus.lagerg...@oracle.com wrote:
LambdaForms were most likely introduced as a platform-independent way of
implementing methodhandle combinators in 8, because the 7 native
implementation was not very stable, but it was probably a mistake to
Charlie,
Truffle is such a general-purpose automatic specialization mechanism: it
works, as you say, via just writing Java code, without the need to use
invokedynamic and without the need to dynamically generate bytecodes.
- thomas
On 25 Aug 2014, at 15:25, Charles Oliver Nutter
Hi Per!
This is mostly invokedynamic related. Basically, an indy callsite requires a
lot of implicit class and bytecode generation; that is the source of the
overhead we are mostly discussing. While tiered compilation adds
non-determinism, it is usually (IMHO) bearable…
/M
On 23 Aug 2014,
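The relinking behavior an indy call site provides can be shown directly through `MutableCallSite`, skipping the bytecode-generation step being discussed (the target methods here are invented for illustration; a real indy site would be created by a bootstrap method):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

public class CallSiteSketch {
    static int one() { return 1; }
    static int two() { return 2; }

    static MethodHandle target(String name) {
        try {
            return MethodHandles.lookup().findStatic(
                    CallSiteSketch.class, name, MethodType.methodType(int.class));
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }

    // The site starts out linked to one(); dynamicInvoker() is the handle an
    // indy instruction would effectively call through.
    static final MutableCallSite SITE = new MutableCallSite(target("one"));
    static final MethodHandle INVOKER = SITE.dynamicInvoker();

    static int call() {
        try { return (int) INVOKER.invokeExact(); }
        catch (Throwable t) { throw new AssertionError(t); }
    }

    // Relinking: swap the target and the same invoker sees the new behavior.
    static void relink(String name) { SITE.setTarget(target(name)); }

    public static void main(String[] args) {
        System.out.println(call()); // 1
        relink("two");
        System.out.println(call()); // 2
    }
}
```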
On 08/24/2014 03:46 AM, Marcus Lagergren wrote:
Hi Per!
This is mostly invokedynamic related. Basically, an indy callsite
requires a lot of implicit class and bytecode generation; that is
the source of the overhead we are mostly discussing. While tiered
compilation adds non-determinism, it is
On 22.08.2014 22:08, Charles Oliver Nutter wrote:
[...]
2. Lambda forms are too slow to execute and take too long to optimize
down to native code. Lambda forms work sorta like the tiered compiler.
They'll be interpreted for a while, then they'll become JVM bytecode
for a while, which
On Sun, Aug 24, 2014 at 12:55 PM, Jochen Theodorou blackd...@gmx.org wrote:
afaik you can set how many times a lambda form has to be executed before it
is compiled... what happens if you set that very low... like 1 and disable
tiered compilation?
Forcing all handles to compile early has the
In answer to Charles's question on what others do to help startup:
Smalltalk is like Ruby in that we always start from source code. In our
case there are a few hundred classes and a few thousand methods that
would be considered minimal (we call these the base) and around a
thousand classes
I am more on the side that invokedynamic is awesome for enabling
dynamic languages on the JVM. Given that, there are two areas where I
can see some help for my use case, which is a true Smalltalk on the JVM.
First, like Charles, I do have a few dependencies on plain old Java
methods during
I agree completely with Charlie’s assessment about Lambda Forms being a
problematic mechanism for indy call site linking due to its:
* Lack of scalability (explosion of bytecode)
* Metaspace usage
and everything else that has been described below.
I’m currently recovering after surgery and a
On 08/22/2014 01:08 PM, Charles Oliver Nutter wrote:
What are the rest of you doing to deal with these issues?
Start-up does not appear to be a problem for Kawa:
https://sourceware.org/ml/kawa/2014-q2/msg00069.html
I tried running the 'meteor' benchmark program ('2098 solutions found'),
which is
On 08/23/2014 12:25 PM, Per Bothner wrote:
On 08/22/2014 01:08 PM, Charles Oliver Nutter wrote:
What are the rest of you doing to deal with these issues?
Start-up does not appear to be a problem for Kawa:
I should mention I'm not using invokedynamic, and have no
concrete plans to do so.
Hi Charles
Just out of curiosity and a desire to compare my times to yours:
how long is it from the time of launch until the Ruby code can
execute? And how long until you see peak performance?
I always run 64-bit, mainly on a Mac, so I have been using server
mode from the start.