for a lot of
things, adding dynamicity and laziness in places where indy might not be
the best fit. One example of that here:
https://bugs.openjdk.java.net/browse/JDK-8186216
/Claes
[1]
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-November/056480.html
On 2018-11-07 18:01
The Great Startup Problem [1] was a wonderful read highlighting the
problems surrounding the early life of a JVM application. In-flight efforts
that allow (to a limited extent) better startup characteristics include
jaotc, substrateVM, jlink. These efforts all trade away dynamicity of some
sort
JRuby loads about 4000 of its own classes (on top of more than 1000 system
classes) during execution of just '-e 1'. That is a lot of data to load, parse,
and verify.
I played with CDS (Class Data Sharing) with the JRuby classes included in the
archive. We can do that since jruby.jar is on the boot class path, but it
requires some manual steps
Charlie,
Is it acceptable, and does it solve the problem for you?
This is acceptable for JRuby. Our worst-case Ruby method handle chain
will include at most:
* Two CatchExceptions for pre/post logic (heap frames, etc). Perf of
CatchException compared to literal Java try/catch is important here.
* Up
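As a rough illustration of the catchException part mentioned in the list above
(this is not JRuby's actual code; the class and method names below are
invented), a minimal plain-Java sketch could look like this, with the handler
standing in for the "post" cleanup on the exceptional path:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class CatchExceptionSketch {
    static int target(int x) { return x * 2; }

    // Handler receives the exception followed by the original argument(s); it
    // stands in for the "post" logic (e.g. popping a heap frame) before rethrowing.
    static int cleanupAndRethrow(Throwable t, int x) {
        System.out.println("pop frame for arg " + x);
        throw new RuntimeException(t);
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodHandle target = l.findStatic(CatchExceptionSketch.class, "target",
                MethodType.methodType(int.class, int.class));
        MethodHandle handler = l.findStatic(CatchExceptionSketch.class, "cleanupAndRethrow",
                MethodType.methodType(int.class, Throwable.class, int.class));
        // Equivalent in spirit to a literal try/catch around the target call.
        MethodHandle guarded = MethodHandles.catchException(target, Throwable.class, handler);
        System.out.println((int) guarded.invokeExact(21)); // 42; handler only runs on throw
    }
}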
On 02.09.2014 16:38, Vladimir Ivanov wrote:
[...]
It's possible to optimize some shapes of method handle chains (like
nested GWTs) and tailor a special LambdaForm shape or do some inlining
during bytecode translation. Though such specialization contradicts the LF
sharing goal, the probable benefits may w
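To make the "nested GWTs" shape concrete, here is a hedged sketch (class and
method names are invented, not taken from any real runtime): two guardWithTest
layers forming a two-entry inline cache in front of a generic fallback.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class NestedGwtSketch {
    static boolean isInteger(Object o) { return o instanceof Integer; }
    static boolean isString(Object o)  { return o instanceof String; }
    static Object intCase(Object o)    { return "int path: " + o; }
    static Object stringCase(Object o) { return "string path: " + o; }
    static Object generic(Object o)    { return "generic path: " + o; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodType test = MethodType.methodType(boolean.class, Object.class);
        MethodType unary = MethodType.methodType(Object.class, Object.class);

        MethodHandle pic =
            MethodHandles.guardWithTest(                       // outer GWT
                l.findStatic(NestedGwtSketch.class, "isInteger", test),
                l.findStatic(NestedGwtSketch.class, "intCase", unary),
                MethodHandles.guardWithTest(                   // nested GWT
                    l.findStatic(NestedGwtSketch.class, "isString", test),
                    l.findStatic(NestedGwtSketch.class, "stringCase", unary),
                    l.findStatic(NestedGwtSketch.class, "generic", unary)));

        System.out.println(pic.invoke(1));     // int path
        System.out.println(pic.invoke("x"));   // string path
        System.out.println(pic.invoke(3.14));  // generic path
    }
}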
Jochen,
>> "N frames per chain of N method handles" looks reasonable for me, but it
depends on average number of transformations users apply. If the case of
deep method handle chains is common in practice, we need to optimize for
it as well and linear dependency in stack space may be too much.
>>>> … Java or C# background, where their IDE is a text editor and a command line.
>>>
>>> Now I feel almost insulted ;) I get scolded so often that I treat my IDE
>>> only as a better text editor... I agree in general though.
>>> I think this is not s
On 01.09.2014 15:24, Vladimir Ivanov wrote:
[...]
"N frames per chain of N method handles" looks reasonable for me, but it
depends on average number of transformations users apply. If the case of
deep method handle chains is common in practice, we need to optimize for
it as well and linear depe
On Mon, Sep 1, 2014 at 2:07 AM, Vladimir Ivanov wrote:
> Stack usage won't be constant though. Each compiled LF being executed
> consumes 1 stack frame, so for a method handle chain of N elements, its
> invocation consumes ~N stack frames.
>
> Is it acceptable, and does it solve the problem for you?
This is acceptable for JRuby.
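A hedged way to observe the "~N stack frames" effect described above
(illustrative only; the exact frame counts depend on the JDK version and on
whether the lambda forms are still interpreted, already compiled, or inlined):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class ChainDepthSketch {
    // Crude probe: how deep is the Java stack when the innermost handle runs?
    static int probeDepth() { return Thread.currentThread().getStackTrace().length; }
    static int identity(int x) { return x; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodHandle mh = l.findStatic(ChainDepthSketch.class, "probeDepth",
                MethodType.methodType(int.class));
        MethodHandle filter = l.findStatic(ChainDepthSketch.class, "identity",
                MethodType.methodType(int.class, int.class));
        for (int i = 0; i < 50; i++) {
            // each filterReturnValue adds one more link between call site and target
            mh = MethodHandles.filterReturnValue(mh, filter);
        }
        System.out.println("depth seen inside a 50-link chain: " + (int) mh.invokeExact());
    }
}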
Jochen,
Is it acceptable, and does it solve the problem for you?
Let me ask you what you consider acceptable. I am quite interested in
the JVM engineers' point of view here.
"N frames per chain of N method handles" looks reasonable to me, but it
depends on the average number of transformations users apply.
…we would have to create a new JVM it would easily take over an
hour to execute. With Groovy startup included, probably more than 6 hours.
Yes, this is a result of the great startup problem. But the Java community
finds ways around it. The problem is that in JRuby you have to try to force a
Ruby mechanism onto the JVM.
…is kept around, but imho this is already done from the
Java world. We do nothing special here most of the time. But of course this is
related to slow startup speeds of the JVM. groovy-core has around 7k tests; if
for each of them we would have to create a new JVM it would easily take over an
hour to execute.
On 01.09.2014 09:07, Vladimir Ivanov wrote:
Jochen,
The stack traces you provide are so long due to LambdaForm
interpretation. Most of the stack frames are the following:
java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1147)
java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
Jochen,
The stack traces you provide are so long due to LambdaForm
interpretation. Most of the stack frames are the following:
java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1147)
java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
java.lang.invoke.La
Comment on Jochen's long stack traces.
The difference must be in how our languages expect the call site to
resolve.
In my case I compile all of the target methods to match the callsite stack
structure.
So the fast path adds no additional manipulations (binds, etc.) between
the callsite and the
On 29.08.2014 21:19, Jochen Theodorou wrote:
[...]
Maybe the situation would
already improve if I made callID, safeNavigation, thisCall, and
spreadCall into one int.
An addendum to this... I actually already have those as a single int. I tried
moving all that information into the callsite object
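A minimal sketch of the "one int" packing idea mentioned above (the bit layout
below is invented for illustration and is not Groovy's actual encoding; only
the field names mirror the ones listed):

final class CallInfo {
    private static final int SAFE_NAVIGATION = 1 << 0;
    private static final int THIS_CALL       = 1 << 1;
    private static final int SPREAD_CALL     = 1 << 2;
    private static final int FLAG_BITS       = 3;

    // Pack the call flags into the low bits and the callID above them.
    static int pack(int callID, boolean safe, boolean thisCall, boolean spread) {
        int flags = (safe ? SAFE_NAVIGATION : 0)
                  | (thisCall ? THIS_CALL : 0)
                  | (spread ? SPREAD_CALL : 0);
        return (callID << FLAG_BITS) | flags;   // callID limited to 29 bits here
    }

    static int callID(int packed)                 { return packed >>> FLAG_BITS; }
    static boolean isSafeNavigation(int packed)   { return (packed & SAFE_NAVIGATION) != 0; }
    static boolean isThisCall(int packed)         { return (packed & THIS_CALL) != 0; }
    static boolean isSpreadCall(int packed)       { return (packed & SPREAD_CALL) != 0; }
}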
Thanks for the stack traces Jochen, interesting.
I really have no place to complain but I can see your point.
regards
mark
Truffle is an architecture based on AST interpreters. This means that startup
is about as fast as it can get: load an AST interpreter written in Java
into your VM and start executing it. Obviously, this heavily depends on fine
engineering of the guest language runtime and interpreter (i.e.
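A toy, plain-Java sketch of the AST-interpreter idea described above (no
Truffle API is used; the node classes are invented): startup cost is
essentially just loading a handful of small classes and walking the tree.

// Each AST node knows how to execute itself.
interface Node { long execute(); }

final class LiteralNode implements Node {
    private final long value;
    LiteralNode(long value) { this.value = value; }
    public long execute() { return value; }
}

final class AddNode implements Node {
    private final Node left, right;
    AddNode(Node left, Node right) { this.left = left; this.right = right; }
    public long execute() { return left.execute() + right.execute(); }
}

class AstDemo {
    public static void main(String[] args) {
        Node program = new AddNode(new LiteralNode(40), new LiteralNode(2));
        System.out.println(program.execute()); // 42
    }
}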
On 29.08.2014 19:28, Mark Roos wrote:
Hi Jochen,
you wrote:
I also see potential for cases in which the MethodHandle gets overly
complex. In Groovy we have for example up to N+1 guards for a method
call with N arguments plus a catchException part and a switchpoint. Most
I think I said two things at the same time and I apologize. I am
interested in startup time and also warmup time. Eventual performance
looks great on Chris's blogs...
-Tom
On Fri, Aug 29, 2014 at 1:29 PM, Thomas E Enebo wrote:
> Thomas,
>
> I am very excited about RubyTruffle and Truffle/Graal
Thomas,
I am very excited about RubyTruffle and Truffle/Graal in general, but to
date I have never seen any numbers on startup time. From what I have
gleaned, startup time is not a fundamental design goal currently. I have
heard that some of these great numbers take many minutes to warm up
Thanks for your comment, Mark. Truffle is not at all meant as a replacement for
Java or the JVM. We fully rely on regular and unmodified Java bytecodes for the
definition of the Truffle guest language interpreters and on regular Java
objects for the Truffle guest language object model. We suppor
Thomas stated:
"A successful research project should ultimately also advance the state
of the art of what is used in production."
Thomas, one of the reasons many of us are building on the JVM is to take
advantage of the entire universe of Java code available. Truffle, to me at leas
Hi Jochen,
you wrote:
I also see potential for cases in which the MethodHandle gets overly
complex. In Groovy we have for example up to N+1 guards for a method
call with N arguments plus a catchException part and a switchpoint. Most
of them ending up in select
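To visualize the shape Jochen describes above, here is a hedged, simplified
sketch (not Groovy's actual call-site code; only one guard is shown where
Groovy would have N+1, and all names are invented): a guardWithTest in front
of a cached target, wrapped in catchException, wrapped in a SwitchPoint guard
so the whole cache can be invalidated.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

public class GroovyLikeSiteSketch {
    static boolean isString(Object o) { return o instanceof String; }
    static Object cachedTarget(Object receiver) { return "fast path for " + receiver; }
    static Object fallback(Object receiver) { return "slow path for " + receiver; }
    static Object onError(Throwable t, Object receiver) { return "recovered: " + t; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodType objToObj = MethodType.methodType(Object.class, Object.class);
        MethodHandle guard    = l.findStatic(GroovyLikeSiteSketch.class, "isString",
                MethodType.methodType(boolean.class, Object.class));
        MethodHandle target   = l.findStatic(GroovyLikeSiteSketch.class, "cachedTarget", objToObj);
        MethodHandle fallback = l.findStatic(GroovyLikeSiteSketch.class, "fallback", objToObj);
        MethodHandle handler  = l.findStatic(GroovyLikeSiteSketch.class, "onError",
                MethodType.methodType(Object.class, Throwable.class, Object.class));

        // 1. receiver-type guard (one of the "N+1 guards")
        MethodHandle site = MethodHandles.guardWithTest(guard, target, fallback);
        // 2. the catchException part
        site = MethodHandles.catchException(site, Throwable.class, handler);
        // 3. a SwitchPoint so the cached target can be dropped on a meta-class change
        SwitchPoint sp = new SwitchPoint();
        site = sp.guardWithTest(site, fallback);

        System.out.println(site.invoke("abc"));   // fast path
        System.out.println(site.invoke(42));      // guard fails -> slow path
        SwitchPoint.invalidateAll(new SwitchPoint[] { sp });
        System.out.println(site.invoke("abc"));   // invalidated -> slow path
    }
}

Each of the three layers is another method handle combinator, which is exactly
why such sites get deep quickly for real calls with many arguments.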
We are happy to no longer discuss Truffle in this thread if you are looking for
more short-term solutions and keeping the usage of invokedynamic as an
invariant. I am confident that Truffle can reach production quality within 12
months. People interested in Truffle can take a look at the Truffle
I think this is an excellent summary, John.
I also think that a scary point Charlie and I were trying to make in this
thread is that we have <= 12 months or so to slim down existing mechanisms to
something that works with startup for the existing indy solutions before 9 is
frozen. I think, giv
On 29.08.2014 11:03, Thomas Wuerthinger wrote:
John,
Thanks for this detailed analysis on the current status and proposed
future work for invokedynamic. Can you also add some comments on what
you believe the advantages and disadvantages of using Truffle instead of
invokedynamic for implementing dynamic languages on top of the JVM are?
Yes. Truffle aims to become a production-quality system. A successful research
project should ultimately also advance the state of the art of what is used in
production. We are well beyond the initial exploration phase for Truffle and
focusing currently on stabilisation. There is a Truffle branc
Thomas,
>
> Thanks for this detailed analysis on the current status and proposed
> future work for invokedynamic. Can you also add some comments on what you
> believe the advantages and disadvantages of using Truffle instead of
> invokedynamic for implementing dynamic languages on top of the JVM are?
John,
Thanks for this detailed analysis on the current status and proposed future
work for invokedynamic. Can you also add some comments on what you believe the
advantages and disadvantages of using Truffle instead of invokedynamic for
implementing dynamic languages on top of the JVM are?
- thomas
Thanks John, it does bring up a topic I have wanted to ask about:
Hotspot's specialization for Java and how I could take advantage of it,
particularly in the area of PIC optimization.
You mention:
And we expect to get better at capturing the call-site specific types,
values, an
On Aug 22, 2014, at 1:08 PM, Charles Oliver Nutter wrote:
> Marcus coaxed me into making a post about our indy issues. Our indy
> issues mostly surround startup and warmup time, so I'm making this a
> general post about startup and warmup.
This is a vigorous and interesting discussion. I will m
Charlie,
Truffle is such a general-purpose automatic specialization mechanism that works
like you say via just writing Java code and without a need to use invokedynamic
and without a need to dynamically generate bytecodes.
- thomas
On 25 Aug 2014, at 15:25, Charles Oliver Nutter wrote:
> On
On Mon, Aug 25, 2014 at 6:59 AM, Fredrik Öhrström wrote:
> Calle Wilund and I implemented such a indy/methodhandle solution for
> JRockit, so I know it works. You can see a demonstration here:
> http://medianetwork.oracle.com/video/player/589206011001 That
> implementation jumps to C-code that per
On Mon, Aug 25, 2014 at 4:32 AM, Marcus Lagergren wrote:
> LambdaForms were most likely introduced as a platform independent way of
> implementing methodhandle combinators in 8, because the 7 native
> implementation was not very stable, but it was probably a mistake to add them
> as “real” classes
> don't start a new JVM for each
> test. Maybe not even for each test suite. Groovy generally goes with the
> JVM instance here. Actually it is not even easily possible to spawn
> separate Groovy environments in the same JVM. In Grails a new environment
> might be spawned on a per s
> …already done from
> the Java world. We do nothing special here most of the time. But of course
> this is related to slow startup speeds of the JVM. groovy-core has around 7k
> tests, if for each of them we would have to create a new JVM it would easily
> take over an hour to execute. With
more than 6 hours.
Yes, this is a result of the great startup problem. But the Java
community finds ways around it. The problem is that in JRuby you have to
try to force a Ruby mechanism onto the JVM. And this works properly only
if the JVM can behave as much like Ruby as needed. And in regards
I am more on the side that invokedynamic is awesome for enabling dynamic
languages on the JVM. Given that, there are two areas where I can see some
help for my use case, which is a true Smalltalk on the JVM.
First, like Charles, I do have a few dependencies on plain old Java methods
during startu
In answer to Charles's question on what others do to help startup:
Smalltalk is like Ruby in that we always start from source code. In our case
there are a few hundred classes and a few thousand methods that would
be considered minimal (we call these the base) and around a thousand
classes a
On 08/24/2014 11:25 AM, Charles Oliver Nutter wrote:
On Sun, Aug 24, 2014 at 12:02 PM, Per Bothner wrote:
(1) Kawa shows you can have dynamic languages on the JVM that both
run fast and have fast start-up.
Like Clojure, I'd only consider Kawa to be *somewhat* dynamic. Most
function calls ca
On Sun, Aug 24, 2014 at 12:55 PM, Jochen Theodorou wrote:
> afaik you can set how many times a lambda form has to be executed before it
> is compiled... what happens if you set that very low... like 1 and disable
> tiered compilation?
Forcing all handles to compile early has the same negative
ef
Hi Per,
> (4) Invokedynamic was a noble experiment to alleviate (2), but so far it
> does not seem to have solved the problems.
>
> (5) It is reasonable to continue to seek improvements in invokedynamic,
> but in terms of resource prioritization other enhancements in the Java
> platform
> (value t
On Sun, Aug 24, 2014 at 12:02 PM, Per Bothner wrote:
> On 08/24/2014 03:46 AM, Marcus Lagergren wrote:
>> This is mostly invokedynamic related. Basically, an indy callsite
>> requires a lot of implicit class and byte code generation, that is
>> the source of the overhead we are mostly discussing.
On 22.08.2014 22:08, Charles Oliver Nutter wrote:
[...]
2. Lambda forms are too slow to execute and take too long to optimize
down to native code. Lambda forms work sorta like the tiered compiler.
They'll be interpreted for a while, then they'll become JVM bytecode
for a while, which interprets
On 08/24/2014 03:46 AM, Marcus Lagergren wrote:
Hi Per!
This is mostly invokedynamic related. Basically, an indy callsite
requires a lot of implicit class and byte code generation, that is
the source of the overhead we are mostly discussing. While tiered
compilation adds non-determinism, it is usually (IMHO) bearable…
Hi Per!
This is mostly invokedynamic related. Basically, an indy callsite requires a
lot of implicit class and byte code generation, that is the source of the
overhead we are mostly discussing. While tiered compilation adds non
determinism, it is usually (IMHO) bearable…
/M
On 23 Aug 2014,
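To ground the "implicit class and byte code generation" point, here is a
hedged sketch of the Java-visible side of an indy call site: a bootstrap
method of the kind a dynamic-language compiler would reference from its
invokedynamic instructions. Note that javac will not emit the instruction
itself for this code; the main method just simulates linking by calling the
bootstrap directly, and the names are illustrative.

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BootstrapSketch {
    // A bytecode compiler would point an invokedynamic instruction's
    // BootstrapMethods entry at this method.
    public static CallSite bootstrap(MethodHandles.Lookup lookup, String name, MethodType type)
            throws NoSuchMethodException, IllegalAccessException {
        // Link the site once to a concrete target; a dynamic language would
        // typically install a guarded or mutable call site here instead.
        return new ConstantCallSite(lookup.findStatic(BootstrapSketch.class, name, type));
    }

    static String greet(String who) { return "hello, " + who; }

    public static void main(String[] args) throws Throwable {
        CallSite cs = bootstrap(MethodHandles.lookup(), "greet",
                MethodType.methodType(String.class, String.class));
        System.out.println((String) cs.dynamicInvoker().invokeExact("indy"));
    }
}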
On 08/23/2014 12:25 PM, Per Bothner wrote:
On 08/22/2014 01:08 PM, Charles Oliver Nutter wrote:
What are the rest of you doing to deal with these issues?
Start-up does not appear to be a problem for Kawa:
I should mention I'm not using invokedynamic, and have no
concrete plans to do so. Howe
On 08/22/2014 01:08 PM, Charles Oliver Nutter wrote:
What are the rest of you doing to deal with these issues?
Start-up does not appear to be a problem for Kawa:
https://sourceware.org/ml/kawa/2014-q2/msg00069.html
I tried running the 'meteor' benchmark program ('2098 solutions found'),
which is
I agree completely with Charlie’s assessment about Lambda Forms being a
problematic mechanism for indy call site linking due to its
* Lack of scalability (explosion of byte code)
* Metaspace usage
and everything else that has been described below.
I’m currently recovering after surgery and a b
Hi Charles
Just out of curiosity and a desire to compare my times to yours:
how long is it from the time of launch until the Ruby code can
execute? And how long until you see peak performance?
I always run 64-bit and mainly on a Mac, so I have been using server
mode from the start.
Fo
Marcus coaxed me into making a post about our indy issues. Our indy
issues mostly surround startup and warmup time, so I'm making this a
general post about startup and warmup.
When I started working on JRuby 7 years ago, I hoped we'd have a good
answer for poor startup time and long warmup times.