On 2014-03-06 14:23, Tal Liron wrote:
Have you looked at the dump I linked to before?
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
Yes, this is the dump I looked at.
Of course it could be a memory leak in my code, but by far the largest
number of instances are held by lambda forms
Have you looked at the dump I linked to before?
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
Of course it could be a memory leak in my code, but by far the largest
number of instances are held by lambda forms. It could be that they take
very little memory, so it might not be importa
Hi Tal,
I'm sorry this problem still persists.
Back when I tried your app I threw a lot of apache bench requests at it
but didn't see a leak. Looking at your dump again I notice that while
there are a lot of LambdaForm related instances they don't occupy a
large part of the heap.
The bigges
Well, I've reached the limits of my personal knowledge to work on this
problem. I'm happy to assist in any way I can, including providing
access to servers.
As it stands, however, it seems that this problem will follow through
into the official release of OpenJDK 8, which means that I won't be
I've been away for a month. Has anyone with knowhow followed up on this?
The issue is still present.
On 01/18/2014 02:51 PM, Tal Liron wrote:
I have a new dump that will hopefully be more useful:
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
From what I can tell, indeed lambda forms
Just run the Prudence example applications. There's the default
example that comes with the distribution, but it's not data-driven.
You can try the MongoVision application to test a MongoDB backend. Or
the Stickstick demo to test relational databases (comes with H2 built
in, but can be easily modif
We can safely disable function node snapshots (which are going away anyway),
since lazy code generation, which will be done differently in 8u20, is the only
thing that uses them, and it is not enabled or supported. Should be fairly
simple and low risk to do.
Tal - is there a way to set up your reproduction
I looked at the heap dump - in particular Nashorn objects in it. Lots of
codegen Label /Frame retained from RecompilableScriptFunctionData.
Didn't find any specific leak as such - but lots of stuff is retained
for recompilation. We'll check if we can avoid that.
-Sundar
On Monday 20 January 2
Hi,
Haven't had chance yet to look at the zip. But, I plan to look at it
before EOD.
-Sundar
On Saturday 18 January 2014 12:21 PM, Tal Liron wrote:
I have a new dump that will hopefully be more useful:
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
From what I can tell, indeed lam
I have a new dump that will hopefully be more useful:
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
From what I can tell, indeed lambda forms are way out of control here.
Generally, too, there is a huge amount of Nashorn-related instances,
which may be related.
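(If it helps, a quick class histogram of live objects is a lighter-weight way
to see which classes dominate than a full dump; <pid> is a placeholder here:)

# print per-class instance and byte counts, largest first, top 30 lines
jmap -histo:live <pid> | head -n 30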
(Note that Log4j 2.0
On 2014-01-09 16:29, Kirk Pepperdine wrote:
Hi Marcus,
Looks like some of the details have been chopped off. Is there a GC log
available? If there is a problem with MethodHandle, a workaround might be as
simple as expanding perm... but wait, this is metaspace now and it should grow
as long as
Thanks! I've restarted everything with more flags, so hopefully we'll
get more data next time.
In the meantime, I've also learned about this Ubuntu-specific issue with
ptrace that affects jmap use:
http://blog.thecodingmachine.com/fr/content/fixing-java-memory-leaks-ubuntu-1104-using-jmap
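(Assuming that article is about the Yama ptrace restriction, which is my
reading of it, the usual workaround is something like:)

# allow ptrace of non-child processes (jmap -F / the serviceability agent needs this); resets on reboot
sudo sysctl -w kernel.yama.ptrace_scope=0
# to persist across reboots, set kernel.yama.ptrace_scope = 0 in /etc/sysctl.d/10-ptrace.conf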
On
Hi,
The heap dump doesn't contain much info. When I tried to open it with the
'jhat' tool, I saw only basic JDK core classes (Class, ClassLoader etc.) and
nothing else. jmap with the -F flag uses the HotSpot serviceability agent to
dump the heap, i.e., it is read from outside the process. Such a dump is
done
Well, it happened again: I only caught it a few hours later, when CPU
levels went back to normal.
Couldn't connect with VisualVM (my fault, I did not enable remote JMX
support when I started it -- you know, the thing with the RMI port
usually being random).
But, here's the dump I got from
If you do get such a heap dump, please make a .tar.gz of the same
available to us. We could debug at our end as well.
Thanks
-Sundar
On Friday 10 January 2014 01:06 AM, Vladimir Ivanov wrote:
Heap dumps enable post-mortem analysis of OOMs.
Pass -XX:+HeapDumpOnOutOfMemoryError to the VM and it'll
Heap dumps enable post-mortem analysis of OOMs.
Pass -XX:+HeapDumpOnOutOfMemoryError to the VM and it'll dump the heap
before exiting, or use jmap (-dump:live,format=b,file= ) or
visualvm to take a snapshot from a running process.
There are a number of tools to browse the contents.
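(For example; the file names and pid below are placeholders:)

# dump automatically when an OutOfMemoryError is thrown
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/oom.hprof ...
# or take a snapshot of a running process
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>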
Best regards,
From: nashorn-dev-boun...@openjdk.java.net
[mailto:nashorn-dev-boun...@openjdk.java.net] On Behalf Of Tal Liron
Sent: Thursday, January 09, 2014 2:01 PM
To: nashorn-dev@openjdk.java.net
Subject: Re: GC overhead limit exceeded
Indeed, scripts are reused in this case, though I can't guarantee that there
isn't a bug somew
Hi Marcus,
Looks like some of the details have been chopped off. Is there a GC log
available? If there is a problem with MethodHandle, a workaround might be as
simple as expanding perm... but wait, this is metaspace now and it should grow
as long as your system has memory to give to the process.
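(For what it's worth: Metaspace is indeed unlimited by default in 8, but it can
still be capped if a hard ceiling is wanted; the sizes below are just examples:)

# initial high-water mark and hard cap for Metaspace
-XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m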
Indeed, scripts are reused in this case, though I can't guarantee that
there isn't a bug somewhere on my end.
I'm wondering if it might be triggered by another issue: Prudence
supports an internal crontab-like feature (based on cron4j), and these
are again Nashorn scripts being run, once a min
Tal,
I've been throwing requests at the Prudence test app for the last 20
minutes or so. I do see that it uses a lot of metaspace, close to 50M in
my case. The test app seems to load/unload 2 classes per request with
Rhino compared to 4 classes per request with Nashorn, which is probably
due t
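(A rough way to watch the metaspace growth and class churn while the load test
runs; the pid and interval are placeholders:)

# sample GC and metaspace usage every 5 seconds
jstat -gc <pid> 5000
# and/or log class loading and unloading to see the per-request churn
java -verbose:class ...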
You may download the latest release of Prudence, run it and bombard it
with hits (use ab or a similar tool):
http://threecrickets.com/prudence/download/
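For example (the port and path are just placeholders; point it at wherever the
example app is actually mounted):

# 100k requests, 20 concurrent
ab -n 100000 -c 20 http://localhost:8080/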
To get the GC logs, start it like so:
JVM_SWITCHES=\
-Xloggc:/full/path/to/logs/gc.log \
-XX:+PrintGCDetails \
-XX:+PrintTenuringDistribution
Tal - The GC people 10 meters behind me want to know if you have a repro of
your full GC to death problem that they can look at? They’re interested.
/M
On 09 Jan 2014, at 16:29, Kirk Pepperdine wrote:
> Hi Marcus,
>
> Looks like some of the details have been chopped off. Is there a GC log
>
Unfortunately, this workaround is unacceptable in many deployment
environments. I would thus consider this a showstopping bug for Nashorn,
and I hope it can be escalated.
(I understand that this is not the Nashorn project's "fault," but the
bottom line is that Nashorn cannot be used in long-ru
Regarding OOME, it's expected in this situation.
If you look at the end of the log, you'll see a set of consecutive Full
GCs. It means the Java heap is almost full and has reached its maximum size.
And the application is almost halted: the VM collects the whole heap over and
over again (>98% of application
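(For context only: that throttling is the "GC overhead limit exceeded" check
itself. It can be switched off, which just defers the OOME rather than fixing
anything:)

-XX:-UseGCOverheadLimit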
This almost certainly stems from MethodHandle combinators being implemented as
lambda forms, which are generated as anonymous Java classes. One of
the things that is being done for 8u20 is to drastically reduce the number of
lambda forms created. I don’t know of any workaround at the moment
It happened again, and here's the gc.log: http://pastebin.com/DFA7CYC1
Interestingly enough, the application kept working, though I was getting
intermittent 100% CPU use.
On 01/06/2014 01:57 PM, Benjamin Sieffert wrote:
Hi everyone,
we have been observing similar symptoms from 7u40 onwards (
Hi everyone,
we have been observing similar symptoms from 7u40 onwards (using
nashorn-backport with j7 -- j8 has the same problems as 7u40 and 7u45...
7u25 is the last version that works fine) and suspect the cause to be the
JSR-292 changes that took place there. Iirc I already asked over on their
Thanks! I didn't know of these. I'm not sure how to read the log, but
this doesn't look so good. I get a lot of "allocation failures" that
look like this:
Java HotSpot(TM) 64-Bit Server VM (25.0-b63) for linux-amd64 JRE
(1.8.0-ea-b121), built on Dec 19 2013 17:29:18 by "java_re" with gcc
4.3.
If this is a server-side application, then presumably you have at least the
minimum GC logging flags on?
-Xloggc: -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
I would regard these as the absolute minimum information for tools to be
able to help you - no JVM server process (and especially not
Flags are all default. I will do some VisualVM snooping and see what I
can find.
On 01/04/2014 06:40 PM, Ben Evans wrote:
Can you be more specific?
What kind of errors, what are your current GC flags, etc? How have you
determined that you don't have a memory leak & what the correct size
of y
Can you be more specific?
What kind of errors, what are your current GC flags, etc? How have you
determined that you don't have a memory leak & what the correct size of
your working set is, etc.?
Thanks,
Ben
On Sat, Jan 4, 2014 at 6:58 AM, Tal Liron wrote:
> I've been getting GC errors for lo
I've been getting GC errors for long-running Prudence/Nashorn processes.
Is this a known issue, perhaps JVM-related and not specific to Nashorn?