Can you just set localWorkers to a smaller value? It defaults to the number
of CPUs available, which is likely why you need more memory.
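
If you'd rather pin it than rely on the default, a sketch (untested, but
using the plugin's documented localWorkers parameter) for the
gwt-maven-plugin configuration::

    <configuration>
        <!-- cap parallel permutation compiles instead of one per CPU -->
        <localWorkers>4</localWorkers>
    </configuration>

or pass the equivalent -Dgwt.compiler.localWorkers=N on the command line,
as in the reply below.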

-Clint

On Wed, Feb 1, 2012 at 8:03 AM, pansen <[email protected]> wrote:

> hm, it was just too little memory.
>
> until we get more memory installed, we will reduce the permutations with
> ``mvn -U clean integration-test -Penv-config,env-hudson-javabot-1
> -Djetty.port=9855 -Dgwt.compiler.workDir=/data/tmp
> -Dgwt.compiler.localWorkers=10``
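>
> (for reference: the plugin defaults to one worker per CPU, so without
> the override this box was compiling 16 permutations in parallel; a quick
> check::
>
>     $ grep -c ^processor /proc/cpuinfo
>     16
>
> matches the CPU 0..15 entries in the dmesg output below.)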
>
> cheers, andi
>
> $ dmesg
> ....
> Out of memory: kill process 22718 (bash) score 7275710 or a child
> Killed process 22728 (java) vsz:6459436kB, anon-rss:4396kB, file-rss:260kB
> java invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0
> java cpuset=/ mems_allowed=0-1
> Pid: 5308, comm: java Tainted: G   M       ---------------- 2.6.32-131.12.1.el6.x86_64 #1
> Call Trace:
> [<ffffffff8111028b>] ? oom_kill_process+0xcb/0x2e0
> [<ffffffff81110850>] ? select_bad_process+0xd0/0x110
> [<ffffffff811108e8>] ? __out_of_memory+0x58/0xc0
> [<ffffffff81110ae9>] ? out_of_memory+0x199/0x210
> [<ffffffff81120232>] ? __alloc_pages_nodemask+0x812/0x8b0
> [<ffffffff811547da>] ? alloc_pages_vma+0x9a/0x150
> [<ffffffff81137dfb>] ? handle_pte_fault+0x76b/0xb50
> [<ffffffff8105738e>] ? activate_task+0x2e/0x40
> [<ffffffff8105d8ea>] ? try_to_wake_up+0xca/0x400
> [<ffffffff811383b8>] ? handle_mm_fault+0x1d8/0x2c0
> [<ffffffff810a2682>] ? do_futex+0x682/0xb00
> [<ffffffff810414e9>] ? __do_page_fault+0x139/0x480
> [<ffffffff81269326>] ? rwsem_wake+0x76/0x170
> [<ffffffff814e067e>] ? do_page_fault+0x3e/0xa0
> [<ffffffff814dda05>] ? page_fault+0x25/0x30
> Mem-Info:
> Node 0 DMA per-cpu:
> CPU    0: hi:    0, btch:   1 usd:   0
> CPU    1: hi:    0, btch:   1 usd:   0
> CPU    2: hi:    0, btch:   1 usd:   0
> CPU    3: hi:    0, btch:   1 usd:   0
> CPU    4: hi:    0, btch:   1 usd:   0
> CPU    5: hi:    0, btch:   1 usd:   0
> CPU    6: hi:    0, btch:   1 usd:   0
> CPU    7: hi:    0, btch:   1 usd:   0
> CPU    8: hi:    0, btch:   1 usd:   0
> CPU    9: hi:    0, btch:   1 usd:   0
> CPU   10: hi:    0, btch:   1 usd:   0
> CPU   11: hi:    0, btch:   1 usd:   0
> CPU   12: hi:    0, btch:   1 usd:   0
> CPU   13: hi:    0, btch:   1 usd:   0
> CPU   14: hi:    0, btch:   1 usd:   0
> CPU   15: hi:    0, btch:   1 usd:   0
> Node 0 DMA32 per-cpu:
> CPU    0: hi:  186, btch:  31 usd:   0
> CPU    1: hi:  186, btch:  31 usd:   0
> CPU    2: hi:  186, btch:  31 usd:   0
> CPU    3: hi:  186, btch:  31 usd:   0
> CPU    4: hi:  186, btch:  31 usd:   0
> CPU    5: hi:  186, btch:  31 usd:   0
> CPU    6: hi:  186, btch:  31 usd:   0
> CPU    7: hi:  186, btch:  31 usd:   0
> CPU    8: hi:  186, btch:  31 usd:   0
> CPU    9: hi:  186, btch:  31 usd:   0
> CPU   10: hi:  186, btch:  31 usd:   0
> CPU   11: hi:  186, btch:  31 usd:   0
> CPU   12: hi:  186, btch:  31 usd:   0
> CPU   13: hi:  186, btch:  31 usd:   0
> CPU   14: hi:  186, btch:  31 usd:   0
> CPU   15: hi:  186, btch:  31 usd:   0
> Node 0 Normal per-cpu:
> CPU    0: hi:  186, btch:  31 usd:   0
> CPU    1: hi:  186, btch:  31 usd:   0
> CPU    2: hi:  186, btch:  31 usd:   0
> CPU    3: hi:  186, btch:  31 usd:   0
> CPU    4: hi:  186, btch:  31 usd:   0
> CPU    5: hi:  186, btch:  31 usd:   0
> CPU    6: hi:  186, btch:  31 usd:  30
> CPU    7: hi:  186, btch:  31 usd:   0
> CPU    8: hi:  186, btch:  31 usd:   0
> CPU    9: hi:  186, btch:  31 usd:   0
> CPU   10: hi:  186, btch:  31 usd:  53
> CPU   11: hi:  186, btch:  31 usd:   0
> CPU   12: hi:  186, btch:  31 usd:   0
> CPU   13: hi:  186, btch:  31 usd:   0
> CPU   14: hi:  186, btch:  31 usd:   0
> CPU   15: hi:  186, btch:  31 usd:   0
> Node 1 Normal per-cpu:
> CPU    0: hi:  186, btch:  31 usd:   0
> CPU    1: hi:  186, btch:  31 usd:   0
> CPU    2: hi:  186, btch:  31 usd:   0
> CPU    3: hi:  186, btch:  31 usd:  59
> CPU    4: hi:  186, btch:  31 usd:   0
> CPU    5: hi:  186, btch:  31 usd:   0
> CPU    6: hi:  186, btch:  31 usd:   0
> CPU    7: hi:  186, btch:  31 usd:   0
> CPU    8: hi:  186, btch:  31 usd:   0
> CPU    9: hi:  186, btch:  31 usd:   0
> CPU   10: hi:  186, btch:  31 usd:   0
> CPU   11: hi:  186, btch:  31 usd:   0
> CPU   12: hi:  186, btch:  31 usd:   0
> CPU   13: hi:  186, btch:  31 usd:   0
> CPU   14: hi:  186, btch:  31 usd:   0
> CPU   15: hi:  186, btch:  31 usd:   0
> active_anon:3437718 inactive_anon:505697 isolated_anon:25859
> active_file:776 inactive_file:499 isolated_file:0
> unevictable:0 dirty:31 writeback:170 unstable:0
> free:33292 slab_reclaimable:4619 slab_unreclaimable:82150
> mapped:649 shmem:0 pagetables:11835 bounce:0
> Node 0 DMA free:15688kB min:80kB low:100kB high:120kB active_anon:0kB
> inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB
> isolated(anon):0kB isolated(file):0kB present:15300kB mlocked:0kB
> dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB
> slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB
> bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> lowmem_reserve[]: 0 3502 8047 8047
> Node 0 DMA32 free:38796kB min:19548kB low:24432kB high:29320kB
> active_anon:2836148kB inactive_anon:567268kB active_file:92kB
> inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
> present:3586464kB mlocked:0kB dirty:0kB writeback:104kB mapped:48kB
> shmem:0kB slab_reclaimable:108kB slab_unreclaimable:52kB
> kernel_stack:0kB pagetables:116kB unstable:0kB bounce:0kB
> writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> lowmem_reserve[]: 0 0 4545 4545
> Node 0 Normal free:27864kB min:25368kB low:31708kB high:38052kB
> active_anon:3869080kB inactive_anon:559148kB active_file:736kB
> inactive_file:1192kB unevictable:0kB isolated(anon):29028kB
> isolated(file):0kB present:4654080kB mlocked:0kB dirty:56kB
> writeback:376kB mapped:636kB shmem:0kB slab_reclaimable:9360kB
> slab_unreclaimable:176740kB kernel_stack:4176kB pagetables:23316kB
> unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
> all_unreclaimable? no
> lowmem_reserve[]: 0 0 0 0
> Node 1 Normal free:50432kB min:45104kB low:56380kB high:67656kB
> active_anon:7043712kB inactive_anon:892348kB active_file:2276kB
> inactive_file:796kB unevictable:0kB isolated(anon):81236kB
> isolated(file):0kB present:8273916kB mlocked:0kB dirty:68kB
> writeback:200kB mapped:1912kB shmem:0kB slab_reclaimable:9008kB
> slab_unreclaimable:151808kB kernel_stack:2720kB pagetables:23908kB
> unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2048
> all_unreclaimable? no
> lowmem_reserve[]: 0 0 0 0
> Node 0 DMA: 2*4kB 2*8kB 1*16kB 1*32kB 2*64kB 1*128kB 0*256kB 0*512kB
> 1*1024kB 1*2048kB 3*4096kB = 15688kB
> Node 0 DMA32: 146*4kB 47*8kB 31*16kB 9*32kB 13*64kB 11*128kB 10*256kB
> 29*512kB 13*1024kB 2*2048kB 0*4096kB = 38800kB
> Node 0 Normal: 786*4kB 558*8kB 329*16kB 184*32kB 48*64kB 17*128kB
> 4*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 26056kB
> Node 1 Normal: 769*4kB 723*8kB 699*16kB 369*32kB 164*64kB 31*128kB
> 8*256kB 3*512kB 0*1024kB 0*2048kB 0*4096kB = 49900kB
> 52420 total pagecache pages
> 50902 pages in swap cache
> Swap cache stats: add 35758417, delete 35707509, find 11245520/14149999
> Free swap  = 1460kB
> Total swap = 2097144kB
> 4194302 pages RAM
> 81228 pages reserved
> 25296 pages shared
> 3982847 pages non-shared
> Out of memory: kill process 3927 (java) score 338032 or a child
> Killed process 3927 (java) vsz:8112776kB, anon-rss:901872kB, file-rss:540kB
> ...
>
>
> On Jan 23, 3:36 pm, pansen <[email protected]> wrote:
> > thanks thomas.
> >
> > the ``<gen>`` option is set::
> >
> >         <plugin>
> >             <groupId>org.codehaus.mojo</groupId>
> >             <artifactId>gwt-maven-plugin</artifactId>
> >             <version>${org.codehaus.mojo.gwt-maven-plugin.version}</version>
> >             <!-- at this point there are no modules to configure.
> >                  all real modules are configured in subprojects. -->
> >
> >             <configuration>
> >                 <!--
> >                     http://stackoverflow.com/questions/88235/how-to-deal-with-java-lang-o... error
> >                 -->
> >                 <extraJvmArgs>-Xmx2048m -XX:MaxPermSize=1024m -Xss4096k -XX:+CMSClassUnloadingEnabled</extraJvmArgs>
> >                 <strict>true</strict>
> >                 <logLevel>${com.google.gwt.logLevel}</logLevel>
> >                 <style>OBF</style>
> >                 <mode>htmlunit</mode>
> >                 <testFailureIgnore>false</testFailureIgnore>
> >                 <testTimeOut>120</testTimeOut>
> >                 <generateDirectory>${root.basedir}/vz-gwt-main/target/generated-sources/gwt</generateDirectory>
> >                 <gen>${root.basedir}/vz-gwt-main/target/.generated</gen>
> >
> > i will try ``<timeOut>`` but i'm pretty sure it won't help.
> >
> > the point is: on a 4-core buildbot with an identical os and identical
> > build settings, the same project builds successfully. it's always
> > during the permutations that the 16-core machine fails.
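> >
> > (a rough back-of-envelope, assuming each worker's working set is on
> > the order of the configured ``-Xmx2048m``::
> >
> >      4-core bot :  4 workers x ~2 GB  =  ~8 GB peak
> >     16-core bot : 16 workers x ~2 GB  = ~32 GB peak
> >
> > the larger estimate would exhaust the ~16 GB box (4194302 pages of RAM
> > in the dmesg output) once its 2 GB of swap fills, while the smaller
> > one fits comfortably.)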
> >
> > we tried this with a virtualized centos a few weeks ago; now we have
> > just tried a real production hp g6 machine with rhel 6.1 installed.
> > the environment was deployed via jenkins.
> >
> > andi
> >
> > On Jan 19, 5:27 pm, Thomas Broyer <[email protected]> wrote:
> >
> > > Have you tried setting <timeOut>?
> > > http://mojo.codehaus.org/gwt-maven-plugin/compile-mojo.html#timeOut
> > >
> > > Also, one issue with the gwt-maven-plugin is that it always passes
> > > the -gen option to the GWT Compiler (the <gen> in the POM), which can
> > > make extensive use of disk I/O (particularly when using UiBinder,
> > > RPC/RequestFactory, the Editor framework, ClientBundle, I18N, GIN,
> > > etc.), so you'll want that directory to be on a fast disk.
> > > There's no way of disabling this (yet! patches welcome).
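> > >
> > > (one possible way to get that "fast disk" without new hardware: if
> > > the build host has memory headroom, point <gen> at a tmpfs-backed
> > > path, e.g. an illustrative <gen>/dev/shm/gwt-gen</gen>; tmpfs trades
> > > RAM for I/O, though, so it only helps when RAM is not already the
> > > bottleneck.)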