I think I've solved this - the Linux server was running Java 6u21, whereas the desktop is on 6u10. It seems there have been some subtle changes in the GC implementation between these versions which were causing the GC to thrash when nearing the heap limit. Running with u10 on the server brings the time down to 13s. Running u22, but doubling the available memory, brings it down to half that.
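For anyone hitting something similar, a sketch of how you might confirm the GC is thrashing and raise the heap ceiling. The flag values and jar name below are illustrative assumptions, not the exact ones I used; -verbose:gc, -XX:+PrintGCDetails, -Xmx and -Xms are standard HotSpot options on Java 6:

```shell
# Log GC activity so thrashing near the heap limit shows up as
# frequent full GCs that reclaim very little memory each time.
# -Xmx3g / -Xms3g double a hypothetical 1.5g ceiling; my-rules-app.jar
# is a placeholder for your own application.
java -verbose:gc -XX:+PrintGCDetails -Xmx3g -Xms3g -jar my-rules-app.jar
```

If the log shows back-to-back full collections just before the slowdown, giving the heap more headroom (or pinning the JVM update version) is worth trying.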
Tim

On 19 October 2010 14:09, Tim 4076 <[email protected]> wrote:
> My dev box runs Windows XP with an oldish Intel Core 2. It processes my
> rules in 13s, with a peak memory usage of about 600MB.
>
> When I run the exact same code on my 64-bit Linux server with a quad 2.40GHz
> Xeon, it takes 22s and uses a peak of 1.5GB memory.
>
> What on earth is going on here? I would expect it to run faster on the
> server and for the memory usage to be similar.
>
> Cheers,
> Tim
_______________________________________________
rules-users mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/rules-users
