[ 
http://issues.apache.org/jira/browse/DERBY-1614?page=comments#action_12425580 ] 
            
Myrna van Lunteren commented on DERBY-1614:
-------------------------------------------

Assuming that things get put in for a reason, and that people would (possibly) 
document that reason, I did some historical research.

The first incarnation of this harness server code that I could find in source 
control already forces -ms16m -mx32m -noasyncgc when connecting through RmiJdbc 
to Weblogic Server, which was supported in version 1.5 of Cloudscape.

Code to accept jvmflags was added in 2001 in Cloudscape, and at that point this 
section already looked much like the version that was contributed to Apache. (see:  
http://svn.apache.org/viewvc/db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/NetServer.java?revision=57503&view=markup)

The present code, which makes this optional depending on whether jvmflags is set, 
went in with revision 160372 for DERBY-121.
(see: 
http://svn.apache.org/viewvc/db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/harness/NetServer.java?r1=160372&r2=179859)
There is no specific documentation for this part of the change, but I assume the 
intent was simply to ensure the Network Server's heap size settings would get 
overridden when they are set through jvmflags.

I keep going back and forth between A and B. 
I wonder whether the default heap sizes of jdk131 vs. jdk142 vs. jdk15 etc. have 
changed. 
I also wonder whether certain JVMs have default sizes that differ from machine to 
machine (I know they differ between Windows and Linux, for example).
I worry about causing failures on machines where the tests previously ran fine.
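(As a quick sanity check of those defaults, something like the throwaway snippet 
below could be run under each JVM and platform with no heap flags at all. It is 
just an illustration, not part of any patch; note that Runtime.maxMemory() only 
exists in 1.4 and later, so it would not help for jdk131.)

    // Throwaway check of a JVM's effective heap settings; run it with no
    // -ms/-mx/-Xms/-Xmx flags to see that JVM's/platform's defaults.
    public class HeapDefaults {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // current total heap size, and the ceiling the heap may grow to
            System.out.println("totalMemory: " + rt.totalMemory() + " bytes");
            System.out.println("maxMemory:   " + rt.maxMemory() + " bytes"); // JDK 1.4+
        }
    }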

On the other hand, this change has been in for a while, so maybe it hasn't 
affected anyone. Also, the setting is hidden from view, and it is not working 
properly. Getting it to work properly would take more effort (a rough sketch 
follows), and we should also make sure it can't get overwritten by settings 
coming in through testSpecialProps or testJavaFlags. It would be clearer to leave 
the setting to the test and remove it from NetServer.java...
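(For concreteness, a corrected check in NetServer.start() might look roughly like 
the sketch below, falling back to the harness defaults only when neither spelling 
of a flag is present. It still wouldn't deal with flags arriving through 
testSpecialProps or testJavaFlags.)

    // sketch only: changing || to && means the harness defaults are used only
    // when *neither* the old nor the new spelling of the flag was given
    if (setJvmFlags && (jvmflags.indexOf("-ms") == -1)
                    && (jvmflags.indexOf("-Xms") == -1))
        // only setMs if no starting memory was given
        jvm.setMs(16*1024*1024); // -ms16m
    if (setJvmFlags && (jvmflags.indexOf("-mx") == -1)
                    && (jvmflags.indexOf("-Xmx") == -1))
        // only setMx if no max memory was given
        jvm.setMx(32*1024*1024); // -mx32m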

So, I cautiously vote for A. 


> Test harness overrides heap size settings when starting Network Server
> ----------------------------------------------------------------------
>
>                 Key: DERBY-1614
>                 URL: http://issues.apache.org/jira/browse/DERBY-1614
>             Project: Derby
>          Issue Type: Bug
>          Components: Test
>    Affects Versions: 10.2.0.0
>         Environment: Test frameworks DerbyNet and DerbyNetClient
>            Reporter: John H. Embretsen
>         Assigned To: John H. Embretsen
>             Fix For: 10.2.0.0
>
>         Attachments: DERBY-1614_v1.diff, DERBY-1614_v2.diff
>
>
> Test-specific heap size settings can be passed to the test harness using the 
> jvmflags system property, for example in a <testname>_app.properties file or 
> at the command line when starting a test, e.g. "-Djvmflags=-Xms32m^-Xmx32m".
> The test harness almost always overrides such settings when starting a new 
> Network Server using the 
> org.apache.derbyTesting.functionTests.harness.NetServer class of the test 
> harness. Currently, if _either_ -ms _or_ -Xms is missing from the jvmflags, 
> NetServer.start() adds -ms16777216. Also, if _either_ -mx _or_ -Xmx is 
> missing from the jvmflags, NetServer.start() adds -mx33554432. This has been 
> the case since SVN revision 420048 (July 8, 2006).
> Earlier revisions did not override the heap settings unless the newer -Xms or 
> -Xmx flags were used instead of the -ms and -mx flags. A patch for DERBY-1091 
> attempted (among other things) to make the harness recognize the newer flags 
> as well as the older flags, but the resulting behavior is (most likely) not 
> as intended. 
> If a test is run in either the DerbyNet framework or the DerbyNetClient 
> framework, the test-specific JVM flags should (probably) be used for the 
> Network Server JVM as well as the test JVM. Currently, even if non-default 
> heap size flags are passed to the harness, the server JVM will ignore these 
> settings since the harness adds -ms and/or -mx flags _after_ all other heap 
> flags. The exception is if both new and old versions of heap flags are passed 
> to the harness, e.g.:
> jvmflags=-ms32m^-Xms32m^-mx128m^-Xmx128m
> Here is the code causing this behaviour:
> if (setJvmFlags && ((jvmflags.indexOf("-ms") == -1) || 
> (jvmflags.indexOf("-Xms") == -1)))
>      // only setMs if no starting memory was given
>      jvm.setMs(16*1024*1024); // -ms16m
> if (setJvmFlags && ((jvmflags.indexOf("-mx") == -1) || 
> (jvmflags.indexOf("-Xmx") == -1)))
>      // only setMx if no max memory was given
>      jvm.setMx(32*1024*1024); // -mx32m

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
