Where is Tristan
I will be OOO from Aug 24th to Sept 7th. I will be checking email intermittently, but I won't be online. Drop me a mail if you need something from me. Tristan
RFR for 8068761 : Test java/nio/channels/ServerSocketChannel/AdaptServerSocket.java failed with SocketTimeoutException
Hi, Please review a small fix for test java/nio/channels/ServerSocketChannel/AdaptServerSocket.java. webrev: http://cr.openjdk.java.net/~tyan/8068761/webrev/ bug: https://bugs.openjdk.java.net/browse/JDK-8068761 Thanks Tristan
Re: RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Thanks Joe and Lance. Do you mind sponsoring this for me? Tristan On Jan 8, 2015, at 2:54 PM, huizhe wang huizhe.w...@oracle.com wrote: Thanks for the update. I think the webrev is ready for putback. Best, Joe On 1/7/2015 9:41 PM, Tristan Yan wrote: Hi Joe/Lance and others. Please review the updated version for this one. http://cr.openjdk.java.net/~tyan/8051563/webrev.01/ Thank you Tristan On Jan 6, 2015, at 2:27 PM, huizhe wang huizhe.w...@oracle.com wrote: On 1/6/2015 2:25 PM, Lance Andersen wrote: One more thing :-) Remember to update your copyright year to 2015 also. Indeed, that applies to my webrevs as well :-) Joe Best, Lance On Jan 6, 2015, at 5:04 PM, Lance Andersen lance.ander...@oracle.com wrote: On Jan 6, 2015, at 4:31 PM, Tristan Yan tristan@oracle.com wrote: Thank you Lance and Joe. You are very welcome. I am working on the fix. No rush from my perspective; I have a lot to keep me busy in the interim before your next webrev :-) It may take a couple of days, as the code needs some refactoring. Understood; as I have been working on tests for RowSets, I have continued to play the refactor dance myself. Best, Lance I will send out the next review once I finish it. Thanks Tristan On Jan 6, 2015, at 1:22 PM, huizhe wang huizhe.w...@oracle.com wrote: Thanks for taking the initiative and effort to refactor and clean up all of the functional tests in the previous changeset! We've gone through several iterations for the new ones (xslt tests). I totally agree with Lance; it looks very good overall, a lot better than its original form. It's nice to group all of the tests that require file access; that prepares all other tests to run with minimal permissions for a (future) secure-test-target.
Unit tests might need a similar treatment, but no pressure to do it now :-) A minor comment on utility classes: there are JAXPTestUtilities and TestUtils. The former is a utility for all tests, while the latter contains a couple of SAX handlers; it may make sense to move them into their own classes just like the other handlers. The constants (XML_DIR and GOLDEN_DIR) could then be declared in a base class for the SAX tests. I understand each group of tests has its own XML source and golden files, thus XML_DIR and GOLDEN_DIR. It may be possible to add a base for each group while putting test.src and test.classes into the very base class for all tests. So in general, we would have: TestBase, FileTestBase, and a base for each group (e.g. SAXTestBase, DOMTestBase, XSLTTestBase...) that extends either TestBase or FileTestBase. I remember there were a few tests that required an http server, so then we may need an HttpTestBase as well. I know this is not what we've discussed (or planned) previously, but since you've done a great job incorporating what were previously quite separate test suites into one whole test suite, I can't resist asking for a bit more. Don't get me wrong, what you've done exceeded my expectations in a big way (only a month ago, we were still talking about a quick/straight conversion so that you could start to take care of the new features)! SAXParserTest: I noticed Old: testParse11, testParse27 -- New: testParse05. So is testParse03 a new test? I can see SAXException is expected, but not IOE. In fact, this shows that the JAXP spec missed defining how an empty string shall be handled (it should have been an IAE). Best, Joe On 1/6/2015 11:33 AM, Lance Andersen wrote: Hi Tristan, Sorry for the delay, but with the holidays and the number of files, it took a while to go through :-) Overall, it looks pretty good.
A couple of suggestions, but I would not necessarily hold back from committing: - For tests where you do not care about the actual exception that is thrown to indicate a failure, such as in DocumentBuilderFactory01.java and testCheckSchemaSupport1, I would just have the method declaration use throws Exception vs listing all of the possible individual exceptions, as it keeps it more compact. Glad to see you are not using failUnexpected() now. - In test classes such as testCheckSchemaSupport3.java and DocumentBuilderImpl01.java, some tests do not use assertXXX or expect an Exception. It would be good to at least document what could cause a failure or make the expected passing behavior of the test clear. - In SAXParserTest02.java and other tests where you get a reader/parser, such as testXMLReader01, I would at least assert that null is not returned, whereas as it is now, you only validate that an exception is not thrown. - I know you are porting existing tests, but I would consider consolidating tests as it seems
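The base-class layout Joe proposes above (TestBase, FileTestBase, and a per-group base such as SAXTestBase with its own XML_DIR and GOLDEN_DIR) could look roughly like this. The class and constant names follow the thread, but the bodies and default values are illustrative assumptions, not the actual test code:

```java
// Sketch of the proposed base-class hierarchy. Names (TestBase, FileTestBase,
// SAXTestBase, XML_DIR, GOLDEN_DIR) come from the thread; the bodies and
// property defaults are assumptions for illustration only.
public class TestBase {
    // Locations shared by all tests; jtreg sets these system properties.
    public static final String TEST_SRC = System.getProperty("test.src", ".");
    public static final String TEST_CLASSES = System.getProperty("test.classes", ".");
}

class FileTestBase extends TestBase {
    // Marker base for the tests that additionally need local file access.
}

class SAXTestBase extends FileTestBase {
    // Each test group declares its own source and golden-file directories.
    public static final String XML_DIR = TEST_SRC + "/sax/xml/";
    public static final String GOLDEN_DIR = TEST_SRC + "/sax/golden/";
}
```

With this shape, an HttpTestBase for the http-server tests would slot in as one more sibling of SAXTestBase.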
Re: RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Hi Joe/Lance and others. Please review the updated version for this one. http://cr.openjdk.java.net/~tyan/8051563/webrev.01/ Thank you Tristan
Re: RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Thanks, I will update it. Tristan
Re: RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Thank you Lance and Joe. I am working on the fix. It may take a couple of days, as the code needs some refactoring. I will send out the next review once I finish it. Thanks Tristan On Jan 6, 2015, at 1:22 PM, huizhe wang huizhe.w...@oracle.com wrote: Thanks for taking the initiative and effort to refactor and clean up all of the functional tests in the previous changeset! We've gone through several iterations for the new ones (xslt tests). I totally agree with Lance; it looks very good overall, a lot better than its original form. It's nice to group all of the tests that require file access; that prepares all other tests to run with minimal permissions for a (future) secure-test-target. Unit tests might need a similar treatment, but no pressure to do it now :-) A minor comment on utility classes: there are JAXPTestUtilities and TestUtils. The former is a utility for all tests, while the latter contains a couple of SAX handlers; it may make sense to move them into their own classes just like the other handlers. The constants (XML_DIR and GOLDEN_DIR) could then be declared in a base class for the SAX tests. I understand each group of tests has its own XML source and golden files, thus XML_DIR and GOLDEN_DIR. It may be possible to add a base for each group while putting test.src and test.classes into the very base class for all tests. So in general, we would have: TestBase, FileTestBase, and a base for each group (e.g. SAXTestBase, DOMTestBase, XSLTTestBase...) that extends either TestBase or FileTestBase. I remember there were a few tests that required an http server, so then we may need an HttpTestBase as well. I know this is not what we've discussed (or planned) previously, but since you've done a great job incorporating what were previously quite separate test suites into one whole test suite, I can't resist asking for a bit more.
Don't get me wrong, what you've done exceeded my expectations in a big way (only a month ago, we were still talking about a quick/straight conversion so that you could start to take care of the new features)! SAXParserTest: I noticed Old: testParse11, testParse27 -- New: testParse05. So is testParse03 a new test? I can see SAXException is expected, but not IOE. In fact, this shows that the JAXP spec missed defining how an empty string shall be handled (it should have been an IAE). Best, Joe On 1/6/2015 11:33 AM, Lance Andersen wrote: Hi Tristan, Sorry for the delay, but with the holidays and the number of files, it took a while to go through :-) Overall, it looks pretty good. A couple of suggestions, but I would not necessarily hold back from committing: - For tests where you do not care about the actual exception that is thrown to indicate a failure, such as in DocumentBuilderFactory01.java and testCheckSchemaSupport1, I would just have the method declaration use throws Exception vs listing all of the possible individual exceptions, as it keeps it more compact. Glad to see you are not using failUnexpected() now. - In test classes such as testCheckSchemaSupport3.java and DocumentBuilderImpl01.java, some tests do not use assertXXX or expect an Exception. It would be good to at least document what could cause a failure or make the expected passing behavior of the test clear. - In SAXParserTest02.java and other tests where you get a reader/parser, such as testXMLReader01, I would at least assert that null is not returned, whereas as it is now, you only validate that an exception is not thrown. - I know you are porting existing tests, but I would consider consolidating tests, as it seems like there is not a real reason to have a TestNG class with just 1 test. I would group the like tests (such as SAXTFactoryTest) in one TestNG test class, as that allows you to streamline further.
- TfClearParamTest.java (as an example): a minor nit that you have your @Before/AfterGroups methods in between tests. I would suggest grouping all methods such as these DataProviders before or after the actual tests for better organization. - TraxSAXWrapper.java: not sure I understand the "Sorry I could not resist" comment. - TransformerHandlerAPITest.java has typos in comments: "IllegalArgumentExceptionis thrown"... - Minitest.java: I would add a comment for your Data Provider. Best, Lance On Jan 2, 2015, at 1:49 PM, Tristan Yan tristan@oracle.com wrote: Hi Joe and Lance Sorry for my late reply. I just uploaded the new webrev, which tries to limit most tests to minimal permissions. The new changeset has two base classes: class JAXPBaseTest sets only minimal permissions; class JAXPFileBaseTest adds two permissions needed to access local files in the test.src and test.classes directories. Most tests inherit only JAXPBaseTest, which provides only the minimal permission set. Some tests
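Lance's two main suggestions above (declare `throws Exception` instead of enumerating every checked exception, and assert that a returned reader is non-null rather than merely not throwing) might look like this in plain Java. The TestNG annotations are elided and the class name is hypothetical; this is a sketch of the pattern, not the actual SAXParserTest02 code:

```java
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.XMLReader;

public class ReaderCheck {
    // A single "throws Exception" keeps the signature compact instead of
    // listing ParserConfigurationException, SAXException, and friends.
    public static XMLReader newReader() throws Exception {
        return SAXParserFactory.newInstance().newSAXParser().getXMLReader();
    }

    public static void main(String[] args) throws Exception {
        XMLReader reader = newReader();
        // Assert the result explicitly instead of relying only on the
        // absence of an exception.
        if (reader == null) {
            throw new AssertionError("getXMLReader() returned null");
        }
    }
}
```

In a TestNG test this null check would typically be `assertNotNull(reader)`.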
Re: RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Hi Joe and Lance Sorry for my late reply. I just uploaded the new webrev, which tries to limit most tests to minimal permissions. The new changeset has two base classes: class JAXPBaseTest sets only minimal permissions; class JAXPFileBaseTest adds two permissions needed to access local files in the test.src and test.classes directories. Most tests inherit only JAXPBaseTest, which provides only the minimal permission set. Some tests will inherit JAXPFileBaseTest because they need to access local files. Please help to review the code. http://cr.openjdk.java.net/~tyan/8051563/webrev.00/ Thank you Tristan On Jan 2, 2015, at 10:39 AM, huizhe wang huizhe.w...@oracle.com wrote: Lance, Tristan is looking into adding an extension base class for about 60 tests that require file permission; then the current base class would indeed set minimal permissions. So please wait for his update :-) Best, Joe On 12/30/2014 3:07 PM, Lance @ Oracle wrote: Hi Tristan, I will look at this but doubt I will get to it tomorrow Best, Lance Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037 Oracle Java Engineering 1 Network Drive Burlington, MA 01803 lance.ander...@oracle.com Sent from my iPad On Dec 30, 2014, at 5:28 PM, Tristan Yan tristan@oracle.com wrote: Hi All Can I get your review on this change.
http://cr.openjdk.java.net/~tyan/8051563/webrev.00/ These fixes originally come from bug https://bugs.openjdk.java.net/browse/JDK-8051563 as part of our XML test colocation work. This changeset mainly covers tests for the packages org.apache.qetest.dtm and org.apache.qetest.trax. In the meantime I took steps to fix some issues in our existing test code: 1. Added a base test class for all functional tests that enables running with the security manager. A limited, minimal permission set has been set for every test. 2. Removed all unnecessary exception capturing in functional tests; we use TestNG to handle all the exceptions. 3. Used try-with-resources blocks to fix all possible resource leaks (including InputStream, OutputStream, Writer, Reader). Thanks a lot. Tristan
RFR: JDK-8051563: Convert JAXP function tests in xslt components: org.apache.qetest.dtm package, org.apache.qetest.trax package to openjdk
Hi All Can I get your review on this change. http://cr.openjdk.java.net/~tyan/8051563/webrev.00/ These fixes originally come from bug https://bugs.openjdk.java.net/browse/JDK-8051563 as part of our XML test colocation work. This changeset mainly covers tests for the packages org.apache.qetest.dtm and org.apache.qetest.trax. In the meantime I took steps to fix some issues in our existing test code: 1. Added a base test class for all functional tests that enables running with the security manager. A limited, minimal permission set has been set for every test. 2. Removed all unnecessary exception capturing in functional tests; we use TestNG to handle all the exceptions. 3. Used try-with-resources blocks to fix all possible resource leaks (including InputStream, OutputStream, Writer, Reader). Thanks a lot. Tristan
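The try-with-resources cleanup described in item 3 above works like this in a minimal form; the class and method names here are illustrative, not taken from the changeset:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CopyDemo {
    // Both streams are declared in the try header, so they are closed
    // automatically even if an exception is thrown mid-copy; no file
    // handle is leaked (the kind of leak that locks files on Windows).
    public static void copy(Path src, Path dst) throws IOException {
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }
}
```

The same pattern applies to Writer and Reader instances: anything implementing AutoCloseable can go in the try header.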
Re: RFR: JDK-8065673: Makefile enable JAXP tests running
Thanks Joe for answering this. Updated the webrev by changing jaxp_test to jaxp_all; also corrected “Jaxp” to “JAXP”. http://cr.openjdk.java.net/~tyan/JDK-8065673/jaxp/webrev.01/ http://cr.openjdk.java.net/~tyan/JDK-8065673/webrev.02/ Tristan On Dec 12, 2014, at 9:45 AM, huizhe wang huizhe.w...@oracle.com wrote: On 12/12/2014 2:45 AM, Alan Bateman wrote: On 11/12/2014 21:10, Tristan Yan wrote: Thanks Alan and Joe I added the jaxp testset; now jaxp_tests is under the jaxp testset only. Also I have added the jaxp testset to pit. http://cr.openjdk.java.net/~tyan/JDK-8065673/webrev.01/ http://cr.openjdk.java.net/~tyan/JDK-8065673/jaxp/webrev.00/ Thanks I think this looks much better. A suggestion for the test group name is jaxp_all rather than jaxp_test. A minor comment on jprt.properties is that the comments should probably use JAXP rather than Jaxp. A side question: Are there plans to move the JAXP tests from the jdk/test directory to the jaxp repo so that they can be with their friends? I think that would make this new testset more useful. Yes, it's in our plan to move jaxp tests from jdk/test to jaxp/test. -Joe -Alan.
RFR: JDK-8065673: Makefile enable JAXP tests running
Hi everyone Could you please review this small makefile change for JAXP tests: http://cr.openjdk.java.net/~tyan/JDK-8065673/webrev.00/ http://cr.openjdk.java.net/~tyan/JDK-8065673/jaxp/webrev.00/ The corresponding bug is https://bugs.openjdk.java.net/browse/JDK-8065673 Here is the background for this change. As you may know, we’ve pushed some new jaxp tests into openjdk. Since these tests have been put into a different repo from the current jdk tests, we need a makefile to support running these tests in the jprt system and nightly testing. In the future we may move these tests back to the jdk repo, so we have to keep the makefile consistent with the current jdk tests. I therefore took the makefile from the jdk test repo and made as minimal a change as possible. To support running tests in the jprt system, one new target (jaxp_test) has been added in the top-level makefile, and jaxp_test will be considered part of the core testset. Thank you very much. Tristan
Re: RFR: JDK-8065673: Makefile enable JAXP tests running
Thanks Alan and Joe I added the jaxp testset; now jaxp_tests is under the jaxp testset only. Also I have added the jaxp testset to pit. http://cr.openjdk.java.net/~tyan/JDK-8065673/webrev.01/ http://cr.openjdk.java.net/~tyan/JDK-8065673/jaxp/webrev.00/ Thanks On Dec 11, 2014, at 9:46 AM, huizhe wang huizhe.w...@oracle.com wrote: On 12/11/2014 7:07 AM, Alan Bateman wrote: The alternative to adding this to the core testset is to create a new testset (jaxp or xml for example). You could then include this in the pit testset so that it runs langtools, core, svc, and jaxp. Ok, we'll add a new testset jaxp. -Joe -Alan
RFR 8067183: TEST_BUG:File locked when processing the cleanup on test jaxp/test/javax/xml/jaxp/functional/javax/xml/transform/ptests/TransformerFactoryTest.java
Hi All Could you please review a small fix for a JAXP test bug. We found this one in a JPRT run. It’s caused by a resource not being released correctly. webrev: http://cr.openjdk.java.net/~tyan/8067183/webrev.01/ bug: https://bugs.openjdk.java.net/browse/JDK-8067183 Thank you Tristan
Re: RFR 8067183: TEST_BUG:File locked when processing the cleanup on test jaxp/test/javax/xml/jaxp/functional/javax/xml/transform/ptests/TransformerFactoryTest.java
Thank you. Could you sponsor this for me? Tristan On 12/10/2014 1:49 PM, huizhe wang wrote: Hi Tristan, Thanks for the fix, looks good. I also agree that we should do further improvement/clean-up (such as failUnexpected, as Lance pointed out) across all migrated tests. Thanks, Joe On 12/10/2014 12:42 PM, Tristan Yan wrote: Hi All Could you please review a small fix for a JAXP test bug. We found this one in a JPRT run. It’s caused by a resource not being released correctly. webrev: http://cr.openjdk.java.net/~tyan/8067183/webrev.01/ bug: https://bugs.openjdk.java.net/browse/JDK-8067183 Thank you Tristan
Re: RFR 8047962: XML test colocation: AuctionPortal test for bid.com
Hi Joe I changed movies.xml to a txt format to avoid the binary-file-not-supported issue. Also I’ve added BOM header support for txt files in JAXPTestUtilities. Please see the latest webrev at http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.04/ I would appreciate it if you could push it, if you’re okay with this change. Thank you Tristan On Nov 5, 2014, at 4:01 PM, Tristan Yan tristan@oracle.com wrote: Thanks again This makes more sense to me. Now they look clearer. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.03/ Thank you for sponsoring this. Tristan On Nov 5, 2014, at 3:23 PM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, It's good to see movie.xml is replaced with a literal string. Sorry if I wasn't clear on the directory structure. It would be nice if all files for AuctionPortal were placed under AuctionPortal, that is: test/javax/xml/jaxp/functional/test/auctionportal/ content/ (xml and xsd) golden/ (gold files) Would that make sense? It's nice you have the paths in one place (HiBidConstants); that makes it easier to move files around. Yes, I'll sponsor that for you. Thanks, Joe On 11/5/2014 2:57 PM, Tristan Yan wrote: Thank you, Joe. The git plugin for mercurial works well for the hg command, but the webrev script still doesn’t support binary files. I made one small change to replace movies.xml with a Java string to suppress this error. Also I took your advice to move the xml files into the auction portal directory. The changed tests have been run with and without the Security Manager as usual. All the tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.02/ Could you please sponsor this for me? Thank you very much. Tristan On Nov 3, 2014, at 3:55 PM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, Looks good overall.
Once again, it's great to see that you've made the tests a lot cleaner, getting rid of the old report system, etc. The only issue I see is with movies.xml. If I use patch to apply your patch to my workspace, I get no movies.xml, which in turn causes AuctionController to fail. It's probably related to how your webrev was generated (note the error: Unexpected Error occurred reading `diff -e /dev/null new/test/javax/xml/jaxp/functional/test/content/movies.xml`: $?=0, err= 1 ). hg diff works fine generally, but not for changes in binary files. Using the git option, -git, can fix the problem. You may see the following in your configuration: [diff] git = true Thanks, Joe On 10/31/2014 12:16 PM, Tristan Yan wrote: Hi Joe, Alan and all others Would you please help review these tests? The intent is moving some JAXP tests from closed to open. The associated bug number is https://bugs.openjdk.java.net/browse/JDK-8047962. These tests have been run with and without the Security Manager. All the tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.01/ Thank you. Tristan
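The BOM header support mentioned above (comparing a text file against a golden file while ignoring a leading UTF-8 byte-order mark) could be sketched like this. The class and method names here are hypothetical helpers for illustration, not the actual JAXPTestUtilities API:

```java
import java.nio.charset.StandardCharsets;

public class BomUtil {
    // A UTF-8 BOM is the byte sequence EF BB BF at the start of the file.
    // If present, skip it before decoding; otherwise decode as-is.
    public static String stripUtf8Bom(byte[] bytes) {
        if (bytes.length >= 3
                && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB
                && (bytes[2] & 0xFF) == 0xBF) {
            return new String(bytes, 3, bytes.length - 3, StandardCharsets.UTF_8);
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

Normalizing both the test output and the golden file through such a helper makes the comparison insensitive to whether an editor saved the txt file with a BOM.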
Re: RFR 8047962: XML test colocation: AuctionPortal test for bid.com
Thank you, Joe. The Git plugin for Mercurial works well for the hg command, but the webrev script still doesn't support binary files. I made one small change to replace movies.xml with a Java string to suppress this error. I also took your advice and moved the xml files into the auction portal directory. The changed tests have been run with and without the Security Manager as usual; all tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.02/ Could you please sponsor this for me? Thank you very much. Tristan On Nov 3, 2014, at 3:55 PM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, Looks good overall. Once again, it's great to see that you've made the tests a lot cleaner, getting rid of the old report system and so on. The only issue I see is with movies.xml. If I use patch to apply your patch to my workspace, I get no movies.xml, which in turn causes AuctionController to fail. It's probably related to how your webrev was generated (note the error: Unexpected Error occurred reading `diff -e /dev/null new/test/javax/xml/jaxp/functional/test/content/movies.xml`: $?=0, err= 1). hg diff generally works fine, but not for changes in binary files. Using the git option, -git, can fix the problem. You may then see the following in your configuration: [diff] git = true Thanks, Joe On 10/31/2014 12:16 PM, Tristan Yan wrote: Hi Joe, Alan and all others, Would you please help review these tests? The intent is to move some JAXP tests from closed to open. The associated bug is https://bugs.openjdk.java.net/browse/JDK-8047962. These tests have been run with and without the Security Manager; all tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.01/ Thank you. Tristan
Re: RFR 8047962: XML test colocation: AuctionPortal test for bid.com
Thanks again. This makes more sense to me; they look clearer now. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.03/ Thank you for sponsoring this. Tristan On Nov 5, 2014, at 3:23 PM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, It's good to see movies.xml replaced with a literal string. Sorry if I wasn't clear on the directory structure. It would be nice for all files for AuctionPortal to be placed under AuctionPortal, that is: test/javax/xml/jaxp/functional/test/auctionportal/ content/ (xml and xsd) golden/ (gold files) Would that make sense? It's nice that you have the paths in one place (HiBidConstants); that makes it easier to move files around. Yes, I'll sponsor that for you. Thanks, Joe On 11/5/2014 2:57 PM, Tristan Yan wrote: Thank you, Joe. The Git plugin for Mercurial works well for the hg command, but the webrev script still doesn't support binary files. I made one small change to replace movies.xml with a Java string to suppress this error. I also took your advice and moved the xml files into the auction portal directory. The changed tests have been run with and without the Security Manager as usual; all tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.02/ Could you please sponsor this for me? Thank you very much. Tristan On Nov 3, 2014, at 3:55 PM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, Looks good overall. Once again, it's great to see that you've made the tests a lot cleaner, getting rid of the old report system and so on. The only issue I see is with movies.xml. If I use patch to apply your patch to my workspace, I get no movies.xml, which in turn causes AuctionController to fail. It's probably related to how your webrev was generated (note the error: Unexpected Error occurred reading `diff -e /dev/null new/test/javax/xml/jaxp/functional/test/content/movies.xml`: $?=0, err= 1).
hg diff generally works fine, but not for changes in binary files. Using the git option, -git, can fix the problem. You may then see the following in your configuration: [diff] git = true Thanks, Joe On 10/31/2014 12:16 PM, Tristan Yan wrote: Hi Joe, Alan and all others, Would you please help review these tests? The intent is to move some JAXP tests from closed to open. The associated bug is https://bugs.openjdk.java.net/browse/JDK-8047962. These tests have been run with and without the Security Manager; all tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.01/ Thank you. Tristan
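For reference, the -git option Joe mentions corresponds to running `hg diff --git` directly, or to this snippet in the user's Mercurial configuration (e.g. in ~/.hgrc), which makes Mercurial emit git-style diffs that can represent binary file changes:

```
[diff]
git = true
```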
Re: Review request for XML JAXP unit test colocation
Hi Frank, What I meant is uploading the webrev for you. An OpenJDK review means you send your code to an OpenJDK alias, and anyone on that alias is eligible to review your code. Some of those people do not work at Oracle and have no access to the Oracle network. According to the OpenJDK process, your webrev has to be uploaded somewhere everyone can access. Most people use cr.openjdk.java.net because it is provided to people who have the Author role in the OpenJDK community. I can't create an account for you; you need to contribute before you get an OpenJDK id. Please see the details at the following link: http://openjdk.java.net/contribute/ Thanks Tristan On Nov 5, 2014, at 7:03 PM, Frank Yuan frank.y...@oracle.com wrote: Oh, thanks for your reminder! Do you mean you can help me generate an account to access cr.openjdk.java.net? Certainly I want that, thanks in advance. Best Regards Frank From: Tristan Yan [mailto:tristan@oracle.com] Sent: Thursday, November 06, 2014 3:54 AM To: Frank Yuan Cc: Core Java Libraries SQE Subject: Re: Review request for XML JAXP unit test colocation Hi Frank, I think you're using an internal link for an open review. You need a standard one on cr.openjdk.java.net. Do you want me to help you generate one? Thanks Tristan On Nov 5, 2014, at 1:12 AM, Frank Yuan frank.y...@oracle.com wrote: Hi, Joe and All, Thanks for your previous reviews and comments. Per your comments: I added a description for every test, got them running with and without the security manager, and fixed the known test issues. Could you help review the changes for JAXP unit test co-location again? After your review, I will push the tests into the JDK repo: jaxp/test.
bug: https://bugs.openjdk.java.net/browse/JDK-8043090 webrev: http://sqeweb.us.oracle.com/jsn/users/yc/webrev/ test results: http://sqeweb.us.oracle.com/jsn/users/yc/result/JTreport/html/index.html Thanks, Frank
RFR 8047962: XML test colocation: AuctionPortal test for bid.com
Hi Joe, Alan and all others, Would you please help review these tests? The intent is to move some JAXP tests from closed to open. The associated bug is https://bugs.openjdk.java.net/browse/JDK-8047962. These tests have been run with and without the Security Manager; all tests pass under both scenarios. http://cr.openjdk.java.net/~tyan/JDK-8047962/webrev.01/ Thank you. Tristan
Re: Review request for JDK-8051561: Convert JAXP function tests: javax.xml.xpath.* to jtreg (testNG) tests
Hi Joe, If you're okay with the code, would you sponsor this for me? We need to move forward and push these tests into the OpenJDK repo. Thank you so much. Tristan On Aug 29, 2014, at 11:21 AM, huizhe wang huizhe.w...@oracle.com wrote: Hi Tristan, Looks good. I left notes in the bug's comment section as a record and status of the original test development. Thanks, Joe On 8/29/2014 9:50 AM, Tristan Yan wrote: Hi Joe, Alan and others, I took over Eric's last work and did some refactoring of his code. Please help review the code change again. webrev: http://cr.openjdk.java.net/~tyan/JDK-8051561/webrev.01/ bug: https://bugs.openjdk.java.net/browse/JDK-8051561 This code has been run both with and without the security manager, and all tests passed. Thank you Tristan On Jul 25, 2014, at 6:12 AM, Eric Wang yiming.w...@oracle.com wrote: Hi Joe, Alan and everyone, I'm working on JAXP functional test colocation, which is tracked by bug JDK-8043091 https://bugs.openjdk.java.net/browse/JDK-8043091. We have finished converting a few suites, and jaxp/xpath, tracked by bug JDK-8051561 https://bugs.openjdk.java.net/browse/JDK-8051561, is the first one chosen for public review. Can you please review the webrev below? Your comments would be helpful for our future work. http://cr.openjdk.java.net/~ewang/JDK-8051561/webrev.00/ Thanks, Eric
Re: Review request for JDK-8051561: Convert JAXP function tests: javax.xml.xpath.* to jtreg (testNG) tests
Hi Joe, Alan and others, I took over Eric's last work and did some refactoring of his code. Please help review the code change again. webrev: http://cr.openjdk.java.net/~tyan/JDK-8051561/webrev.01/ bug: https://bugs.openjdk.java.net/browse/JDK-8051561 This code has been run both with and without the security manager, and all tests passed. Thank you Tristan On Jul 25, 2014, at 6:12 AM, Eric Wang yiming.w...@oracle.com wrote: Hi Joe, Alan and everyone, I'm working on JAXP functional test colocation, which is tracked by bug JDK-8043091 https://bugs.openjdk.java.net/browse/JDK-8043091. We have finished converting a few suites, and jaxp/xpath, tracked by bug JDK-8051561 https://bugs.openjdk.java.net/browse/JDK-8051561, is the first one chosen for public review. Can you please review the webrev below? Your comments would be helpful for our future work. http://cr.openjdk.java.net/~ewang/JDK-8051561/webrev.00/ Thanks, Eric
Re: Review request for JDK-8051540: Convert JAXP function tests: org.xml.sax to jtreg (testNG) tests
Thanks Joe. I will move the tests to the [openjdk]/jaxp repo then. Tristan On Aug 19, 2014, at 10:32 AM, huizhe wang huizhe.w...@oracle.com wrote: By the way, the plan has been that all of the JAXP SQE and unit tests be migrated into the [openjdk]/jaxp repo under jaxp/test. Tests currently in the jdk repo shall be moved to jaxp/test as well. I see that your webrev was generated in jdk9/dev/jdk. I hope it doesn't mean you're checking tests into the jdk repo. Thanks, Joe On 8/18/2014 4:42 PM, Tristan Yan wrote: Thanks Joe. We intend to replace the base class with a test library because it doesn't look like a real base class but rather a utilities class. I haven't tried running these tests with the security manager; I will run them with the security manager and get back to you soon. Thank you. Tristan On Aug 18, 2014, at 4:32 PM, huizhe wang huizhe.w...@oracle.com wrote:
Review request for JDK-8051540: Convert JAXP function tests: org.xml.sax to jtreg (testNG) tests
Hi Joe, Alan and others, We're working on moving our internal JAXP functional tests to the OpenJDK repo (including a refactoring effort). This is the first open review I am requesting, covering SAX and Transform. Would you please review these tests? Any comments will be appreciated. The webrev is at: http://cr.openjdk.java.net/~tyan/JDK-8051540/webrev00/ Thank you very much. Tristan
Re: Review request for JDK-8051540: Convert JAXP function tests: org.xml.sax to jtreg (testNG) tests
Thanks Joe. We intend to replace the base class with a test library because it doesn't look like a real base class but rather a utilities class. I haven't tried running these tests with the security manager; I will run them with the security manager and get back to you soon. Thank you. Tristan On Aug 18, 2014, at 4:32 PM, huizhe wang huizhe.w...@oracle.com wrote:
RFR for JDK-8035388: TEST_BUG: java/rmi/activation/Activatable/checkActivateRef/CheckActivateRef.java fails
Hi All, Could you please help review the code fix for JDK-8035388? http://cr.openjdk.java.net/~tyan/JDK-8035388/webrev.00/ Description: in the inactiveObject method in ActivationGroupImpl.java, the lock is released before checkInactiveGroup runs. This allows groupInactive to be reset even before the next inactiveObject call has started. Regards Tristan
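The race described above can be pictured with a minimal sketch (names are illustrative, not the actual sun.rmi code): if the dependent check runs after the lock is released, another thread can slip in and act on the shared state between the two steps; holding the lock across both the update and the check closes the window.

```java
public class LockOrderSketch {
    private final Object lock = new Object();
    private int activeObjects;
    private boolean groupInactive;

    public LockOrderSketch(int activeObjects) {
        this.activeObjects = activeObjects;
    }

    // Broken shape: the check happens after the lock is released, so another
    // thread may run in the gap and observe or reset state prematurely.
    public boolean inactiveObjectBroken() {
        synchronized (lock) {
            activeObjects--;
        }
        // window: another thread can interleave here
        return checkInactiveGroup();
    }

    // Fixed shape: decrement and check happen under the same lock, so the
    // decision to mark the group inactive is atomic with the count update.
    public boolean inactiveObjectFixed() {
        synchronized (lock) {
            activeObjects--;
            return checkInactiveGroup();
        }
    }

    private boolean checkInactiveGroup() {
        if (activeObjects == 0) {
            groupInactive = true;
            return true;
        }
        return false;
    }

    public boolean isGroupInactive() {
        return groupInactive;
    }

    public static void main(String[] args) {
        LockOrderSketch s = new LockOrderSketch(1);
        System.out.println(s.inactiveObjectFixed()); // prints "true"
    }
}
```

The real fix in the webrev presumably amounts to the second shape: keeping the group-inactive check inside the critical section that updates the count.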
Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently
Thank you for taking care of this, Chris. Filed a bug for this: https://bugs.openjdk.java.net/browse/JDK-8035661 Tristan -----Original Message----- From: Chris Hegarty Sent: Monday, February 24, 2014 5:05 PM To: Tristan Yan Cc: Martin Buchholz; core-libs-dev Subject: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently Tristan, Can you please file a new bug and bring in the changes that Martin pushed to the 166 CVS. http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/test/jtreg/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java?view=co Create a changeset, and I can do the OpenJDK review. -Chris. On 24 Feb 2014, at 03:03, Tristan Yan tristan@oracle.com wrote: I am OK with leaving this unfixed for now, Martin. And thank you for bringing the improvement to the jsr166 CVS. Regards Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 10:59 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently Hi Tristan, I don't think we should change the test without a sufficiently solid reason, which usually means reproducing the failure, even if e.g. only once in 1000 times. Here we don't know which queue implementation is causing the failure, and we don't know what chunk size was being used, which might cause the failure to be reproduced more reliably. A single test failure might be due to cosmic rays! Let's leave this unfixed while it cannot be reproduced. You can check in my general-purpose improvements to the test from the jsr166 CVS. On Sun, Feb 23, 2014 at 6:41 PM, Tristan Yan tristan@oracle.com wrote: Hi Martin, This test failed once in our nightly test, so this is a real failure. But unfortunately we don't know more beyond the log we have.
The 10 extra seconds may need to serve 1024 loop iterations in total. Also, on Windows Thread.yield() does not actually give up the CPU most of the time, so we can have this situation: the remover is trying to remove an object from an empty queue and finds nothing several times (without giving up its CPU time), then the offer thread gets CPU time, inserts a few elements, and the queue becomes full. The offer thread tries to give up the CPU, but Thread.yield() doesn't really do so. Let's assume it takes 1024 iterations again, and the offer thread hits the timeout. Then the remover thread tries to remove elements and takes another 1024 iterations to reach its timeout as well. So the 10 seconds may need to be distributed over 2048 iterations at most, with 5 ms for each iteration. That's why I simulated this case with a Thread.sleep(20). I admit I can't reproduce it without changing the code; this is all theoretical analysis, but it is the closest possible cause I can deduce. Thank you. Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 10:16 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently I may very well be missing something, but the actual extra timeout for threads to finish is 10 whole seconds, which should be long enough to process a single chunk, even on Windows. If you can repro this consistently, try to find out which queue implementation is affected. We can also shrink the max queue size and chunk size to reduce the time to traverse the queue elements. On Sun, Feb 23, 2014 at 4:23 PM, Tristan Yan tristan@oracle.com wrote: Thank you for reviewing this, Martin. The original bug report is here: https://bugs.openjdk.java.net/browse/JDK-8031374. I haven't reproduced this bug, but I can simulate the failure by changing yield() in the remover thread to Thread.sleep(20).
You commented, "You've completely broken the intent of this code, which was to poll ***occasionally*** for quitting time". I tried not to change the logic of the test. This failure occurs only when the first round (1024 iterations) isn't finished within the given quitting time (before the timeout), which was 300 ms. One solution is increasing the default timeout, as you suggested, but the question is by how much. When the test is really slow and cannot finish the first round (1024 iterations) before the timeout, I couldn't figure out what a reasonable timeout value would be. Also, this would definitely slow down the test when it runs on a fast machine, which is the case most of the time. So I took the other approach: skip the wait if the test doesn't finish the first round (1024 iterations) before the timeout. I wouldn't say I completely broke the intent of the code here because it's a rare case (only if the machine is slow, and slow enough that the test doesn't finish the first round before the timeout
RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently
Hi All, Could you please help review the fix for JDK-8031374? http://cr.openjdk.java.net/~tyan/JDK-8031374/webrev.00/ Description: There are a couple of code changes here. 1. The original code used the known-problematic Thread.stop, which can cause ThreadDeath, as we all know. I changed it to a customized stopThread method. 2. The test failed on Windows. I analyzed the failure by changing Thread.yield() in the remover thread to Thread.sleep(50), a simulation of a slow machine, which shows exactly the same failures as in the bug description. By adding more debug info, we can see that although we tried to give up the CPU using Thread.yield() (I tried Thread.sleep(1L) as well, with the same result), there is no guarantee that the OS will pick a new thread to execute. On Windows this is more obvious because execution is slow: even when the timeout happens, the offer thread and remover thread won't get to the point where i % 1024 == 0, which causes the failure we see in the log. My fix here is that when the timeout happens but i is still less than 1024, the offer thread and remover thread are stopped right away instead of continuing to wait for the point where i == 1024. 3. I replaced Thread.yield with Thread.sleep(0L). I saw several discussions noting that Thread.yield is not required to give up the CPU. Thank you Tristan
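The polling pattern under discussion can be sketched as follows (field and method names are illustrative; the real OfferRemoveLoops code differs): the original predicate consults the clock only once every 1024 iterations, so a thread that never completes 1024 iterations before the deadline never notices it is quitting time.

```java
public class QuitPollSketch {
    static volatile boolean stopRequest = false;
    static long deadlineNanos = System.nanoTime() + 300_000_000L; // 300 ms budget

    static boolean quittingTime() {
        return System.nanoTime() - deadlineNanos >= 0;
    }

    // Original shape: poll the clock only occasionally, every 1024 iterations,
    // to keep the hot loop cheap.
    static boolean quittingTimeOriginal(long i) {
        return (i % 1024) == 0 && quittingTime();
    }

    // Shape proposed in the thread: also honor an explicit stop request, and
    // let a thread still inside its first 1024 iterations see the deadline.
    static boolean quittingTimeProposed(long i) {
        return stopRequest || (quittingTime() && (i % 1024 == 0 || i < 1024));
    }

    public static void main(String[] args) {
        deadlineNanos = System.nanoTime() - 1; // pretend the deadline has passed
        System.out.println(quittingTimeOriginal(500)); // prints "false"
        System.out.println(quittingTimeProposed(500)); // prints "true"
    }
}
```

The trade-off Martin raises is visible here: the proposed predicate touches the clock on every early iteration, which is exactly the "occasional polling" intent the original was protecting.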
Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently
Thank you for reviewing this, Martin. The original bug report is here: https://bugs.openjdk.java.net/browse/JDK-8031374. I haven't reproduced this bug, but I can simulate the failure by changing yield() in the remover thread to Thread.sleep(20). You commented, "You've completely broken the intent of this code, which was to poll ***occasionally*** for quitting time". I tried not to change the logic of the test. This failure occurs only when the first round (1024 iterations) isn't finished within the given quitting time (before the timeout), which was 300 ms. One solution is increasing the default timeout, as you suggested, but the question is by how much. When the test is really slow and cannot finish the first round (1024 iterations) before the timeout, I couldn't figure out what a reasonable timeout value would be. Also, this would definitely slow down the test when it runs on a fast machine, which is the case most of the time. So I took the other approach: skip the wait if the test doesn't finish the first round (1024 iterations) before the timeout. I wouldn't say I completely broke the intent of the code here because it's a rare case (only if the machine is slow, and slow enough that the test doesn't finish the first round before the timeout). The reason I replaced Thread.yield with Thread.sleep(0L) is that David Holmes pointed out in the bug that "sleep will certainly give up the CPU, whereas yield is not required to." Also, in another mail, David suggested I should use Thread.sleep(10L) instead of Thread.sleep(0L), preferably a bit longer, as we don't know how the OS will behave if the sleep time requested is less than the natural resolution of the sleep mechanism. In the experiments I've done on Windows, Thread.yield() or Thread.sleep(10L) doesn't guarantee the current thread gives up the CPU; it is a hint to the OS. This makes the remover and offer threads on Windows wait for each other much more than on other operating systems, which is also one of the reasons we've seen this on Windows only.
Regards Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 3:47 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently Hi Tristan, (thanks for working on my old crappy tests, and apologies for always giving you a hard time) I couldn't find the bug report. Can you provide a URL? Thread.stop is only called in case the test has already failed (presumed hung), as a last resort. Perhaps the timeout used in the test (300 ms) is too small on some systems? + protected void giveupCPU() { + try { + Thread.sleep(0L); + } catch (InterruptedException ignore) {} + } I know of no reason why Thread.sleep(0) should be any more effective than Thread.yield. If it were more effective, then why wouldn't Thread.yield on that platform be implemented exactly the same way? IOW, if this is a problem, fix the JDK! /** Polls occasionally for quitting time. */ protected boolean quittingTime(long i) { - return (i % 1024) == 0 && quittingTime(); + return stopRequest || (quittingTime() && (i % 1024 == 0 || i < 1024)); } You've completely broken the intent of this code, which was to poll ***occasionally*** for quitting time. On Sun, Feb 23, 2014 at 1:40 AM, Tristan Yan tristan@oracle.com wrote: Hi All, Could you please help review the fix for JDK-8031374? http://cr.openjdk.java.net/~tyan/JDK-8031374/webrev.00/ Description: There are a couple of code changes here. 1. The original code used the known-problematic Thread.stop, which can cause ThreadDeath, as we all know. I changed it to a customized stopThread method. 2. The test failed on Windows. I analyzed the failure by changing Thread.yield() in the remover thread to Thread.sleep(50), a simulation of a slow machine, which shows exactly the same failures as in the bug description.
By adding more debug info, we can see that although we tried to give up the CPU using Thread.yield() (I tried Thread.sleep(1L) as well, with the same result), there is no guarantee that the OS will pick a new thread to execute. On Windows this is more obvious because execution is slow: even when the timeout happens, the offer thread and remover thread won't get to the point where i % 1024 == 0, which causes the failure we see in the log. My fix here is that when the timeout happens but i is still less than 1024, the offer thread and remover thread are stopped right away instead of continuing to wait for the point where i == 1024. 3. I replaced Thread.yield with Thread.sleep(0L). I saw several discussions noting that Thread.yield is not required to give up the CPU. Thank you Tristan
Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently
Hi Martin, This test failed once in our nightly test, so this is a real failure. But unfortunately we don't know more beyond the log we have. The 10 extra seconds may need to serve 1024 loop iterations in total. Also, on Windows Thread.yield() does not actually give up the CPU most of the time, so we can have this situation: the remover is trying to remove an object from an empty queue and finds nothing several times (without giving up its CPU time), then the offer thread gets CPU time, inserts a few elements, and the queue becomes full. The offer thread tries to give up the CPU, but Thread.yield() doesn't really do so. Let's assume it takes 1024 iterations again, and the offer thread hits the timeout. Then the remover thread tries to remove elements and takes another 1024 iterations to reach its timeout as well. So the 10 seconds may need to be distributed over 2048 iterations at most, with 5 ms for each iteration. That's why I simulated this case with a Thread.sleep(20). I admit I can't reproduce it without changing the code; this is all theoretical analysis, but it is the closest possible cause I can deduce. Thank you. Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 10:16 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently I may very well be missing something, but the actual extra timeout for threads to finish is 10 whole seconds, which should be long enough to process a single chunk, even on Windows. If you can repro this consistently, try to find out which queue implementation is affected. We can also shrink the max queue size and chunk size to reduce the time to traverse the queue elements. On Sun, Feb 23, 2014 at 4:23 PM, Tristan Yan tristan@oracle.com wrote: Thank you for reviewing this, Martin. The original bug report is here: https://bugs.openjdk.java.net/browse/JDK-8031374.
I haven't reproduced this bug, but I can simulate the failure by changing yield() in the remover thread to Thread.sleep(20). You commented, "You've completely broken the intent of this code, which was to poll ***occasionally*** for quitting time". I tried not to change the logic of the test. This failure occurs only when the first round (1024 iterations) isn't finished within the given quitting time (before the timeout), which was 300 ms. One solution is increasing the default timeout, as you suggested, but the question is by how much. When the test is really slow and cannot finish the first round (1024 iterations) before the timeout, I couldn't figure out what a reasonable timeout value would be. Also, this would definitely slow down the test when it runs on a fast machine, which is the case most of the time. So I took the other approach: skip the wait if the test doesn't finish the first round (1024 iterations) before the timeout. I wouldn't say I completely broke the intent of the code here because it's a rare case (only if the machine is slow, and slow enough that the test doesn't finish the first round before the timeout). The reason I replaced Thread.yield with Thread.sleep(0L) is that David Holmes pointed out in the bug that "sleep will certainly give up the CPU, whereas yield is not required to." Also, in another mail, David suggested I should use Thread.sleep(10L) instead of Thread.sleep(0L), preferably a bit longer, as we don't know how the OS will behave if the sleep time requested is less than the natural resolution of the sleep mechanism. In the experiments I've done on Windows, Thread.yield() or Thread.sleep(10L) doesn't guarantee the current thread gives up the CPU; it is a hint to the OS. This makes the remover and offer threads on Windows wait for each other much more than on other operating systems, which is also one of the reasons we've seen this on Windows only.
Regards Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 3:47 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently Hi Tristan, (thanks for working on my old crappy tests, and apologies for always giving you a hard time) I couldn't find the bug report. Can you provide a URL? Thread.stop is only called in case the test has already failed (presumed hung), as a last resort. Perhaps the timeout used in the test (300 ms) is too small on some systems? + protected void giveupCPU() { + try { + Thread.sleep(0L); + } catch (InterruptedException ignore) {} + } I know of no reason why Thread.sleep(0) should be any more effective than Thread.yield. If it were more effective, then why wouldn't Thread.yield on that platform be implemented exactly the same way? IOW, if this is a problem, fix the JDK! /** Polls occasionally
Re: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently
I am OK with leaving this unfixed for now, Martin. And thank you for bringing the improvement to the jsr166 CVS. Regards Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 10:59 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently Hi Tristan, I don't think we should change the test without a sufficiently solid reason, which usually means reproducing the failure, even if e.g. only once in 1000 times. Here we don't know which queue implementation is causing the failure, and we don't know what chunk size was being used, which might cause the failure to be reproduced more reliably. A single test failure might be due to cosmic rays! Let's leave this unfixed while it cannot be reproduced. You can check in my general-purpose improvements to the test from the jsr166 CVS. On Sun, Feb 23, 2014 at 6:41 PM, Tristan Yan tristan@oracle.com wrote: Hi Martin, This test failed once in our nightly test, so this is a real failure. But unfortunately we don't know more beyond the log we have. The 10 extra seconds may need to serve 1024 loop iterations in total. Also, on Windows Thread.yield() does not actually give up the CPU most of the time, so we can have this situation: the remover is trying to remove an object from an empty queue and finds nothing several times (without giving up its CPU time), then the offer thread gets CPU time, inserts a few elements, and the queue becomes full. The offer thread tries to give up the CPU, but Thread.yield() doesn't really do so. Let's assume it takes 1024 iterations again, and the offer thread hits the timeout. Then the remover thread tries to remove elements and takes another 1024 iterations to reach its timeout as well. So the 10 seconds may need to be distributed over 2048 iterations at most, with 5 ms for each iteration. That's why I simulated this case with a Thread.sleep(20).
I admit I can't reproduce it without changing the code; this is all theoretical analysis, but it is the closest possible cause I can deduce. Thank you. Tristan From: Martin Buchholz [mailto:marti...@google.com] Sent: Monday, February 24, 2014 10:16 AM To: Tristan Yan Cc: core-libs-dev Subject: Re: Re: RFR for JDK-8031374: java/util/concurrent/ConcurrentQueues/OfferRemoveLoops.java fails Intermittently I may very well be missing something, but the actual extra timeout for threads to finish is 10 whole seconds, which should be long enough to process a single chunk, even on Windows. If you can repro this consistently, try to find out which queue implementation is affected. We can also shrink the max queue size and chunk size to reduce the time to traverse the queue elements. On Sun, Feb 23, 2014 at 4:23 PM, Tristan Yan tristan@oracle.com wrote: Thank you for reviewing this, Martin. The original bug report is here: https://bugs.openjdk.java.net/browse/JDK-8031374. I haven't reproduced this bug, but I can simulate the failure by changing yield() in the remover thread to Thread.sleep(20). You commented, "You've completely broken the intent of this code, which was to poll ***occasionally*** for quitting time". I tried not to change the logic of the test. This failure occurs only when the first round (1024 iterations) isn't finished within the given quitting time (before the timeout), which was 300 ms. One solution is increasing the default timeout, as you suggested, but the question is by how much. When the test is really slow and cannot finish the first round (1024 iterations) before the timeout, I couldn't figure out what a reasonable timeout value would be. Also, this would definitely slow down the test when it runs on a fast machine, which is the case most of the time. So I took the other approach: skip the wait if the test doesn't finish the first round (1024 iterations) before the timeout.
I wouldn't say I completely broke the intent of the code here, because it's a rare case (only when the machine is slow enough that the test doesn't finish the first round before the timeout). The reason I replaced Thread.yield() with Thread.sleep(0L) is that David Holmes pointed out in the bug that "sleep will certainly give up the CPU, whereas yield is not required to." In another mail David also suggested I use Thread.sleep(10L) instead of Thread.sleep(0L), preferably a bit longer, as we don't know how the OS will behave if the sleep time requested is less than the natural resolution of the sleep mechanism. In the experiments I've done on Windows, neither Thread.yield() nor Thread.sleep(10L) guarantees that the current thread gives up the CPU; it is only a hint to the OS. This means that on Windows the remover and offer threads can end up waiting on each other much longer than on other operating systems, which is also one of the reasons why we've seen this on Windows only. Regards Tristan From: Martin Buchholz
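The scenario discussed above can be sketched with a small, hypothetical producer/consumer pair. This is an illustrative reconstruction with invented names (it is not the actual OfferRemoveLoops test): both threads poll a shared deadline and back off with Thread.sleep, which actually suspends the thread, whereas Thread.yield is only a scheduling hint.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class OfferRemoveSketch {

    // Run an offer thread and a remover thread against a shared queue until
    // the deadline passes; returns the number of elements left in the queue.
    public static int run(long runMillis) throws InterruptedException {
        Queue<Integer> queue = new ConcurrentLinkedQueue<>();
        long deadline = System.currentTimeMillis() + runMillis;

        Thread offerer = new Thread(() -> {
            int i = 0;
            while (System.currentTimeMillis() < deadline) {
                queue.offer(i++);
                sleepQuietly(1);     // sleep really suspends the thread...
            }
        });
        Thread remover = new Thread(() -> {
            while (System.currentTimeMillis() < deadline) {
                if (queue.poll() == null) {
                    sleepQuietly(1); // ...whereas yield() is only a hint
                }
            }
        });
        offerer.start();
        remover.start();
        offerer.join();
        remover.join();
        return queue.size();
    }

    static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("leftover elements: " + run(200));
    }
}
```

On a scheduler that honors sleep, both threads make steady progress; the pathological interleaving Tristan describes arises when the back-off call does not actually cede the CPU.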
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Thank you Stuart. This is nice. I thought the two variables were awkward but I couldn't figure out the right solution. Also, I wasn't sure why we printed so many messages; now it's clear to me. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.05/ I'm sorry I have to ask you to review this again. Regards Tristan On Feb 15, 2014, at 9:47 AM, Stuart Marks stuart.ma...@oracle.com wrote: Hi Tristan, OK, we're getting close. Just a couple things about ShutdownGracefully.java. It's a bit clumsy to have both a boolean and a message string to keep track of the state of the test, but by itself this isn't such a big deal. A bigger deal is the lack of exception chaining. If we catch an unexpected exception at line 162, its class and message are printed out, but its stack trace isn't. If this were to occur it might be fiendishly difficult to debug. The TimeoutException isn't chained up either, but this isn't so bad, since there's only one place the timeout can occur. Still, it's good to chain exceptions where possible. There's another failure case that doesn't have an exception at all, which is when the registration request we're expecting to fail actually succeeds. This doesn't have an exception, but we should consider creating one. Here's an approach to getting rid of the boolean+string and also chaining up exceptions. The key insight is that an exception can be created in a different place from where it's thrown. Instead of the boolean+string, have a variable of type Exception that's initialized to null. Anyplace where we catch an exception that indicates failure, fill in this variable. The idea is that if this exception variable is non-null, a failure has occurred. The exception being non-null replaces the boolean, and the exception message replaces the failure string. In addition, the exception has a stack trace, which is essential for debugging.
For the case where we don't get the expected exception (i.e., registration succeeds unexpectedly), simply set the exception variable to a newly created exception, but don't throw it yet. For example, exception = new RuntimeException("The registration request succeeded unexpectedly"); At the place where we catch an unexpected exception, wrap the caught exception in a new RuntimeException with a message like "caught unexpected exception". The reason to do this is so we can add an additional message. The stack trace will also be a bit easier to follow. For the timeout, just assign the TimeoutException to the exception variable. Also, at each location where the exception variable is assigned to, print out a message at that point. It will at least show the state of the test to be a failure. The reason is that, if rmid.destroy() were to throw an exception from the finally-block, our carefully recorded exception state would be thrown away. (An alternative would be to put this into its own try-block, and add any exceptions caught from it to the suppressed-exception list of the exception variable, but it's not clear to me that this is worth it.) At the end of the test, if the exception variable is non-null, call TestLibrary.bomb() with it to fail the test. Finally, one typo: prcoess. Thanks for updating this webrev again. s'marks On 2/13/14 12:25 AM, Tristan Yan wrote: Thank you Stuart. I have fixed the comment in JavaVM.java. To deal with the different cases in ShutdownGracefully.java, two variables were added: one is a flag indicating whether the test passed, the other keeps the error message when the test failed. I put TestLibrary.bomb at the bottom of the main method, which only shows the test-failure message. Could you review it again? http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.04/ Tristan On 02/13/2014 05:29 AM, Stuart Marks wrote: Hi Tristan, JavaVM.waitFor looks mostly fine. The indentation of the start of the waitFor(timeout) javadoc comment is a bit off, though; please fix.
There are still some adjustments that need to be made in ShutdownGracefully.java. Both have to do with the case where the last registration succeeds unexpectedly -- this is the one that we expect to fail. First, if this registration has succeeded unexpectedly, that means rmid is still running. If that occurs, the call to rmid.waitFor(timeout) will inevitably time out. It may be worth calling rmid.destroy() directly at this point. Second, still in the succeeded-unexpectedly case, at line 154 TestLibrary.bomb() is called. This throws an exception, but it's caught by the catch-block at lines 157-158, which calls TestLibrary.bomb() again, saying unexpected exception. Except that this is kind of expected, since it was thrown from an earlier call to TestLibrary.bomb(). This is quite confusing. There are several cases that need to be handled. 1. Normal case. Registration fails as expected, rmid has terminated gracefully. Test passes. 2. Rmid is still running
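The record-then-throw pattern Stuart describes can be sketched roughly as follows. This is a minimal illustration with invented names (runTest, cleanup, the registrationSucceeded flag); it is not the ShutdownGracefully code itself:

```java
public class ExceptionRecordingSketch {

    // Record failures in an exception variable; throw only once, at the end,
    // after cleanup. A non-null return means the test failed.
    public static Exception runTest(boolean registrationSucceeded) {
        Exception failure = null;   // non-null means the test has failed
        try {
            if (registrationSucceeded) {
                // Expected-to-fail operation succeeded: create the exception
                // here, where the context is known, but don't throw it yet.
                failure = new RuntimeException(
                        "The registration request succeeded unexpectedly");
                System.err.println("FAILURE: " + failure.getMessage());
            }
        } catch (RuntimeException e) {
            // Chain the unexpected exception so its stack trace survives.
            failure = new RuntimeException("caught unexpected exception", e);
        } finally {
            cleanup(failure);
        }
        return failure;             // the caller throws if non-null
    }

    static void cleanup(Exception failure) {
        try {
            // stand-in for rmid.destroy()
        } catch (RuntimeException e) {
            if (failure != null) {
                failure.addSuppressed(e);  // don't lose the recorded failure
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("recorded failure: " + runTest(true));
    }
}
```

The point of the pattern is that the exception carries both the message (replacing the string) and a stack trace from the point of failure, and nothing thrown during cleanup can silently discard it.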
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Thank you Stuart. I have fixed the comment in JavaVM.java. To deal with the different cases in ShutdownGracefully.java, two variables were added: one is a flag indicating whether the test passed, the other keeps the error message when the test failed. I put TestLibrary.bomb at the bottom of the main method, which only shows the test-failure message. Could you review it again? http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.04/ Tristan On 02/13/2014 05:29 AM, Stuart Marks wrote: Hi Tristan, JavaVM.waitFor looks mostly fine. The indentation of the start of the waitFor(timeout) javadoc comment is a bit off, though; please fix. There are still some adjustments that need to be made in ShutdownGracefully.java. Both have to do with the case where the last registration succeeds unexpectedly -- this is the one that we expect to fail. First, if this registration has succeeded unexpectedly, that means rmid is still running. If that occurs, the call to rmid.waitFor(timeout) will inevitably time out. It may be worth calling rmid.destroy() directly at this point. Second, still in the succeeded-unexpectedly case, at line 154 TestLibrary.bomb() is called. This throws an exception, but it's caught by the catch-block at lines 157-158, which calls TestLibrary.bomb() again, saying unexpected exception. Except that this is kind of expected, since it was thrown from an earlier call to TestLibrary.bomb(). This is quite confusing. There are several cases that need to be handled. 1. Normal case. Registration fails as expected, rmid has terminated gracefully. Test passes. 2. Rmid is still running and has processed the registration request successfully. Need to kill rmid and fail the test. 3. Rmid is still running but is in some bad state where the registration request failed. Need to kill rmid and fail the test. 4. Some other unexpected failure. This is what the catch and finally blocks at lines 157-161 are for. These four cases need to be handled independently.
Ideally they should be separated from the cleanup code. As noted above, you don't want to throw an exception from the try-block, because it will get caught by your own catch-block. Similarly, it's tempting to return from the midst of the try-block in the success case, but this still runs the finally-block. This can be quite confusing. A typical technique for dealing with this kind of issue is to record the results of operations from within the try-block, and then analyze the results outside, throwing a test failure (TestLibrary.bomb) at that point, where it won't be caught by the test's own catch-block. Editorial: - line 153, there should be a space between 'if' and the opening paren - line 156, typo, gracefuuly Finally, it would be helpful if you could get webrev to generate the actual changeset instead of the plain patch, per my other review email. Thanks, s'marks On 2/11/14 9:39 PM, Tristan Yan wrote: Thank you for your thorough mail. This is very educational. I took your advice and generated a new webrev: http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.03/ I'd appreciate it if you could review this again. Regards Tristan On Feb 11, 2014, at 8:32 AM, Stuart Marks stuart.ma...@oracle.com mailto:stuart.ma...@oracle.com wrote: Hi Tristan, Sorry about my recurring delays. Several comments on these changes. JavaVM.java -- The waitFor(timeout) method is mostly ok. The new thread started at line 208 and following seems unnecessary, though. This code is reached when a timeout occurs, so throwing TimeoutException is the only thing necessary in this case. Should the code to start the new thread be removed? There should be a similar check for vm == null as in the waitFor() [no args] method. ShutdownGracefully.java -- The condition that's checked after calling rmid.waitFor(SHUTDOWN_TIMEOUT) is incorrect. It's testing the exit status against zero. Offhand, when and if rmid exits, it might exit with a nonzero exit status.
If rmid has exited at this point, then the test should succeed. Instead of testing against zero, the code should catch TimeoutException, which means that rmid is still running. It's probably reasonable to catch TimeoutException, print a message, and then let the finally-block destroy the rmid. Calling TestLibrary.bomb() from within the try-block is confusing, since that method throws an exception, which is then caught by the catch-block, which then calls TestLibrary.bomb() again. We should also make sure to test the success case properly. If rmid.waitFor() returns in a timely fashion without throwing TimeoutException, it doesn't matter what the exit status is. (It might be worth printing it out.) At that point we know that rmid *has* exited gracefully, so we need to set rmid to null so that the finally-block doesn't attempt to destroy rmid redundantly. Some additional messages about rmid having exited and the test passing are also warranted for this case. Some additional cleanup can
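The waitFor-with-timeout contract being discussed (return the exit status if the process exited, throw TimeoutException if it is still running) maps directly onto the Java SE 8 Process.waitFor(long, TimeUnit) API. A minimal sketch, not the actual JavaVM wrapper from the test library:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class WaitForSketch {

    // Wait up to timeoutMillis for the process to exit. On exit, return its
    // status; otherwise throw TimeoutException, leaving the process running
    // (the caller's finally-block is then responsible for destroying it).
    public static int waitFor(Process p, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        if (p.waitFor(timeoutMillis, TimeUnit.MILLISECONDS)) {
            return p.exitValue();   // process exited; report its status
        }
        throw new TimeoutException(
                "process still running after " + timeoutMillis + " ms");
    }

    public static void main(String[] args) throws Exception {
        // Spawn a short-lived JVM that just prints its version.
        Process p = new ProcessBuilder("java", "-version")
                .redirectErrorStream(true).start();
        System.out.println("exit status: " + waitFor(p, 30_000));
    }
}
```

Note that, as Stuart points out for JavaVM, a wrapper built this way still has to define who collects the exit status and closes the subprocess pipes when the timeout path is taken.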
Re: RFR for JDK-8030844:sun/rmi/rmic/classpath/RMICClassPathTest.java timeout in same binaries run
Thank you Stuart. This is a very nice tutorial. I did try both ways. Adopting your fix doesn't seem to work for me; it still doesn't generate a changeset. But running without the -r option works. http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.02/ Could you push it for me? Tristan On 02/13/2014 03:48 AM, Stuart Marks wrote: Hi Tristan, Changes look good. Unfortunately the webrev doesn't contain an actual changeset; it just contains a patch. In the webrev header there is the line Patch of Changes. Instead it should say Changeset. Like this one: http://cr.openjdk.java.net/~smarks/reviews/7900311/webrev.1/ Unfortunately webrev doesn't always produce an actual changeset; in particular it doesn't if you use webrev -r. (My example webrev above is a patch that changes webrev to generate the changeset even with -r. It hasn't been pushed yet; possibly it will be later today. Or you can apply my webrev changeset to your local webrev.) Or, you can just run webrev without -r and it should check hg outgoing to determine the changesets used in generating the webrev, which does place the changeset into the webrev. Can you regenerate the webrev so that it includes the actual changeset? Sorry, this is a small thing, but as someone who is sponsoring changesets, I'd appreciate an actual changeset in the webrev instead of having to construct one manually. I think other sponsors would appreciate this too. s'marks On 2/11/14 6:59 PM, Tristan Yan wrote: Thank you Stuart http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.01/ Regards Tristan On Feb 12, 2014, at 10:06 AM, Stuart Marks stuart.ma...@oracle.com mailto:stuart.ma...@oracle.com wrote: Hi, yes, I'll take this one. It's slightly odd that this is creating filenames that already have / in them (as opposed to File.separator) but since these files don't actually have to exist, I suppose it doesn't really matter.
I'm not convinced that we actually have any evidence that /home/~user is really causing the hang/timeout (either caused by the automounter hanging on /home or an LDAP or other nameservice lookup on ~user), but this is harmless, and it'll fix the problem on the off chance that this really is the root cause. Tristan, please update the test's @bug tag to add 8030844, create a changeset, and create a webrev with the changeset in it (as opposed to a bare patch). I'll then push it for you. s'marks On 2/10/14 4:08 AM, Alan Bateman wrote: On 10/02/2014 10:57, Tristan Yan wrote: Ping: can anyone review this? Thank you Tristan Changing the test so that it doesn't use /home/~user seems reasonable to me. Stuart - I see you've been sponsoring Tristan's updates to the RMI tests. Are you going to take this one too? -Alan On Feb 6, 2014, at 5:13 PM, Tristan Yan tristan@oracle.com mailto:tristan@oracle.com wrote: Hi All Please help to review a simple fix for https://bugs.openjdk.java.net/browse/JDK-8030844 http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.00/. Description: Replace the "/home/~user" folder with a test source path. The "/home/~user" folder causes problems whenever something is wrong with the automount filesystem or there is a username-lookup problem. Thank you Tristan
Re: RFR for JDK-8030844:sun/rmi/rmic/classpath/RMICClassPathTest.java timeout in same binaries run
Thank you Stuart http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.01/ Regards Tristan On Feb 12, 2014, at 10:06 AM, Stuart Marks stuart.ma...@oracle.com wrote: Hi, yes, I'll take this one. It's slightly odd that this is creating filenames that already have / in them (as opposed to File.separator) but since these files don't actually have to exist, I suppose it doesn't really matter. I'm not convinced that we actually have any evidence that /home/~user is really causing the hang/timeout (either caused by the automounter hanging on /home or an LDAP or other nameservice lookup on ~user), but this is harmless, and it'll fix the problem on the off chance that this really is the root cause. Tristan, please update the test's @bug tag to add 8030844, create a changeset, and create a webrev with the changeset in it (as opposed to a bare patch). I'll then push it for you. s'marks On 2/10/14 4:08 AM, Alan Bateman wrote: On 10/02/2014 10:57, Tristan Yan wrote: Ping: can anyone review this? Thank you Tristan Changing the test so that it doesn't use /home/~user seems reasonable to me. Stuart - I see you've been sponsoring Tristan's updates to the RMI tests. Are you going to take this one too? -Alan On Feb 6, 2014, at 5:13 PM, Tristan Yan tristan@oracle.com wrote: Hi All Please help to review a simple fix for https://bugs.openjdk.java.net/browse/JDK-8030844 http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.00/. Description: Replace the "/home/~user" folder with a test source path. The "/home/~user" folder causes problems whenever something is wrong with the automount filesystem or there is a username-lookup problem. Thank you Tristan
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Thank you for your thorough mail. This is very educational. I took your advice and generated a new webrev: http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.03/ I'd appreciate it if you could review this again. Regards Tristan On Feb 11, 2014, at 8:32 AM, Stuart Marks stuart.ma...@oracle.com wrote: Hi Tristan, Sorry about my recurring delays. Several comments on these changes. JavaVM.java -- The waitFor(timeout) method is mostly ok. The new thread started at line 208 and following seems unnecessary, though. This code is reached when a timeout occurs, so throwing TimeoutException is the only thing necessary in this case. Should the code to start the new thread be removed? There should be a similar check for vm == null as in the waitFor() [no args] method. ShutdownGracefully.java -- The condition that's checked after calling rmid.waitFor(SHUTDOWN_TIMEOUT) is incorrect. It's testing the exit status against zero. Offhand, when and if rmid exits, it might exit with a nonzero exit status. If rmid has exited at this point, then the test should succeed. Instead of testing against zero, the code should catch TimeoutException, which means that rmid is still running. It's probably reasonable to catch TimeoutException, print a message, and then let the finally-block destroy the rmid. Calling TestLibrary.bomb() from within the try-block is confusing, since that method throws an exception, which is then caught by the catch-block, which then calls TestLibrary.bomb() again. We should also make sure to test the success case properly. If rmid.waitFor() returns in a timely fashion without throwing TimeoutException, it doesn't matter what the exit status is. (It might be worth printing it out.) At that point we know that rmid *has* exited gracefully, so we need to set rmid to null so that the finally-block doesn't attempt to destroy rmid redundantly. Some additional messages about rmid having exited and the test passing are also warranted for this case.
Some additional cleanup can be done here as well, over and above the changes you've proposed. (This stuff is left over from earlier RMI test messes.) In order to shut down an active object, the code here spawns a new thread that sleeps for a while and then deactivates this object. This isn't necessary. (It might have been necessary in the past.) It's sufficient simply to unexport this object and then deactivate it, directly within the shutdown() method. See test/java/rmi/activation/ActivationSystem/unregisterGroup/UnregisterGroup.java for an example of this. In addition, the run() method can be removed, and the implements Runnable declaration can also be removed from the ShutdownGracefully test class. Finally, revisiting some code farther up in the test, the try-block at lines 135-140 issues a registration request that the test expects to fail. If it succeeds, the message at line 139 isn't very clear; it should say that the registration request succeeded unexpectedly. This should cause the test to fail. We still probably want to go through the waitFor(timeout) path and eventual rmid cleanup, but a flag should be set here to ensure that the test indeed fails if the registration succeeds unexpectedly, and the messages should clearly indicate that this is going on. A good way to test this last case is to change rmid's security manager to the normal security manager java.lang.SecurityManager instead of TestSecurityManager. Thanks, s'marks On 2/10/14 2:59 AM, Tristan Yan wrote: Hi Stuart Could you help to review this? Thank you Tristan On Jan 31, 2014, at 4:36 PM, Tristan Yan tristan@oracle.com mailto:tristan@oracle.com wrote: Thank you for fixing JDK-8023541. I'll leave ActivationLibrary.java alone for now. I still did some cleanup following your suggestion. 1. I changed the waitFor(long timeout) method; it now uses code like Process.waitFor(timeout, unit). This can be backported to JDK 7. Also, exitValue is kept as a return value.
To make sure there is no pipe leak, a cleanup thread starts when the timeout happens. 2. The change in ShutdownGracefully is a little tricky. I think we should just destroy the JVM once an exception is thrown, so I moved the wait logic into the try-block instead of keeping it in the finally-block. Can you review it again? http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.02/ Thank you Tristan On 01/29/2014 03:16 PM, Stuart Marks wrote: Hi Tristan, I don't want to put the workaround into ActivationLibrary.rmidRunning() for a null return from the lookup, because this is only a workaround for an actual bug in rmid initialization. See the review I just posted for JDK-8023541. Adding JavaVM.waitFor(timeout) is something that would be useful in general, but it needs to be handled carefully. It uses the new Process.waitFor(timeout, unit) which is new in Java SE 8; this makes
Re: RFR for JDK-8030844:sun/rmi/rmic/classpath/RMICClassPathTest.java timeout in same binaries run
Ping: can anyone review this? Thank you Tristan On Feb 6, 2014, at 5:13 PM, Tristan Yan tristan@oracle.com wrote: Hi All Please help to review a simple fix for https://bugs.openjdk.java.net/browse/JDK-8030844 http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.00/. Description: Replace the "/home/~user" folder with a test source path. The "/home/~user" folder causes problems whenever something is wrong with the automount filesystem or there is a username-lookup problem. Thank you Tristan
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Hi Stuart Could you help to review this? Thank you Tristan On Jan 31, 2014, at 4:36 PM, Tristan Yan tristan@oracle.com wrote: Thank you for fixing JDK-8023541. I'll leave ActivationLibrary.java alone for now. I still did some cleanup following your suggestion. 1. I changed the waitFor(long timeout) method; it now uses code like Process.waitFor(timeout, unit). This can be backported to JDK 7. Also, exitValue is kept as a return value. To make sure there is no pipe leak, a cleanup thread starts when the timeout happens. 2. The change in ShutdownGracefully is a little tricky. I think we should just destroy the JVM once an exception is thrown, so I moved the wait logic into the try-block instead of keeping it in the finally-block. Can you review it again? http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.02/ Thank you Tristan On 01/29/2014 03:16 PM, Stuart Marks wrote: Hi Tristan, I don't want to put the workaround into ActivationLibrary.rmidRunning() for a null return from the lookup, because this is only a workaround for an actual bug in rmid initialization. See the review I just posted for JDK-8023541. Adding JavaVM.waitFor(timeout) is something that would be useful in general, but it needs to be handled carefully. It uses the new Process.waitFor(timeout, unit) which is new in Java SE 8; this makes backporting to 7 more complicated. Not clear whether we'll do so, but I don't want to foreclose the opportunity without discussion. It's also not clear how one can get the vm's exit status after JavaVM.waitFor() has returned true. With the Process API it's possible simply to call waitFor() or exitValue(). With JavaVM, a new API needs to be created, or the rule has to be established that one must call JavaVM.waitFor() to collect the exit status as well as to close the pipes from the subprocess. If JavaVM.waitFor(timeout, unit) is called without subsequently calling JavaVM.waitFor(), the pipes are leaked.
In ShutdownGracefully.java, the finally-block needs to check to see if rmid is still running, and if it is, to shut it down. Simply calling waitFor(timeout, unit) isn't sufficient, because if the rmid process is still running, it will be left running. The straightforward approach would be to call ActivationLibrary.rmidRunning() to test if it's still running. Unfortunately this isn't quite right, because rmidRunning() has a timeout loop in it -- which should probably be removed. (I think there's a bug for this.) Another approach would be simply to call rmid.destroy(). This calls rmid's shutdown() method first, which is reasonable, but I'm not sure it kills the process if that fails. In any case, this already has a timeout loop waiting for the process to die, so ShutdownGracefully.java needn't use a new waitFor(timeout, unit) call. Removing the commented-out code that starts with "no longer needed" is good, and removing the ShutdownDetectThread is also good, since that's unnecessary. There are some more cleanups I have in mind here but I'd like to see a revised webrev before proceeding. Thanks, s'marks On 1/25/14 8:57 PM, Tristan Yan wrote: Hi Stuart Thank you for your review and suggestion. Yes, since this failure mode is very hard to reproduce, I guess it makes sense to do some hacking. I also noticed that ActivationLibrary.rmidRunning does try to look up the ActivationSystem but doesn't check whether it's null, so I added logic to make sure we look up a non-null ActivationSystem. Also, I did some cleanup, if you don't mind: I added a waitFor(long timeout, TimeUnit unit) to JavaVM, which gives us better waitFor control. I'd appreciate it if you could review the code again. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.01/ Thank you Tristan On 01/25/2014 10:20 AM, Stuart Marks wrote: On 1/23/14 10:34 PM, Tristan Yan wrote: Hi All Could you review the bug fix for JDK-8032050.
http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.00/ Description: This rarely-occurring failure happens because when rmid starts, there is no guarantee that sun.rmi.server.Activation.startActivation has finished. Fixed by adding an iterative getSystem with a 5-second timeout. Hi Tristan, Adding a timing/retry loop into this test isn't the correct approach for fixing this test. The timing loop isn't necessary because there is already a timing loop in RMID.start() in the RMI test library. (There's another timing loop in ActivationLibrary.rmidRunning() which should probably be removed.) So the intent of this library call is that it starts rmid and waits for it to become ready. That logic doesn't need to be added to the test. In the bug report JDK-8032050 you had mentioned that the NullPointerException was suspicious. You're right! I took a look and it seemed like it was related to JDK-8023541, and I added a note to this effect to the bug report. The problem here is that rmid can come up
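The timing/retry loop being debated here (poll a service lookup until it returns something non-null or a deadline passes) has this general shape. A generic sketch with invented names, not the RMID.start() or ActivationLibrary code from the test library:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class RetryLoopSketch {

    // Poll the lookup until it returns non-null or the deadline passes.
    // Belongs in the library's start/ready logic, not in individual tests.
    public static <T> T waitForNonNull(Supplier<T> lookup,
                                       long timeoutMillis,
                                       long pollMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T value = lookup.get();
            if (value != null) {
                return value;       // e.g. a non-null ActivationSystem stub
            }
            Thread.sleep(pollMillis);
        }
        throw new TimeoutException(
                "service not available after " + timeoutMillis + " ms");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(waitForNonNull(() -> "service stub", 5_000, 50));
    }
}
```

Stuart's point stands regardless of the shape: only one such loop should exist, in the library call that starts the service, so that tests can assume the service is ready when the call returns.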
RFR for JDK-8030844:sun/rmi/rmic/classpath/RMICClassPathTest.java timeout in same binaries run
Hi All Please help to review a simple fix for https://bugs.openjdk.java.net/browse/JDK-8030844 http://cr.openjdk.java.net/~tyan/JDK-8030844/webrev.00/. Description: Replace the "/home/~user" folder with a test source path. The "/home/~user" folder causes problems whenever something is wrong with the automount filesystem or there is a username-lookup problem. Thank you Tristan
Re: Demo for Parallel Core Collection API
Thank you Paul. I was told we may target this for 8u20 if we can't make 8. Let's do the open review first. http://cr.openjdk.java.net/~tyan/JDK-8033358/webrev.02/ Fibonacci: Updated as you suggested. Testing shows the parallel version is faster than the sequential version when the input is greater than 100,000. ImageTransform: The inner parallelism was eliminated. Also, BufferedImage.set is a thread-safe method. MonteCarloPI: The unused hit method was removed. RandomPrimeNumber: It's changed into ProbablePrimeNumber, following your suggestion to use the BigInteger.probablePrime method. Thank you Tristan -----Original Message----- From: Paul Sandoz Sent: Tuesday, February 04, 2014 5:34 PM To: Tristan Yan Cc: core-libs-dev@openjdk.java.net; Aleksandre Iline; Mikhail Kondratyev Subject: Re: Demo for Parallel Core Collection API On Feb 4, 2014, at 2:58 AM, Tristan Yan tristan@oracle.com wrote: Hi Paul I know this may be a little bit late. Yes, likely too late... (see below). But I am still asking you to review this. http://cr.openjdk.java.net/~tyan/JDK-8033358/webrev.01/ This is the whole demo code: it includes the stream demo code you reviewed before as well as the parallel part. There is one other parallel demo that Paul has already pushed. Could you please review it again? Fibonacci - There is no parallel execution; one can write as follows for better splitting: return IntStream.range(0, n).mapToObj(i -> matrix). parallel(). reduce(Fibonacci::times). get(); - While the approach is interesting (elegant in terms of the reduction) one is not likely to observe any speed up (did you measure?) since the long value will overflow on the 93rd number [1]. You would need to use BigInteger and a larger number of iterations. ImageTransform - In rotate/shift/zoom/invert the inner range (over height) should not be parallel; it's just going to fight for resources with the outer range (over width). - It's not entirely clear to me if BufferedImage is thread safe to concurrently set the pixels MonteCarloPI - hit() method is no longer used.
RandomPrimeNumber - better to use ThreadLocalRandom or SplittableRandom, as both of those produce higher quality sequences of random numbers. - i don't think this example has much value as there are better ways to determine primes. As i have said before, if you want to show a simple example it is better to show the parallel generation of probable primes using BigInteger. -- I am sorry to have to say this, i know you have worked on this for a while, but i think this is now too late to go through another round of reviews. Paul. [1] Think rabbits! http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibtable.html You can think
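Paul's suggested replacement example, parallel generation of probable primes using BigInteger, might look roughly like this. A sketch with invented names; it is not the ProbablePrimeNumber demo under review:

```java
import java.math.BigInteger;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ProbablePrimeSketch {

    // Generate 'count' probable primes of the given bit length in parallel.
    // BigInteger.probablePrime is CPU-bound, so the work decomposes well,
    // and ThreadLocalRandom avoids contention on a shared Random instance.
    public static List<BigInteger> probablePrimes(int count, int bitLength) {
        return IntStream.range(0, count)
                .parallel()
                .mapToObj(i -> BigInteger.probablePrime(
                        bitLength, ThreadLocalRandom.current()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        probablePrimes(8, 256).forEach(System.out::println);
    }
}
```

Unlike a trial-division primality filter, each element here does substantial independent work, so the parallel speedup is easy to observe, which is exactly the property Paul asks of a demo.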
Re: Demo for Parallel Core Collection API
Hi Paul I know this may be a little bit late. But I am still asking you to review this. http://cr.openjdk.java.net/~tyan/JDK-8033358/webrev.01/ This is the whole demo code: it includes the stream demo code you reviewed before as well as the parallel part. There is one other parallel demo that Paul has already pushed. Could you please review it again? Thank you so much Tristan -----Original Message----- From: Paul Sandoz Sent: Tuesday, January 14, 2014 9:27 PM To: core-libs-dev@openjdk.java.net Cc: Tristan Yan Subject: Re: Demo for Parallel Core Collection API Hi Tristan, See below for a patch. The location of the code seems a little out of place with the other code. I would have expected a structure such as: stream/parallel/*.java where the source code is in the default no-package. I am not yet convinced of the value of the RandomPrimeNumber example. In your original code you had a parallel stream invoking another parallel stream in the filter (the simple division to return a prime); this has the effect of actually slowing down the computation due to each calculation fighting for F/J resources and threads. Remove the additional parallelism and it is not clear when eyeballing the execution time that parallel is faster than sequential (it probably is, but it is handy to have obvious examples). A simpler example is to generate a list of probable primes of a certain bit length. IMHO a better example in the category of slightly more complex is the rendering of the Mandelbrot set, parallelized along the real or imaginary axis. One can easily generate some nice ASCII-like art :-) Paul. On Dec 20, 2013, at 6:25 PM, Paul Sandoz paul.san...@oracle.com wrote: Thanks, I need to look at this in more detail, but here are some quick comments.
- recommend you try and avoid using limit with parallel ops, for example the Pi example cam be reformulated as: long M = LongStream.range(0, N).parallel().filter(sr - { double x = ThreadLocalRandom.current().nextDouble(-1, 1); double y = ThreadLocalRandom.current().nextDouble(-1, 1); return x * x + y * y R * R; // Don't use need to use sqrt }).count(); double pi = (M / N) * 4.0; the Primes example could be reformulated as: LongStream.range(0, limit).parallel().map(/odd values/).filter(RandomPrimeNumber::isPrime).findAny(); you don't need to declare unordered() since findAny implicitly makes the stream unordered by definition. The key message here is range has better decomposition characteristics than generate or iterate. More later, probably after the break, Paul. diff -r d56cd0872ec4 src/share/sample/demo/parallel/MonteCarloPI.java --- a/src/share/sample/demo/parallel/MonteCarloPI.java Tue Jan 14 13:34:22 2014 +0100 +++ b/src/share/sample/demo/parallel/MonteCarloPI.java Tue Jan 14 14:15:22 2014 +0100 @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2014 Oracle and/or its affiliates. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions @@ -36,16 +36,14 @@ * input validation and proper error handling, might not be present in * this sample code. */ -package demo.parallel; import java.util.concurrent.ThreadLocalRandom; import java.util.stream.LongStream; /** - * This demo shows how to use the parallel mode and the Monte Carlo method to - * calculate the value of PI + * This demo shows how to use parallel streams to approximately + calculate the + * value of π using the Monte Carlo method. * - * @author tyan */ public class MonteCarloPI { @@ -62,39 +60,32 @@ } /** - * Use the Monte Carlo method to calculate the value of PI. basic algorithm + * Use the Monte Carlo method to calculate the value of π. 
basic algorithm * is: 1. Draw a square on the ground, then inscribe a circle within it. 2. * Scatter some objects of uniform size (grains of rice or sand) over the * square. 3. Count the total number of objects inside the circle and the - * total number of objects overall. 4. The ratio of the two total is an - * estimate of the ratio of the two areas, which is PI/4. Multiply the - * result by 4 to estimate PI. + * total number of objects overall. 4. The ratio of the two totals is an + * estimate of the ratio of the two areas, which is pi/4. Multiply the + * result by 4 to estimate π. * - * @param x how many times randomly selected a point - * @return value of π by x times calculation + * @param n how many times to randomly select a point + * @return the approximate value of π */ -private static double pi(long x) { -return LongStream.generate(() -> hit()). -// using parallel mode -parallel(). -// select only x elements -limit(x).sum
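Paul's range-based reformulation of the Monte Carlo Pi demo can be sketched as a complete runnable program (a sketch under the thread's suggestions, not the demo code actually pushed; class and method names here are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.LongStream;

public class MonteCarloPiSketch {

    /** Estimate pi by counting random points that fall inside the unit circle. */
    public static double pi(long n) {
        // range() decomposes much better for parallel execution than
        // generate()/iterate() + limit(), which was Paul's key point.
        long m = LongStream.range(0, n).parallel().filter(i -> {
            double x = ThreadLocalRandom.current().nextDouble(-1, 1);
            double y = ThreadLocalRandom.current().nextDouble(-1, 1);
            return x * x + y * y <= 1.0;   // inside the circle; no sqrt needed
        }).count();
        // Area ratio is pi/4; multiply in floating point to avoid the
        // integer-division trap of writing (m / n) * 4.0 with two longs.
        return 4.0 * m / n;
    }

    public static void main(String[] args) {
        System.out.println(pi(10_000_000));
    }
}
```

With 10 million samples the estimate is typically within a few thousandths of Math.PI; accuracy improves only with the square root of the sample count.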
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Thank you for fixing JDK-8023541. Then I will leave ActivationLibrary.java for now. I still did some cleanup following your suggestion. 1. I changed the waitFor(long timeout) method; this method is going to use code like Process.waitFor(timeout, unit). This can be backported to JDK 7. Also exitValue is kept as a return value. To make sure there is no pipe leak, a cleanup thread will start when the timeout happens. 2. The change in ShutdownGracefully is a little tricky. I think we should just destroy the JVM once an exception is thrown. So I moved the wait logic into the try block instead of keeping it in the finally block. Can you review it again? http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.02/ Thank you Tristan On 01/29/2014 03:16 PM, Stuart Marks wrote: Hi Tristan, I don't want to put the workaround into ActivationLibrary.rmidRunning() for a null return from the lookup, because this is only a workaround for an actual bug in rmid initialization. See the review I just posted for JDK-8023541. Adding JavaVM.waitFor(timeout) is something that would be useful in general, but it needs to be handled carefully. It uses the new Process.waitFor(timeout, unit) which is new in Java SE 8; this makes backporting to 7 more complicated. It's not clear whether we'll do so, but I don't want to foreclose the opportunity without discussion. It's also not clear how one can get the vm's exit status after JavaVM.waitFor() has returned true. With the Process API it's possible simply to call waitFor() or exitValue(). With JavaVM, a new API needs to be created, or the rule has to be established that one must call JavaVM.waitFor() to collect the exit status as well as to close the pipes from the subprocess. If JavaVM.waitFor(timeout, unit) is called without subsequently calling JavaVM.waitFor(), the pipes are leaked. In ShutdownGracefully.java, the finally-block needs to check to see if rmid is still running, and if it is, to shut it down.
Simply calling waitFor(timeout, unit) isn't sufficient, because if the rmid process is still running, it will be left running. The straightforward approach would be to call ActivationLibrary.rmidRunning() to test if it's still running. Unfortunately this isn't quite right, because rmidRunning() has a timeout loop in it -- which should probably be removed. (I think there's a bug for this.) Another approach would be simply to call rmid.destroy(). This calls rmid's shutdown() method first, which is reasonable, but I'm not sure it kills the process if that fails. In any case, this already has a timeout loop waiting for the process to die, so ShutdownGracefully.java needn't use a new waitFor(timeout, unit) call. Removing the commented-out code that starts with "no longer needed" is good, and removing the ShutdownDetectThread is also good, since that's unnecessary. There are some more cleanups I have in mind here but I'd like to see a revised webrev before proceeding. Thanks, s'marks On 1/25/14 8:57 PM, Tristan Yan wrote: Hi Stuart Thank you for your review and suggestion. Yes, since this failure mode is very hard to reproduce, I guess it makes sense to do some hacking. And I also noticed that ActivationLibrary.rmidRunning does try to look up the ActivationSystem but doesn't check if it's null. So I added the logic to make sure we look up a non-null ActivationSystem. Also I did some cleanup, if you don't mind: added a waitFor(long timeout, TimeUnit unit) to JavaVM, so that we can have better waitFor control. I would appreciate it if you could review the code again. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.01/ Thank you Tristan On 01/25/2014 10:20 AM, Stuart Marks wrote: On 1/23/14 10:34 PM, Tristan Yan wrote: Hi All Could you review the bug fix for JDK-8032050. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.00/ Description: This rarely-occurring failure happens because when RMID starts, it doesn't guarantee that sun.rmi.server.Activation.startActivation finishes.
The fix adds an iterative getSystem with a 5-second timeout. Hi Tristan, Adding a timing/retry loop into this test isn't the correct approach for fixing this test. The timing loop isn't necessary because there is already a timing loop in RMID.start() in the RMI test library. (There's another timing loop in ActivationLibrary.rmidRunning() which should probably be removed.) So the intent of this library call is that it start rmid and wait for it to become ready. That logic doesn't need to be added to the test. In the bug report JDK-8032050 you had mentioned that the NullPointerException was suspicious. You're right! I took a look and it seemed like it was related to JDK-8023541, and I added a note to this effect to the bug report. The problem here is that rmid can come up and transiently return null instead of the stub of the activation system. That's what JDK-8023541 covers. I think that rmid itself needs to be fixed, though modifying the timing loop in the RMI test library to wait for rmid
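Stuart's point about collecting the exit status and not leaking pipes after a timed wait can be illustrated with the standard java.lang.Process API (a generic sketch, not the JavaVM test-library code; the "sleep" command is an assumption for illustration and is POSIX-specific):

```java
import java.util.concurrent.TimeUnit;

public class WaitForSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical subprocess standing in for the rmid child process.
        Process p = new ProcessBuilder("sleep", "1").start();

        // waitFor(timeout, unit), new in Java SE 8, returns false on timeout.
        if (p.waitFor(5, TimeUnit.SECONDS)) {
            // Only after the process has exited is exitValue() safe to call.
            System.out.println("exit status: " + p.exitValue());
        } else {
            // On timeout the process is still running; kill it rather than
            // leave it behind, then reap it so its pipes are released.
            p.destroy();
            p.waitFor();
        }
    }
}
```

This mirrors the rule Stuart describes for JavaVM: a timed wait alone is not enough; the caller must still collect the exit status (and close the pipes) or shut the process down.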
Re: RFR(s): 8023541 Race condition in rmid initialization
Hi Stuart Looks like in your new webrev the wait will continue to loop until systemStub is not null. Is that what you meant to do? Thank you Tristan On Jan 30, 2014, at 10:57 AM, Stuart Marks stuart.ma...@oracle.com wrote: Hi all, wow, lots of comments on this. Let me see if I can tackle them in one message. Quick aside before I get to the issues: my priorities for this code are correctness and maintainability, possibly at the expense of performance. In other words I'm willing to make the code more complex so that it's correct, but I'm less willing to make it more complex in order to make it go faster. (Tristan, David) Making 'initialized' be volatile. As things stand, as David has pointed out (thanks!) it's not necessary for it to be volatile. There are other circumstances (see below) where it would be necessary to make it volatile, though. (Alan, Paul) We could convert to double-checked locking and avoid a synchronization in the common case, paying only a volatile read. Something like: volatile boolean initialized = false; ... private void awaitInitialized() { if (!initialized) { synchronized (this) { try { while (!initialized) { wait(); } } catch (InterruptedException ie) { Thread.currentThread().interrupt(); } } } } I *think* that's right. (Is it?) I don't know whether this performs any better, or if it does, whether it's worth the additional complexity. I had to squint at this for a while to convince myself it's correct. I am fairly sure this is not a performance-critical area of code. (Famous last words, I know.) The other threads that can be active here are handling remote calls, so they're already doing network I/O, unmarshalling, and dispatching to the RMI thread pool. Compared to this, the incremental cost of a synchronization block seems inconsequential.
I don't have much intuition about how much we'd save by substituting a volatile read for a full synchronization in the common case, but given that this is within the context of a fairly expensive operation, it doesn't seem like it's worth it to me. (Alan, Paul) Calling awaitInitialized isn't strictly necessary anywhere except for the equals(NAME) case of lookup(). Yes, that's right. I added it everywhere because of a possibly misguided sense of completeness and consistency. One could essentially redefine awaitInitialized() to protect just the systemStub field, not the entire object, whose only state is that field anyway. Also, see below regarding renaming this method. (Alan) Use systemStub == null as the condition instead of !initialized. I considered this at first but it got confusing really fast. Take a look: private final ActivationSystem systemStub; SystemRegistryImpl(..., systemStub) { ... this.systemStub = systemStub; notifyAll(); ... } private synchronized void awaitInitialized() { ... while (systemStub == null) { wait(); } ... } We rely on systemStub being initialized at object creation time (before construction!) to its default value of null. I think this is right. The constructor then initializes the blank final to non-null and notifies. Then, awaitInitialized seems straightforward until you see that the condition is waiting for the value of a final variable to change! JLS sec 17.5 [1] allows all sorts of optimizations for final fields, but they all are qualified with what is essentially a safe publication requirement on the reference: An object is considered to be completely initialized when its constructor finishes. A thread that can only see a reference to an object after that object has been completely initialized is guaranteed to see the correctly initialized values for that object's final fields.
[1] http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.5 Unfortunately this code *unsafely* publishes a reference to 'this' which is the root of this whole problem. Under these circumstances I'd prefer to be really conservative about the code here. I can't quite convince myself that a condition loop waiting for a final field to change value is safe. That's why I added a separate field. I can see removing the boolean and using systemStub == null as the condition making things simpler, since there are fewer variables to reason about, but only if systemStub is made non-final. Making it non-final prevents any unwanted optimizations resulting from it being final. Having it be final doesn't add much anyway. I'll also move all accesses to systemStub within synchronized blocks so there is no question about visibility. (As a result, awaitInitialized now gets turned into getSystemStub.) (Peter) Should the interrupt
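The double-checked waiting pattern Alan and Paul proposed, with the fast-path volatile read and the synchronized wait loop Stuart squinted at, can be written out as a self-contained sketch. The class and method names here are illustrative, not rmid's actual code:

```java
public class InitGate {
    // volatile so the unsynchronized fast-path read is safe.
    private volatile boolean initialized = false;

    /** Called by the thread that completes initialization. */
    public synchronized void markInitialized() {
        initialized = true;
        notifyAll();
    }

    /** Fast path: one volatile read; slow path: synchronized wait loop. */
    public void awaitInitialized() {
        if (!initialized) {
            synchronized (this) {
                try {
                    while (!initialized) {
                        wait();
                    }
                } catch (InterruptedException ie) {
                    // Restore the interrupt status rather than swallow it.
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}
```

Note the brace placement: the while loop must sit entirely inside the try block so a spurious wakeup re-checks the condition, and the catch re-asserts the interrupt, matching Stuart's quoted snippet once its formatting is restored.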
Re: RFR(s): 8023541 Race condition in rmid initialization
Hi Stuart I didn't make my comment clear. You set interrupted to true when the thread gets interrupted, but here it's still going to wait until systemStub is not null. Why do you still need to interrupt the current thread in this case? Thank you Tristan On Jan 30, 2014, at 11:24 AM, David Holmes david.hol...@oracle.com wrote: http://cr.openjdk.java.net/~smarks/reviews/8023541/webrev.1/
Re: RFR(s): 8023541 Race condition in rmid initialization
Hi Stuart Should variable initialized be volatile here? Otherwise looks good. Thank you Tristan On Jan 29, 2014, at 2:51 PM, Stuart Marks stuart.ma...@oracle.com wrote: Hi all, Please review this fix to a race condition in rmid initialization. Briefly, rmid subclasses the RMI registry implementation and provides special handling for its own stub. Unfortunately the registry is exported in the super() call, making remote calls possible before rmid's stub initialization is complete. The fix is to ensure that all remote calls wait for initialization before proceeding. Bug: https://bugs.openjdk.java.net/browse/JDK-8023541 Webrev: http://cr.openjdk.java.net/~smarks/reviews/8023541/webrev.0/ Thanks, s'marks
Re: RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Hi Stuart Thank you for your review and suggestion. Yes, since this failure mode is very hard to reproduce, I guess it makes sense to do some hacking. And I also noticed that ActivationLibrary.rmidRunning does try to look up the ActivationSystem but doesn't check if it's null. So I added the logic to make sure we look up a non-null ActivationSystem. Also I did some cleanup, if you don't mind: added a waitFor(long timeout, TimeUnit unit) to JavaVM, so that we can have better waitFor control. I would appreciate it if you could review the code again. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.01/ Thank you Tristan On 01/25/2014 10:20 AM, Stuart Marks wrote: On 1/23/14 10:34 PM, Tristan Yan wrote: Hi All Could you review the bug fix for JDK-8032050. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.00/ Description: This rarely-occurring failure happens because when RMID starts, it doesn't guarantee that sun.rmi.server.Activation.startActivation finishes. The fix adds an iterative getSystem with a 5-second timeout. Hi Tristan, Adding a timing/retry loop into this test isn't the correct approach for fixing this test. The timing loop isn't necessary because there is already a timing loop in RMID.start() in the RMI test library. (There's another timing loop in ActivationLibrary.rmidRunning() which should probably be removed.) So the intent of this library call is that it start rmid and wait for it to become ready. That logic doesn't need to be added to the test. In the bug report JDK-8032050 you had mentioned that the NullPointerException was suspicious. You're right! I took a look and it seemed like it was related to JDK-8023541, and I added a note to this effect to the bug report. The problem here is that rmid can come up and transiently return null instead of the stub of the activation system. That's what JDK-8023541 covers.
I think that rmid itself needs to be fixed, though modifying the timing loop in the RMI test library to wait for rmid to come up *and* for the lookup to return non-null is an easy way to fix the problem. (Or at least cover it up.) The next step in the analysis is to determine, given that ActivationLibrary.getSystem can sometimes return null, whether this has actually caused this test failure. This is pretty easy to determine; just hack in a line system = null in the right place and run the test. I've done this, and the test times out and the output log is pretty much identical to the one in the bug report. (I recommend you try this yourself.) So I think it's fairly safe to say that the problem in JDK-8023541 has caused the failure listed in JDK-8032050. I can see a couple of ways to proceed here. One way is just to close this out as a duplicate of JDK-8023541 since that bug caused this failure. Another is that this test could use some cleaning up. This bug certainly covers a failure, but the messages emitted are confusing and in some cases completely wrong. For example, the "rmid has shutdown" message at line 180 is incorrect, because in this case rmid is still running and the wait() call has timed out. Most of the code here can be replaced with calls to various bits of the RMI test library. There are a bunch of other things in this test that could be cleaned up as well. It's up to you how you'd like to proceed. s'marks
RFR: JDK-8032050: TEST_BUG: java/rmi/activation/Activatable/shutdownGracefully/ShutdownGracefully.java fails intermittently
Hi All Could you review the bug fix for JDK-8032050. http://cr.openjdk.java.net/~tyan/JDK-8032050/webrev.00/ Description: This rarely-occurring failure happens because when RMID starts, it doesn't guarantee that sun.rmi.server.Activation.startActivation finishes. The fix adds an iterative getSystem with a 5-second timeout. Thank you. Tristan
Move forward for JDK-8029891
Hi Mandy We have discussed bug https://bugs.openjdk.java.net/browse/JDK-8029891 in JBS, and we don't think this is a good time to fix it. I suggest we do two things for this bug, if it's okay with you. 1. We open a new JDK bug to track the future JDK change, and link JDK-8029891 to this new JDK bug. 2. We put java/lang/ClassLoader/deadlock/GetResource.java on the ProblemList. Please let me know if you agree with this. Thank you. Tristan
Re: Move forward for JDK-8029891
Thank you Mandy If you want to fix it soon, I am okay with using this bug as the one tracking the fix. BTW this is an intermittent failure. Regards Tristan On Jan 22, 2014, at 8:52 AM, Mandy Chung mandy.ch...@oracle.com wrote: Hi Tristan, On 1/21/2014 4:26 PM, Tristan Yan wrote: Hi Mandy We have discussed bug https://bugs.openjdk.java.net/browse/JDK-8029891 in JBS, and we don't think this is a good time to fix it. I suggest we do two things for this bug, if it's okay with you. 1. We open a new JDK bug to track the future JDK change, and link JDK-8029891 to this new JDK bug. Why do we need a new JDK bug? I think we should use JDK-8029891 and fix it in JDK 9. 2. We put java/lang/ClassLoader/deadlock/GetResource.java on the ProblemList. Is this intermittent? I believe it's not. Since this is a bug, it's more appropriate to add @ignore to the test as I think I won't fix this soon. Mandy Please let me know if you agree with this. Thank you. Tristan
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Okay. But I think we all agree we should at least bring this test back for further tracking. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.06/ Thank you. Tristan On 01/14/2014 10:39 AM, David Holmes wrote: On 13/01/2014 4:21 PM, Tristan Yan wrote: Hi All I added more trace output to track down the possible reason for this failure. Please help to review it again. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.05/ You seem to have dragged in some unrelated changes to ProblemList.txt Also I don't see much in the way of trace output. I see two potentially useful printlns (and one unnecessary one). Further all the changes from the static variables are still there but we already determined that they should have no effect on the running of the test, nor did they explain the failure mode. So other than reenabling this test to see if it actually still fails, I don't see the point of the functional changes. Sorry. David Thank you Tristan On 01/10/2014 07:20 AM, Tristan Yan wrote: Hi David I wasn't able to reproduce this failure either locally or in our same-binaries running (this is a continuous run with the same JDK binaries). So the intention of this code change is to bring this test back, add some debug info, and try to avoid possible issues in this test. I agree this code change won't solve the failure that happened. But this test was put on the ProblemList two years ago; the better move is to take it off the ProblemList and trace down the issue in our normal nightly. Thank you Tristan On 01/10/2014 06:35 AM, David Holmes wrote: On 9/01/2014 10:14 PM, Alan Bateman wrote: On 09/01/2014 11:27, David Holmes wrote: Okay I think I get it now. Both MonitorTest and WaitersTest use the Context class, so if both tests run in the same VM the second test will see the static total_turns_taken and turn in the state they were left from the first test - hence the second test will always fail.
The bug report suggests making the tests othervm to avoid the problem but instead you have changed from using static state to instance state so that there is no interference. I haven't been following this one closely but I thought that jtreg created a class loader for each test (irrespective of mode) so I wouldn't expect statics to be an issue. That aside DemoRun forks off its own JVM to run a given test anyway! So I don't understand how the proposed fixes could actually be addressing the hangs that are occurring. Even if the statics were being shared I don't see how that leads to the failure mode in the bug report. David -Alan.
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Thank you Alan. I have changed this bug's synopsis. Is it okay for you to push this? Tristan On 01/14/2014 09:48 PM, Alan Bateman wrote: On 14/01/2014 12:40, Tristan Yan wrote: Okay. But I think we all agree we should at least bring this test back for further tracking. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.06/ This makes sense so I think this should be pushed. If JDK-7027502 is used for this then the synopsis should be changed as it would otherwise not reflect the fact that this is not fixing the issue, rather just liberating the test with some additional output to help in the event that it fails in the future. -Alan
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Hi All I added more trace output to track down the possible reason for this failure. Please help to review it again. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.05/ Thank you Tristan On 01/10/2014 07:20 AM, Tristan Yan wrote: Hi David I wasn't able to reproduce this failure either locally or in our same-binaries running (this is a continuous run with the same JDK binaries). So the intention of this code change is to bring this test back, add some debug info, and try to avoid possible issues in this test. I agree this code change won't solve the failure that happened. But this test was put on the ProblemList two years ago; the better move is to take it off the ProblemList and trace down the issue in our normal nightly. Thank you Tristan On 01/10/2014 06:35 AM, David Holmes wrote: On 9/01/2014 10:14 PM, Alan Bateman wrote: On 09/01/2014 11:27, David Holmes wrote: Okay I think I get it now. Both MonitorTest and WaitersTest use the Context class, so if both tests run in the same VM the second test will see the static total_turns_taken and turn in the state they were left from the first test - hence the second test will always fail. The bug report suggests making the tests othervm to avoid the problem but instead you have changed from using static state to instance state so that there is no interference. I haven't been following this one closely but I thought that jtreg created a class loader for each test (irrespective of mode) so I wouldn't expect statics to be an issue. That aside DemoRun forks off its own JVM to run a given test anyway! So I don't understand how the proposed fixes could actually be addressing the hangs that are occurring. Even if the statics were being shared I don't see how that leads to the failure mode in the bug report. David -Alan.
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Can someone else give a second review on this? Thank you Tristan On 01/07/2014 07:29 PM, David Holmes wrote: On 7/01/2014 8:36 PM, Tristan Yan wrote: Hi David You're totally right. Sorry; I ask you to review it again. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.02/ Looks good now. Thanks, David Thank you very much. Tristan On 01/07/2014 05:18 PM, David Holmes wrote: On 7/01/2014 6:16 PM, Tristan Yan wrote: Thank you, David I fixed the copyright and changed the sleep back. The println was intended to be left in. This test failed with a timeout; the println could help us to detect the values of total_turns_taken and expected_turns_taken. Please review it again http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.01/ Comma after 2014 still missing in copyright. You need to read total_turns_taken.get() once and use that value in both the println and the == test, so that you print the value you actually tested. David Regards Tristan On 01/07/2014 03:10 PM, David Holmes wrote: Hi Tristan, On 7/01/2014 4:43 PM, Tristan Yan wrote: Hi All Please help to review the code change for JDK-7027502. http://cr.openjdk.java.net/~tyan/JDK-7027502/ Description: This test failed in JPRT testing but recently, in a run with the same binaries, it doesn't fail any more. The intention of this code change is to bring this test back to normal testing and make it more robust and informative. The change includes: 1. Remove this test from the ProblemList. 2. Change the static member total_turns_taken into a local variable. Please let me know your comments on this. 2 * Copyright (c) 2004,2014 Oracle and/or its affiliates. All rights reserved. Correct copyright format should have a space before 2014 and a comma after: * Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved. --- Was this println intended to be left in?
114 System.out.println("total_turns_taken = " + total_turns_taken + 115 "; expected_turns_taken = " + expected_turns_taken); 116 if ( total_turns_taken.get() == expected_turns_taken ) { You only want to read total_turns_taken once otherwise you may see misleading print outs. --- 119 /* Create some monitor contention by sleeping with lock */ 120 if ( default_contention_sleep > 0 ) { 121 System.out.println("Context sleeping, to create contention"); 122 try { 123 turn.wait((long) default_contention_sleep); 124 } catch (InterruptedException ignore) { } 125 } By changing the Thread.sleep to turn.wait you no longer introduce any contention as the wait() will release the monitor. David Thank you. Tristan
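David's "read it once" advice: capture the shared counter in a local so the value you print is the same value you compare. A minimal sketch of the corrected pattern, with hypothetical variable names modelled on the test:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReadOnceSketch {
    public static void main(String[] args) {
        // Stands in for the test's shared counter.
        AtomicInteger totalTurnsTaken = new AtomicInteger(42);
        int expectedTurnsTaken = 42;

        // Read the shared counter exactly once...
        int observed = totalTurnsTaken.get();

        // ...then use that one snapshot for both the message and the check,
        // so the printout can never disagree with the comparison.
        System.out.println("total_turns_taken = " + observed
                + "; expected_turns_taken = " + expectedTurnsTaken);
        if (observed == expectedTurnsTaken) {
            System.out.println("PASS");
        }
    }
}
```

Reading `totalTurnsTaken.get()` twice, once for the println and once for the `==`, would allow another thread to change the value between the two reads and produce a misleading log.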
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Thank you Paul I changed turn to a local variable as well. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.03/ I am not sure I understand your second suggestion here; summing up the thread_turns of each Context (this is a fixed value) doesn't equal total_turns_taken. Regards Tristan On 01/09/2014 06:28 PM, Paul Sandoz wrote: On Jan 9, 2014, at 10:52 AM, Tristan Yan tristan@oracle.com wrote: Can someone else give a second review on this? In a comment on the bug you state: "here total_turns_taken is a static variable, it could be affected by other tests" I don't quite know under what test circumstances that can happen, but if so is the following also an issue: 52 private final static TurnChecker turn = new TurnChecker(-1); ? FWIW an alternative to using an AtomicInteger would be for the main loop to sum up the thread_turns of each Context, since reads/writes are all performed in a synchronized block. Paul.
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Thanks David I removed those pointless yields; is it okay for you to sponsor this for me? http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.04/ Regards Tristan On 01/09/2014 07:34 PM, David Holmes wrote: On 9/01/2014 9:27 PM, David Holmes wrote: On 9/01/2014 9:07 PM, Tristan Yan wrote: Thank you Paul I changed turn to a local variable as well. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.03/ Okay I think I get it now. Both MonitorTest and WaitersTest use the Context class, so if both tests run in the same VM the second test will see the static total_turns_taken and turn in the state they were left from the first test - hence the second test will always fail. The bug report suggests making the tests othervm to avoid the problem but instead you have changed from using static state to instance state so that there is no interference. But you can't pass a reference to a simple int, for total_turns_taken, hence you turned it into the only mutable Integer type: AtomicInteger. I'm okay with this final form of the change. othervm would have been simpler :) These are ugly tests even with your changes. BTW: 107 Thread.yield(); 108 Thread.sleep(sleepTime); The yield() before the sleep is completely pointless. Ditto for: 126 Thread.yield(); 127 Thread.sleep((long)default_contention_sleep); David - David - I am not sure I understand your second suggestion here; summing up the thread_turns of each Context (this is a fixed value) doesn't equal total_turns_taken. Regards Tristan On 01/09/2014 06:28 PM, Paul Sandoz wrote: On Jan 9, 2014, at 10:52 AM, Tristan Yan tristan@oracle.com wrote: Can someone else give a second review on this?
In a comment on the bug you state: "here total_turns_taken is a static variable, it could be affected by other tests" I don't quite know under what test circumstances that can happen, but if so is the following also an issue: 52 private final static TurnChecker turn = new TurnChecker(-1); ? FWIW an alternative to using an AtomicInteger would be for the main loop to sum up the thread_turns of each Context, since reads/writes are all performed in a synchronized block. Paul.
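The two counting strategies being debated, a shared AtomicInteger versus Paul's suggestion of summing per-Context counters guarded by synchronized blocks, can be compared in a simplified model. This is a sketch only, not the hprof demo test; in this model each turn increments both counters, so the two totals coincide by construction:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TurnCountSketch {

    /** Instance (not static) state, so separate runs cannot interfere. */
    static class Context {
        private int threadTurns;                          // guarded by this
        synchronized void takeTurn() { threadTurns++; }
        synchronized int turns()     { return threadTurns; }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger totalTurns = new AtomicInteger();   // strategy 1
        List<Context> contexts = new ArrayList<>();
        List<Thread> threads = new ArrayList<>();

        for (int i = 0; i < 4; i++) {
            Context c = new Context();
            contexts.add(c);
            Thread t = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    c.takeTurn();                 // per-Context, synchronized
                    totalTurns.incrementAndGet(); // shared atomic counter
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();

        // Strategy 2: the main loop sums the per-Context counters; the
        // synchronized accessors guarantee visibility after join().
        int sum = contexts.stream().mapToInt(Context::turns).sum();
        System.out.println(totalTurns.get() + " == " + sum);
    }
}
```

Whether the two totals agree in the real test depends on whether every turn updates both counters, which is exactly the point Tristan and Paul are disagreeing about.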
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Hi David I wasn't able to reproduce this failure either locally or in our same-binaries running (this is a continuous run with the same JDK binaries). So the intention of this code change is to bring this test back, add some debug info, and try to avoid possible issues in this test. I agree this code change won't solve the failure that happened. But this test was put on the ProblemList two years ago; the better move is to take it off the ProblemList and trace down the issue in our normal nightly. Thank you Tristan On 01/10/2014 06:35 AM, David Holmes wrote: On 9/01/2014 10:14 PM, Alan Bateman wrote: On 09/01/2014 11:27, David Holmes wrote: Okay I think I get it now. Both MonitorTest and WaitersTest use the Context class, so if both tests run in the same VM the second test will see the static total_turns_taken and turn in the state they were left from the first test - hence the second test will always fail. The bug report suggests making the tests othervm to avoid the problem but instead you have changed from using static state to instance state so that there is no interference. I haven't been following this one closely but I thought that jtreg created a class loader for each test (irrespective of mode) so I wouldn't expect statics to be an issue. That aside DemoRun forks off its own JVM to run a given test anyway! So I don't understand how the proposed fixes could actually be addressing the hangs that are occurring. Even if the statics were being shared I don't see how that leads to the failure mode in the bug report. David -Alan.
Re: RFR for JDK-8030089: java/util/zip/ZipFile/FinalizeZipFile.java intermittently fails with fastdebug builds
Thank you Alan. I appreciate that you are sponsoring this. The reason I changed it to othervm is that I don't want System.gc to have any impact on other tests, assuming concurrent mode. Regards Tristan On 01/08/2014 05:58 PM, Alan Bateman wrote: On 08/01/2014 02:38, Tristan Yan wrote: Hi All Please help to review the code fix for bug JDK-8030089. http://cr.openjdk.java.net/~tyan/JDK-8030089/webrev.00/ Description: 1. Set the test to run in othervm mode. 2. Use an indefinite await on the CountDownLatch. Changing it to use await looks fine. I'm curious about the reason for changing the test to run in othervm mode as I assume this is not strictly necessary (right?). (In any case, I can sponsor this one as we need our tests to be stable when running with a fastdebug build). -Alan
Re: RFR for JDK-8030089: java/util/zip/ZipFile/FinalizeZipFile.java intermittently fails with fastdebug builds
Hi Alan Maybe my understanding is wrong. Are you saying that even in agentvm mode there will still be a different VM for every test? If the answer is yes, when should we use othervm mode? Thank you Tristan On 01/08/2014 06:45 PM, Alan Bateman wrote: On 08/01/2014 10:09, Tristan Yan wrote: Thank you Alan. I appreciate that you are sponsoring this. The reason I changed it to othervm is that I don't want System.gc to have any impact on other tests, assuming concurrent mode. With concurrency then an agent VM still only runs one test at a time, it's just that there is a pool of agent VMs running concurrently. So the System.gc doesn't impact other agent VMs. -Alan.
Re: RFR for JDK-8030089: java/util/zip/ZipFile/FinalizeZipFile.java intermittently fails with fastdebug builds
Thank you. This is a very detailed explanation. I have a new webrev with othervm removed. http://cr.openjdk.java.net/~tyan/JDK-8030089/webrev.01/ Regards Tristan On 01/08/2014 08:51 PM, Alan Bateman wrote: On 08/01/2014 11:01, Tristan Yan wrote: Hi Alan Maybe my understanding is wrong. Are you saying that even in agentvm mode there will still be a different VM for every test? If the answer is yes, when should we use othervm mode? The purpose of -agentvm is to speed up the test execution by eliding the VM startup that otherwise dominates the execution of many short running tests. Tests are still executed sequentially, it's just that there isn't a VM started up for each test. With concurrency the tests don't execute concurrently in the same VM; instead there is a pool of independent VMs that run concurrently. We still need othervm for some tests and for varied reasons. One reason is tests that can't restore things or clean up, another reason is tests that require special VM options. We moved to -agentvm a few years ago but we still periodically find issues where a test does something that causes a test that runs subsequently to have a problem. Anyway, for this test my point was that it probably doesn't need to run in othervm mode; just changing it to await indefinitely should be fine (using await with a timeout would require a long timeout when running with a fastdebug build). -Alan.
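The change Alan describes for the test, awaiting the latch indefinitely instead of with a tuned timeout, looks roughly like this. A generic sketch, not the FinalizeZipFile test itself; the worker's payload is a placeholder:

```java
import java.util.concurrent.CountDownLatch;

public class LatchSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            // ... do the work the test is waiting for (e.g. finalization) ...
            done.countDown();
        });
        worker.start();

        // await() with no timeout blocks until countDown(). There is no
        // timeout value to tune for slow (e.g. fastdebug) builds, and
        // jtreg's own overall test timeout still provides a safety net.
        done.await();
        System.out.println("done");
    }
}
```

The alternative, `done.await(timeout, unit)`, returns false on timeout and would need a generous timeout to be reliable on fastdebug builds, which is exactly the tuning problem the indefinite await avoids.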
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Thank you, David. I fixed the copyright and changed back to sleep. The println was intended to be left in: this test failed with a timeout, and the println helps us see the values of total_turns_taken and expected_turns_taken. Please review it again http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.01/ Regards Tristan On 01/07/2014 03:10 PM, David Holmes wrote: Hi Tristan, On 7/01/2014 4:43 PM, Tristan Yan wrote: Hi All, Please help to review the code change for JDK-7027502. http://cr.openjdk.java.net/~tyan/JDK-7027502/ Description: This test failed in JPRT, but when we recently re-ran it with the same binaries it no longer fails. The intention of this code change is to bring this test back to the normal test set and make it more robust and informative. The change includes: 1. Remove this test from ProblemList. 2. Change the static member total_turns_taken into a local variable. Please let me know your comments on this. 2 * Copyright (c) 2004,2014 Oracle and/or its affiliates. All rights reserved. The correct copyright format should have a space before 2014 and a comma after: * Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved. --- Was this println intended to be left in? 114 System.out.println("total_turns_taken = " + total_turns_taken + 115 "; expected_turns_taken = " + expected_turns_taken); 116 if ( total_turns_taken.get() == expected_turns_taken ) { You only want to read total_turns_taken once, otherwise you may see misleading printouts. --- 119 /* Create some monitor contention by sleeping with lock */ 120 if ( default_contention_sleep > 0 ) { 121 System.out.println("Context sleeping, to create contention"); 122 try { 123 turn.wait((long) default_contention_sleep); 124 } catch (InterruptedException ignore) { } 125 } By changing the Thread.sleep to turn.wait you no longer introduce any contention, as the wait() will release the monitor. David Thank you. Tristan
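David's two review points can be sketched in isolation: read the shared counter exactly once so the printed value is the tested value, and hold the monitor while sleeping (Thread.sleep) rather than waiting on it (turn.wait releases the lock, so it creates no contention). The names below are illustrative stand-ins, not the test's actual code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TurnCheck {
    static final AtomicLong totalTurnsTaken = new AtomicLong();
    static final Object turn = new Object();

    // Read the atomic exactly once; printing and comparing the same
    // snapshot avoids misleading output if another thread increments it
    // between the println and the == test.
    static boolean checkTurns(long expected) {
        long taken = totalTurnsTaken.get();
        System.out.println("total_turns_taken = " + taken
                + "; expected_turns_taken = " + expected);
        return taken == expected;
    }

    // Thread.sleep keeps the monitor held for the whole interval, so
    // other threads synchronizing on 'turn' really do contend; calling
    // turn.wait() here instead would release the monitor and create no
    // contention at all.
    static void contendedSleep(long millis) {
        synchronized (turn) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException ignore) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```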
Re: RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Hi David, You're totally right. Sorry to ask you to review it again. http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.02/ Thank you very much. Tristan On 01/07/2014 05:18 PM, David Holmes wrote: On 7/01/2014 6:16 PM, Tristan Yan wrote: Thank you, David. I fixed the copyright and changed back to sleep. The println was intended to be left in: this test failed with a timeout, and the println helps us see the values of total_turns_taken and expected_turns_taken. Please review it again http://cr.openjdk.java.net/~tyan/JDK-7027502/webrev.01/ Comma after 2014 still missing in copyright. You need to read total_turns_taken.get() once and use that value in both the println and the == test, so that you print the value you actually tested. David Regards Tristan On 01/07/2014 03:10 PM, David Holmes wrote: Hi Tristan, On 7/01/2014 4:43 PM, Tristan Yan wrote: Hi All, Please help to review the code change for JDK-7027502. http://cr.openjdk.java.net/~tyan/JDK-7027502/ Description: This test failed in JPRT, but when we recently re-ran it with the same binaries it no longer fails. The intention of this code change is to bring this test back to the normal test set and make it more robust and informative. The change includes: 1. Remove this test from ProblemList. 2. Change the static member total_turns_taken into a local variable. Please let me know your comments on this. 2 * Copyright (c) 2004,2014 Oracle and/or its affiliates. All rights reserved. The correct copyright format should have a space before 2014 and a comma after: * Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved. --- Was this println intended to be left in? 114 System.out.println("total_turns_taken = " + total_turns_taken + 115 "; expected_turns_taken = " + expected_turns_taken); 116 if ( total_turns_taken.get() == expected_turns_taken ) { You only want to read total_turns_taken once, otherwise you may see misleading printouts.
--- 119 /* Create some monitor contention by sleeping with lock */ 120 if ( default_contention_sleep > 0 ) { 121 System.out.println("Context sleeping, to create contention"); 122 try { 123 turn.wait((long) default_contention_sleep); 124 } catch (InterruptedException ignore) { } 125 } By changing the Thread.sleep to turn.wait you no longer introduce any contention, as the wait() will release the monitor. David Thank you. Tristan
RFR for JDK-7027502: Test failures in demo/jvmti/hprof testcases, need to be othervm
Hi All, Please help to review the code change for JDK-7027502. http://cr.openjdk.java.net/~tyan/JDK-7027502/ Description: This test failed in JPRT, but when we recently re-ran it with the same binaries it no longer fails. The intention of this code change is to bring this test back to the normal test set and make it more robust and informative. The change includes: 1. Remove this test from ProblemList. 2. Change the static member total_turns_taken into a local variable. Please let me know your comments on this. Thank you. Tristan
Re: RFR for JDK-8030284 TEST_BUG: intermittent StackOverflow in RMI bench/serial test
Hi Stuart, I did an experiment setting a small thread stack size with -Xss228k or -Xss512k. The surprising result is that jtreg reports the test as passed, although I can see the StackOverflowError in the log even when I set the thread stack size to 512k. So the problem is that the old shell script doesn't report the error even when there is a StackOverflowError. Thank you. Tristan On 12/21/2013 08:01 AM, Stuart Marks wrote: On 12/19/13 8:29 PM, David Holmes wrote: If you were always one frame from the end then it is not so surprising that a simple change pushes you past the limit :) Try running the shell test with additional recursive loads and see when it fails. David doesn't seem surprised, but I guess I still am. :-) Tristan, do you think you could do some investigation here regarding the shell-script-based test's stack consumption? Run the shell-based test with some different values for -Xss and see how low you have to set it before it generates a stack overflow. It's also kind of strange that in the two stack traces I've seen (I think I managed to capture only one in the bug report though) the StackOverflowError occurs on loading exactly the 50th class. Since we're observing intermittent behavior (happens sometimes but not others) the stack size is apparently variable. Since it's variable I'd expect to see it failing at different times, possibly the 49th or 48th recursive classload, not just the 50th. And in such circumstances, do we know what the default stack size is? Classloading consumes a reasonable chunk of stack, so if the variance elsewhere is quite small it is not that surprising that the test always fails on the 50th class. I would not expect run-to-run stack usage variance to be high unless there is some random component to the test. Hm. There should be no variance in stack usage coming from the test itself. I believe the test does the same thing every time.
The thing I'm concerned about is whether the Java-based test is doing something different from the shell-based test, because of the execution environment (jtreg or other). We may end up simply raising the stack limit anyway, but I still find it hard to believe that the shell-based test was consistently just a few frames shy of a stack overflow. The failure is intermittent; we've seen it twice in JPRT (our internal build-and-test system). Possible sources of the intermittency are the different machines on which the test executes. So environmental factors could be at play. How does the JVM determine the default stack size? Could different test runs on different machines be running with different stack sizes? Another source of variance is the JIT. I believe JIT-compiled code consumes stack differently from interpreted code. At least, I've seen differences in stack usage between -Xint and -Xcomp runs, and in the absence of these options (which means -Xmixed, I guess) the results sometimes vary unpredictably. I guess this might have to do with when the JIT compiler decides to kick in. This test does perform a bunch of iterations, so JIT compilation could be a factor. I don't know if you were able to reproduce this issue. If you were, it would be good to understand in more detail exactly what's going on. FWIW there was a recent change in 7u to bump up the number of stack shadow pages in hotspot, as suddenly StackOverflow tests were crashing instead of triggering StackOverflowError. So something started using more stack in a way that caused there not to be enough space to process a stack overflow properly. Finding the exact cause can be somewhat tedious. This seems like a different problem. We're seeing actual StackOverflowErrors, not crashes. Good to look out for this, though. s'marks Cheers, David s'marks
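One way to explore Stuart's questions about default stack sizes is a tiny probe (not the benchmark itself, just an illustration) that recurses until StackOverflowError and reports the depth reached:

```java
public class StackDepthProbe {
    static int depth;

    static void recurse() {
        depth++;
        recurse(); // unbounded recursion, terminated by StackOverflowError
    }

    public static void main(String[] args) {
        depth = 0;
        try {
            recurse();
        } catch (StackOverflowError expected) {
            // By the time we get here the stack has fully unwound, so the
            // println itself has plenty of room.
            System.out.println("overflowed after " + depth + " frames");
        }
    }
}
```

Running this under different -Xss values (say -Xss512k versus the platform default) shows the depth scaling with the stack size, and JIT compilation of recurse() shifts the number between runs, which matches the run-to-run variance discussed in this thread.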
Re: Demo for Parallel Core Collection API
Hi Paul and everyone, Sorry for getting back late. I took Paul's suggestion and have written two other demos that present the use of parallel computation. One uses Monte Carlo to calculate the value of PI; the other finds a big prime of a given length. Please review it. http://cr.openjdk.java.net/~tyan/sample/webrev.00/ There is another demo presenting the Mandelbrot set, designed by Alexander Kouznetsov, which is already in review; it's not part of this review request. Thank you very much Tristan On 10/15/2013 11:20 PM, Paul Sandoz wrote: On Oct 15, 2013, at 4:35 PM, Tristan Yan wrote: Hi Paul, you have comments suggesting that all streams should be sequential. There is an inconsistency in the use, and in some cases it is embedded in other stream usages. We do not really understand what exactly is meant; could you elaborate a little bit? Is it because we want to show people that we should use stream more than parallelStream? Going parallel is easy to do but not always the right thing to do. Going parallel almost always requires more work, with the expectation that the work will complete sooner than the work required to get the same result sequentially. There are a number of factors that affect whether parallel is faster than sequential. Two of those factors are N, the size of the data, and Q, the cost of processing an element in the pipeline. N * Q is a simple cost model: the larger that product, the better the chances of parallel speed-up. N is easy to know; Q is not so easy but can often be intuitively guessed. (Note that there are other factors such as the properties of the stream source and operations that Brian and I talked about in our J1 presentation.) Demo code that just makes everything (or most streams) parallel is sending out the wrong message. So I think the demo code should present two general things: 1) various stream functionality, as you have done; 2)
sequential for various cases where it is known that parallel is faster on a multi-core system. For 2) I strongly recommend measuring using jmh [1]. The data sets you have may or may not be amenable to parallel processing; it's worth investigating though. I have ideas for other parallel demos. One is creating probable primes (now that SecureRandom is replaced with ThreadLocalRandom); creating a probable prime that is a BigInteger is a relatively expensive operation, so Q should be high. Another more advanced demo is a Monte Carlo calculation of PI using SplittableRandom and a special Spliterator; in this case N should be largish. But there are other simpler demonstrations, like sum of squares etc., to get across that N should be large. Another demo could be the calculation of a Mandelbrot set, which is embarrassingly parallel over an area in the complex plane. So while you should try and fit some parallel vs. sequential execution into your existing demos, I do think it worth having a separate set of demos that get across the simple cost model of N * Q. So feel free to use some of those ideas previously mentioned; I find those ideas fun so perhaps others will too :-) Paul. [1] http://openjdk.java.net/projects/code-tools/jmh/ On Oct 15, 2013, at 4:37 PM, Tristan Yan wrote: Also there is one more question I missed. You suggested ParallelCore is not a very descriptive name, and suggested "streams". 1) yes, we agree this demo is not for parallel computation per se 2) but we do not have a clear demo for parallel computation 3) if we are to rename this, we need to develop another one; do you have a scenario for that?
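Paul's Monte Carlo idea can be sketched with a parallel stream. This is an illustrative version using ThreadLocalRandom rather than his suggested SplittableRandom-plus-Spliterator design, so it shows the N side of the N * Q model in its simplest form:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.LongStream;

public class MonteCarloPi {
    // Estimate PI by sampling points in the unit square and counting how
    // many land inside the quarter circle of radius 1.
    static double estimatePi(long samples) {
        long inside = LongStream.range(0, samples)
                .parallel() // large N, cheap Q: any speed-up comes from N
                .filter(i -> {
                    ThreadLocalRandom r = ThreadLocalRandom.current();
                    double x = r.nextDouble();
                    double y = r.nextDouble();
                    return x * x + y * y <= 1.0;
                })
                .count();
        return 4.0 * inside / samples;
    }

    public static void main(String[] args) {
        System.out.println(estimatePi(10_000_000));
    }
}
```

As Paul says, whether .parallel() actually wins here should be measured (e.g. with jmh), not assumed.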
Re: RFR for JDK-7168267: TEST_BUG: Cleanup of rmi regression tests (activation and others)
Thanks Stuart, I changed ReadTimeoutTest.java to only apply the CountDownLatch part. Please review. http://cr.openjdk.java.net/~tyan/JDK-7168267/webrev.02/ Thank you Tristan On 12/20/2013 10:47 AM, Stuart Marks wrote: Hi Tristan, Changes mostly look good. There is an ignored InterruptedException in both versions of UseCustomSocketFactory.java, but that was there already; it's not clear what should be done in this case anyway. This is something to keep in mind for a future cleanup. Hm, some duplicate code here as well, again something to think about for the future. There is a serious problem with the change to ReadTimeoutTest.java, however. The change removes the (old) line 72, which is TestIface stub = impl.export(); probably because there was an IDE warning that the variable stub is unused. This much is true, but it's essential for impl.export() to be called, because that exports the object, which creates a socket using the socket factory, which eventually results in the fac.whichPort() call below returning the port that was open. In the absence of the export() call, whichPort() returns zero, which causes the test to abort immediately. In addition, the refactoring to use try-with-resources changes the order of execution of certain code, and it changes the scope of things handled by the finally-block. One purpose of the finally-block is to unexport the remote object, so it makes sense to begin the try-block immediately following the export. The original code did this (well, only after a benign local variable declaration). The change moves the try-block a few lines down, which means there is a larger window of time within which the finally-block won't be executed. This isn't obviously a problem, but it's a change nonetheless. Also, the change alters the order of opening the client socket and the "connecting to listening port" message, so the message comes after the port is opened, instead of before.
Again, an apparently small change, but if there's an exception opening the port, the port number being opened won't be printed out. The main point of the changes to this file, however, is good, which is to replace the unsafe use of multi-threaded access to a boolean array (and polling of that value) with a CountDownLatch. So that part of the change should go in. The problem is the apparently innocuous code cleanups (use of try-with-resources, removal of an apparently unused local variable) which cause the test to break or otherwise change its behavior. I could go ahead and push this changeset, omitting the changes to ReadTimeoutTest.java. Or, you could update the changeset to revert all of the changes to ReadTimeoutTest.java except for those necessary to implement the use of CountDownLatch. Either way is fine with me. Which would you prefer? s'marks On 12/18/13 6:51 AM, Tristan Yan wrote: Hi Everyone, Please review the code fix for bug JDK-7168267 http://cr.openjdk.java.net/~tyan/JDK-7168267/webrev.01/ This is a cleanup for RMI tests, trying to use real timeouts to replace a fixed number of loop iterations. Thank you Tristan On 12/12/2013 05:33 AM, Stuart Marks wrote: On 12/10/13 6:10 PM, Tristan Yan wrote: Hi everyone, I am working on bug JDK-7168267 Correct link is https://bugs.openjdk.java.net/browse/JDK-7168267 Root Cause: - Per Stuart's comment, this is a clean-up bug. Suggested Fix: - Will use timeout to replace loop. We should probably look at specific cases for this. There are places where the test is waiting for some external service to become ready (e.g., rmiregistry). There's no notification for things like this, so wait-with-timeout cannot be used. Pretty much the only thing that can be done is to poll reasonably often until the service is ready, or until the timeout is exceeded.
- Also I am fixing two tests' performance: java/rmi/activation/Activatable/forceLogSnapshot - the method waitAllStarted uses sleep to poll for 50 restartedObject flags to become true; we can use a modern CountDownLatch to implement a blocking timed wait. java/rmi/activation/Activatable/checkAnnotations - We can subclass ByteArrayOutputStream to support notification when data is written. Also use two threads to wait for the output string and error string to become non-null. These sound reasonable. Go ahead and file sub-tasks for these and then choose one to work on first. (I think it will get too confusing if we try to work on them all simultaneously.) Either post a detailed description of what you intend to do, or if it's simple enough, just post a webrev. s'marks Please let me know if you have any comments or suggestions. Thank you Tristan On 12/05/2013 09:02 AM, Stuart Marks wrote: On 12/3/13 11:05 PM, Tristan Yan wrote: I am working on https://bugs.openjdk.java.net/browse/JDK-7168267. This bug is asking for performance improvements for the RMI tests. Because this would involve different RMI tests, I’d like to use
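The CountDownLatch replacement for the sleep-and-poll pattern discussed above can be sketched as follows; the names are hypothetical stand-ins for the test's identifiers, not its actual code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class RestartBarrier {
    private final CountDownLatch started;

    RestartBarrier(int objectCount) {
        // one count per object expected to restart (50 in the test)
        started = new CountDownLatch(objectCount);
    }

    // called once from each restarted object's callback, replacing the
    // boolean-array entry the old code polled in a sleep loop
    void objectRestarted() {
        started.countDown();
    }

    // blocks until every object has reported in or the timeout elapses;
    // returns true only if all objects started in time
    boolean waitAllStarted(long timeout, TimeUnit unit) {
        try {
            return started.await(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```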
Re: RFR for JDK-7168267: TEST_BUG: Cleanup of rmi regression tests (activation and others)
Hi Everyone, Please review the code fix for bug JDK-7168267 http://cr.openjdk.java.net/~tyan/JDK-7168267/webrev.01/ This is a cleanup for RMI tests, trying to use real timeouts to replace a fixed number of loop iterations. Thank you Tristan On 12/12/2013 05:33 AM, Stuart Marks wrote: On 12/10/13 6:10 PM, Tristan Yan wrote: Hi everyone, I am working on bug JDK-7168267 Correct link is https://bugs.openjdk.java.net/browse/JDK-7168267 Root Cause: - Per Stuart's comment, this is a clean-up bug. Suggested Fix: - Will use timeout to replace loop. We should probably look at specific cases for this. There are places where the test is waiting for some external service to become ready (e.g., rmiregistry). There's no notification for things like this, so wait-with-timeout cannot be used. Pretty much the only thing that can be done is to poll reasonably often until the service is ready, or until the timeout is exceeded. - Also I am fixing two tests' performance: java/rmi/activation/Activatable/forceLogSnapshot - the method waitAllStarted uses sleep to poll for 50 restartedObject flags to become true; we can use a modern CountDownLatch to implement a blocking timed wait. java/rmi/activation/Activatable/checkAnnotations - We can subclass ByteArrayOutputStream to support notification when data is written. Also use two threads to wait for the output string and error string to become non-null. These sound reasonable. Go ahead and file sub-tasks for these and then choose one to work on first. (I think it will get too confusing if we try to work on them all simultaneously.) Either post a detailed description of what you intend to do, or if it's simple enough, just post a webrev. s'marks Please let me know if you have any comments or suggestions. Thank you Tristan On 12/05/2013 09:02 AM, Stuart Marks wrote: On 12/3/13 11:05 PM, Tristan Yan wrote: I am working on https://bugs.openjdk.java.net/browse/JDK-7168267.
This bug is asking for performance improvements for the RMI tests. Because this would involve different RMI tests, I'd like to use this CR as an umbrella bug and create sub-CRs for the different tests. Then I can make progress on the sub-CRs. Please let me know your opinion on this. Actually JDK-7168267 is more about various test cleanups, and JDK-8005436 is more about performance. Both bugs, though, make general statements about the RMI tests and don't have much information about specific actions that need to be taken. I've added some notes to JDK-7168267 about some cleanups that could be done. If there are specific actions for either of these bugs, then yes, creating Sub-Tasks of these bugs and fixing them individually is the right thing to do. s'marks
RFR for JDK-8030057: speed up forceLogSnapshot and checkAnnotations
Hi Everyone, Please help to review the code change for bug JDK-8030057. http://cr.openjdk.java.net/~tyan/JDK-8030057/webrev.00/ Description: Performance improvement for two RMI tests. java/rmi/activation/Activatable/forceLogSnapshot: the method waitAllStarted uses recursive sleep to poll for 50 restartedObject flags to become true; we can use a modern CountDownLatch to implement a blocking timed wait. I also suggest shortening the waiting time to 5 seconds. java/rmi/activation/Activatable/checkAnnotations: we can subclass ByteArrayOutputStream to support notification when data is written, and use two threads to wait for the output string and error string to become non-null. Thank you Tristan
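The notifying ByteArrayOutputStream idea could look like the sketch below (an assumed shape, not the actual checkAnnotations code): the subclass wakes waiters whenever bytes arrive, so a reader thread can block until output appears instead of polling:

```java
import java.io.ByteArrayOutputStream;

public class NotifyingByteArrayOutputStream extends ByteArrayOutputStream {

    @Override
    public synchronized void write(byte[] b, int off, int len) {
        super.write(b, off, len);
        notifyAll(); // wake any thread blocked in awaitContent
    }

    @Override
    public synchronized void write(int b) {
        super.write(b);
        notifyAll();
    }

    // Block until at least minBytes have been written or the timeout
    // elapses, then return whatever content is present.
    public synchronized String awaitContent(int minBytes, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        try {
            while (size() < minBytes) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    break;
                }
                wait(remaining);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return toString();
    }
}
```

ByteArrayOutputStream's write methods are already synchronized on the stream itself, so the overrides piggyback on that monitor for the wait/notify handshake.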
RFR for JDK-8030284 TEST_BUG: intermittent StackOverflow in RMI bench/serial test
Hi Everyone, Please help to review the fix for JDK-8030284. http://cr.openjdk.java.net/~tyan/JDK-8030284/webrev.00/ This is a one-line fix that adds -Xss to prevent StackOverflowError. Thank you Tristan
RFR for JDK-7168267: TEST_BUG: Cleanup of rmi regression tests (activation and others)
Hi everyone, I am working on bug JDK-7168267 https://bugs.openjdk.java.net/browse/JDK-6963118 . Root Cause: - Per Stuart's comment, this is a clean-up bug. Suggested Fix: - Will use timeout to replace loop. - Also I am fixing two tests' performance: java/rmi/activation/Activatable/forceLogSnapshot - the method waitAllStarted uses sleep to poll for 50 restartedObject flags to become true; we can use a modern CountDownLatch to implement a blocking timed wait. java/rmi/activation/Activatable/checkAnnotations - We can subclass ByteArrayOutputStream to support notification when data is written. Also use two threads to wait for the output string and error string to become non-null. Please let me know if you have any comments or suggestions. Thank you Tristan On 12/05/2013 09:02 AM, Stuart Marks wrote: On 12/3/13 11:05 PM, Tristan Yan wrote: I am working on https://bugs.openjdk.java.net/browse/JDK-7168267. This bug is asking for performance improvements for the RMI tests. Because this would involve different RMI tests, I'd like to use this CR as an umbrella bug and create sub-CRs for the different tests. Then I can make progress on the sub-CRs. Please let me know your opinion on this. Actually JDK-7168267 is more about various test cleanups, and JDK-8005436 is more about performance. Both bugs, though, make general statements about the RMI tests and don't have much information about specific actions that need to be taken. I've added some notes to JDK-7168267 about some cleanups that could be done. If there are specific actions for either of these bugs, then yes, creating Sub-Tasks of these bugs and fixing them individually is the right thing to do. s'marks
Initial with JDK-7168267
Hi Stuart, I am working on https://bugs.openjdk.java.net/browse/JDK-7168267. This bug is asking for performance improvements for the RMI tests. Because this would involve different RMI tests, I'd like to use this CR as an umbrella bug and create sub-CRs for the different tests. Then I can make progress on the sub-CRs. Please let me know your opinion on this. Thank you. Tristan Yan (Haibo Yan)
Re: Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Thank you Stuart, next time I will keep the last version. I appreciate that you can sponsor this for me. Thank you very much. Tristan. -----Original Message----- From: Stuart Marks Sent: Tuesday, December 03, 2013 9:11 AM To: Tristan Yan Cc: core-libs-dev@openjdk.java.net Subject: Re: Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port Hi Tristan, Sorry (again^2) for the delay. Holiday weekend here in the U.S. The latest webrev looks fine. Did you need a sponsor for this? I can push it for you. I guess we also need an official Reviewer before I can push it. (One additional point for the future. If there is more than one round of review, it's useful to post the new webrev alongside the old one, instead of overwriting it, so that old links are still valid. It's not likely to happen in this case, but if someone were to read the email archives and follow the link to an earlier webrev, they'd have no idea what I was talking about.) s'marks On 11/25/13 9:12 PM, Tristan Yan wrote: Hi Stuart, Thanks for your code review. I updated the webrev according to your suggestions. Could you review it again? http://cr.openjdk.java.net/~ewang/tristan/JDK-7190106/webrev.00/ Tristan On 11/26/2013 10:36 AM, Stuart Marks wrote: Hi Tristan, Finally getting back to this. Again, sorry for the delay. The changes look much better now. I've listed a bunch of items below, but they're all comparatively minor, even nitpicky. But there are a few things that should be cleaned up before we integrate this. Items listed below. ** bench/serial/Main.java - The description should simply be "The Serialization benchmark test." This test has nothing to do with RMI, even though it's under the java/rmi hierarchy in the filesystem. - parseArgs() uses strings-in-switch! Good, but put "break" on its own line instead of following the close-brace of the catch clause. (rmi/Main.java doesn't have this issue.) ** bench/rmi/Main.java - Imports of java.util.logging.Level and Logger are unused?
- Missing "-server" line from usage(). - Interesting use of enum. Note by convention an enum is like a class and names are in mixed case, thus use OutputFormat instead of OUTPUT_FORMAT. Also note that the enum doesn't buy much, at least in terms of lines of code, since the enum declaration and enum instance overrides add about as much code as the case statement that got removed from setupReporter(). It does buy a bit of type-safety, though, so might as well leave it in. - Enum instance HTML should be on a new line, like XML. - Reflection code can be chained instead of declaring several locals. This is mainly a matter of taste, but to my eye it's cleaner. The main advantage is avoiding the need to come up with names for intermediate locals. For example: port = (int) Class.forName("TestLibrary") .getMethod("getUnusedRandomPort") .invoke(null); - Catch clause at 389 can be of ReflectiveOperationException. This covers everything except IllegalArgumentException, which is unchecked, so you don't need to catch it. (Not sure why Method.invoke is declared to throw IllegalArgumentException, since generally only checked exceptions are declared in the throws clause.) - Line 416, no need to mention RMISecurityManager in a comment, just make the change to use SecurityManager. - It's kind of surprising that TEST_SRC_PATH appends the file separator to the test.src property. At line 241 test.src has to be fetched again to use it without the file separator. - Line 234, instead of the java.home property, use the test.jdk property. This will use the JDK under test instead of the JDK that's running jtreg. In practice it's unclear whether this makes any actual difference today, but it's good to try to keep this separation. Also, use file separators here instead of appending /bin/java. (Hmmm. I observe that the RMI testlibrary invokes JVM subprocesses using java.home.) Thanks, s'marks On 11/20/13 1:49 PM, Stuart Marks wrote: Hi, sorry about the delay, I'm still backlogged from traveling.
I'll get to this soon. s'marks On 11/19/13 6:27 PM, Tristan Yan wrote: Hi Stuart, Did you get a chance to review it again? Let me know if you have any new comments or suggestions. Thanks Tristan -----Original Message----- From: Tristan Yan Sent: Thursday, November 14, 2013 11:09 PM To: Stuart Marks Cc: core-libs-dev@openjdk.java.net Subject: Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port Thank you Stuart. It took me a little time to correct the code; sorry to be late. This is the new webrev for the code change. Please help to review it again. Description: 1. Convert shell script test to Java program test. 2. Use a random server port by reusing Darryl Mocek's work (TestLibrary.getUnusedRandomPort) to replace the fixed server port. 3. Because TestLibrary doesn't have
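The chained-reflection style Stuart suggests, together with the ReflectiveOperationException catch, can be sketched like this. The TestLibrary class below is a stand-in written for this example (the real helper lives in the RMI test tree), so only the chaining pattern itself is the point:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Stand-in for the RMI test library's unnamed-package helper class.
class TestLibrary {
    public static int getUnusedRandomPort() {
        // grab an OS-assigned free port and release it immediately
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

public class ReflectivePortLookup {
    static int getUnusedRandomPort() {
        try {
            // chained reflection: no intermediate Class/Method locals needed
            return (int) Class.forName("TestLibrary")
                    .getMethod("getUnusedRandomPort")
                    .invoke(null);
        } catch (ReflectiveOperationException e) {
            // covers ClassNotFoundException, NoSuchMethodException,
            // IllegalAccessException, and InvocationTargetException
            throw new RuntimeException("reflective port lookup failed", e);
        }
    }
}
```

Reflection is only needed here because the helper lives in the unnamed package; once TestLibrary moves to a named package, a direct call replaces all of this.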
Re: RFR for JDK-6933803 Bring back test java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java
Hi Alan, Could you sponsor this small change for me if you're OK with it? Thank you very much. Tristan On 11/20/2013 12:45 PM, Tristan Yan wrote: Hi Everyone, We have a test, java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java, that was put into the ProblemList because of bug JDK-6933803; this test has been fixed in http://hg.openjdk.java.net/jdk8/tl/jdk/diff/cb3ecb5e4ce5/test/java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java. We have run this test in a 1000-iteration loop to make sure there is no issue in it anymore, and we don't see any failures. I think it's a good time to bring this test back from the ProblemList. http://cr.openjdk.java.net/~ewang/tristan/JDK-6933803/webrev.00/test/ProblemList.txt.sdiff.html Please let me know if you have comments on this. Thank you. Tristan Yan (Haibo Yan)
Re: RFR for JDK-6933803 Bring back test java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java
Thank you, Chris. Tristan On 11/26/2013 11:16 PM, Chris Hegarty wrote: Tristan, The removal of this test from ProblemList.txt looks like the right thing to do. I can push this for you. -Chris. On 26 Nov 2013, at 12:47, Tristan Yan wrote: Hi Alan, Could you sponsor this small change for me if you're OK with it? Thank you very much. Tristan On 11/20/2013 12:45 PM, Tristan Yan wrote: Hi Everyone, We have a test, java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java, that was put into the ProblemList because of bug JDK-6933803; this test has been fixed in http://hg.openjdk.java.net/jdk8/tl/jdk/diff/cb3ecb5e4ce5/test/java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java. We have run this test in a 1000-iteration loop to make sure there is no issue in it anymore, and we don't see any failures. I think it's a good time to bring this test back from the ProblemList. http://cr.openjdk.java.net/~ewang/tristan/JDK-6933803/webrev.00/test/ProblemList.txt.sdiff.html Please let me know if you have comments on this. Thank you. Tristan Yan (Haibo Yan)
Re: 答复: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Hi Stuart, Thanks for your code review. I updated the webrev according to your suggestions. Could you review it again? http://cr.openjdk.java.net/~ewang/tristan/JDK-7190106/webrev.00/ Tristan On 11/26/2013 10:36 AM, Stuart Marks wrote: Hi Tristan, Finally getting back to this. Again, sorry for the delay. The changes look much better now. I've listed a bunch of items below, but they're all comparatively minor, even nitpicky. But there are a few things that should be cleaned up before we integrate this. Items listed below. ** bench/serial/Main.java - The description should simply be "The Serialization benchmark test." This test has nothing to do with RMI, even though it's under the java/rmi hierarchy in the filesystem. - parseArgs() uses strings-in-switch! Good, but put "break" on its own line instead of following the close-brace of the catch clause. (rmi/Main.java doesn't have this issue.) ** bench/rmi/Main.java - Imports of java.util.logging.Level and Logger are unused? - Missing "-server" line from usage(). - Interesting use of enum. Note by convention an enum is like a class and names are in mixed case, thus use OutputFormat instead of OUTPUT_FORMAT. Also note that the enum doesn't buy much, at least in terms of lines of code, since the enum declaration and enum instance overrides add about as much code as the case statement that got removed from setupReporter(). It does buy a bit of type-safety, though, so might as well leave it in. - Enum instance HTML should be on a new line, like XML. - Reflection code can be chained instead of declaring several locals. This is mainly a matter of taste, but to my eye it's cleaner. The main advantage is avoiding the need to come up with names for intermediate locals. For example: port = (int) Class.forName("TestLibrary") .getMethod("getUnusedRandomPort") .invoke(null); - Catch clause at 389 can be of ReflectiveOperationException. This covers everything except IllegalArgumentException, which is unchecked, so you don't need to catch it.
(Not sure why Method.invoke is declared to throw IllegalArgumentException, since generally only checked exceptions are declared in the throws clause.)

- Line 416: no need to mention RMISecurityManager in a comment; just make the change to use SecurityManager.

- It's kind of surprising that TEST_SRC_PATH appends the file separator to the test.src property. At line 241, test.src has to be fetched again to use it without the file separator.

- Line 234: instead of the java.home property, use the test.jdk property. This will use the JDK under test instead of the JDK that's running jtreg. In practice it's unclear whether this makes any actual difference today, but it's good to try to keep this separation. Also, use file separators here instead of appending "/bin/java". (Hmmm. I observe that the RMI testlibrary invokes JVM subprocesses using java.home.)

Thanks, s'marks

On 11/20/13 1:49 PM, Stuart Marks wrote:

Hi, sorry about the delay, I'm still backlogged from traveling. I'll get to this soon. s'marks

On 11/19/13 6:27 PM, Tristan Yan wrote:

Hi Stuart, did you get a chance to review it again? Let me know if you have any new comments or suggestions. Thanks, Tristan

-----Original Message-----
From: Tristan Yan
Sent: Thursday, November 14, 2013 11:09 PM
To: Stuart Marks
Cc: core-libs-dev@openjdk.java.net
Subject: Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port

Thank you, Stuart. It took me a little time to correct the code; sorry for being late. This is the new webrev for the code change. Please help review it again.

Description:
1. Convert the shell script test to a Java program test.
2. Use a random server port, reusing Darryl Mocek's work (TestLibrary.getUnusedRandomPort), to replace the fixed server port.
3. Because TestLibrary doesn't have a package, reflection is used to call TestLibrary.getUnusedRandomPort. This will change when TestLibrary moves to a named package.
4. Use the Java Process class to start the client process. Client and server share IO.
5.
Also convert the other shell script test, runSerialBench.sh, to a Java program test.
6. ProblemList has been changed to bring back the java/rmi/reliability/benchmark/runRmiBench.sh test.

http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/

Thank you so much, Tristan

-----Original Message-----
From: Stuart Marks
Sent: Tuesday, November 12, 2013 11:38 PM
To: Tristan Yan
Cc: core-libs-dev@openjdk.java.net; Alexandre (Shura) Iline
Subject: Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port

Unfortunately we can't use jdk.testlibrary.Utils.getFreePort() for the RMI tests, since RMI's TestLibrary.getUnusedRandomPort() respects a reserved port range that's used by some RMI tests that have to use fixed ports. s'marks

On 11/11/13 2:39 PM, Tristan Yan wrote:

Hi Stuart, also there is one more solution, which is there is one
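Stuart's chained-reflection suggestion can be sketched as a self-contained example. The nested `TestLibrary` class below is a hypothetical stand-in (the real test resolves the unnamed-package `TestLibrary` on the jtreg classpath), but the `Class.forName(...).getMethod(...).invoke(null)` chain and the single `ReflectiveOperationException` catch are the pattern under review:

```java
public class ReflectionSketch {
    // Hypothetical stand-in; the real test names "TestLibrary" and
    // "getUnusedRandomPort" on the test classpath.
    public static class TestLibrary {
        public static int getUnusedRandomPort() { return 54321; }
    }

    public static void main(String[] args) {
        int port;
        try {
            // Chained reflection: no intermediate locals to name.
            port = (int) Class.forName("ReflectionSketch$TestLibrary")
                    .getMethod("getUnusedRandomPort")
                    .invoke(null);
        } catch (ReflectiveOperationException e) {
            // Covers ClassNotFoundException, NoSuchMethodException,
            // IllegalAccessException, and InvocationTargetException;
            // IllegalArgumentException is unchecked and needs no catch.
            throw new RuntimeException("reflective call failed", e);
        }
        System.out.println("port = " + port);
    }
}
```

This mirrors the point in the review: `ReflectiveOperationException` is the common supertype of all the checked exceptions the chain can throw, so one catch clause suffices.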
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Hi Stuart, did you get a chance to review it again? Let me know if you have any new comments or suggestions. Thanks, Tristan

On 11/11/13 2:39 PM, Tristan Yan wrote:

Hi Stuart, also there is one more solution: there is a jdk.testlibrary.Utils.getFreePort() method under test/lib. It's the same function as TestLibrary.getUnusedRandomPort(), and it has a named package. Do you mind if I use that one, since the two functions provide the same functionality?
Maybe we should think about merging them at the same time. Thank you, Tristan

On 11/10/2013 11:19 AM, Tristan Yan wrote:

Hi Stuart, I tried your suggestion, but unfortunately all the benchmarks have dependencies on the Main class because they need to get the stub from the server. I suggest we move the benchmark tests to the unnamed package, unless we do want to put TestLibrary into a named package right now. Please let me know if you have objections to this. Thank you, Tristan

On 11/09/2013 02:28 AM, Stuart Marks wrote:

Hi Tristan, yes, it's kind of a problem that the RMI TestLibrary is in the unnamed package. Classes in a named package cannot import classes from the unnamed package. We've run into problems with this before. Eventually, we should move TestLibrary to a named package. I think it's possible to work around this without too much difficulty. Note that classes in the unnamed package can import classes from named packages. So perhaps you can put the RmiBench main class in the unnamed package so it has access to TestLibrary, and have the benchmarks themselves in the bench.rmi package. The config file already references the benchmarks by fully qualified class name (e.g., bench.rmi.NullCalls), so with a bit of tinkering you ought to be able to get this to work. s'marks

On 11/8/13 3:00 AM, Tristan Yan wrote:

Thank you, Stuart. There is one review point I want to ask your opinion on. The reason I moved from test/java/rmi/reliability/benchmark/bench/rmi to test/java/rmi/reliability/benchmark is that Main.java needs access to the TestLibrary class to support random ports. TestLibrary is an unpackaged class, and I couldn't find a way to let a class in a named package access a class in the unnamed package. Do you have a suggestion for that? Thank you so much. Tristan

On 11/06/2013 09:50 AM, Stuart Marks wrote:

On 11/1/13 9:18 AM, Tristan Yan wrote:

Hi Everyone, http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/

Description:
1. Convert the shell script test to a Java program test.
2.
Use a random server port, reusing Darryl Mocek's work, to replace the fixed server port.
3. Use the Java Process class to start the client process.
4. Also convert the other shell script test, runSerialBench.sh, to a Java program test.

Hi Tristan, several comments on this webrev.

** The original arrangement within the test/java/rmi/reliability/benchmark directory had the main benchmark files (scripts) at the top, some benchmark framework files in the bench subdirectory, and the actual RMI and serialization benchmarks in the bench/rmi and bench/serial subdirectories. The webrev moves all the RMI benchmarks to the top benchmark directory but leaves the serial benchmarks in bench/serial. The RMI benchmarks are now all cluttering the top directory, while the main serial benchmark test is buried in the bench/serial directory. The nice organization that was there before is now
RFR for JDK-6933803 Bring back test java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java
Hi Everyone,

The test java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java was put on the ProblemList because of bug JDK-6933803. The test has since been fixed in http://hg.openjdk.java.net/jdk8/tl/jdk/diff/cb3ecb5e4ce5/test/java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java. We have run the test in a loop 1000 times to make sure there is no remaining issue, and we don't see any failures. I think it's a good time to bring this test back from the ProblemList.

http://cr.openjdk.java.net/~ewang/tristan/JDK-6933803/webrev.00/test/ProblemList.txt.sdiff.html

Please let me know if you have comments on this. Thank you.

Tristan Yan (Haibo Yan)
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Thank you, Stuart. It took me a little time to correct the code; sorry for being late. This is the new webrev for the code change. Please help review it again.

Description:
1. Convert the shell script test to a Java program test.
2. Use a random server port, reusing Darryl Mocek's work (TestLibrary.getUnusedRandomPort), to replace the fixed server port.
3. Because TestLibrary doesn't have a package, reflection is used to call TestLibrary.getUnusedRandomPort. This will change when TestLibrary moves to a named package.
4. Use the Java Process class to start the client process. Client and server share IO.
5. Also convert the other shell script test, runSerialBench.sh, to a Java program test.
6. ProblemList has been changed to bring back the java/rmi/reliability/benchmark/runRmiBench.sh test.

http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/

Thank you so much, Tristan

On 11/06/2013 09:50 AM, Stuart Marks wrote:

Hi Tristan, several comments on this webrev.
** The original arrangement within the test/java/rmi/reliability/benchmark directory had the main benchmark files (scripts) at the top, some benchmark framework files in the bench subdirectory, and the actual RMI and serialization benchmarks in the bench/rmi and bench/serial subdirectories. The webrev moves all the RMI benchmarks to the top benchmark directory but leaves the serial benchmarks in bench/serial. The RMI benchmarks are now all cluttering the top directory, while the main serial benchmark test is buried in the bench/serial directory. The nice organization that was there before is now spoiled. Is this rearrangement necessary in order to convert the scripts to Java? I would prefer the original arrangement be left in place.

** The RMI benchmark Main.java file has a @run tag of the form

    @run main/othervm/policy=policy.all/timeout=1800 -server Main -c config

There is a subtle but serious problem here
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Hi Stuart, also there is one more solution: there is a jdk.testlibrary.Utils.getFreePort() method under test/lib. It's the same function as TestLibrary.getUnusedRandomPort(), and it has a named package. Do you mind if I use that one, since the two functions provide the same functionality? Maybe we should think about merging them at the same time. Thank you, Tristan

On 11/8/13 3:00 AM, Tristan Yan wrote:

Thank you, Stuart. There is one review point I want to ask your opinion on. The reason I moved from test/java/rmi/reliability/benchmark/bench/rmi to test/java/rmi/reliability/benchmark is that Main.java needs access to the TestLibrary class to support random ports. TestLibrary is an unpackaged class, and I couldn't find a way to let a class in a named package access a class in the unnamed package.
Do you have a suggestion for that? Thank you so much. Tristan

On 11/06/2013 09:50 AM, Stuart Marks wrote:

Hi Tristan, several comments on this webrev.

** The RMI benchmark Main.java file has a @run tag of the form

    @run main/othervm/policy=policy.all/timeout=1800 -server Main -c config

There is a subtle but serious problem here: the -server option is passed to the JVM and not as an argument to the main() method. The main() method gets neither a -server nor a -client argument, so its default run mode, as defined by the benchmark itself, is SAMEVM. This runs the client and server in the same JVM, which is different from the shell script, which ran separate client and server JVMs.

** The logic to process the -server argument still expects to take a port, even though the port is assigned automatically.
So the obvious fix to the above,

    @run main/othervm/policy=policy.all/timeout=1800 Main -server -c config

doesn't work, since a port is missing. The logic that increments the argument index to collect the port argument should be removed. Also, the -server line should be restored to the usage message, but without the port argument.

** After this is done, the client's command line is constructed improperly. The command line ends up looking something like this:

    java client -cp classpath Main client localhost:58583 -c config

The word client appears twice, but what's really required is -client to appear as an argument after Main.

** The client is run using ProcessBuilder, which by default sends stdout and stderr to pipes to be read by the parent. But the parent never reads them, so any messages from the client are never seen. The client is the one that emits the benchmark report, so its output needs to be seen. It might be sufficient
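Stuart's last point, the child's output vanishing into unread pipes, is the default `ProcessBuilder` behavior; `inheritIO()` is the one-line remedy he alludes to. A minimal sketch (launching `java -version` as a placeholder for the real benchmark client's command line):

```java
import java.io.IOException;

public class InheritIoSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        String java = System.getProperty("java.home") + "/bin/java";
        // By default, ProcessBuilder pipes the child's stdout/stderr back to
        // the parent; if nobody drains those pipes, the output is lost (and
        // a chatty child can even block on a full pipe). inheritIO() makes
        // the child write straight to the parent's stdout/stderr, so the
        // benchmark report from the client JVM is actually visible -- at the
        // cost of possibly interleaving with the server's output.
        ProcessBuilder pb = new ProcessBuilder(java, "-version").inheritIO();
        int exit = pb.start().waitFor();
        System.out.println("client exited with " + exit);
    }
}
```

The alternative is to drain the pipes on separate threads, but for a test whose child output is purely a human-readable report, `inheritIO()` is the simpler choice.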
Re: Demo for Parallel Core Collection API
Also, there is one more question I missed. You suggested "ParallelCore is not a very descriptive name. Suggest streams."
1) Yes, we agree this demo is not for parallel computation per se.
2) But we do not have a clear demo for parallel computation.
3) If we are to rename this, we need to develop another one. Do you have a scenario for that?
Thank you

-----Original Message-----
From: Paul Sandoz
Sent: Monday, October 14, 2013 11:28 PM
To: Tristan Yan
Cc: core-libs-dev@openjdk.java.net; lambda-...@openjdk.java.net; Taras Ledkov; Andrey Nazarov; Aleksandre Iline
Subject: Re: Demo for Parallel Core Collection API

Hi, some high-level points first:

- Try to use static import where possible.
- Suggest that all streams are sequential. There is an inconsistency in the use, and in some cases it is embedded in other stream usages.
- ParallelCore is not a very descriptive name. Suggest streams.
- Suggest moving the data classes and XML parsing code to a separate package.
- Unfortunately Supplier is overloaded with the functional interface in j.u.function. Not sure much could be done about that.

More details below. I am not really commenting on the specific use-case, just the usages of the API itself; for brevity I have removed comments.

Conversion
--

A more compact form of mostTProductsByCategory (without comments) is:

    public static <T extends Comparable<T>> Product[] mostTProductsByCategory(
            Function<Product, T> func) {
        Map<String, Optional<Product>> m = products.stream().
            collect(groupingBy(Product::getCategory, maxBy(comparing(func))));
        return m.values().stream().
            map(Optional::get).
            toArray(Product[]::new);
    }

i.e. show the Map rather than the Collector. A DRYer form:

    public static Collection<String> countries(boolean ordered) {
        Stream<String> countries = customers.stream().map(Customer::getCountry);
        if (ordered) {
            return countries.distinct().collect(toCollection(LinkedList::new));
        } else {
            return countries.collect(Collectors.<String>toSet());
        }
    }

Shame that the type witness is required.
For sequential streams there is probably no advantage here to providing a linked list over a list, except for API educational value.

Elements
--

Simpler form. The function to apply the terminal op obfuscates:

    public static Optional<Supplier> suppliersInCountry(boolean findAny, String country) {
        Stream<Supplier> s = suppliers.stream().
            // filter suppliers in the given country
            filter(supplier -> country.equals(supplier.getCountry()));
        return findAny ? s.findAny() : s.findFirst();
    }

The use of the collector is complicating matters. Off the top of my head, what you require is a reducer that reduces to the first product whose stock units is > 0:

    public static Map<String, Product> inStockProductsByCategory(
            boolean findAny) {
        BinaryOperator<Product> reduceToFirstMatch = (l, r) -> (l != null)
            ? l
            : (r != null && r.getUnitsInStock() > 0) ? r : null;
        return products.stream().collect(
            groupingBy(Product::getCategory,
                       reducing(null, reduceToFirstMatch)));
    }

The above relies on the associativity of the binary operator. There is no need to collect into a filtered list and then stream with findAny/findFirst, since we can reduce to the result as each element is received. That reduceToFirstMatch can easily be abstracted into a higher-order function taking a predicate.

Grouping
--

    public static Map<String, List<Order>> groupOrdersByCustomer() {
        // a collector that generates an order list from a customer
        Collector<Customer, List<Order>, List<Order>> collectOrder = Collector.
            of(ArrayList<Order>::new,
               (orders, customer) -> { orders.addAll(customer.getOrders()); },
               (left, right) -> { left.addAll(right); return left; });
        return customers.parallelStream().
            // convert customers to a Map whose key is the name and whose
            // value is its orders
            collect(Collectors.groupingBy(Customer::getCompanyName, collectOrder));
    }

Not clear to me if there are multiple customers with the same name, so the catamorphic collect may not be necessary and toMap can be used instead. Perhaps there is another way to show this collect usage?
Can simplify:

    public static <S, T extends Comparable<T>> Map<S, Optional<Product>> productMaxByTGroupByS(
            boolean maxBy,
            Function<Product, S> groupFunc,
            Function<Product, T> compFunc) {
        // Comparator of Product which will compare on T by the given function
        Comparator<Product> comp = Comparator.comparing(compFunc
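The grouping-with-downstream-collector pattern from Paul's review can be sketched as a self-contained, compilable example. The `Product` class and sample data below are stand-ins (field and accessor names follow the thread, but the demo's real classes are not shown here):

```java
import java.util.*;
import java.util.stream.*;
import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.*;

public class StreamsSketch {
    // Minimal stand-in for the demo's Product class.
    static class Product {
        final String name, category;
        final int unitsInStock;
        Product(String name, String category, int unitsInStock) {
            this.name = name; this.category = category; this.unitsInStock = unitsInStock;
        }
        String getName() { return name; }
        String getCategory() { return category; }
        int getUnitsInStock() { return unitsInStock; }
    }

    static final List<Product> PRODUCTS = Arrays.asList(
            new Product("Chai", "Beverages", 39),
            new Product("Chang", "Beverages", 17),
            new Product("Tofu", "Produce", 0));

    // groupingBy with a maxBy downstream collector, as in Paul's sketch:
    // group products by category, keeping the product with the most stock.
    static Map<String, Optional<Product>> mostStockedByCategory() {
        return PRODUCTS.stream()
                .collect(groupingBy(Product::getCategory,
                                    maxBy(comparing(Product::getUnitsInStock))));
    }

    public static void main(String[] args) {
        mostStockedByCategory().forEach((cat, p) ->
                System.out.println(cat + " -> " + p.map(Product::getName).orElse("none")));
    }
}
```

Note the static imports of `groupingBy`, `maxBy`, and `comparing`, matching the first high-level point in the review.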
Re: Demo for Parallel Core Collection API
Hi Paul, you commented: "suggest that all streams are sequential. There is an inconsistency in the use and in some cases it is embedded in other stream usages." We do not really understand what exactly is meant; could you elaborate a little bit? Is it because we want to show people that we should use stream more than parallelStream? Thank you

-----Original Message-----
From: Paul Sandoz
Sent: Monday, October 14, 2013 11:28 PM
To: Tristan Yan
Cc: core-libs-dev@openjdk.java.net; lambda-...@openjdk.java.net; Taras Ledkov; Andrey Nazarov; Aleksandre Iline
Subject: Re: Demo for Parallel Core Collection API

Hi, some high-level points first:

- Try to use static import where possible.
- Suggest that all streams are sequential. There is an inconsistency in the use, and in some cases it is embedded in other stream usages.
- ParallelCore is not a very descriptive name. Suggest streams.
- Suggest moving the data classes and XML parsing code to a separate package.
- Unfortunately Supplier is overloaded with the functional interface in j.u.function. Not sure much could be done about that.
Re: Demo for Parallel Core Collection API
Thank you, Paul. There is one minor question: I couldn't compile one of your snippets, and then I realized that's because the groupingBy signature is

    public static <T, K, A, D> Collector<T, ?, Map<K, D>> groupingBy(
            Function<? super T, ? extends K> classifier,
            Collector<? super T, A, D> downstream)

which makes me wonder why we need the type parameter A here, and why we don't use a wildcard instead. Why is the method signature not

    public static <T, K, D> Collector<T, ?, Map<K, D>> groupingBy(
            Function<? super T, ? extends K> classifier,
            Collector<? super T, ?, D> downstream)

I do remember there was a discussion about this, but I couldn't find the answer. The code that can't be compiled:

    return products.stream().
        // collect products into a Map whose key is S by groupFunc and whose
        // value is the max or min by compFunc
        collect(groupingBy(groupFunc, maxBy ? maxBy(comp) : minBy(comp)));

-----Original Message-----
From: Paul Sandoz
Sent: Monday, October 14, 2013 11:28 PM
To: Tristan Yan
Cc: core-libs-dev@openjdk.java.net; lambda-...@openjdk.java.net; Taras Ledkov; Andrey Nazarov; Aleksandre Iline
Subject: Re: Demo for Parallel Core Collection API
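The type parameter `A` Tristan asks about is the downstream collector's intermediate accumulation type, which can differ from its result type `D`. One plausible reason it is named rather than wildcarded is that the implementation must call the downstream's `supplier()`, `accumulator()`, and `finisher()` with a single consistent intermediate type; callers never see `A`, since it is wildcarded as `?` in the returned collector. A sketch showing a downstream collector where `A` (`StringBuilder`) and `D` (`String`) genuinely differ:

```java
import java.util.*;
import java.util.stream.*;

public class GroupingBySketch {
    // A collector whose intermediate accumulation type A (StringBuilder)
    // differs from its result type D (String) -- exactly the hidden type
    // that groupingBy's extra parameter A has to carry through.
    static Map<Integer, String> byLength() {
        Collector<String, StringBuilder, String> joinNames = Collector.of(
                StringBuilder::new,                     // supplier of A
                (sb, s) -> sb.append(s).append(';'),    // accumulate T into A
                StringBuilder::append,                  // combine two A's
                StringBuilder::toString);               // finish A -> D
        return Stream.of("chai", "chang", "tofu")
                .collect(Collectors.groupingBy(String::length, joinNames));
    }

    public static void main(String[] args) {
        System.out.println(byLength());
    }
}
```

Here the map values are `String`s, yet the per-group accumulation happens in `StringBuilder`s; with only `Collector<? super T, ?, D>` in the parameter type, each use of the downstream's functions would capture a fresh unknown type.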
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Hi Stuart I tried your suggestion but unfortunately all the benchmarks have dependencies to Main class because they need get stub from server. I suggest we move the benchmark tests to unnamed package unless we do want to put TestLibrary into a named package right now. Please let me know if you have objection on this. Thank you Tristan On 11/09/2013 02:28 AM, Stuart Marks wrote: Hi Tristan, Yes, it's kind of a problem that the RMI TestLibrary is in the unnamed package. Classes in a named package cannot import classes from the unnamed package. We've run into problems with this before. Eventually, we should move TestLibrary a named package. I think it's possible to work around this without too much difficulty. Note that classes in the unnamed package can import classes from named packages. So, perhaps you can put the RmiBench main class in the unnamed package so it has access to TestLibrary. Then have the benchmarks themselves in the bench.rmi package. The config file already references the benchmarks by fully qualified class name (e.g., bench.rmi.NullCalls) so with a bit of tinkering you ought to be able to get this to work. s'marks On 11/8/13 3:00 AM, Tristan Yan wrote: Thank you, Stuart There is one review point I want to ask you opinion. Which is the reason that I moved from test/java/rmi/reliability/benchmark/bench/rmi to test/java/rmi/reliability/benchmark is Main.java need access class TestLibrary for supporting random port. TestLibrary is a unpackage class, I couldn't find a way to let a class which has Package to access the class without package. Do you have suggestion on that? Thank you so much. Tristan On 11/06/2013 09:50 AM, Stuart Marks wrote: On 11/1/13 9:18 AM, Tristan Yan wrote: Hi Everyone http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/ Description: 1. Convert shell script test to Java program test. 2. Using random server port by reusing Darryl Mocek's work to replace fixed server port. 3. Using java Process class to start client process. 
4. Also convert the other shell script test, runSerialBench.sh, to a Java program test. Hi Tristan, Several comments on this webrev. ** The original arrangement within the test/java/rmi/reliability/benchmark directory had the main benchmark files (scripts) at the top, some benchmark framework files in the bench subdirectory, and the actual RMI and serialization benchmarks in bench/rmi and bench/serial subdirectories. The webrev moves all the RMI benchmarks to the top benchmark directory but leaves the serial benchmarks in bench/serial. The RMI benchmarks are now all cluttering the top directory, but the main serial benchmark test is now buried in the bench/serial directory. The nice organization that was there before is now spoiled. Is this rearrangement necessary in order to convert the scripts to Java? I would prefer the original arrangement be left in place. ** The RMI benchmark Main.java file has a @run tag of the form, @run main/othervm/policy=policy.all/timeout=1800 -server Main -c config There is a subtle but serious problem here: the -server option is passed to the JVM and not as an argument to the main() method. The main() method gets neither a -server nor a -client argument, so its default run mode as defined by the benchmark itself is SAMEVM. This runs the client and server in the same JVM, which is different from the shell script, which ran separate client and server JVMs. ** The logic to process the -server argument still expects to take a port, even though the port is assigned automatically. So the obvious fix to the above, @run main/othervm/policy=policy.all/timeout=1800 Main -server -c config doesn't work, since a port is missing. The logic to increment the argument index to collect the port argument should be removed. Also, the -server line should be restored to the usage message, but without the port argument. ** After this is done, the client's command line is constructed improperly. 
The command line ends up looking something like this: java client -cp classpath Main client localhost:58583 -c config The word client appears twice, but what's really required is -client to appear as an argument after Main. ** The client is run using ProcessBuilder, which by default sends stdout and stderr to pipes to be read by the parent. But the parent never reads them, thus any messages from the client are never seen. The client is the one that emits the benchmark report, so its output needs to be seen. It might be sufficient to have the client inherit the parent's stdout and stderr. This might intermix the client's and server's output, but it's better than nothing. ** The config file is checked with the following code: try { confFile = args[i]; confstr = new FileInputStream(System.getProperty("test.src") + System.getProperty("file.separator") + confFile); } catch (IOException e) { die("Error
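Stuart's ProcessBuilder point is easy to reproduce: anything a child writes to an unread pipe is simply lost. A minimal sketch of the inherit-stdio approach he suggests (the echoed message and class name here are illustrative, not from the actual benchmark):

```java
public class InheritDemo {
    // Launch a child process whose stdout/stderr go straight to this JVM's
    // console, so output such as the client's benchmark report is visible
    // without the parent having to drain pipes.
    static int run(String... cmd) {
        try {
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.inheritIO();   // child shares the parent's stdout/stderr/stdin
            return pb.start().waitFor();
        } catch (java.io.IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        int status = run("echo", "client benchmark report appears on the console");
        if (status != 0)
            throw new AssertionError("child exited with status " + status);
    }
}
```

An alternative is pb.redirectErrorStream(true) plus a thread that drains p.getInputStream(), which keeps client and server output separable at the cost of more plumbing.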
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Thank you, Stuart. There is one review point I want to ask your opinion on. The reason I moved the tests from test/java/rmi/reliability/benchmark/bench/rmi to test/java/rmi/reliability/benchmark is that Main.java needs access to the TestLibrary class to support random ports. TestLibrary is in the unnamed package, and I couldn't find a way for a class in a named package to access a class in the unnamed package. Do you have a suggestion for that? Thank you so much. Tristan On 11/06/2013 09:50 AM, Stuart Marks wrote: On 11/1/13 9:18 AM, Tristan Yan wrote: Hi Everyone http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/ Description: 1. Convert the shell script test to a Java program test. 2. Use a random server port (reusing Darryl Mocek's work) to replace the fixed server port. 3. Use the Java Process class to start the client process. 4. Also convert the other shell script test, runSerialBench.sh, to a Java program test. Hi Tristan, Several comments on this webrev. ** The original arrangement within the test/java/rmi/reliability/benchmark directory had the main benchmark files (scripts) at the top, some benchmark framework files in the bench subdirectory, and the actual RMI and serialization benchmarks in bench/rmi and bench/serial subdirectories. The webrev moves all the RMI benchmarks to the top benchmark directory but leaves the serial benchmarks in bench/serial. The RMI benchmarks are now all cluttering the top directory, but the main serial benchmark test is now buried in the bench/serial directory. The nice organization that was there before is now spoiled. Is this rearrangement necessary in order to convert the scripts to Java? I would prefer the original arrangement be left in place. ** The RMI benchmark Main.java file has a @run tag of the form, @run main/othervm/policy=policy.all/timeout=1800 -server Main -c config There is a subtle but serious problem here: the -server option is passed to the JVM and not as an argument to the main() method. The main() method gets neither a -server nor a -client argument, so its default run mode as defined by the benchmark itself is SAMEVM. This runs the client and server in the same JVM, which is different from the shell script, which ran separate client and server JVMs. ** The logic to process the -server argument still expects to take a port, even though the port is assigned automatically. So the obvious fix to the above, @run main/othervm/policy=policy.all/timeout=1800 Main -server -c config doesn't work, since a port is missing. The logic to increment the argument index to collect the port argument should be removed. Also, the -server line should be restored to the usage message, but without the port argument. ** After this is done, the client's command line is constructed improperly. The command line ends up looking something like this: java client -cp classpath Main client localhost:58583 -c config The word client appears twice, but what's really required is -client to appear as an argument after Main. ** The client is run using ProcessBuilder, which by default sends stdout and stderr to pipes to be read by the parent. But the parent never reads them, thus any messages from the client are never seen. The client is the one that emits the benchmark report, so its output needs to be seen. It might be sufficient to have the client inherit the parent's stdout and stderr. This might intermix the client's and server's output, but it's better than nothing. ** The config file is checked with the following code: try { confFile = args[i]; confstr = new FileInputStream(System.getProperty("test.src") + System.getProperty("file.separator") + confFile); } catch (IOException e) { die("Error: unable to open \"" + args[i] + "\""); } This is potentially misleading, as the message doesn't print the actual filename that was attempted to be opened. This is important, as the test.src property doesn't exist in the client JVM. Note that the original shell script passed full pathnames for the config file to both the client and the server. The new @run tag merely says -c config which redefines the config filename to be relative to the test.src directory. You could pass -Dtest.src=... to the client, but it seems like there should be something better that can be done. ** The client needs to have its security policy set up. This is missing from the construction of the client's command line. ** ProcessBuilder takes a List<String> for its command; there is no need to turn the list into an array. ** In the benchmark main methods, code of the form, while (true) { runBenchmarks(); if (exitRequested) { System.exit(); } } was replaced with while (!exitRequested) { runBenchmarks(); } This is a subtle
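The subtlety Stuart is pointing at can be seen in isolation: the original loop checks the flag only after running the benchmarks, so the body always executes at least once, while the rewritten loop checks first and may execute zero times. A minimal sketch (variable names mirror the review; the counting is illustrative):

```java
public class LoopSemantics {
    static boolean exitRequested = true;   // flag set before the loop is entered

    // Original form: body runs, then the flag is checked -- at least one pass.
    static int originalForm() {
        int runs = 0;
        while (true) {
            runs++;                        // stand-in for runBenchmarks()
            if (exitRequested) break;      // stand-in for the exit path
        }
        return runs;
    }

    // Rewritten form: flag is checked first -- possibly zero passes.
    static int rewrittenForm() {
        int runs = 0;
        while (!exitRequested) {
            runs++;
        }
        return runs;
    }

    public static void main(String[] args) {
        // With the flag already set, the two forms disagree: 1 run vs 0 runs.
        System.out.println(originalForm() + " vs " + rewrittenForm());
    }
}
```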
Re: RFR: 8027822: ProblemList.txt Updates (11/2013)
Hi Amy Can we also exclude the two below? test/com/sun/net/httpserver/Test9a.java test/java/util/concurrent/ThreadPoolExecutor/CoreThreadTimeOut.java Thank you Tristan On 06/11/2013 10:45, Amy Lu wrote: On 11/5/13 7:09 PM, Amy Lu wrote: On 11/5/13 6:01 PM, Alan Bateman wrote: On 05/11/2013 09:26, Amy Lu wrote: : Please review ProblemList.txt changes: https://dl.dropboxusercontent.com/u/5812451/yl153753/8027822/webrev/index.html This looks okay to me although I'm not sure about sun/util/resources/TimeZone/Bug6317929.java (for some reason Aleksej fixed the test and also excluded it, pending a decision/discussion on whether this test should be removed). Aleksej, Should this test be removed from ProblemList? Thanks, Amy Should we use the opportunity to add all the tests that are failing due to the exact math intrinsic (JDK-8027622)? -Alan. Added 8027622 and affected tests into ProblemList: https://dl.dropboxusercontent.com/u/5812451/yl153753/8027822/webrev.01/index.html Thanks, Amy
Re: RFR: 8027822: ProblemList.txt Updates (11/2013)
Sorry, I replied to the wrong thread, please ignore my previous mail. Tristan
RFR for JDK-8025198 Intermittent test failure: java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
Thank you Martin I have made the code change, taking a similar approach to yours. Please review the code change for JDK-8025198. Description: A race condition exists in the test ThrowingTasks.java. We should synchronize CountDownLatch.countDown and CountDownLatch.getCount. http://cr.openjdk.java.net/~tyan/8025198/../~ewang/tristan/JDK-8025198/webrev.01/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java.sdiff.html Thank you Tristan On 11/03/2013 12:46 PM, Martin Buchholz wrote: I think the author of this test was sleep-deprived by baby Tristan back in 2007. I tried and failed to find a single synchronizer that properly does what we are trying to do with allStarted here. Too bad CountDownLatch.countDown doesn't return the count. Best to introduce another synchronizer. I propose this fix:

diff --git a/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java b/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
--- a/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
+++ b/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
@@ -136,6 +136,7 @@ }
 static final CountDownLatch allStarted = new CountDownLatch(flakes.size());
+static final AtomicInteger taskSerial = new AtomicInteger(0);
 static final CountDownLatch allContinue = new CountDownLatch(1);
 static class PermissiveSecurityManger extends SecurityManager {
@@ -164,7 +165,8 @@ }
 @Override protected void beforeExecute(Thread t, Runnable r) {
 allStarted.countDown();
-if (allStarted.getCount() < getCorePoolSize())
+// Get last core pool cohort to start in unison
+if (flakes.size() - taskSerial.incrementAndGet() < getCorePoolSize())
 try { allContinue.await(); }
 catch (InterruptedException x) { unexpected(x); }
 beforeExecuteCount.getAndIncrement();

On Fri, Nov 1, 2013 at 6:48 PM, Tristan Yan tristan@oracle.com wrote: This only happened when I tried a 1000-times loop run; see the jstack output below. Also, by using jmap/jhat, the values below helped me find the reason: 
flakes.size() = 25 allStarted.state = 1 beforeExecuteCount.count = 1 Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.0-b56 mixed mode): Attach Listener #32 daemon prio=9 os_prio=31 tid=0x7faf1d808000 nid=0x390b waiting on condition [0x] java.lang.Thread.State: RUNNABLE Thread-22 #31 prio=5 os_prio=31 tid=0x7faf1d07b000 nid=0x5d0b waiting on condition [0x00019b53] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for 0x0001683e9610 (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at ThrowingTasks$CheckingExecutor.beforeExecute(ThrowingTasks.java:168) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1139) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Thread-21 #30 prio=5 os_prio=31 tid=0x7faf1d80c800 nid=0x5507 waiting on condition [0x00019ae1b000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for 0x0001683e9610 (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at ThrowingTasks$CheckingExecutor.beforeExecute(ThrowingTasks.java:168) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1139) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Thread-20 #29 prio=5 os_prio=31 tid=0x7faf1c10b000 nid=0x6307 waiting on condition [0x00019b227000
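The heart of Martin's patch is replacing a read-after-write on the latch (countDown() then getCount(), which another thread can interleave) with a unique serial number per task. A self-contained sketch of that idiom, using the same counts as the failing run (25 tasks, core pool size 10; the class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CohortDemo {
    // Each thread draws a unique serial from incrementAndGet(), so the
    // "am I in the last core-pool-size cohort?" decision is race-free:
    // exactly corePoolSize threads can ever satisfy the condition.
    static int waiters(int tasks, int corePoolSize) {
        AtomicInteger taskSerial = new AtomicInteger(0);
        AtomicInteger tookWaitBranch = new AtomicInteger(0);
        Thread[] threads = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            threads[i] = new Thread(() -> {
                if (tasks - taskSerial.incrementAndGet() < corePoolSize)
                    tookWaitBranch.incrementAndGet();  // stand-in for allContinue.await()
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return tookWaitBranch.get();
    }

    public static void main(String[] args) {
        // 25 tasks, core pool of 10: exactly 10 threads take the wait branch.
        System.out.println(waiters(25, 10));
    }
}
```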
Re: RFR for JDK-8025198 Intermittent test failure: java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
Thank you Martin I updated the code as below: http://cr.openjdk.java.net/~ewang/tristan/JDK-8025198/webrev.02/ Thank you On 11/04/2013 11:38 AM, Martin Buchholz wrote: Tristan, I think your change is correct. There are some stylistic improvements I would make: - make lock final - make lessThanCorePoolSize blank final: final boolean lessThanCorePoolSize; - add spaces after // and before { On Sun, Nov 3, 2013 at 4:49 PM, Tristan Yan tristan@oracle.com wrote: Thank you Martin I have made the code change, taking a similar approach to yours. Please review the code change for JDK-8025198. Description: A race condition exists in the test ThrowingTasks.java. We should synchronize CountDownLatch.countDown and CountDownLatch.getCount. http://cr.openjdk.java.net/~ewang/tristan/JDK-8025198/webrev.01/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java.sdiff.html Thank you Tristan On 11/03/2013 12:46 PM, Martin Buchholz wrote: I think the author of this test was sleep-deprived by baby Tristan back in 2007. I tried and failed to find a single synchronizer that properly does what we are trying to do with allStarted here. Too bad CountDownLatch.countDown doesn't return the count. Best to introduce another synchronizer. 
I propose this fix:

diff --git a/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java b/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
--- a/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
+++ b/test/java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
@@ -136,6 +136,7 @@ }
 static final CountDownLatch allStarted = new CountDownLatch(flakes.size());
+static final AtomicInteger taskSerial = new AtomicInteger(0);
 static final CountDownLatch allContinue = new CountDownLatch(1);
 static class PermissiveSecurityManger extends SecurityManager {
@@ -164,7 +165,8 @@ }
 @Override protected void beforeExecute(Thread t, Runnable r) {
 allStarted.countDown();
-if (allStarted.getCount() < getCorePoolSize())
+// Get last core pool cohort to start in unison
+if (flakes.size() - taskSerial.incrementAndGet() < getCorePoolSize())
 try { allContinue.await(); }
 catch (InterruptedException x) { unexpected(x); }
 beforeExecuteCount.getAndIncrement();

On Fri, Nov 1, 2013 at 6:48 PM, Tristan Yan tristan@oracle.com wrote: This only happened when I tried a 1000-times loop run; see the jstack output below. Also, by using jmap/jhat, the values below helped me find the reason: flakes.size() = 25 allStarted.state = 1 beforeExecuteCount.count = 1 Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.0-b56 mixed mode): Attach Listener #32 daemon prio=9 os_prio=31 tid=0x7faf1d808000 nid=0x390b waiting on condition [0x] java.lang.Thread.State: RUNNABLE Thread-22 #31 prio=5 os_prio=31 tid=0x7faf1d07b000 nid=0x5d0b waiting on condition [0x00019b53] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for 0x0001683e9610 (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at ThrowingTasks$CheckingExecutor.beforeExecute(ThrowingTasks.java:168) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1139) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Thread-21 #30 prio=5 os_prio=31 tid=0x7faf1d80c800 nid=0x5507 waiting on condition [0x00019ae1b000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for 0x0001683e9610
Re: RFR for JDK-7190106 RMI benchmark fails intermittently because of use of fixed port
Hi Everyone http://cr.openjdk.java.net/~pzhang/Tristan/7190106/webrev/ Description: 1. Convert the shell script test to a Java program test. 2. Use a random server port (reusing Darryl Mocek's work) to replace the fixed server port. 3. Use the Java Process class to start the client process. 4. Also convert the other shell script test, runSerialBench.sh, to a Java program test. Thank you Tristan On 01/11/2013 23:58, Stuart Marks wrote: On 10/31/13 10:22 PM, Tristan Yan wrote: I am working on bug https://bugs.openjdk.java.net/browse/JDK-7190106. Based on my research, it looks like the issue of the fixed port was already addressed by Stuart Marks in other RMI tests, which are Java based. I would like to reuse his solution; however, it does not work for shell-based tests. (Darryl Mocek did the unique port work for the RMI tests.) Was the patch attached to your message? If so, it didn't get through. Most OpenJDK mailing lists strip off attachments before forwarding the message to the recipients. 2. My recommendation would be to convert this shell script test into a Java-based test and re-use the dynamic port allocation solution by Stuart Marks to address the issue. 3. Also, this test was written with server/client mode in a shell script. In the past there have been sync issues between server and client which caused the test to fail. If we convert the shell script into a Java-based test, it would avoid using the sleep 10 mechanism to allow the server and client to start up, and also give us better control in synchronizing server and client. (Background for interested readers.) In general, yes, it's quite difficult to make reliable shell tests, especially for multi-process tests like this one. There is the unique port issue, and there is also the issue of how long the client should wait until the server is ready. Error handling is also a problem; for example, if one of the JVMs gets an unexpected exception, it's easy for shell tests to mishandle this case. They might hang or erroneously report success. 
-- If this is a rewrite, it's probably fairly large, so you need to upload it somewhere (e.g., cr.openjdk.java.net) and then post a link to it. Thanks. s'marks
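For readers unfamiliar with the "random port" technique this thread keeps referencing: the usual trick is to bind a ServerSocket to port 0 and let the OS choose a free ephemeral port. A minimal sketch (the real TestLibrary helper may differ in details):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
    // Ask the OS for an unused ephemeral port by binding to port 0.
    // Note the small race: the port is released on close and could in
    // principle be grabbed by another process before the test rebinds it.
    static int pickFreePort() {
        try (ServerSocket ss = new ServerSocket(0)) {
            return ss.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException("could not allocate a port", e);
        }
    }

    public static void main(String[] args) {
        int port = pickFreePort();
        if (port < 1 || port > 65535)
            throw new AssertionError("implausible port " + port);
        System.out.println("picked free port " + port);
    }
}
```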
RFR for JDK-8025198 Intermittent test failure: java/util/concurrent/ThreadPoolExecutor/ThrowingTasks.java
Hi Everyone

I am working on bug https://bugs.openjdk.java.net/browse/JDK-8025198. The root cause of this bug is a race condition in the code below: there is a very small chance that after the 11th thread finishes allStarted.countDown(), but before it checks the count, the 10th thread does its countDown; then the 2nd to 11th threads all go into the wait, and the last thread never gets to execute because the BlockingQueue is full.

allStarted.countDown();
if (allStarted.getCount() < getCorePoolSize()) ...

I am going to fix the code above as below:

lock.lock();
boolean lessThanCorePoolSize = false;
try {
    allStarted.countDown();
    lessThanCorePoolSize = allStarted.getCount() < getCorePoolSize();
} finally {
    lock.unlock();
}
if (lessThanCorePoolSize) ...

Please let me know if you have any comments or suggestions.

Tristan
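Tristan's lock-based fix can be exercised in isolation: once countDown() and getCount() execute under one lock, every thread observes a distinct count, and exactly corePoolSize of them see a value below the threshold. A runnable sketch under the same counts as the failing run (the class and method names are illustrative, not from the test):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class LockedLatchDemo {
    static int waiters(int tasks, int corePoolSize) {
        CountDownLatch allStarted = new CountDownLatch(tasks);
        ReentrantLock lock = new ReentrantLock();
        AtomicInteger tookWaitBranch = new AtomicInteger(0);
        Thread[] threads = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            threads[i] = new Thread(() -> {
                final boolean lessThanCorePoolSize;
                lock.lock();
                try {
                    // countDown() and getCount() now act as one atomic step,
                    // so no two threads can observe the same count value.
                    allStarted.countDown();
                    lessThanCorePoolSize = allStarted.getCount() < corePoolSize;
                } finally {
                    lock.unlock();
                }
                if (lessThanCorePoolSize)
                    tookWaitBranch.incrementAndGet();  // stand-in for the await branch
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return tookWaitBranch.get();
    }

    public static void main(String[] args) {
        // 25 tasks, core pool of 10: exactly 10 threads take the wait branch.
        System.out.println(waiters(25, 10));
    }
}
```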