[jira] [Issue Comment Edited] (MATH-780) BSPTree class and recovery of a Euclidean 3D BRep
[ https://issues.apache.org/jira/browse/MATH-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258362#comment-13258362 ]

Andrew Willis edited comment on MATH-780 at 4/20/12 4:42 PM:
-
The code in BSPMesh2.java produces the following error when I run it. If you comment in the line that re-assigns the coordinate data to be cubeCoords1, then the code works fine. The only difference between the two data sets is that one coordinate has changed by a small amount.

Exception in thread "main" java.lang.ClassCastException: org.apache.commons.math3.geometry.partitioning.BoundaryAttribute cannot be cast to java.lang.Boolean
    at org.apache.commons.math3.geometry.euclidean.twod_exact.PolygonsSet.computeGeometricalProperties(PolygonsSet.java:135)
    at org.apache.commons.math3.geometry.partitioning.AbstractRegion.getSize(AbstractRegion.java:380)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet$FacetsContributionVisitor.addContribution(PolyhedronsSet.java:171)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet$FacetsContributionVisitor.visitInternalNode(PolyhedronsSet.java:153)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:262)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:263)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet.computeGeometricalProperties(PolyhedronsSet.java:118)
    at org.apache.commons.math3.geometry.partitioning.AbstractRegion.getSize(AbstractRegion.java:380)
    at datastructures.j3d.bsptree.BSPMesh.init(BSPMesh.java:130)
    at datastructures.j3d.bsptree.BSPMesh2.main(BSPMesh2.java:206)

was (Author: arwillis):
This code produces the following error:

Exception in thread "main" java.lang.ClassCastException: org.apache.commons.math3.geometry.partitioning.BoundaryAttribute cannot be cast to java.lang.Boolean
    at org.apache.commons.math3.geometry.euclidean.twod_exact.PolygonsSet.computeGeometricalProperties(PolygonsSet.java:135)
    at org.apache.commons.math3.geometry.partitioning.AbstractRegion.getSize(AbstractRegion.java:380)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet$FacetsContributionVisitor.addContribution(PolyhedronsSet.java:171)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet$FacetsContributionVisitor.visitInternalNode(PolyhedronsSet.java:153)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:262)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:263)
    at org.apache.commons.math3.geometry.partitioning.BSPTree.visit(BSPTree.java:261)
    at org.apache.commons.math3.geometry.euclidean.threed_exact.PolyhedronsSet.computeGeometricalProperties(PolyhedronsSet.java:118)
    at org.apache.commons.math3.geometry.partitioning.AbstractRegion.getSize(AbstractRegion.java:380)
    at datastructures.j3d.bsptree.BSPMesh.init(BSPMesh.java:130)
    at datastructures.j3d.bsptree.BSPMesh2.main(BSPMesh2.java:206)

BSPTree class and recovery of a Euclidean 3D BRep
-
Key: MATH-780
URL: https://issues.apache.org/jira/browse/MATH-780
Project: Commons Math
Issue Type: Bug
Affects Versions: 3.0
Environment: Linux
Reporter: Andrew Willis
Labels: BSPTree, euclidean.threed
Attachments: BSPMesh2.java

New to the work here. Thanks for your efforts on this code. I create a BSPTree from a BoundaryRep (BRep); my test BRep is a tetrahedron represented by a float array containing 4 3D points in (x, y, z) order and an array of indices (4 triplets for the 4 faces of the tet). I construct a BSPMesh() as shown in the code below. I can construct the PolyhedronsSet(); however, when I interrogate the shape (with getSize() or getBoundarySize()) I get infinity back as a result. When I try to get back the BRep (by traversing the BSPTree resulting from PolyhedronsSet.getTree(true) and getting the PolygonsSet() associated with each 3D SubPlane), I get a null vertex back and strange values. Any ideas?

public class BSPMesh {
    public BSPMesh(float[] coords, int[] indices) {
[jira] [Issue Comment Edited] (IO-323) What should happen in FileUtils.sizeOf[Directory] when an overflow takes place?
[ https://issues.apache.org/jira/browse/IO-323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13254737#comment-13254737 ]

Gary D. Gregory edited comment on IO-323 at 4/16/12 2:53 PM:
-
Hm, how about -1 for the current API, and adding an API that uses BigInteger if you really care about huge sizes? There is no point in continuing to count once you overflow.

was (Author: garydgregory):
Hm, how about -1 for the current API and adding an API that uses BigInteger if you really care about huge sizes?

What should happen in FileUtils.sizeOf[Directory] when an overflow takes place?
---
Key: IO-323
URL: https://issues.apache.org/jira/browse/IO-323
Project: Commons IO
Issue Type: Bug
Components: Utilities
Affects Versions: 2.3
Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 03:44:56-0500)
Maven home: C:\Java\apache-maven-3.0.4\bin\..
Java version: 1.6.0_31, vendor: Sun Microsystems Inc.
Java home: C:\Program Files\Java\jdk1.6.0_31\jre
Default locale: en_US, platform encoding: Cp1252
OS name: windows 7, version: 6.1, arch: amd64, family: windows
Reporter: Gary D. Gregory

FileUtils.sizeOf[Directory] adds longs. What should happen when an overflow happens?

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
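The -1 / BigInteger suggestion is easy to motivate: plain long addition wraps silently past Long.MAX_VALUE. A minimal sketch of both behaviors (class and method names are mine for illustration, not the Commons IO API):

```java
import java.math.BigInteger;

public class SizeOverflowSketch {

    // Plain long addition silently wraps: Long.MAX_VALUE + 1 == Long.MIN_VALUE.
    static long sumLongs(long a, long b) {
        return a + b;
    }

    // BigInteger accumulation can never overflow, at the cost of object allocation.
    static BigInteger sumBig(long a, long b) {
        return BigInteger.valueOf(a).add(BigInteger.valueOf(b));
    }

    public static void main(String[] args) {
        System.out.println(sumLongs(Long.MAX_VALUE, 1)); // wraps to a negative value
        System.out.println(sumBig(Long.MAX_VALUE, 1));   // exact result
    }
}
```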
[jira] [Issue Comment Edited] (MATH-718) inverseCumulativeProbability of BinomialDistribution returns wrong value for large trials.
[ https://issues.apache.org/jira/browse/MATH-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253360#comment-13253360 ]

Thomas Neidhart edited comment on MATH-718 at 4/13/12 1:22 PM:
---
The problem Christian described wrt the PascalDistribution is a simple integer overflow in the class itself:

{noformat}
public double cumulativeProbability(int x) {
    double ret;
    if (x < 0) {
        ret = 0.0;
    } else {
        ret = Beta.regularizedBeta(probabilityOfSuccess, numberOfSuccesses, x + 1);
    }
    return ret;
}
{noformat}

When x = Integer.MAX_VALUE, adding 1 to it will result in an overflow. As the parameter of regularizedBeta is a double anyway, x should be cast to long/double before the addition.

Edit: Similar things also happen in other Distribution implementations, e.g. BinomialDistribution, so it should be fixed there as well.

was (Author: tn):
The problem Christian described wrt the PascalDistribution is a simple integer overflow in the class itself:

{noformat}
public double cumulativeProbability(int x) {
    double ret;
    if (x < 0) {
        ret = 0.0;
    } else {
        ret = Beta.regularizedBeta(probabilityOfSuccess, numberOfSuccesses, x + 1);
    }
    return ret;
}
{noformat}

When x = Integer.MAX_VALUE, adding 1 to it will result in an overflow. As the parameter of regularizedBeta is a double anyway, x should be cast to long/double before the addition.

inverseCumulativeProbability of BinomialDistribution returns wrong value for large trials.
--
Key: MATH-718
URL: https://issues.apache.org/jira/browse/MATH-718
Project: Commons Math
Issue Type: Bug
Affects Versions: 2.2, 3.0
Reporter: Yuji Uchiyama
Assignee: Sébastien Brisard
Fix For: 3.1, 4.0

The inverseCumulativeProbability method of the BinomialDistributionImpl class returns a wrong value for large trials. The following code reproduces the problem:

{{System.out.println(new BinomialDistributionImpl(100, 0.5).inverseCumulativeProbability(0.5));}}

This returns 499525, though it should be 49. I'm not sure how it should be fixed, but the cause is that the cumulativeProbability method returns Infinity, not NaN. As a result the checkedCumulativeProbability method doesn't work as expected.
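The x + 1 overflow Thomas points out, and the suggested fix of widening to double before the addition, can be seen in isolation (a sketch, not the actual distribution code):

```java
public class PlusOneOverflow {

    // Buggy form: the addition is done in 32-bit int arithmetic and wraps.
    static int buggy(int x) {
        return x + 1;
    }

    // Fixed form: widen x to double first, then add, as the comment suggests.
    static double fixed(int x) {
        return (double) x + 1;
    }

    public static void main(String[] args) {
        System.out.println(buggy(Integer.MAX_VALUE)); // wraps to Integer.MIN_VALUE
        System.out.println(fixed(Integer.MAX_VALUE)); // 2.147483648E9, no overflow
    }
}
```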
[jira] [Issue Comment Edited] (IO-319) FileUtils.sizeOfDirectory follows symbolic links.
[ https://issues.apache.org/jira/browse/IO-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251677#comment-13251677 ]

Gary D. Gregory edited comment on IO-319 at 4/11/12 3:34 PM:
-
On Windows, the sym link is called an NTFS junction point. This has been available since Windows 2000 according to https://en.wikipedia.org/wiki/NTFS_symbolic_link

{noformat}
MKLINK [[/D] | [/H] | [/J]] Link Target

/D      Creates a directory symbolic link. Default is a file symbolic link.
/H      Creates a hard link instead of a symbolic link.
/J      Creates a Directory Junction.
Link    Specifies the new symbolic link name.
Target  Specifies the path (relative or absolute) that the new link refers to.
{noformat}

Can you include Windows support in your patch? I am on Windows myself. Thank you!

was (Author: garydgregory):
On Windows, the sym link is called an NTFS junction point. This has been available since Windows 2000 according to https://en.wikipedia.org/wiki/NTFS_symbolic_link

MKLINK [[/D] | [/H] | [/J]] Link Target
/D Creates a directory symbolic link. Default is a file symbolic link.
/H Creates a hard link instead of a symbolic link.
/J Creates a Directory Junction.
Link Specifies the new symbolic link name.
Target Specifies the path (relative or absolute) that the new link refers to.

FileUtils.sizeOfDirectory follows symbolic links.
-
Key: IO-319
URL: https://issues.apache.org/jira/browse/IO-319
Project: Commons IO
Issue Type: Bug
Affects Versions: 2.1
Reporter: Ravi Prakash
Priority: Critical
Attachments: commons-io-319.patch

First of all, thanks tons Apache Commons folks for all the amazing work! :) My first JIRA. Yayyy. I contributed B-) A symbolic link may create a cycle, and so sizeOfDirectory crashes with an IllegalArgumentException, e.g.
{noformat}
$ tree test
test
├── file
└── ravi
    ├── cycle -> ../../test
    └── file
{noformat}

causes FileUtils.sizeOfDirectory to crash like so:

{noformat}
$ java TestJAVA
Exception in thread "main" java.lang.IllegalArgumentException: somepath/test/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle/ravi/cycle does not exist
    at org.apache.commons.io.FileUtils.sizeOf(FileUtils.java:2053)
    at org.apache.commons.io.FileUtils.sizeOfDirectory(FileUtils.java:2089)
    at org.apache.commons.io.FileUtils.sizeOf(FileUtils.java:2057)
    at org.apache.commons.io.FileUtils.sizeOfDirectory(FileUtils.java:2089)
    at org.apache.commons.io.FileUtils.sizeOf(FileUtils.java:2057)
    at org.apache.commons.io.FileUtils.sizeOfDirectory(FileUtils.java:2089)
    at org.apache.commons.io.FileUtils.sizeOf(FileUtils.java:2057)
    at org.apache.commons.io.FileUtils.sizeOfDirectory(FileUtils.java:2089)
{noformat}

We faced the same issue in Hadoop :(. Check out https://issues.apache.org/jira/browse/HADOOP-6963 for our solution.
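One common way to break such a cycle is simply to skip symbolic links during the traversal, which is essentially what the Hadoop fix referenced above amounts to. A hedged sketch using the JDK's Files.isSymbolicLink (an illustration of the idea, not the actual FileUtils implementation):

```java
import java.io.File;
import java.nio.file.Files;

public class SafeSize {

    // Recursively sums file sizes, skipping symbolic links so that a
    // link cycle cannot make the traversal recurse forever.
    static long sizeOfDirectory(File dir) {
        long total = 0;
        File[] children = dir.listFiles();
        if (children == null) {
            return 0; // not a directory, or it could not be read
        }
        for (File f : children) {
            if (Files.isSymbolicLink(f.toPath())) {
                continue; // do not follow links: they may point back up the tree
            }
            total += f.isDirectory() ? sizeOfDirectory(f) : f.length();
        }
        return total;
    }
}
```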
[jira] [Issue Comment Edited] (IO-278) Improve Tailer performance with buffered reads
[ https://issues.apache.org/jira/browse/IO-278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251745#comment-13251745 ]

James Herrmann edited comment on IO-278 at 4/11/12 5:08 PM:

Looks like these patches have been rolled in, so congrats to all on that. I went ahead and maven repo'ed your fork anyway because of the IO-279 issue: Tailer would read from the start of the file on frequent occasions. Not good when you're sending alerts! Sergio, does your fork handle the IO-269 bug? Right now, that's the last known issue I'm concerned about. Thanks! Jim

Edited: wrong bug #

was (Author: herrjj):
Looks like these patches have been rolled in, so congrats to all on that. I went ahead and maven repo'ed your fork anyway because of the IO-269 issue: Tailer would read from the start of the file on frequent occasions. Not good when you're sending alerts! Sergio, does your fork handle the IO-269 bug? Right now, that's the last known issue I'm concerned about. Thanks! Jim

Improve Tailer performance with buffered reads
--
Key: IO-278
URL: https://issues.apache.org/jira/browse/IO-278
Project: Commons IO
Issue Type: Improvement
Affects Versions: 2.0.1
Reporter: Sergio Bossa
Attachments: Tailer.diff, TailerTest.diff

I noticed Tailer read performance is pretty poor when dealing with large, frequently written log files; this is due to the use of RandomAccessFile, which does unbuffered reads, causing lots of disk I/O. So I improved the Tailer implementation by introducing buffered reads: it works by loading large (configurable) file chunks into memory and reading lines from there; this improves performance in my tests from 10x to 30x depending on the file size. I also added two test cases: one to simulate reading of a large file (you can use it to compare performance), the other to verify correct handling of buffer breaks; obviously, all tests pass. I'm attaching the diff files; let me know if it's okay for you guys!
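The chunking idea behind Sergio's patch, stripped of the Tailer plumbing: fill a large buffer per read call and scan it in memory, instead of issuing one underlying read per byte. A sketch (not the actual Tailer code; the line-counting task is just a stand-in for line extraction):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedTailSketch {

    // Reads the stream in large chunks and counts newline-terminated
    // lines within each chunk. One read() call fills a whole buffer,
    // which is what cuts the syscall/disk-I/O count so dramatically.
    static int countLines(InputStream in, int bufferSize) throws IOException {
        byte[] buf = new byte[bufferSize];
        int lines = 0;
        int read;
        while ((read = in.read(buf)) != -1) { // one underlying read per chunk
            for (int i = 0; i < read; i++) {
                if (buf[i] == '\n') {
                    lines++;
                }
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("a\nbb\nccc\n".getBytes());
        System.out.println(countLines(in, 4)); // lines counted across chunk breaks
    }
}
```

A line may straddle two chunks; a real implementation has to carry the partial line over to the next buffer, which is exactly the "buffer breaks" case Sergio's second test verifies.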
[jira] [Issue Comment Edited] (SANSELAN-72) Incorrect reading TIFF file
[ https://issues.apache.org/jira/browse/SANSELAN-72?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250099#comment-13250099 ]

VVD edited comment on SANSELAN-72 at 4/9/12 7:35 PM:
-
Files from description of this issue.

was (Author: vvd):
Files from descrition.

Incorrect reading TIFF file
---
Key: SANSELAN-72
URL: https://issues.apache.org/jira/browse/SANSELAN-72
Project: Commons Sanselan
Issue Type: Bug
Components: Format: PNG, Format: TIFF
Affects Versions: 1.x
Reporter: VVD
Attachments: in.png, in.tif, out-png-IM.png, out-tif.png

I found 2 bugs. I have the tif file in.tif.

1. After converting it to png (bmp, tga, etc.) with Sanselan, it gets horizontal lines.

{code}
Sanselan.writeImage(Sanselan.getBufferedImage(new File("in.tif")), new File("out-tif.png"), ImageFormat.IMAGE_FORMAT_PNG, null);
{code}

gwenview, eog, kolourpaint, gimp, etc. show in.tif without lines and out.png with lines. For example, the 1st line is at ~860 points from the top and spans the full width of the picture.

2. After converting it to png with the convert utility from ImageMagick, and then converting that to png (bmp, tga, etc.) with Sanselan, it becomes gray: all white pixels become gray.

{code}
$ convert in.tif in.png
{code}

{code}
Sanselan.writeImage(Sanselan.getBufferedImage(new File("in.png")), new File("out-png-IM.png"), ImageFormat.IMAGE_FORMAT_PNG, null);
{code}

gwenview, eog, kolourpaint, gimp, etc. show in.png as black and white, but out-png-IM.png as black and gray. I'll attach all 4 files.
[jira] [Issue Comment Edited] (LANG-686) StringUtils.replaceEachRepeatedly("aaa", new String[]{"aa"}, new String[]{"aXa"}); throws an exception
[ https://issues.apache.org/jira/browse/LANG-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245710#comment-13245710 ]

Thomas Neidhart edited comment on LANG-686 at 4/4/12 1:38 PM:
--
I have worked on this a bit and provide a patch that does the following:
* refactor the replaceEach method to avoid code duplication
* change the loop detection code

Instead of using a timeToLive variable that tries to detect infinite loops, I used a quite simple but effective method: whenever at the end of a replace cycle we end up with a result string that has already been seen so far, we have possibly detected an infinite loop and thus abort the execution (with an exception, as before). I have also added two additional unit test cases.

Edit: The new loop detection code does not prevent stack overflow exceptions, of course. But as others have pointed out, just let it happen (it is very unlikely to occur anyway; I need to come up with a test case for it somehow). I would prefer correct behavior in normal cases, and robust loop detection for recursive/weird cases.

was (Author: tn):
I have worked on this a bit and provide a patch that does the following:
* refactor the replaceEach method to avoid code duplication
* change loop detection code

Instead of using a timeToLive variable that tries to detect infinite loops, I used a quite simple but effective method: whenever at the end of a replace cycle we end up with a result string that has already been seen so far, we have possibly detected an infinite loop and thus abort the execution (with an exception, as before). I have also added two additional unit test cases.
StringUtils.replaceEachRepeatedly("aaa", new String[]{"aa"}, new String[]{"aXa"}); throws an exception
-
Key: LANG-686
URL: https://issues.apache.org/jira/browse/LANG-686
Project: Commons Lang
Issue Type: Bug
Components: lang.*
Affects Versions: 2.6
Environment: jdk 1.6.24, windows xp pro sp3, eclipse helios
Reporter: qed
Fix For: 3.x
Attachments: LANG-686.patch

After executing the line StringUtils.replaceEachRepeatedly("aaa", new String[]{"aa"}, new String[]{"aXa"}); this exception is thrown:

Exception in thread "main" java.lang.IllegalStateException: TimeToLive of -1 is less than 0: aXaXa
    at org.apache.commons.lang.StringUtils.replaceEach(StringUtils.java:3986)
    at org.apache.commons.lang.StringUtils.replaceEach(StringUtils.java:4099)
    at org.apache.commons.lang.StringUtils.replaceEach(StringUtils.java:4099)
    at org.apache.commons.lang.StringUtils.replaceEachRepeatedly(StringUtils.java:3920)
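Thomas's "already seen" loop detection can be sketched with a plain HashSet. This sketch uses a single String.replace rule rather than the real replaceEach machinery, so it is an illustration of the detection idea only; note that, like the real patch, it does not guard against inputs that grow forever without repeating:

```java
import java.util.HashSet;
import java.util.Set;

public class ReplaceRepeatedlySketch {

    // Repeatedly applies one search -> replacement rule until the result
    // stops changing. If a previously seen result recurs, the rewriting
    // has entered a cycle and we abort, as the patched StringUtils does.
    static String replaceRepeatedly(String text, String search, String replacement) {
        Set<String> seen = new HashSet<>();
        String current = text;
        while (seen.add(current)) {
            String next = current.replace(search, replacement);
            if (next.equals(current)) {
                return current; // fixed point reached: nothing left to replace
            }
            current = next;
        }
        throw new IllegalStateException("Infinite replacement loop at: " + current);
    }
}
```

On the ticket's input, "aaa" with the rule "aa" -> "aXa" rewrites "aaa" to "aXaa" to "aXaXa" and then reaches a fixed point, so this strategy terminates normally where the timeToLive heuristic threw.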
[jira] [Issue Comment Edited] (FILEUPLOAD-183) commons-io dependency does not get loaded by maven if only dependency to commons-fileupload is specified
[ https://issues.apache.org/jira/browse/FILEUPLOAD-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246450#comment-13246450 ]

Darren Hartford edited comment on FILEUPLOAD-183 at 4/4/12 4:45 PM:

Just recently ran into this exact issue with 1.2.2 as well. Commons-io should not be optional. The 'common usecases' always need commons-io.

EDIT: researched; there is an older ticket https://issues.apache.org/jira/browse/FILEUPLOAD-172 that identifies that if you do not use DiskFileItem, then you do not need commons-io. Unfortunately, this is where Maven dependency management and based-on-code-used dependencies fall apart. Doesn't sound like a commons-fileupload issue as much as a Maven issue for them to be more verbose about showing/listing transitive optional dependencies (instead, you always have to dig to find them).

was (Author: dhartford):
Just recently ran into this exact issue with 1.2.2 as well. Commons-io should not be optional. The 'common usecases' always need commons-io.
commons-io dependency does not get loaded by maven if only dependency to commons-fileupload is specified
Key: FILEUPLOAD-183
URL: https://issues.apache.org/jira/browse/FILEUPLOAD-183
Project: Commons FileUpload
Issue Type: Bug
Affects Versions: 1.2.1
Environment: Maven 2.2.1
Reporter: Roman Arkadijovych Muntyanu

If commons-fileupload is added as a dependency (without commons-io explicitly defined) like the following

{code:xml}
<dependency>
  <groupId>commons-fileupload</groupId>
  <artifactId>commons-fileupload</artifactId>
  <version>1.2.1</version>
  <scope>compile</scope>
</dependency>
{code}

and fileupload is referenced in the code like

{code:java}
// Create a factory for disk-based file items
FileItemFactory factory = new DiskFileItemFactory();

// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(factory);

// Parse the request
List /* FileItem */ items = upload.parseRequest(request);
{code}

then a NoClassDefFoundError occurs

{code:none}
java.lang.NoClassDefFoundError: org/apache/commons/io/output/DeferredFileOutputStream
    at org.apache.commons.fileupload.disk.DiskFileItemFactory.createItem(DiskFileItemFactory.java:196)
    at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:358)
    at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
{code}

The reason is that the commons-fileupload artifact has an *optional* dependency on commons-io in its pom file

{code:xml}
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.3.2</version>
  <optional>true</optional>
</dependency>
{code}

which results in commons-io not being downloaded and added to the project by Maven.
[jira] [Issue Comment Edited] (FILEUPLOAD-183) commons-io dependency does not get loaded by maven if only dependency to commons-fileupload is specified
[ https://issues.apache.org/jira/browse/FILEUPLOAD-183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246450#comment-13246450 ]

Darren Hartford edited comment on FILEUPLOAD-183 at 4/4/12 4:46 PM:

Just recently ran into this exact issue with 1.2.2 as well. Commons-io should not be optional. The 'common usecases' always need commons-io.

EDIT: researched; there is an older ticket https://issues.apache.org/jira/browse/FILEUPLOAD-172 that identifies that if you do not use DiskFileItem, then you do not need commons-io. Unfortunately, this is where Maven dependency management and based-on-code-used dependencies fall apart. Doesn't sound like a commons-fileupload issue as much as a Maven issue for them to be more verbose about showing/listing transitive optional dependencies (instead, you always have to dig to find them). Depending on commonality, maybe make this a compile dependency, but let users who do not want commons-io exclude the dependency?

was (Author: dhartford):
Just recently ran into this exact issue with 1.2.2 as well. Commons-io should not be optional. The 'common usecases' always need commons-io.

EDIT: researched; there is an older ticket https://issues.apache.org/jira/browse/FILEUPLOAD-172 that identifies that if you do not use DiskFileItem, then you do not need commons-io. Unfortunately, this is where Maven dependency management and based-on-code-used dependencies fall apart. Doesn't sound like a commons-fileupload issue as much as a Maven issue for them to be more verbose about showing/listing transitive optional dependencies (instead, you always have to dig to find them).
commons-io dependency does not get loaded by maven if only dependency to commons-fileupload is specified
Key: FILEUPLOAD-183
URL: https://issues.apache.org/jira/browse/FILEUPLOAD-183
Project: Commons FileUpload
Issue Type: Bug
Affects Versions: 1.2.1
Environment: Maven 2.2.1
Reporter: Roman Arkadijovych Muntyanu

If commons-fileupload is added as a dependency (without commons-io explicitly defined) like the following

{code:xml}
<dependency>
  <groupId>commons-fileupload</groupId>
  <artifactId>commons-fileupload</artifactId>
  <version>1.2.1</version>
  <scope>compile</scope>
</dependency>
{code}

and fileupload is referenced in the code like

{code:java}
// Create a factory for disk-based file items
FileItemFactory factory = new DiskFileItemFactory();

// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(factory);

// Parse the request
List /* FileItem */ items = upload.parseRequest(request);
{code}

then a NoClassDefFoundError occurs

{code:none}
java.lang.NoClassDefFoundError: org/apache/commons/io/output/DeferredFileOutputStream
    at org.apache.commons.fileupload.disk.DiskFileItemFactory.createItem(DiskFileItemFactory.java:196)
    at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:358)
    at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
{code}

The reason is that the commons-fileupload artifact has an *optional* dependency on commons-io in its pom file

{code:xml}
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.3.2</version>
  <optional>true</optional>
</dependency>
{code}

which results in commons-io not being downloaded and added to the project by Maven.
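Until the pom is changed, the usual workaround for affected projects is to declare commons-io explicitly in their own pom, overriding the optional flag. A sketch (the version shown is the one quoted in the report; pick whatever release your project actually needs):

{code:xml}
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.3.2</version>
</dependency>
{code}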
[jira] [Issue Comment Edited] (CODEC-133) Please add a function for the MD5/SHA1/SHA-512 based Unix crypt(3) hash variants
[ https://issues.apache.org/jira/browse/CODEC-133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243758#comment-13243758 ]

Christian Hammers edited comment on CODEC-133 at 4/1/12 4:02 PM:
-
A new approach: while playing around with the new Kotlin JVM language, I tried to convert the original C sources of MD5 and SHA2 crypt() to Kotlin, and after that to Java, just to see the differences. The nice benefit of this exercise is that we now have Java implementations that are not only better commented than the ones from UTexas but also sufficiently different to not have any copyright problems. Any resemblance is due to the fact that we both translated the same C code nearly line by line. So please accept the attached patch commons-codec-crypt3.diff! :-)

was (Author: lathspell):
A new approach: while playing around with the new Kotlin JVM language, I tried to convert the original C sources of MD5 and SHA2 crypt() to Kotlin, and after that to Java, just to see the differences. The nice benefit of this exercise is that we now have Java implementations that are not only better commented than the ones from UTexas but also sufficiently different to not have any copyright problems. Any resemblance is due to the fact that we both translated the same C code nearly line by line. So please accept the attached patch :-)

Please add a function for the MD5/SHA1/SHA-512 based Unix crypt(3) hash variants
Key: CODEC-133
URL: https://issues.apache.org/jira/browse/CODEC-133
Project: Commons Codec
Issue Type: New Feature
Affects Versions: 1.6
Reporter: Christian Hammers
Labels: MD5, SHA-512, crypt(3), crypto, hash
Attachments: commons-codec-crypt3.diff, crypt3-with-utexas-licence.diff

The Linux libc6 crypt(3) function, which is used to generate e.g. the password hashes in /etc/shadow, is available in nearly all other programming languages (Perl, PHP, Python, C, C++, ...) and databases like MySQL, and offers MD5/SHA1/SHA-512 based algorithms that were improved by adding a salt and several iterations to make rainbow table attacks harder. Thus they are widely used to store user passwords. Java, though, due to its platform independence, has no direct access to the libc functions and still lacks a proper port of the crypt(3) function. I already filed a wishlist bug (CODEC-104) for the traditional 56-bit DES based crypt(3) method but would also like to see the much stronger algorithms. There are other bug reports like DIRSTUDIO-738 that demand those crypt variants for specific applications, so it would benefit other Apache projects as well. Java ports of most of the specific crypt variants already exist, but they would have to be cleaned up, properly tested and license checked: ftp://ftp.arlut.utexas.edu/pub/java_hashes/ I would be willing to help here by cleaning the source code and writing unit tests etc., but I'd like to know in general if you are interested and if there's someone who can do a code review (it's security relevant after all, and I'm no crypto guy). bye, -christian-
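The salt-plus-iterations idea behind these crypt(3) variants can be sketched with the JDK's MessageDigest. This illustrates the concept only; the real MD5/SHA-512 crypt algorithms interleave salt, key, and digest in a much more involved way, so do not use this sketch as a crypt(3) replacement:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IteratedHashSketch {

    // Hashes salt+password once, then re-hashes the digest a fixed number
    // of rounds. The salt defeats precomputed rainbow tables; the rounds
    // multiply the cost of each brute-force guess.
    static String hash(String password, String salt, int rounds)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest((salt + password).getBytes(StandardCharsets.UTF_8));
        for (int i = 0; i < rounds; i++) {
            md.reset();
            digest = md.digest(digest);
        }
        return new BigInteger(1, digest).toString(16); // hex encoding
    }
}
```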
[jira] [Issue Comment Edited] (CSV-88) Not possible to create a CSVFormat from scratch
[ https://issues.apache.org/jira/browse/CSV-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239359#comment-13239359 ]

Sebb edited comment on CSV-88 at 3/29/12 6:53 PM:
--
bq. +0 for a no arg constructor equivalent to the default format.

If you mean the DEFAULT format here, then that achieves nothing, as the user would have to override any unwanted settings - and the user would have to know which ones to override. If the DEFAULT format were ever later updated, that could invalidate the user's format. I meant that the ctor should be equivalent to PRISTINE, or perhaps PRISTINE + CRLF. To fit in with the fluent API, there needs to be a static method.

was (Author: s...@apache.org):
bq. +0 for a no arg constructor equivalent to the default format.

If you mean the DEFAULT format here, then that achieves nothing, as the user would have to override any unwanted settings - and the user would have to know which ones to override. If the DEFAULT format were ever later updated, that could invalidate the user's format. I meant that the ctor should be equivalent to PRISTINE, or perhaps PRISTINE + CRLF. To fit in with the fluent API, there needs to be a static method. For example, withDefault(char) would do.

Not possible to create a CSVFormat from scratch
---
Key: CSV-88
URL: https://issues.apache.org/jira/browse/CSV-88
Project: Commons CSV
Issue Type: Bug
Reporter: Sebb

It's not possible to create a CSVFormat except by modifying an existing format. Could either make the PRISTINE format public, or provide a constructor with a single parameter (the delimiter). Could provide a no-args ctor instead, but there seems little point in that.
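What Sebb describes, a static entry point that fits the fluent API, would look something like the following. All names here are hypothetical illustrations of the pattern, not the eventual Commons CSV API:

```java
public class FormatSketch {
    private final char delimiter;
    private final String lineSeparator;

    private FormatSketch(char delimiter, String lineSeparator) {
        this.delimiter = delimiter;
        this.lineSeparator = lineSeparator;
    }

    // Static factory: a "pristine" format with only the delimiter set,
    // defaulting the record separator to CRLF (PRISTINE + CRLF).
    static FormatSketch newFormat(char delimiter) {
        return new FormatSketch(delimiter, "\r\n");
    }

    // Fluent "with" method: returns a new immutable instance, so built
    // formats never depend on later changes to any shared DEFAULT.
    FormatSketch withLineSeparator(String sep) {
        return new FormatSketch(delimiter, sep);
    }

    char getDelimiter() {
        return delimiter;
    }

    String getLineSeparator() {
        return lineSeparator;
    }
}
```

Because every instance is immutable and built up from an explicit starting point, a later change to a library-wide DEFAULT format cannot silently alter a user's format, which is exactly Sebb's objection to basing the no-arg path on DEFAULT.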
[jira] [Issue Comment Edited] (CSV-52) Keep track of record numbers
[ https://issues.apache.org/jira/browse/CSV-52?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241528#comment-13241528 ]

Sebb edited comment on CSV-52 at 3/29/12 7:58 PM:
--
Maybe the number should also be stored in the record? The counter should be the same type (long, int) as the line number.

was (Author: s...@apache.org):
Maybe the number should also be stored in the record? The counter should be long.

Keep track of record numbers
Key: CSV-52
URL: https://issues.apache.org/jira/browse/CSV-52
Project: Commons CSV
Issue Type: Improvement
Components: Parser
Reporter: Emmanuel Bourg
Priority: Minor
Fix For: 1.x

The parser is able to return the current line number of the file; it should also be able to return the record number.
[jira] [Issue Comment Edited] (IO-218) Introduce new filter input stream with replacement facilities
[ https://issues.apache.org/jira/browse/IO-218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13238233#comment-13238233 ] Aaron Digulla edited comment on IO-218 at 3/26/12 9:56 AM: --- Isn't this a duplicate of issue IO-199? was (Author: digulla): Isn't this a duplicate if issue IO-199? Introduce new filter input stream with replacement facilities - Key: IO-218 URL: https://issues.apache.org/jira/browse/IO-218 Project: Commons IO Issue Type: Improvement Components: Filters Affects Versions: 1.4 Environment: all environments Reporter: Denis Zhdanov Attachments: ReplaceFilterInputStream.java, ReplaceFilterInputStreamTest.java It seems convenient to have a FilterInputStream that allows applying predefined replacement rules against the read data. For example we may want to configure the following replacements: {noformat} {1, 2} -> {7, 8} {1} -> {9} {3, 2} -> {} {noformat} and apply them to input like {noformat} {4, 3, 2, 1, 2, 1, 3} {noformat} in order to get a result like {noformat} {4, 7, 8, 9, 3} {noformat} I created a class that does that and attached it to this ticket. A unit test class in JUnit 4 format is attached as well. So, the task is to review the provided classes, consider whether they are worth adding to the commons-io distribution and, if so, include them. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
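The replacement rules described in IO-218 can be illustrated without the attached streaming class. The sketch below is a much-simplified, in-memory version (the class name `ReplaceSketch` and the whole-array approach are my own, not the attached `ReplaceFilterInputStream`); applying the ticket's three rules in sequence happens to reproduce the expected output for the sample input.

```java
import java.util.Arrays;

// Simplified illustration of byte-sequence replacement rules: applies one
// {from -> to} rule over a whole byte array (the real proposal streams this).
public class ReplaceSketch {
    static byte[] replace(byte[] in, byte[] from, byte[] to) {
        // Worst-case output size: every input byte expands to 'to'.
        byte[] out = new byte[in.length * Math.max(1, to.length)];
        int o = 0;
        for (int i = 0; i < in.length; ) {
            if (i + from.length <= in.length
                    && Arrays.equals(Arrays.copyOfRange(in, i, i + from.length), from)) {
                for (byte b : to) out[o++] = b;  // emit replacement
                i += from.length;                // consume the matched bytes
            } else {
                out[o++] = in[i++];              // copy byte through unchanged
            }
        }
        return Arrays.copyOf(out, o);
    }

    public static void main(String[] args) {
        byte[] input = {4, 3, 2, 1, 2, 1, 3};
        // The ticket's rules: {1,2}->{7,8}, {1}->{9}, {3,2}->{}
        byte[] r = replace(input, new byte[]{1, 2}, new byte[]{7, 8});
        r = replace(r, new byte[]{1}, new byte[]{9});
        r = replace(r, new byte[]{3, 2}, new byte[]{});
        System.out.println(Arrays.toString(r)); // prints [4, 7, 8, 9, 3]
    }
}
```

A real filter stream would have to buffer only enough bytes to detect the longest `from` sequence, which is what makes the attached implementation worth reviewing.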
[jira] [Issue Comment Edited] (IO-315) Replace all String encoding parameters with a value type
[ https://issues.apache.org/jira/browse/IO-315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13238598#comment-13238598 ] Sebb edited comment on IO-315 at 3/26/12 5:43 PM: -- That makes more sense now, but I think it would be overkill to introduce a new interface here. Using Charset would be better IMO. Using Charset would convert the checked {{UnsupportedEncodingException}} into the unchecked {{UnsupportedCharsetException}}. This should simplify application code that does not already catch {{IOException}}, though of course in Commons IO many methods throw IOE already. AFAICT, parameters would need to be changed to use (e.g.) {{Charset.forName("UTF-8")}} instead of {{"UTF-8"}} so user code would be slightly longer. was (Author: s...@apache.org): That makes a more sense now, but I think it would be overkill to introduce a new interface here. Using Charset would be better IMO. Using Charset would convert the checked {{UnsupportedEncodingException}} into the unchecked {{UnsupportedCharsetException}}. This should simplify application code that does not already catch {{IOException}}, though of course in Commons IO many methods throw IOE already. AFAICT, parameters would need to be changed to use (e.g.) {{Charset.forName("UTF-8")}} instead of {{"UTF-8"}} so user code would be slightly longer. Replace all String encoding parameters with a value type -- Key: IO-315 URL: https://issues.apache.org/jira/browse/IO-315 Project: Commons IO Issue Type: New Feature Components: Streams/Writers Affects Versions: 2.1 Reporter: Aaron Digulla Please create an interface Encoding plus a set of useful defaults (UTF_8, ISO_LATIN_1, CP_1250 and CP_1252). Use this interface in all places where String encoding is used now. This would make the API more reliable, improve code reuse and reduce futile catch blocks for {{UnsupportedEncodingException}}. -- This message is automatically generated by JIRA. 
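The checked-vs-unchecked distinction discussed in IO-315 is easy to demonstrate. This is a standalone sketch (class name `CharsetSketch` and the fake charset name are mine): `Charset.forName` throws the unchecked `UnsupportedCharsetException`, so callers need no `catch` block, unlike `String.getBytes(String)` with its checked `UnsupportedEncodingException`.

```java
import java.nio.charset.Charset;
import java.nio.charset.UnsupportedCharsetException;

public class CharsetSketch {
    // Returns "supported" or "unchecked"; no checked exception to declare,
    // which is exactly the simplification Sebb describes.
    static String classify(String charsetName) {
        try {
            Charset.forName(charsetName);
            return "supported";
        } catch (UnsupportedCharsetException e) {
            return "unchecked";
        }
    }

    public static void main(String[] args) {
        // Valid name: no try/catch needed at all for getBytes(Charset).
        byte[] bytes = "hello".getBytes(Charset.forName("UTF-8"));
        System.out.println(bytes.length);                          // prints 5
        System.out.println(classify("x-definitely-not-a-charset")); // prints unchecked
    }
}
```

Note that `Charset.forName` can also throw `IllegalCharsetNameException` for syntactically invalid names; the sketch uses a legal-but-unsupported name to trigger `UnsupportedCharsetException`.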
[jira] [Issue Comment Edited] (CSV-70) Improve readability of CSVLexer
[ https://issues.apache.org/jira/browse/CSV-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13238039#comment-13238039 ] Sebb edited comment on CSV-70 at 3/26/12 1:29 AM: -- Yes, I tried something similar and it broke the tests. I think it would be useful to provide (optional) access to the comment fields so one way to solve this would be to return the comment as a token, and deal with it at the record level. was (Author: s...@apache.org): Yes, I tried something similar and it broke the tests. I think it would be useful to provide (optional) access to the comment fields so one way to solve this would be to return the comment as a token, and deal will it at the record level. Improve readability of CSVLexer --- Key: CSV-70 URL: https://issues.apache.org/jira/browse/CSV-70 Project: Commons CSV Issue Type: Improvement Components: Parser Affects Versions: 1.0 Reporter: Benedikt Ritter Fix For: 1.0 There are several things that can be improved in the token lexer (this has also been discussed on ML, see http://markmail.org/thread/c6x5ji4v44nx5k4h): * Remove Token input parameter in nextToken() * Add convenience methods isDelimiter(c) and isEncapsulator(c) * Remove current character input parameter from methods * If possible: replace while(true) loops -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (DBUTILS-88) Make AsyncQueryRunner be a decorator around a QueryRunner
[ https://issues.apache.org/jira/browse/DBUTILS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237481#comment-13237481 ] Moandji Ezana edited comment on DBUTILS-88 at 3/24/12 8:53 AM: --- Does anything need to happen for this issue and DBUTILS-87 to be committed? was (Author: mwanji): Does anything need to happen for this issue and DB-UTILS-87 to be committed? Make AsyncQueryRunner be a decorator around a QueryRunner - Key: DBUTILS-88 URL: https://issues.apache.org/jira/browse/DBUTILS-88 Project: Commons DbUtils Issue Type: Task Reporter: Moandji Ezana Priority: Minor Attachments: AsyncQueryRunner_wraps_QueryRunner.txt, DBUTILS-88v1.patch, DBUTILS-88v2.patch AsyncQueryRunner duplicates much of the code in QueryRunner. Would it be possible for AsyncQueryRunner to simply decorate a QueryRunner with async functionality, in the same way a BufferedInputStream might decorate an InputStream? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
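The decorator idea behind DBUTILS-88 can be sketched in a few lines. Everything here is hypothetical illustration, not the DbUtils API: `QueryRunnerLike` stands in for `QueryRunner`, and `SimpleAsyncRunner` shows how async behavior reduces to submitting the delegate's call to an executor, so no query logic is duplicated.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DecoratorSketch {
    // Stand-in for the synchronous QueryRunner's interface (hypothetical).
    interface QueryRunnerLike {
        int update(String sql);
    }

    // The decorator: holds a synchronous runner and an executor, and wraps
    // each call in a submit() -- analogous to BufferedInputStream wrapping
    // an InputStream.
    static class SimpleAsyncRunner {
        private final QueryRunnerLike delegate;
        private final ExecutorService executor;

        SimpleAsyncRunner(QueryRunnerLike delegate, ExecutorService executor) {
            this.delegate = delegate;
            this.executor = executor;
        }

        Future<Integer> update(String sql) {
            return executor.submit(() -> delegate.update(sql));
        }
    }

    // Runs one async update against a dummy delegate that "updates" 1 row.
    static int runOnce() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            SimpleAsyncRunner async = new SimpleAsyncRunner(sql -> 1, pool);
            return async.update("UPDATE t SET x = 1").get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // prints 1
    }
}
```

The benefit is the one named in the ticket: the async class carries no copy of the query-execution code, only the threading concern.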
[jira] [Issue Comment Edited] (VFS-407) reading a RAM FileSystem file fails because it never returns EOF -1.
[ https://issues.apache.org/jira/browse/VFS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13235292#comment-13235292 ] Miroslav Pokorny edited comment on VFS-407 at 3/22/12 2:14 AM: --- Trunk seems to be fixed but not included in the latest release for download. I believe I reported this same bug previously when VFS was 1.x but trunk has no tests. I have submitted the tests as a raw file *.java and in patch form. At the least these tests should help stop regressions. was (Author: mp1): Trunk seems to be fixed but not included in the latest release for download. I believe I reported this same bug previously but trunk has no tests. I have submitted the tests in whole and in patch form. At the least these tests should help stop regressions. reading a RAM FileSystem file fails because it never returns EOF -1. Key: VFS-407 URL: https://issues.apache.org/jira/browse/VFS-407 Project: Commons VFS Issue Type: Bug Affects Versions: 2.0 Reporter: Miroslav Pokorny Attachments: CustomRamProviderTest.java, EmptyRamProviderFileBugTests-from-miroslav.pokorny-20120322.patch Original Estimate: 5m Remaining Estimate: 5m RamFileRandomAccessContent ORIGINAL @Override public int read(byte[] b, int off, int len) throws IOException { int retLen = Math.min(len, getLeftBytes()); RamFileRandomAccessContent.this.readFully(b, off, retLen); return retLen; } // getLeftBytes() returns 0 when empty. retLen is 0 when empty and never -1. FIXED // HACK Patched to return -1 when empty previously it returned 0 @Override public int read(final byte[] b, final int off, final int len) throws IOException { int retLen = InputStreams.END; final int left = RamFileRandomAccessContent.this.getLeftBytes(); if (left > 0) { retLen = Math.min(len, left); RamFileRandomAccessContent.this.readFully(b, off, retLen); } return retLen; } -- This message is automatically generated by JIRA. 
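The VFS-407 bug and its fix boil down to the `read(byte[], int, int)` contract: the method must return -1 at end of stream, never 0 for a positive requested length. A minimal standalone sketch of the patched logic (array-backed, with `pos` playing the role of the file pointer and `source.length - pos` the role of `getLeftBytes()`; names are mine, not the VFS class):

```java
public class EofFixSketch {
    // Mimics the patched RamFileRandomAccessContent.read: when no bytes are
    // left, signal EOF with -1 instead of returning 0 (the original bug).
    static int read(byte[] source, int pos, byte[] b, int off, int len) {
        int left = source.length - pos;   // like getLeftBytes()
        if (left <= 0) {
            return -1;                    // EOF marker -- the fix
        }
        int retLen = Math.min(len, left); // never exceeds remaining bytes
        System.arraycopy(source, pos, b, off, retLen);
        return retLen;
    }

    public static void main(String[] args) {
        // Empty "file": must report EOF, not a zero-length read.
        System.out.println(read(new byte[0], 0, new byte[8], 0, 8)); // prints -1
    }
}
```

Returning 0 instead of -1 is what made callers loop forever: a `while (read(...) != -1)` loop never terminates if the stream keeps answering 0.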
[jira] [Issue Comment Edited] (MATH-761) Improve encapsulation of data in the nested classes of SymmLQ
[ https://issues.apache.org/jira/browse/MATH-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13232487#comment-13232487 ] Sébastien Brisard edited comment on MATH-761 at 3/20/12 6:39 AM: - In {{r1302298}} and {{r1302785}} * moved some {{static}} helper methods from {{SymmLQ}} to nested class {{SymmLQ.State}} * changed visibility of some {{static}} fields from {{private}} to {{package protected}} in order to avoid the use of synthetic getters. was (Author: celestin): In {{r1302298}} * moved some {{static}} helper methods from {{SymmLQ}} to nested class {{SymmLQ.State}} * changed visibility of some {{static}} fields from {{private}} to {{protected}} in order to avoid the use of synthetic getters. Improve encapsulation of data in the nested classes of SymmLQ - Key: MATH-761 URL: https://issues.apache.org/jira/browse/MATH-761 Project: Commons Math Issue Type: Improvement Affects Versions: 3.1 Reporter: Sébastien Brisard Assignee: Sébastien Brisard Labels: linear In order to limit object creation, the current implementation of the {{SymmLQ}} solver makes heavy use of references across nested classes in {{SymmLQ}}. This makes the code difficult to read, and should be modified, keeping the public API. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (DAEMON-244) prunsrv does not propagate exit code
[ https://issues.apache.org/jira/browse/DAEMON-244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13230989#comment-13230989 ] Peter Ehrbar edited comment on DAEMON-244 at 3/16/12 9:29 AM: -- I have attached a patch (to be applied on 1.0.10) that fixes the VM exit code propagation so that it meets my requirements. With this patch, it is possible to terminate a VM with a non-zero exit code, which is then detected and handled by the Windows Service Controller (e.g. automatically restart the service). was (Author: peter.ehrbar): Patch on Rel 1.0.10 to handle propagation of VM exit code prunsrv does not propagate exit code Key: DAEMON-244 URL: https://issues.apache.org/jira/browse/DAEMON-244 Project: Commons Daemon Issue Type: Bug Components: Procrun Affects Versions: 1.0.10 Environment: MS Windows Reporter: Peter Ehrbar Attachments: exit_code_patch_on_1_0_10.txt In order to perform recovery actions (e.g. restart service) the Windows Service Controller needs to detect abnormal program terminations (failures). The Service controller detects a failure if either the service process crashes or the process terminates with a non-zero exit code. For my Java server application I want to define recovery actions for the following conditions: 1) VM crash 2) Error was thrown (e.g. OutOfMemoryError) 3) System.exit() with non-zero exit code When using prunsrv as a wrapper, I observe the following behaviour: 1) VM crash is detected only when StartMode=jvm, otherwise the Service Controller ignores the failure situation 2) When an Error is thrown and StartMode=jvm, prunsrv does not terminate but seems to hang. Therefore, the Service Controller is not aware of the failure. For other StartModes, prunsrv terminates, but the Service Controller does not detect the failure. 3) When System.exit(42) is called, prunsrv terminates but the Service Controller does not detect the non-zero exit code. This applies for all StartModes. 
It seems to me as if prunsrv always terminates with exit code zero. But I expect the following behaviour: 1) VM crash with StartMode=jvm - OK as it is now, but with other StartModes, prunsrv should terminate with a non-zero exit code in order to indicate the abnormal termination. 2) When an Error is thrown, prunsrv should terminate with a non-zero exit code in order to indicate the abnormal termination. 3) When System.exit() is called, prunsrv should terminate with the exit code passed to System.exit() (transparent behaviour), in order to let the application indicate a failure situation. With the current behaviour, it is not possible to let the Windows Service Controller perform recovery actions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CSV-71) Add convenience Methods to CSVLexer
[ https://issues.apache.org/jira/browse/CSV-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231415#comment-13231415 ] Benedikt Ritter edited comment on CSV-71 at 3/16/12 5:36 PM: - On my machine the performance tests take around 5 secs. I get the following results: || ||min||max||avg|| | before | 5103 | 5260 | 5170 | | after | 5091 | 5238 | 5142 | At least on my machine there is no negative impact. I ran the test with VM param -server. I'm using {{Java version: 1.7.0_01, vendor: Oracle Corporation}} was (Author: britter): On the performance tests takes around 5 secs. I get the following results: || ||min||max||avg|| | before | 5103 | 5260 | 5170 | | after | 5091 | 5238 | 5142 | At least on my machine there is no negative impact. I ran the test with VM param -server. I'm using {{Java version: 1.7.0_01, vendor: Oracle Corporation}} Add convenience Methods to CSVLexer --- Key: CSV-71 URL: https://issues.apache.org/jira/browse/CSV-71 Project: Commons CSV Issue Type: Sub-task Components: Parser Affects Versions: 1.0 Reporter: Benedikt Ritter Fix For: 1.0 Attachments: CSV-71.patch Add {{isDelimiter(c)}} and {{isEncapsulator(c)}} to CSVLexer -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (DIGESTER-163) ConcurrentModificationException creating a new Digester via loaderInstance.newDigester()
[ https://issues.apache.org/jira/browse/DIGESTER-163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231548#comment-13231548 ] Thomas Neidhart edited comment on DIGESTER-163 at 3/16/12 7:34 PM: --- I have no idea about the digester, but I looked at this issue out of curiosity (and I like to debug concurrency problems ;-). The problem is in the DigesterLoader#addRules method: * createRuleSet clears the underlying rule binder * addRuleInstance iterates over the same rule binder If you make this method synchronized, you will not get an exception anymore. What I do not know is whether createRuleSet is implemented correctly. I would expect that the initialization of the rules binder happens when the DigesterLoader is created, not when a new digester is requested, but this may be due to lazy loading? was (Author: tn): I have no idea about the digester, but I looked at this issue out of curiosity (and I like to debug concurrency problems ;-). The problem is in the DigesterLoader#addRules method: * createRuleSet clears the underlying rule binder * addRuleInstance iterates of the same rule binder If you make this method synchronized, you will not get an exception anymore. What I do not know is whether createRuleSet is implemented correctly. I would expect that the initialization of the rules binder happens when the DigesterLoader is created, not when a new digester is requested, but this may be due to lazy loading? 
ConcurrentModificationException creating a new Digester via loaderInstance.newDigester() Key: DIGESTER-163 URL: https://issues.apache.org/jira/browse/DIGESTER-163 Project: Commons Digester Issue Type: Bug Affects Versions: 3.2 Environment: Linux, JDK 6 Reporter: Torsten Krah Attachments: 163-2.patch, 163.patch, Digester163TestCase.java, cli-mvn-test-withfix.txt, stack-afterfix.txt, stack-mvn.txt, stack-next.txt, stack-next2.txt I am getting a ConcurrentModificationException when trying to create a new Digester instance from a configured loader: Trace is: {code} java.util.ConcurrentModificationException: null at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:761) ~[na:1.6.0_27] at java.util.LinkedList$ListItr.next(LinkedList.java:696) ~[na:1.6.0_27] at org.apache.commons.digester3.binder.FromBinderRuleSet.addRuleInstances(FromBinderRuleSet.java:130) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.addRules(DigesterLoader.java:581) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:568) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:516) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:475) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:462) ~[commons-digester3-3.2.jar:3.2] {code} The binder documentation (employee servlet) and the mailing list did confirm to me that the loader should be safe to share, so this should not happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
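Thomas's diagnosis above (one step clears a shared list while another step iterates it, so `synchronized` on the whole method removes the exception) can be reduced to a tiny standalone sketch. The class and method names here are hypothetical stand-ins, not the Digester code:

```java
import java.util.ArrayList;
import java.util.List;

public class SyncSketch {
    // Shared mutable state, like the rule binder shared across newDigester() calls.
    private final List<String> rules = new ArrayList<>();

    // Without 'synchronized', two threads can interleave: one clears 'rules'
    // while the other is mid-iteration, raising ConcurrentModificationException.
    // Synchronizing makes clear+populate+iterate atomic per caller.
    synchronized void addRules(List<String> target) {
        rules.clear();          // like createRuleSet clearing the binder
        rules.add("rule-a");
        rules.add("rule-b");
        for (String r : rules) { // like addRuleInstances iterating the binder
            target.add(r);
        }
    }

    public static void main(String[] args) {
        SyncSketch loader = new SyncSketch();
        List<String> out = new ArrayList<>();
        loader.addRules(out);
        System.out.println(out); // prints [rule-a, rule-b]
    }
}
```

Note this addresses the symptom; Thomas's open question of whether the binder should even be re-initialized per `newDigester()` call (rather than once at loader creation) is a separate design issue.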
[jira] [Issue Comment Edited] (CODEC-130) Base64InputStream.skip skips underlying stream, not output
[ https://issues.apache.org/jira/browse/CODEC-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231575#comment-13231575 ] Thomas Neidhart edited comment on CODEC-130 at 3/16/12 8:26 PM: Imho the request for change has some merit. When looking at the InflaterInputStream (which is also a derived FilterInputStream), the skip method is implemented to skip uncompressed bytes rather than compressed ones (aka from the underlying stream): {noformat} Skips specified number of bytes of uncompressed data. {noformat} was (Author: tn): Imho the request for change has some merit. If looking at the InflaterInputStream (which is also a derived FilterInputStream), the skip method is implemented to skip uncompressed bytes: {noformat} Skips specified number of bytes of uncompressed data. {noformat} Base64InputStream.skip skips underlying stream, not output -- Key: CODEC-130 URL: https://issues.apache.org/jira/browse/CODEC-130 Project: Commons Codec Issue Type: Bug Affects Versions: 1.5 Reporter: James Pickering Priority: Minor Attachments: base64snippet.java Base64InputStream.skip() skips within underlying stream, leading to unexpected behaviour. The following code will reproduce the issue: @Test public void testSkip() throws Throwable { InputStream ins = new ByteArrayInputStream("AAAA////".getBytes("ISO-8859-1")); //should decode to {0, 0, 0, 255, 255, 255} Base64InputStream instance = new Base64InputStream(ins); assertEquals(3L, instance.skip(3L)); //should skip 3 decoded characters, or 4 encoded characters assertEquals(255, instance.read()); //Currently returns 3, as it is decoding "A///", not "////" } The following code, if added to Base64InputStream, or (BaseNCodecInputStream in the dev build) would resolve the issue: @Override public long skip(long n) throws IOException { //delegate to read() long bytesRead = 0; while ((bytesRead < n) && (read() != -1)) { bytesRead++; } return bytesRead; } More efficient code may be possible. 
[jira] [Issue Comment Edited] (CODEC-130) Base64InputStream.skip skips underlying stream, not output
[ https://issues.apache.org/jira/browse/CODEC-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13231638#comment-13231638 ] Thomas Neidhart edited comment on CODEC-130 at 3/16/12 9:20 PM: I think we do not have a choice other than skipping decoded bytes. Consider the case where a user receives an InputStream that is in fact a filtered base64 stream. From the POV of the user, he may not be even aware that the underlying stream is providing encoded data. What he sees and is interested in are decoded chars, thus a skip on any filtered input stream should skip the amount of bytes the stream is responsible for (i.e. is producing as output). was (Author: tn): I think we do not have a choice other than skipping uncompressed bytes. Consider the case where a user receives an InputStream that is in fact a filtered base64 stream. From the POV of the user, he may not be even aware that the underlying stream is providing encoded data. What he sees and is interested in are decoded chars, thus a skip on any filtered input stream should skip the amount of bytes the stream is responsible for (i.e. is producing as output). Base64InputStream.skip skips underlying stream, not output -- Key: CODEC-130 URL: https://issues.apache.org/jira/browse/CODEC-130 Project: Commons Codec Issue Type: Bug Affects Versions: 1.5 Reporter: James Pickering Priority: Minor Attachments: base64snippet.java Base64InputStream.skip() skips within underlying stream, leading to unexpected behaviour. 
The following code will reproduce the issue: @Test public void testSkip() throws Throwable { InputStream ins = new ByteArrayInputStream("AAAA////".getBytes("ISO-8859-1")); //should decode to {0, 0, 0, 255, 255, 255} Base64InputStream instance = new Base64InputStream(ins); assertEquals(3L, instance.skip(3L)); //should skip 3 decoded characters, or 4 encoded characters assertEquals(255, instance.read()); //Currently returns 3, as it is decoding "A///", not "////" } The following code, if added to Base64InputStream, or (BaseNCodecInputStream in the dev build) would resolve the issue: @Override public long skip(long n) throws IOException { //delegate to read() long bytesRead = 0; while ((bytesRead < n) && (read() != -1)) { bytesRead++; } return bytesRead; } More efficient code may be possible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
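The workaround proposed in the ticket (delegate `skip()` to `read()` so the count refers to decoded output, not raw input) can be exercised in isolation. The class below is a self-contained stand-in, not the actual `Base64InputStream`: any `FilterInputStream` whose `read()` produces the "decoded" side would behave the same way with this override.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class DecodedSkipSketch extends FilterInputStream {
    DecodedSkipSketch(InputStream in) {
        super(in);
    }

    // Delegate to read(): the skip count now refers to the bytes this stream
    // produces, matching InflaterInputStream's documented behaviour.
    @Override
    public long skip(long n) throws IOException {
        long bytesRead = 0;
        while (bytesRead < n && read() != -1) {
            bytesRead++;
        }
        return bytesRead;
    }

    // Convenience wrapper so callers need not handle the checked IOException.
    static long skipOver(byte[] data, long n) {
        try {
            return new DecodedSkipSketch(new ByteArrayInputStream(data)).skip(n);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(skipOver(new byte[]{1, 2, 3, 4, 5}, 3L)); // prints 3
    }
}
```

As the reporter notes, reading byte-by-byte is not the most efficient approach, but it guarantees the skip is measured on the output side for any decoding filter stream.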
[jira] [Issue Comment Edited] (DIGESTER-163) ConcurrentModificationException creating a new Digester via loaderInstance.newDigester()
[ https://issues.apache.org/jira/browse/DIGESTER-163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13230070#comment-13230070 ] Simone Tripodi edited comment on DIGESTER-163 at 3/15/12 11:07 AM: --- I imported a simplified version of your testcase in the Digester codebase, see [r1300873|https://svn.apache.org/viewvc?view=revisionrevision=1300873]; unfortunately I am not able to reproduce the issue :( IIUC - apologize if I am wrong - the problem could be in your wrappers that allows concurrency in the loader creation - using the {{ConcurrentHashMap}} you keep the data structure thread-safe but it doesn't shield from races and concurrent accesses to the Loader creation... Can you please check it? TIA!!! was (Author: simone.tripodi): I imported a simplified version of your testcase in the Digester codebase, see [r1300873|https://svn.apache.org/viewvc?view=revisionrevision=1300873]; unfortunately I am not able to reproduce the issue :( IIUC - apologize if I am wrong - the problem could be in your wrappers that allows concurrency in the loader creation - using the {{{ConcurrentHashMap}} you keep the data structure thread-safe but it doesn't shield from races and concurrent accesses to the Loader creation... Can you please check it? TIA!!! 
ConcurrentModificationException creating a new Digester via loaderInstance.newDigester() Key: DIGESTER-163 URL: https://issues.apache.org/jira/browse/DIGESTER-163 Project: Commons Digester Issue Type: Bug Affects Versions: 3.2 Environment: Linux, JDK 6 Reporter: Torsten Krah I am getting a ConcurrentModificationException when trying to create a new Digester instance from a configured loader: Trace is: {code} java.util.ConcurrentModificationException: null at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:761) ~[na:1.6.0_27] at java.util.LinkedList$ListItr.next(LinkedList.java:696) ~[na:1.6.0_27] at org.apache.commons.digester3.binder.FromBinderRuleSet.addRuleInstances(FromBinderRuleSet.java:130) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.addRules(DigesterLoader.java:581) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:568) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:516) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:475) ~[commons-digester3-3.2.jar:3.2] at org.apache.commons.digester3.binder.DigesterLoader.newDigester(DigesterLoader.java:462) ~[commons-digester3-3.2.jar:3.2] {code} The binder documentation (employee servlet) and the mailing list did confirm to me that the loader should be safe to share, so this should not happen. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CSV-68) Use Builder pattern for CSVFormat
[ https://issues.apache.org/jira/browse/CSV-68?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13230818#comment-13230818 ] Sebb edited comment on CSV-68 at 3/16/12 1:54 AM: -- Patch with skeleton implementation. The build() method should be updated to include validation. was (Author: s...@apache.org): Patch with skeleton implementation Use Builder pattern for CSVFormat - Key: CSV-68 URL: https://issues.apache.org/jira/browse/CSV-68 Project: Commons CSV Issue Type: Improvement Reporter: Sebb Attachments: CSV-68.patch Using a builder pattern to create CSVFormat instances would allow the settings to be validated at creation time and would eliminate the need to keep creating new CSVFormat instances whilst still allowing the class to be immutable. A possible API is as follows: {code} CSVFormat DEFAULT = CSVFormat.init(',') // delimiter is required .withEncapsulator('') .withLeadingSpacesIgnored(true) .withTrailingSpacesIgnored(true) .withEmptyLinesIgnored(true) .withLineSeparator(\r\n) // optional, as it would be the default .build(); CSVFormat format = CSVFormat.init(CSVFormat.DEFAULT) // alternatively start with pre-defined format .withSurroundingSpacesIgnored(false) .build(); {code} Compare this with the current syntax: {code} // internal syntax; not easy to determine what all the parameters do CSVFormat DEFAULT1 = new CSVFormat(',', '', DISABLED, DISABLED, true, true, false, true, CRLF); // external syntax CSVFormat format = CSVFormat.DEFAULT.withSurroundingSpacesIgnored(false); {code} As a proof of concept I've written skeleton code which compiles (but needs completing). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (SANDBOX-406) CSV Parser loops infinitely if last line starts with a comment char
[ https://issues.apache.org/jira/browse/SANDBOX-406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13225952#comment-13225952 ] Edgar Philipp edited comment on SANDBOX-406 at 3/9/12 9:06 AM: --- Hmm, I am a bit irritated by the recent change in the constructor visibility of {code} CSVFormat(char, char, char) {code} from public to package protected: IMHO it is a bit awkward having to use {code} CSVFormat.DEFAULT.withDelimiter(SEPARATOR).withEncapsulator(QUOTE).withCommentStart(COMMENT); {code} instead of the above constructor. What is the philosophy behind that change? was (Author: edgarphilipp): Hmm, I am a bit irritated by the recent change in the constructor visibility of {{CSVFormat(char, char, char)}} from public to package protected: IMHO it is a bit awkward having to use {{CSVFormat.DEFAULT.withDelimiter(SEPARATOR).withEncapsulator(QUOTE).withCommentStart(COMMENT);}} instead of the above constructor. What is the philosophy behind that change? CSV Parser loops infinitely if last line starts with a comment char Key: SANDBOX-406 URL: https://issues.apache.org/jira/browse/SANDBOX-406 Project: Commons Sandbox Issue Type: Bug Components: CSV Reporter: Edgar Philipp Behaviour: Whenever the last non-empty line of the CSV file starts with a comment, the CSVParser loops infinitely! 
Exemplary CSV file: {code} some # comment OK line # comment OK value # problematic comment {code} Excerpt of Java code: {code:java} private static final char COMMENT = '#'; private static final char QUOTE = '"'; private static final char SEPARATOR = ';'; CSVStrategy csvStrategy = new CSVStrategy(SEPARATOR, QUOTE, COMMENT); CSVParser parser = new CSVParser(reader, csvStrategy); String[] line = parser.getLine(); while (line != null) { Log.debug("Line: " + line[0]); // Do something line = parser.getLine(); } {code} Used Maven Dependency: {code:xml} <dependency> <groupId>org.apache.solr</groupId> <artifactId>solr-commons-csv</artifactId> <version>1.4.0</version> </dependency> {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (JEXL-130) Ternary Conditional fails for Object values
[ https://issues.apache.org/jira/browse/JEXL-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13226118#comment-13226118 ] Henri Biestro edited comment on JEXL-130 at 3/9/12 2:53 PM: https://svn.apache.org/repos/asf/commons/proper/jexl/branches/2.0/src/main/java/org/apache/commons/jexl2/JexlArithmetic.java Committed revision 1298857. (NOT trunk BUT 2.0 branch!) was (Author: henrib): https://svn.apache.org/repos/asf/commons/proper/jexl/branches/2.0/src/main/java/org/apache/commons/jexl2/JexlArithmetic.javaCommitted revision 1298857. (NOT trunk BUT 2.0 branch!) Ternary Conditional fails for Object values --- Key: JEXL-130 URL: https://issues.apache.org/jira/browse/JEXL-130 Project: Commons JEXL Issue Type: Bug Affects Versions: 2.1, 2.1.1 Reporter: William Bakker Assignee: Henri Biestro Fix For: 2.1.2 The documentation states on http://commons.apache.org/jexl/reference/syntax.html#Operators : {quote} The usual ternary conditional operator condition ? if_true : if_false operator can be used as well as the abbreviation value ?: if_false which returns the value if its evaluation is defined, non-null and non-false {quote} For object values however, it seems that this definition no longer holds in 2.1 and higher. The following unittests run successfully in 2.0.1, but the test ternaryConditional_mapContainsObject_shouldReturnObject fails in 2.1 and 2.1.1. 
{code}
import org.apache.commons.jexl2.*;
import org.junit.*;

public class JexlTernaryConditionalTest {

    @Test
    public void ternaryConditional_mapContainsString_shouldReturnString() {
        String myName = "Test.Name";
        Object myValue = "Test.Value";
        JexlEngine myJexlEngine = new JexlEngine();
        MapContext myMapContext = new MapContext();
        myMapContext.set(myName, myValue);
        Object myObjectWithTernaryConditional =
            myJexlEngine.createScript(myName + "?:null").execute(myMapContext);
        Assert.assertEquals(myValue, myObjectWithTernaryConditional);
    }

    @Test
    public void ternaryConditional_mapContainsObject_shouldReturnObject() {
        String myName = "Test.Name";
        Object myValue = new Object();
        JexlEngine myJexlEngine = new JexlEngine();
        MapContext myMapContext = new MapContext();
        myMapContext.set(myName, myValue);
        Object myObjectWithTernaryConditional =
            myJexlEngine.createScript(myName + "?:null").execute(myMapContext);
        Assert.assertEquals(myValue, myObjectWithTernaryConditional);
    }
}
{code}
[jira] [Issue Comment Edited] (SANDBOX-404) Simplify weight model
[ https://issues.apache.org/jira/browse/SANDBOX-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13226589#comment-13226589 ] Simone Tripodi edited comment on SANDBOX-404 at 3/9/12 11:29 PM: - Hi Claudio, unfortunately the patch you committed doesn't work when launching tests from CLI (OTOH works well inside eclipse), follows below the errors I got: {code} $ svn info Path: . URL: https://svn.apache.org/repos/asf/commons/sandbox/graph/trunk Repository Root: https://svn.apache.org/repos/asf Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68 Revision: 1299097 Node Kind: directory Schedule: normal Last Changed Author: cs Last Changed Rev: 1298136 Last Changed Date: 2012-03-07 22:34:21 +0100 (Wed, 07 Mar 2012) ... mvn clean test ... [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/shortestpath/BellmannFordTestCase.java:[77,63] type parameters of WOorg.apache.commons.graph.shortestpath.AllVertexPairsShortestPathorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double,WO cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/spanning/KruskalTestCase.java:[71,77] type parameters of WOorg.apache.commons.graph.SpanningTreeorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/shortestpath/AStarTestCase.java:[98,65] type parameters of 
WOorg.apache.commons.graph.shortestpath.HeuristicBuilderorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double,org.apache.commons.graph.model.UndirectedMutableWeightedGraphorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double,WO cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/spanning/ReverseDeleteTestCase.java:[53,67] type parameters of WOorg.apache.commons.graph.SpanningTreeorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/spanning/PrimTestCase.java:[71,77] type parameters of WOorg.apache.commons.graph.SpanningTreeorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/shortestpath/DijkstraTestCase.java:[69,68] type parameters of WOorg.apache.commons.graph.WeightedPathorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double cannot be determined; no unique maximal instance exists for type variable WO with upper 
bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [ERROR] /private/tmp/graph/src/test/java/org/apache/commons/graph/spanning/BoruvkaTestCase.java:[71,77] type parameters of WOorg.apache.commons.graph.SpanningTreeorg.apache.commons.graph.model.BaseLabeledVertex,org.apache.commons.graph.model.BaseLabeledWeightedEdgejava.lang.Double,java.lang.Double cannot be determined; no unique maximal instance exists for type variable WO with upper bounds java.lang.Object,org.apache.commons.graph.weight.Monoidjava.lang.Double,java.util.Comparatorjava.lang.Double [INFO] 7 errors [INFO] - [INFO] [INFO] BUILD FAILURE [INFO]
[jira] [Issue Comment Edited] (SANDBOX-411) Implement a new version of ConcurrentGraph
[ https://issues.apache.org/jira/browse/SANDBOX-411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13224695#comment-13224695 ] Marco Speranza edited comment on SANDBOX-411 at 3/7/12 8:48 PM: The implementation can use read/write locks. A typical scenario would be the execution of different algorithms in a multi-threaded environment. The main problem is ensuring that an algorithm executes inside a critical section, so that the graph cannot be modified while it runs. Thoughts? was (Author: marco.speranza): The implementation can use read/write locks. A typical scenario would be the execution of different algorithms in a multi-threaded environment. The main problem is ensuring that an algorithm executes inside a critical section, so that the graph cannot be modified while it runs. Implement a new version of ConcurrentGraph -- Key: SANDBOX-411 URL: https://issues.apache.org/jira/browse/SANDBOX-411 Project: Commons Sandbox Issue Type: Sub-task Components: Graph Reporter: Marco Speranza Priority: Minor As widely discussed on the ML, it is convenient to create a new package 'concurrent' containing a parallel implementation, ConcurrentGraph.
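The locking scheme described in the comment can be illustrated with java.util.concurrent's ReentrantReadWriteLock. Everything below (class name, methods) is a hypothetical sketch, not Commons Graph API: read-only algorithms take the read lock, mutations take the write lock, so a running algorithm never observes a half-applied modification.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical wrapper illustrating the read/write-lock idea.
public class ConcurrentGraphSketch<V> {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Set<V> vertices = new HashSet<V>();

    public void addVertex(V v) {
        lock.writeLock().lock();   // mutations are exclusive
        try {
            vertices.add(v);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int order() {           // a read-only "algorithm": shared lock
        lock.readLock().lock();
        try {
            return vertices.size();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Multiple readers can hold the read lock concurrently, so independent read-only algorithms do not serialize against each other; only mutations block them.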
[jira] [Issue Comment Edited] (NET-445) The method listFiles in FTPClient can not list these files which upload to FTP Server in Feb, 29 2012
[ https://issues.apache.org/jira/browse/NET-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13219204#comment-13219204 ] Sebb edited comment on NET-445 at 3/6/12 9:02 PM: -- NET uses SimpleDateFormat to parse short dates; however, this does not work properly for Feb 29. The current work-round mishandles Feb 29. [Later] The problem with SimpleDateFormat is that it assumes the year is 1970, which was not a leap year. The work-round is to append the current year to the short date before parsing. This works fine if the current year is a leap year, as is true of 2012. However, if the ftp server continues to display short dates after the end of 2012, the work-round will fail at the start of 2013. Many ftp servers only display short dates for +/- 6 months; such servers will switch to long-format dates for Feb 29 before the end of 2012, so they won't be affected. was (Author: s...@apache.org): NET uses SimpleDateFormat to parse short dates; however, this does not work properly for Feb 29. The current work-round mishandles Feb 29. Hope to have a fix soon. The method listFiles in FTPClient can not list these files which upload to FTP Server in Feb, 29 2012 - Key: NET-445 URL: https://issues.apache.org/jira/browse/NET-445 Project: Commons Net Issue Type: Bug Components: FTP Affects Versions: 3.0.1 Reporter: keming.hu Before Feb 29 2012, the method listFiles() in FTPClient.class could list all files on the FTP server. But today, any file uploaded to the FTP server on Feb 29 2012 is not listed by this API. When I change the date of the FTP server (so that it is not Feb 29 2012), the API can again list all files on the FTP server.
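The 1970 default described above can be reproduced with plain SimpleDateFormat; this is a stand-alone illustration of the failure and the work-round, not the Commons Net parser code itself:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;

public class Feb29Demo {
    // Parse text with the given pattern; return {month (0-based), day-of-month}.
    static int[] monthAndDay(String pattern, String text) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern, Locale.US);
        Calendar cal = Calendar.getInstance();
        cal.setTime(fmt.parse(text));
        return new int[] { cal.get(Calendar.MONTH), cal.get(Calendar.DAY_OF_MONTH) };
    }

    public static void main(String[] args) throws ParseException {
        // A short date carries no year, so SimpleDateFormat assumes 1970 --
        // not a leap year -- and lenient parsing rolls Feb 29 over to Mar 1.
        int[] broken = monthAndDay("MMM d", "Feb 29");
        System.out.println(broken[0] + "/" + broken[1]); // 2/1, i.e. March 1

        // The work-round: append a known leap year before parsing.
        int[] fixed = monthAndDay("MMM d yyyy", "Feb 29 2012");
        System.out.println(fixed[0] + "/" + fixed[1]);   // 1/29, i.e. February 29
    }
}
```

This also shows why the work-round of appending the *current* year breaks in 2013: "Feb 29 2013" is again a non-existent date and gets rolled over.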
[jira] [Issue Comment Edited] (CODEC-132) BeiderMorseEncoder OOM issues
[ https://issues.apache.org/jira/browse/CODEC-132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13223729#comment-13223729 ] Thomas Neidhart edited comment on CODEC-132 at 3/6/12 10:13 PM: Hi, please find attached a patch for the outlined solution: adding a maximum-phoneme parameter to the engine that limits the number of phonemes processed / returned. For now, I have assumed a default of 20 if the user does not provide a value. I would like to hear some feedback from the original author on that. Ah, test coverage improved from 91% to 92% ;-) was (Author: tn): Hi, please find attached a patch for the outlined solution: adding a maximum-phoneme parameter to the engine that limits the number of phonemes processed / returned. For now, I have assumed a default of 20 if the user does not provide a value. I would like to hear some feedback from the original author on that. BeiderMorseEncoder OOM issues - Key: CODEC-132 URL: https://issues.apache.org/jira/browse/CODEC-132 Project: Commons Codec Issue Type: Bug Affects Versions: 1.6 Reporter: Robert Muir Attachments: CODEC-132.patch, CODEC-132_test.patch In Lucene/Solr, we integrated this encoder into the latest release. Our tests use a variety of random strings, and we have recent jenkins failures from some input strings (of length = 10) using huge amounts of memory (e.g. 64MB), resulting in OOM. I've created a test case (length is 30 here) that will OOM with -Xmx256M. I haven't dug into what's causing it, but I suspect there might be a bug revolving around certain punctuation characters: we didn't see this happening until we beefed up our random string generation to start producing html-like strings.
[jira] [Issue Comment Edited] (NET-449) listFiles bug with folder that begins with -
[ https://issues.apache.org/jira/browse/NET-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222468#comment-13222468 ] Sebb edited comment on NET-449 at 3/5/12 5:51 PM: -- Not sure there's anything Net can do about this. The parameter to the LIST command is supposed to be a pathname, and it is up to the server to determine if it is a file or a directory. However, the - prefix is used by some servers for supporting qualifiers, e.g. -a, -l This obviously has the potential to be confused with a valid pathname. If the server fails to recognise -data as a pathname, then it seems to me that this is a bug in the server. It ought to behave according to the RFCs. was (Author: s...@apache.org): Not sure there's anything Net can do about this. The parameter to the LIST command is supposed to be a pathname, and it is up to the server to determine if it is a file or a directory. The - prefix is used by some servers for supporting qualifiers, e.g. -a. I don't think there is any way to escape the leading -, at least not one that would work on all servers. The parameter ./-data would probably work on most FTP servers, but AFAIK servers don't have to support that, and some may not do so. listFiles bug with folder that begins with - -- Key: NET-449 URL: https://issues.apache.org/jira/browse/NET-449 Project: Commons Net Issue Type: Bug Components: FTP Affects Versions: 3.1 Reporter: Stéphane Verger FTP Server status: {code} root@xxx-srv:/data/Library# tree -A . 
├── -dash
│   ├── -dash.txt
│   ├── file1.txt
│   └── file2.txt
└── test
    ├── file2.txt
    └── file.txt
{code}
Test code:
{code}
final org.apache.commons.net.ftp.FTPClient ftp = new org.apache.commons.net.ftp.FTPClient();
ftp.connect(host, port);
ftp.login(login, pwd);
System.out.println("PWD: " + ftp.printWorkingDirectory());
final FTPFile[] listFiles = ftp.listFiles();
for (int i = 0; i < listFiles.length; i++) {
    System.out.println("[" + i + "] " + listFiles[i]);
}
System.out.println("Files in /-dash");
final FTPFile[] listFiles2 = ftp.listFiles("/-dash");
for (int i = 0; i < listFiles2.length; i++) {
    System.out.println("[" + i + "] " + listFiles2[i]);
}
System.out.println("Files in -dash");
final FTPFile[] listFiles3 = ftp.listFiles("-dash");
for (int i = 0; i < listFiles3.length; i++) {
    System.out.println("[" + i + "] " + listFiles3[i]);
}
{code}
Results:
{code}
PWD: /
[0] -dash
[1] test
Files in /-dash
[0] -dash.txt
[1] file1.txt
[2] file2.txt
Files in -dash
[0] -dash
[1] .
[2] ..
[3] test
{code}
When listing -dash, it lists the current directory instead of the requested one. The same test with the folder test works as expected.
[jira] [Issue Comment Edited] (SANDBOX-404) Simplify weight model
[ https://issues.apache.org/jira/browse/SANDBOX-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13221749#comment-13221749 ] Claudio Squarcella edited comment on SANDBOX-404 at 3/4/12 10:18 AM: - Hi Simone, I am attaching a patch that begins with yours and goes one step further, getting rid of {{OrderedMonoid}} basically everywhere (although I did not delete {{OrderedMonoid}} itself for now) and replacing it with {{Monoid}} & {{Comparator}}. That has two reasons: * separating main operations/properties, so that every algorithm specifies what it needs in terms of a set of interfaces; * leading the way to the next refactoring step, where {{Monoid}} is converted into {{Addition}}, in order to better represent what it actually does. So one step is missing, i.e. renaming {{Monoid}} to {{Addition}} (and {{Monoid#append}} to {{Addition#sum}}, etc.) -- but first I want to get some feedback on this one. Ciao Claudio was (Author: claudio.squarcella): Hi Simone, I am attaching a patch that begins with yours and goes one step further, getting rid of {{OrderedMonoid}} basically everywhere and replacing it with {{Monoid}} & {{Comparator}}. That has two reasons: * separating main operations/properties, so that every algorithm specifies what it needs in terms of a set of interfaces; * leading the way to the next refactoring step, where {{Monoid}} is converted into {{Addition}}, in order to better represent what it actually does. So one step is missing, i.e. renaming {{Monoid}} into {{Addition}} (and {{Monoid#append}} into {{Addition#sum}}, etc.) -- but I first wanted to get some feedback on this one.
Ciao Claudio Simplify weight model - Key: SANDBOX-404 URL: https://issues.apache.org/jira/browse/SANDBOX-404 Project: Commons Sandbox Issue Type: Improvement Components: Graph Reporter: Simone Tripodi Attachments: SANDBOX-404.patch, SANDBOX-404_gettingRidOfOrderedMonoid.patch As discussed on {{dev@}}, {{Zero}}, {{Semigroup}} and {{Monoid}} can be merged directly in one single interface
[jira] [Issue Comment Edited] (SANDBOX-404) Simplify weight model
[ https://issues.apache.org/jira/browse/SANDBOX-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13221901#comment-13221901 ] Claudio Squarcella edited comment on SANDBOX-404 at 3/4/12 2:33 PM: Hi! First things first: my idea is to completely get rid of {{Monoid}} in favor of a group of interfaces directly representing operations. In this specific case, {{Addition}} would immediately take its place not semantically but *functionally*, to cover algorithm needs -- indeed, so far we compute additions and not generic monoid operations, so that would also increase consistency and readability. It would look more or less like this:
{code}
public interface Addition<E> {
    E sum(E e1, E e2);
    E zero();
    E negate(E e);
}
{code}
In case we later want to add {{Multiplication}}, it will be totally independent, as explained in my first comment above. Something like:
{code}
public interface Multiplication<E> {
    E multiply(E e1, E e2);
    E one(); // or identity(), we'll see
    E reciprocal(E e);
}
{code}
As for the signature change, I did it because I would prefer not to stack interfaces on top of each other like we did with {{Zero}}, {{Semigroup}}, {{Monoid}} and {{OrderedMonoid}}. As long as we can easily write in the signatures all the individual properties we need (in the example, {{Addition}} and {{Comparator}}), we can avoid adding interfaces like {{ComparableAddition}}, {{ComparableMultiplication}}, {{ComparableAdditionMultiplication}}... see my point? Concluding: I can work on {{Addition}} if, and as soon as, we agree. Ciao, Claudio was (Author: claudio.squarcella): Hi! First things first: my idea is to completely get rid of {{Monoid}} in favor of a group of interfaces directly representing operations. In this specific case, {{Addition}} would immediately take its place not semantically but *functionally*, to cover algorithm needs -- they indeed need to apply addition and not a generic monoid, so that would also increase consistency.
It would look more or less like this:
{code}
public interface Addition<E> {
    E sum(E e1, E e2);
    E zero();
    E negate(E e);
}
{code}
In case we later want to add {{Multiplication}}, it will be totally independent, as explained in my first comment above. Something like:
{code}
public interface Multiplication<E> {
    E multiply(E e1, E e2);
    E one(); // or identity(), we'll see
    E reciprocal(E e);
}
{code}
As for the signature change, I did it because I would prefer not to stack interfaces on top of each other like we did with {{Zero}}, {{Semigroup}}, {{Monoid}} and {{OrderedMonoid}}. As long as we can easily write in the signatures all the individual properties we need (in the example, {{Addition}} and {{Comparator}}), we can avoid adding interfaces like {{ComparableAddition}}, {{ComparableMultiplication}}, {{ComparableAdditionMultiplication}}... see my point? Concluding: I can work on {{Addition}} if, and as soon as, we agree. Ciao, Claudio Simplify weight model - Key: SANDBOX-404 URL: https://issues.apache.org/jira/browse/SANDBOX-404 Project: Commons Sandbox Issue Type: Improvement Components: Graph Reporter: Simone Tripodi Attachments: SANDBOX-404.patch, SANDBOX-404_gettingRidOfOrderedMonoid.patch As discussed on {{dev@}}, {{Zero}}, {{Semigroup}} and {{Monoid}} can be merged directly in one single interface
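To make the proposal concrete, here is a hypothetical weight type implementing the sketched Addition interface together with a plain Comparator. None of this is committed API; the interface is copied from the sketch above, and DoubleWeight is an invented example:

```java
import java.util.Comparator;

// Copied from the proposal above: additive operations only, no ordering.
interface Addition<E> {
    E sum(E e1, E e2);
    E zero();
    E negate(E e);
}

// Hypothetical weight over Double: Addition for the algorithmic operations,
// Comparator supplied separately instead of an OrderedMonoid-style stack.
public class DoubleWeight implements Addition<Double>, Comparator<Double> {
    public Double sum(Double e1, Double e2) { return e1 + e2; }
    public Double zero()                    { return 0.0; }
    public Double negate(Double e)          { return -e; }
    public int compare(Double a, Double b)  { return a.compareTo(b); }
}
```

An algorithm signature would then require exactly the properties it uses, e.g. `<W, A extends Addition<W> & Comparator<W>>`, rather than a dedicated ComparableAddition interface.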
[jira] [Issue Comment Edited] (CODEC-134) Base32 would decode some invalid Base32 encoded string into arbitrary value
[ https://issues.apache.org/jira/browse/CODEC-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222063#comment-13222063 ] Hanson Char edited comment on CODEC-134 at 3/4/12 11:47 PM: patch.txt attached was (Author: hchar): Sorry, should have attached the patch as a file. Base32 would decode some invalid Base32 encoded string into arbitrary value --- Key: CODEC-134 URL: https://issues.apache.org/jira/browse/CODEC-134 Project: Commons Codec Issue Type: Bug Affects Versions: 1.6 Environment: All Reporter: Hanson Char Labels: security Attachments: patch.txt For example, there is no byte array that encodes to the string C5CYMIHWQUUZMKUGZHGEOSJSQDE4L===, but the existing Base32 implementation does not reject it; it decodes it into an arbitrary value which, if re-encoded with the same implementation, results in the string C5CYMIHWQUUZMKUGZHGEOSJSQDE4K===. Instead of blindly decoding the invalid string, the Base32 codec should reject it (e.g. by throwing IllegalArgumentException) to avoid security exploitation (such as tunneling additional information via seemingly valid base 32 strings).
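Why the two strings in the report decode to the same bytes: with three '=' padding characters the final quantum has five data characters carrying 25 bits for only 3 output bytes, so the lowest bit of the last symbol is discarded. 'K' (value 10) has that bit clear while 'L' (value 11) does not, which is exactly why a strict decoder should accept the first string and reject the second. A minimal check of that one condition (assuming the RFC 4648 alphabet; this hypothetical helper handles only the five-data-character final quantum seen in the example, not general Base32 validation):

```java
public class Base32Canonical {
    // RFC 4648 Base32 alphabet: 'A' = 0 ... 'Z' = 25, '2' = 26 ... '7' = 31.
    private static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

    // For a final quantum of 5 data chars + "===", 25 bits encode 3 bytes,
    // so the lowest 1 bit of the last data symbol is padding and must be zero.
    static boolean lastCharCanonical(String encoded) {
        String data = encoded.replace("=", "");
        int value = ALPHABET.indexOf(data.charAt(data.length() - 1));
        return (value & 0x01) == 0;
    }
}
```

A decoder that enforces this (and the analogous masks for the other padding lengths) cannot be used as the covert channel the report describes, since every decodable string has exactly one canonical encoding.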
[jira] [Issue Comment Edited] (LANG-754) embedded objects are not toString-ed like top-level objects
[ https://issues.apache.org/jira/browse/LANG-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13215870#comment-13215870 ] Thomas Neidhart edited comment on LANG-754 at 3/1/12 6:22 PM: -- I dug into this: the problem you describe occurs only when using setUseShortClassName(true), the class is located in the default package, and its name is one of [IZFJSBDC]; explanation follows. The ToStringStyle uses ClassUtils to get the short name for the class, and ClassUtils uses an internal reverse-abbreviation map to resolve primitive array types, whose class names look like [B for a byte[]. There is a bug in the ClassUtils.getShortName method: it performs this reverse resolution all the time, so if you happen to have a class called B in the default package, it is wrongly identified as byte. was (Author: tn): I dug into this: the problem you describe occurs only when using setUseShortClassName(true), the class is located in the default package, and its name is one of [IZFJSBDC]; explanation follows. The ToStringStyle uses ClassUtils to get the short name for the class, and ClassUtils uses an internal reverse-abbreviation map to resolve primitive array types, whose class names look like [B for a byte[]. There is a bug in the ClassUtils.getShortName method: it performs this reverse resolution all the time, so if you happen to have a class called B in the default package, it is wrongly identified as byte. So the fix would be to do the reverse lookup only in case of arrays.
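The fix suggested above (reverse-resolving the abbreviation letters only for array class names) can be sketched like this; the map and method are simplified, hypothetical stand-ins for the ClassUtils internals, not its actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class ShortNameSketch {
    // JVM descriptor letters for primitives, as they appear in array
    // class names such as "[B" for byte[].
    private static final Map<String, String> REVERSE_ABBREVIATIONS = new HashMap<String, String>();
    static {
        REVERSE_ABBREVIATIONS.put("I", "int");
        REVERSE_ABBREVIATIONS.put("Z", "boolean");
        REVERSE_ABBREVIATIONS.put("B", "byte");
        REVERSE_ABBREVIATIONS.put("F", "float");
        REVERSE_ABBREVIATIONS.put("J", "long");
        REVERSE_ABBREVIATIONS.put("S", "short");
        REVERSE_ABBREVIATIONS.put("D", "double");
        REVERSE_ABBREVIATIONS.put("C", "char");
    }

    static String shortName(String className) {
        // Only array class names ("[B", "[[I", ...) use the abbreviation
        // letters; a plain class named "B" in the default package must
        // NOT be resolved to "byte".
        if (className.startsWith("[")) {
            int dims = className.lastIndexOf('[') + 1;
            String resolved = REVERSE_ABBREVIATIONS.get(className.substring(dims));
            if (resolved != null) {
                StringBuilder sb = new StringBuilder(resolved);
                for (int i = 0; i < dims; i++) {
                    sb.append("[]");
                }
                return sb.toString();
            }
        }
        return className;
    }
}
```

Guarding the lookup with the leading '[' is exactly the "only in case of arrays" fix: the descriptor letters are unambiguous inside array names but collide with legal single-letter class names outside them.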
embedded objects are not toString-ed like top-level objects --- Key: LANG-754 URL: https://issues.apache.org/jira/browse/LANG-754 Project: Commons Lang Issue Type: Bug Components: lang.builder.* Affects Versions: 2.5, 3.0.1 Environment: Linux Ubuntu java version 1.6.0_24 Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode) Reporter: Dominique De Vito Priority: Minor Original Estimate: 24h Remaining Estimate: 24h I have a simple class 'A' defined as follows:
==
public class A {
    int p1;
    String p2;
    B b;
}
==
When I execute the following instructions:
ToStringBuilder builder = new ReflectionToStringBuilder(a);
System.out.println(builder.toString());
the output is: A@3ea981ca[p1=0,p2=null,b=B@1ee7b241] (that's normal, without recursion). So, I defined my own style, for recursive toString-ing display:
==
class MyStyle extends ToStringStyle {

    private final static ToStringStyle instance = new MyStyle();

    public MyStyle() {
        setArrayContentDetail(true);
        setUseShortClassName(true);
        setUseClassName(true);
        setUseIdentityHashCode(true);
        setFieldSeparator(", ");
    }

    public static ToStringStyle getInstance() {
        return instance;
    }

    @Override
    public void appendDetail(final StringBuffer buffer, final String fieldName, final Object value) {
        if (!value.getClass().getName().startsWith("java")) {
            buffer.append(ReflectionToStringBuilder.toString(value, instance));
        } else {
            super.appendDetail(buffer, fieldName, value);
        }
    }

    @Override
    public void appendDetail(final StringBuffer buffer, final String fieldName, final Collection value) {
        appendDetail(buffer, fieldName, value.toArray());
    }
}
==
When I use my custom MyStyle:
String s = ReflectionToStringBuilder.toString(a, MyStyle.getInstance());
System.out.println(s);
the output is: A@3ea981ca[p1=0, p2=null, b=byte@1ee7b241[p4=234]] So, the name of the class 'B' is not displayed. I expected something like: b=B@1ee7b241[p4=234] Instead, the name of the class 'B' is replaced with 'byte'. I don't know why.
[jira] [Issue Comment Edited] (CONFIGURATION-136) Reloading may corrupt the configuration
[ https://issues.apache.org/jira/browse/CONFIGURATION-136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218027#comment-13218027 ] Laurent Michelet edited comment on CONFIGURATION-136 at 2/28/12 9:41 AM: - The problem here is that the filesystem is not transactional. When an exception is raised, there is no rollback on the file being written. A proposed correction is to save the file in memory before any modifications; when an exception is raised, we replace the current file with the original one. The two previous patches have been tested successfully with an XMLConfiguration file.
{code:title=AbstractFileConfiguration.java|borderStyle=solid}
/**
 * Save the configuration to the specified URL. This doesn't change the
 * source of the configuration, use setURL() if you need it.
 *
 * @param url the URL
 *
 * @throws ConfigurationException if an error occurs during the save operation
 */
public void save(URL url) throws ConfigurationException
{
    OutputStream out = null;
    ByteArrayInputStream originalFile = null;
    try
    {
        InputStream inputStreamOfOrignalFile = fileSystem.getInputStream(url);
        originalFile = saveOriginalFile(inputStreamOfOrignalFile);
        out = fileSystem.getOutputStream(url);
        save(out);
        if (out instanceof VerifiableOutputStream)
        {
            ((VerifiableOutputStream) out).verify();
        }
    }
    catch (IOException e)
    {
        // Rollback for IOException
        reloadOriginalFile(url, originalFile, out);
        throw new ConfigurationException("Could not save to URL " + url, e);
    }
    catch (Exception e)
    {
        // Rollback when any kind of Exception is raised
        reloadOriginalFile(url, originalFile, out);
        throw new ConfigurationException(e);
    }
    finally
    {
        closeSilent(out);
    }
}

/**
 * Save the original file before any modifications
 *
 * @param in
 * @return
 * @throws IOException
 * @since 1.9
 */
private ByteArrayInputStream saveOriginalFile(InputStream inputStreamOfOrignalFile) throws IOException
{
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int len;
    while ((len = inputStreamOfOrignalFile.read(buffer)) > -1)
    {
        baos.write(buffer, 0, len);
    }
    baos.flush();
    return new ByteArrayInputStream(baos.toByteArray());
}

/**
 * Replace the current file with the original one before any modifications
 *
 * @param url
 * @param originalFile
 * @param outOfCurrentFile
 * @throws ConfigurationException
 * @since 1.9
 */
private void reloadOriginalFile(URL url, ByteArrayInputStream originalFile, OutputStream outOfCurrentFile) throws ConfigurationException
{
    if (outOfCurrentFile != null && originalFile != null)
    {
        try
        {
            int nextChar;
            while ((nextChar = originalFile.read()) != -1)
            {
                outOfCurrentFile.write((char) nextChar);
            }
            outOfCurrentFile.write('\n');
            outOfCurrentFile.flush();
        }
        catch (IOException ioe)
        {
            throw new ConfigurationException("Could not save to URL " + url, ioe);
        }
    }
}
{code}
was (Author: l.michelet): The problem here is that the filesystem is not transactional. When an exception is raised, there is no rollback on the file being written. A proposed correction is to save the file in memory before any modifications; when an exception is raised, we replace the current file with the original one. The two previous patches have been tested successfully with an XMLConfiguration file. {code:title=AbstractFileConfiguration.java|borderStyle=solid} /** * Save the configuration to the specified URL. This doesn't change the * source of the configuration, use setURL() if you need it. * * @param url *the URL
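The core of the proposal, stripped of the Configuration classes, is just: buffer the original bytes in memory before overwriting, so they can be replayed if the write fails. A minimal stand-alone sketch of that snapshot step (plain java.io, not the patch itself):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class RollbackSketch {
    // Copy the whole stream into memory so it can be replayed later
    // if the subsequent write throws and we need to roll back.
    static ByteArrayInputStream snapshot(InputStream in) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;
        while ((len = in.read(buffer)) > -1) {
            baos.write(buffer, 0, len);
        }
        return new ByteArrayInputStream(baos.toByteArray());
    }
}
```

Note the trade-off the comment accepts: the whole file is held in memory for the duration of the save, which is fine for typical configuration files but not a general transactional guarantee (a crash between truncating and rewriting still loses data).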
[jira] [Issue Comment Edited] (CONFIGURATION-136) Reloading may corrupt the configuration
[ https://issues.apache.org/jira/browse/CONFIGURATION-136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218027#comment-13218027 ] Laurent Michelet edited comment on CONFIGURATION-136 at 2/28/12 10:15 AM: -- The problem here is that the filesystem is not transactional. When an exception is raised, there is no rollback on the file being written. A proposed correction is to save the file in memory before any modifications; when an exception is raised, we replace the current file with the original one. The two previous patches have been tested successfully with an XMLConfiguration file.
{code:title=AbstractFileConfiguration.java|borderStyle=solid}
/**
 * Save the configuration to the specified URL. This doesn't change the
 * source of the configuration, use setURL() if you need it.
 *
 * @param url the URL
 *
 * @throws ConfigurationException if an error occurs during the save operation
 */
public void save(URL url) throws ConfigurationException
{
    OutputStream out = null;
    ByteArrayInputStream originalFile = null;
    try
    {
        final File file = new File(url.getFile());
        // Save original file if existing
        if (file.exists())
        {
            InputStream inputStreamOfOrignalFile = fileSystem.getInputStream(url);
            originalFile = saveOriginalFile(inputStreamOfOrignalFile);
        }
        out = fileSystem.getOutputStream(url);
        save(out);
        if (out instanceof VerifiableOutputStream)
        {
            ((VerifiableOutputStream) out).verify();
        }
    }
    catch (IOException e)
    {
        // Rollback for IOException, if an original exists
        if (originalFile != null)
        {
            reloadOriginalFile(url, originalFile, out);
        }
        throw new ConfigurationException("Could not save to URL " + url, e);
    }
    catch (Exception e)
    {
        // Rollback when any kind of Exception is raised, if an original exists
        if (originalFile != null)
        {
            reloadOriginalFile(url, originalFile, out);
        }
        throw new ConfigurationException(e);
    }
    finally
    {
        closeSilent(out);
    }
}

/**
 * Save the original file before any modifications
 *
 * @param in
 * @return
 * @throws IOException
 * @since 1.9
 */
private ByteArrayInputStream saveOriginalFile(InputStream inputStreamOfOrignalFile) throws IOException
{
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int len;
    while ((len = inputStreamOfOrignalFile.read(buffer)) > -1)
    {
        baos.write(buffer, 0, len);
    }
    baos.flush();
    return new ByteArrayInputStream(baos.toByteArray());
}

/**
 * Replace the current file with the original one before any modifications
 *
 * @param url
 * @param originalFile
 * @param outOfCurrentFile
 * @throws ConfigurationException
 * @since 1.9
 */
private void reloadOriginalFile(URL url, ByteArrayInputStream originalFile, OutputStream outOfCurrentFile) throws ConfigurationException
{
    if (outOfCurrentFile != null && originalFile != null)
    {
        try
        {
            int nextChar;
            while ((nextChar = originalFile.read()) != -1)
            {
                outOfCurrentFile.write((char) nextChar);
            }
            outOfCurrentFile.write('\n');
            outOfCurrentFile.flush();
        }
        catch (IOException ioe)
        {
            throw new ConfigurationException("Could not save to URL " + url, ioe);
        }
    }
}
{code}
was (Author: l.michelet): The problem here is that the filesystem is not transactional. When an exception is raised, there is no rollback on the file being written. A proposed correction is to save the file in memory before any modifications. When an exception is raised we
[jira] [Issue Comment Edited] (COMPRESS-181) Tar files created by AIX native tar, and which contain symlinks, cannot be read by TarArchiveInputStream
[ https://issues.apache.org/jira/browse/COMPRESS-181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218093#comment-13218093 ] Stefan Bodewig edited comment on COMPRESS-181 at 2/28/12 12:32 PM: --- GNU tar's from_header in list.c contains a workaround for this case:

{noformat}
/* Accommodate buggy tar of unknown vintage, which outputs leading
   NUL if the previous field overflows. */
where += !*where;
{noformat}

This basically skips the first byte if it is a binary 0.

was (Author: bodewig): GNU tar from_header in list.c contains a workaround for this case: /* Accommodate buggy tar of unknown vintage, which outputs leading NUL if the previous field overflows. */ where += !*where; this basically skips the first byte if it is a binary 0.

Tar files created by AIX native tar, and which contain symlinks, cannot be read by TarArchiveInputStream
Key: COMPRESS-181 URL: https://issues.apache.org/jira/browse/COMPRESS-181 Project: Commons Compress Issue Type: Bug Components: Archivers Affects Versions: 1.2, 1.3, 1.4 Environment: AIX 5.3 Reporter: Robert Clark Attachments: simple-aix-native-tar.tar

A simple tar file created on AIX using the native tar utility ({{/usr/bin/tar}}) *and* which contains a symbolic link cannot be loaded by TarArchiveInputStream:

{noformat}
java.io.IOException: Error detected parsing the header
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:201)
    at Extractor.extract(Extractor.java:13)
    at Extractor.main(Extractor.java:28)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.tools.ant.taskdefs.ExecuteJava.run(ExecuteJava.java:217)
    at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:152)
    at org.apache.tools.ant.taskdefs.Java.run(Java.java:771)
    at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:221)
    at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:135)
    at org.apache.tools.ant.taskdefs.Java.execute(Java.java:108)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
    at org.apache.tools.ant.Task.perform(Task.java:348)
    at org.apache.tools.ant.Target.execute(Target.java:390)
    at org.apache.tools.ant.Target.performTasks(Target.java:411)
    at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
    at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
    at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
    at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
    at org.apache.tools.ant.Main.runBuild(Main.java:809)
    at org.apache.tools.ant.Main.startAnt(Main.java:217)
    at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
    at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: java.lang.IllegalArgumentException: Invalid byte 0 at offset 0 in '{NUL}1722000726 ' len=12
    at org.apache.commons.compress.archivers.tar.TarUtils.parseOctal(TarUtils.java:99)
    at org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:819)
    at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:314)
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:199)
    ... 29 more
{noformat}

Tested with 1.2 and the 1.4 nightly build from Feb 23 ({{Implementation-Build: trunk@r1292625; 2012-02-23 03:20:30+}}) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
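The GNU tar workaround quoted above translates naturally to the Java side. The sketch below is a hypothetical lenient variant of an octal field parser, not the actual TarUtils.parseOctal; it shows how skipping a single leading NUL recovers the value from the report's failing header field.

```java
public class OctalParser {
    // Lenient octal field parser: tolerates a single leading NUL byte,
    // mirroring GNU tar's "where += !*where;" workaround for buggy tar
    // implementations whose previous field overflowed.
    static long parseOctalLenient(byte[] buffer, int offset, int length) {
        int pos = offset;
        int end = offset + length;
        // Accommodate buggy tar: skip a leading NUL left over from an
        // overflowed previous field.
        if (pos < end && buffer[pos] == 0) {
            pos++;
        }
        // Skip leading spaces.
        while (pos < end && buffer[pos] == ' ') {
            pos++;
        }
        long result = 0;
        while (pos < end) {
            byte b = buffer[pos];
            if (b == 0 || b == ' ') {
                break; // a trailing NUL or space terminates the field
            }
            if (b < '0' || b > '7') {
                throw new IllegalArgumentException(
                        "Invalid octal byte " + b + " at offset " + pos);
            }
            result = (result << 3) + (b - '0');
            pos++;
        }
        return result;
    }

    public static void main(String[] args) {
        // The field from the report: {NUL}1722000726 followed by a space.
        byte[] field = "\0001722000726 ".getBytes();
        System.out.println(parseOctalLenient(field, 0, field.length)); // prints 256377302
    }
}
```

With a strict parser this field fails exactly as in the stack trace ("Invalid byte 0 at offset 0"); with the leading-NUL skip it parses cleanly.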
[jira] [Issue Comment Edited] (EMAIL-115) Need a way to remove emails from an already created, but not sent message.
[ https://issues.apache.org/jira/browse/EMAIL-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218429#comment-13218429 ] Thomas Neidhart edited comment on EMAIL-115 at 2/28/12 6:29 PM: ah stupid me, missed the isEmpty(). But you could still set all three fields to the same email address; the mail server should send it just once. Note: this idea is just to support your use-case without changing the existing code. was (Author: tn): ah stupid me, missed the isEmpty(). But you could still set all three fields to the same email-address, the mail server should send it normally just once. Need a way to remove emails from an already created, but not sent message. -- Key: EMAIL-115 URL: https://issues.apache.org/jira/browse/EMAIL-115 Project: Commons Email Issue Type: Improvement Affects Versions: 1.2 Reporter: Brian Telintelo Priority: Blocker Fix For: 2.0 Ok, so we have one send-email method which takes an org.apache.commons.mail.Email param. It checks whether email sending is enabled (configured per server instance), then sends the mail if it is. The problem happens for our QA testing: we need to test email content, but don't want to send emails to actual users in the QA environment. What we want to do is modify our one send-email method to clear out the To, CC, and BCC fields and then set the To field to our testing list. But there is no way to remove emails already added. Setting it to null or an empty collection results in an EmailException, and we can't create a new email instance and copy because there is no get-message accessor available. We need a way to remove emails somehow. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (COLLECTIONS-393) Split / Partition a collection into smaller collections
[ https://issues.apache.org/jira/browse/COLLECTIONS-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218894#comment-13218894 ] Chris Shayan edited comment on COLLECTIONS-393 at 2/29/12 4:19 AM: --- the class is in the attachment. An example of its use:

{code}
List<List<String>> strings = Partition.partition(raws, 5);
for (List<String> list : strings) {
    // ...
}
{code}

was (Author: hamedsha...@gmail.com): the class is like this:

{code}
package dk.sebpension;

import java.util.AbstractList;
import java.util.List;

/**
 * Split / Partition a collection into smaller collections. Inspired by Lars
 * Vogel.
 */
public final class Partition<T> extends AbstractList<List<T>> {

    final List<T> list;
    final int size;

    public Partition(List<T> list, int size) {
        this.list = list;
        this.size = size;
    }

    @Override
    public List<T> get(int index) {
        int listSize = size();
        if (listSize < 0) {
            throw new IllegalArgumentException("negative size: " + listSize);
        }
        if (index < 0) {
            throw new IndexOutOfBoundsException("index " + index + " must not be negative");
        }
        if (index >= listSize) {
            throw new IndexOutOfBoundsException("index " + index + " must be less than size " + listSize);
        }
        int start = index * size;
        int end = Math.min(start + size, list.size());
        return list.subList(start, end);
    }

    @Override
    public int size() {
        return (list.size() + size - 1) / size;
    }

    @Override
    public boolean isEmpty() {
        return list.isEmpty();
    }

    /**
     * Returns consecutive {@linkplain List#subList(int, int) sublists} of a
     * list, each of the same size (the final list may be smaller). For example,
     * partitioning a list containing {@code [a, b, c, d, e]} with a partition
     * size of 3 yields {@code [[a, b, c], [d, e]]} -- an outer list containing
     * two inner lists of three and two elements, all in the original order.
     * <p>
     * The outer list is unmodifiable, but reflects the latest state of the
     * source list. The inner lists are sublist views of the original list,
     * produced on demand using {@link List#subList(int, int)}, and are subject
     * to all the usual caveats about modification as explained in that API.
     *
     * Adapted from http://code.google.com/p/google-collections/
     *
     * @param <T> the element type
     * @param list the list to return consecutive sublists of
     * @param size the desired size of each sublist (the last may be smaller)
     * @return a list of consecutive sublists
     */
    public static <T> List<List<T>> partition(List<T> list, int size) {
        if (list == null) {
            throw new NullPointerException("'list' must not be null");
        }
        if (!(size > 0)) {
            throw new IllegalArgumentException("'size' must be greater than 0");
        }
        return new Partition<T>(list, size);
    }
}
{code}

and an example of its use:

{code}
List<List<String>> strings = Partition.partition(raws, 5);
for (List<String> list : strings) {
    // ...
}
{code}

Split / Partition a collection into smaller collections --- Key: COLLECTIONS-393 URL: https://issues.apache.org/jira/browse/COLLECTIONS-393 Project: Commons Collections Issue Type: New Feature Components: Collection Reporter: Chris Shayan Attachments: Partition.java Original Estimate: 24h Remaining Estimate: 24h Returns consecutive sublists of a list, each of the same size (the final list may be smaller). For example, partitioning a list containing [a, b, c, d, e] with a partition size of 3 yields [[a, b, c], [d, e]] -- an outer list containing two inner lists of three and two elements, all in the original order. The outer list is unmodifiable, but reflects the latest state of the source list. The inner lists are sublist views of the original list, produced on demand using List.subList(int, int), and are
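The partition idea above reduces to a minimal self-contained sketch: a lazy view over the source list whose get(i) returns the i-th chunk as a subList view (Chunks and ChunksDemo are illustrative names, not the attached class).

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the partition view: an unmodifiable lazy list of
// chunks, each chunk being a subList view of the original list.
final class Chunks<T> extends AbstractList<List<T>> {
    private final List<T> list;
    private final int size;

    Chunks(List<T> list, int size) {
        this.list = list;
        this.size = size;
    }

    @Override
    public List<T> get(int index) {
        int start = index * size;
        int end = Math.min(start + size, list.size());
        return list.subList(start, end); // view, reflects later changes
    }

    @Override
    public int size() {
        // Ceiling division: number of chunks.
        return (list.size() + size - 1) / size;
    }
}

public class ChunksDemo {
    public static void main(String[] args) {
        List<String> raws = Arrays.asList("a", "b", "c", "d", "e");
        System.out.println(new Chunks<>(raws, 3)); // prints [[a, b, c], [d, e]]
    }
}
```

Because the chunks are subList views, structural modification of the source list invalidates them, exactly the caveat the javadoc warns about.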
[jira] [Issue Comment Edited] (EXEC-63) Race condition in DefaultExecutor#execute(cmd, handler)
[ https://issues.apache.org/jira/browse/EXEC-63?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13217797#comment-13217797 ] Martin Sandiford edited comment on EXEC-63 at 2/28/12 1:26 AM: --- Patch for fix from https://github.com/msandiford/commons-exec commit 0c7e9457f31ea0322d18dcb157d5192a25f0e490

{code:title=fix.diff|borderStyle=solid}
--- a/src/main/java/org/apache/commons/exec/DefaultExecutor.java
+++ b/src/main/java/org/apache/commons/exec/DefaultExecutor.java
@@ -161,7 +161,7 @@ public class DefaultExecutor implements Executor {
             throw new IOException(workingDirectory + " doesn't exist.");
         }
-        return executeInternal(command, environment, workingDirectory, streamHandler);
+        return executeInternal(command, null, environment, workingDirectory, streamHandler);
     }
@@ -189,13 +189,14 @@ public class DefaultExecutor implements Executor {
             watchdog.setProcessNotStarted();
         }
+        final SimpleCondition condition = new SimpleCondition();
         Runnable runnable = new Runnable() {
             public void run() {
                 int exitValue = Executor.INVALID_EXITVALUE;
                 try {
-                    exitValue = executeInternal(command, environment, workingDirectory, streamHandler);
+                    exitValue = executeInternal(command, condition, environment, workingDirectory, streamHandler);
                     handler.onProcessComplete(exitValue);
                 } catch (ExecuteException e) {
                     handler.onProcessFailed(e);
@@ -207,6 +208,9 @@ public class DefaultExecutor implements Executor {
         this.executorThread = createThread(runnable, "Exec Default Executor");
         getExecutorThread().start();
+
+        // Wait until the thread tells us we have actually started
+        condition.sleep();
     }

     /** @see org.apache.commons.exec.Executor#setExitValue(int) */
@@ -316,6 +320,33 @@ public class DefaultExecutor implements Executor {
     }

     /**
+     * Simple thread notify mechanism.
+     */
+    private static final class SimpleCondition {
+        private final Object lock = new Object();
+        private volatile boolean notified = false;
+
+        public void sleep() {
+            try {
+                synchronized (lock) {
+                    while (!notified) {
+                        lock.wait();
+                    }
+                }
+            } catch (InterruptedException e) {
+                Thread.currentThread().interrupt();
+            }
+        }
+
+        public void wakeup() {
+            synchronized (lock) {
+                notified = true;
+                lock.notifyAll();
+            }
+        }
+    }
+
+    /**
      * Execute an internal process. If the executing thread is interrupted while waiting for the
      * child process to return the child process will be killed.
      *
@@ -326,12 +357,20 @@ public class DefaultExecutor implements Executor {
      * @return the exit code of the process
      * @throws IOException executing the process failed
      */
-    private int executeInternal(final CommandLine command, final Map environment,
-            final File dir, final ExecuteStreamHandler streams) throws IOException {
+    private int executeInternal(final CommandLine command, final SimpleCondition cond,
+            final Map environment, final File dir, final ExecuteStreamHandler streams)
+            throws IOException {
         setExceptionCaught(null);
-        final Process process = this.launch(command, environment, dir);
+        final Process process;
+        try {
+            process = this.launch(command, environment, dir);
+        } finally {
+            // If necessary, let our parent know that the process has launched
+            if (cond != null)
+                cond.wakeup();
+        }

         try {
             streams.setProcessInputStream(process.getOutputStream());
{code}

was (Author: msandiford): Patch for test case from https://github.com/msandiford/commons-exec commit 0c7e9457f31ea0322d18dcb157d5192a25f0e490

{code:title=fix.diff|borderStyle=solid}
--- a/src/main/java/org/apache/commons/exec/DefaultExecutor.java
+++ b/src/main/java/org/apache/commons/exec/DefaultExecutor.java
@@ -161,7 +161,7 @@ public class DefaultExecutor implements Executor {
             throw new IOException(workingDirectory + " doesn't exist.");
         }
-        return executeInternal(command, environment, workingDirectory, streamHandler);
+        return executeInternal(command, null, environment, workingDirectory, streamHandler);
     }
@@ -189,13 +189,14 @@ public class DefaultExecutor implements Executor {
             watchdog.setProcessNotStarted();
         }
+        final SimpleCondition condition = new SimpleCondition();
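The hand-off at the heart of the patch works like this: the caller blocks in sleep() until the worker thread has attempted to launch the child process and calls wakeup(), closing the race between execute(cmd, handler) returning and the process actually starting. SimpleConditionDemo and its worker body are illustrative, not part of the patch.

```java
// Stand-alone version of the patch's SimpleCondition.
final class SimpleCondition {
    private final Object lock = new Object();
    private volatile boolean notified = false;

    public void sleep() {
        try {
            synchronized (lock) {
                while (!notified) {
                    lock.wait();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void wakeup() {
        synchronized (lock) {
            notified = true;
            lock.notifyAll();
        }
    }
}

public class SimpleConditionDemo {
    public static void main(String[] args) {
        SimpleCondition started = new SimpleCondition();
        Thread worker = new Thread(() -> {
            // ... this is where the child process would be launched ...
            started.wakeup(); // signal the parent; the patch does this in a finally block
        });
        worker.start();
        started.sleep(); // returns only once the worker has signalled
        System.out.println("started");
    }
}
```

Putting wakeup() in a finally block, as the patch does, guarantees the parent is released even when launching the process throws.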
[jira] [Issue Comment Edited] (CONFIGURATION-481) Variable interpolation across files broken in 1.7 & 1.8
[ https://issues.apache.org/jira/browse/CONFIGURATION-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13216838#comment-13216838 ] Jim Prantzalos edited comment on CONFIGURATION-481 at 2/26/12 9:03 PM: --- Added a test case that demonstrates the issue using 1.6, 1.7, and 1.8. was (Author: dprantzalos): A test case that demonstrates the issue using 1.6, 1.7, and 1.8. Variable interpolation across files broken in 1.7 & 1.8 --- Key: CONFIGURATION-481 URL: https://issues.apache.org/jira/browse/CONFIGURATION-481 Project: Commons Configuration Issue Type: Bug Components: Interpolation Affects Versions: 1.7, 1.8 Environment: Any OS, but have verified with Windows 7 and AIX 6.1, running Java 1.6.0. Reporter: Jim Prantzalos Attachments: ApacheBug-CONFIGURATION-481.7z With Commons Configuration 1.6, I was able to declare a variable in a properties file and then reference it in an XML file using the $\{myvar\} syntax. For example:

global.properties:
{noformat}myvar=abc{noformat}

test.xml:
{code:xml}
<products>
  <product name="abc">
    <desc>${myvar}-product</desc>
  </product>
</products>
{code}

config.xml:
{code:xml}
<properties fileName="global.properties"/>
<xml fileName="test.xml" config-name="test">
  <expressionEngine config-class="org.apache.commons.configuration.tree.xpath.XPathExpressionEngine"/>
</xml>
{code}

When I try to retrieve the value, like so:
{code}combinedConfig.getConfiguration("test").configurationAt("products/product[@name='abc']", true).getString("desc"){code}
I get $\{myvar\}-product instead of abc-product. This was working in Commons Configuration 1.6 but seems to be broken in 1.7 and 1.8. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
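The behavior the reporter expects can be modeled in a few lines of plain Java; Interpolate is an illustrative stand-in, not how Commons Configuration implements interpolation, but it pins down what "abc-product" from "${myvar}-product" means when the variables come from another source.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Model of cross-file interpolation: ${name} placeholders in one
// configuration source resolve against values loaded from another.
public class Interpolate {
    static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String interpolate(String value, Map<String, String> vars) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Unknown variables are left as literal ${name} text.
            String replacement = vars.getOrDefault(m.group(1), m.group());
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> globals = new HashMap<>();
        globals.put("myvar", "abc"); // as loaded from global.properties
        System.out.println(interpolate("${myvar}-product", globals)); // prints abc-product
    }
}
```

The bug report amounts to the library returning the un-interpolated left-hand form where 1.6 returned the right-hand one.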
[jira] [Issue Comment Edited] (SANDBOX-397) [BeanUtils2] Replace NullPointerExceptions being thrown in DefaultBeanAccessor with NoSuchMethodExceptions
[ https://issues.apache.org/jira/browse/SANDBOX-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13216947#comment-13216947 ] Benedikt Ritter edited comment on SANDBOX-397 at 2/26/12 9:59 PM: -- I've thought about the {{Properties}} class some more and I think that it is the right thing to do. So I've created a patch for SANDBOX-399 that will fix the problems that {{Properties}} caused. I've done some more clean-up and was able to eliminate the dependency between {{DefaultBeanAccessor}} and {{PropertiesRegistry}}. {{PropertiesRegistry}} is on a lower abstraction level than {{DefaultBeanAccessor}}, so I think that is a good thing :) If SANDBOX-399 is applied first, all tests will pass when SANDBOX-397-SRPv2.txt is applied. Regards, Benedikt was (Author: britter): I've thought about the {{Properties}} class some more and I think, that it is the right thing to do. So I've created a patch for SANDBOX-399 that will fix the problems that {{Properties}} caused. I've done some more clean up and I was able to eliminate the dependency between {{DefaultBeanAccessor}} and {{PropertiesRegistry}}. {{PropertiesRegistry}} is on a lower abstraction level than {{DefaultBeanAccessor}} so I think that is a good thing :) If SANDBOX-399 is applied first, all tests will pass when SANDBOX-398-SRPv2.txt is applied. Regards, Benedikt [BeanUtils2] Replace NullPointerExceptions being thrown in DefaultBeanAccessor with NoSuchMethodExceptions Key: SANDBOX-397 URL: https://issues.apache.org/jira/browse/SANDBOX-397 Project: Commons Sandbox Issue Type: Task Components: BeanUtils2 Affects Versions: Nightly Builds Reporter: Benedikt Ritter Attachments: SANDBOX-397.txt, SANDBOX-397_SRP.txt, SANDBOX-397_SRPv2.txt At the moment, methods in {{DefaultBeanAccessor}} throw a {{NullPointerException}} if no {{PropertyDescriptor}} for a given property name can be retrieved.
As discussed on the ML (see http://markmail.org/thread/zlehclmybp5xgn5n) this behavior should be changed to throwing {{NoSuchMethodException}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
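The requested behavior change can be sketched with the standard java.beans introspection API; PropertyLookup and its Sample bean are illustrative, not the actual BeanUtils2 code.

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class PropertyLookup {
    // Example bean: exposes a single read-only property "count".
    public static class Sample {
        public int getCount() { return 42; }
    }

    // When no PropertyDescriptor matches the requested name, fail with a
    // descriptive NoSuchMethodException instead of letting a
    // NullPointerException escape from a null-descriptor dereference.
    static PropertyDescriptor find(Class<?> beanClass, String name)
            throws IntrospectionException, NoSuchMethodException {
        for (PropertyDescriptor pd
                : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
            if (pd.getName().equals(name)) {
                return pd;
            }
        }
        throw new NoSuchMethodException(
                "No property '" + name + "' on " + beanClass.getName());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(find(Sample.class, "count").getName()); // prints count
    }
}
```

The caller then gets an exception naming the missing property and bean class, which is what the ML thread argues for.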
[jira] [Issue Comment Edited] (MATH-753) Better implementation for the gamma distribution density function
[ https://issues.apache.org/jira/browse/MATH-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13216529#comment-13216529 ] Sébastien Brisard edited comment on MATH-753 at 2/25/12 8:19 PM: - This suits me fine. The only concern I have is that using exp(log( x )) in place of x might incur a loss of accuracy. Maybe we should use this substitution only when it is necessary (for large values of alpha and beta). This would require a little bit of investigation to find the appropriate thresholds. was (Author: celestin): This suits me fine. The only concern I have is that using exp(log(x)) in place of x might incur a loss of accuracy. Maybe we should use this substitution only when it is necessary (for large values of alpha and beta). This would require a little bit of investigation to find the appropriate thresholds. Better implementation for the gamma distribution density function - Key: MATH-753 URL: https://issues.apache.org/jira/browse/MATH-753 Project: Commons Math Issue Type: Improvement Affects Versions: 2.2 Reporter: Francesco Strino Priority: Minor Labels: improvement, stability Fix For: 2.2 The way the density of the gamma distribution function is estimated can be improved. It's much more stable to calculate first the log of the density and then exponentiate, otherwise the function returns NaN for high values of the parameters alpha and beta. It would be sufficient to change the public double density(double x) function at line 204 in the file org.apache.commons.math.distribution.GammaDistributionImpl as follows: return Math.exp(Math.log( x )*(alpha-1) - Math.log(beta)*alpha - x/beta - Gamma.logGamma(alpha)); In order to improve performance, log(beta) and Gamma.logGamma(alpha) could also be precomputed and stored during initialization. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
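The proposed log-space computation can be sketched as follows. The logGamma below is a self-contained Lanczos approximation standing in for commons-math's Gamma.logGamma, and the shape/scale parameterization follows the formula quoted in the report.

```java
public class GammaDensity {
    // Lanczos approximation of log Gamma (g = 7, nine coefficients),
    // a stand-in for Gamma.logGamma in this sketch.
    static double logGamma(double z) {
        final double[] c = {
            0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7
        };
        double zz = z - 1.0;
        double x = c[0];
        for (int i = 1; i < c.length; i++) {
            x += c[i] / (zz + i);
        }
        double t = zz + 7.5;
        return 0.5 * Math.log(2 * Math.PI) + (zz + 0.5) * Math.log(t) - t + Math.log(x);
    }

    // The proposed density: compute the log of the density first, then
    // exponentiate once, instead of multiplying pow() and Gamma() terms
    // that individually overflow for large alpha and beta.
    static double density(double x, double alpha, double beta) {
        return Math.exp(Math.log(x) * (alpha - 1) - Math.log(beta) * alpha
                - x / beta - logGamma(alpha));
    }

    public static void main(String[] args) {
        // Parameters large enough to make the naive form return NaN.
        System.out.println(density(400.0, 350.0, 1.5));
    }
}
```

As the report notes, Math.log(beta) and logGamma(alpha) are constants of the distribution and could be precomputed once at construction time.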
[jira] [Issue Comment Edited] (JXPATH-152) Concurrent access on hashmap of JXPathIntrospector
[ https://issues.apache.org/jira/browse/JXPATH-152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13215899#comment-13215899 ] Matt Benson edited comment on JXPATH-152 at 2/24/12 8:57 PM: - It would seem that synchronizing each {{HashMap#put()}} call against the target map would defend against the worst-case scenarios you suggest. Thanks for the report! {quote} Committed revision 1293412. {quote} was (Author: mbenson): It would seem that synchronizing access to the {{HashMap}}s' {{#put}} calls would defend against the worst-case scenarios you suggest. Thanks for the report! {quote} Committed revision 1293412. {quote} Concurrent access on hashmap of JXPathIntrospector -- Key: JXPATH-152 URL: https://issues.apache.org/jira/browse/JXPATH-152 Project: Commons JXPath Issue Type: Bug Affects Versions: 1.3 Environment: Java5, Windows/AIX Reporter: pleutre Assignee: Matt Benson Priority: Minor Fix For: 1.4 Original Estimate: 24h Remaining Estimate: 24h The JXPathIntrospector.registerDynamicClass method can be called from the static initializers of classes. If two classes A & B try to registerDynamicClass at the same time, a concurrent access exception can occur on the hashmaps of JXPathIntrospector. The fix is to replace the hashmaps with concurrent hashmaps or to synchronize access to them. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
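Both remedies discussed in the ticket can be sketched with plain collections; HandlerRegistry and its fields are illustrative stand-ins, not the actual JXPathIntrospector internals.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class HandlerRegistry {
    // Option 1: a ConcurrentHashMap makes individual put()/get() calls
    // safe for concurrent callers, e.g. static initializers racing to
    // register dynamic classes.
    private final Map<Class<?>, Object> byClass = new ConcurrentHashMap<>();

    void register(Class<?> beanClass, Object handler) {
        byClass.put(beanClass, handler);
    }

    Object lookup(Class<?> beanClass) {
        return byClass.get(beanClass);
    }

    // Option 2: keep a plain map but synchronize each put() against the
    // target map, the approach described in the comment above.
    static void putSynchronized(Map<Class<?>, Object> map, Class<?> key, Object value) {
        synchronized (map) {
            map.put(key, value);
        }
    }
}
```

Option 1 also protects readers; option 2 only serializes writers, so it defends against the corrupting concurrent put() scenario while leaving unsynchronized reads best-effort.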
[jira] [Issue Comment Edited] (IO-173) FileUtils.listFiles() doesn't return directories
[ https://issues.apache.org/jira/browse/IO-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13215308#comment-13215308 ] Marcos Vinícius da Silva edited comment on IO-173 at 2/24/12 2:02 AM: -- I'm working on a patch for this issue. Should I include a new method which lists the directories too, or keep it as is and just change the javadocs to point out the change? Something like this:

{code}
public static Collection<File> listFilesAndDirs(
        File directory, IOFileFilter fileFilter, IOFileFilter dirFilter) {
    ...
}
{code}

with the proper javadocs. Actually the patch is with the second option.

was (Author: detinho): I'm working on a patch for this issue. Should I include a new method which lists the directories too, or keep it as is and just change the javadocs to point out the change? Something like this:

{code}
public static Collection<File> listFilesAndDirs(
        File directory, IOFileFilter fileFilter, IOFileFilter dirFilter) {
    ...
}
{code}

with the proper javadocs. Actually the patch is with the first option.

FileUtils.listFiles() doesn't return directories Key: IO-173 URL: https://issues.apache.org/jira/browse/IO-173 Project: Commons IO Issue Type: Improvement Components: Utilities Affects Versions: 1.4 Reporter: François Loison Attachments: IO-173.patch FileUtils.listFiles() returns only files, not directories, so it can't be used to retrieve sub-directories. Some fix could be applied:

{code}
private static void innerListFiles(Collection files, File directory,
        IOFileFilter filter) {
    File[] found = directory.listFiles((FileFilter) filter);
    if (found != null) {
        for (int i = 0; i < found.length; i++) {
            if (found[i].isDirectory()) {
                // fix
                if (addDirectories) {
                    files.add(found[i]);
                }
                // end fix
                innerListFiles(files, found[i], filter);
            } else {
                files.add(found[i]);
            }
        }
    }
}
{code}

-- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
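A self-contained sketch of the proposed behavior using plain java.io recursion; for brevity the fileFilter/dirFilter split of the proposed listFilesAndDirs is collapsed here into one filter plus a boolean, so the names below are illustrative rather than the Commons IO API.

```java
import java.io.File;
import java.io.FileFilter;
import java.util.ArrayList;
import java.util.Collection;

public class ListFilesAndDirs {
    // Recurse into directories and, when addDirectories is set, collect
    // them alongside the files -- the change requested in IO-173.
    static void innerListFiles(Collection<File> files, File directory,
            FileFilter filter, boolean addDirectories) {
        File[] found = directory.listFiles(filter);
        if (found != null) {
            for (File f : found) {
                if (f.isDirectory()) {
                    if (addDirectories) {
                        files.add(f); // previously directories were skipped
                    }
                    innerListFiles(files, f, filter, addDirectories);
                } else {
                    files.add(f);
                }
            }
        }
    }

    public static void main(String[] args) {
        Collection<File> results = new ArrayList<>();
        innerListFiles(results, new File("."), f -> true, true);
        System.out.println(results.size() + " entries found");
    }
}
```

Keeping the flag (or a second method, as the patch does) preserves the old files-only behavior for existing callers.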
[jira] [Issue Comment Edited] (COMPRESS-177) TarArchiveInputStream rejects valid TAR file
[ https://issues.apache.org/jira/browse/COMPRESS-177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13212657#comment-13212657 ] Gili edited comment on COMPRESS-177 at 2/21/12 3:29 PM: I filed a separate issue for the wrong exception type being thrown: https://issues.apache.org/jira/browse/COMPRESS-178 was (Author: cowwoc): I filed a second issue: https://issues.apache.org/jira/browse/COMPRESS-178 TarArchiveInputStream rejects valid TAR file Key: COMPRESS-177 URL: https://issues.apache.org/jira/browse/COMPRESS-177 Project: Commons Compress Issue Type: Bug Components: Archivers Affects Versions: 1.3 Reporter: Gili Issue originally reported at http://java.net/jira/browse/TRUEZIP-219
# Download http://sourceforge.net/projects/boost/files/boost/1.48.0/boost_1_48_0.tar.gz?use_mirror=autoselect
# I invoke Files.newDirectoryStream() on a TPath pointing to the resulting .tar.gz file
# The following exception is thrown:
{code}
java.lang.IllegalArgumentException: Invalid byte -1 at offset 7 in 'some bytes' len=8
    at org.apache.commons.compress.archivers.tar.TarUtils.parseOctal(TarUtils.java:86)
    at org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:790)
    at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:308)
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:198)
    at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextEntry(TarArchiveInputStream.java:380)
    at de.schlichtherle.truezip.fs.archive.tar.TarInputShop.<init>(TarInputShop.java:91)
    at de.schlichtherle.truezip.fs.archive.tar.TarDriver.newTarInputShop(TarDriver.java:159)
    at de.schlichtherle.truezip.fs.archive.tar.TarGZipDriver.newTarInputShop(TarGZipDriver.java:82)
    at de.schlichtherle.truezip.fs.archive.tar.TarDriver.newInputShop(TarDriver.java:151)
    at de.schlichtherle.truezip.fs.archive.tar.TarDriver.newInputShop(TarDriver.java:47)
    at de.schlichtherle.truezip.fs.archive.FsDefaultArchiveController.mount(FsDefaultArchiveController.java:170)
    at de.schlichtherle.truezip.fs.archive.FsFileSystemArchiveController$ResetFileSystem.autoMount(FsFileSystemArchiveController.java:98)
    at de.schlichtherle.truezip.fs.archive.FsFileSystemArchiveController.autoMount(FsFileSystemArchiveController.java:47)
    at de.schlichtherle.truezip.fs.archive.FsArchiveController.autoMount(FsArchiveController.java:129)
    at de.schlichtherle.truezip.fs.archive.FsArchiveController.getEntry(FsArchiveController.java:160)
    at de.schlichtherle.truezip.fs.archive.FsContextController.getEntry(FsContextController.java:117)
    at de.schlichtherle.truezip.fs.FsDecoratingController.getEntry(FsDecoratingController.java:76)
    at de.schlichtherle.truezip.fs.FsDecoratingController.getEntry(FsDecoratingController.java:76)
    at de.schlichtherle.truezip.fs.FsConcurrentController.getEntry(FsConcurrentController.java:164)
    at de.schlichtherle.truezip.fs.FsSyncController.getEntry(FsSyncController.java:108)
    at de.schlichtherle.truezip.fs.FsFederatingController.getEntry(FsFederatingController.java:156)
    at de.schlichtherle.truezip.nio.file.TFileSystem.newDirectoryStream(TFileSystem.java:348)
    at de.schlichtherle.truezip.nio.file.TPath.newDirectoryStream(TPath.java:963)
    at de.schlichtherle.truezip.nio.file.TFileSystemProvider.newDirectoryStream(TFileSystemProvider.java:344)
    at java.nio.file.Files.newDirectoryStream(Files.java:400)
    at com.googlecode.boostmavenproject.GetSourcesMojo.convertToJar(GetSourcesMojo.java:248)
    at com.googlecode.boostmavenproject.GetSourcesMojo.download(GetSourcesMojo.java:221)
    at com.googlecode.boostmavenproject.GetSourcesMojo.execute(GetSourcesMojo.java:111)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    ... 20 more
{code}
Christian Schlichtherle (the TrueZip author) expects Commons Compress to throw IOException instead of IllegalArgumentException.
I am expecting no exception to be thrown because as far as I can tell the TAR file is valid (opens up in WinRar and Ubuntu's built-in Archiver). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
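A plausible reading of the failure above: "Invalid byte -1 at offset 7" in an 8-byte numeric field suggests the header uses GNU tar's base-256 (binary) encoding, which sets the high bit of the first byte, while parseOctal in Compress 1.3 only accepts ASCII octal digits. This is an inference, not a confirmed diagnosis; the sketch below (hypothetical class and method names, not the Commons Compress code) shows a parser that accepts both encodings:

```java
import java.math.BigInteger;
import java.util.Arrays;

public class TarFieldSketch {
    /** Parse a tar header numeric field, accepting ASCII octal and GNU base-256. */
    static long parseNumeric(byte[] buf, int off, int len) {
        if ((buf[off] & 0x80) != 0) {
            // GNU base-256: high bit of the first byte marks a big-endian binary number
            byte[] tmp = Arrays.copyOfRange(buf, off, off + len);
            tmp[0] = (byte) (tmp[0] & 0x7f);          // clear the marker bit
            return new BigInteger(1, tmp).longValueExact();
        }
        int i = off;
        while (i < off + len && buf[i] == ' ') i++;   // skip leading spaces
        long result = 0;
        for (; i < off + len; i++) {
            byte b = buf[i];
            if (b == 0 || b == ' ') break;            // trailing NUL/space terminates
            if (b < '0' || b > '7') {
                throw new IllegalArgumentException(
                        "Invalid byte " + b + " at offset " + (i - off));
            }
            result = result * 8 + (b - '0');
        }
        return result;
    }

    public static void main(String[] args) {
        byte[] octal = "00000001234\0".getBytes();    // octal 1234 = 668 decimal
        System.out.println(parseNumeric(octal, 0, 12));   // prints 668
        byte[] base256 = new byte[12];
        base256[0] = (byte) 0x80;                     // base-256 marker
        base256[11] = 42;
        System.out.println(parseNumeric(base256, 0, 12)); // prints 42
    }
}
```

Later Commons Compress releases did grow support for non-octal size encodings; the exact fix shipped upstream may differ from this sketch.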
[jira] [Issue Comment Edited] (SANDBOX-396) [BeanUtils2] Implement clone() on DefaultBeanAccessor
[ https://issues.apache.org/jira/browse/SANDBOX-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13212981#comment-13212981 ]

Simone Tripodi edited comment on SANDBOX-396 at 2/21/12 9:42 PM:

patch looks good, anyway:

{code}
+@SuppressWarnings( "unchecked" )
+B clone = (B) bean.getClass().newInstance();
+DefaultBeanAccessor<B> cloneAccessor = new DefaultBeanAccessor<B>( clone );
+cloneAccessor.populate( this.describe() );
+return clone;
{code}

* it is good practice to add a comment that justifies why the unchecked warning can be suppressed;
* the {{this.}} prefix can be omitted.

We could even reuse the {{BeanUtils.on( beanType ).newInstance()}} chain, which would help remove the duplicated code. Can you please apply these modifications and resubmit the patch? TIA, -Simo

was (Author: simone.tripodi):
patch looks good, anyway:

{quote}
+@SuppressWarnings( "unchecked" )
+B clone = (B) bean.getClass().newInstance();
+DefaultBeanAccessor<B> cloneAccessor = new DefaultBeanAccessor<B>( clone );
+cloneAccessor.populate( this.describe() );
+return clone;
{quote}

* it is good practice to add a comment that justifies why the unchecked warning can be suppressed;
* the {{this.}} prefix can be omitted.

We could even reuse the {{BeanUtils.on( beanType ).newInstance()}}, which would help remove the duplicated code. Can you please apply these modifications and resubmit the patch? TIA, -Simo

[BeanUtils2] Implement clone() on DefaultBeanAccessor
Key: SANDBOX-396
URL: https://issues.apache.org/jira/browse/SANDBOX-396
Project: Commons Sandbox
Issue Type: Improvement
Components: BeanUtils2
Affects Versions: Nightly Builds
Reporter: Benedikt Ritter
Attachments: SANDBOX-396.txt

Implement {{clone()}} on DefaultBeanAccessor:
* create a new instance of the same type as the bean encapsulated by the Accessor
* create a {{DefaultBeanAccessor}} for the new instance
* call populate on the new {{DefaultBeanAccessor}} with {{this.describe()}} as argument
* return the clone
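The describe/populate clone pattern and the requested justifying comment on the suppression can be illustrated with a self-contained sketch. This is not the BeanUtils2 code: `BeanAccessorSketch` and its `describe()`/`populate()` are hypothetical stand-ins built on java.beans reflection, just to show the shape of the fix the review asks for:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for BeanUtils2's DefaultBeanAccessor.
public class BeanAccessorSketch<B> {
    private final B bean;

    public BeanAccessorSketch(B bean) { this.bean = bean; }

    /** Copy all readable properties into a map (a stand-in for describe()). */
    public Map<String, Object> describe() throws Exception {
        Map<String, Object> props = new LinkedHashMap<>();
        for (PropertyDescriptor pd
                : Introspector.getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors()) {
            if (pd.getReadMethod() != null) {
                props.put(pd.getName(), pd.getReadMethod().invoke(bean));
            }
        }
        return props;
    }

    /** Write the given property values onto the wrapped bean. */
    public void populate(Map<String, Object> props) throws Exception {
        for (PropertyDescriptor pd
                : Introspector.getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors()) {
            if (pd.getWriteMethod() != null && props.containsKey(pd.getName())) {
                pd.getWriteMethod().invoke(bean, props.get(pd.getName()));
            }
        }
    }

    // The cast is safe because bean.getClass() is the runtime class of B, so a new
    // instance of it is assignable to B -- this comment is what the review asks for.
    @SuppressWarnings("unchecked")
    public B cloneBean() throws Exception {
        B clone = (B) bean.getClass().getDeclaredConstructor().newInstance();
        BeanAccessorSketch<B> cloneAccessor = new BeanAccessorSketch<>(clone);
        cloneAccessor.populate(describe());
        return clone;
    }

    // Minimal bean to demonstrate the round trip.
    public static class Point {
        private int x;
        public int getX() { return x; }
        public void setX(int x) { this.x = x; }
    }

    public static void main(String[] args) throws Exception {
        Point p = new Point();
        p.setX(7);
        Point q = new BeanAccessorSketch<Point>(p).cloneBean();
        System.out.println(q.getX()); // prints 7
    }
}
```

Note the sketch uses `getDeclaredConstructor().newInstance()` rather than the deprecated `Class.newInstance()`; the patch under review predates that deprecation.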
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210186#comment-13210186 ]

Lars Kolb edited comment on IO-288 at 2/17/12 11:20 AM:

Works like a charm, thanks Georg. I had a similar requirement: I wanted to read an HDFS file backwards chunk by chunk, to allow file browsing similar to the Hadoop namenode's web interface. By clicking a button, a user fetches the previous N lines starting from a specific offset. With a few changes to your ReversedLinesFileReader implementation I was able to implement this functionality.

I would suggest extending your ReversedLinesFileReader to operate on InputStreams and to return the number of consumed bytes (i.e. the number of bytes actually read for line construction, not the number of buffered bytes). This effectively results in a reverse org.apache.hadoop.util.LineReader.

was (Author: weihnachtsmann):
Works like a charm, thanks Georg. I had a similar requirement: I wanted to read an HDFS file backwards chunk by chunk, to allow file browsing similar to the Hadoop namenode's web interface. By clicking a button, a user fetches the previous N lines starting from a specific offset. With a few changes to your ReversedLinesFileReader implementation I was able to implement this functionality.

I would suggest extending your ReversedLinesFileReader to operate on InputStreams and to return the number of consumed bytes (i.e. the number of bytes actually read for line construction, not the number of buffered bytes). This effectively results in a reverse org.apache.hadoop.util.LineReader.
Supply a ReversedLinesFileReader
Key: IO-288
URL: https://issues.apache.org/jira/browse/IO-288
Project: Commons IO
Issue Type: New Feature
Components: Utilities
Reporter: Georg Henzler
Fix For: 2.2
Attachments: ReversedLinesFileReader0.3.zip

I needed to analyse a log file today and I was looking for a ReversedLinesFileReader: a class that behaves exactly like BufferedReader except that it goes from bottom to top when readLine() is called. I didn't find it in IOUtils and the internet didn't help a lot either; e.g. http://www.java2s.com/Tutorial/Java/0180__File/ReversingaFile.htm is fairly inefficient - the log files I'm analysing are huge and it is not a good idea to load the whole content into memory. So I ended up writing an implementation myself using little memory and the class RandomAccessFile - see attached file. It's used as follows:

{code}
int blockSize = 4096; // only that much memory is needed, no matter how big the file is
ReversedLinesFileReader reversedLinesFileReader = new ReversedLinesFileReader(myFile, blockSize, "UTF-8"); // encoding is supported
String line = null;
while ((line = reversedLinesFileReader.readLine()) != null) {
    ... // use the line
    if (enoughLinesSeen) {
        break;
    }
}
reversedLinesFileReader.close();
{code}

I believe this could be useful for other people as well!
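The idea behind the attached class (use RandomAccessFile to walk a file from the end, returning lines in reverse order) can be shown in a minimal self-contained sketch. This is not Georg's attachment: it assumes a single-byte charset and '\n' terminators, seeks byte by byte instead of buffering a block (so it is O(n) seeks, which the real block-buffered design avoids), and skips the multi-byte-encoding boundary handling the real class must do:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;

/** Minimal backwards line reader: illustrates the idea, not the attached code. */
public class ReversedLinesSketch implements AutoCloseable {
    private final RandomAccessFile raf;
    private long pos; // exclusive end index of the next line to return

    public ReversedLinesSketch(File file) throws IOException {
        raf = new RandomAccessFile(file, "r");
        pos = raf.length();
        if (pos > 0) {                       // ignore one trailing newline so the
            raf.seek(pos - 1);               // last line is not returned as ""
            if (raf.read() == '\n') pos--;
        }
    }

    /** Returns the previous line, or null once the start of the file is reached. */
    public String readLine() throws IOException {
        if (pos <= 0) return null;
        long start = pos;
        while (start > 0) {                  // scan backwards for the previous '\n'
            raf.seek(start - 1);
            if (raf.read() == '\n') break;
            start--;
        }
        byte[] buf = new byte[(int) (pos - start)];
        raf.seek(start);
        raf.readFully(buf);
        pos = start - 1;                     // also skip the '\n' itself
        return new String(buf, "ISO-8859-1");
    }

    @Override
    public void close() throws IOException { raf.close(); }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("revdemo", ".txt");
        f.deleteOnExit();
        try (FileWriter w = new FileWriter(f)) { w.write("first\nsecond\nthird\n"); }
        try (ReversedLinesSketch r = new ReversedLinesSketch(f)) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);    // prints third, second, first
            }
        }
    }
}
```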
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:41 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
FSDataInputStream is = FileSystem.get(conf).open(path); // get(conf) returns a FileSystem, not a stream
is.seek(offset);

//constructor takes (positioned) input stream
public ReversedLinesReader(InputStream is) {
    //do not seek to end of file, simply start reading from is
}

public int readLine(Text text) {
    //return bytes read and store line in text
    //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);

//constructor takes (positioned) input stream
public ReversedLinesFileReader(InputStream is) {
    //do not seek to end of file, simply start reading from is
}

public int readLine(Text text) {
    //return bytes read and store line in text
    //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:46 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);

//constructor takes (positioned) input stream
public ReversedLinesReader(InputStream is) {
    //do not seek to end of file, simply start reading from is
}

public int readLine(Text text) {
    //return bytes read and store line in text
    //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:50 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    //constructor takes a file
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:51 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //simply start reading from is
    }
    //current behaviour
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    //current behaviour
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:50 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    //current behaviour
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //do not seek to end of file, simply start reading from is
    }
    //constructor takes a file
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:52 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        //simply start reading from (positioned) is
    }
    public ReversedLinesReader(File file) {
        //current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        //current behaviour
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    //constructor takes (positioned) input stream
    public ReversedLinesReader(InputStream is) {
        //simply start reading from is
    }
    //current behaviour
    public ReversedLinesReader(File file) {
        //seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
}
{code}
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210212#comment-13210212 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:57 PM:

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
long bytesConsumedTotal = 0L;
while ((bytesConsumed = reader.readLine(str)) > 0 && bytesConsumedTotal < threshold) {
    //...
    bytesConsumedTotal += bytesConsumed;
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        //simply start reading from (positioned) is
    }
    public ReversedLinesReader(File file) {
        //current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        //current behaviour
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
while ((bytesConsumed = reader.readLine(str)) > 0) {
    ...
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        //simply start reading from (positioned) is
    }
    public ReversedLinesReader(File file) {
        //current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        //return bytes read and store line in text
        //alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        //current behaviour
    }
}
{code}
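The proposed readLine-with-byte-count API can be sketched with stdlib types only. Here `LineReaderSketch` and its method names are hypothetical, `StringBuilder` stands in for org.apache.hadoop.io.Text, and the stream is assumed to be already positioned at the desired offset, as in the comment's pseudocode:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Sketch of the proposed API: read one line from an already-positioned stream
 *  and report how many bytes were consumed, including the '\n' terminator. */
public class LineReaderSketch {
    public static int readLine(InputStream in, StringBuilder out) throws IOException {
        out.setLength(0);
        int consumed = 0;
        int b;
        while ((b = in.read()) != -1) {
            consumed++;
            if (b == '\n') break;        // terminator counts as consumed
            if (b != '\r') out.append((char) b); // drop CR from the text, still count it
        }
        return consumed;                 // 0 means end of stream
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("one\ntwo\n".getBytes());
        StringBuilder line = new StringBuilder();
        int n;
        long total = 0;
        while ((n = readLine(in, line)) > 0) {
            total += n;
            System.out.println(line + " (" + n + " bytes)");
        }
        System.out.println("total=" + total); // prints total=8
    }
}
```

Returning bytes consumed (rather than buffered) is what makes the caller's `bytesConsumedTotal < threshold` loop above meaningful: the caller can stop exactly at a byte budget and resume from `offset + total` later.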
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210186#comment-13210186 ]

Lars Kolb edited comment on IO-288 at 2/17/12 12:58 PM:

Works like a charm, thanks Georg. I had a similar requirement: I wanted to read an HDFS file backwards chunk by chunk, to allow file browsing similar to the Hadoop namenode's web interface. By clicking a button, a user fetches the previous N lines starting from a specific offset. With a few changes to your ReversedLinesFileReader implementation I was able to implement this functionality.

I would suggest extending your ReversedLinesFileReader to operate on InputStreams and to return the number of consumed bytes (i.e. the number of bytes actually read for line construction, not the number of buffered bytes). This effectively results in a reverse org.apache.hadoop.util.LineReader.

was (Author: weihnachtsmann):
Works like a charm, thanks Georg. I had a similar requirement: I wanted to read an HDFS file backwards chunk by chunk, to allow file browsing similar to the Hadoop namenode's web interface. By clicking a button, a user fetches the previous N lines starting from a specific offset. With a few changes to your ReversedLinesFileReader implementation I was able to implement this functionality.

I would suggest extending your ReversedLinesFileReader to operate on InputStreams and to return the number of consumed bytes (i.e. the number of bytes actually read for line construction, not the number of buffered bytes). This effectively results in a reverse org.apache.hadoop.util.LineReader.
[jira] [Issue Comment Edited] (IO-288) Supply a ReversedLinesFileReader
[ https://issues.apache.org/jira/browse/IO-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13210212#comment-13210212 ] Lars Kolb edited comment on IO-288 at 2/17/12 1:06 PM: ---

* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
long bytesConsumedTotal = 0L;
while (bytesConsumedTotal < threshold && (bytesConsumed = reader.readLine(str)) > 0) {
    // ...
    bytesConsumedTotal += bytesConsumed;
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        // simply start reading from (positioned) is
    }
    public ReversedLinesReader(File file) {
        // current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        // return bytes read and store line in text
        // alternatively one could return a Pair<String, Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        // current behaviour
    }
}
{code}

was (Author: weihnachtsmann):
* [Web|http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java?view=log]
* [SVN|http://hadoop.apache.org/common/version_control.html]

Basically, it would be required to support:

{code}
Text str = new Text();
FSDataInputStream is = FileSystem.get(conf).open(path);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
long bytesConsumedTotal = 0L;
while ((bytesConsumed = reader.readLine(str)) > 0 && bytesConsumedTotal < threshold) {
    // ...
    bytesConsumedTotal += bytesConsumed;
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        // simply start reading from (positioned) is
    }
    public ReversedLinesReader(File file) {
        // current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        // return bytes read and store line in text
        // alternatively one could return a Pair<String, Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        // current behaviour
    }
}
{code}
[jira] [Issue Comment Edited] (IO-303) TeeOutputStream fails executing branch.close() when main.close() raised an exception
[ https://issues.apache.org/jira/browse/IO-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13210346#comment-13210346 ] Fabian Barney edited comment on IO-303 at 2/17/12 5:51 PM: ---

In my code I prefer throwing the first one. There is one exception: when a latter Throwable occurs and it is an Error and the former is not, then in my opinion that is the Throwable you want to see. Another approach is to throw something like a MultiIOException containing all occurred exceptions. I agree that all this is not a real pleasure, but it is better than leaving resources open that can be closed successfully. I've written a MultiOutputStream yesterday: https://github.com/fabian-barney/Utils/blob/master/utils/src/com/barney4j/utils/io/MultiOutputStream.java I am not sure myself that I made the right decision here.

was (Author: fabian.barney): In my code I prefer throwing the first one. There is one exception when a latter Throwable occurrs and it is an Error and the former not. In my opinion this is the Throwable you want to see. Another approach is to throw something like a MultiIOException containing all occurred exceptions. I agree that this all is not a real pleasure, but better than leaving resources open that might be closed successfully. I've written a MultiOutputStream yesterday: https://github.com/fabian-barney/Utils/blob/master/utils/src/com/barney4j/utils/io/MultiOutputStream.java I am not sure for myself that I made the right decision here.

TeeOutputStream fails executing branch.close() when main.close() raised an exception

Key: IO-303
URL: https://issues.apache.org/jira/browse/IO-303
Project: Commons IO
Issue Type: Bug
Components: Streams/Writers
Affects Versions: 2.1
Reporter: Fabian Barney
Labels: close, stream

TeeOutputStream.close() looks like this:

{code:title=TeeOutputStream.java|borderStyle=solid}
/**
 * Closes both streams.
 * @throws IOException if an I/O error occurs
 */
@Override
public void close() throws IOException {
    super.close();
    this.branch.close();
}
{code}

It is obvious that {{this.branch.close()}} is not executed when {{super.close()}} raises an exception. {{super.close()}} may in fact raise an IOException since {{ProxyOutputStream.handleIOException(IOException)}} is not overridden.
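The failure mode reported above is easy to demonstrate without Commons IO. The sketch below uses made-up names (it is not the actual TeeOutputStream code): a toy stream that throws on close() shows that the sequential-close pattern skips the second close, while the try/finally variant does not.

```java
import java.io.IOException;
import java.io.OutputStream;

/** Demonstrates why a two-stream close() must guard the second close with finally. */
public class TeeCloseDemo {

    /** Toy stream that records whether close() ran and can be made to fail. */
    static class RecordingStream extends OutputStream {
        final boolean failOnClose;
        boolean closed;
        RecordingStream(boolean failOnClose) { this.failOnClose = failOnClose; }
        @Override public void write(int b) { /* discard */ }
        @Override public void close() throws IOException {
            closed = true;
            if (failOnClose) throw new IOException("main stream failed to close");
        }
    }

    /** The buggy pattern: branch is skipped if main throws. */
    static boolean branchClosedBuggy() {
        RecordingStream main = new RecordingStream(true);
        RecordingStream branch = new RecordingStream(false);
        try {
            main.close();
            branch.close();        // never reached: main.close() threw
        } catch (IOException expected) { }
        return branch.closed;
    }

    /** The fixed pattern: try/finally guarantees branch.close() runs. */
    static boolean branchClosedFixed() {
        RecordingStream main = new RecordingStream(true);
        RecordingStream branch = new RecordingStream(false);
        try {
            try {
                main.close();
            } finally {
                branch.close();    // runs even though main.close() threw
            }
        } catch (IOException expected) { }
        return branch.closed;
    }

    public static void main(String[] args) {
        System.out.println("buggy pattern closes branch: " + branchClosedBuggy()); // false
        System.out.println("fixed pattern closes branch: " + branchClosedFixed()); // true
    }
}
```

Note that in the fixed pattern the IOException from main.close() still propagates after the finally block runs, so the caller sees the failure and the branch still gets closed.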
[jira] [Issue Comment Edited] (IO-303) TeeOutputStream does not call branch.close() when main.close() throws an exception
[ https://issues.apache.org/jira/browse/IO-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13210625#comment-13210625 ] Fabian Barney edited comment on IO-303 at 2/17/12 10:40 PM:

This is ok, I think. Here's how I would write it, but it is just a matter of taste...

{noformat}
@Override
public void close() throws IOException {
    try {
        super.close();
    } finally {
        this.branch.close();
    }
}
{noformat}

Thanks for addressing this issue that fast!

was (Author: fabian.barney): This is ok I think. Here's how I would write it but it is just a matter of taste...

{noformat}
@Override
public void close() throws IOException {
    try {
        super.close();
    } finally {
        this.branch.close();
    }
}
{noformat}

TeeOutputStream does not call branch.close() when main.close() throws an exception

Key: IO-303
URL: https://issues.apache.org/jira/browse/IO-303
Project: Commons IO
Issue Type: Bug
Components: Streams/Writers
Affects Versions: 2.1
Reporter: Fabian Barney
Assignee: Gary D. Gregory
Labels: close, stream
Fix For: 2.2

TeeOutputStream.close() looks like this:

{code:title=TeeOutputStream.java|borderStyle=solid}
/**
 * Closes both streams.
 * @throws IOException if an I/O error occurs
 */
@Override
public void close() throws IOException {
    super.close();
    this.branch.close();
}
{code}

It is obvious that {{this.branch.close()}} is not executed when {{super.close()}} raises an exception. {{super.close()}} may in fact raise an IOException since {{ProxyOutputStream.handleIOException(IOException)}} is not overridden.
[jira] [Issue Comment Edited] (IO-303) TeeOutputStream does not call branch.close() when main.close() throws an exception
[ https://issues.apache.org/jira/browse/IO-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13210625#comment-13210625 ] Fabian Barney edited comment on IO-303 at 2/17/12 11:03 PM:

Thanks for addressing this issue that fast! Here's how I would write it, but it is just a matter of taste...

{noformat}
@Override
public void close() throws IOException {
    try {
        super.close();
    } finally {
        this.branch.close();
    }
}
{noformat}

was (Author: fabian.barney): This is ok I think. Here's how I would write it but it is just a matter of taste...

{noformat}
@Override
public void close() throws IOException {
    try {
        super.close();
    } finally {
        this.branch.close();
    }
}
{noformat}

Thanks for addressing this issue that fast!
[jira] [Issue Comment Edited] (MATH-746) Things to do before releasing 3.0
[ https://issues.apache.org/jira/browse/MATH-746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209152#comment-13209152 ] Sébastien Brisard edited comment on MATH-746 at 2/16/12 6:53 AM: -

In order to disable the CS scanning of {{BOBYQAOptimizer}}, I had to
* Add the following module to {{checkstyle.xml}}
{code}
<module name="SuppressionCommentFilter">
    <property name="offCommentFormat" value="CHECKSTYLE\: stop all"/>
    <property name="onCommentFormat" value="CHECKSTYLE\: resume all"/>
</module>
{code}
* Insert the following first line in {{BOBYQAOptimizer.java}}: {{// CHECKSTYLE: stop all}}
* On second thoughts, I also inserted the last line in {{BOBYQAOptimizer.java}}: {{// CHECKSTYLE: resume all}}. Not sure it is necessary, but it should not harm.

was (Author: celestin): In order to disable the CS scanning of {{BOBYQAOptimizer}}, I had to
* Add the following module to {{checkstyle.xml}}
{code}
<module name="SuppressionCommentFilter">
    <property name="offCommentFormat" value="CHECKSTYLE\: stop all"/>
    <property name="onCommentFormat" value="CHECKSTYLE\: resume all"/>
</module>
{code}
* Insert the following first line in {{BOBYQAOptimizer.java}}: {{// CHECKSTYLE: stop all}}

Things to do before releasing 3.0

Key: MATH-746
URL: https://issues.apache.org/jira/browse/MATH-746
Project: Commons Math
Issue Type: Task
Reporter: Gilles
Priority: Blocker
Fix For: 3.0

This issue is meant to contain a list of tasks to be completed before the release.
* Remarks to be added to the *release notes*:
** Experimental code: {{BOBYQAOptimizer}}
*** Many code paths untested
*** Looking for volunteers to improve the code readability, robustness and performance
*** Looking for volunteers to extend the test suite
** {{FastMath}} is not always faster than {{Math}} (issue MATH-740)
* Create a release branch
* Disable CheckStyle scanning of {{BOBYQAOptimizer}}: {color:green}done in {{r1244855}}{color} (/)
* Remove unit test class {{BatteryNISTTest}}
* Remove class {{PivotingQRDecomposition}}
[jira] [Issue Comment Edited] (MATH-745) up to 5x Performance Improvement on FasFourierTransformer.java by using a recursive iterative sumation Approach
[ https://issues.apache.org/jira/browse/MATH-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206867#comment-13206867 ] Thomas Neidhart edited comment on MATH-745 at 2/13/12 1:30 PM: ---

I was curious to see if a recursion is really faster than a loop. The test results can be seen here:

fft (calls per timed block: 1000, timed blocks: 100, time unit: ms)
name       time/call       std error       total time  ratio       difference
Loop       6.97587976e-02  1.15353985e-02  6.9759e+03  1.e+00      0.e+00
Recursive  7.11088946e-02  5.04734785e-03  7.1109e+03  1.0194e+00  1.35009693e+02

The function being transformed is a simple sin.

was (Author: tn): I was curious to see if a recursion is really faster than a loop. The test results can be seen here:

fft (calls per timed block: 1000, timed blocks: 100, time unit: ms)
name       time/call       std error       total time  ratio       difference
Loop       6.97587976e-02  1.15353985e-02  6.9759e+03  1.e+00      0.e+00
Recursive  7.11088946e-02  5.04734785e-03  7.1109e+03  1.0194e+00  1.35009693e+02

The function being transformed is sin(x).

up to 5x Performance Improvement on FasFourierTransformer.java by using a recursive iterative sumation Approach
---

Key: MATH-745
URL: https://issues.apache.org/jira/browse/MATH-745
Project: Commons Math
Issue Type: Improvement
Affects Versions: 3.0
Reporter: Leandro Ariel Pezzente
Labels: FFT, Fast, Fourier, Transform
Attachments: FastFourierTransformer.patch.txt

By switching from a loop-iterative approach to a recursive-iterative approach in FastFourierTransformer.java, a performance improvement of up to 5x is gained.
[jira] [Issue Comment Edited] (MATH-745) up to 5x Performance Improvement on FasFourierTransformer.java by using a recursive iterative sumation Approach
[ https://issues.apache.org/jira/browse/MATH-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206867#comment-13206867 ] Thomas Neidhart edited comment on MATH-745 at 2/13/12 1:30 PM: ---

I was curious to see if a recursion is really faster than a loop. The test results can be seen here:

{noformat}
fft (calls per timed block: 1000, timed blocks: 100, time unit: ms)
name       time/call       std error       total time  ratio       difference
Loop       6.97587976e-02  1.15353985e-02  6.9759e+03  1.e+00      0.e+00
Recursive  7.11088946e-02  5.04734785e-03  7.1109e+03  1.0194e+00  1.35009693e+02
{noformat}

The function being transformed is a simple sin.

was (Author: tn): I was curious to see if a recursion is really faster than a loop. The test results can be seen here:

fft (calls per timed block: 1000, timed blocks: 100, time unit: ms)
name       time/call       std error       total time  ratio       difference
Loop       6.97587976e-02  1.15353985e-02  6.9759e+03  1.e+00      0.e+00
Recursive  7.11088946e-02  5.04734785e-03  7.1109e+03  1.0194e+00  1.35009693e+02

The function being transformed is a simple sin.
[jira] [Issue Comment Edited] (MATH-745) up to 5x Performance Improvement on FasFourierTransformer.java by using a recursive iterative sumation Approach
[ https://issues.apache.org/jira/browse/MATH-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206922#comment-13206922 ] Sébastien Brisard edited comment on MATH-745 at 2/13/12 3:54 PM: - Thomas, have you checked how this scales with the data size? Otherwise, I can do it. was (Author: celestin): Thomas, have you checked how this scales with the data size.
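The loop-versus-recursion question in this thread can be explored with a much smaller experiment than the FFT benchmark quoted above. The sketch below is not the MATH-745 patch or the actual benchmark harness; it just times a plain loop against a halve-the-range recursion over the same array, which is the kind of decomposition a recursive FFT uses. Timings vary by machine and carry the usual micro-benchmark caveats (no JIT warm-up or repetition here).

```java
/** Toy comparison of a loop vs. divide-and-conquer recursion over the same array,
 *  in the spirit of the MATH-745 benchmark (this is not the actual FFT code). */
public class LoopVsRecursion {

    static double loopSum(double[] a) {
        double s = 0.0;
        for (double v : a) {
            s += v;
        }
        return s;
    }

    /** Halve the range recursively, like an FFT butterfly decomposition;
     *  recursion depth is only log2(n). */
    static double recursiveSum(double[] a, int from, int to) {
        if (to - from == 1) {
            return a[from];
        }
        int mid = (from + to) >>> 1;
        return recursiveSum(a, from, mid) + recursiveSum(a, mid, to);
    }

    public static void main(String[] args) {
        double[] a = new double[1 << 16];
        for (int i = 0; i < a.length; i++) {
            a[i] = Math.sin(i); // same input for both variants
        }
        long t0 = System.nanoTime();
        double s1 = loopSum(a);
        long t1 = System.nanoTime();
        double s2 = recursiveSum(a, 0, a.length);
        long t2 = System.nanoTime();
        System.out.printf("loop: %.3f ms, recursion: %.3f ms%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        // Both orders sum the same terms; tiny floating-point differences are possible
        // because the association order differs.
        System.out.println("results agree: " + (Math.abs(s1 - s2) < 1e-6));
    }
}
```

Sébastien's scaling question applies directly: rerun with several array sizes before drawing conclusions, since call overhead and cache behaviour shift with n.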
[jira] [Issue Comment Edited] (LANG-462) FastDateFormat supports parse
[ https://issues.apache.org/jira/browse/LANG-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13207247#comment-13207247 ] Felix Müller edited comment on LANG-462 at 2/13/12 9:46 PM:

The merged patch causes build errors because it uses Java 6 features (Commons Lang is still Java 5 compatible): @Override annotations were used to mark implementations of interface methods, which is only possible since Java 6. Also, one test was wrong; I fixed it. Please see attached for the patch of the patch. :-)

was (Author: fmueller): The merged patch causes build errors because of used java 6 features (commons lang is still java 5 comppatible). Override annotation were used to mark implementation of interface methods. That's only possible since java 6. Also one test was wrong. I fixed it. Please see attached for the patch of the patch. :-)

FastDateFormat supports parse

Key: LANG-462
URL: https://issues.apache.org/jira/browse/LANG-462
Project: Commons Lang
Issue Type: New Feature
Components: lang.time.*
Reporter: Franz Wong
Fix For: 3.2
Attachments: DateParser.patch, LANG-462-FormatCache.patch, LANG-462-Hen.patch, LANG-462_buildfix.patch, UseFormatCache.patch, lang462.patch, with_interfaces.patch, with_interfaces2.patch, with_updated_tests.patch

Currently FastDateFormat only supports formatting the ISO8601 time zone; however, it doesn't support parsing such a string to a Date.
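The Java 5/6 distinction behind the build break above is that @Override may annotate an implementation of an interface method only from Java 6 on; under javac -source 1.5 that is a compile error, while @Override on a genuine superclass override is legal at either source level. A small illustration with made-up names:

```java
/** Illustrates the @Override rule change between Java 5 and Java 6.
 *  Under javac -source 1.5, @Override on a method that merely implements an
 *  interface method is a compile error; it is allowed from Java 6 on. */
public class OverrideDemo {

    interface Parser {
        String parse(String input);
    }

    static class UpperCaseParser implements Parser {
        // Java 5 compatible: no @Override on an interface-method implementation.
        public String parse(String input) {
            return input.toUpperCase();
        }
    }

    static class TracingParser extends UpperCaseParser {
        @Override // fine at any source level: this overrides a superclass method
        public String parse(String input) {
            return "[" + super.parse(input) + "]";
        }
    }

    public static void main(String[] args) {
        System.out.println(new TracingParser().parse("io")); // [IO]
    }
}
```

This is why a patch developed against Java 6 can compile locally yet break a build that still targets Java 5, exactly as reported.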
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:07 PM: ---

In this case, for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this?

was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this?

getPrefixLength returns null if filename has leading slashes

Key: IO-299
URL: https://issues.apache.org/jira/browse/IO-299
Project: Commons IO
Issue Type: Bug
Components: Utilities
Affects Versions: 2.0.1, 2.1
Reporter: Rick Latrine
Original Estimate: 2h
Remaining Estimate: 2h

Situation: FilenameUtils.getPrefixLength is used in FilenameUtils.doNormalize. FilenameUtils.normalize(I don't want to become null!) returns null.

Problem: Expected was: I don't want to become null! The method FilenameUtils.getPrefixLength returns -1 for the mentioned string. The root problem is found in the following lines of code:

{code:title=FilenameUtils.getPrefixLength}
...
int posUnix = filename.indexOf(UNIX_SEPARATOR, 2);
int posWin = filename.indexOf(WINDOWS_SEPARATOR, 2);
if ((posUnix == -1 && posWin == -1) || posUnix == 2 || posWin == 2) {
    return -1;
}
...
{code}

Solution: All leading slashes should be ignored, while still considering the rest of the string.
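The -1 branch described in the report can be reproduced standalone. The sketch below mirrors only the two-leading-separator code path of getPrefixLength, with the rest of the method simplified away; it is an illustration of the reported behaviour, not the real FilenameUtils implementation.

```java
/** Standalone reproduction of the getPrefixLength branch quoted in IO-299
 *  (simplified: only the case of a name starting with two separators). */
public class PrefixLengthRepro {
    static final char UNIX_SEPARATOR = '/';
    static final char WINDOWS_SEPARATOR = '\\';

    /** Mirrors the branch that fires for names starting with two separators. */
    static int prefixLength(String filename) {
        char c0 = filename.charAt(0);
        if (c0 != UNIX_SEPARATOR && c0 != WINDOWS_SEPARATOR) {
            return 0; // not the case under discussion
        }
        // Look for the next separator after the leading pair.
        int posUnix = filename.indexOf(UNIX_SEPARATOR, 2);
        int posWin = filename.indexOf(WINDOWS_SEPARATOR, 2);
        if ((posUnix == -1 && posWin == -1) || posUnix == 2 || posWin == 2) {
            return -1; // this is what makes normalize() return null
        }
        // Otherwise the prefix runs through the next separator (e.g. "//server/").
        return Math.min(posUnix == -1 ? posWin : posUnix,
                        posWin == -1 ? posUnix : posWin) + 1;
    }

    public static void main(String[] args) {
        // Two leading separators followed by a name with no further separator
        // hit the -1 branch, exactly as the report describes.
        System.out.println(prefixLength("//I don't want to become null!")); // -1
        System.out.println(prefixLength("//server/path"));                  // 9
    }
}
```

The suggested fix is then to make this branch tolerate a missing further separator instead of declaring the whole name invalid.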
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:08 PM: --- In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: I don't want to become null! Any ideas on this? was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this?
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:08 PM: --- In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\\I don't want to become null! Any ideas on this? was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: I don't want to become null! Any ideas on this?
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:09 PM: --- In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this? was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\\I don't want to become null! Any ideas on this?
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:09 PM: --- In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this? was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this?
[jira] [Issue Comment Edited] (IO-299) getPrefixLength returns null if filename has leading slashes
[ https://issues.apache.org/jira/browse/IO-299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13206439#comment-13206439 ] Thilina Dampahala edited comment on IO-299 at 2/12/12 4:11 PM: --- In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: Same with the other slash Any ideas on this? was (Author: thdamp): In this case for your input I don't want to become null!, I think the expected output should be In Unix: //I don't want to become null! In Windows: \\I don't want to become null! Any ideas on this?
[jira] [Issue Comment Edited] (MATH-650) FastMath has static code which slows the first access to FastMath
[ https://issues.apache.org/jira/browse/MATH-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206095#comment-13206095 ] Luc Maisonobe edited comment on MATH-650 at 2/11/12 12:44 PM:
--
I would like to resolve this issue. I think everybody agreed pre-computation was an improvement, and the remaining discussions are about the way the pre-computed tables are loaded. We have two competing implementations for that: resources loaded from a binary file, and literal arrays compiled into the library. In both cases the data is generated beforehand by our own code, so in both cases users who want to re-generate them with different settings can do so. The advantages of resources are that they are more compact (binary) and don't clutter the sources with large tables. The advantages of literal arrays are that they can be checked by mere human beings and don't require any support code. The speed difference between the two implementations exists but is tiny. Two people have already expressed their preferences (one favoring resources, the other one favoring literal arrays). I don't have a clear-cut preference myself (only a slight bias toward one solution, which I prefer to keep to myself). I would like to have the opinions of newer developers and users on this before selecting one approach. Could someone please provide another opinion?

was (Author: luc):
I would like to resolve this issue. I think everybody agreed pre-computation was an improvement, and the remaining discussions are about the way the pre-computed tables are loaded. We have two competing implementations for that: resources loaded from a binary file, and literal arrays compiled into the library. In both cases the data is generated beforehand by our own code, so in both cases users who want to re-generate with different settings them can do so. The advantages of resources are that they are more compact (binary) and don't clutter the sources with large tables. The advantages of literal arrays are that they can be checked by mere human beings and don't require any support code. The speed difference between the two implementations exists but is tiny. Two people have already expressed their preferences (one favoring resources, the other one favoring literal arrays). I don't have a clear-cut preference myself (only a slight bias toward one solution, which I prefer to keep to myself). I would like to have the opinions of newer developers and users on this before selecting one approach. Could someone please provide another opinion?

FastMath has static code which slows the first access to FastMath
Key: MATH-650
URL: https://issues.apache.org/jira/browse/MATH-650
Project: Commons Math
Issue Type: Improvement
Affects Versions: Nightly Builds
Environment: Android 2.3 (Dalvik VM with JIT)
Reporter: Alexis Robert
Priority: Minor
Fix For: 3.0
Attachments: FastMathLoadCheck.java, LucTestPerformance.java

Working on an Android application using Orekit, I've discovered that a simple FastMath.floor() takes about 4 to 5 secs on a 1GHz Nexus One phone (only the first time it's called). I've launched the Android profiling tool (traceview) and the problem seems to be linked with the static portion of FastMath code named "// Initialize tables". The timing resulted in:
- FastMath.slowexp (40.8%)
- FastMath.expint (39.2%)
  - FastMath.quadmult() (95.6% of expint)
- FastMath.slowlog (18.2%)
Hoping that would help. Thanks!
Alexis Robert
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (MATH-650) FastMath has static code which slows the first access to FastMath
[ https://issues.apache.org/jira/browse/MATH-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206188#comment-13206188 ] Luc Maisonobe edited comment on MATH-650 at 2/11/12 5:27 PM:
-
I think the reason literal arrays are faster is that there is no real reading (no file to open, no loop, no read ...). Everything has already been prepared beforehand by the compiler, and the loaded class is most probably already in a memory-mapped file. Remember that the arrays are literal and need to be parsed only by the compiler, not by the JVM at loading time. In fact, in both cases what is loaded is binary.

was (Author: luc):
I think the reason literal arrays are faster is that they is no real reading (no file to open, no loop, no read ...). Everything has already been prepared beforehand by the compiler, and the loaded class is most probably already in a memory-mapped file. Remember that the arrays are literal and need to be parsed only by the compiler, not by the JVM at loading time. In fact, in both cases what is loaded is binary.

FastMath has static code which slows the first access to FastMath
Key: MATH-650
URL: https://issues.apache.org/jira/browse/MATH-650
Project: Commons Math
Issue Type: Improvement
Affects Versions: Nightly Builds
Environment: Android 2.3 (Dalvik VM with JIT)
Reporter: Alexis Robert
Priority: Minor
Fix For: 3.0
Attachments: FastMathLoadCheck.java, LucTestPerformance.java

Working on an Android application using Orekit, I've discovered that a simple FastMath.floor() takes about 4 to 5 secs on a 1GHz Nexus One phone (only the first time it's called). I've launched the Android profiling tool (traceview) and the problem seems to be linked with the static portion of FastMath code named "// Initialize tables". The timing resulted in:
- FastMath.slowexp (40.8%)
- FastMath.expint (39.2%)
  - FastMath.quadmult() (95.6% of expint)
- FastMath.slowlog (18.2%)
Hoping that would help. Thanks!
Alexis Robert -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
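[Editor's note] Whichever loading strategy wins (resources or literal arrays), the first-access cost described in this issue can also be moved off the class's static initializer with the initialization-on-demand holder idiom: the table is built only when first referenced, and the JVM guarantees the one-time initialization is thread-safe. This is only an illustrative sketch with a made-up table, not what FastMath actually ships:

```java
// Sketch of lazy table loading: the expensive computation lives in a nested
// holder class, so touching other methods of the enclosing class does not
// trigger it; only the first call to expFromTable() pays the cost.
public class LazyTables {
    /** Not initialized until EXP_TABLE is first referenced (JLS 12.4). */
    private static final class ExpTableHolder {
        static final double[] EXP_TABLE = computeExpTable();

        private static double[] computeExpTable() {
            double[] t = new double[1024];
            for (int i = 0; i < t.length; i++) {
                t[i] = Math.exp(i / 1024.0); // stand-in for the slow setup code
            }
            return t;
        }
    }

    public static double expFromTable(int i) {
        return ExpTableHolder.EXP_TABLE[i]; // triggers one-time, thread-safe init
    }
}
```

This does not reduce the total work, it only defers it, which may or may not be acceptable on a device like the Nexus One mentioned above.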
[jira] [Issue Comment Edited] (DBUTILS-88) Make AsyncQueryRunner be a decorator around a QueryRunner
[ https://issues.apache.org/jira/browse/DBUTILS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206255#comment-13206255 ] William R. Speirs edited comment on DBUTILS-88 at 2/11/12 8:34 PM: --- Note: The DBUTILS-88v1.patch includes changes to the unit tests which all pass now. was (Author: wspeirs): Note: The DBUTILS-99v1.patch includes changes to the unit tests which all pass now. Make AsyncQueryRunner be a decorator around a QueryRunner - Key: DBUTILS-88 URL: https://issues.apache.org/jira/browse/DBUTILS-88 Project: Commons DbUtils Issue Type: Task Reporter: Moandji Ezana Priority: Minor Attachments: AsyncQueryRunner_wraps_QueryRunner.txt, DBUTILS-88v1.patch AsyncQueryRunner duplicates much of the code in QueryRunner. Would it be possible for AsyncQueryRunner to simply decorate a QueryRunner with async functionality, in the same way a BufferedInputStream might decorate an InputStream? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (MATH-728) Errors in BOBYQAOptimizer when numberOfInterpolationPoints is greater than 2*dim+1
[ https://issues.apache.org/jira/browse/MATH-728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206296#comment-13206296 ] Gilles edited comment on MATH-728 at 2/11/12 11:47 PM: --- Although the bug that triggered this issue is fixed, failures of the unit test still miss an explanation... The poor performance is to be expected given the current state of the code (e.g. many matrix calculations are done explicitly, with getters and setters, instead of calling methods of the matrix objects). was (Author: erans): Although the bug that triggered this issue is fixed, failures of the unit test are still miss an explanation... The poor performance is to be expected given the current state of the code (e.g. many matrix calculations are done explicitly, with getters and setters, instead of calling methods of the matrix objects). Errors in BOBYQAOptimizer when numberOfInterpolationPoints is greater than 2*dim+1 -- Key: MATH-728 URL: https://issues.apache.org/jira/browse/MATH-728 Project: Commons Math Issue Type: Bug Affects Versions: 3.0 Environment: Mac Java(TM) SE Runtime Environment (build 1.6.0_29-b11-402-11M3527) Reporter: Bruce A Johnson Fix For: 3.0 I've been having trouble getting BOBYQA to minimize a function (actually a non-linear least squares fit) so as one change I increased the number of interpolation points. It seems that anything larger than 2*dim+1 causes an error (typically at line 1662 interpolationPoints.setEntry(nfm, ipt, interpolationPoints.getEntry(ipt, ipt)); I'm guessing there is an off by one error in the translation from FORTRAN. Changing the BOBYQAOptimizerTest as follows (increasing number of interpolation points by one) will cause failures. 
Bruce
Index: src/test/java/org/apache/commons/math/optimization/direct/BOBYQAOptimizerTest.java
===
--- src/test/java/org/apache/commons/math/optimization/direct/BOBYQAOptimizerTest.java (revision 1221065)
+++ src/test/java/org/apache/commons/math/optimization/direct/BOBYQAOptimizerTest.java (working copy)
@@ -258,7 +258,7 @@
 //RealPointValuePair result = optim.optimize(10, func, goal, startPoint);
 final double[] lB = boundaries == null ? null : boundaries[0];
 final double[] uB = boundaries == null ? null : boundaries[1];
-BOBYQAOptimizer optim = new BOBYQAOptimizer(2 * dim + 1);
+BOBYQAOptimizer optim = new BOBYQAOptimizer(2 * dim + 2);
 RealPointValuePair result = optim.optimize(maxEvaluations, func, goal, startPoint, lB, uB);
 //System.out.println(func.getClass().getName() + = // + optim.getEvaluations() + f();
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (SANDBOX-389) [BeanUtils2] Implement populate() on DefaultBeanAccessor
[ https://issues.apache.org/jira/browse/SANDBOX-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13205386#comment-13205386 ] Benedikt Ritter edited comment on SANDBOX-389 at 2/10/12 12:13 PM: --- I finally got the time to review and fix my patch. I re-implemented {{equals()}} on {{TestBean}} using commons lang 3. Commons lang 3.1 has been added as a dependency in test scope to pom.xml. Now everything should be fine with {{populate()}}. Regards, Benedikt was (Author: britter): I finally got the time to review and fix my patch. I re-implemented {{equals()}} on {{TestBean}} using commons lang 3. Commons lang 3.1 has been added as a dependency in test scope to pom.xml. Now everything should be fine with {{populate}()}. Regards, Benedikt [BeanUtils2] Implement populate() on DefaultBeanAccessor - Key: SANDBOX-389 URL: https://issues.apache.org/jira/browse/SANDBOX-389 Project: Commons Sandbox Issue Type: Improvement Components: BeanUtils2 Affects Versions: Nightly Builds Reporter: Benedikt Ritter Attachments: SANDBOX-389.txt, SANDBOX-389v2.txt Implement {{populate()}} as discussed on the ML (see http://markmail.org/thread/niv47muvrms56pqr) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
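[Editor's note] The comment above mentions re-implementing {{equals()}} on {{TestBean}} with Commons Lang 3 (presumably EqualsBuilder/HashCodeBuilder). As a dependency-free illustration of the same equals/hashCode contract, here is a sketch using only java.util.Objects; the bean name and its fields are invented for the example and are not the actual TestBean:

```java
import java.util.Objects;

// Minimal value-equality sketch: equals() compares all significant fields and
// hashCode() is kept consistent with equals(), as EqualsBuilder/HashCodeBuilder
// would also guarantee.
public class TestBeanSketch {
    private final String name;
    private final int count;

    public TestBeanSketch(String name, int count) {
        this.name = name;
        this.count = count;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof TestBeanSketch)) return false;
        TestBeanSketch other = (TestBeanSketch) obj;
        return count == other.count && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, count); // must agree with equals()
    }
}
```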
[jira] [Issue Comment Edited] (VFS-340) FileSystemException: Badly formed URI
[ https://issues.apache.org/jira/browse/VFS-340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13202811#comment-13202811 ] Kirthi Paruchuri edited comment on VFS-340 at 2/7/12 9:29 PM:
--
Are there any updates on this JIRA? I got the same exception (unknown protocol: sftp) when I am trying to do SFTP, and I am using VFS2-2.0. Thanks

was (Author: kirthi97):
Are there any updates on this JIRA? I got the same exception (unknown protocol: sftp) when I am trying to do SFTP. Thanks

FileSystemException: Badly formed URI
--
Key: VFS-340
URL: https://issues.apache.org/jira/browse/VFS-340
Project: Commons VFS
Issue Type: Bug
Affects Versions: 1.0
Environment: windows XP, JDK1.5
Reporter: bharani
Original Estimate: 96h
Remaining Estimate: 96h

I am trying to connect to SFTP using commons.vfs. The program initially runs fine, connecting to SFTP and retrieving files, but after one point of time it throws the following exception repeatedly and refuses to connect thereafter.
FileSystemException: Badly formed URI sftp://sftpuser:@US456564/home57556/;.
at org.apache.commons.vfs.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:58)
at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:641)
at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:582)
at com.java.workflow.trigger.BaseTask.establishConnection(BaseTask.java:167)
at com.java.workflow.trigger.sftp.sampleconnect.getSource2System(sampleconnect.java:11)
at com.java.workflow.trigger.sftp.samplecnt.perform(samplecnt.java:27)
at com.java.workflow.trigger.BaseTask.execute(BaseTask.java:127)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
Caused by: org.apache.commons.vfs.FileSystemException: Port number is missing from URI sftp://sftpuser:@US456564/home57556/;.
at org.apache.commons.vfs.provider.HostFileNameParser.extractPort(HostFileNameParser.java:229)
at org.apache.commons.vfs.provider.HostFileNameParser.extractToPath(HostFileNameParser.java:134)
at org.apache.commons.vfs.provider.URLFileNameParser.parseUri(URLFileNameParser.java:48)
at org.apache.commons.vfs.provider.AbstractFileProvider.parseUri(AbstractFileProvider.java:170)
at org.apache.commons.vfs.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:54)
... 8 more
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (VFS-340) FileSystemException: Badly formed URI
[ https://issues.apache.org/jira/browse/VFS-340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13202811#comment-13202811 ] Kirthi Paruchuri edited comment on VFS-340 at 2/7/12 9:29 PM: -- Are there any updates on this JIRA? I got the same exception(unknown protocol:sftp) when I am trying to do SFTP and I am using VFS2-2.0 on Linux machine Thanks was (Author: kirthi97): Are there any updates on this JIRA? I got the same exception(unknown protocol:sftp) when I am trying to do SFTP and I am using VFS2-2.0 Thanks FileSystemException: Badly formed URI -- Key: VFS-340 URL: https://issues.apache.org/jira/browse/VFS-340 Project: Commons VFS Issue Type: Bug Affects Versions: 1.0 Environment: windows XP, JDK1.5 Reporter: bharani Original Estimate: 96h Remaining Estimate: 96h I am trying to connect to SFTP using commons.vfs . The program is running fine initially by connecting to SFTP and retrieving files ,but after one point of time it throws following exception repeatedly and refuses to connect thereafter. FileSystemException: Badly formed URI sftp://sftpuser:@US456564/home57556/;. 
at org.apache.commons.vfs.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:58) at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:641) at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:582) at com.java.workflow.trigger.BaseTask.establishConnection(BaseTask.java:167) at com.java.workflow.trigger.sftp.sampleconnect.getSource2System(sampleconnect.java:11) at com.java.workflow.trigger.sftp.samplecnt.perform(samplecnt.java:27) at com.java.workflow.trigger.BaseTask.execute(BaseTask.java:127) at org.quartz.core.JobRunShell.run(JobRunShell.java:216) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549) Caused by: org.apache.commons.vfs.FileSystemException: Port number is missing from URI sftp://sftpuser:@US456564/home57556/;. at org.apache.commons.vfs.provider.HostFileNameParser.extractPort(HostFileNameParser.java:229) at org.apache.commons.vfs.provider.HostFileNameParser.extractToPath(HostFileNameParser.java:134) at org.apache.commons.vfs.provider.URLFileNameParser.parseUri(URLFileNameParser.java:48) at org.apache.commons.vfs.provider.AbstractFileProvider.parseUri(AbstractFileProvider.java:170) at org.apache.commons.vfs.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:54) ... 8 more -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (VFS-340) FileSystemException: Badly formed URI
[ https://issues.apache.org/jira/browse/VFS-340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13202811#comment-13202811 ] Kirthi Paruchuri edited comment on VFS-340 at 2/7/12 9:38 PM: -- Are there any updates on this JIRA? I got the same exception when I am trying to do SFTP and I am using VFS2-2.0 on Linux machine org.apache.commons.vfs2.FileSystemException: Badly formed URI sftp://orbitz:***@sftp-stg.bazaarvoice.com/import-inbox/product_feed_EB.xml.gz;. at org.apache.commons.vfs2.provider.url.UrlFileProvider.findFile(UrlFileProvider.java:91) at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:713) at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:621) at com.orbitz.jobs.reviews.common.SFtpClient.upload(SFtpClient.java:20) at com.orbitz.jobs.reviews.common.SFtpClientTest.testUpload(SFtpClientTest.java:21) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:154) at junit.framework.TestCase.runBare(TestCase.java:127) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:118) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: java.net.MalformedURLException: unknown protocol: sftp
at java.net.URL.&lt;init&gt;(URL.java:574)
at java.net.URL.&lt;init&gt;(URL.java:464)
at java.net.URL.&lt;init&gt;(URL.java:413)
at org.apache.commons.vfs2.provider.url.UrlFileProvider.findFile(UrlFileProvider.java:72)
... 20 more

was (Author: kirthi97):
Are there any updates on this JIRA? I got the same exception (unknown protocol: sftp) when I am trying to do SFTP, and I am using VFS2-2.0 on a Linux machine. Thanks

FileSystemException: Badly formed URI
--
Key: VFS-340 URL: https://issues.apache.org/jira/browse/VFS-340 Project: Commons VFS Issue Type: Bug Affects Versions: 1.0 Environment: windows XP, JDK1.5 Reporter: bharani Original Estimate: 96h Remaining Estimate: 96h I am trying to connect to SFTP using commons.vfs. The program initially runs fine, connecting to SFTP and retrieving files, but after one point of time it throws the following exception repeatedly and refuses to connect thereafter. FileSystemException: Badly formed URI sftp://sftpuser:@US456564/home57556/;.
at org.apache.commons.vfs.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:58) at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:641) at org.apache.commons.vfs.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:582) at com.java.workflow.trigger.BaseTask.establishConnection(BaseTask.java:167) at com.java.workflow.trigger.sftp.sampleconnect.getSource2System(sampleconnect.java:11) at com.java.workflow.trigger.sftp.samplecnt.perform(samplecnt.java:27) at com.java.workflow.trigger.BaseTask.execute(BaseTask.java:127) at org.quartz.core.JobRunShell.run(JobRunShell.java:216) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549) Caused by: org.apache.commons.vfs.FileSystemException: Port number is missing from URI sftp://sftpuser:@US456564/home57556/;. at org.apache.commons.vfs.provider.HostFileNameParser.extractPort(HostFileNameParser.java:229) at org.apache.commons.vfs.provider.HostFileNameParser.extractToPath(HostFileNameParser.java:134) at org.apache.commons.vfs.provider.URLFileNameParser.parseUri(URLFileNameParser.java:48) at
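[Editor's note] The root cause in the newer stack trace is java.net.URL, not the URI syntax: a stock JDK has no protocol handler registered for sftp, so VFS's URL-based fallback provider (UrlFileProvider) fails even though the same string parses fine as a java.net.URI, which only checks syntax. A small demonstration with a hypothetical helper class:

```java
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;

// URL requires a registered protocol handler and throws
// MalformedURLException ("unknown protocol: sftp") without one;
// URI performs pure syntactic parsing and accepts any scheme.
public class SftpUriDemo {
    public static boolean urlAccepts(String s) {
        try {
            new URL(s);
            return true;
        } catch (MalformedURLException e) {
            return false; // e.g. "unknown protocol: sftp" on a stock JDK
        }
    }

    public static String uriScheme(String s) {
        return URI.create(s).getScheme();
    }

    public static void main(String[] args) {
        String s = "sftp://user@host/path";
        System.out.println(urlAccepts(s));
        System.out.println(uriScheme(s)); // sftp
    }
}
```

In other words, "Badly formed URI" here most likely means the sftp provider (and its JSch dependency) was not registered, so resolution fell through to the URL-based provider.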
[jira] [Issue Comment Edited] (VFS-400) Selector based on regular expressions
[ https://issues.apache.org/jira/browse/VFS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13201188#comment-13201188 ] Rikard Oxenstrand edited comment on VFS-400 at 2/6/12 10:09 AM:
Draft implementation of the function. Uploaded as attachment.

was (Author: rikard.oxenstrand):
Draft implementation of the function.

Selector based on regular expressions
-
Key: VFS-400
URL: https://issues.apache.org/jira/browse/VFS-400
Project: Commons VFS
Issue Type: New Feature
Affects Versions: 2.1
Reporter: Rikard Oxenstrand
Priority: Minor
Attachments: FileRegexSelector.java.patch

In the long todo list there was a post about adding a file selector based on regular expressions. I had a need for that in a specific project, so I built a simple class that seems to work. I'm kind of new to open source contribution though, so I'm not sure if I should just commit it to trunk. Here is the code:
{code:title=FileRegexSelector.java|borderStyle=solid}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.commons.vfs2;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * A {@link FileSelector} that selects files whose base filename matches
 * a regular expression.
 *
 * @since 2.1
 */
public class FileRegexSelector implements FileSelector
{
    /** The pattern to match base filenames against. */
    private Pattern pattern = null;

    /**
     * Creates a new selector for the given regular expression.
     *
     * @param regex The regular expression files must match to be included.
     */
    public FileRegexSelector(String regex)
    {
        this.pattern = Pattern.compile(regex);
    }

    /**
     * Determines if a file or folder should be selected.
     *
     * @param fileInfo The file selection information.
     * @return true if the file should be selected, false otherwise.
     */
    public boolean includeFile(final FileSelectInfo fileInfo)
    {
        if (this.pattern == null)
        {
            return false;
        }
        Matcher matcher = this.pattern.matcher(fileInfo.getFile().getName().getBaseName());
        return matcher.matches();
    }

    /**
     * Determines whether a folder should be traversed.
     *
     * @param fileInfo The file selection information.
     * @return true if descendents should be traversed, false otherwise.
     */
    public boolean traverseDescendents(final FileSelectInfo fileInfo)
    {
        return true;
    }
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
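[Editor's note] Independent of the VFS interfaces, the selector's core is just compiling the regex once and testing each base filename with Matcher.matches(), which requires the whole name to match (unlike find()). A standalone sketch with invented file names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Mirrors the selector's matching logic without the VFS types: one compiled
// Pattern, Matcher.matches() per candidate base name.
public class RegexSelectDemo {
    public static List<String> select(String regex, List<String> baseNames) {
        Pattern p = Pattern.compile(regex);
        List<String> out = new ArrayList<>();
        for (String name : baseNames) {
            if (p.matcher(name).matches()) { // whole-string match
                out.add(name);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> names = List.of("report-2012.csv", "report.txt", "notes.csv");
        System.out.println(select(".*\\.csv", names)); // [report-2012.csv, notes.csv]
    }
}
```

Because matches() anchors at both ends, a pattern like `report` alone would select nothing here; `report.*` would be needed.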
[jira] [Issue Comment Edited] (DAEMON-240) Undocumented and inconsistent behaviour of multi-valued registry entries
[ https://issues.apache.org/jira/browse/DAEMON-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13201358#comment-13201358 ] Mladen Turk edited comment on DAEMON-240 at 2/6/12 6:18 PM:
Resolved by ensuring that --option resets the parser state.
1. In case --option ++option ... is used, the resulting value will be overwritten with those from the command line (combined).
2. In case ++option ++option ... is used, the resulting value will be appended to the existing one.
3. ++option=a --option=b will cause any options on the command line before the last -- to be dropped. No error or warning is reported in that case.

was (Author: mt...@apache.org):
Resolved by disabling intermixing -- and ++ options on the same command line. This means that either --option or any number of ++option can be used, but not together.

Undocumented and inconsistent behaviour of multi-valued registry entries
Key: DAEMON-240
URL: https://issues.apache.org/jira/browse/DAEMON-240
Project: Commons Daemon
Issue Type: Bug
Components: Procrun
Affects Versions: 1.0.8
Reporter: Sebb
Fix For: 1.0.9

The behaviour of option processing for multi-valued registry entries is partly undocumented, and inconsistent. --Option and ++Option are only documented in the context of a single invocation of procrun. The documentation should be updated to clarify that ++Option can be used in a separate invocation of procrun (update service) to append values to the registry.
==
The documentation implies that --Option resets the value. This is only true if ++Option is not used at the same time; if ++Option is used anywhere on the command-line, then all the options are appended to any existing value in the registry. The behaviour of mixed --Option and ++Option should either be fixed so that --Option clears any existing settings, or the existing behaviour should be documented.
-- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
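[Editor's note] The resolved semantics described in points 1-3 can be sketched with a toy parser. This is hypothetical illustration code, not procrun's actual implementation: "--opt=v" resets the accumulated value for opt, "++opt=v" appends, so anything collected before a later "--" for the same option is silently dropped.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of "-- resets, ++ appends" option accumulation.
public class OptionAccumulator {
    public static Map<String, List<String>> parse(String... args) {
        Map<String, List<String>> opts = new LinkedHashMap<>();
        for (String arg : args) {
            int eq = arg.indexOf('=');
            String name = arg.substring(2, eq);
            String value = arg.substring(eq + 1);
            if (arg.startsWith("--")) {
                List<String> fresh = new ArrayList<>();
                fresh.add(value);
                opts.put(name, fresh);          // reset: earlier values dropped
            } else if (arg.startsWith("++")) {
                opts.computeIfAbsent(name, k -> new ArrayList<>()).add(value); // append
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        // The ++ value before the -- is dropped, with no warning:
        System.out.println(parse("++JvmOptions=-Xms64m", "--JvmOptions=-Xmx256m"));
    }
}
```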
[jira] [Issue Comment Edited] (SANDBOX-388) Generic Type inference doesn't work in Eclipse
[ https://issues.apache.org/jira/browse/SANDBOX-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13201572#comment-13201572 ] Steven Dolg edited comment on SANDBOX-388 at 2/6/12 8:55 PM:
-
A quick workaround might be overloading the methods in question, using a type of graph that allows to connect the disobedient generic to Graph in the type parameters of the method header. Like this:
{noformat}
public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends DirectedMutableWeightedGraph<V, WE, W>> FromHeadBuilder<V, WE, W, G> findMaxFlow( G graph )

public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends DirectedGraph<V, WE>> FromHeadBuilder<V, WE, W, G> findMaxFlow( G graph )
{noformat}
and this
{noformat}
public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends UndirectedMutableWeightedGraph<V, WE, W>> SpanningTreeSourceSelector<V, W, WE, G> minimumSpanningTree( G graph )

public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends Graph<V, WE>> SpanningTreeSourceSelector<V, W, WE, G> minimumSpanningTree( G graph )
{noformat}
This fixes all the compile errors in my Eclipse (exact same version as mentioned in the issue description). Not sure if this works everywhere or if it is desirable at all, but I wanted to mention it anyway.

was (Author: steven.dolg):
A quick workaround might be overloading the methods in question, using a type of graph that allows to mention the disobedient generic in the type parameters of the method header. Like this:
{noformat}
public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends DirectedMutableWeightedGraph<V, WE, W>> FromHeadBuilder<V, WE, W, G> findMaxFlow( G graph )

public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends DirectedGraph<V, WE>> FromHeadBuilder<V, WE, W, G> findMaxFlow( G graph )
{noformat}
and this
{noformat}
public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends UndirectedMutableWeightedGraph<V, WE, W>> SpanningTreeSourceSelector<V, W, WE, G> minimumSpanningTree( G graph )

public static <V extends Vertex, WE extends WeightedEdge<W>, W, G extends Graph<V, WE>> SpanningTreeSourceSelector<V, W, WE, G> minimumSpanningTree( G graph )
{noformat}
This fixes all the compile errors in my Eclipse (exact same version as mentioned in the issue description). Not sure if this works everywhere or if it is desirable at all, but I wanted to mention it anyway.

Generic Type inference doesn't work in Eclipse
--
Key: SANDBOX-388
URL: https://issues.apache.org/jira/browse/SANDBOX-388
Project: Commons Sandbox
Issue Type: Bug
Components: Graph
Environment: Eclipse Java EE IDE for Web Developers. Version: Indigo Service Release 1 Build id: 20110916-0149
Reporter: Simone Tripodi
Priority: Blocker

The {{Flow}} and {{MST}} EDSL is affected by a generic type inference issue; it simply doesn't work in Eclipse. It works in IDEA, but in the Eclipse forum they reported that it doesn't work if the code is compiled with Oracle JDK7. One of the reported errors in Eclipse is:
{quote}
Type mismatch: cannot convert from SpanningTree&lt;BaseLabeledVertex, BaseLabeledWeightedEdge&lt;Double&gt;, Object&gt; to SpanningTree&lt;BaseLabeledVertex, BaseLabeledWeightedEdge&lt;Double&gt;, Double&gt;
{quote}
Looking for a solution
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (SANDBOX-387) [BeanUtils2] Implement possibility to find out if a property readable and/or wirtable
[ https://issues.apache.org/jira/browse/SANDBOX-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200903#comment-13200903 ] Simone Tripodi edited comment on SANDBOX-387 at 2/5/12 9:20 PM: Good thought. I honestly started to prefer much more compact and less explicit (and verbose) sentences, like {{on( myBean ).set( propertyName ).with( valueObject )}}, (that involves actual APIs change) so {{on( myBean ).isReadble( propertyName )}} is my preferred on. Anyway, {code} if ( on( myBean ).isWritable( propertyName ) { on( myBean ).set( propertyName ).with( This is a String value! ); } {code} looks less functional and redundant, we could move to a new option, something like (more or less): {code} ifIsWritable( propertyName ).on( myBean ).set( This is a String value! ); {code} was (Author: simone.tripodi): Good thought. I honestly started to prefer much more compact and less explicit (and verbose) sentences, like {{on( myBean ).set( propertyName ).with( valueObject )}}, (that involves actual APIs change) so {{on( myBean ).isReadble( propertyName )}} is my preferred on. Anyway, {code} if ( on( myBean ).isWritable( propertyName ) { on( myBean ).setProperty( propertyName ).withValue( This is a String value! ); } {code} looks less functional and redundant, we could move to a new option, something like (more or less): {code} ifIsWritable( propertyName ).on( myBean ).set( This is a String value! ); {code} [BeanUtils2] Implement possibility to find out if a property readable and/or wirtable - Key: SANDBOX-387 URL: https://issues.apache.org/jira/browse/SANDBOX-387 Project: Commons Sandbox Issue Type: Improvement Components: BeanUtils2 Affects Versions: Nightly Builds Reporter: Benedikt Ritter Currently there is no possibility to find out, if a property is readable and/or writable. 
For example, one has to pass a value to {{setProperty(name).withValue(argument)}} and hope that the property is writable (because a {{NoSuchMethodException}} will be thrown if it is not). For this reason it would be nice if one could do something like:
{code:java}
if (on(myBean).isWritable(writableProperty)) {
    on(myBean).setProperty(writableProperty).withValue("This is a String value!");
}
{code}
Solution: * Add {{public boolean isWritable(String propertyName)}} and {{public boolean isReadable(String propertyName)}} to {{BeanAccessor}}. * In {{isWritable()}}, check whether a {{PropertyDescriptor}} can be obtained from the PropertyRegistry (if not, throw {{NoSuchMethodException}}). ** If so, return true if {{propertyDescriptor.getWriteMethod() != null}} and false otherwise. * In {{isReadable()}}, check whether a {{PropertyDescriptor}} can be obtained from the PropertyRegistry (if not, throw {{NoSuchMethodException}}). ** If so, return true if {{propertyDescriptor.getReadMethod() != null}} and false otherwise.
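The proposed checks map directly onto the standard java.beans introspector. A sketch of the underlying logic (not the actual BeanUtils2 classes — {{BeanAccessor}} and {{PropertyRegistry}} are only described above; class and bean names here are hypothetical):

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class PropertyCheck {
    // Writable iff the introspector discovered a setter for the property.
    public static boolean isWritable(Class<?> beanClass, String propertyName) {
        PropertyDescriptor pd = find(beanClass, propertyName);
        return pd != null && pd.getWriteMethod() != null;
    }

    // Readable iff the introspector discovered a getter for the property.
    public static boolean isReadable(Class<?> beanClass, String propertyName) {
        PropertyDescriptor pd = find(beanClass, propertyName);
        return pd != null && pd.getReadMethod() != null;
    }

    private static PropertyDescriptor find(Class<?> beanClass, String name) {
        try {
            for (PropertyDescriptor pd : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
                if (pd.getName().equals(name)) {
                    return pd;
                }
            }
            return null;
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
    }

    // Example bean: "name" is read/write, "id" is read-only.
    public static class Sample {
        private String name;
        private final long id = 42L;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public long getId() { return id; }
    }
}
```

With such a check in place, {{isWritable(Sample.class, "id")}} returns false instead of forcing the caller to attempt a write and catch the exception.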
[jira] [Issue Comment Edited] (MATH-725) use initialized static final arrays, instead of initializing them in constructors
[ https://issues.apache.org/jira/browse/MATH-725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199279#comment-13199279 ] Gilles edited comment on MATH-725 at 2/2/12 10:22 PM: -- Those arrays cannot be static since their size depends on a parameter (k) passed to the constructor. Please clarify what you proposed (or, better: provide a patch). Thanks. was (Author: erans): Those arrays cannot be static since their size depends on a parameter (k) passed to the constructor. use initialized static final arrays, instead of initializing them in constructors --- Key: MATH-725 URL: https://issues.apache.org/jira/browse/MATH-725 Project: Commons Math Issue Type: Improvement Affects Versions: 2.2 Reporter: Eldar Agalarov Priority: Minor Fix For: 3.0 Original Estimate: 1h Remaining Estimate: 1h The WELL PRNG implementations have arrays iRm1, iRm2, iRm3, i1, i2, i3. All these arrays are unmodifiable, so we can replace this array-initialization block
{code}
final int w = 32;
final int r = (k + w - 1) / w;
this.v = new int[r];
this.index = 0;

// precompute indirection index tables. These tables are used for optimizing access
// they allow saving computations like (j + r - 2) % r with costly modulo operations
iRm1 = new int[r];
iRm2 = new int[r];
i1 = new int[r];
i2 = new int[r];
i3 = new int[r];
for (int j = 0; j < r; ++j) {
    iRm1[j] = (j + r - 1) % r;
    iRm2[j] = (j + r - 2) % r;
    i1[j] = (j + m1) % r;
    i2[j] = (j + m2) % r;
    i3[j] = (j + m3) % r;
}
{code}
with inline-initialized static final arrays. This is a much better and faster implementation, freed from unnecessary costly calculations (such as %). Another solution: leave it as is, but make all these arrays static.
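To make the size dependence concrete, here is a self-contained sketch (class name hypothetical; the WELL512a parameters k = 512, m1 = 13, m2 = 9, m3 = 5 are from the WELL generator family) of the table precomputation being discussed. The tables have r = ceil(k / 32) entries, so they cannot be a single static block shared across variants:

```java
public class WellIndexTables {
    // One set of indirection tables per instance; sizes depend on k.
    public final int[] iRm1, iRm2, i1, i2, i3;

    public WellIndexTables(int k, int m1, int m2, int m3) {
        final int w = 32;
        final int r = (k + w - 1) / w;   // number of 32-bit words of state
        iRm1 = new int[r];
        iRm2 = new int[r];
        i1 = new int[r];
        i2 = new int[r];
        i3 = new int[r];
        for (int j = 0; j < r; ++j) {
            iRm1[j] = (j + r - 1) % r;   // '%' paid once here, never in next()
            iRm2[j] = (j + r - 2) % r;
            i1[j] = (j + m1) % r;
            i2[j] = (j + m2) % r;
            i3[j] = (j + m3) % r;
        }
    }

    public static void main(String[] args) {
        // k = 512 gives r = 16, so iRm1[0] = (0 + 16 - 1) % 16 = 15.
        WellIndexTables t = new WellIndexTables(512, 13, 9, 5);
        System.out.println(t.iRm1[0]);
    }
}
```

Making the tables static would require one hard-coded set per concrete WELL variant (each with its own k and m-values), which is the trade-off behind Gilles' request for clarification.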
[jira] [Issue Comment Edited] (NET-291) enterLocalPassiveMode is set back to Active on connect
[ https://issues.apache.org/jira/browse/NET-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199381#comment-13199381 ] John B edited comment on NET-291 at 2/2/12 11:59 PM: - With version 3.1 I'm still getting the problem: active mode is being entered after connect is called. Once I fixed that issue it connected, but was not able to make data connections. 1) FTPHTTPClient still needs to force passive mode until the offending code in FTPClient that resets to active mode is fixed (maybe it used to do this?). 2) FTPHTTPClient overrides _openDataConnection_(String,String) but should override _openDataConnection_(int,String) instead, because in the parent class the (int,Str) version just converts the first arg to a Str and calls the (Str,Str) version. Doing this allows the subclass to capture both versions and make the needed behaviour change for proxied connections. Patch is
{noformat}
diff -wu .orig/FTPHTTPClient.java FTPHTTPClient.java
--- .orig/FTPHTTPClient.java  Wed Feb  1 14:57:41 2012
+++ FTPHTTPClient.java  Thu Feb  2 14:05:50 2012
@@ -63,11 +63,11 @@
      * @throws IllegalStateException if connection mode is not passive
      */
     @Override
-    protected Socket _openDataConnection_(int command, String arg)
+    protected Socket _openDataConnection_(String command, String arg)
     throws IOException {
         //Force local passive mode, active mode not supported by through proxy
         if (getDataConnectionMode() != PASSIVE_LOCAL_DATA_CONNECTION_MODE) {
-            throw new IllegalStateException("Only passive connection mode supported");
+            enterLocalPassiveMode();
         }
{noformat}
was (Author: johnb): With version 3.1 I'm still getting the problem: active mode is being entered after connect is called. Once I fixed that issue it connected, but was not able to make data connections. 1) FTPHTTPClient still needs to force passive mode until the offending code in FTPClient that resets to active mode is fixed (maybe it used to do this?). 2) FTPHTTPClient overrides _openDataConnection_(String,String) but should override _openDataConnection_(int,String) instead, because in the parent class the (int,Str) version just converts the first arg to a Str and calls the (Str,Str) version. Doing this allows the subclass to capture both versions and make the needed behaviour change for proxied connections. Patch is
{noformat}
diff -wu .orig/FTPHTTPClient.java FTPHTTPClient.java
--- .orig/FTPHTTPClient.java  Wed Feb  1 14:57:41 2012
+++ FTPHTTPClient.java  Thu Feb  2 14:05:50 2012
@@ -63,11 +63,11 @@
      * @throws IllegalStateException if connection mode is not passive
      */
     @Override
-    protected Socket _openDataConnection_(int command, String arg)
+    protected Socket _openDataConnection_(String command, String arg)
     throws IOException {
         //Force local passive mode, active mode not supported by through proxy
         if (getDataConnectionMode() != PASSIVE_LOCAL_DATA_CONNECTION_MODE) {
-            throw new IllegalStateException("Only passive connection mode supported");
+            enterLocalPassiveMode();
        }
{noformat}
enterLocalPassiveMode is set back to Active on connect -- Key: NET-291 URL: https://issues.apache.org/jira/browse/NET-291 Project: Commons Net Issue Type: Bug Components: FTP Affects Versions: 2.0 Reporter: Kevin Brown Fix For: 3.1 The enterLocalPassiveMode (exhibit A) docs claim that the mode will be set to PASSIVE_LOCAL_DATA_CONNECTION_MODE until some other method such as enterLocalActiveMode is called (exhibit B). However, active mode is being entered after connect is called. This behavior can be easily observed by modifying FtpExample by moving ftp.enterLocalPassiveMode() to before ftp.connect(server). Perhaps either the code or the docs could be updated to remedy this. Versions prior to 2.0 behaved as documented. exhibit A:
{noformat}
/***
 * Set the current data connection mode to
 * <code>PASSIVE_LOCAL_DATA_CONNECTION_MODE</code>. Use this
 * method only for data transfers between the client and server.
 * This method causes a PASV command to be issued to the server
 * before the opening of every data connection, telling the server to
 * open a data port to which the client will connect to conduct
 * data transfers. The FTPClient will stay in
 * <code>PASSIVE_LOCAL_DATA_CONNECTION_MODE</code> until the
 * mode is changed by calling some other method such as
 * {@link #enterLocalActiveMode enterLocalActiveMode() }
 ***/
public void enterLocalPassiveMode()
{
    __dataConnectionMode = PASSIVE_LOCAL_DATA_CONNECTION_MODE;
    // These will be set when just before a data
{noformat}
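Until the reset-on-connect behaviour is fixed, the practical client-side workaround implied by this report is simply to order the calls correctly. A sketch assuming Apache Commons Net on the classpath (host and credentials are hypothetical placeholders):

```java
import org.apache.commons.net.ftp.FTPClient;

public class PassiveAfterConnect {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        // connect() resets the data-connection mode to active,
        // so passive mode must be requested AFTER connecting, not before.
        ftp.connect("ftp.example.com");
        ftp.enterLocalPassiveMode();
        ftp.login("anonymous", "");
        // ... perform transfers here; they will now use PASV ...
        ftp.logout();
        ftp.disconnect();
    }
}
```

Calling {{enterLocalPassiveMode()}} before {{connect()}} (as older FtpExample code did) silently leaves the client in active mode, which is exactly the symptom described in this issue.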
[jira] [Issue Comment Edited] (OGNL-47) Unable to use Struts 2.3.1.2 application with OGNL 3.0.4 by enabling security manager
[ https://issues.apache.org/jira/browse/OGNL-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13196872#comment-13196872 ] Philippe Eymann edited comment on OGNL-47 at 1/31/12 12:50 PM: --- Hi Maurizio, I think that this issue is mainly related to a problem with Struts 2.3.1, and the official OGNL release which is bundled with this version of Struts is 3.0.4; see [this link on the Struts official page|http://newverhost.com/pub//struts/library/struts-2.3.1.2-lib.zip]. The code extracts I provided are from the 3.0.4 version. Is there any more recently released version of the OGNL jar which contains the current (SVN trunk) OgnlRuntime class? I didn't find any download link on the Apache site. Regards, Philippe was (Author: p.eymann): Hi Maurizio, I think that this issue is mainly related to a problem with Struts 2.3.1, and the official OGNL release which is bundled with this version of Struts is 3.0.4; see [this link on the Struts official page|http://newverhost.com/pub//struts/library/struts-2.3.1.2-lib.zip]. Is there any more recently released version of the OGNL jar which contains the current (SVN trunk) OgnlRuntime class? I didn't find any download link on the Apache site.
Regards, Philippe Unable to use Struts 2.3.1.2 application with OGNL 3.0.4 by enabling security manager -- Key: OGNL-47 URL: https://issues.apache.org/jira/browse/OGNL-47 Project: Commons OGNL Issue Type: Bug Reporter: kesava Priority: Blocker Unable to use Struts 2.3.1.2 application with OGNL 3.0.4 by enabling security manager Steps to reproduce 1.Enable security manager 2.Load the app {noformat} Caught an Ognl exception while getting property serviceProviders - Class: ognl.ObjectPropertyAccessor File: ObjectPropertyAccessor.java Method: getPossibleProperty Line: 69 - ognl/ObjectPropertyAccessor.java:69:-1 at com.opensymphony.xwork2.ognl.accessor.CompoundRootAccessor.getProperty(CompoundRootAccessor.java:142) at ognl.OgnlRuntime.getProperty(OgnlRuntime.java:2303) at ognl.ASTProperty.getValueBody(ASTProperty.java:114) at ognl.SimpleNode.evaluateGetValueBody(SimpleNode.java:212) at ognl.SimpleNode.getValue(SimpleNode.java:258) at ognl.Ognl.getValue(Ognl.java:494) at ognl.Ognl.getValue(Ognl.java:458) at com.opensymphony.xwork2.ognl.OgnlUtil.getValue(OgnlUtil.java:213) at com.opensymphony.xwork2.ognl.OgnlValueStack.getValueUsingOgnl(OgnlValueStack.java:277) at com.opensymphony.xwork2.ognl.OgnlValueStack.tryFindValue(OgnlValueStack.java:260) at com.opensymphony.xwork2.ognl.OgnlValueStack.tryFindValueWhenExpressionIsNotNull(OgnlValueStack.java:242) at com.opensymphony.xwork2.ognl.OgnlValueStack.findValue(OgnlValueStack.java:222) at com.opensymphony.xwork2.ognl.OgnlValueStack.findValue(OgnlValueStack.java:284) at org.apache.struts2.views.velocity.StrutsVelocityContext.internalGet(StrutsVelocityContext.java:91) at org.apache.velocity.context.AbstractContext.get(AbstractContext.java:193) at org.apache.velocity.context.InternalContextAdapterImpl.get(InternalContextAdapterImpl.java:286) at org.apache.velocity.runtime.parser.node.ASTReference.getVariableValue(ASTReference.java:843) at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:222) at 
org.apache.velocity.runtime.parser.node.ASTReference.value(ASTReference.java:507) at org.apache.velocity.runtime.parser.node.ASTExpression.value(ASTExpression.java:71) at org.apache.velocity.runtime.parser.node.ASTSetDirective.render(ASTSetDirective.java:142) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:336) at org.apache.velocity.runtime.directive.Parse.render(Parse.java:263) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:175) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:336) at org.apache.velocity.runtime.directive.Parse.render(Parse.java:263) at org.apache.velocity.runtime.parser.node.ASTDirective.render(ASTDirective.java:175) at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:336) at org.apache.velocity.Template.merge(Template.java:328) at org.apache.velocity.Template.merge(Template.java:235) at org.apache.struts2.dispatcher.VelocityResult.doExecute(VelocityResult.java:156) at org.apache.struts2.dispatcher.StrutsResultSupport.execute(StrutsResultSupport.java:186) at
[jira] [Issue Comment Edited] (DBUTILS-87) Return generated key on insert
[ https://issues.apache.org/jira/browse/DBUTILS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197624#comment-13197624 ] Moandji Ezana edited comment on DBUTILS-87 at 2/1/12 6:56 AM: -- QueryRunner_insert.txt is a patch that adds the insert methods and a very basic unit test, just to get the ball rolling. was (Author: mwanji): Here is a patch that adds the insert methods and a very basic unit test, just to get the ball rolling. Return generated key on insert -- Key: DBUTILS-87 URL: https://issues.apache.org/jira/browse/DBUTILS-87 Project: Commons DbUtils Issue Type: New Feature Reporter: Moandji Ezana Attachments: QueryRunner_insert.txt It would be useful to have an insert method on QueryRunner that returns the id of the new record.
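In plain JDBC the requested feature reduces to {{Statement.RETURN_GENERATED_KEYS}}. A sketch of what such a {{QueryRunner.insert(...)}} would do internally (not the attached patch; the table and column names are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithKey {
    // Inserts one row and returns the database-generated primary key.
    public static long insertAndReturnKey(Connection conn, String name)
            throws SQLException {
        String sql = "INSERT INTO users(name) VALUES (?)";
        try (PreparedStatement ps =
                 conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (!keys.next()) {
                    throw new SQLException("No generated key returned");
                }
                return keys.getLong(1);  // the new record's id
            }
        }
    }
}
```

A DbUtils-level {{insert}} would wrap this pattern and hand the key (or a mapped result) back through a {{ResultSetHandler}}, keeping the resource handling out of caller code.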
[jira] [Issue Comment Edited] (SANDBOX-370) GraphML format exporter
[ https://issues.apache.org/jira/browse/SANDBOX-370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13195865#comment-13195865 ] Matteo Moci edited comment on SANDBOX-370 at 1/29/12 10:47 PM: --- Added an exporter to GraphML format to commons-graph. was (Author: mox601): Added an exporter to GraphML format. GraphML format exporter --- Key: SANDBOX-370 URL: https://issues.apache.org/jira/browse/SANDBOX-370 Project: Commons Sandbox Issue Type: Improvement Components: Graph Affects Versions: Nightly Builds Reporter: Matteo Moci Priority: Minor Labels: export, graph, xml Attachments: SANDBOX-370.patch Original Estimate: 6h Remaining Estimate: 6h Commons-graph should have some way to read and write graph representations encoded with GraphML - http://graphml.graphdrawing.org/
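For reference, the target format is plain XML. A minimal sketch (not the attached SANDBOX-370.patch; a real exporter would use an XML writer rather than string concatenation) that emits the GraphML skeleton for a vertex/edge list:

```java
public class GraphMlSketch {
    // Emits a minimal undirected GraphML document for the given nodes and
    // edges; edge ids are generated as e0, e1, ...
    public static String export(String[] nodes, String[][] edges) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        sb.append("<graphml xmlns=\"http://graphml.graphdrawing.org/xmlns\">\n");
        sb.append("  <graph id=\"G\" edgedefault=\"undirected\">\n");
        for (String n : nodes) {
            sb.append("    <node id=\"").append(n).append("\"/>\n");
        }
        int i = 0;
        for (String[] e : edges) {
            sb.append("    <edge id=\"e").append(i++)
              .append("\" source=\"").append(e[0])
              .append("\" target=\"").append(e[1]).append("\"/>\n");
        }
        sb.append("  </graph>\n</graphml>\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(export(new String[]{"a", "b"},
                                new String[][]{{"a", "b"}}));
    }
}
```

Reading the format back is the harder half of the issue, since GraphML allows nested graphs and typed data keys.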
[jira] [Issue Comment Edited] (JCS-89) UDP Discovery fails to report correct IP address to peers for back-connect when InetAddress.getLocalHost() fails to return an externally-visible address (i.e. re
[ https://issues.apache.org/jira/browse/JCS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193556#comment-13193556 ] Diego Rivera edited comment on JCS-89 at 1/26/12 5:22 AM: -- To clarify, the issue is that InetAddress.getLocalHost() may not always return an externally-visible address. In particular, it may return a loopback address. This means that under those circumstances, all caches in a UDP-discovery cluster will invariably discover only themselves, and thus the lateral cluster will be useless. This patch addresses that by superseding the IP address from the UDPDiscoveryMessage.host property with the IP Address (NOT hostname, for speed, efficiency and certainty) from the actual UDP multicast packet that has just been received, and is being responded to. This ensures there will be no confusion when performing the back-connect. was (Author: diego.rivera...@gmail.com): To clarify, the issue is that InetAddress.getLocalHost() may not always return an externally-visible address. In particular, it may return a loopback address. This means that under those circumstances, all caches in a UDP-discovery cluster will invariably discover only themselves, and thus the lateral cluster will be useless. This patch addresses that by superseding the IP address from the UDPDiscoveryMessage.host property with the IP Address (NOT hostname, for speed, efficiency and certainty) from the actual UDP multicast packet that is being responded to. This ensures there will be no confusion when performing the back-connect. UDP Discovery fails to report correct IP address to peers for back-connect when InetAddress.getLocalHost() fails to return an externally-visible address (i.e. 
returns a local address) --- Key: JCS-89 URL: https://issues.apache.org/jira/browse/JCS-89 Project: Commons JCS Issue Type: Bug Reporter: Diego Rivera Attachments: jcs-89-fix.patch Original Estimate: 1h Remaining Estimate: 1h In certain environments where reverse-lookup of the machine's IP address isn't available, or where other IP configurations restrict the ability of the JVM to determine its own canonical local address, it's impossible to determine ahead of time what address should be sent into the UDP multicast in order for lateral peers to establish the back-connection. The fix for this is simple: when the packet is received with the discovery message, determine the source host address of the packet that was received and set that as the discovery message's host property (setHost(packet.getAddress().getHostAddress())). This way, it's 100% certain we'll be back-connecting to the correct instance. A patch will be uploaded shortly.
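The essence of the fix is one call on the receiving side: trust the source address of the received multicast packet rather than whatever the sender believes its own address to be. A sketch (class and method names hypothetical, not the JCS patch itself):

```java
import java.net.DatagramPacket;

public class DiscoverySourceAddress {
    // Given a received discovery packet, the reliable back-connect address is
    // the packet's own source address, not InetAddress.getLocalHost() as
    // computed on the sender (which may be a loopback address).
    public static String backConnectHost(DatagramPacket packet) {
        return packet.getAddress().getHostAddress();
    }
}
```

In JCS terms, the receiver would overwrite the {{host}} carried inside the discovery message with this value before registering the peer, so a sender that mis-reports itself as 127.0.0.1 is still reachable.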
[jira] [Issue Comment Edited] (SANDBOX-362) Create basic unit tests for all classes
[ https://issues.apache.org/jira/browse/SANDBOX-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191495#comment-13191495 ] Benedikt Ritter edited comment on SANDBOX-362 at 1/23/12 9:43 PM: -- Based on a discussion we had on the mailing list about best practices for unit tests, I refactored ArgumentTest and added test methods for getParameters(Argument<?>... args). Here is a summary of what I changed: * separated long methods into very small methods: each primitive type has its own test methods * defined fields for every primitive type, to make sure that assertions do not accidentally fail because the wrong values get compared * created separate test methods instead of wrapping failure cases inside try/catch blocks * added test methods that test against TestBean.class Please let me know what you think about the changes! was (Author: britter): Based on a discussion we had on the mailing list about best practices for unit tests, I refactored ArgumentTest and added test methods for getParameters(Argument<?>... args). Here is a summary of what I changed: * separated long methods into very small methods: each primitive type has its own test methods * defined fields for every primitive type, to make sure that assertions do not accidentally fail because the wrong values get compared * added test methods that test against TestBean.class Please let me know what you think about the changes! Create basic unit tests for all classes --- Key: SANDBOX-362 URL: https://issues.apache.org/jira/browse/SANDBOX-362 Project: Commons Sandbox Issue Type: Test Components: BeanUtils2 Affects Versions: Nightly Builds Reporter: Benedikt Ritter Attachments: ArgumentTest.zip, SANDBOX-362-RefactoringOfArgumentTest.txt Back up all existing implementations with unit tests: * Argument * Assertions * BeanUtils * DefaultBeanAccessor * DefaultClassAccessor * MethodRegistry * TypeUtils
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
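The test-layout conventions Benedikt lists above (one small method per primitive type, dedicated expected-value fields, failure cases isolated in their own methods) can be sketched without the actual ArgumentTest code. All names below are hypothetical, not the real BeanUtils2 tests, and the hand-rolled assertion stands in for JUnit's:

```java
// Hypothetical sketch of the test layout described above; NOT the real ArgumentTest.
public class ArgumentTestSketch {

    // Dedicated expected value per primitive type, so an assertion cannot
    // accidentally pass because two tests happened to reuse the same constant.
    private static final int INT_VALUE = 42;
    private static final long LONG_VALUE = 4242L;
    private static final boolean BOOLEAN_VALUE = true;

    // Stand-in for the code under test.
    static Object identity(Object value) { return value; }

    // One very small test method per primitive type.
    static void testIntArgument()     { assertEquals(INT_VALUE, identity(INT_VALUE)); }
    static void testLongArgument()    { assertEquals(LONG_VALUE, identity(LONG_VALUE)); }
    static void testBooleanArgument() { assertEquals(BOOLEAN_VALUE, identity(BOOLEAN_VALUE)); }

    // The failure case lives in its own dedicated method instead of being
    // buried in a try/catch inside a long multi-purpose test. (With JUnit 4
    // this would be @Test(expected = IllegalArgumentException.class).)
    static void testNullArgumentRejected() {
        try {
            requireNonNullArgument(null);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // the exception is exactly what this method verifies
        }
    }

    static void requireNonNullArgument(Object arg) {
        if (arg == null) throw new IllegalArgumentException("arg must not be null");
    }

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) throw new AssertionError(expected + " != " + actual);
    }

    public static void main(String[] args) {
        testIntArgument();
        testLongArgument();
        testBooleanArgument();
        testNullArgumentRejected();
        System.out.println("all sketch tests passed");
    }
}
```

The point of the per-type fields is that a comparison against the wrong value fails loudly instead of passing by coincidence.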
[jira] [Issue Comment Edited] (SANDBOX-358) Early return/termination for graph visit
[ https://issues.apache.org/jira/browse/SANDBOX-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13189906#comment-13189906 ] Claudio Squarcella edited comment on SANDBOX-358 at 1/21/12 1:37 AM:
-
Hi, so I actually changed signatures for methods in {{GraphVisitHandler}}:
* {{discoverVertex}} and {{discoverEdge}} now tell the algorithm if the corresponding {{Vertex}} and {{Edge}} should be expanded (i.e. if their neighbors should be added to the visit queue);
* {{finishVertex}} and {{finishEdge}} tell if the algorithm can be ended prematurely (because the search is complete, e.g. the target element was found).
All tests still work. I hope you guys like this one :) Claudio

was (Author: claudio.squarcella):
Hi, so I actually changed signatures for methods in {{GraphVisitHandler}}:
* {{discoverVertex}} and {{discoverEdge}} now tell the algorithm if the corresponding {{Vertex}} and {{Edge}} should be expanded (i.e. if their neighbors should be added to the visit queue);
* {{finishVertex}} and {{finishEdge}} tell if the algorithm can be ended prematurely (because the search is complete, e.g. before the target element was found).
All tests still work. I hope you guys like this one :) Claudio

Early return/termination for graph visit
Key: SANDBOX-358
URL: https://issues.apache.org/jira/browse/SANDBOX-358
Project: Commons Sandbox
Issue Type: Improvement
Components: Graph
Reporter: Claudio Squarcella
Assignee: Simone Tripodi
Priority: Minor
Labels: graph, visit, visithandler
Attachments: EarlyTerminationAndSubgraphSkipForSearchAlgorithms.patch
Original Estimate: 72h
Remaining Estimate: 72h

Hello, the current implementations in the class {{Visit}} (package {{org.apache.commons.graph.visit}}) do not include the possibility to stop the search prematurely. That would be more natural (and faster) for many kinds of search, e.g. when looking for the first occurrence of a specific pattern (vertex, path with certain features, etc).
The easiest solution: changing all method signatures in {{GraphVisitHandler}} from {{void}} to {{boolean}}, forcing the handler to answer the question: _should I continue?_ I understand the semantics get a bit entangled (well, not with appropriate documentation!), so I am open to more verbose options: e.g. a method {{isSearchComplete}} to call after every step... but that would only be more verbose, wouldn't it? Waiting for feedback and ready to patch :-) Claudio
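The boolean-returning handler proposed above can be sketched with a plain BFS. This is an illustration of the idea, not the actual commons-graph API; the interface and method names mirror the issue but the real signatures may differ:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the proposal: callbacks answer "should I continue?".
interface VisitHandler<V> {
    boolean discoverVertex(V v);   // false: do not expand v's neighbors
    boolean finishVertex(V v);     // false: terminate the whole visit early
}

class Bfs {
    // Breadth-first visit over an adjacency map that honors both signals.
    static <V> void visit(Map<V, List<V>> graph, V source, VisitHandler<V> handler) {
        Deque<V> queue = new ArrayDeque<>();
        Set<V> seen = new HashSet<>();
        queue.add(source);
        seen.add(source);
        while (!queue.isEmpty()) {
            V v = queue.remove();
            if (handler.discoverVertex(v)) {
                // Expand v only when the handler asks for it.
                for (V w : graph.getOrDefault(v, List.of())) {
                    if (seen.add(w)) queue.add(w);
                }
            }
            if (!handler.finishVertex(v)) {
                return;  // early termination: e.g. the target was found
            }
        }
    }
}
```

A search for a target vertex then returns as soon as {{finishVertex}} sees the target, instead of exhausting the whole graph.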
[jira] [Issue Comment Edited] (JCS-87) Migrate to build with Maven 3.0
[ https://issues.apache.org/jira/browse/JCS-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13188758#comment-13188758 ] Diego Rivera edited comment on JCS-87 at 1/18/12 10:30 PM:
---
Forgot to add that most of the failed tests (above) don't appear to have anything to do with the (RMI?) server not being started. Furthermore, for newer mechanisms something like an embedded Tomcat (or Jetty?) could be used when that's needed for integration tests; perhaps there are plugins to start the RMI server for the same purpose, etc.?

was (Author: diego.rivera...@gmail.com):
Forgot to add that most of the failed tests (above) don't appear to have anything to do with the (RMI?) server not being started. Furthermore, for newer mechanisms something like an embedded Tomcat (or Jetty?) could be used when that's needed for integration tests, using commons-daemon

Migrate to build with Maven 3.0
---
Key: JCS-87
URL: https://issues.apache.org/jira/browse/JCS-87
Project: Commons JCS
Issue Type: Improvement
Affects Versions: jcs-1.3
Reporter: Diego Rivera
Original Estimate: 72h
Remaining Estimate: 72h

Evidently, the documentation clearly states that Maven 1.x is the only supported mechanism to build JCS. However, this is very dated and it wouldn't be a bad idea to update the build process to more modern/current standards.
I've tried to do this, and have run into a few snags wrt tests - I get many unit test failures in what I expect should have been trivial tests:
{noformat}
Failed tests:
  testIndexedDiskCache4(org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheSameRegionConcurrentUnitTest$4): expected:<indexedRegion4 data 2200> but was:<null>
  testIndexedDiskCache5(org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheSameRegionConcurrentUnitTest$5): expected:<indexedRegion4 data 0> but was:<null>
  testIndexedDiskCache1(org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheNoMemoryUnitTest$1): expected:<indexedRegion1 data 0> but was:<null>
  testIndexedDiskCache3(org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheNoMemoryUnitTest$3): expected:<indexedRegion3 data 0> but was:<null>
  testIndexedDiskCache2(org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheNoMemoryUnitTest$2): expected:<indexedRegion2 data 0> but was:<null>
  testBlockDiskCache1(org.apache.jcs.auxiliary.disk.block.BlockDiskCacheSameRegionConcurrentUnitTest$1): Wrong value for key [0:key] expected:<blockRegion4 data 0-blockRegion4> but was:<null>
  testBlockDiskCache2(org.apache.jcs.auxiliary.disk.block.BlockDiskCacheSameRegionConcurrentUnitTest$2): Wrong value for key [1000:key] expected:<blockRegion4 data 1000-blockRegion4> but was:<null>
  testBlockDiskCache3(org.apache.jcs.auxiliary.disk.block.BlockDiskCacheSameRegionConcurrentUnitTest$3): Wrong value for key [2000:key] expected:<blockRegion4 data 2000-blockRegion4> but was:<null>
  testBlockDiskCache4(org.apache.jcs.auxiliary.disk.block.BlockDiskCacheSameRegionConcurrentUnitTest$4): Wrong value for key [2200:key] expected:<blockRegion4 data 2200-blockRegion4> but was:<null>
  testExpireInBackground(org.apache.jcs.auxiliary.disk.jdbc.JDBCDiskCacheShrinkUnitTest): Removed key should be null: 0:key
  testSpoolEvent(org.apache.jcs.engine.control.event.SimpleEventHandlingUnitTest): The number of ELEMENT_EVENT_SPOOLED_DISK_AVAILABLE events [0] does not equal the number expected [2]
  testSpoolNoDiskEvent(org.apache.jcs.engine.control.event.SimpleEventHandlingUnitTest): The number of ELEMENT_EVENT_SPOOLED_DISK_NOT_AVAILABLE events [19002] does not equal the number expected.
  testSpoolNotAllowedEvent(org.apache.jcs.engine.control.event.SimpleEventHandlingUnitTest): The number of ELEMENT_EVENT_SPOOLED_NOT_ALLOWED events [0] does not equal the number expected.
  testSpoolNotAllowedEventOnItem(org.apache.jcs.engine.control.event.SimpleEventHandlingUnitTest): The number of ELEMENT_EVENT_SPOOLED_NOT_ALLOWED events [0] does not equal the number expected.
  testUpdateConfig(org.apache.jcs.engine.control.CompositeCacheDiskUsageUnitTest): expected:<1> but was:<0>
  testSystemPropertyUsage(org.apache.jcs.engine.SystemPropertyUsageUnitTest): System property value is not reflected expected:<1000> but was:<6789>
  testLoadFromCCF(org.apache.jcs.engine.memory.mru.MRUMemoryCacheUnitTest): Cache name should have MRU in it.
  testGetStatsThroughHub(org.apache.jcs.engine.memory.mru.MRUMemoryCacheUnitTest): Should have 200 puts
  testDefaultConfigUndefinedPool(org.apache.jcs.utils.threadpool.ThreadPoolManagerUnitTest): expected:<150> but was:<151>
  testNonExistentConfigFile(org.apache.jcs.utils.threadpool.ThreadPoolManagerUnitTest): expected:<150> but was:<151>
{noformat}
[jira] [Issue Comment Edited] (JEXL-125) Unable to invoke method with ObjectContext
[ https://issues.apache.org/jira/browse/JEXL-125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13186602#comment-13186602 ] Maurizio Cucchiara edited comment on JEXL-125 at 1/15/12 10:07 PM:
---
I'm not a JEXL guru, but looking at the source code, I guess that the code fragment you provided is not right (see the examples below).
{code:title=Example 1}
public void test() throws Exception {
    JexlEngine jexl = new JexlEngine();
    jexl.setStrict(true);
    JexlContext jc = new ObjectContext<Foo>(jexl, new Foo());
    Assert.assertEquals("OK", jc.get("method()"));
}
{code}
{code:title=Example 2}
@Test
public void test() throws Exception {
    JexlEngine jexl = new JexlEngine();
    jexl.setStrict(true);
    Expression e = jexl.createExpression("method()");
    JexlContext jc = new FooContext(jexl, new Foo());
    Assert.assertEquals("FOOBAR", e.evaluate(jc));
}

static public class FooContext extends ObjectContext<Foo> {
    FooContext(JexlEngine jexl, Foo foo) {
        super(jexl, foo);
    }
    public String method() {
        return "FOOBAR";
    }
}
{code}
Just another side note: when you are not sure whether you are facing a bug or not, I strongly advise asking on [the user ML|http://commons.apache.org/jexl/mail-lists.html] HTH

was (Author: maurizio.cucchiara):
I'm not a JEXL guru, but looking at the source code, I guess that the code fragment you provided is not right (see the examples below).
{code:title=Example 1}
public void test() throws Exception {
    JexlEngine jexl = new JexlEngine();
    jexl.setStrict(true);
    Expression e = jexl.createExpression("method()");
    JexlContext jc = new ObjectContext<Foo>(jexl, new Foo());
    Assert.assertEquals("OK", jc.get("methodA()"));
}
{code}
{code:title=Example 2}
@Test
public void test() throws Exception {
    JexlEngine jexl = new JexlEngine();
    jexl.setStrict(true);
    Expression e = jexl.createExpression("method()");
    JexlContext jc = new FooContext(jexl, new Foo());
    Assert.assertEquals("FOOBAR", e.evaluate(jc));
}

static public class FooContext extends ObjectContext<Foo> {
    FooContext(JexlEngine jexl, Foo foo) {
        super(jexl, foo);
    }
    public String method() {
        return "FOOBAR";
    }
}
{code}
Just another side note: when you are not sure whether you are facing a bug or not, I strongly advise asking on [the user ML|http://commons.apache.org/jexl/mail-lists.html] HTH

Unable to invoke method with ObjectContext
--
Key: JEXL-125
URL: https://issues.apache.org/jira/browse/JEXL-125
Project: Commons JEXL
Issue Type: Bug
Affects Versions: 2.1.1
Environment: Java 1.6.0_20 on Windows 7
Reporter: Matteo Trotta

Hi, I'm trying to invoke a method on an ObjectContext but I can't get it to work. I don't know whether it's a bug or I'm doing it wrong.
Here is the code I'm using:
{code:title=JexlTest.java}
package it.test;

import org.apache.commons.jexl2.Expression;
import org.apache.commons.jexl2.JexlContext;
import org.apache.commons.jexl2.JexlEngine;
import org.apache.commons.jexl2.ObjectContext;
import org.junit.Test;

public class JexlTest {
    public static class Foo {
        public String method() {
            return "OK";
        }
    }

    @Test
    public void test() throws Exception {
        JexlEngine jexl = new JexlEngine();
        jexl.setStrict(true);
        Expression e = jexl.createExpression("method()");
        JexlContext jc = new ObjectContext<Foo>(jexl, new Foo());
        System.out.println(e.evaluate(jc));
    }
}
{code}
Here is the exception I'm getting:
{noformat}
org.apache.commons.jexl2.JexlException: it.test.JexlTest.test@19![0,8]: 'method();' method error
	at org.apache.commons.jexl2.Interpreter.call(Interpreter.java:1078)
	at org.apache.commons.jexl2.Interpreter.visit(Interpreter.java:1100)
	at org.apache.commons.jexl2.parser.ASTMethodNode.jjtAccept(ASTMethodNode.java:18)
	at org.apache.commons.jexl2.Interpreter.visit(Interpreter.java:1317)
	at org.apache.commons.jexl2.parser.ASTReference.jjtAccept(ASTReference.java:18)
	at org.apache.commons.jexl2.Interpreter.interpret(Interpreter.java:232)
	at org.apache.commons.jexl2.ExpressionImpl.evaluate(ExpressionImpl.java:65)
	at it.test.JexlTest.test(JexlTest.java:21)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at
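As background for this thread: the general idea of an ObjectContext-style wrapper (delegating lookups to a wrapped bean) can be sketched with plain reflection. This is purely illustrative, with made-up class and method names; it is NOT JEXL's actual resolution logic, which goes through its own introspector:

```java
import java.lang.reflect.Method;

// Conceptual sketch only: a context that resolves zero-argument method
// calls against a wrapped bean via reflection. Not the JEXL implementation.
class ObjectContextSketch {
    private final Object wrapped;

    ObjectContextSketch(Object wrapped) {
        this.wrapped = wrapped;
    }

    // Look up a public zero-argument method on the wrapped bean and invoke it.
    Object call(String methodName) {
        try {
            Method m = wrapped.getClass().getMethod(methodName);
            return m.invoke(wrapped);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot resolve method: " + methodName, e);
        }
    }

    // Bean mirroring Foo from the report (illustrative only).
    public static class Foo {
        public String method() {
            return "OK";
        }
    }

    public static void main(String[] args) {
        ObjectContextSketch ctx = new ObjectContextSketch(new Foo());
        System.out.println(ctx.call("method"));
    }
}
```

This is only meant to show why Example 2 above (subclassing the context and defining the method there) gives the expression evaluator something it can resolve.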