[jira] [Commented] (COMPRESS-461) Add test for APK signing block
[ https://issues.apache.org/jira/browse/COMPRESS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853809#comment-17853809 ]

Gary D. Gregory commented on COMPRESS-461:
------------------------------------------

[~bodewig], [~eighthave] Would you please create a PR on GitHub with this new test?

> Add test for APK signing block
> ------------------------------
>
>                 Key: COMPRESS-461
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-461
>             Project: Commons Compress
>          Issue Type: Test
>          Components: Archivers
>    Affects Versions: 1.17
>            Reporter: Stefan Bodewig
>            Priority: Minor
>         Attachments: test-services-1.1.0.apk
>
> Add a regression test for the fix of COMPRESS-455.
> This involves creating a minimal apk file that we can distribute as a
> test case which contains an apk signing block. Creating such an apk file is
> probably easy if you have an Android development environment.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
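For reference, the APK Signing Block that COMPRESS-455 dealt with sits just before the ZIP central directory and ends with the 16-byte magic "APK Sig Block 42". The sketch below uses only the JDK with hypothetical class and method names; it is not Commons Compress code, just one way a regression test could locate that magic in an archive's trailing bytes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

/** Hypothetical sketch: locate the APK Signing Block magic in raw bytes. */
public class ApkSigBlockScan {

    // The APK v2 signing scheme ends its block with this 16-byte magic,
    // placed immediately before the ZIP central directory.
    static final byte[] MAGIC = "APK Sig Block 42".getBytes(StandardCharsets.US_ASCII);

    /** Returns the offset of the magic, or -1 if it is absent. */
    public static int findMagic(byte[] data) {
        outer:
        for (int i = 0; i <= data.length - MAGIC.length; i++) {
            for (int j = 0; j < MAGIC.length; j++) {
                if (data[i + j] != MAGIC[j]) {
                    continue outer;
                }
            }
            return i;
        }
        return -1;
    }

    /** Builds a toy buffer ending with the magic, standing in for a real APK tail. */
    public static byte[] sample() {
        byte[] payload = new byte[32]; // fake signing-block payload
        ByteBuffer buf = ByteBuffer.allocate(payload.length + MAGIC.length)
                .order(ByteOrder.LITTLE_ENDIAN);
        buf.put(payload).put(MAGIC);
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(findMagic(sample())); // offset of the magic in the toy buffer
    }
}
```

A real regression test would instead open the attached test-services-1.1.0.apk with the Commons Compress ZipFile and assert that parsing succeeds despite the signing block.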
[jira] [Updated] (CONFIGURATION-825) INIConfiguration marks exceptions that will not be thrown
[ https://issues.apache.org/jira/browse/CONFIGURATION-825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory updated CONFIGURATION-825:
------------------------------------------
    Fix Version/s: 2.11.1
                       (was: 2.11.0)

> INIConfiguration marks exceptions that will not be thrown
> ---------------------------------------------------------
>
>                 Key: CONFIGURATION-825
>                 URL: https://issues.apache.org/jira/browse/CONFIGURATION-825
>             Project: Commons Configuration
>          Issue Type: Improvement
>          Components: Expression engine
>    Affects Versions: 2.8.0
>         Environment: Java 8, Windows; the file content is "/error/"
>            Reporter: ChenYuwang
>            Priority: Major
>             Fix For: 2.11.1
>
> INIConfiguration.read() and INIConfiguration.write() declare
> ConfigurationException but have no code path that throws it. I understand
> that a ConfigurationException should be thrown when INIConfiguration reads
> something that is not in INI format, but currently it does not:
> INIConfiguration simply ignores everything it does not recognize. For
> example, a file whose content is "/error/" is read without error.
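Such leniency is not unique to INIConfiguration; the JDK's own java.util.Properties parser accepts the same malformed input silently. The sketch below (hypothetical class name, plain JDK, offered only as an analogy for the behavior the reporter questions) shows "/error/" being accepted as a key with an empty value rather than rejected:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

/** Hypothetical sketch: a lenient parser accepting malformed input silently. */
public class LenientParseDemo {

    /** Loads the text and reports how the parser interpreted it. */
    public static String parse(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        // "/error/" has no '=' or ':', so the whole line becomes a key
        // with an empty value; no exception is thrown.
        return props.stringPropertyNames().toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(parse("/error/"));
    }
}
```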
[jira] [Updated] (NET-709) IMAP Memory considerations with large ‘FETCH’ sizes.
[ https://issues.apache.org/jira/browse/NET-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory updated NET-709:
--------------------------------
    Fix Version/s: 3.11.2
                       (was: 3.11.1)

> IMAP Memory considerations with large 'FETCH' sizes.
> ----------------------------------------------------
>
>                 Key: NET-709
>                 URL: https://issues.apache.org/jira/browse/NET-709
>             Project: Commons Net
>          Issue Type: Improvement
>          Components: IMAP
>    Affects Versions: 3.8.0
>            Reporter: Anders
>            Priority: Minor
>              Labels: IMAP, buffer, chunking, large, literal, memory, partial
>             Fix For: 3.11.2
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> h2. IMAP Memory considerations with large 'FETCH' sizes.
>
> The following comments concern classes in the
> [org.apache.common.net.imap|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/package-summary.html]
> package.
>
> Consider the following IMAP 'FETCH' exchange between a client (>) and a server (<):
>
> {{> A654 FETCH 1:2 (BODY[TEXT])}}
> {{< * 1 FETCH (BODY[TEXT] {80000000}\r\n}}
> {{< ...}}
> {{< * 2 FETCH ...}}
> {{< A654 OK FETCH completed}}
>
> The first untagged response (* 1 FETCH ...) contains a literal of
> 80000000 bytes, or 80 MB.
>
> After reviewing the
> [source|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298],
> it is my understanding that the entire 80 MB sequence of data is read into
> Java memory even when using
> ['IMAPChunkListener'|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.IMAPChunkListener.html].
> According to the documentation:
>
> {quote}Implement this interface and register it via
> [IMAP.setChunkListener(IMAPChunkListener)|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.html#setChunkListener-org.apache.commons.net.imap.IMAP.IMAPChunkListener-]
> in order to get access to multi-line partial command responses.
> Useful when processing large FETCH responses.
> {quote}
>
> It is apparent that each partial FETCH response is read in full (80 MB here)
> before 'IMAPChunkListener' is invoked; only then are the read lines
> discarded (freeing up memory).
>
> Back to the example:
>
> > A654 FETCH 1:2 (BODY[TEXT])
> < * 1 FETCH (BODY[TEXT] {80000000}\r\n
> .... <- read in full into memory, then discarded after calling IMAPChunkListener
> < * 2 FETCH (BODY[TEXT] {250}\r\n
> .... <- read in full into memory, then discarded after calling IMAPChunkListener
> < A654 OK FETCH completed
>
> Above, the chunk listener helps with each individual partial FETCH
> response, but it does not prevent a large partial response from being
> loaded into memory all at once.
>
> Let's review the
> [code|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298]
> (line numbers refer to IMAP.java at that revision):
>
> 296:    int literalCount = IMAPReply.literalCount(line);
>
> The line above computes the size of the literal, in our case 80000000
> bytes (80 MB) for the first partial FETCH response.
>
> 297:    final boolean isMultiLine = literalCount >= 0;
> 298:    while (literalCount >= 0) {
> 299:        line = _reader.readLine();
> 300:        if (line == null) {
> 301:            throw new EOFException("Connection closed without indication.");
> 302:        }
> 303:        replyLines.add(line);
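The improvement the reporter asks for amounts to reading a sized literal in bounded chunks and handing each chunk to a listener, instead of line-buffering the whole literal first. A JDK-only sketch of that technique follows; the class and interface names are hypothetical and this is not the commons-net API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical sketch: stream an IMAP literal in bounded chunks. */
public class ChunkedLiteralReader {

    /** Callback invoked once per chunk; a stand-in for a streaming listener. */
    interface ChunkSink {
        void accept(byte[] buf, int len);
    }

    /**
     * Reads exactly literalCount bytes from the stream in chunks of at most
     * chunkSize, so peak memory is bounded by chunkSize rather than by the
     * literal size. Returns the total number of bytes delivered.
     */
    public static long readLiteral(InputStream in, long literalCount, int chunkSize,
            ChunkSink sink) throws IOException {
        byte[] buf = new byte[chunkSize];
        long remaining = literalCount;
        while (remaining > 0) {
            int want = (int) Math.min(buf.length, remaining);
            int got = in.read(buf, 0, want);
            if (got < 0) {
                throw new IOException("Connection closed without indication.");
            }
            sink.accept(buf, got); // deliver the chunk, then reuse the buffer
            remaining -= got;
        }
        return literalCount - remaining;
    }

    public static void main(String[] args) throws IOException {
        byte[] literal = new byte[10_000]; // stands in for an 80 MB body
        long[] calls = {0};
        long total = readLiteral(new ByteArrayInputStream(literal), literal.length, 1024,
                (buf, len) -> calls[0]++);
        System.out.println(total + " bytes in " + calls[0] + " chunks");
    }
}
```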
[jira] [Commented] (COLLECTIONS-701) StackOverflowError in SetUniqueList.add() when it receives itself
[ https://issues.apache.org/jira/browse/COLLECTIONS-701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853536#comment-17853536 ]

Gary D. Gregory commented on COLLECTIONS-701:
---------------------------------------------

Hello [~aikebah]
Thank you for your report.
Would you please provide a new unit test here so we can reproduce the exact issue you are seeing? Or feel free to create a PR with a test and fix.
TY.

> StackOverflowError in SetUniqueList.add() when it receives itself
> -----------------------------------------------------------------
>
>                 Key: COLLECTIONS-701
>                 URL: https://issues.apache.org/jira/browse/COLLECTIONS-701
>             Project: Commons Collections
>          Issue Type: Bug
>          Components: Collection
>    Affects Versions: 3.2.2
>            Reporter: Shin Hong
>            Priority: Critical
>             Fix For: 4.3
>
> Hi.
> We found that the following test case fails with a StackOverflowError:
> {code:java}
> test() {
>     SetUniqueList l = new SetUniqueList(new LinkedList());
>     l.add((Object) l);
> }{code}
> The add() call traps into an infinite recursion which crashes the program.
> From the stack trace, we found that the infinite recursion occurs in
> AbstractList.hashCode(), since it invokes hashCode() on each of the list's
> elements.
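The recursion the report traces to AbstractList.hashCode() does not require SetUniqueList at all: any AbstractList subclass that contains itself overflows the stack when hashed, because hashCode() hashes every element, including the list itself. A JDK-only demonstration (hypothetical class name):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: self-referential list hashing overflows the stack. */
public class SelfHashDemo {

    /** Returns true if hashing a list that contains itself overflows the stack. */
    public static boolean overflows() {
        List<Object> l = new ArrayList<>();
        l.add(l); // the list now contains itself
        try {
            // AbstractList.hashCode() hashes each element, recursing forever
            l.hashCode();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows());
    }
}
```

SetUniqueList.add() hits the same trap because it consults the backing Set, which calls hashCode() on the added element.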
[jira] [Commented] (IO-856) ListFiles should not fail on vanishing files
[ https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852969#comment-17852969 ]

Gary D. Gregory commented on IO-856:
------------------------------------

Hello [~thomas.hart...@gmail.com]
Thank you for your update.
I added a unit test to reproduce this kind of issue: a file deletion between creating the stream and collecting it. It passes on Windows, Ubuntu, and macOS, using Java 11, 17, and 21 (see https://github.com/apache/commons-io/actions). The new test is here:
[https://github.com/apache/commons-io/blob/97c4803e1c8f100756d24eff4fdfd631a08534dc/src/test/java/org/apache/commons/io/FileUtilsListFilesTest.java#L209-L223]
The best path forward would be for you to create a PR with a failing unit test we can debug. You could also try updating to the current version of Java 17 and see if that helps.
TY!

> ListFiles should not fail on vanishing files
> --------------------------------------------
>
>                 Key: IO-856
>                 URL: https://issues.apache.org/jira/browse/IO-856
>             Project: Commons IO
>          Issue Type: Bug
>          Components: Utilities
>    Affects Versions: 2.16.1
>            Reporter: Thomas Hartwig
>            Assignee: Gary D. Gregory
>            Priority: Major
>
> listFiles crashes when files vanish while it is listing. listFiles should
> simply list; the application should take care of files that no longer exist:
>
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
>     ... 12 more
>
> Use this to reproduce:
>
> package test;
>
> import org.apache.commons.io.FileUtils;
>
> import java.io.BufferedOutputStream;
> import java.io.File;
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.UUID;
>
> public class ApacheBug {
>     public static void main(String[] args) {
>         // create random directory in tmp, create the directory if it does not exist
>         final File dir = FileUtils.getTempDirectory();
>         if (!dir.exists()) {
>             if (!dir.mkdirs()) {
>                 throw new RuntimeException("could not create image file path: " + dir.getAbsolutePath());
>             }
>         }
>         // create random file in the directory
>         new Thread(() -> {
>             try {
>                 while (true) {
>                     final File file = Path.of(dir.getAbsolutePath(), UUID.randomUUID().toString() + ".png").toFile();
>                     new BufferedOutputStream(new FileOutputStream(file)).write("TEST".getBytes(StandardCharsets.UTF_8));
>                     file.delete();
>                 }
>             }
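For comparison, a vanishing-file-tolerant listing is possible with the JDK alone by skipping entries that disappear mid-iteration instead of letting the exception abort the walk. The sketch below uses a hypothetical class name and covers a single directory only, unlike the recursive FileUtils.listFiles call in the report:

```java
import java.io.IOException;
import java.nio.file.DirectoryIteratorException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/** Hypothetical sketch: list files while tolerating concurrent deletions. */
public class TolerantList {

    /**
     * Lists regular files in dir, skipping entries that vanish mid-iteration
     * instead of letting NoSuchFileException abort the whole listing.
     */
    public static List<Path> listFiles(Path dir) throws IOException {
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            Iterator<Path> it = stream.iterator();
            while (true) {
                Path p;
                try {
                    if (!it.hasNext()) {
                        break;
                    }
                    p = it.next();
                } catch (DirectoryIteratorException e) {
                    continue; // an entry vanished while iterating; skip it
                }
                // isRegularFile returns false (rather than throwing) for a
                // file deleted between the directory read and this check
                if (Files.isRegularFile(p)) {
                    result.add(p);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("tolerant");
        Files.createFile(dir.resolve("a.png"));
        System.out.println(listFiles(dir).size());
    }
}
```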
[jira] [Commented] (IO-856) ListFiles should not fail on vanishing files
[ https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852930#comment-17852930 ]

Gary D. Gregory commented on IO-856:
------------------------------------

Hello [~thomas.hart...@gmail.com]
Thank you for your report. Please specify:
* What OS?
* What Java version?
[jira] [Comment Edited] (IO-856) ListFiles should not fail on vanishing files
[ https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852930#comment-17852930 ]

Gary D. Gregory edited comment on IO-856 at 6/6/24 7:56 PM:
------------------------------------------------------------

Hello [~thomas.hart...@gmail.com]
Thank you for your report. Please specify:
* What OS? Some UNIX variant it seems.
* What Java version?

was (Author: garydgregory):
Hello [~thomas.hart...@gmail.com]
Thank you for your report. Please specify:
* What OS?
* What Java version?
[jira] [Updated] (IO-856) ListFiles should not fail on vanishing files
[ https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory updated IO-856:
-------------------------------
    Assignee: Gary D. Gregory
[jira] [Commented] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851778#comment-17851778 ]

Gary D. Gregory commented on CONFIGURATION-847:
-----------------------------------------------

PR merged to git master. Build in [https://repository.apache.org/content/repositories/snapshots/]

> Property with an empty string value are not processed in the current main (2.11.0-snapshot)
> -------------------------------------------------------------------------------------------
>
>                 Key: CONFIGURATION-847
>                 URL: https://issues.apache.org/jira/browse/CONFIGURATION-847
>             Project: Commons Configuration
>          Issue Type: Bug
>    Affects Versions: Nightly Builds
>            Reporter: Andrea Bollini
>            Assignee: Gary D. Gregory
>            Priority: Critical
>             Fix For: 2.11.0
>
> I hit a side effect of the recently solved
> https://issues.apache.org/jira/browse/CONFIGURATION-846.
> Assuming that we have a properties file as a configuration source like this:
> {{test.empty.property =}}
> and that we try to inject that property into a Spring bean:
> {{@Value("${test.empty.property}")}}
> {{private String emptyValue;}}
> we get an exception like: BeanDefinitionStore Invalid bean definition ...
> Could not resolve placeholder
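For context on the expected behavior: a plain java.util.Properties source already yields an empty string, not null, for a line like "test.empty.property =", which is what the Spring placeholder resolution above needs in order to succeed. A JDK-only sketch (hypothetical class name):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

/** Hypothetical sketch: an empty property value loads as "" rather than null. */
public class EmptyValueDemo {

    /** Loads "test.empty.property =" and returns the stored value. */
    public static String load() throws IOException {
        Properties props = new Properties();
        props.load(new StringReader("test.empty.property ="));
        // The key exists with an empty-string value; a lookup that reports
        // it as missing (null) would break placeholder resolution.
        return props.getProperty("test.empty.property");
    }

    public static void main(String[] args) throws IOException {
        System.out.println("[" + load() + "]");
    }
}
```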
[jira] [Resolved] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory resolved CONFIGURATION-847.
-------------------------------------------
    Resolution: Fixed
[jira] [Updated] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory updated CONFIGURATION-847:
------------------------------------------
    Assignee: Gary D. Gregory
[jira] [Commented] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)
[ https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851668#comment-17851668 ]

Gary D. Gregory commented on NET-731:
-------------------------------------

Hello [~fanningpj]
Thank you for your report.
What version are you using compared to the previous behavior?

> FTPSClient no longer supports fileTransferMode (eg DEFLATE)
> -----------------------------------------------------------
>
>                 Key: NET-731
>                 URL: https://issues.apache.org/jira/browse/NET-731
>             Project: Commons Net
>          Issue Type: Task
>          Components: FTP
>            Reporter: PJ Fanning
>            Priority: Major
>
> The new openDataSecureConnection method in FTPSClient does not support
> fileTransferMode (eg DEFLATE):
> https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9
> The FTPSClient code used to delegate to the FTPClient _openDataConnection_
> method:
> [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]
> That method supports `wrapOnDeflate` while openDataSecureConnection does not.
> I'm not sure whether FTPS supports DEFLATE transfer mode, but I spotted the
> difference while implementing an Apache Pekko workaround for NET-718.
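Conceptually, DEFLATE transfer mode (MODE Z) wraps the data connection in a compressing stream on send and a decompressing stream on receive, which is what the missing `wrapOnDeflate` step provides. A JDK-only sketch of that wrapping, with a hypothetical class name and byte arrays standing in for the socket streams:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

/** Hypothetical sketch: wrapping a data stream the way MODE Z would. */
public class DeflateWrapDemo {

    /** Compresses data the way a MODE Z sender would wrap the data connection. */
    public static byte[] wrapAndSend(byte[] data) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(wire)) {
            out.write(data);
        } // closing flushes and finishes the deflate stream
        return wire.toByteArray();
    }

    /** Unwraps a received MODE Z stream back into the original bytes. */
    public static byte[] receiveAndUnwrap(byte[] wire) throws IOException {
        try (InflaterInputStream in = new InflaterInputStream(new ByteArrayInputStream(wire))) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "RETR payload, compressed on the wire".getBytes();
        byte[] roundTrip = receiveAndUnwrap(wrapAndSend(original));
        System.out.println(new String(roundTrip));
    }
}
```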
[jira] [Resolved] (IO-831) Add getInputStream() for 'https' & 'http' in URIOrigin
[ https://issues.apache.org/jira/browse/IO-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary D. Gregory resolved IO-831.
--------------------------------
    Fix Version/s: 2.16.2
       Resolution: Fixed

> Add getInputStream() for 'https' & 'http' in URIOrigin
> ------------------------------------------------------
>
>                 Key: IO-831
>                 URL: https://issues.apache.org/jira/browse/IO-831
>             Project: Commons IO
>          Issue Type: Bug
>            Reporter: Elliotte Rusty Harold
>            Priority: Major
>             Fix For: 2.16.2
>
> I think file URLs might work, but http/https URLs, which are much more
> common, don't. I'm not yet sure if this can be fixed without changing the
> API.
>
> @Test
> public void testReadFromURL() throws URISyntaxException, IOException {
>     final URIOrigin origin = new URIOrigin(new URI("https://www.yahoo.com"));
>     try (final InputStream in = origin.getInputStream()) {
>         assertNotEquals(-1, in.read());
>     }
> }
>
> java.nio.file.FileSystemNotFoundException: Provider "https" not installed
>     at java.nio.file.Paths.get(Paths.java:147)
>     at org.apache.commons.io.build.AbstractOrigin$URIOrigin.getPath(AbstractOrigin.java:402)
>     at org.apache.commons.io.build.AbstractOrigin.getInputStream(AbstractOrigin.java:540)
>     at org.apache.commons.io.build.URIOriginTest.testReadFromURL(URIOriginTest.java:47)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at java.util.ArrayList.forEach(ArrayList.java:1257)
>     at java.util.ArrayList.forEach(ArrayList.java:1257)
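The stack trace shows the failure happens inside the Paths.get(URI) provider lookup, so it reproduces without any network access: the JDK registers no "https" FileSystemProvider by default. A JDK-only sketch (hypothetical class name):

```java
import java.net.URI;
import java.nio.file.FileSystemNotFoundException;
import java.nio.file.Paths;

/** Hypothetical sketch: why Paths.get(URI) cannot serve http/https URIs. */
public class UriOriginPitfall {

    /** Returns true if resolving the URI via the file-system API fails. */
    public static boolean pathLookupFails(String uri) {
        try {
            Paths.get(URI.create(uri)); // looks for an installed FileSystemProvider
            return false;
        } catch (FileSystemNotFoundException e) {
            return true; // no "https" provider is registered by default
        }
    }

    public static void main(String[] args) {
        // Fails locally, before any network I/O is attempted:
        System.out.println(pathLookupFails("https://www.yahoo.com/"));
        // A URL-based origin would instead use uri.toURL().openStream(),
        // which does understand http/https (network access required).
    }
}
```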
[jira] [Updated] (IO-831) Add getInputStream() for 'https' & 'http' in URIOrigin
[ https://issues.apache.org/jira/browse/IO-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated IO-831: --- Summary: Add getInputStream() for 'https' & 'http' in URIOrigin (was: http URI origins don't work) > Add getInputStream() for 'https' & 'http' in URIOrigin > -- > > Key: IO-831 > URL: https://issues.apache.org/jira/browse/IO-831 > Project: Commons IO > Issue Type: Bug >Reporter: Elliotte Rusty Harold >Priority: Major > > > I think file URLs might work but http/https URLs, much more common, don't. > I'm not yet sure if this can be fixed without changing the API. > @Test > public void testReadFromURL() throws URISyntaxException, IOException { > final URIOrigin origin = new URIOrigin(new URI("https://www.yahoo.com")); > try (final InputStream in = origin.getInputStream()) { > assertNotEquals(-1, in.read()); > } > } > java.nio.file.FileSystemNotFoundException: Provider "https" not installed > at java.nio.file.Paths.get(Paths.java:147) > at > org.apache.commons.io.build.AbstractOrigin$URIOrigin.getPath(AbstractOrigin.java:402) > at > org.apache.commons.io.build.AbstractOrigin.getInputStream(AbstractOrigin.java:540) > at > org.apache.commons.io.build.URIOriginTest.testReadFromURL(URIOriginTest.java:47) > at java.lang.reflect.Method.invoke(Method.java:498) > at java.util.ArrayList.forEach(ArrayList.java:1257) > at java.util.ArrayList.forEach(ArrayList.java:1257) -- This message was sent by Atlassian Jira (v8.20.10#820010)
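The stack trace above shows Paths.get(URI) failing because no NIO FileSystemProvider exists for "https". One possible shape of a fix is to dispatch on the URI scheme; the helper below is a hypothetical sketch for illustration, not the actual Commons IO implementation.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper, not the actual Commons IO code.
public class SchemeAwareOrigin {

    // http/https have no NIO FileSystemProvider, so Paths.get(uri) throws
    // FileSystemNotFoundException for them; fall back to URL.openStream().
    public static InputStream open(final URI uri) throws IOException {
        final String scheme = uri.getScheme();
        if ("http".equalsIgnoreCase(scheme) || "https".equalsIgnoreCase(scheme)) {
            return uri.toURL().openStream();
        }
        return Files.newInputStream(Paths.get(uri));
    }
}
```

With this dispatch, file: URIs keep going through NIO while http/https URIs use the URL stream handler, which sidesteps the FileSystemNotFoundException.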
[jira] [Commented] (CLI-335) Defining Default Properties documentation has errors.
[ https://issues.apache.org/jira/browse/CLI-335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851167#comment-17851167 ] Gary D. Gregory commented on CLI-335: - [~claude] Thank you for finding this hole in our documentation. Would you please provide a PR? > Defining Default Properties documentation has errors. > - > > Key: CLI-335 > URL: https://issues.apache.org/jira/browse/CLI-335 > Project: Commons CLI > Issue Type: Bug > Components: Documentation >Affects Versions: 1.8.0 >Reporter: Claude Warren >Priority: Major > > https://commons.apache.org/proper/commons-cli/properties.html specifically > links to the deprecated OptionBuilder class. It should reference the > Option.Builder (note the dot) class. > In addition there are methods defined in Option.Builder that are not > described in the properties document. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NET-709) IMAP Memory considerations with large ‘FETCH’ sizes.
[ https://issues.apache.org/jira/browse/NET-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated NET-709: Fix Version/s: 3.11.1 (was: 3.11.0) > IMAP Memory considerations with large ‘FETCH’ sizes. > > > Key: NET-709 > URL: https://issues.apache.org/jira/browse/NET-709 > Project: Commons Net > Issue Type: Improvement > Components: IMAP >Affects Versions: 3.8.0 >Reporter: Anders >Priority: Minor > Labels: IMAP, buffer, chunking, large, literal, memory, partial > Fix For: 3.11.1 > > Original Estimate: 96h > Remaining Estimate: 96h > > h2. *IMAP Memory considerations with large ‘FETCH’ sizes.* > > The following comments concern classes in the [org.apache.commons.net.imap|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/package-summary.html] package. > > Consider the following IMAP ‘FETCH’ exchange between a client (>) and server (<): > {{> A654 FETCH 1:2 (BODY[TEXT])}} > {{< * 1 FETCH (BODY[TEXT] {8000}\r\n …}} > {{< * 2 FETCH …}} > {{< A654 OK FETCH completed}} > > The first untagged response (* 1 FETCH …) contains a literal {8000} or 80MB. > > After reviewing the [source|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298], it is my understanding that the entire 80MB sequence of data will be read into Java memory even when using ‘[IMAPChunkListener|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.IMAPChunkListener.html]’. According to the documentation: > {quote}Implement this interface and register it via [IMAP.setChunkListener(IMAPChunkListener)|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.html#setChunkListener-org.apache.commons.net.imap.IMAP.IMAPChunkListener-] in order to get access to multi-line partial command responses. Useful when processing large FETCH responses.{quote} > > It is apparent the partial fetch response is read in full (80MB) before invoking the ‘IMAPChunkListener’ and then discarding the read lines (freeing up memory). > > Back to the example: > > A654 FETCH 1:2 (BODY[TEXT]) > < * 1 FETCH (BODY[TEXT] {8000}\r\n > …. <— read in full into memory, then discarded after calling IMAPChunkListener > < * 2 FETCH (BODY[TEXT] {250}\r\n > …. <— read in full into memory, then discarded after calling IMAPChunkListener > < A654 OK FETCH completed > > Above, you can see the chunk listener is good for each individual partial fetch response but does not prevent a large partial from being loaded into memory. > > Let’s review the [code|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298] (line numbers refer to IMAP.java at that revision): > > 296 int literalCount = IMAPReply.literalCount(line); > Above counts the size of the literal, in our case 8000, or 80MB (for the first partial fetch response). > > 297 final boolean isMultiLine = literalCount >= 0; > 298 while (literalCount >= 0) { > 299 line = _reader.readLine(); > 300 if (line == null) { throw new EOFException("Connection closed without indication."); } > 303 replyLines.add(line);
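The improvement the reporter is asking for can be sketched as a bounded-memory drain of the literal: instead of accumulating lines until the whole literal is buffered, consume it in fixed-size chunks. This is an illustrative sketch only (class, interface, and method names are hypothetical), not commons-net code.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch, not commons-net code: consume an N-byte IMAP literal
// in fixed-size chunks so memory use is bounded by the buffer size rather
// than by the literal size.
public class ChunkedLiteralReader {

    public interface ChunkSink {
        void accept(byte[] buffer, int length);
    }

    // Reads exactly literalCount bytes from the stream, handing each chunk to
    // the sink as it arrives. Returns the number of bytes consumed.
    public static long drain(final InputStream in, final long literalCount,
                             final int bufferSize, final ChunkSink sink) throws IOException {
        final byte[] buffer = new byte[bufferSize];
        long remaining = literalCount;
        while (remaining > 0) {
            final int n = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (n == -1) {
                throw new IOException("Connection closed without indication.");
            }
            sink.accept(buffer, n);
            remaining -= n;
        }
        return literalCount - remaining;
    }
}
```

With this shape, an 80MB literal costs only one buffer of memory at a time, at the price of a streaming listener contract instead of the current line-list one.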
[jira] [Resolved] (CONFIGURATION-846) Unable to load multivalued configurations into Spring using ConfigurationPropertySource
[ https://issues.apache.org/jira/browse/CONFIGURATION-846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CONFIGURATION-846. --- Fix Version/s: 2.10.2 Resolution: Fixed > Unable to load multivalued configurations into Spring using > ConfigurationPropertySource > --- > > Key: CONFIGURATION-846 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-846 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: 2.10.0, 2.10.1 >Reporter: Tim Donohue >Priority: Minor > Fix For: 2.10.2 > > > We've run into an odd bug when using Commons Configuration v2 + Spring Boot > which I _believe_ is caused by changes in the PR > [https://github.com/apache/commons-configuration/pull/309] to address > https://issues.apache.org/jira/browse/CONFIGURATION-834. > During a routine upgrade from Commons Configuration v2.9.0 to v2.10.1, we > discovered that our multivalued configurations (i.e. an array or list of > values) were only loading the *first value* into Spring. In other words, it > seems to no longer be possible to load multivalued configurations into Spring > Beans via something like this: > {{@Value("${some.multivalued.prop}")}} > {{String[] myMultivaluedVariable;}} > I could be wrong, but I _believe_ it may be caused by the [change from > `getProperty()` to `getString()` in PR > 309|https://github.com/apache/commons-configuration/pull/309/files#diff-2f481434a16d50ce9df3af48f9e72fc8872050b0e8d1614fcd7420a8779db283R52], > because `getString()` is [documented to only return the *first value* in a > list of > values|https://commons.apache.org/proper/commons-configuration/userguide/howto_basicfeatures.html#List_handling] > {quote}Of interest is also the last line of the example fragment. Here the > `getString()` method is called for a property that has multiple values. This > call will return the first value of the list. > {quote} > I don't know of the proper solution to this issue. 
But I can confirm that > v2.9.0 works properly for multivalued configurations, while both v2.10.0 and > v2.10.1 do not (in both versions we see only the first value > loaded into Spring for multivalued configurations). > For our purposes, we are looking to create a custom > ConfigurationPropertySource to work around this issue in our codebase. > However, ideally, it'd be better to ensure the default > ConfigurationPropertySource is still able to handle multivalued > configurations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash
[ https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850490#comment-17850490 ] Gary D. Gregory commented on COLLECTIONS-855: - There is no compatibility to worry about: The code was part of the first milestone release, we can change it. We are not even talking about a binary or source compatibility issue. IMO, we should do what is best for the long term. > Update the EnhancedDoubleHasher to correct the cube component of the hash > - > > Key: COLLECTIONS-855 > URL: https://issues.apache.org/jira/browse/COLLECTIONS-855 > Project: Commons Collections > Issue Type: Bug > Components: Bloomfilter >Affects Versions: 4.5.0-M1 >Reporter: Alex Herbert >Priority: Blocker > > The EnhancedDoubleHasher currently computes the hash with the cube component > lagging by 1: > {noformat} > hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, > bits){noformat} > Correct this to the intended: > {noformat} > hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat} > This is a simple change in the current controlling loop from: > {code:java} > for (int i = 0; i < k; i++) { {code} > to: > {code:java} > for (int i = 1; i <= k; i++) { {code} > > Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list > (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]). -- This message was sent by Atlassian Jira (v8.20.10#820010)
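The corrected probe sequence from the formulas above can be sketched directly; the class and method names below are illustrative only and do not reflect the actual EnhancedDoubleHasher API.

```java
// Sketch of the intended enhanced double hashing sequence described above,
// not the actual EnhancedDoubleHasher source. For probe i, the increment
// gains a tetrahedral term (i*i*i - i)/6 on top of i*h2(x).
public class EnhancedDoubleHashSketch {

    // hash[i] = (h1 - i*h2 - (i*i*i - i)/6) wrapped into [0, bits), for i = 1..k.
    public static int[] indices(final int h1, final int h2, final int k, final int bits) {
        final int[] out = new int[k];
        for (int i = 1; i <= k; i++) {                    // corrected loop: 1..k
            final long cube = ((long) i * i * i - i) / 6; // 0, 1, 4, 10, ...
            final long v = ((long) h1 - (long) i * h2 - cube) % bits;
            out[i - 1] = (int) ((v + bits) % bits);       // wrap negatives into [0, bits)
        }
        return out;
    }
}
```

For i = 1..4 the cube term evaluates to 0, 1, 4, 10, which is exactly the sequence the lagging (i-1) variant produced one probe too late.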
[jira] [Resolved] (LANG-1733) Add null-safe Consumers.accept() and Functions.apply()
[ https://issues.apache.org/jira/browse/LANG-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved LANG-1733. --- Fix Version/s: 3.15.0 Resolution: Fixed > Add null-safe Consumers.accept() and Functions.apply() > -- > > Key: LANG-1733 > URL: https://issues.apache.org/jira/browse/LANG-1733 > Project: Commons Lang > Issue Type: New Feature >Reporter: Jongjin Bae >Priority: Major > Fix For: 3.15.0 > > > I have a new suggestion about null handling. > I usually check whether an object is null before using it, to avoid an NPE. > It is pretty obvious, but it is quite cumbersome and adds some overhead. > So I want to introduce the following null-safety methods in the ObjectUtils class > and make it easy to handle null without using if/else statements, the Optional class, etc. > {code:java} > public static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) { > return object != null ? function.apply(object) : null; > } > public static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) { > if (object != null) { > consumer.accept(object); > } > } > {code} > What do you think about it? > If it looks good, I will implement this feature. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (LANG-1733) Add null-safe Consumers.accept() and Functions.apply()
[ https://issues.apache.org/jira/browse/LANG-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated LANG-1733: -- Summary: Add null-safe Consumers.accept() and Functions.apply() (was: `null` handling feature in ObjectUtils) > Add null-safe Consumers.accept() and Functions.apply() > -- > > Key: LANG-1733 > URL: https://issues.apache.org/jira/browse/LANG-1733 > Project: Commons Lang > Issue Type: New Feature >Reporter: Jongjin Bae >Priority: Major > > > I have a new suggestion about null handling. > I usually check whether an object is null before using it, to avoid an NPE. > It is pretty obvious, but it is quite cumbersome and adds some overhead. > So I want to introduce the following null-safety methods in the ObjectUtils class > and make it easy to handle null without using if/else statements, the Optional class, etc. > {code:java} > public static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) { > return object != null ? function.apply(object) : null; > } > public static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) { > if (object != null) { > consumer.accept(object); > } > } > {code} > What do you think about it? > If it looks good, I will implement this feature. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
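The proposal can be written out as a self-contained, runnable sketch. The class name NullSafe is illustrative only; per the issue summary, the resolved feature landed as Consumers.accept() and Functions.apply() rather than on ObjectUtils.

```java
import java.util.function.Consumer;
import java.util.function.Function;

// Runnable sketch of the proposal above; the class name is illustrative.
public class NullSafe {

    // Apply the function only when the object is non-null; otherwise return null.
    public static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) {
        return object != null ? function.apply(object) : null;
    }

    // Invoke the consumer only when the object is non-null.
    public static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) {
        if (object != null) {
            consumer.accept(object);
        }
    }
}
```

For example, `applyIfNotNull("abc", String::length)` yields 3, while a null input short-circuits to null instead of throwing an NPE.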
[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash
[ https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850338#comment-17850338 ] Gary D. Gregory commented on COLLECTIONS-855: - PRs welcome :) I'd like to go for an M2 release next, which I can cut anytime. > Update the EnhancedDoubleHasher to correct the cube component of the hash > - > > Key: COLLECTIONS-855 > URL: https://issues.apache.org/jira/browse/COLLECTIONS-855 > Project: Commons Collections > Issue Type: Bug > Components: Bloomfilter >Affects Versions: 4.5.0-M1 >Reporter: Alex Herbert >Priority: Blocker > > The EnhancedDoubleHasher currently computes the hash with the cube component > lagging by 1: > {noformat} > hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, > bits){noformat} > Correct this to the intended: > {noformat} > hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat} > This is a simple change in the current controlling loop from: > {code:java} > for (int i = 0; i < k; i++) { {code} > to: > {code:java} > for (int i = 1; i <= k; i++) { {code} > > Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list > (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (VFS-853) Due to double weak references the file listener are not executed
[ https://issues.apache.org/jira/browse/VFS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved VFS-853. - Resolution: Fixed PR merged. TY [~b.eckenfels]! > Due to double weak references the file listener are not executed > > > Key: VFS-853 > URL: https://issues.apache.org/jira/browse/VFS-853 > Project: Commons VFS > Issue Type: Bug >Reporter: Bernd Eckenfels >Assignee: Bernd Eckenfels >Priority: Major > Fix For: 2.10.0 > > > On DelegatedFileObjects the Listener is registered with a WeakReference > listener. The original code which did that has an (erroneous) duplication of > listeners. This leads to the problem that the "middle" listener is never > referenced and therefore immediately collected, which in turn leads to > removal of the "outer" listener. > I have added a testcase which reproduces the problem and does not fail when > the duplication is removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (CLI-329) Support "Deprecated" CLI Options
[ https://issues.apache.org/jira/browse/CLI-329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CLI-329. - Resolution: Fixed > Support "Deprecated" CLI Options > > > Key: CLI-329 > URL: https://issues.apache.org/jira/browse/CLI-329 > Project: Commons CLI > Issue Type: New Feature >Reporter: Eric Pugh >Assignee: Gary D. Gregory >Priority: Major > Fix For: 1.7.0 > > > Per [https://lists.apache.org/thread/zj63psowkjvox3v3pr4zl7mdjtddk9zd] it > would be nice if as your CLI evolves you could mark a command line option as > deprecated. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (CLI-334) Fix Javadoc pathing
[ https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850086#comment-17850086 ] Gary D. Gregory commented on CLI-334: - PR merged. TY [~epugh]. > Fix Javadoc pathing > --- > > Key: CLI-334 > URL: https://issues.apache.org/jira/browse/CLI-334 > Project: Commons CLI > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.8.0 >Reporter: Eric Pugh >Priority: Minor > Fix For: 1.8.1 > > > I found some urls on the site to the javadocs that aren't quite right... -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (CLI-334) Fix Javadoc pathing
[ https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CLI-334. - Fix Version/s: 1.8.1 Resolution: Fixed > Fix Javadoc pathing > --- > > Key: CLI-334 > URL: https://issues.apache.org/jira/browse/CLI-334 > Project: Commons CLI > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.8.0 >Reporter: Eric Pugh >Priority: Minor > Fix For: 1.8.1 > > > I found some urls on the site to the javadocs that aren't quite right... -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (CLI-334) Fix Javadoc pathing
[ https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated CLI-334: Summary: Fix Javadoc pathing (was: Some bad javadoc links) > Fix Javadoc pathing > --- > > Key: CLI-334 > URL: https://issues.apache.org/jira/browse/CLI-334 > Project: Commons CLI > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.8.0 >Reporter: Eric Pugh >Priority: Minor > > I found some urls on the site to the javadocs that aren't quite right... -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (NET-730) Cannot connect to FTP server with HTTP proxy
[ https://issues.apache.org/jira/browse/NET-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved NET-730. - Fix Version/s: 3.11.0 Resolution: Fixed [~jthalmai] Fixed in git master and snapshot builds in https://repository.apache.org/content/repositories/snapshots/commons-net/commons-net/3.11.0-SNAPSHOT/ Please verify with your use case and let me know. > Cannot connect to FTP server with HTTP proxy > > > Key: NET-730 > URL: https://issues.apache.org/jira/browse/NET-730 > Project: Commons Net > Issue Type: Bug > Components: FTP >Affects Versions: 3.10.0 >Reporter: Johannes Thalmair >Assignee: Gary D. Gregory >Priority: Major > Fix For: 3.11.0 > > > After updating from Commons Net 3.9.0 to 3.10.0, I can no longer connect to > an FTP server with an HTTP proxy that requires authorization. Sadly, I do not > have direct access to that server and don't know which proxy is running > there. An attempt to connect just blocks for 5 minutes and then fails with an > IOException: No response from proxy > at org.apache.commons.net.ftp.FTPHTTPClient.tunnelHandshake(FTPHTTPClient.java:209) > at org.apache.commons.net.ftp.FTPHTTPClient.connect(FTPHTTPClient.java:173) > > I'm using org.apache.commons.net.ftp.FTPHTTPClient for connecting and > have already done some debugging. The change that causes my problem is the switch > from the deprecated > org.apache.commons.net.util.Base64.encodeToString(byte[]) to > java.util.Base64.getEncoder().encodeToString(byte[]) to encode the > Proxy-Authorization header in FTPHTTPClient.tunnelHandshake() (see > [https://github.com/apache/commons-net/commit/396bade29ad98d20a2c039ac561db56b63018b39]) > The old encoding method appended a CRLF ("\r\n") to the end of the String, > while the new one does not. This specific proxy seems to expect it; I don't > know if others do, too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
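The behavioral difference can be demonstrated with the JDK encoder alone: the basic java.util.Base64 encoder never emits a trailing CRLF, so restoring the old wire format means appending it explicitly. The helper below is illustrative only, not the actual FTPHTTPClient code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative helper, not FTPHTTPClient code: the JDK's basic Base64
// encoder emits no trailing CRLF, so the pre-3.10.0 wire format is
// restored by appending "\r\n" after encoding.
public class ProxyAuthHeader {

    public static String encodeWithCrlf(final String userAndPassword) {
        return Base64.getEncoder()
                .encodeToString(userAndPassword.getBytes(StandardCharsets.UTF_8)) + "\r\n";
    }
}
```

For example, `encodeWithCrlf("user:pass")` produces `dXNlcjpwYXNz` followed by CRLF, matching what the deprecated commons-net encoder sent.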
[jira] [Updated] (NET-730) Cannot connect to FTP server with HTTP proxy
[ https://issues.apache.org/jira/browse/NET-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated NET-730: Assignee: Gary D. Gregory > Cannot connect to FTP server with HTTP proxy > > > Key: NET-730 > URL: https://issues.apache.org/jira/browse/NET-730 > Project: Commons Net > Issue Type: Bug > Components: FTP >Affects Versions: 3.10.0 >Reporter: Johannes Thalmair >Assignee: Gary D. Gregory >Priority: Major > > > After updating from Commons Net 3.9.0 to 3.10.0, I can no longer connect to > an FTP server with an HTTP proxy that requires authorization. Sadly, I do not > have direct access to that server and don't know which proxy is running > there. An attempt to connect just blocks for 5 minutes and then fails with an > IOException: No response from proxy > at org.apache.commons.net.ftp.FTPHTTPClient.tunnelHandshake(FTPHTTPClient.java:209) > at org.apache.commons.net.ftp.FTPHTTPClient.connect(FTPHTTPClient.java:173) > > I'm using org.apache.commons.net.ftp.FTPHTTPClient for connecting and > have already done some debugging. The change that causes my problem is the switch > from the deprecated > org.apache.commons.net.util.Base64.encodeToString(byte[]) to > java.util.Base64.getEncoder().encodeToString(byte[]) to encode the > Proxy-Authorization header in FTPHTTPClient.tunnelHandshake() (see > [https://github.com/apache/commons-net/commit/396bade29ad98d20a2c039ac561db56b63018b39]) > The old encoding method appended a CRLF ("\r\n") to the end of the String, > while the new one does not. This specific proxy seems to expect it; I don't > know if others do, too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)
[ https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849453#comment-17849453 ] Gary D. Gregory commented on TEXT-217: -- Hi [~claude] This could both be useful but also a source of endless requests and/or bugs, which is why I would like to see exact format definitions or links to definitions. For example, for me, camel case starts with a lowercase, like a Java method name. I have many projects at work that use custom converters but I am not sure how this would fit in because I prefer to see real tests than what is currently in PR 552. It also does not seem to match what I think of camel case. This is what I have custom converters for at work in different products: The input can be any of XML Schema, WSDL (SOAP), Swagger 2, Open API 3.x, COBOL copybooks, database tables (table names and column names), and probably other specifications I can't recall. I've worked on many products! ;-) Then, for example, I need to take an XML element name and make that a Java class name and/or a method name; the same for an XML attribute name. Another example is taking XML element and attribute names and turning those into Swagger 2 and Open API 3 keys. In the case of converting into XML or into Open API, it's not good enough for the names to be legal, they have to be "pretty", in the conventions of a format. In Open API, that's camel case starting with a lowercase letter. For XML, there are different conventions, so we pick one. > Snake case utility method: CaseUtils.toSnakeCase() > -- > > Key: TEXT-217 > URL: https://issues.apache.org/jira/browse/TEXT-217 > Project: Commons Text > Issue Type: New Feature >Affects Versions: 1.9 >Reporter: Adil Iqbal >Assignee: Claude Warren >Priority: Major > Fix For: 1.12.1 > > Time Spent: 1h > Remaining Estimate: 0h > > Requesting a feature to convert any string to snake case, as per > CaseUtils.toCamelCase(...) 
> *Rationale:* > As per the OpenAPI Specification 3.0, keys should be in snake case. There is > currently no common utility that can be used to accomplish that task. > Any interaction between Java and Python is hindered, since Python uses snake > case as a best practice. > *Feature Set Requested:* > All features currently included in CaseUtils.toCamelCase(...) sans > capitalization flag. As you know, the capitalization flag was implemented to > support PascalCase, which is a convention even in Java, for many situations. > There is no equivalent for snake case. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)
[ https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849452#comment-17849452 ] Gary D. Gregory commented on TEXT-217: -- Where does OpenAPI require snake case for keys? I only see camel case in https://spec.openapis.org/oas/v3.1.0, for example "termsOfService". > Snake case utility method: CaseUtils.toSnakeCase() > -- > > Key: TEXT-217 > URL: https://issues.apache.org/jira/browse/TEXT-217 > Project: Commons Text > Issue Type: New Feature >Affects Versions: 1.9 >Reporter: Adil Iqbal >Assignee: Claude Warren >Priority: Major > Fix For: 1.12.1 > > Time Spent: 1h > Remaining Estimate: 0h > > Requesting a feature to convert any string to snake case, as per > CaseUtils.toCamelCase(...) > *Rationale:* > As per the OpenAPI Specification 3.0, keys should be in snake case. There is > currently no common utility that can be used to accomplish that task. > Any interaction between Java and Python is hindered, since Python uses snake > case as a best practice. > *Feature Set Requested:* > All features currently included in CaseUtils.toCamelCase(...) sans > capitalization flag. As you know, the capitalization flag was implemented to > support PascalCase, which is a convention even in Java, for many situations. > There is no equivalent for snake case. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (TEXT-234) Improve StrBuilder documentation for new line text
[ https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849418#comment-17849418 ] Gary D. Gregory edited comment on TEXT-234 at 5/25/24 2:13 AM: --- Hello [~tobiaskiecker] All set? was (Author: garydgregory): Helo [~tobiaskiecker] All set? > Improve StrBuilder documentation for new line text > -- > > Key: TEXT-234 > URL: https://issues.apache.org/jira/browse/TEXT-234 > Project: Commons Text > Issue Type: Improvement >Affects Versions: 1.12.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ have > ambiguous documentation. If someone were to extend the class and override > _appendNewLine_ null would not be handled anymore. > The docstring of s{_}etNewlineText{_} implies that THIS function does the > handling, while in truth it is done in _appendNewLine._ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (TEXT-234) Improve StrBuilder documentation for new line text
[ https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849418#comment-17849418 ] Gary D. Gregory commented on TEXT-234: -- Helo [~tobiaskiecker] All set? > Improve StrBuilder documentation for new line text > -- > > Key: TEXT-234 > URL: https://issues.apache.org/jira/browse/TEXT-234 > Project: Commons Text > Issue Type: Improvement >Affects Versions: 1.12.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ have > ambiguous documentation. If someone were to extend the class and override > _appendNewLine_ null would not be handled anymore. > The docstring of s{_}etNewlineText{_} implies that THIS function does the > handling, while in truth it is done in _appendNewLine._ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (CLI-321) Add and use a Converter interface and implementations without using BeanUtils
[ https://issues.apache.org/jira/browse/CLI-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated CLI-321: Fix Version/s: 1.8.1 (was: 1.8.0) > Add and use a Converter interface and implementations without using BeanUtils > -- > > Key: CLI-321 > URL: https://issues.apache.org/jira/browse/CLI-321 > Project: Commons CLI > Issue Type: Improvement > Components: Parser >Affects Versions: 1.6.0 >Reporter: Claude Warren >Assignee: Claude Warren >Priority: Minor > Fix For: 1.8.1 > > > The current TypeHandler implementation notes indicate that the > BeanUtils.Converters should be used to create instances of the various types. > This issue is to complete the implementation of TypeHandler so that it uses > the BeanUtils.Converters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (CLI-322) Allow minus for kebab-case options
[ https://issues.apache.org/jira/browse/CLI-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated CLI-322: Fix Version/s: 1.8.1 (was: 1.8.0) > Allow minus for kebab-case options > -- > > Key: CLI-322 > URL: https://issues.apache.org/jira/browse/CLI-322 > Project: Commons CLI > Issue Type: New Feature > Components: Parser >Affects Versions: 1.6.0 >Reporter: Claude Warren >Assignee: Claude Warren >Priority: Minor > Fix For: 1.8.1 > > > Currently minus (“-“) is not allowed in option names, > which makes common long options in kebab-case > (like {{--is-not-allowed}}) impossible. > This change is to allow it inside an option name. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (LOGGING-192) NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader
[ https://issues.apache.org/jira/browse/LOGGING-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849119#comment-17849119 ] Gary D. Gregory commented on LOGGING-192: - CC [~pkarwasz] > NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using > custom classloader > -- > > Key: LOGGING-192 > URL: https://issues.apache.org/jira/browse/LOGGING-192 > Project: Commons Logging > Issue Type: Bug >Affects Versions: 1.3.0, 1.3.1, 1.3.2 > Environment: This behavior was observed while running Adopt Open JDK > 11 and the latest version of Tomcat 9. The behavior can be reproduced > outside of Tomcat (see attached reproduction case). >Reporter: Dave Dority >Priority: Major > Attachments: commons-logging-classloading-issue.zip > > > If you have: > * A web application running in Tomcat which contains commons-logging:1.2 > * That web application contains a custom classloader for loading a > separately distributed software component (whose dependencies will conflict > with the dependencies of the web application). > * The software component uses commons-logging:1.3.2 > When the web application attempts to use the software component, the code > [here|https://github.com/apache/commons-logging/blob/rel/commons-logging-1.3.2/src/main/java/org/apache/commons/logging/LogFactory.java#L918-L938] > looks for the presence of different logging implementation classes on the > thread context classloader's (TCCL) classpath to select an optimal > implementation. It seems that the LogFactory class > looks for implementation classes on the TCCL's classpath and then tries to > load the selected factory from the web application's custom classloader (the > loader for the instance of LogFactory that is running). 
This is the result: > {code:java} > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/logging/log4j/spi/LoggerAdapter > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.commons.logging.LogFactory.createFactory(LogFactory.java:419) > at > org.apache.commons.logging.LogFactory.lambda$newFactory$3(LogFactory.java:1431) > at java.base/java.security.AccessController.doPrivileged(Native > Method) > at > org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:1431) > at > org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:928) > at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:987) > at > org.component.ClassLoadedComponent.(ClassLoadedComponent.java:7) > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at > java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) > at java.base/java.lang.Class.newInstance(Class.java:584){code} > This occurs when the web application has commons-logging:1.2 and the software > component has commons-logging:1.3.x. This does not occur when both are using > version 1.2. > Unfortunately, changing the web application's version of commons-logging is > outside is not something I can influence. > An isolated reproduction case is attached. It requires Java 11. To run it: > * Unzip it to a directory. > * Run > {code:java} > ./gradlew reproduceIssue{code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
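The mismatch described above can be sketched with plain JDK calls: a class is probed for on one classloader, but loaded through another. The probe helper below is illustrative only and is not commons-logging's actual code; on a machine without log4j-api on either classpath, both probes simply report false.

```java
// Hedged sketch of the failure mode in the report: the presence of a
// logging adapter is probed on the thread context classloader (TCCL),
// but the selected factory is then loaded through a different loader.
public class LoaderMismatch {
    // Probe for a class on a specific loader without initializing it.
    public static boolean isPresent(String className, ClassLoader loader) {
        try {
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        ClassLoader own = LoaderMismatch.class.getClassLoader();
        // LogFactory-style probe against the TCCL...
        boolean onTccl = isPresent("org.apache.logging.log4j.spi.LoggerAdapter", tccl);
        // ...but instantiation happens via the owning loader. If 'own' cannot
        // see log4j-api while the TCCL can, loading fails at runtime, matching
        // the NoClassDefFoundError in the stack trace above.
        boolean onOwn = isPresent("org.apache.logging.log4j.spi.LoggerAdapter", own);
        System.out.println("TCCL sees adapter: " + onTccl
                + ", owning loader sees adapter: " + onOwn);
    }
}
```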
[jira] [Updated] (LOGGING-192) NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader
[ https://issues.apache.org/jira/browse/LOGGING-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated LOGGING-192: Fix Version/s: (was: 2.0) (was: 1.3.3) > NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using > custom classloader > -- > > Key: LOGGING-192 > URL: https://issues.apache.org/jira/browse/LOGGING-192 > Project: Commons Logging > Issue Type: Bug >Affects Versions: 1.3.0, 1.3.1, 1.3.2 > Environment: This behavior was observed while running Adopt Open JDK > 11 and the latest version of Tomcat 9. The behavior can be reproduced > outside of Tomcat (see attached reproduction case). >Reporter: Dave Dority >Priority: Major > Attachments: commons-logging-classloading-issue.zip > > > If you have: > * A web application running in Tomcat which contains commons-logging:1.2 > * That web application contains a custom classloader for loading a > separately distributed software component (whose dependencies will conflict > with the dependencies of the web application). > * The software component uses commons-logging:1.3.2 > When the web application attempts to use the software component, the code > [here|https://github.com/apache/commons-logging/blob/rel/commons-logging-1.3.2/src/main/java/org/apache/commons/logging/LogFactory.java#L918-L938] > looks for the presence of different logging implementation classes on the > thread context classloader's (TCCL) classpath to select an optimal > implementation. It seems that the LogFactory class > looks for an implementation class on the TCCL's classpath and then tries to > load the selected factory from the web application's custom classloader (the > loader for the instance of LogFactory that is running). 
This is the result: > {code:java} > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/logging/log4j/spi/LoggerAdapter > at java.base/java.lang.Class.forName0(Native Method) > at java.base/java.lang.Class.forName(Class.java:315) > at > org.apache.commons.logging.LogFactory.createFactory(LogFactory.java:419) > at > org.apache.commons.logging.LogFactory.lambda$newFactory$3(LogFactory.java:1431) > at java.base/java.security.AccessController.doPrivileged(Native > Method) > at > org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:1431) > at > org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:928) > at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:987) > at > org.component.ClassLoadedComponent.(ClassLoadedComponent.java:7) > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at > java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) > at java.base/java.lang.Class.newInstance(Class.java:584){code} > This occurs when the web application has commons-logging:1.2 and the software > component has commons-logging:1.3.x. This does not occur when both are using > version 1.2. > Unfortunately, changing the web application's version of commons-logging is > not something I can influence. > An isolated reproduction case is attached. It requires Java 11. To run it: > * Unzip it to a directory. > * Run > {code:java} > ./gradlew reproduceIssue{code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved LANG-1736. --- Resolution: Information Provided > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
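For context, the collisions this report describes follow directly from the `Map.Entry` hash contract (key hash XOR value hash), which the comments on this ticket say `Pair` deliberately mirrors. The helper below is an illustrative re-implementation of that contract, not the commons-lang source:

```java
// Sketch of the Map.Entry-style hash that Pair follows per this ticket:
// key.hashCode() ^ value.hashCode(). Collisions are inherent to XOR.
public class PairHashDemo {
    public static int entryHash(Object key, Object value) {
        return java.util.Objects.hashCode(key) ^ java.util.Objects.hashCode(value);
    }

    public static void main(String[] args) {
        // Swapped pairs always collide, since XOR is commutative: 1 ^ 2 == 2 ^ 1.
        System.out.println(entryHash(1, 2) == entryHash(2, 1)); // true
        // Pairs of equal halves all hash to 0, since x ^ x == 0.
        System.out.println(entryHash(10, 10)); // 0
    }
}
```

Whether this matters in practice depends on the key distribution; `HashMap` additionally spreads hashes internally, which is part of why the issue was resolved as "Information Provided".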
[jira] [Resolved] (LANG-1735) Fix Javadoc for FluentBitSet.setInclusive(int, int)
[ https://issues.apache.org/jira/browse/LANG-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved LANG-1735. --- Fix Version/s: 3.15.0 Resolution: Fixed > Fix Javadoc for FluentBitSet.setInclusive(int, int) > --- > > Key: LANG-1735 > URL: https://issues.apache.org/jira/browse/LANG-1735 > Project: Commons Lang > Issue Type: Improvement > Components: lang.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > Fix For: 3.15.0 > > > Documentation states toIndex "exclusive" should be "inclusive" based on > function name and actual implementation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (LANG-1735) Fix Javadoc for FluentBitSet.setInclusive(int, int)
[ https://issues.apache.org/jira/browse/LANG-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated LANG-1735: -- Summary: Fix Javadoc for FluentBitSet.setInclusive(int, int) (was: Improve FluentBitSet documentation for setInclusive) > Fix Javadoc for FluentBitSet.setInclusive(int, int) > --- > > Key: LANG-1735 > URL: https://issues.apache.org/jira/browse/LANG-1735 > Project: Commons Lang > Issue Type: Improvement > Components: lang.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > Documentation states toIndex "exclusive" should be "inclusive" based on > function name and actual implementation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory edited comment on LANG-1736 at 5/22/24 3:30 PM: So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.testHashMapEntry()}} and {{testConcurrentHashMapEntry()}}. was (Author: garydgregory): So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.test...MapEntry()}}. > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory edited comment on LANG-1736 at 5/22/24 3:29 PM: So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.test...MapEntry()}}. was (Author: garydgregory): So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.testMapEntry()}}. > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory edited comment on LANG-1736 at 5/22/24 3:21 PM: So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.testMapEntry()}}. was (Author: garydgregory): So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.testMapEntry()}} > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory edited comment on LANG-1736 at 5/22/24 3:21 PM: So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. Please see {{org.apache.commons.lang3.tuple.PairTest.testMapEntry()}} was (Author: garydgregory): So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory commented on LANG-1736: --- So... exactly like {{HashMap}} and {{ConcurrentHashMap}} which we comment in the code as following. > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1736) Pair.hashCode() leads to a lot of hash collisions
[ https://issues.apache.org/jira/browse/LANG-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848663#comment-17848663 ] Gary D. Gregory edited comment on LANG-1736 at 5/22/24 3:19 PM: So... exactly like {{HashMap}} (and {{ConcurrentHashMap}}) which we comment in the code as following. was (Author: garydgregory): So... exactly like {{HashMap}} and {{ConcurrentHashMap}} which we comment in the code as following. > Pair.hashCode() leads to a lot of hash collisions > - > > Key: LANG-1736 > URL: https://issues.apache.org/jira/browse/LANG-1736 > Project: Commons Lang > Issue Type: Improvement > Components: lang.tuple.* >Affects Versions: 3.14.0 >Reporter: TobiasKiecker >Priority: Trivial > Attachments: Main.java > > > The implementation of Pair.hashCode() has hash collisions for most java > primitives with close values. This could affect the performance of maps with > pairs as keys. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (DBCP-598) Configuration Description on Homepage is inaccurate
[ https://issues.apache.org/jira/browse/DBCP-598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848586#comment-17848586 ] Gary D. Gregory commented on DBCP-598: -- Hello [~mwriedt] > e.g. Times are Durations not Millis. The milliseconds {{int}}/{{long}} APIs are still there, but are deprecated in favor of their {{Duration}} equivalents. The default for {{testOnBorrow}} is true: https://github.com/apache/commons-dbcp/blob/c3e7ba5f31fb2f77bf83f4a2d4510fb0627f6a57/src/main/java/org/apache/commons/dbcp2/BasicDataSource.java#L249 Feel free to provide a PR if you want to improve the documentation :) > Configuration Description on Homepage is inaccurate > --- > > Key: DBCP-598 > URL: https://issues.apache.org/jira/browse/DBCP-598 > Project: Commons DBCP > Issue Type: Improvement >Affects Versions: 2.9.0, 2.12.0 >Reporter: Max Philipp Wriedt >Priority: Major > > [https://commons.apache.org/proper/commons-dbcp/configuration.html] > The description of some settings doesn't correspond to underlying code. > > e.g. Times are Durations not Millis. > Also "testOnBorrow" seems to be false in default? -- This message was sent by Atlassian Jira (v8.20.10#820010)
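As a configuration sketch of the Duration-based setters the comment refers to (setter names assumed per the DBCP 2.9+ API; the JDBC URL is a placeholder, and this is not verified against a live database):

```java
import java.time.Duration;

import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpDurationConfig {
    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:h2:mem:demo"); // placeholder URL for illustration
        // Duration-based setter; the millisecond-valued variant
        // (setMaxWaitMillis) is deprecated in its favor.
        ds.setMaxWait(Duration.ofSeconds(10));
        // testOnBorrow defaults to true, per the BasicDataSource field
        // initializer linked in the comment above.
        System.out.println("testOnBorrow default: " + ds.getTestOnBorrow());
    }
}
```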
[jira] [Updated] (TEXT-234) Improve StrBuilder documentation for new line text
[ https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated TEXT-234: - Summary: Improve StrBuilder documentation for new line text (was: Improve documentation for setNewLineText and appendNewLine) > Improve StrBuilder documentation for new line text > -- > > Key: TEXT-234 > URL: https://issues.apache.org/jira/browse/TEXT-234 > Project: Commons Text > Issue Type: Improvement >Affects Versions: 1.12.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has > ambiguous documentation. If someone were to extend the class and override > _appendNewLine_, null would not be handled anymore. > The docstring of {_}setNewLineText{_} implies that THIS function does the > handling, while in truth it is done in _appendNewLine_. -- This message was sent by Atlassian Jira (v8.20.10#820010)
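The hazard the report describes can be shown with a minimal sketch (this is not the commons-text source; names and the fallback to the platform line separator are assumptions based on the report):

```java
// Minimal sketch of the documented-vs-actual behavior: the null fallback
// lives in appendNewLine(), not in setNewLineText(), so a subclass that
// overrides appendNewLine() silently loses the null handling.
public class NewLineSketch {
    private final StringBuilder buf = new StringBuilder();
    private String newLine; // setNewLineText(null) leaves this null

    public NewLineSketch setNewLineText(String newLine) {
        this.newLine = newLine; // no null handling here...
        return this;
    }

    public NewLineSketch appendNewLine() {
        // ...the fallback happens here instead:
        buf.append(newLine == null ? System.lineSeparator() : newLine);
        return this;
    }

    @Override
    public String toString() {
        return buf.toString();
    }
}
```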
[jira] [Updated] (TEXT-234) Improve documentation for setNewLineText and appendNewLine
[ https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated TEXT-234: - Summary: Improve documentation for setNewLineText and appendNewLine (was: Inconsistent documentation for setNewLineText and appendNewLine) > Improve documentation for setNewLineText and appendNewLine > -- > > Key: TEXT-234 > URL: https://issues.apache.org/jira/browse/TEXT-234 > Project: Commons Text > Issue Type: Improvement >Affects Versions: 1.12.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has > ambiguous documentation. If someone were to extend the class and override > _appendNewLine_, null would not be handled anymore. > The docstring of {_}setNewLineText{_} implies that THIS function does the > handling, while in truth it is done in _appendNewLine_. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (LANG-1734) Deprecate/replace SerializationUtils.deserialize
[ https://issues.apache.org/jira/browse/LANG-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848238#comment-17848238 ] Gary D. Gregory edited comment on LANG-1734 at 5/21/24 2:40 PM: I agree we can deprecate. I would like to see a follow up ticket to document deprecation in favor of ... what. If it's only documentation, that's OK with me. Serialization proxies a la Effective Java for example. Adding an allow list might not be worth it, due to bugs in user configurations and the sense of false security. Needs discussion. was (Author: garydgregory): I agree we can deprecate. I would like to see a follow up ticket to document deprecation in favor of ... what. If it's only documentation, that's OK with me. Serialization proxies a la Effective Java for example. Adding an allow list might not be worth it, due to bugs and the sense of false security. Needs discussion. > Deprecate/replace SerializationUtils.deserialize > > > Key: LANG-1734 > URL: https://issues.apache.org/jira/browse/LANG-1734 > Project: Commons Lang > Issue Type: Task > Components: lang.* >Reporter: Arnout Engelen >Priority: Minor > > SerializationUtils.deserialize should never be used with untrusted input: it > is generally not possible to prove the absence of classes on the classpath > that can be used as 'gadgets' for deserialization attacks. > When SerializationUtils.deserialize was introduced, Java serialization was > still 'in vogue' and the JDK APIs for deserialization were awkward to use. > Nowadays, other serialization mechanisms (and serialization proxies) are more > popular, and the Java APIs have gotten much better, so there isn't much > reason for "SerializationUtils.deserialize" anymore. > For these reasons, it might be good to deprecate > SerializationUtils.deserialize, or at least more clearly mark it as not > suitable to be used with untrusted input. 
We might also want to replace it > with variants that encourage allow/denylisting or other security filters, or > recommend > [https://docs.oracle.com/en/java/javase/11/core/serialization-filtering1.html] > for that. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (LANG-1734) Deprecate/replace SerializationUtils.deserialize
[ https://issues.apache.org/jira/browse/LANG-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848238#comment-17848238 ] Gary D. Gregory commented on LANG-1734: --- I agree we can deprecated. I would like to see a follow up ticket to document deprecation in favor of ... what. If it's only documentation, that's OK with me. Serialization proxies a la Effective Java for example. Adding an allow list might not be worth it, due to bugs and the sense of false security. Needs discussion. > Deprecate/replace SerializationUtils.deserialize > > > Key: LANG-1734 > URL: https://issues.apache.org/jira/browse/LANG-1734 > Project: Commons Lang > Issue Type: Task > Components: lang.* >Reporter: Arnout Engelen >Priority: Minor > > SerializationUtils.deserialize should never be used with untrusted input: it > is generally not possible to prove the absence of classes on the classpath > that can be used as 'gadgets' for deserialization attacks. > When SerializationUtils.deserialize was introduced, Java serialization was > still 'in vogue' and the JDK APIs for deserialization were awkward to use. > Nowadays, other serialization mechanisms (and serialization proxies) are more > popular, and the Java APIs have gotten much better, so there isn't much > reason for "SerializationUtils.deserialize" anymore. > For these reasons, it might be good to deprecate > SerializationUtils.deserialize, or at least more clearly mark it as not > suitable to be used with untrusted input. We might also want to replace it > with variants that encourage allow/denylisting or other security filters, or > recommend > [https://docs.oracle.com/en/java/javase/11/core/serialization-filtering1.html] > for that. -- This message was sent by Atlassian Jira (v8.20.10#820010)
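The allow-listing direction the description points to is available in the JDK itself via `java.io.ObjectInputFilter` (JDK 9+). The sketch below shows one way to wire a pattern filter onto an `ObjectInputStream`; the helper names are illustrative, not a proposed commons-lang API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class FilteredDeserialize {
    public static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Allow-listed deserialization: the pattern string uses the JDK's
    // serialization-filtering syntax; a trailing "!*" rejects anything
    // not explicitly matched.
    public static Object deserialize(byte[] bytes, String allowPattern)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(allowPattern));
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // java.lang.Integer matches "java.lang.*" and is accepted.
        Object o = deserialize(serialize(Integer.valueOf(42)), "java.lang.*;!*");
        System.out.println(o);
    }
}
```

A class outside the allow list (say `java.util.Date` against the same pattern) is rejected with an `InvalidClassException` during `readObject`, which is the fail-closed behavior an allow list buys over the raw `SerializationUtils.deserialize`.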
[jira] [Comment Edited] (LANG-1734) Deprecate/replace SerializationUtils.deserialize
[ https://issues.apache.org/jira/browse/LANG-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848238#comment-17848238 ] Gary D. Gregory edited comment on LANG-1734 at 5/21/24 2:39 PM: I agree we can deprecate. I would like to see a follow up ticket to document deprecation in favor of ... what. If it's only documentation, that's OK with me. Serialization proxies a la Effective Java for example. Adding an allow list might not be worth it, due to bugs and the sense of false security. Needs discussion. was (Author: garydgregory): I agree we can deprecated. I would like to see a follow up ticket to document deprecation in favor of ... what. If it's only documentation, that's OK with me. Serialization proxies a la Effective Java for example. Adding an allow list might not be worth it, due to bugs and the sense of false security. Needs discussion. > Deprecate/replace SerializationUtils.deserialize > > > Key: LANG-1734 > URL: https://issues.apache.org/jira/browse/LANG-1734 > Project: Commons Lang > Issue Type: Task > Components: lang.* >Reporter: Arnout Engelen >Priority: Minor > > SerializationUtils.deserialize should never be used with untrusted input: it > is generally not possible to prove the absence of classes on the classpath > that can be used as 'gadgets' for deserialization attacks. > When SerializationUtils.deserialize was introduced, Java serialization was > still 'in vogue' and the JDK APIs for deserialization were awkward to use. > Nowadays, other serialization mechanisms (and serialization proxies) are more > popular, and the Java APIs have gotten much better, so there isn't much > reason for "SerializationUtils.deserialize" anymore. > For these reasons, it might be good to deprecate > SerializationUtils.deserialize, or at least more clearly mark it as not > suitable to be used with untrusted input. 
We might also want to replace it > with variants that encourage allow/denylisting or other security filters, or > recommend > [https://docs.oracle.com/en/java/javase/11/core/serialization-filtering1.html] > for that. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IMAGING-376) HTTP 404 to Javadoc
[ https://issues.apache.org/jira/browse/IMAGING-376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated IMAGING-376: Summary: HTTP 404 to Javadoc (was: Link to API documentation is dead) > HTTP 404 to Javadoc > --- > > Key: IMAGING-376 > URL: https://issues.apache.org/jira/browse/IMAGING-376 > Project: Commons Imaging > Issue Type: Bug > Components: Documentation >Reporter: Nils Christian Ehmke >Priority: Major > > Hi, > The website at [https://commons.apache.org/proper/commons-imaging/] points to > the latest API documentation at > https://commons.apache.org/proper/commons-imaging/apidocs/index.html. This > seems to be a dead link, as it leads to a 404 not found. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IMAGING-376) Link to API documentation is dead
[ https://issues.apache.org/jira/browse/IMAGING-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847679#comment-17847679 ] Gary D. Gregory commented on IMAGING-376: - Internally the site is published here: [https://svn.apache.org/repos/infra/websites/production/commons/content/proper/commons-imaging/index.html] with the Javadoc here: [https://svn.apache.org/repos/infra/websites/production/commons/content/proper/commons-imaging/apidocs/index.html] but somehow this has not surfaced publicly. > Link to API documentation is dead > - > > Key: IMAGING-376 > URL: https://issues.apache.org/jira/browse/IMAGING-376 > Project: Commons Imaging > Issue Type: Bug > Components: Documentation >Reporter: Nils Christian Ehmke >Priority: Major > > Hi, > The website at [https://commons.apache.org/proper/commons-imaging/] points to > the latest API documentation at > https://commons.apache.org/proper/commons-imaging/apidocs/index.html. This > seems to be a dead link, as it leads to a 404 not found. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (TEXT-234) Inconsistent documentation for setNewLineText and appendNewLine
[ https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847469#comment-17847469 ] Gary D. Gregory commented on TEXT-234: -- [~tobiaskiecker] Thank you for your report. Feel free to create a PR on GitHub and document the existing behavior better. > Inconsistent documentation for setNewLineText and appendNewLine > > > Key: TEXT-234 > URL: https://issues.apache.org/jira/browse/TEXT-234 > Project: Commons Text > Issue Type: Improvement >Affects Versions: 1.12.0 >Reporter: TobiasKiecker >Priority: Minor > Labels: documentation > > The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has > ambiguous documentation. If someone were to extend the class and override > _appendNewLine_, null would not be handled anymore. > The docstring of {_}setNewLineText{_} implies that THIS function does the > handling, while in truth it is done in _appendNewLine_. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IO-819) Commons IO v2.15.0 is breaking android builds
[ https://issues.apache.org/jira/browse/IO-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846962#comment-17846962 ] Gary D. Gregory commented on IO-819: This looks like a problem in the tooling associated with Android builds, so you should look for a change there, not here IMO. > Commons IO v2.15.0 is breaking android builds > - > > Key: IO-819 > URL: https://issues.apache.org/jira/browse/IO-819 > Project: Commons IO > Issue Type: Bug >Affects Versions: 2.15.0 > Environment: java --version > openjdk 11.0.21 2023-10-17 LTS > OpenJDK Runtime Environment Zulu11.68+17-CA (build 11.0.21+9-LTS) > OpenJDK 64-Bit Server VM Zulu11.68+17-CA (build 11.0.21+9-LTS, mixed mode) > > Mac OS 14.1 on Apple Silicon >Reporter: Pranshu >Priority: Major > Attachments: Screenshot 2023-10-27 at 00.07.46.png, stacktrace.txt > > > Hey, we are using the Commons +commons-io:commons-io+ Java package in our React > Native app on the Android side. > Apparently the recently released version v2.15.0 is breaking Android builds, > whereas versions <= v2.14.0 work well. > Steps to Repro > 1. Create a RN app > npx react-native init CommonsIORepro > 2. Add commons-io dep android/{*}app{*}/build.gradle > dependencies{ > implementation "commons-io:commons-io:2.15.0" > } > > 3. yarn run start > 4. In a new terminal - yarn run android -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (COMPRESS-675) Regression in pack200's Archive class -- underlying InputStream is now closed
[ https://issues.apache.org/jira/browse/COMPRESS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved COMPRESS-675. -- Fix Version/s: 1.26.2 Resolution: Fixed Fixed in git master and snapshot builds. > Regression in pack200's Archive class -- underlying InputStream is now closed > -- > > Key: COMPRESS-675 > URL: https://issues.apache.org/jira/browse/COMPRESS-675 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Tim Allison >Assignee: Gary D. Gregory >Priority: Major > Fix For: 1.26.2 > > > On TIKA-4221, on our recent regression tests, we noticed a change in the > behavior of Pack200's Archive class. In 1.26.0, the unwrapping of the > FilterInputStreams > (https://github.com/apache/commons-compress/blob/68cd2e7fb488b4ad8a9fdc81cae97ae6e8248ea5/src/main/java/org/apache/commons/compress/harmony/unpack200/Pack200UnpackerAdapter.java#L66) > effectively disables CloseShieldInputStreams, which means that the > underlying stream is closed after the parse. > This causes problems when a Pack200 file is inside of an ArchiveInputStream. > Not sure of the best solution. There's a triggering file on the Tika issue. > We can implement a crude workaround until this is fixed in commons-compress. > Thank you! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (COMPRESS-675) Regression in pack200's Archive class -- underlying InputStream is now closed
[ https://issues.apache.org/jira/browse/COMPRESS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846926#comment-17846926 ] Gary D. Gregory edited comment on COMPRESS-675 at 5/16/24 11:53 AM: Fixed in git master and snapshot builds. Would you mind validating this addresses your issue? TY for your patience! was (Author: garydgregory): Fixed in git master and snapshot builds. > Regression in pack200's Archive class -- underlying InputStream is now closed > -- > > Key: COMPRESS-675 > URL: https://issues.apache.org/jira/browse/COMPRESS-675 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Tim Allison >Assignee: Gary D. Gregory >Priority: Major > Fix For: 1.26.2 > > > On TIKA-4221, on our recent regression tests, we noticed a change in the > behavior of Pack200's Archive class. In 1.26.0, the unwrapping of the > FilterInputStreams > (https://github.com/apache/commons-compress/blob/68cd2e7fb488b4ad8a9fdc81cae97ae6e8248ea5/src/main/java/org/apache/commons/compress/harmony/unpack200/Pack200UnpackerAdapter.java#L66) > effectively disables CloseShieldInputStreams, which means that the > underlying stream is closed after the parse. > This causes problems when a Pack200 file is inside of an ArchiveInputStream. > Not sure of the best solution. There's a triggering file on the Tika issue. > We can implement a crude workaround until this is fixed in commons-compress. > Thank you! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (COMPRESS-679) Regression on parallel processing of 7zip files
[ https://issues.apache.org/jira/browse/COMPRESS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846902#comment-17846902 ] Gary D. Gregory commented on COMPRESS-679: -- I can cut a release candidate this week. > Regression on parallel processing of 7zip files > --- > > Key: COMPRESS-679 > URL: https://issues.apache.org/jira/browse/COMPRESS-679 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Mikaël MECHOULAM >Assignee: Gary D. Gregory >Priority: Critical > Fix For: 1.26.2 > > Attachments: file.7z > > > I've run into a bug which occurs when attempting to read a 7zip file in > several threads simultaneously. The following code illustrates the problem. > The file.7z is in attachment > > {code:java} > import java.io.InputStream; > import java.nio.file.Paths; > import java.util.stream.IntStream; > import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry; > import org.apache.commons.compress.archivers.sevenz.SevenZFile; > public class TestZip { > public static void main(final String[] args) { > final Runnable runnable = () -> { > try { > try (final SevenZFile sevenZFile = > SevenZFile.builder().setPath(Paths.get("file.7z")).get()) { > SevenZArchiveEntry sevenZArchiveEntry; > while ((sevenZArchiveEntry = sevenZFile.getNextEntry()) > != null) { > if ("file4.txt".equals(sevenZArchiveEntry.getName())) > { // The entry must not be the first of the ZIP archive to reproduce > final InputStream inputStream = > sevenZFile.getInputStream(sevenZArchiveEntry); > // treatments... 
> break; > } > } > } > } catch (final Exception e) { // java.io.IOException: Checksum > verification failed > e.printStackTrace(); > } > }; > IntStream.range(0, 30).forEach(i -> new Thread(runnable).start()); > } > } > {code} > Below is the output I receive on version 1.26: > > {code:java} > java.io.IOException: Checksum verification failed > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.verify(ChecksumVerifyingInputStream.java:98) > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.read(ChecksumVerifyingInputStream.java:92) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2422) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2380) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getCurrentStream(SevenZFile.java:912) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getInputStream(SevenZFile.java:988) > at > com.infotel.arcsys.nativ.archiving.zip.TestZip.lambda$main$0(TestZip.java:21) > at java.base/java.lang.Thread.run(Thread.java:833) > > {code} > The issue seems to arise from the transition from version 1.25 to 1.26 of > Apache Commons Compress. In the {{SevenZFile}} class of the library, the > private method {{getCurrentStream}} has migrated from > {{IOUtils.skip(InputStream, long)}} to a method with a same signature but in > Commons-IO package, which leads to a change in behavior. In version 1.26, it > uses a shared and unsynchronized buffer, theoretically intended only for > writing ({{{}SCRATCH_BYTE_BUFFER_WO{}}}). This causes checksum verification > issues within the library. The problem seems to be resolved by specifying the > {{Supplier}} of the buffer to use. > {code:java} > try (InputStream stream = deferredBlockStreams.remove(0)) { > org.apache.commons.io.IOUtils.skip(stream, Long.MAX_VALUE, () -> new > byte[org.apache.commons.io.IOUtils.DEFAULT_BUFFER_SIZE]); > } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
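The buffer-`Supplier` fix quoted above can be sketched in plain Java. The `skip` helper below is a hypothetical equivalent of Commons IO's `IOUtils.skip(InputStream, long, Supplier<byte[]>)`, not the library code itself: each call drains into a buffer obtained from the supplier, so callers can pass a fresh per-call buffer instead of sharing one static scratch array across threads.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.function.Supplier;

public class SkipSketch {
    // Hypothetical sketch of IOUtils.skip(InputStream, long, Supplier<byte[]>):
    // the supplier yields a fresh buffer per call, so no unsynchronized
    // scratch array is shared between threads.
    static long skip(InputStream in, long toSkip, Supplier<byte[]> bufferSupplier) {
        byte[] buffer = bufferSupplier.get();
        long remaining = toSkip;
        try {
            while (remaining > 0) {
                int n = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
                if (n < 0) {
                    break; // end of stream reached
                }
                remaining -= n;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return toSkip - remaining; // bytes actually skipped
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[100]);
        System.out.println(skip(in, Long.MAX_VALUE, () -> new byte[8192])); // prints 100
    }
}
```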
[jira] [Comment Edited] (COMPRESS-679) Regression on parallel processing of 7zip files
[ https://issues.apache.org/jira/browse/COMPRESS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846750#comment-17846750 ] Gary D. Gregory edited comment on COMPRESS-679 at 5/15/24 8:32 PM: --- [~mikael_mechoulam] Thank you for your report. Fixed in git master and snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/ Please test and let us know. was (Author: garydgregory): [~mikael_mechoulam] Thank you for your report. Fixed in git master and snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/ Please give test and let us know. > Regression on parallel processing of 7zip files > --- > > Key: COMPRESS-679 > URL: https://issues.apache.org/jira/browse/COMPRESS-679 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Mikaël MECHOULAM >Assignee: Gary D. Gregory >Priority: Critical > Fix For: 1.26.2 > > Attachments: file.7z > > > I've run into a bug which occurs when attempting to read a 7zip file in > several threads simultaneously. The following code illustrates the problem. 
> The file.7z is in attachment > > {code:java} > import java.io.InputStream; > import java.nio.file.Paths; > import java.util.stream.IntStream; > import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry; > import org.apache.commons.compress.archivers.sevenz.SevenZFile; > public class TestZip { > public static void main(final String[] args) { > final Runnable runnable = () -> { > try { > try (final SevenZFile sevenZFile = > SevenZFile.builder().setPath(Paths.get("file.7z")).get()) { > SevenZArchiveEntry sevenZArchiveEntry; > while ((sevenZArchiveEntry = sevenZFile.getNextEntry()) > != null) { > if ("file4.txt".equals(sevenZArchiveEntry.getName())) > { // The entry must not be the first of the ZIP archive to reproduce > final InputStream inputStream = > sevenZFile.getInputStream(sevenZArchiveEntry); > // treatments... > break; > } > } > } > } catch (final Exception e) { // java.io.IOException: Checksum > verification failed > e.printStackTrace(); > } > }; > IntStream.range(0, 30).forEach(i -> new Thread(runnable).start()); > } > } > {code} > Below is the output I receive on version 1.26: > > {code:java} > java.io.IOException: Checksum verification failed > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.verify(ChecksumVerifyingInputStream.java:98) > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.read(ChecksumVerifyingInputStream.java:92) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2422) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2380) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getCurrentStream(SevenZFile.java:912) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getInputStream(SevenZFile.java:988) > at > com.infotel.arcsys.nativ.archiving.zip.TestZip.lambda$main$0(TestZip.java:21) > at java.base/java.lang.Thread.run(Thread.java:833) > > {code} > The issue seems to arise from the transition from version 1.25 to 1.26 of > Apache Commons Compress. 
In the {{SevenZFile}} class of the library, the > private method {{getCurrentStream}} has migrated from > {{IOUtils.skip(InputStream, long)}} to a method with a same signature but in > Commons-IO package, which leads to a change in behavior. In version 1.26, it > uses a shared and unsynchronized buffer, theoretically intended only for > writing ({{{}SCRATCH_BYTE_BUFFER_WO{}}}). This causes checksum verification > issues within the library. The problem seems to be resolved by specifying the > {{Supplier}} of the buffer to use. > {code:java} > try (InputStream stream = deferredBlockStreams.remove(0)) { > org.apache.commons.io.IOUtils.skip(stream, Long.MAX_VALUE, () -> new > byte[org.apache.commons.io.IOUtils.DEFAULT_BUFFER_SIZE]); > } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (COMPRESS-679) Regression on parallel processing of 7zip files
[ https://issues.apache.org/jira/browse/COMPRESS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved COMPRESS-679. -- Fix Version/s: 1.26.2 Resolution: Fixed [~mikael_mechoulam] Thank you for your report. Fixed in git master and snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/ Please test and let us know. > Regression on parallel processing of 7zip files > --- > > Key: COMPRESS-679 > URL: https://issues.apache.org/jira/browse/COMPRESS-679 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Mikaël MECHOULAM >Assignee: Gary D. Gregory >Priority: Critical > Fix For: 1.26.2 > > Attachments: file.7z > > > I've run into a bug which occurs when attempting to read a 7zip file in > several threads simultaneously. The following code illustrates the problem. > The file.7z is in attachment > > {code:java} > import java.io.InputStream; > import java.nio.file.Paths; > import java.util.stream.IntStream; > import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry; > import org.apache.commons.compress.archivers.sevenz.SevenZFile; > public class TestZip { > public static void main(final String[] args) { > final Runnable runnable = () -> { > try { > try (final SevenZFile sevenZFile = > SevenZFile.builder().setPath(Paths.get("file.7z")).get()) { > SevenZArchiveEntry sevenZArchiveEntry; > while ((sevenZArchiveEntry = sevenZFile.getNextEntry()) > != null) { > if ("file4.txt".equals(sevenZArchiveEntry.getName())) > { // The entry must not be the first of the ZIP archive to reproduce > final InputStream inputStream = > sevenZFile.getInputStream(sevenZArchiveEntry); > // treatments... 
> break; > } > } > } > } catch (final Exception e) { // java.io.IOException: Checksum > verification failed > e.printStackTrace(); > } > }; > IntStream.range(0, 30).forEach(i -> new Thread(runnable).start()); > } > } > {code} > Below is the output I receive on version 1.26: > > {code:java} > java.io.IOException: Checksum verification failed > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.verify(ChecksumVerifyingInputStream.java:98) > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.read(ChecksumVerifyingInputStream.java:92) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2422) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2380) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getCurrentStream(SevenZFile.java:912) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getInputStream(SevenZFile.java:988) > at > com.infotel.arcsys.nativ.archiving.zip.TestZip.lambda$main$0(TestZip.java:21) > at java.base/java.lang.Thread.run(Thread.java:833) > > {code} > The issue seems to arise from the transition from version 1.25 to 1.26 of > Apache Commons Compress. In the {{SevenZFile}} class of the library, the > private method {{getCurrentStream}} has migrated from > {{IOUtils.skip(InputStream, long)}} to a method with a same signature but in > Commons-IO package, which leads to a change in behavior. In version 1.26, it > uses a shared and unsynchronized buffer, theoretically intended only for > writing ({{{}SCRATCH_BYTE_BUFFER_WO{}}}). This causes checksum verification > issues within the library. The problem seems to be resolved by specifying the > {{Supplier}} of the buffer to use. > {code:java} > try (InputStream stream = deferredBlockStreams.remove(0)) { > org.apache.commons.io.IOUtils.skip(stream, Long.MAX_VALUE, () -> new > byte[org.apache.commons.io.IOUtils.DEFAULT_BUFFER_SIZE]); > } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (COMPRESS-679) Regression on parallel processing of 7zip files
[ https://issues.apache.org/jira/browse/COMPRESS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated COMPRESS-679: - Assignee: Gary D. Gregory > Regression on parallel processing of 7zip files > --- > > Key: COMPRESS-679 > URL: https://issues.apache.org/jira/browse/COMPRESS-679 > Project: Commons Compress > Issue Type: Bug >Affects Versions: 1.26.0, 1.26.1 >Reporter: Mikaël MECHOULAM >Assignee: Gary D. Gregory >Priority: Critical > Attachments: file.7z > > > I've run into a bug which occurs when attempting to read a 7zip file in > several threads simultaneously. The following code illustrates the problem. > The file.7z is in attachment > > {code:java} > import java.io.InputStream; > import java.nio.file.Paths; > import java.util.stream.IntStream; > import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry; > import org.apache.commons.compress.archivers.sevenz.SevenZFile; > public class TestZip { > public static void main(final String[] args) { > final Runnable runnable = () -> { > try { > try (final SevenZFile sevenZFile = > SevenZFile.builder().setPath(Paths.get("file.7z")).get()) { > SevenZArchiveEntry sevenZArchiveEntry; > while ((sevenZArchiveEntry = sevenZFile.getNextEntry()) > != null) { > if ("file4.txt".equals(sevenZArchiveEntry.getName())) > { // The entry must not be the first of the ZIP archive to reproduce > final InputStream inputStream = > sevenZFile.getInputStream(sevenZArchiveEntry); > // treatments... 
> break; > } > } > } > } catch (final Exception e) { // java.io.IOException: Checksum > verification failed > e.printStackTrace(); > } > }; > IntStream.range(0, 30).forEach(i -> new Thread(runnable).start()); > } > } > {code} > Below is the output I receive on version 1.26: > > {code:java} > java.io.IOException: Checksum verification failed > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.verify(ChecksumVerifyingInputStream.java:98) > at > org.apache.commons.compress.utils.ChecksumVerifyingInputStream.read(ChecksumVerifyingInputStream.java:92) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2422) > at org.apache.commons.io.IOUtils.skip(IOUtils.java:2380) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getCurrentStream(SevenZFile.java:912) > at > org.apache.commons.compress.archivers.sevenz.SevenZFile.getInputStream(SevenZFile.java:988) > at > com.infotel.arcsys.nativ.archiving.zip.TestZip.lambda$main$0(TestZip.java:21) > at java.base/java.lang.Thread.run(Thread.java:833) > > {code} > The issue seems to arise from the transition from version 1.25 to 1.26 of > Apache Commons Compress. In the {{SevenZFile}} class of the library, the > private method {{getCurrentStream}} has migrated from > {{IOUtils.skip(InputStream, long)}} to a method with a same signature but in > Commons-IO package, which leads to a change in behavior. In version 1.26, it > uses a shared and unsynchronized buffer, theoretically intended only for > writing ({{{}SCRATCH_BYTE_BUFFER_WO{}}}). This causes checksum verification > issues within the library. The problem seems to be resolved by specifying the > {{Supplier}} of the buffer to use. > {code:java} > try (InputStream stream = deferredBlockStreams.remove(0)) { > org.apache.commons.io.IOUtils.skip(stream, Long.MAX_VALUE, () -> new > byte[org.apache.commons.io.IOUtils.DEFAULT_BUFFER_SIZE]); > } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (VFS-852) webdav4s is not working with multiple TLS Record Layer segments
[ https://issues.apache.org/jira/browse/VFS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846311#comment-17846311 ] Gary D. Gregory commented on VFS-852: - Releasing is definitively on my to-do list. Maybe within a week or two. > webdav4s is not working with multiple TLS Record Layer segments > --- > > Key: VFS-852 > URL: https://issues.apache.org/jira/browse/VFS-852 > Project: Commons VFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Sodrul Bhuiyan >Priority: Major > > We're trying to use webdav over SSL using webdav4s provider. We're running > into connection closed error because the connection had been released from > the finally block as part of > org.apache.commons.vfs2.provider.webdav4.Webdav4FileObject#executeRequest > method. The issue becomes visible from > org.apache.commons.vfs2.provider.webdav4.Webdav4FileObject#getProperties(org.apache.commons.vfs2.provider.GenericURLFileName, > int, org.apache.jackrabbit.webdav.property.DavPropertyNameSet, boolean) > method which we're using. I would imagine it'd also be an issue from > org.apache.commons.vfs2.provider.webdav4.Webdav4FileObject#doListChildrenResolved > method. As both of these methods try to get the body of the response after > the connection had been released from executeRequest method. > The design assumption was that the entire data (http response) was consumed > before closing. However while debugging the issue we have found that TLS > transmission containing the application data had been broken up into multiple > TLS Record Layer Segments (Fragments as designed). While filling up the > buffer from SSL Socket it stopped after the 1st TLS record layer, which only > contained the http headers as it hit the end of that stream (fragment). > Non-ssl transaction doesn't have fragmentation so buffer fills up entire > response at once thus doesn't cause the connection closed error. 
> I'd imagine the fix would be to implement an overloading executeRequest > method for keeping the connection open and close it after retrieving the body > of the response from getProperties and doListChildrenResolved method. -- This message was sent by Atlassian Jira (v8.20.10#820010)
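The reporter's suggested direction — fully consuming the response body before the connection is released — can be sketched generically. The `readFully` helper below is hypothetical and not part of the `Webdav4FileObject` API: looping until `read()` returns -1 pulls in every TLS record fragment, not only the first one, before the connection is closed.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class DrainSketch {
    // Read the entire body before releasing the connection. A single read()
    // may stop at a TLS record boundary; looping to end-of-stream does not.
    static byte[] readFully(InputStream body) {
        try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = body.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = "HTTP body split across TLS records".getBytes();
        System.out.println(readFully(new ByteArrayInputStream(data)).length); // prints 34
    }
}
```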
[jira] [Resolved] (CLI-332) Add optional HelpFormatter Function to document Deprecated options
[ https://issues.apache.org/jira/browse/CLI-332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CLI-332. - Fix Version/s: 1.7.1 Resolution: Fixed > Add optional HelpFormatter Function to document Deprecated options > -- > > Key: CLI-332 > URL: https://issues.apache.org/jira/browse/CLI-332 > Project: Commons CLI > Issue Type: Improvement > Components: Help formatter >Affects Versions: 1.7.0 >Reporter: Claude Warren >Assignee: Claude Warren >Priority: Minor > Fix For: 1.7.1 > > > Currently the HelpFormatter just prints "[Deprecated]" at the front of the > description for items that are deprecated. It would be nice to be able to > provide more information in the help output. > > My proposal is to add a Function that will be applied to > deprecated Options and to provide the text output. The default > implementation will return "[Deprecated] "+option.getDescription(). > -- This message was sent by Atlassian Jira (v8.20.10#820010)
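The CLI-332 proposal can be sketched with plain `java.util.function.Function`. The `Option` record below is a hypothetical stand-in for `org.apache.commons.cli.Option`, just enough to show the idea: a default function prefixes deprecated options, and callers may supply a richer one.

```java
import java.util.function.Function;

public class DeprecatedHelpSketch {
    // Hypothetical stand-in for org.apache.commons.cli.Option, for the sketch only.
    record Option(String description, boolean deprecated) {
        String getDescription() { return description; }
    }

    // Proposed default: prefix deprecated options with a marker.
    static final Function<Option, String> DEFAULT_DEPRECATED_FORMAT =
            option -> "[Deprecated] " + option.getDescription();

    // The formatter applies the function only to deprecated options.
    static String describe(Option option, Function<Option, String> deprecatedFormat) {
        return option.deprecated() ? deprecatedFormat.apply(option) : option.getDescription();
    }

    public static void main(String[] args) {
        Option old = new Option("use --new-flag instead", true);
        System.out.println(describe(old, DEFAULT_DEPRECATED_FORMAT));
        // A custom function can add more detail, e.g. a since-version note:
        System.out.println(describe(old, o -> "[Deprecated since 1.7] " + o.getDescription()));
    }
}
```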
[jira] [Updated] (CLI-332) Add optional HelpFormatter Function to document Deprecated options
[ https://issues.apache.org/jira/browse/CLI-332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated CLI-332: Summary: Add optional HelpFormatter Function to document Deprecated options (was: Deprecated option details not specified in HelpFormatter) > Add optional HelpFormatter Function to document Deprecated options > -- > > Key: CLI-332 > URL: https://issues.apache.org/jira/browse/CLI-332 > Project: Commons CLI > Issue Type: Improvement > Components: Help formatter >Affects Versions: 1.7.0 >Reporter: Claude Warren >Assignee: Claude Warren >Priority: Minor > > Currently the HelpFormatter just prints "[Deprecated]" at the front of the > description for items that are deprecated. It would be nice to be able to > provide more information in the help output. > > My proposal is to add a Function that will be applied to > deprecated Options and to provide the text output. The default > implementation will return "[Deprecated] "+option.getDescription(). > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (CLI-331) Deprecated option usage is not detected if non string keys are used for resolution.
[ https://issues.apache.org/jira/browse/CLI-331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CLI-331. - Fix Version/s: 1.7.1 Resolution: Fixed > Deprecated option usage is not detected if non string keys are used for > resolution. > --- > > Key: CLI-331 > URL: https://issues.apache.org/jira/browse/CLI-331 > Project: Commons CLI > Issue Type: Bug > Components: CLI-1.x >Affects Versions: 1.7.0 >Reporter: Claude Warren >Assignee: Claude Warren >Priority: Major > Fix For: 1.7.1 > > > CommandLine.handleDeprecated() is not called if the option key is not a > string. > > For example getOptionValue() has both String and Option parameter types. > * getOptionValue(String) calls handleDeprecated() > * getOptionValue(Option) does not call handleDeprecated(). > In most cases the String parameter resolves the Option and calls the Option > parameter method of the same name. > The fix is to move the handleDeprecated() into the Option processing path. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (CONFIGURATION-845) DatabaseConfiguration loses list property's order
[ https://issues.apache.org/jira/browse/CONFIGURATION-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845455#comment-17845455 ] Gary D. Gregory commented on CONFIGURATION-845: --- The new column has to be optional, otherwise existing installations would break. > DatabaseConfiguration loses list property's order > - > > Key: CONFIGURATION-845 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-845 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: 2.10.1 >Reporter: pieter martin >Priority: Major > > {code:java} > private static final String SQL_GET_PROPERTY = "SELECT * FROM %s WHERE %s =?"; > {code} > This is the query DatabaseConfiguration executes to get a property's values. > It assumes the order will be insertion order but this is not true on > postgresql. > So configuration.getList(key) loses the order of the list. -- This message was sent by Atlassian Jira (v8.20.10#820010)
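One natural direction for the CONFIGURATION-845 fix — sketch only, since the actual schema change is still under discussion on the issue — is an optional ordering column plus an `ORDER BY`, so list values come back in insertion order on every database; the `seq` column name below is hypothetical.

```java
public class OrderedQuerySketch {
    // Current query: without ORDER BY, row order is unspecified by SQL,
    // and PostgreSQL in particular may not return insertion order.
    static final String SQL_GET_PROPERTY = "SELECT * FROM %s WHERE %s = ?";

    // Hypothetical fixed query: an optional column (here "seq") records
    // insertion order, and the SELECT orders by it.
    static final String SQL_GET_PROPERTY_ORDERED = "SELECT * FROM %s WHERE %s = ? ORDER BY %s";

    public static void main(String[] args) {
        System.out.println(String.format(SQL_GET_PROPERTY_ORDERED, "configurations", "config_key", "seq"));
        // prints: SELECT * FROM configurations WHERE config_key = ? ORDER BY seq
    }
}
```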
[jira] [Comment Edited] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844978#comment-17844978 ] Gary D. Gregory edited comment on FILEUPLOAD-355 at 5/9/24 12:58 PM: - This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} I've updated the site in git master to use: {code:java} IOUtils.toString(stream, Charset.defaultCharset()) {code} was (Author: garydgregory): This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} I've updated the site in git master. > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > Fix For: 2.0.0 > > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-serverl6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]:]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... 
> } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. Are there any plans to add it to > future releases? > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844978#comment-17844978 ] Gary D. Gregory edited comment on FILEUPLOAD-355 at 5/9/24 12:57 PM: - This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} I've updated the site in git master. was (Author: garydgregory): This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} I'll update the docs. > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > Fix For: 2.0.0 > > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-serverl6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]:]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... > } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. 
Are there any plans to add it to > future releases? > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844978#comment-17844978 ] Gary D. Gregory edited comment on FILEUPLOAD-355 at 5/9/24 12:55 PM: - This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} I'll update the docs. was (Author: garydgregory): This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > Fix For: 2.0.0 > > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-serverl6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]:]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... > } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. Are there any plans to add it to > future releases? 
> -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated FILEUPLOAD-355: --- Summary: Update code example: Use IOUtils instead of Streams utils class (was: Missing Streams utils class in latest release) > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-serverl6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]:]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... > } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. Are there any plans to add it to > future releases? > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved FILEUPLOAD-355. Fix Version/s: 2.0.0 Resolution: Fixed > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > Fix For: 2.0.0 > > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-serverl6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]:]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... > } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. Are there any plans to add it to > future releases? > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FILEUPLOAD-355) Update code example: Use IOUtils instead of Streams utils class
[ https://issues.apache.org/jira/browse/FILEUPLOAD-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844978#comment-17844978 ] Gary D. Gregory commented on FILEUPLOAD-355: This functionality does not belong in FileUpload, please use Apache Commons IO: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, String) {code} or: {code:java} org.apache.commons.io.IOUtils.toString(InputStream, Charset) {code} > Update code example: Use IOUtils instead of Streams utils class > --- > > Key: FILEUPLOAD-355 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-355 > Project: Commons FileUpload > Issue Type: Wish >Affects Versions: 2.0.0-M2 >Reporter: Ana >Priority: Major > > I'm trying to use the Streaming API in *commons-fileupload2-jakarta-servlet6 > v 2.0.0-M2* like [in this > example|https://commons.apache.org/proper/commons-fileupload/streaming.html]: > {code:java} > // Create a new file upload handler > JakartaServletFileUpload upload = new JakartaServletFileUpload(); > // Parse the request > upload.getItemIterator(request).forEachRemaining(item -> { > String name = item.getFieldName(); > InputStream stream = item.getInputStream(); > if (item.isFormField()) { > System.out.println("Form field " + name + " with value " > + Streams.asString(stream) + " detected."); > } else { > System.out.println("File field " + name + " with file name " > + item.getName() + " detected."); > // Process the input stream > ... > } > }); {code} > But the org.apache.commons.fileupload.util.Streams class cannot be found. It > doesn't seem to have been ported to v2. Are there any plans to add it to > future releases? > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (DIGESTER-201) Migrate from cglib to...
Gary D. Gregory created DIGESTER-201: Summary: Migrate from cglib to... Key: DIGESTER-201 URL: https://issues.apache.org/jira/browse/DIGESTER-201 Project: Commons Digester Issue Type: Task Reporter: Gary D. Gregory Apache Commons BCEL, ByteBuddy, ASM? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (DIGESTER-188) Update cglib from 2.2.2 to 3.3.0
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844661#comment-17844661 ] Gary D. Gregory edited comment on DIGESTER-188 at 5/8/24 1:53 PM: -- Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. There is no cglib 3.3.3 on Maven Central. BTW, our git master is on 3.3.0. Also 3.3.0 is the latest on GH: https://github.com/cglib/cglib/releases was (Author: garydgregory): Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. There is no cglib 3.3.3 on Maven Central BTW, our git master is on 3.3.0. Also not on GH: https://github.com/cglib/cglib/releases > Update cglib from 2.2.2 to 3.3.0 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (DIGESTER-188) Update cglib from 2.2.2 to 3.3.0
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844661#comment-17844661 ] Gary D. Gregory edited comment on DIGESTER-188 at 5/8/24 1:52 PM: -- Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. There is no cglib 3.3.3 on Maven Central BTW, our git master is on 3.3.0. Also not on GH: https://github.com/cglib/cglib/releases was (Author: garydgregory): Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. There is no cglib 3.3.3 on Maven Central BTW, our git master is on 3.3.0. > Update cglib from 2.2.2 to 3.3.0 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (DIGESTER-188) Update cglib from 2.2.2 to 3.3.0
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated DIGESTER-188: - Summary: Update cglib from 2.2.2 to 3.3.0 (was: Update cglib from 2.2.2 to 3.2.5) > Update cglib from 2.2.2 to 3.3.0 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (DIGESTER-188) Update cglib from 2.2.2 to 3.3.0
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved DIGESTER-188. -- Resolution: Fixed > Update cglib from 2.2.2 to 3.3.0 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (DIGESTER-188) Update cglib from 2.2.2 to 3.2.5
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844661#comment-17844661 ] Gary D. Gregory edited comment on DIGESTER-188 at 5/8/24 1:38 PM: -- Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. There is no cglib 3.3.3 on Maven Central BTW, our git master is on 3.3.0. was (Author: garydgregory): Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. > Update cglib from 2.2.2 to 3.2.5 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (DIGESTER-188) Update cglib from 2.2.2 to 3.2.5
[ https://issues.apache.org/jira/browse/DIGESTER-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844661#comment-17844661 ] Gary D. Gregory commented on DIGESTER-188: -- Apache Commons BCEL, ByteBuddy, or ASM? Feel free to provide a PR on GitHub. > Update cglib from 2.2.2 to 3.2.5 > > > Key: DIGESTER-188 > URL: https://issues.apache.org/jira/browse/DIGESTER-188 > Project: Commons Digester > Issue Type: Improvement >Affects Versions: 3.2 >Reporter: Gary D. Gregory >Priority: Major > Fix For: 3.3 > > > Update cglib from 2.2.2 to 3.2.5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
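As context for the BCEL/ByteBuddy/ASM discussion above: for interface-based interception, the JDK's built-in dynamic proxies already cover part of what cglib provides, with no dependency at all. This is only a sketch of that JDK facility; proxying concrete classes (the capability cglib adds) still requires one of the byte-code libraries being discussed:

```java
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

public class ProxyDemo {
    // Builds a Supplier whose get() call is routed through an
    // InvocationHandler -- the JDK's interface-only analogue of a
    // cglib method interceptor.
    @SuppressWarnings("unchecked")
    static Supplier<Object> intercepted() {
        return (Supplier<Object>) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class<?>[] { Supplier.class },
                (proxy, method, args) -> "intercepted:" + method.getName());
    }

    public static void main(String[] args) {
        System.out.println(intercepted().get()); // prints "intercepted:get"
    }
}
```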
[jira] [Commented] (LANG-1733) `null` handling feature in ObjectUtils
[ https://issues.apache.org/jira/browse/LANG-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844264#comment-17844264 ] Gary D. Gregory commented on LANG-1733: --- If this goes anywhere it should be in the function package, see Consumers and Functions classes. > `null` handling feature in ObjectUtils > -- > > Key: LANG-1733 > URL: https://issues.apache.org/jira/browse/LANG-1733 > Project: Commons Lang > Issue Type: New Feature >Reporter: Jongjin Bae >Priority: Major > > I have a new suggestion about null handling. > I usually check a object is null or not before using it to avoid NPE. > It is pretty obvious, but It is quite cumbersome and has some overhead. > So I want to introduce the following null-safety methods in ObjectUtils class > and make people easy to handle null without using if/else statement or > Optional class, etc. > {code:java} > public static R applyIfNotNull(final T object, final Function > function) { > return object != null ? function.apply(object) : null; > } > public static void acceptIfNotNull(final T object, final Consumer > consumer) { > if (object != null) { > consumer.accept(object); > } > } > {code} > What do you think about it? > If it looks good, I will implement this feature. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
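For comparison with the proposal above, both suggested methods can already be expressed with java.util.Optional. The names `applyIfNotNull`/`acceptIfNotNull` below come from the issue's own proposal; this sketch just shows the Optional-based equivalents:

```java
import java.util.Optional;
import java.util.function.Consumer;
import java.util.function.Function;

public class NullSafe {
    // Optional-based equivalent of the proposed applyIfNotNull(T, Function<T, R>)
    static <T, R> R applyIfNotNull(T object, Function<T, R> fn) {
        return Optional.ofNullable(object).map(fn).orElse(null);
    }

    // Optional-based equivalent of the proposed acceptIfNotNull(T, Consumer<T>)
    static <T> void acceptIfNotNull(T object, Consumer<T> consumer) {
        Optional.ofNullable(object).ifPresent(consumer);
    }

    public static void main(String[] args) {
        System.out.println(applyIfNotNull("abc", String::length));              // 3
        System.out.println((Object) applyIfNotNull((String) null, String::length)); // null
    }
}
```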
[jira] [Comment Edited] (CONFIGURATION-845) DatabaseConfiguration loses list property's order
[ https://issues.apache.org/jira/browse/CONFIGURATION-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844262#comment-17844262 ] Gary D. Gregory edited comment on CONFIGURATION-845 at 5/7/24 12:14 PM: [~pietermartin] Thank you for your report. What should the ORDER BY be? Feel free to create a PR on GitHub. Should this order be optional, since it might not matter for most apps and puts additional load on the DB, especially if the table has no index? was (Author: garydgregory): [~pietermartin] Thank you for your report. What should the ORDER BY be? Feel free to create a PR on GitHub. > DatabaseConfiguration loses list property's order > - > > Key: CONFIGURATION-845 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-845 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: 2.10.1 >Reporter: pieter martin >Priority: Major > > {code:java} > private static final String SQL_GET_PROPERTY = "SELECT * FROM %s WHERE %s =?"; > {code} > This is the query DatabaseConfiguration executes to get a property's values. > It assumes the order will be insertion order, but this is not true on > PostgreSQL. > So configuration.getList(key) loses the order of the list. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (CONFIGURATION-845) DatabaseConfiguration loses list property's order
[ https://issues.apache.org/jira/browse/CONFIGURATION-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844262#comment-17844262 ] Gary D. Gregory commented on CONFIGURATION-845: --- [~pietermartin] Thank you for your report. What should the ORDER BY be? Feel free to create a PR on GitHub. > DatabaseConfiguration loses list property's order > - > > Key: CONFIGURATION-845 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-845 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: 2.10.1 >Reporter: pieter martin >Priority: Major > > {code:java} > private static final String SQL_GET_PROPERTY = "SELECT * FROM %s WHERE %s =?"; > {code} > This is the query DatabaseConfiguration executes to get a property's values. > It assumes the order will be insertion order, but this is not true on > PostgreSQL. > So configuration.getList(key) loses the order of the list. -- This message was sent by Atlassian Jira (v8.20.10#820010)
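One possible shape of the fix, sketched below. The sort column name is entirely hypothetical (which column to order by is exactly the open question in the comment above), and making it nullable keeps the ORDER BY optional for apps that don't need it:

```java
public class QuerySketch {
    // The current query: without an ORDER BY, row order is unspecified,
    // so PostgreSQL may return list values in any order.
    static final String SQL_GET_PROPERTY = "SELECT * FROM %s WHERE %s = ?";

    // Hypothetical fix: append an explicit, optional ORDER BY clause.
    // 'orderColumn' is an invented name for an insertion-order column.
    static String orderedQuery(String table, String keyColumn, String orderColumn) {
        String sql = String.format(SQL_GET_PROPERTY, table, keyColumn);
        return orderColumn == null ? sql : sql + " ORDER BY " + orderColumn;
    }
}
```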
[jira] [Comment Edited] (NET-710) Timestamp parsing fails around the change to daylight savings
[ https://issues.apache.org/jira/browse/NET-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843488#comment-17843488 ] Gary D. Gregory edited comment on NET-710 at 5/4/24 10:00 PM: -- I added {{FTPTimestampParserImplTest.testNet710()}} as a disabled test. was (Author: garydgregory): Do you have a unit test we can debug? > Timestamp parsing fails around the change to daylight savings > - > > Key: NET-710 > URL: https://issues.apache.org/jira/browse/NET-710 > Project: Commons Net > Issue Type: Bug > Components: FTP >Affects Versions: 3.3, 3.8.0 >Reporter: Mike Baranczak >Priority: Major > > {{String ts = "Mar 13 02:33";}} > {{Calendar serverTime = Calendar.getInstance(TimeZone.getTimeZone("EDT"), > Locale.US);}} > {{serverTime.set(2022, 2, 16, 14, 0);}} > {{Calendar c = new FTPTimestampParserImpl().parseTimestamp(ts, serverTime);}} > > {{Result:}} > > {{java.text.ParseException: Timestamp 'Mar 13 02:33' could not be parsed > using a server time of Wed Mar 16 10:00:54 EDT 2022}} > {{ at > org.apache.commons.net.ftp.parser.FTPTimestampParserImpl.parseTimestamp > (FTPTimestampParserImpl.java:246)}} > > I can't tell what's going on, but this seems to have something to do with the > transition to Daylight Savings Time, which happened on Sunday, March 13. I > ran into this bug when trying to get a list of recent files from an FTP > server. (UnixFTPEntryParser ignores the exception silently, which isn't a > great idea, either.) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NET-710) Timestamp parsing fails around the change to daylight savings
[ https://issues.apache.org/jira/browse/NET-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843488#comment-17843488 ] Gary D. Gregory commented on NET-710: - Do you have a unit test we can debug? > Timestamp parsing fails around the change to daylight savings > - > > Key: NET-710 > URL: https://issues.apache.org/jira/browse/NET-710 > Project: Commons Net > Issue Type: Bug > Components: FTP >Affects Versions: 3.3, 3.8.0 >Reporter: Mike Baranczak >Priority: Major > > {{String ts = "Mar 13 02:33";}} > {{Calendar serverTime = Calendar.getInstance(TimeZone.getTimeZone("EDT"), > Locale.US);}} > {{serverTime.set(2022, 2, 16, 14, 0);}} > {{Calendar c = new FTPTimestampParserImpl().parseTimestamp(ts, serverTime);}} > > {{Result:}} > > {{java.text.ParseException: Timestamp 'Mar 13 02:33' could not be parsed > using a server time of Wed Mar 16 10:00:54 EDT 2022}} > {{ at > org.apache.commons.net.ftp.parser.FTPTimestampParserImpl.parseTimestamp > (FTPTimestampParserImpl.java:246)}} > > I can't tell what's going on, but this seems to have something to do with the > transition to Daylight Savings Time, which happened on Sunday, March 13. I > ran into this bug when trying to get a list of recent files from an FTP > server. (UnixFTPEntryParser ignores the exception silently, which isn't a > great idea, either.) -- This message was sent by Atlassian Jira (v8.20.10#820010)
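The failing timestamp sits inside the spring-forward gap: in US Eastern time, the wall-clock hour from 2:00 to 3:00 a.m. on March 13, 2022 never existed. This JDK-only sketch (not Commons Net code) shows how java.time resolves that nonexistent time by shifting it forward, whereas a strict parser like the FTP timestamp parser fails instead:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstGapDemo {
    // Resolves a wall-clock time in US Eastern; times inside the DST gap
    // are shifted forward by the length of the gap (one hour).
    static ZonedDateTime resolveInEastern(LocalDateTime wallClock) {
        return ZonedDateTime.of(wallClock, ZoneId.of("America/New_York"));
    }

    public static void main(String[] args) {
        LocalDateTime inGap = LocalDateTime.of(2022, 3, 13, 2, 33); // "Mar 13 02:33"
        System.out.println(resolveInEastern(inGap)); // 2022-03-13T03:33-04:00[America/New_York]
    }
}
```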
[jira] [Commented] (EMAIL-209) InputStreamDataSource#getInputStream() violates javax.activation.DataSource contract
[ https://issues.apache.org/jira/browse/EMAIL-209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843025#comment-17843025 ] Gary D. Gregory commented on EMAIL-209: --- [~ehubert] Thank you for your report. Feel free to provide a PR on GitHub. FTR [https://docs.oracle.com/javase/8/docs/api/javax/activation/DataSource.html#getInputStream--] > InputStreamDataSource#getInputStream() violates javax.activation.DataSource > contract > > > Key: EMAIL-209 > URL: https://issues.apache.org/jira/browse/EMAIL-209 > Project: Commons Email > Issue Type: Bug >Affects Versions: 1.6.0 >Reporter: Eric Hubert >Priority: Major > > After upgrading from commons-email 1.5 to 1.6.0, an integration test of a > custom MessageHandler implementation taking care of attachments broke, as > attachments were no longer correctly processed. > I had a brief look at this custom implementation and noticed that > org.apache.commons.mail.util.MimeMessageParser#getAttachmentList > was used to retrieve the list of attachment data sources and processed it > twice, relying on the javax.activation.DataSource contract of > #getInputStream(): > "Note that a new InputStream object must be returned each time this method is > called, and the stream must be positioned at the beginning of the data." > which was working just fine with commons-email 1.5 being backed by > javax.mail.util.ByteArrayDataSource. > It looks like with EMAIL-207 a new implementation, > org.apache.commons.mail.activation.InputStreamDataSource, was introduced, > whose current implementation seems to violate this contract. Calling getInputStream > does not provide a fresh InputStream at the beginning of the data, but > returns the existing object in the state of the previous usage, resulting in > incomplete data processing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
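The quoted DataSource contract can be satisfied by draining the source once into a buffer and handing out a fresh stream on every call. This minimal sketch uses plain JDK types only; it is not the actual InputStreamDataSource code, just an illustration of the required behavior:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedDataSource {
    private final byte[] data;

    public BufferedDataSource(InputStream source) throws IOException {
        this.data = source.readAllBytes(); // drain the underlying source exactly once
    }

    // Contract: a NEW stream, positioned at the beginning of the data,
    // on every call -- so callers may process the attachment repeatedly.
    public InputStream getInputStream() {
        return new ByteArrayInputStream(data);
    }
}
```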
[jira] [Commented] (LANG-1732) Add a function to convert a Integer object to boolean value
[ https://issues.apache.org/jira/browse/LANG-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842674#comment-17842674 ] Gary D. Gregory commented on LANG-1732: --- You can just let the compiler unbox the result for you (it will NEVER be null), or explicitly call {{booleanValue()}}. Alternatively, you can toString() the Integer and pass it to {{BooleanUtils.toBoolean(String)}}, or call {{intValue()}} and then {{BooleanUtils.toBoolean(int)}}. Adding an API here might not be worth the extra clutter. > Add a function to convert a Integer object to boolean value > --- > > Key: LANG-1732 > URL: https://issues.apache.org/jira/browse/LANG-1732 > Project: Commons Lang > Issue Type: New Feature >Reporter: Jongjin Bae >Priority: Major > > I need a function to convert a Integer object to boolean value. > If the Integer object is null, the function returns boolean value it received > as an argument. > How about supporting this function in org.apache.commons.lang3.BooleanUtils > class? > I will implement this function, if it looks good. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (LANG-1732) Add a function to convert a Integer object to boolean value
[ https://issues.apache.org/jira/browse/LANG-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved LANG-1732. --- Resolution: Information Provided This functionality already exists: {{BooleanUtils.toBooleanObject(Integer)}}. > Add a function to convert a Integer object to boolean value > --- > > Key: LANG-1732 > URL: https://issues.apache.org/jira/browse/LANG-1732 > Project: Commons Lang > Issue Type: New Feature >Reporter: Jongjin Bae >Priority: Major > > I need a function to convert a Integer object to boolean value. > If the Integer object is null, the function returns boolean value it received > as an argument. > How about supporting this function in org.apache.commons.lang3.BooleanUtils > class? > I will implement this function, if it looks good. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
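The documented semantics of BooleanUtils.toBooleanObject(Integer) — null maps to null, zero to FALSE, anything non-zero to TRUE — can be sketched without the library. This is an illustrative re-implementation, not the Commons Lang source:

```java
public class ToBooleanSketch {
    // Mirrors the documented behavior of BooleanUtils.toBooleanObject(Integer):
    //   null -> null, 0 -> Boolean.FALSE, any other value -> Boolean.TRUE
    static Boolean toBooleanObject(Integer value) {
        if (value == null) {
            return null;
        }
        return value.intValue() != 0 ? Boolean.TRUE : Boolean.FALSE;
    }

    public static void main(String[] args) {
        System.out.println(toBooleanObject(null)); // null
        System.out.println(toBooleanObject(0));    // false
        System.out.println(toBooleanObject(5));    // true
    }
}
```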
[jira] [Commented] (IO-785) FileUtils.deleteDirectory fails to delete directory on Azure AKS
[ https://issues.apache.org/jira/browse/IO-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841977#comment-17841977 ] Gary D. Gregory commented on IO-785: Please test with {*}2.16.1{*}. > FileUtils.deleteDirectory fails to delete directory on Azure AKS > - > > Key: IO-785 > URL: https://issues.apache.org/jira/browse/IO-785 > Project: Commons IO > Issue Type: Bug > Components: Utilities >Affects Versions: 2.9.0 > Environment: Azure Files Container Storage Interface (CSI) driver in > Azure Kubernetes Service (AKS) > apiVersion: storage.k8s.io/v1 > kind: StorageClass > metadata: > annotations: > kubectl.kubernetes.io/last-applied-configuration: | > > \{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"azure-aks-test-cluster-file-storage-class"},"mountOptions":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","actimeo=30"],"provisioner":"kubernetes.io/azure-file"} > storageclass.kubernetes.io/is-default-class: "false" > creationTimestamp: "2022-01-01T0-00:00:00Z" > name: azure-aks-test-cluster-file-storage-class > resourceVersion: "12768518" > uid: bc6-invalid-8c > mountOptions: > - dir_mode=0777 > - file_mode=0777 > - uid=0 > - gid=0 > - mfsymlinks > - cache=strict > - actimeo=30 > provisioner: kubernetes.io/azure-file > reclaimPolicy: Delete > volumeBindingMode: Immediate >Reporter: Ivica Loncar >Priority: Major > > On Azure AKS file persistent volume > (https://learn.microsoft.com/en-us/azure/aks/azure-files-csi) we've got > following exception: > {noformat} > org.apache.commons.io.IOExceptionList: > work/bef4a1a575c54ac099816b6babf4bde9/job-3418 > at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:330) > at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1191) > at > com.xebialabs.xlrelease.remote.executor.k8s.KubeService.cleanWorkDir(KubeService.scala:107) > at > 
com.xebialabs.xlrelease.remote.executor.k8s.KubeJobExecutorService.cleanup(KubeJobExecutorService.scala:27) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor.$anonfun$handleEvent$4(JobRunnerActor.scala:219) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor.$anonfun$handleEvent$4$adapted(JobRunnerActor.scala:218) > at scala.Option.foreach(Option.scala:437) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor.com$xebialabs$xlrelease$remote$runner$JobRunnerActor$$handleEvent(JobRunnerActor.scala:218) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.$anonfun$applyOrElse$2(JobRunnerActor.scala:45) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.$anonfun$applyOrElse$2$adapted(JobRunnerActor.scala:45) > at scala.Option.foreach(Option.scala:437) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.applyOrElse(JobRunnerActor.scala:45) > at > scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:35) > at akka.event.LoggingReceive.apply(LoggingReceive.scala:96) > at akka.event.LoggingReceive.apply(LoggingReceive.scala:70) > at > akka.persistence.Eventsourced$$anon$2$$anonfun$1.applyOrElse(Eventsourced.scala:643) > at akka.actor.Actor.aroundReceive(Actor.scala:537) > at akka.actor.Actor.aroundReceive$(Actor.scala:535) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor.akka$persistence$Eventsourced$$super$aroundReceive(JobRunnerActor.scala:22) > at > akka.persistence.Eventsourced$$anon$3.stateReceive(Eventsourced.scala:771) > at akka.persistence.Eventsourced.aroundReceive(Eventsourced.scala:245) > at akka.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:244) > at > com.xebialabs.xlrelease.remote.runner.JobRunnerActor.aroundReceive(JobRunnerActor.scala:22) > at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579) > at akka.actor.ActorCell.invoke(ActorCell.scala:547) > at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) > at akka.dispatch.Mailbox.run(Mailbox.scala:231) > at akka.dispatch.Mailbox.exec(Mailbox.scala:243) > at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) > at > java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown > Source) > at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source) > at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) > at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown > Source) > Caused by: java.io.IOException: Cannot delete file: > work/bef4a1a575c54ac099816b6babf4bde9/job-3418/input
[jira] [Commented] (IO-850) DeletingPathVisitor always fails to delete a dir when symbolic link target is deleted before the link itself
[ https://issues.apache.org/jira/browse/IO-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841974#comment-17841974 ] Gary D. Gregory commented on IO-850: Note that *2.16.1* is available. > DeletingPathVisitor always fails to delete a dir when symbolic link target is > deleted before the link itself > > > Key: IO-850 > URL: https://issues.apache.org/jira/browse/IO-850 > Project: Commons IO > Issue Type: Bug >Affects Versions: 2.15.1 >Reporter: Johan Compagner >Priority: Major > > DeletingPathVisitor doesn't give us an option for the SimplePathVisitor > superclass visitFileFailedFunction property (that constructor also exposed to > the intermediate class CountingPathVisitor is not used/exposed in the > DeletingPathVisitor class) > So i can't use that but the DeletingPathVisitor should use that, because i > can't delete a certain directory if you have something like this: > > parentdir: > adir > symboliclinkpointingtoadir > > if that happens and i call this Files.walkFileTree(path, > DeletingPathVisitor.withLongCounters()); > on the parent dir and it first deletes "adir" > then it will completely fail to delete that parentdir (or clean the parent > dir) > this is because Files will try to open the directory stream of that > "symboliclinkpointingtoadir" and that will fail because the "adir" is already > gone. so its now an invalid symbolic link. The the Files walkFileTree > implementation does call visitFileFailed but that is competely not > implemented in the DeletingPathVisitor and i have no means of also adding it > to it. 
> i think DeletingPathVisitor should just do what i now do in my own > implementation: > > > {code:java} > public FileVisitResult > visitFileFailed(Path file, IOException exc) throws IOException > { > Files.deleteIfExists(file); > return FileVisitResult.CONTINUE; > } > > {code} > just try to delete that file and be done with it (this works fine) > now it bombs out because that IOException is something like "FileNotFound" > and if if i call it again and again on that dir (so "adir" is already gone) > the DeletingPathVisitor is never able to delete/clean that parent dir it > always bombs out. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
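The reporter's workaround generalizes to a plain SimpleFileVisitor using only JDK types, sketched below (not the Commons IO DeletingPathVisitor). It deletes a tree bottom-up and treats a failed visit — such as a dangling symbolic link whose target was already removed — as something to delete rather than a fatal error:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class DeleteTree {
    public static void delete(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.deleteIfExists(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
                // A dangling symlink can land here; delete the link itself
                // instead of propagating the exception.
                Files.deleteIfExists(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                Files.deleteIfExists(dir); // children are gone; remove the directory
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
```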
[jira] [Comment Edited] (IO-855) Clarify behavior of PeekableInputStream.peek() in JavaDoc
[ https://issues.apache.org/jira/browse/IO-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841968#comment-17841968 ] Gary D. Gregory edited comment on IO-855 at 4/29/24 12:28 PM: -- Yeah, this class made it into the codebase without unit tests, so we only have the Javadocs to go by to detect the intent of the original author. What's a bug? What's a feature? Who knows! I started https://lists.apache.org/thread/qr2ksllxcjvw5xnfgh19g554znhqmmpm was (Author: garydgregory): Yeah, this class made it into the codebase without unit tests, so we only have the Javadocs to go by to detect the intent of the original author. What's a bug? What's a feature? Who knows! > Clarify behavior of PeekableInputStream.peek() in JavaDoc > - > > Key: IO-855 > URL: https://issues.apache.org/jira/browse/IO-855 > Project: Commons IO > Issue Type: Improvement > Components: Streams/Writers >Affects Versions: 2.15.1 >Reporter: Dominik Stadler >Priority: Minor > > The current JavaDoc of the PeekableInputStream states *"Returns whether the > next bytes in the buffer are as given by sourceBuffer."*. When trying to > use it I expected the following to work: > > {code:java} > PeekableInputStream stream = new PeekableInputStream( > new ByteArrayInputStream("Some text buffer".getBytes(StandardCharsets.UTF_8))); > assertTrue(stream.peek("Some".getBytes(StandardCharsets.UTF_8))); > {code} > > However this fails because the current implementation checks if the available > bytes on the stream *exactly* match the bytes in the stream, so if there is > more data available, it will return false! > > If this is the intended behavior, it should be made more prominent in the > JavaDoc. 
> P.S.: I am not sure how useful such an implementation is as at least for me > most such use-cases are of type "startsWith", not "equals". -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IO-855) Clarify behavior of PeekableInputStream.peek() in JavaDoc
[ https://issues.apache.org/jira/browse/IO-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841968#comment-17841968 ] Gary D. Gregory commented on IO-855: Yeah, this class made it into the codebase without unit tests, so we only have the Javadocs to go by to detect the intent of the original author. What's a bug? What's a feature? Who knows! > Clarify behavior of PeekableInputStream.peek() in JavaDoc > - > > Key: IO-855 > URL: https://issues.apache.org/jira/browse/IO-855 > Project: Commons IO > Issue Type: Improvement > Components: Streams/Writers >Affects Versions: 2.15.1 >Reporter: Dominik Stadler >Priority: Minor > > The current JavaDoc of the PeekableInputStream states *"Returns whether the > next bytes in the buffer are as given by sourceBuffer."*. When trying to > use it I expected the following to work: > > {code:java} > PeekableInputStream stream = new PeekableInputStream( > new ByteArrayInputStream("Some text buffer".getBytes(StandardCharsets.UTF_8))); > assertTrue(stream.peek("Some".getBytes(StandardCharsets.UTF_8))); > {code} > > However this fails because the current implementation checks if the available > bytes on the stream *exactly* match the bytes in the stream, so if there is > more data available, it will return false! > > If this is the intended behavior, it should be made more prominent in the > JavaDoc. > P.S.: I am not sure how useful such an implementation is, as at least for me > most such use-cases are of type "startsWith", not "equals". -- This message was sent by Atlassian Jira (v8.20.10#820010)
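The "startsWith"-style peek the reporter expected can be sketched over any mark-supporting stream with plain JDK I/O. This is illustrative only, not the Commons IO implementation, and the helper name `peekStartsWith` is invented here:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PeekDemo {
    // Returns true if the stream's next bytes start with 'expected',
    // leaving the stream position untouched via mark/reset.
    static boolean peekStartsWith(InputStream in, byte[] expected) throws IOException {
        in.mark(expected.length);
        byte[] actual = in.readNBytes(expected.length);
        in.reset();
        return Arrays.equals(actual, expected);
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new BufferedInputStream(
                new ByteArrayInputStream("Some text buffer".getBytes(StandardCharsets.UTF_8)));
        System.out.println(peekStartsWith(in, "Some".getBytes(StandardCharsets.UTF_8))); // true
        System.out.println((char) in.read()); // S -- position was not consumed
    }
}
```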
[jira] [Commented] (COMPRESS-678) ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing)
[ https://issues.apache.org/jira/browse/COMPRESS-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841632#comment-17841632 ] Gary D. Gregory commented on COMPRESS-678: -- TY for the confirmation. > ArArchiveOutputStream doesn't pad correctly when a file name length is odd > and greater than 16 (padding missing) > > > Key: COMPRESS-678 > URL: https://issues.apache.org/jira/browse/COMPRESS-678 > Project: Commons Compress > Issue Type: Bug > Components: Archivers >Affects Versions: 1.26.1 >Reporter: takaaki nakama >Assignee: Gary D. Gregory >Priority: Minor > Fix For: 1.26.2 > > > Using ArArchiveInputStream, reading content created by ArArchiveOutputStream > causes IOException "Invalid entry trailer." at specific conditions. > > h4. Conditions > 1. LongFILE_BSD mode is enabled > 2. Ar file contains at least two entries. > 1. First entry's name length is longer than 16bytes and odd > 2. First entry's body length is odd. > 3. Second entry's name length is odd > > h4. Cause > ArArchiveOutputStream add padding if entryOffset is odd. This entryOffset > only includes body length, but not entry name length. > [https://github.com/apache/commons-compress/blob/master/src/main/java/org/apache/commons/compress/archivers/ar/ArArchiveOutputStream.java#L80] > > ArArchiveIutputStream try to remove padding when offset is odd. This offset > includes body length and name length. > [https://github.com/apache/commons-compress/blob/master/src/main/java/org/apache/commons/compress/archivers/ar/ArArchiveInputStream.java#L266] > > So encoding/decoding use different logics for padding, and at specific > conditions, ArArchiveIutputStream remove 1byte that is actually not padding > by mistake. > > > h4. 
Reproduction Code > > {code:java} > package test; > import java.io.File; > import java.io.FileInputStream; > import java.io.FileOutputStream; > import java.io.IOException; > import org.apache.commons.compress.archivers.ar.ArArchiveEntry; > import org.apache.commons.compress.archivers.ar.ArArchiveInputStream; > import org.apache.commons.compress.archivers.ar.ArArchiveOutputStream; > import org.junit.Test; > public class ArTest { > @Test > public void test() throws IOException { > File file = new File("test.ar"); > ArArchiveOutputStream arOut = new ArArchiveOutputStream(new > FileOutputStream(file)); > arOut.setLongFileMode(ArArchiveOutputStream.LONGFILE_BSD); > arOut.putArchiveEntry(new ArArchiveEntry("01234567891234567", 1)); > arOut.write(new byte[]{1}); > arOut.closeArchiveEntry(); > arOut.putArchiveEntry(new ArArchiveEntry("a", 1)); > arOut.write(new byte[]{1}); > arOut.closeArchiveEntry(); > arOut.close(); > ArArchiveInputStream arIn = new ArArchiveInputStream(new > FileInputStream(file)); > ArArchiveEntry entry = arIn.getNextArEntry(); > System.out.println(entry.getName()); > arIn.readAllBytes(); > entry = arIn.getNextArEntry(); // <- This line causes the exception. > System.out.println(entry.getName()); > } > }{code} > > > > h4. Error stack Trace > {code:java} > java.io.IOException: Invalid entry trailer. not read the content? 
Occurred at > byte: 146 > at > org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:294) > at org.apache.druid.mila.ArTest.test(ArTest.java:34) > at > java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) > at java.base/java.lang.reflect.Method.invoke(Method.java:580) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) >
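The cause described above can be expressed as simple parity arithmetic: the writer pads based on the body length alone, while the reader expects padding based on name length plus body length, so the two disagree exactly when the BSD long file name has odd length. A sketch of that arithmetic (not the Commons Compress code) using the issue's own reproduction values:

```java
public class ArPaddingSketch {
    // Writer side (pre-fix): pad byte written when the body length is odd.
    static int writerPad(int nameLen, int bodyLen) {
        return bodyLen % 2;
    }

    // Reader side: pad byte expected when name + body length is odd.
    static int readerPad(int nameLen, int bodyLen) {
        return (nameLen + bodyLen) % 2;
    }

    public static void main(String[] args) {
        // Name "01234567891234567" (17 bytes: odd and > 16) with a 1-byte body:
        // the writer emits a pad byte but the reader expects none, so every
        // subsequent entry is read one byte out of alignment.
        System.out.println(writerPad(17, 1)); // 1
        System.out.println(readerPad(17, 1)); // 0
    }
}
```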
[jira] [Resolved] (BCEL-285) "ClassFormatException: Invalid signature" thrown on generics
[ https://issues.apache.org/jira/browse/BCEL-285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved BCEL-285. -- Resolution: Cannot Reproduce > "ClassFormatException: Invalid signature" thrown on generics > > > Key: BCEL-285 > URL: https://issues.apache.org/jira/browse/BCEL-285 > Project: Commons BCEL > Issue Type: Bug > Components: Parser >Affects Versions: 6.0 > Environment: java 8, macOS 10.12.2 >Reporter: Tom Brus >Priority: Major > > The following stripped-down example throws an exception in {{BCEL}}: > {code:title=Main.java|borderStyle=solid} > import java.io.IOException; > import java.util.stream.Stream; > import org.apache.bcel.classfile.ClassParser; > public class Main { > public static void main(String[] args) throws IOException { > ClassParser parser = new ClassParser(Main.class.getResourceAsStream("Main.class"), "Main.class"); > parser.parse().getMethods()[2].getCode().toString(); /* <- EXCEPTION thrown */ > > ((Stream)null).peek(x -> {}); /* <- problem spot */ > } > } > {code} > The exception is: > {code} > Exception in thread "main" org.apache.bcel.classfile.ClassFormatException: > Invalid signature: `!+Ljava/lang/Object;' > at org.apache.bcel.classfile.Utility.signatureToString(Utility.java:930) > at > org.apache.bcel.classfile.LocalVariable.toStringShared(LocalVariable.java:187) > at > org.apache.bcel.classfile.LocalVariableTypeTable.toString(LocalVariableTypeTable.java:121) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at org.apache.bcel.classfile.Code.toString(Code.java:316) > at org.apache.bcel.classfile.Code.toString(Code.java:328) > at Main.main(Main.java:9) > {code} > This problem only occurs in Eclipse (I am using NEON.2) and does not occur in > IntelliJ (I am using 2016.3.2), which probably indicates that {{ecj}} and > {{javac}} differ in the class code they generate.
> Since our project uses {{sonar:findbugs}}, which uses {{BCEL}}, this is a major > problem for us. -- This message was sent by Atlassian Jira (v8.20.10#820010)
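The rejected string `!+Ljava/lang/Object;` fails because a JVMS type argument may start with a wildcard indicator (`+`, `-`, `*`) or a type token (`L`, `T`, `[`), but never `!`. A minimal, hypothetical sketch of that first-character check — this is an illustration of the signature grammar, not BCEL's actual `Utility.signatureToString` logic:

```java
// Hypothetical sketch: checks only whether a string could begin a valid
// JVMS 4.7.9.1 type argument. Not BCEL's parser.
public class SignatureCheck {
    static boolean startsValidTypeArgument(String sig) {
        if (sig.isEmpty()) {
            return false;
        }
        char c = sig.charAt(0);
        // '+' / '-' are bounded wildcards, '*' the unbounded wildcard,
        // 'L' starts a class type, 'T' a type variable, '[' an array type.
        return c == '+' || c == '-' || c == '*' || c == 'L' || c == 'T' || c == '[';
    }

    public static void main(String[] args) {
        System.out.println(startsValidTypeArgument("+Ljava/lang/Object;"));  // true
        System.out.println(startsValidTypeArgument("!+Ljava/lang/Object;")); // false: '!' is not in the grammar
    }
}
```

This also illustrates why the report pointed at a compiler difference: a parser following the grammar strictly rejects any ecj-specific prefix byte that javac does not emit.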
[jira] [Comment Edited] (COMPRESS-678) ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing)
[ https://issues.apache.org/jira/browse/COMPRESS-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841540#comment-17841540 ] Gary D. Gregory edited comment on COMPRESS-678 at 4/27/24 9:23 PM: --- In git master: ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing). I verified this with the `ar` app on macOS. See {{org.apache.commons.compress.archivers.ar.Compress678Test}} Also in snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/ was (Author: garydgregory): In git master: ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing). I verified this with the `ar` app on macOS. Also in snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/ > ArArchiveOutputStream doesn't pad correctly when a file name length is odd > and greater than 16 (padding missing) > > > Key: COMPRESS-678 > URL: https://issues.apache.org/jira/browse/COMPRESS-678 > Project: Commons Compress > Issue Type: Bug > Components: Archivers >Affects Versions: 1.26.1 >Reporter: takaaki nakama >Assignee: Gary D. Gregory >Priority: Minor > Fix For: 1.26.2 > > > Using ArArchiveInputStream, reading content created by ArArchiveOutputStream > causes IOException "Invalid entry trailer." under specific conditions. > > h4. Conditions > 1. LONGFILE_BSD mode is enabled > 2. The ar file contains at least two entries: > 1. The first entry's name is longer than 16 bytes and of odd length > 2. The first entry's body length is odd > 3. The second entry's name length is odd > > h4. Cause > ArArchiveOutputStream adds padding if entryOffset is odd. This entryOffset > only includes the body length, not the entry name length.
> [https://github.com/apache/commons-compress/blob/master/src/main/java/org/apache/commons/compress/archivers/ar/ArArchiveOutputStream.java#L80] > > ArArchiveInputStream tries to remove padding when the offset is odd. This offset > includes both the body length and the name length. > [https://github.com/apache/commons-compress/blob/master/src/main/java/org/apache/commons/compress/archivers/ar/ArArchiveInputStream.java#L266] > > So encoding and decoding use different padding logic, and under the specific > conditions above, ArArchiveInputStream mistakenly removes 1 byte that is not actually padding. > > > h4. Reproduction Code > > {code:java} > package test; > import java.io.File; > import java.io.FileInputStream; > import java.io.FileOutputStream; > import java.io.IOException; > import org.apache.commons.compress.archivers.ar.ArArchiveEntry; > import org.apache.commons.compress.archivers.ar.ArArchiveInputStream; > import org.apache.commons.compress.archivers.ar.ArArchiveOutputStream; > import org.junit.Test; > public class ArTest { > @Test > public void test() throws IOException { > File file = new File("test.ar"); > ArArchiveOutputStream arOut = new ArArchiveOutputStream(new FileOutputStream(file)); > arOut.setLongFileMode(ArArchiveOutputStream.LONGFILE_BSD); > arOut.putArchiveEntry(new ArArchiveEntry("01234567891234567", 1)); > arOut.write(new byte[]{1}); > arOut.closeArchiveEntry(); > arOut.putArchiveEntry(new ArArchiveEntry("a", 1)); > arOut.write(new byte[]{1}); > arOut.closeArchiveEntry(); > arOut.close(); > ArArchiveInputStream arIn = new ArArchiveInputStream(new FileInputStream(file)); > ArArchiveEntry entry = arIn.getNextArEntry(); > System.out.println(entry.getName()); > arIn.readAllBytes(); > entry = arIn.getNextArEntry(); // <- This line causes the exception. > System.out.println(entry.getName()); > } > }{code} > > > > h4. Error Stack Trace > {code:java} > java.io.IOException: Invalid entry trailer. not read the content?
Occurred at > byte: 146 > at > org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:294) > at org.apache.druid.mila.ArTest.test(ArTest.java:34) > at > java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) > at java.base/java.lang.reflect.Method.invoke(Method.java:580) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at >
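The writer/reader mismatch described in the Cause section above can be sketched as plain arithmetic. This is a hypothetical illustration of the two padding rules as the report describes them, not the actual Commons Compress source; the method names are invented for clarity. With the reproduction's first entry (BSD long name of 17 bytes, body of 1 byte), the writer emits a pad byte that the reader never skips, so every subsequent read is off by one:

```java
// Hypothetical sketch of the COMPRESS-678 padding mismatch (illustration only,
// not the Commons Compress implementation).
public class ArPaddingMismatch {
    // Writer side (per the report): pads only when the body length is odd;
    // the BSD long name stored in the data area is not counted.
    static int writerPadding(int nameLen, int bodyLen) {
        return (bodyLen % 2 != 0) ? 1 : 0;
    }

    // Reader side (per the report): strips a pad byte when name + body is odd.
    static int readerPadding(int nameLen, int bodyLen) {
        return ((nameLen + bodyLen) % 2 != 0) ? 1 : 0;
    }

    public static void main(String[] args) {
        int nameLen = 17; // "01234567891234567": longer than 16 and odd
        int bodyLen = 1;  // odd body
        int written = writerPadding(nameLen, bodyLen);   // 1: body is odd
        int stripped = readerPadding(nameLen, bodyLen);  // 0: 17 + 1 = 18 is even
        // The reader is left 1 byte short of the next entry header, which is
        // why getNextArEntry() then fails with "Invalid entry trailer."
        System.out.println("writer pads " + written + " byte(s), reader strips " + stripped);
    }
}
```

Any fix that makes both sides count the same bytes (as the 1.26.2 change does for the writer) removes the misalignment.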
[jira] [Updated] (COMPRESS-678) ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing)
[ https://issues.apache.org/jira/browse/COMPRESS-678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated COMPRESS-678: - Summary: ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing) (was: ArArchiveInputStream.getNextArEntry() cause IOException "Invalid entry trailer.")
[jira] [Resolved] (COMPRESS-678) ArArchiveInputStream.getNextArEntry() cause IOException "Invalid entry trailer."
[ https://issues.apache.org/jira/browse/COMPRESS-678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved COMPRESS-678. -- Fix Version/s: 1.26.2 Assignee: Gary D. Gregory Resolution: Fixed In git master: ArArchiveOutputStream doesn't pad correctly when a file name length is odd and greater than 16 (padding missing). I verified this with the `ar` app on macOS. Also in snapshot builds in https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.26.2-SNAPSHOT/
[jira] [Commented] (COMPRESS-678) ArArchiveInputStream.getNextArEntry() cause IOException "Invalid entry trailer."
[ https://issues.apache.org/jira/browse/COMPRESS-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841474#comment-17841474 ] Gary D. Gregory commented on COMPRESS-678: -- Thank you [~tnakama888] I can run the new test locally and reproduce your report.
[jira] [Updated] (BCEL-228) SpotBugs issues
[ https://issues.apache.org/jira/browse/BCEL-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated BCEL-228: - Summary: SpotBugs issues (was: Findbugs issues) > SpotBugs issues > --- > > Key: BCEL-228 > URL: https://issues.apache.org/jira/browse/BCEL-228 > Project: Commons BCEL > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Charles Honton >Priority: Major > Fix For: 6.9.1 > > > Fix findbugs issues. The fixable issues may be dependent upon whether BC is > broken.
[jira] [Resolved] (BCEL-228) SpotBugs issues
[ https://issues.apache.org/jira/browse/BCEL-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved BCEL-228. -- Fix Version/s: 6.9.0 (was: 6.9.1) Resolution: Fixed > SpotBugs issues > --- > > Key: BCEL-228 > URL: https://issues.apache.org/jira/browse/BCEL-228 > Project: Commons BCEL > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Charles Honton >Priority: Major > Fix For: 6.9.0 > > > Fix findbugs issues. The fixable issues may be dependent upon whether BC is > broken.
[jira] [Resolved] (BCEL-229) Checkstyle issues
[ https://issues.apache.org/jira/browse/BCEL-229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved BCEL-229. -- Fix Version/s: 6.9.0 (was: 6.9.1) Resolution: Fixed > Checkstyle issues > - > > Key: BCEL-229 > URL: https://issues.apache.org/jira/browse/BCEL-229 > Project: Commons BCEL > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Charles Honton >Priority: Major > Fix For: 6.9.0 > > > Fix checkstyle issues. The fixable issues may be dependent upon whether BC > is broken.