[GitHub] [commons-imaging] garydgregory commented on a diff in pull request #301: IMAGING-355 - Large animated GIF takes too much heap memory in getMetadata
garydgregory commented on code in PR #301: URL: https://github.com/apache/commons-imaging/pull/301#discussion_r1257659022 ## src/main/java/org/apache/commons/imaging/formats/gif/GifImageParser.java: ## @@ -407,7 +407,7 @@ public byte[] getIccProfileBytes(final ByteSource byteSource, final GifImagingPa @Override public ImageInfo getImageInfo(final ByteSource byteSource, final GifImagingParameters params) throws ImagingException, IOException { -final GifImageContents blocks = readFile(byteSource, false); +final GifImageContents blocks = readFile(byteSource, true); Review Comment: **Note**: All current callers of `org.apache.commons.imaging.formats.gif.GifImageParser.readFile(ByteSource, boolean)` pass false for `stopBeforeImageData`. Would it make sense for all call sites to pick up this value from `GifImagingParameters`? Or would this cause some use cases to break? We will need to make sure to unit test whatever code we change or add. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (POOL-411) NPE when deregistering key at end of borrow
[ https://issues.apache.org/jira/browse/POOL-411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741435#comment-17741435 ] Phil Steitz commented on POOL-411: -- [~sebb] I temporarily added sleeps between the locks in register/deregister. A couple of timing-related tests failed because their latency controls did not work correctly under the added delay, but nothing else failed. I am not sure how to set this up elegantly so it can be run periodically, but I agree it is a good idea, perhaps in the run-up to releases (like running soak and performance tests with commons-performance). On the double-checking, there are two cases to consider. First, register: there is effectively double-checking there, or more precisely the assurance provided by computeIfAbsent. That is really where the bug fix is. It makes sure that if there was no pool under the key when the read lock was acquired but there is one after the write lock is acquired, no new pool is created. Second, deregister: if poolMap.get(k) returns null after getting the write lock, that means the original pool has been removed and deregister is being applied to the wrong pool. The right thing to do there is throw ISE, but it should not be able to happen (which is why there was no null check before this bug was reported). We could add a check there, but I am concerned that it would be a check that should never fail, and there is a small but non-zero performance cost to performing it for pretty much every pool action (register/deregister are very hot methods). The same applies to testing the counters on every inspection. Perhaps best is to temporarily insert these checks along with the sleeps as part of test runs. Any suggestions about how to do that in a simple, maintainable way? 
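The computeIfAbsent-based register and the ISE-on-null deregister described above can be sketched roughly as follows. This is a stand-in, not the actual commons-pool code: the real GenericKeyedObjectPool uses read/write locks around poolMap, while this sketch leans on ConcurrentHashMap to show only the double-check semantics.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for per-key registration in a keyed pool.
// Names (ObjectDeque, register, deregister) mirror the pool code, but this
// is a sketch of the pattern, not the commons-pool implementation.
public class RegisterSketch<K> {

    static final class ObjectDeque {
        final AtomicInteger numInterested = new AtomicInteger();
    }

    private final Map<K, ObjectDeque> poolMap = new ConcurrentHashMap<>();

    // computeIfAbsent provides the "double-check" atomically: if another
    // thread created the deque between our lookup and our insert, we get
    // that instance back instead of clobbering it with a new one.
    ObjectDeque register(final K k) {
        final ObjectDeque deque = poolMap.computeIfAbsent(k, key -> new ObjectDeque());
        deque.numInterested.incrementAndGet();
        return deque;
    }

    // A null lookup here means the pool was removed out from under us,
    // which should be impossible: throw ISE rather than letting an NPE
    // escape from the counter access.
    void deregister(final K k) {
        final ObjectDeque deque = poolMap.get(k);
        if (deque == null) {
            throw new IllegalStateException("No deque under key " + k);
        }
        if (deque.numInterested.decrementAndGet() == 0) {
            poolMap.remove(k);
        }
    }

    int size() {
        return poolMap.size();
    }
}
```

The trade-off Phil raises is visible here: the null check runs on every deregister, a very hot path, to guard against a state that the register fix should already make unreachable.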
> NPE when deregistering key at end of borrow > --- > > Key: POOL-411 > URL: https://issues.apache.org/jira/browse/POOL-411 > Project: Commons Pool > Issue Type: Task >Affects Versions: 2.11.1 >Reporter: Richard Eckart de Castilho >Priority: Major > Fix For: 2.12.0 > > > There is a potential for an NPE happening in the finally block of > borrowObject: > {noformat} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.commons.pool2.impl.GenericKeyedObjectPool$ObjectDeque.getNumInterested()" > because "objectDeque" is null > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.deregister(GenericKeyedObjectPool.java:821) > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:507) > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:350) > > {noformat} > From reading the code, it seems this could happen e.g. if a pool is > concurrently cleared while a borrow is in progress. > Not sure what a proper solution here would be. Maybe deregister should > silently do nothing if poolMap.get(k) returns null? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (COMPRESS-647) ArrayIndexOutOfBoundsException when reading Zip with data descriptor entries
[ https://issues.apache.org/jira/browse/COMPRESS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Osipov updated COMPRESS-647: Description: The attached fuzzed zip file fails to open with the following test when the {{allowStoredEntriesWithDataDescriptor}} value is {{true}}. {code:java} @ParameterizedTest @ValueSource(booleans = {true, false}) public void zipInputStream(final boolean allowStoredEntriesWithDataDescriptor) { try (ZipArchiveInputStream zIn = new ZipArchiveInputStream(Files.newInputStream(Paths.get("crash-commons-compress-ZipArchiveInputStream-dataDescriptor")), "UTF-8", false, allowStoredEntriesWithDataDescriptor)) { ZipArchiveEntry zae = zIn.getNextZipEntry(); while (zae != null) { zae = zIn.getNextZipEntry(); } } catch (IOException e) { // Ignore expected exception } } {code} The exception is {code:java} java.lang.ArrayIndexOutOfBoundsException: arraycopy: source index -6 out of bounds for byte[512] at java.base/java.lang.System.arraycopy(Native Method) at java.base/java.io.PushbackInputStream.unread(PushbackInputStream.java:232) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.pushback(ZipArchiveInputStream.java:979) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.bufferContainsSignature(ZipArchiveInputStream.java:471) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStoredEntry(ZipArchiveInputStream.java:1282) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStored(ZipArchiveInputStream.java:1211) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:1013) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.skip(ZipArchiveInputStream.java:1343) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.closeEntry(ZipArchiveInputStream.java:562) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.getNextZipEntry(ZipArchiveInputStream.java:735) at 
org.apache.commons.compress.archivers.zip.ZipArchiveInputStreamTest.zipInputStream(ZipArchiveInputStreamTest.java:765) {code} I also tried to open the file with the ZipFile class and the crash does not occur there. Seems to only affect the stream implementation with that specific option. was: The attached fuzzed zip file fails to open with the following test when the {{allowStoredEntriesWithDataDescriptor}} value is {{true}}. {code:java} @ParameterizedTest @ValueSource(booleans = {true, false}) public void zipInputStream(final boolean allowStoredEntriesWithDataDescriptor) { try (ZipArchiveInputStream zIn = new ZipArchiveInputStream(Files.newInputStream(Paths.get("crash-commons-compress-ZipArchiveInputStream-dataDescriptor")), "UTF-8", false, allowStoredEntriesWithDataDescriptor)) { ZipArchiveEntry zae = zIn.getNextZipEntry(); while (zae != null) { zae = zIn.getNextZipEntry(); } } catch (IOException e) { // Ignore expected exception } } {code} The exception is {code:java} java.lang.ArrayIndexOutOfBoundsException: arraycopy: source index -6 out of bounds for byte[512] at java.base/java.lang.System.arraycopy(Native Method) at java.base/java.io.PushbackInputStream.unread(PushbackInputStream.java:232) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.pushback(ZipArchiveInputStream.java:979) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.bufferContainsSignature(ZipArchiveInputStream.java:471) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStoredEntry(ZipArchiveInputStream.java:1282) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStored(ZipArchiveInputStream.java:1211) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:1013) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.skip(ZipArchiveInputStream.java:1343) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.closeEntry(ZipArchiveInputStream.java:562) at 
org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.getNextZipEntry(ZipArchiveInputStream.java:735) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStreamTest.zipInputStream(ZipArchiveInputStreamTest.java:765) {code} I also tried to open the file with the ZipFile class and the crash does not occur there. Seems to only affect the stream implementation with that specific option. > ArrayIndexOutOfBoundsException when reading Zip with data descriptor entries > > > Key: COMPRESS-647 > URL: https://issues.apache.org/jira/browse/COMPRESS-647 > Project: Commons Compress >
[jira] [Closed] (FILEUPLOAD-343) Update Project Version
[ https://issues.apache.org/jira/browse/FILEUPLOAD-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory closed FILEUPLOAD-343. -- Fix Version/s: 1.5 Resolution: Fixed > Update Project Version > -- > > Key: FILEUPLOAD-343 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-343 > Project: Commons FileUpload > Issue Type: Wish >Reporter: Gabryel Monteiro >Priority: Minor > Fix For: 1.5 > > > Hello, > > It seems the last released version was released three years ago as 1.4. It > seems to be really sad, as there are further updates in the repository that > are not reflected in this release. > One of those problems would be the fact that the commons-io version in the > version 1.4 is a vulnerable one, that has a CVE. This doesn't happen in the > main repository. > It would be very interesting that you could upload a version 1.5 of the > library in the current state, so other projects could use a more recent > version and be more protected. At the moment I am using the > io.github.openfeign.form:feign-form-spring library and I have to manually > override the commons-io version, so the problem is avoided. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FILEUPLOAD-347) CVE in commons-io versions less than 2.7
[ https://issues.apache.org/jira/browse/FILEUPLOAD-347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory closed FILEUPLOAD-347. -- Fix Version/s: 1.5 Resolution: Fixed > CVE in commons-io versions less than 2.7 > - > > Key: FILEUPLOAD-347 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-347 > Project: Commons FileUpload > Issue Type: Task >Affects Versions: 1.4 > Environment: java 17 on macos >Reporter: Michael Brewer >Priority: Major > Fix For: 1.5 > > Attachments: Screen Shot 2022-07-17 at 10.19.06 AM.png > > > Current version of commons-fileupload depends on commons-io 2.2 which has a > medium level CVE. Looks like the github unreleased version is already using > the latest, so once this is released the CVE should go away. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FILEUPLOAD-343) Update Project Version
[ https://issues.apache.org/jira/browse/FILEUPLOAD-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741409#comment-17741409 ] Dennis Kieselhorst commented on FILEUPLOAD-343: --- This can be resolved; release 1.5 has been out for a while. > Update Project Version > -- > > Key: FILEUPLOAD-343 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-343 > Project: Commons FileUpload > Issue Type: Wish >Reporter: Gabryel Monteiro >Priority: Minor > > Hello, > > It seems the last released version was released three years ago as 1.4. It > seems to be really sad, as there are further updates in the repository that > are not reflected in this release. > One of those problems would be the fact that the commons-io version in the > version 1.4 is a vulnerable one, that has a CVE. This doesn't happen in the > main repository. > It would be very interesting that you could upload a version 1.5 of the > library in the current state, so other projects could use a more recent > version and be more protected. At the moment I am using the > io.github.openfeign.form:feign-form-spring library and I have to manually > override the commons-io version, so the problem is avoided. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FILEUPLOAD-347) CVE in commons-io versions less than 2.7
[ https://issues.apache.org/jira/browse/FILEUPLOAD-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741407#comment-17741407 ] Dennis Kieselhorst commented on FILEUPLOAD-347: --- This can be resolved, as release 1.5 has been available for a while. > CVE in commons-io versions less than 2.7 > - > > Key: FILEUPLOAD-347 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-347 > Project: Commons FileUpload > Issue Type: Task >Affects Versions: 1.4 > Environment: java 17 on macos >Reporter: Michael Brewer >Priority: Major > Attachments: Screen Shot 2022-07-17 at 10.19.06 AM.png > > > Current version of commons-fileupload depends on commons-io 2.2 which has a > medium level CVE. Looks like the github unreleased version is already using > the latest, so once this is released the CVE should go away. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FILEUPLOAD-309) Release version 2.0.0
[ https://issues.apache.org/jira/browse/FILEUPLOAD-309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated FILEUPLOAD-309: --- Assignee: Gary D. Gregory > Release version 2.0.0 > - > > Key: FILEUPLOAD-309 > URL: https://issues.apache.org/jira/browse/FILEUPLOAD-309 > Project: Commons FileUpload > Issue Type: Wish >Reporter: Thiago Henrique Hupner >Assignee: Gary D. Gregory >Priority: Major > > At Piranha, we've migrated to use the new Jakarta namespace. > One of our dependencies is the Commons File Upload, but the latest version > available is 1.4. > Looking around at the source code, I've found that the code is already > prepared for the new Jakarta namespace. > So, I want to know if there's a plan to release a new version soon. Or at > least a 2.0.0 milestone. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (COMPRESS-647) ArrayIndexOutOfBoundsException when reading Zip with data descriptor entries
Robin Schimpf created COMPRESS-647: -- Summary: ArrayIndexOutOfBoundsException when reading Zip with data descriptor entries Key: COMPRESS-647 URL: https://issues.apache.org/jira/browse/COMPRESS-647 Project: Commons Compress Issue Type: Bug Affects Versions: 1.23.0 Reporter: Robin Schimpf Attachments: crash-commons-compress-ZipArchiveInputStream-dataDescriptor The attached fuzzed zip file fails to open with the following test when the {{allowStoredEntriesWithDataDescriptor}} value is {{true}}. {code:java} @ParameterizedTest @ValueSource(booleans = {true, false}) public void zipInputStream(final boolean allowStoredEntriesWithDataDescriptor) { try (ZipArchiveInputStream zIn = new ZipArchiveInputStream(Files.newInputStream(Paths.get("crash-commons-compress-ZipArchiveInputStream-dataDescriptor")), "UTF-8", false, allowStoredEntriesWithDataDescriptor)) { ZipArchiveEntry zae = zIn.getNextZipEntry(); while (zae != null) { zae = zIn.getNextZipEntry(); } } catch (IOException e) { // Ignore expected exception } } {code} The exception is {code:java} java.lang.ArrayIndexOutOfBoundsException: arraycopy: source index -6 out of bounds for byte[512] at java.base/java.lang.System.arraycopy(Native Method) at java.base/java.io.PushbackInputStream.unread(PushbackInputStream.java:232) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.pushback(ZipArchiveInputStream.java:979) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.bufferContainsSignature(ZipArchiveInputStream.java:471) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStoredEntry(ZipArchiveInputStream.java:1282) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStored(ZipArchiveInputStream.java:1211) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:1013) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.skip(ZipArchiveInputStream.java:1343) at 
org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.closeEntry(ZipArchiveInputStream.java:562) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.getNextZipEntry(ZipArchiveInputStream.java:735) at org.apache.commons.compress.archivers.zip.ZipArchiveInputStreamTest.zipInputStream(ZipArchiveInputStreamTest.java:765) {code} I also tried to open the file with the ZipFile class and the crash does not occur there. Seems to only affect the stream implementation with that specific option. -- This message was sent by Atlassian Jira (v8.20.10#820010)
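The stack trace above shows PushbackInputStream.unread being handed a negative source offset computed from corrupt stream data. A defensive pushback helper (a hypothetical sketch, not the commons-compress code) can validate the range first and report corrupt input as a ZipException instead of letting System.arraycopy throw ArrayIndexOutOfBoundsException:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;
import java.util.zip.ZipException;

// Hypothetical guard around PushbackInputStream.unread: reject offsets and
// lengths derived from corrupt stream data before they reach arraycopy.
// (The overflow-free bounds check is simplified for this sketch.)
public class SafePushback {

    static void pushback(final PushbackInputStream in, final byte[] buf,
                         final int offset, final int length) throws IOException {
        if (offset < 0 || length < 0 || offset > buf.length - length) {
            throw new ZipException("Corrupt stream: invalid pushback range ["
                    + offset + ", " + (offset + length) + ") for buffer of "
                    + buf.length + " bytes");
        }
        in.unread(buf, offset, length);
    }

    public static void main(final String[] args) throws IOException {
        final PushbackInputStream in =
                new PushbackInputStream(new ByteArrayInputStream(new byte[0]), 512);
        final byte[] buf = new byte[512];
        try {
            // Mirrors the fuzzer-found case: source index -6 into a byte[512].
            pushback(in, buf, -6, 10);
        } catch (final ZipException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        pushback(in, buf, 0, 4); // a valid range still works
    }
}
```

ZipException extends IOException, so callers already catching IOException (as the test in the report does) would see the failure as an ordinary "bad archive" error rather than an unchecked crash.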
[GitHub] [commons-imaging] garydgregory commented on a diff in pull request #301: IMAGING-355 - Large animated GIF takes too much heap memory in getMetadata
garydgregory commented on code in PR #301: URL: https://github.com/apache/commons-imaging/pull/301#discussion_r1257506692 ## src/main/java/org/apache/commons/imaging/formats/gif/GifImageParser.java: ## @@ -407,7 +407,7 @@ public byte[] getIccProfileBytes(final ByteSource byteSource, final GifImagingPa @Override public ImageInfo getImageInfo(final ByteSource byteSource, final GifImagingParameters params) throws ImagingException, IOException { -final GifImageContents blocks = readFile(byteSource, false); +final GifImageContents blocks = readFile(byteSource, true); Review Comment: That's a good point! I wonder whether, if this data is hard-coded to be skipped, it will be unavailable for other API calls once the object is constructed. That would be bad. But if it can be parameterized, then the client call site can decide. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [commons-imaging] kinow commented on a diff in pull request #301: IMAGING-355 - Large animated GIF takes too much heap memory in getMetadata
kinow commented on code in PR #301: URL: https://github.com/apache/commons-imaging/pull/301#discussion_r1257503334 ## src/main/java/org/apache/commons/imaging/formats/gif/GifImageParser.java: ## @@ -407,7 +407,7 @@ public byte[] getIccProfileBytes(final ByteSource byteSource, final GifImagingPa @Override public ImageInfo getImageInfo(final ByteSource byteSource, final GifImagingParameters params) throws ImagingException, IOException { -final GifImageContents blocks = readFile(byteSource, false); +final GifImageContents blocks = readFile(byteSource, true); Review Comment: Maybe this `true`/`false` (`stopBeforeImageData`) should instead be retrieved from the `GifImagingParameters` object? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
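The suggestion in this thread could be sketched as below. Note that the `stopBeforeImageData` field and its accessors on `GifImagingParameters` are hypothetical; they do not exist in the class as reviewed, and the null-default behavior is an assumption for illustration.

```java
// Sketch of carrying stopBeforeImageData on the parameters object so each
// call site can choose, instead of hard-coding true/false in the parser.
// The flag and accessors here are hypothetical additions.
public class GifParamsSketch {

    static class GifImagingParameters {
        // Hypothetical flag: skip per-frame image data to save heap
        // (the IMAGING-355 concern) unless the caller opts out.
        private boolean stopBeforeImageData = true;

        boolean isStopBeforeImageData() { return stopBeforeImageData; }

        void setStopBeforeImageData(final boolean stop) { stopBeforeImageData = stop; }
    }

    // Stand-in for the decision inside GifImageParser.getImageInfo: derive
    // the readFile flag from params, defaulting when params is null.
    static boolean resolveStopBeforeImageData(final GifImagingParameters params) {
        return params == null || params.isStopBeforeImageData();
    }
}
```

This keeps the memory-saving behavior as the default while letting a caller that needs the image data (garydgregory's concern above) request a full read explicitly.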
[jira] [Commented] (POOL-411) NPE when deregistering key at end of borrow
[ https://issues.apache.org/jira/browse/POOL-411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741381#comment-17741381 ] Sebb commented on POOL-411: --- The process of upgrading the lock has an inherent window when other threads can grab the write lock, and thus change the pool. Whilst it looks like the original issue has been solved, running tests with a bigger window would help ensure that there aren't any other such issues lurking. [Unfortunately a sleep will cause the tests to run longer unless the loop counts are reduced] As to rechecking that the object is still in the pool: standard behaviour when upgrading a lock is to recheck the condition once the write lock is obtained. In this case, the condition is that 'poolMap.get(k)' does not return null, but this is not checked. == There is another possible issue: the code checks the counts for equality with zero. If counts can be decremented incorrectly, a count may go negative and perhaps never equal zero at the time of testing. It might make sense to check counts and throw an exception if they ever go negative. 
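The negative-count guard suggested in the last paragraph can be sketched as follows. This is an illustrative stand-in, not the commons-pool counter code: it replaces the equality-with-zero test with a fail-fast check, so a double decrement surfaces immediately instead of silently skipping zero forever.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a counter that fails fast if it ever goes negative, rather than
// only being compared against zero (hypothetical, for illustration).
public class CountGuard {
    private final AtomicLong count = new AtomicLong();

    void increment() {
        count.incrementAndGet();
    }

    // Returns true exactly when the count reaches zero; throws if a stray
    // decrement pushes it below zero, which the plain "== 0" test would miss.
    boolean decrementAndCheckZero() {
        final long n = count.decrementAndGet();
        if (n < 0) {
            throw new IllegalStateException("count went negative: " + n);
        }
        return n == 0;
    }
}
```

As the comment thread notes, such a check has a small but nonzero cost on very hot paths, so it may fit best as a temporary instrumentation inserted for test runs rather than production code.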
> NPE when deregistering key at end of borrow > --- > > Key: POOL-411 > URL: https://issues.apache.org/jira/browse/POOL-411 > Project: Commons Pool > Issue Type: Task >Affects Versions: 2.11.1 >Reporter: Richard Eckart de Castilho >Priority: Major > Fix For: 2.12.0 > > > There is a potential for an NPE happening in the finally block of > borrowObject: > {noformat} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.commons.pool2.impl.GenericKeyedObjectPool$ObjectDeque.getNumInterested()" > because "objectDeque" is null > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.deregister(GenericKeyedObjectPool.java:821) > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:507) > at > org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:350) > > {noformat} > From reading the code, it seems this could happen e.g. if a pool is > concurrently cleared while a borrow is in progress. > Not sure what a proper solution here would be. Maybe deregister should > silently do nothing if poolMap.get(k) returns null? -- This message was sent by Atlassian Jira (v8.20.10#820010)