[jira] [Closed] (COMPRESS-663) How Are Vulnerabilities Fixed?

2024-02-20 Thread Michael Osipov (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Osipov closed COMPRESS-663.
---
Resolution: Invalid

This isn't a forum. Direct it to the dev mailing list.

> How Are Vulnerabilities Fixed?
> --
>
> Key: COMPRESS-663
> URL: https://issues.apache.org/jira/browse/COMPRESS-663
> Project: Commons Compress
>  Issue Type: Wish
>Reporter: Radar wen
>Priority: Major
>
> CVE-2024-25710 and CVE-2024-26308
> Can you tell me how these two vulnerabilities were fixed? Which commit 
> corresponds to each fix, and which code is affected?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] chore(javax2jakarta): update transaction api to jakarta [commons-dbcp]

2024-02-20 Thread via GitHub


garydgregory commented on PR #351:
URL: https://github.com/apache/commons-dbcp/pull/351#issuecomment-1955780903

   -1 this will break binary compatibility. 
   
   This type of work is planned but will be done very differently: we will 
split the project into a multi-module build to allow support for both javax 
and Jakarta. But we might complete the pool2 to 3 work first.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (COMPRESS-663) How Are Vulnerabilities Fixed?

2024-02-20 Thread Radar wen (Jira)
Radar wen created COMPRESS-663:
--

 Summary: How Are Vulnerabilities Fixed?
 Key: COMPRESS-663
 URL: https://issues.apache.org/jira/browse/COMPRESS-663
 Project: Commons Compress
  Issue Type: Wish
Reporter: Radar wen


CVE-2024-25710 and CVE-2024-26308

Can you tell me how these two vulnerabilities were fixed? Which commit 
corresponds to each fix, and which code is affected?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] chore(javax2jakarta): update transaction api to jakarta [commons-dbcp]

2024-02-20 Thread via GitHub


educhastenier commented on code in PR #351:
URL: https://github.com/apache/commons-dbcp/pull/351#discussion_r1496642412


##
pom.xml:
##
@@ -319,7 +315,7 @@
 12310469
 2.12.0
 
-
javax.transaction;version="1.1.0",javax.transaction.xa;version="1.1.0";partial=true;mandatory:=partial,*
+
jakarta.transaction;version="2.0.1",javax.transaction.xa;version="1.1.0";partial=true;mandatory:=partial,*

Review Comment:
   I am not sure how this osgi stuff works, so maybe there is something more to 
change here...



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] chore(javax2jakarta): update transaction api to jakarta [commons-dbcp]

2024-02-20 Thread via GitHub


educhastenier commented on code in PR #351:
URL: https://github.com/apache/commons-dbcp/pull/351#discussion_r1496641615


##
pom.xml:
##
@@ -239,19 +239,15 @@
 
 
 
-  org.apache.geronimo.modules
+  org.apache.geronimo.components
   geronimo-transaction
-  2.2.1
+  4.0.0
   test
   
 
   org.junit.jupiter
   junit-jupiter
 
-
-  commons-logging
-  commons-logging
-

Review Comment:
   This module is not a dependency of geronimo-transaction anymore



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] chore(javax2jakarta): update transaction api to jakarta [commons-dbcp]

2024-02-20 Thread via GitHub


educhastenier opened a new pull request, #351:
URL: https://github.com/apache/commons-dbcp/pull/351

   Update jakarta.transaction:jakarta.transaction-api from 1.3.1 (packages 
javax.) to 2.0.1 (packages jakarta.)
   This also requires updating:
   * narayana to a version using jakarta.transaction-api (for tests only)
   * geronimo to a version using jakarta.transaction-api (for tests only)
   
   These upgrades also require raising the minimum Java version to 11
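   
   For reference, the coordinate bump described above boils down to a dependency entry like the following (the surrounding pom structure is illustrative; only the coordinates and versions come from this PR description):
   
```xml
<dependency>
  <groupId>jakarta.transaction</groupId>
  <artifactId>jakarta.transaction-api</artifactId>
  <!-- 2.0.1 ships the jakarta.* packages; 1.3.1 shipped javax.* -->
  <version>2.0.1</version>
</dependency>
```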


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (DBCP-595) Connection pool can be exhausted when connections are killed on the DB side

2024-02-20 Thread Phil Steitz (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Steitz closed DBCP-595.

Resolution: Not A Problem

Don't mean to be blunt / dismissive with the close here.  But this does not 
look like a DBCP bug, and any changes to behavior would be better discussed on 
the mailing list.  See [~ggregory]'s nice summary there on why we do not 
force-close connections on fatal SQL exceptions - basically, drivers don't 
consistently return codes that a generic library can count on.

I would definitely recommend reporting, or trying to find and patch, the code 
that checks out connections from the pool and does not close them on fatal 
exception paths.  That would impact a lot of other users.  One way to find this 
is to turn on abandoned connection logging.
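
As a minimal sketch of that last suggestion (assuming a plain DBCP2 BasicDataSource; the URL and timeout values are illustrative, not taken from this issue):
{code:java}
import java.time.Duration;
import org.apache.commons.dbcp2.BasicDataSource;

public class AbandonedLoggingExample {
    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/test"); // placeholder URL
        ds.setMaxTotal(8);
        // Log the stack trace of the caller that borrowed a connection and never returned it.
        ds.setLogAbandoned(true);
        // Reclaim connections held longer than the timeout, on borrow and during evictor runs.
        ds.setRemoveAbandonedOnBorrow(true);
        ds.setRemoveAbandonedOnMaintenance(true);
        ds.setRemoveAbandonedTimeout(Duration.ofSeconds(60));
    }
}
{code}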

> Connection pool can be exhausted when connections are killed on the DB side
> ---
>
> Key: DBCP-595
> URL: https://issues.apache.org/jira/browse/DBCP-595
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.11.0
>Reporter: Dénes Bodó
>Priority: Critical
>  Labels: deadlock, robustness
> Attachments: ReproOneThread-jstack-minIdle.txt, 
> ReproOneThread-jstack_when_create.txt, ReproOneThread-jstack_when_stuck.txt, 
> ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png
>
>
> Apache Oozie 5.2.1 uses OpenJPA 2.4.2 and commons-dbcp 1.4 and commons-pool 
> 1.5.4. These are ancient versions, I know.
> h1. Description
> The issue is that when network issues or "maintenance work" on the DB side 
> (especially PostgreSQL) cause the DB connection to be closed, the result is 
> an exhausted pool on the client side. Many threads are waiting at this point:
> {noformat}
> "pool-2-thread-4" #20 prio=5 os_prio=31 tid=0x7faf7903b800 nid=0x8603 
> waiting on condition [0x00030f3e7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00066aca8e70> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:1324)
>  {noformat}
> According to my observation this is because the JDBC driver does not get 
> closed on the client side, nor the abstract DBCP connection 
> _org.apache.commons.dbcp2.PoolableConnection_ .
> h1. Repro
> (Un)Fortunately I can reproduce the issue using the latest and greatest 
> commons-dbcp 2.11.0 and commons-pool 2.12.0 along with OpenJPA 3.2.2.
> I've just created a Java application to reproduce the issue: 
> [https://github.com/dionusos/pool_exhausted_repro] . See README.md for 
> detailed repro steps.
> h1. Kind of solution?
> To be honest I am not really familiar with DBCP but with this change I 
> managed to make my application more robust:
> {code:java}
> diff --git a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java 
> b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> index 440cb756..678550bf 100644
> --- a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> +++ b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> @@ -214,6 +214,10 @@ public class PoolableConnection extends 
> DelegatingConnection impleme
>      @Override
>      protected void handleException(final SQLException e) throws SQLException 
> {
>          fatalSqlExceptionThrown |= isFatalException(e);
> +        if (fatalSqlExceptionThrown && getDelegate() != null) {
> +            getDelegate().close();
> +            this.close();
> +        }
>          super.handleException(e);
>      }{code}
> What do you think about this approach?
> Is it a complete dead end, or can we start working in this direction?
> Do you agree that the reported and reproduced issue is a real one and not 
> just some kind of misconfiguration?
>  
> I am lost at this point and I need to move forward so I am asking for 
> guidance here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Implemented MODE Z for FtpClient [commons-net]

2024-02-20 Thread via GitHub


Flofler commented on PR #220:
URL: https://github.com/apache/commons-net/pull/220#issuecomment-1955159269

   I'm not sure whether a wrapping socket is your preferred way to implement 
this. Alternatively, I could change any code that uses the Socket's 
getInputStream() and getOutputStream(). But then any new code that uses the 
Socket must know that it has to wrap an InflaterInputStream or 
DeflaterOutputStream around those streams.
   
   Another solution would be to hide the Socket inside a wrapper class 
completely.
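   
   For illustration, a minimal sketch of the wrapper-socket idea under discussion (class name and details are hypothetical, not the PR's actual code; it assumes plain java.util.zip streams are acceptable for MODE Z):
   
```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Wraps a connected data-channel socket so callers of getInputStream() and
// getOutputStream() transparently get decompression/compression and need no
// knowledge of MODE Z.
class DeflatingSocketWrapper extends Socket {
    private final Socket delegate;

    DeflatingSocketWrapper(Socket delegate) {
        this.delegate = delegate;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return new InflaterInputStream(delegate.getInputStream());
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        // The caller must close/finish the stream to flush the deflater.
        return new DeflaterOutputStream(delegate.getOutputStream());
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}
```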


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (COMPRESS-662) maven-wrapper.properties outdated

2024-02-20 Thread Tilman Hausherr (Jira)
Tilman Hausherr created COMPRESS-662:


 Summary: maven-wrapper.properties outdated
 Key: COMPRESS-662
 URL: https://issues.apache.org/jira/browse/COMPRESS-662
 Project: Commons Compress
  Issue Type: Bug
  Components: Build
Affects Versions: 1.26.0
Reporter: Tilman Hausherr


The file maven-wrapper.properties contains a link to 
https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.5.0/apache-maven-3.5.0-bin.zip

However, in the pom.xml the Maven Enforcer Plugin requires 3.6.3 or higher 
(through the Apache parent POM).

Solution: replace the URL with another, e.g.
https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.9.6/apache-maven-3.9.6-bin.zip
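
Concretely, the fix amounts to pointing the distributionUrl property at a newer distribution (file path per the Maven Wrapper convention; the value is only an example, any version accepted by the enforcer rule works):
{noformat}
# .mvn/wrapper/maven-wrapper.properties
distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.9.6/apache-maven-3.9.6-bin.zip
{noformat}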




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818863#comment-17818863
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 7:39 PM:
---

{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will (properly) fail with 1.25.0 because mark/reset is not 
supported and markSupported() is false:
{noformat}
ar.markSupported(): false
Exception in thread "main" java.io.IOException: mark/reset not supported
at java.base/java.io.InputStream.reset(InputStream.java:655)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:62)
{noformat}

With 1.26.0 it produces the following, while markSupported() is true:
{noformat}
ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)
{noformat}


was (Author: tilman):
{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/release is not supported and 
markSupported() is false:
{noformat}
ar.markSupported(): false
Exception in thread "main" java.io.IOException: mark/reset not supported
at java.base/java.io.InputStream.reset(InputStream.java:655)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:62)
{noformat}

With 1.26.0 it will bring this, while markSupported() is true:
{noformat}
ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)
{noformat}

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818863#comment-17818863
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 7:38 PM:
---

{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/reset is not supported and 
markSupported() is false:
{noformat}
ar.markSupported(): false
Exception in thread "main" java.io.IOException: mark/reset not supported
at java.base/java.io.InputStream.reset(InputStream.java:655)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:62)
{noformat}

With 1.26.0 it produces the following, while markSupported() is true:
{noformat}
ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)
{noformat}


was (Author: tilman):
{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/release is not supported and 
markSupported() is false:
{code:java}
ar.markSupported(): false
Exception in thread "main" java.io.IOException: mark/reset not supported
at java.base/java.io.InputStream.reset(InputStream.java:655)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:62)
{code}


With 1.26.0 it will bring this, while markSupported() is true:

ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)


> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-632) Improve fuzzing coverage in oss-fuzz

2024-02-20 Thread Yakov Shafranovich (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818944#comment-17818944
 ] 

Yakov Shafranovich commented on COMPRESS-632:
-

The oss-fuzz change has been merged in. There are a few more archivers and 
compressors that I didn't include coverage for because they are just thin 
proxies to third-party libraries or JDK code.

> Improve fuzzing coverage in oss-fuzz
> 
>
> Key: COMPRESS-632
> URL: https://issues.apache.org/jira/browse/COMPRESS-632
> Project: Commons Compress
>  Issue Type: Improvement
>Reporter: Robin Schimpf
>Priority: Major
>
> Fuzzing the library brought great stability improvements in the last couple 
> releases. But the current integration in oss-fuzz has only a limited scope. 
> Fuzzing is only done on the following classes:
>  * SevenZFile
>  * TarFile
>  * ZipFile
> Additionally those fuzzing tests only open the file and are not reading the 
> file content.
> IMHO the tests should be expanded to cover the following:
>  * Fuzz all supported formats (stream based and file based)
>  * Read the whole fuzzed file
> I don't know if it makes sense to also fuzz archive creation. The only thing 
> that might be worth fuzzing there would be the ArchiveEntries, since fuzzing 
> the file content seems useless.
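
As a hedged illustration of the "read the whole fuzzed file" idea in the quoted issue (a sketch only, not the actual oss-fuzz harness; shown for one stream-based format, using the Jazzer-style entry point OSS-Fuzz uses for Java targets):
{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class TarReadFuzzer {
    public static void fuzzerTestOneInput(byte[] data) {
        byte[] buffer = new byte[4096];
        try (TarArchiveInputStream in =
                new TarArchiveInputStream(new ByteArrayInputStream(data))) {
            // Walk every entry and drain its content so parsing code paths run.
            while (in.getNextTarEntry() != null) {
                while (in.read(buffer) != -1) {
                    // discard
                }
            }
        } catch (IOException expected) {
            // Malformed inputs are expected to throw; only crashes matter.
        }
    }
}
{code}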



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] COMPRESS-660: Make commons.codec imports optional [commons-compress]

2024-02-20 Thread via GitHub


beatbrot commented on PR #482:
URL: https://github.com/apache/commons-compress/pull/482#issuecomment-1954703951

   @garydgregory Alright. `mvn` passes on my machine :) Maybe you could 
re-trigger the workflows?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818898#comment-17818898
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 4:35 PM:
---

Might be this one:
https://github.com/apache/commons-compress/commit/92d382e3cd6f1199340121ee8ad3bdf95f2154d0
FilterInputStream delegates markSupported(), but InputStream returns false. If 
I'm right, then the solution would be for ArchiveInputStream to override 
markSupported() and return false, instead of not overriding it at all.
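
For illustration, the suggested direction amounts to an explicit override (a sketch only, not the actual Commons Compress change):
{code:java}
// In ArchiveInputStream: stop delegating to the wrapped stream so callers
// see the pre-1.26.0 behavior again.
@Override
public boolean markSupported() {
    return false;
}
{code}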


was (Author: tilman):
Might be this one:
https://github.com/apache/commons-compress/commit/92d382e3cd6f1199340121ee8ad3bdf95f2154d0
FilterInputStream delegates markSupported(), while InputStream returns false. 
If I'm right then the solution would be to return false.

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818898#comment-17818898
 ] 

Tilman Hausherr commented on COMPRESS-661:
--

Might be this one:
https://github.com/apache/commons-compress/commit/92d382e3cd6f1199340121ee8ad3bdf95f2154d0
FilterInputStream delegates markSupported(), while InputStream returns false. 
If I'm right then the solution would be to return false.

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] COMPRESS-660: Make commons.codec imports optional [commons-compress]

2024-02-20 Thread via GitHub


garydgregory commented on PR #482:
URL: https://github.com/apache/commons-compress/pull/482#issuecomment-1954517621

   Hi @beatbrot 
   Thank you for your PR.
   You'll want to run `mvn` by itself to run all of our build checks in order 
to catch the current build failures.
   TY!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Update maven wrapper to 3.9.6 [commons-compress]

2024-02-20 Thread via GitHub


garydgregory commented on PR #483:
URL: https://github.com/apache/commons-compress/pull/483#issuecomment-1954514876

   Hi @beatbrot 
   Let's see if anyone else chimes in who is in love with these scripts... 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tim Allison (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818875#comment-17818875
 ] 

Tim Allison commented on COMPRESS-661:
--

This explains the 7z failures (and others?) in Tika. When we wrap an 
InputStream in a TikaInputStream with get(InputStream...), we test whether 
{{markSupported()}} returns true. If it isn't supported, we wrap the stream in 
a BufferedInputStream. I'm seeing the same change in behavior with 
{{markSupported()}} in 1.25.0 vs 1.26.0.
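
Roughly (a paraphrase of the wrapping logic described above, not Tika's actual source):
{code:java}
// Only add buffering when the stream cannot mark/reset itself; a stream
// claiming markSupported() == true is trusted as-is.
InputStream stream = in.markSupported() ? in : new BufferedInputStream(in);
{code}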

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Fix concurrent map access exception [commons-bcel]

2024-02-20 Thread via GitHub


garydgregory merged PR #275:
URL: https://github.com/apache/commons-bcel/pull/275


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818863#comment-17818863
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 3:30 PM:
---

{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/reset is not supported and 
markSupported() is false:
{code:java}
ar.markSupported(): false
Exception in thread "main" java.io.IOException: mark/reset not supported
at java.base/java.io.InputStream.reset(InputStream.java:655)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:62)
{code}


With 1.26.0 it produces the following, while markSupported() is true:

ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)



was (Author: tilman):
{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/release is not supported. With 
1.26.0 it will bring this, while markSupported is true:

ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)


> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (JXPATH-203) setValue failing, no error reported

2024-02-20 Thread Glen McCormick (Jira)
Glen McCormick created JXPATH-203:
-

 Summary: setValue failing, no error reported
 Key: JXPATH-203
 URL: https://issues.apache.org/jira/browse/JXPATH-203
 Project: Commons JXPath
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Glen McCormick
 Attachments: Main.java

Use case: updating a value based on matching a sibling

The setValue against the context just doesn't work; it looks the same as 
JXPATH-47, but that's been closed for years.

The problem can be replicated with [^Main.java]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818863#comment-17818863
 ] 

Tilman Hausherr commented on COMPRESS-661:
--

{code:java}
ArArchiveInputStream ar = new ArArchiveInputStream(new BufferedInputStream(new 
FileInputStream("../testARofText.ar")));
System.out.println("ar.markSupported(): " + ar.markSupported());

ArArchiveEntry aentry;
while ((aentry = ar.getNextEntry()) != null)
{
ar.mark(10);
ar.read(new byte[10]);
ar.reset();
System.out.println("AR: " + new String(ar.readAllBytes()));
}
{code}
This code will fail with 1.25.0 because mark/reset is not supported. With 
1.26.0 it produces the following, while markSupported() is true:

ar.markSupported(): true
AR: Test d'indexation de Txt
http://www.a
Exception in thread "main" java.io.IOException: Truncated ar archive
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextArEntry(ArArchiveInputStream.java:281)
at 
org.apache.commons.compress.archivers.ar.ArArchiveInputStream.getNextEntry(ArArchiveInputStream.java:351)
at 
com.mycompany.maventikaproject.TilmanSevenTest.main(TilmanSevenTest.java:58)


> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tilman Hausherr updated COMPRESS-661:
-
Attachment: testARofText.ar

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
> Attachments: testARofText.ar
>
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Update maven wrapper to 3.9.6 [commons-compress]

2024-02-20 Thread via GitHub


beatbrot commented on PR #483:
URL: https://github.com/apache/commons-compress/pull/483#issuecomment-1954445983

   I mean...personally, I like the wrapper scripts. But if you want, I can 
change this PR to simply remove the wrapper :) Should I do that?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (VALIDATOR-492) ValidatorUtils.copyFastHashMap is broken

2024-02-20 Thread Tobias Wildgruber (Jira)


[ 
https://issues.apache.org/jira/browse/VALIDATOR-492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818848#comment-17818848
 ] 

Tobias Wildgruber commented on VALIDATOR-492:
-

[https://github.com/apache/commons-validator/commit/273b687f81a1e48f2dc1bd1850f223ed21a8ebb5#diff-89a6acf0c6feec72707708d50292e9d15f3efafd6de0c52caa1a8ff6d5a65974R144]

This is where the bug was introduced; FastHashMap seems to be incompatible with 
forEach().
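
As a hedged sketch of a workaround in that direction (not the actual Validator fix; it assumes the commons-collections FastHashMap used by Validator 1.x, and that its entrySet() reflects the real contents even though the inherited forEach() does not):
{code:java}
import java.util.Map;
import org.apache.commons.collections.FastHashMap;

public final class FastHashMapCopy {
    // Copy without relying on Map.forEach(), which appears to miss the entries.
    public static FastHashMap copy(final FastHashMap original) {
        final FastHashMap result = new FastHashMap();
        for (final Object o : original.entrySet()) {
            final Map.Entry<?, ?> entry = (Map.Entry<?, ?>) o;
            result.put(entry.getKey(), entry.getValue());
        }
        result.setFast(original.getFast());
        return result;
    }
}
{code}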

> ValidatorUtils.copyFastHashMap is broken
> 
>
> Key: VALIDATOR-492
> URL: https://issues.apache.org/jira/browse/VALIDATOR-492
> Project: Commons Validator
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Tobias Wildgruber
>Priority: Major
>
> ValidatorUtils.copyFastHashMap is broken, which in turn causes Field#clone() 
> to lose hVars and hMsgs.
> This is e.g. used in ValidatorAction#handleIndexedField(), causing validation 
> to misbehave when using indexedListProperty (which is where we found this).
> Test case that fails in 1.8.0 but works in 1.7:
> {{public void testCopyFastHashMap() {}}
> {{  final FastHashMap original = new FastHashMap();}}
> {{  original.put("key1", "value1");}}
> {{  original.put("key2", "value2");}}
> {{  original.put("key3", "value3");}}
> {{  original.setFast(true);}}
> {{  final FastHashMap copy = ValidatorUtils.copyFastHashMap(original);}}
> {{  assertEquals(original, copy);}}
> {{}}}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (VALIDATOR-492) ValidatorUtils.copyFastHashMap is broken

2024-02-20 Thread Tobias Wildgruber (Jira)
Tobias Wildgruber created VALIDATOR-492:
---

 Summary: ValidatorUtils.copyFastHashMap is broken
 Key: VALIDATOR-492
 URL: https://issues.apache.org/jira/browse/VALIDATOR-492
 Project: Commons Validator
  Issue Type: Bug
Affects Versions: 1.8.0
Reporter: Tobias Wildgruber


ValidatorUtils.copyFastHashMap is broken, which in turn causes Field#clone() to 
lose hVars and hMsgs.

This is e.g. used in ValidatorAction#handleIndexedField(), causing validation 
to misbehave when using indexedListProperty (which is where we found this).
Test case that fails in 1.8.0 but works in 1.7:

{code:java}
public void testCopyFastHashMap() {
    final FastHashMap original = new FastHashMap();
    original.put("key1", "value1");
    original.put("key2", "value2");
    original.put("key3", "value3");
    original.setFast(true);
    final FastHashMap copy = ValidatorUtils.copyFastHashMap(original);
    assertEquals(original, copy);
}
{code}

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Update maven wrapper to 3.9.6 [commons-compress]

2024-02-20 Thread via GitHub


garydgregory commented on PR #483:
URL: https://github.com/apache/commons-compress/pull/483#issuecomment-1954347874

   We have jar files in the repository? Gross. I'd rather get rid of this since 
it will automatically go out of date with the next release, and the next one. I 
never use it. The GitHub CI doesn't either.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (DBCP-595) Connection pool can be exhausted when connections are killed on the DB side

2024-02-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/DBCP-595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818815#comment-17818815
 ] 

Dénes Bodó commented on DBCP-595:
-

[~psteitz]

I executed the ReproOneThread class with these settings:
{code:java}
connectionProperties.append("MaxActive=").append(1).append(",");
connectionProperties.append("MaxTotal=").append(2).append(",");
//connectionProperties.append("MaxIdle=").append(2).append(",");

connectionProperties.append("fastFailValidation=").append("false").append(",");
connectionProperties.append("TestOnBorrow=").append("true").append(",");
connectionProperties.append("TestOnReturn=").append("true").append(",");

connectionProperties.append("TestWhileIdle=").append("true").append(",");
//connectionProperties.append("ValidationQuery=").append("SELECT 
1").append(",");

connectionProperties.append("timeBetweenEvictionRunsMillis=").append(10_000).append(",");

connectionProperties.append("numTestsPerEvictionRun=").append(10).append(",");
{code}
The program got stuck when 
org.apache.commons.pool2.impl.GenericObjectPool#create failed to create a new 
connection because newCreateCount > localMaxTotal was true. See this 
screenshot:
!ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png|width=576,height=386!
I took a jstack at the time of the screenshot, and another jstack after I let 
the debugger continue running, but the program was still stuck:

[^ReproOneThread-jstack_when_create.txt] and 
[^ReproOneThread-jstack_when_stuck.txt].

Setting fastFailValidation to true had no effect; the program still got 
stuck.

When I set MinIdle to 1, there was a thread still running after the program got 
stuck; it tried to create new connections, but the variables showed the same 
values as in the screenshot above: [^ReproOneThread-jstack-minIdle.txt] 

*Based on this I confirm the program got stuck when numActive == maxActive.*

 

Regarding validation and connection closure in case of exception:
I played around with my repro code (ReproDBCP):
 * Closed the connection obtained from DataSource::getConnection() when it was 
not null
 ** when exception occurred
 ** in a finally block
 * maxActive=1, maxTotal=2, validation turned off completely
 * 4 threads

There was no sign of any deadlock during testing.

 

This confirms your theory that, when the issue happens in Oozie, the "client" 
does not close the connection after it notices the exception indicating that 
the connection was closed.
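
For what it's worth, the client-side pattern being discussed looks roughly like this (a hypothetical sketch, not Oozie/OpenJPA code): with try-with-resources the connection is returned to the pool on every path, including fatal SQLException paths.
{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class ClientCloseExample {
    static void runQuery(final DataSource dataSource) {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement()) {
            st.execute("SELECT 1"); // placeholder statement
        } catch (SQLException e) {
            // The connection has already been closed/returned at this point.
            throw new RuntimeException(e);
        }
    }
}
{code}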

 

My questions:

1. As OpenJPA is the client from Oozie's perspective, does that mean I have to 
check the OpenJPA code to see whether it closes/releases the connection when it 
catches an exception from 
_org.apache.commons.dbcp2.PoolableConnection#handleException_ ?
Is it the right approach for the client to drop/close the connection? Shouldn't 
the client be notified about *fatalSqlExceptionThrown* instead of a plain 
SQLException?
{code:java}
@Override
protected void handleException(final SQLException e) throws SQLException {
fatalSqlExceptionThrown |= isFatalException(e);
super.handleException(e);
} {code}
 

2. If DBCP is aware that this is a fatalSqlException, shouldn't it handle the 
situation by closing the connection automatically? - I know, this is what I 
suggested in my patch. Just curious.

 

Thank you.

> Connection pool can be exhausted when connections are killed on the DB side
> ---
>
> Key: DBCP-595
> URL: https://issues.apache.org/jira/browse/DBCP-595
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.11.0
>Reporter: Dénes Bodó
>Priority: Critical
>  Labels: deadlock, robustness
> Attachments: ReproOneThread-jstack-minIdle.txt, 
> ReproOneThread-jstack_when_create.txt, ReproOneThread-jstack_when_stuck.txt, 
> ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png
>
>
> Apache Oozie 5.2.1 uses OpenJPA 2.4.2 and commons-dbcp 1.4 and commons-pool 
> 1.5.4. These are ancient versions, I know.
> h1. Description
> The issue is that when network issues or "maintenance work" on the DB side 
> (especially PostgreSQL) cause the DB connection to be closed, the result is 
> an exhausted pool on the client side. Many threads are 
> waiting at this point:
> {noformat}
> "pool-2-thread-4" #20 prio=5 os_prio=31 tid=0x7faf7903b800 nid=0x8603 
> waiting on condition [0x00030f3e7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00066aca8e70> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> 

[jira] [Updated] (DBCP-595) Connection pool can be exhausted when connections are killed on the DB side

2024-02-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/DBCP-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dénes Bodó updated DBCP-595:

Attachment: ReproOneThread-jstack-minIdle.txt

> Connection pool can be exhausted when connections are killed on the DB side
> ---
>
> Key: DBCP-595
> URL: https://issues.apache.org/jira/browse/DBCP-595
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.11.0
>Reporter: Dénes Bodó
>Priority: Critical
>  Labels: deadlock, robustness
> Attachments: ReproOneThread-jstack-minIdle.txt, 
> ReproOneThread-jstack_when_create.txt, ReproOneThread-jstack_when_stuck.txt, 
> ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png
>
>
> Apache Oozie 5.2.1 uses OpenJPA 2.4.2 and commons-dbcp 1.4 and commons-pool 
> 1.5.4. These are ancient versions, I know.
> h1. Description
> The issue is that when network issues or "maintenance work" on the DB side 
> (especially PostgreSQL) cause the DB connection to be closed, the result is 
> an exhausted pool on the client side. Many threads are 
> waiting at this point:
> {noformat}
> "pool-2-thread-4" #20 prio=5 os_prio=31 tid=0x7faf7903b800 nid=0x8603 
> waiting on condition [0x00030f3e7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00066aca8e70> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:1324)
>  {noformat}
> According to my observation this is because the JDBC driver does not get 
> closed on the client side, nor the abstract DBCP connection 
> _org.apache.commons.dbcp2.PoolableConnection_ .
> h1. Repro
> (Un)Fortunately I can reproduce the issue using the latest and greatest 
> commons-dbcp 2.11.0 and commons-pool 2.12.0 along with OpenJPA 3.2.2.
> I've just created a Java application to reproduce the issue: 
> [https://github.com/dionusos/pool_exhausted_repro] . See README.md for 
> detailed repro steps.
> h1. Kind of solution?
> To be honest I am not really familiar with DBCP but with this change I 
> managed to make my application more robust:
> {code:java}
> diff --git a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java 
> b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> index 440cb756..678550bf 100644
> --- a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> +++ b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> @@ -214,6 +214,10 @@ public class PoolableConnection extends 
> DelegatingConnection impleme
>      @Override
>      protected void handleException(final SQLException e) throws SQLException 
> {
>          fatalSqlExceptionThrown |= isFatalException(e);
> +        if (fatalSqlExceptionThrown && getDelegate() != null) {
> +            getDelegate().close();
> +            this.close();
> +        }
>          super.handleException(e);
>      }{code}
> What do you think about this approach?
> Is it a complete dead end, or can we start working in this direction?
> Do you agree that the reported and reproduced issue is a real one and not 
> just some kind of misconfiguration?
>  
> I am lost at this point and I need to move forward so I am asking for 
> guidance here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DBCP-595) Connection pool can be exhausted when connections are killed on the DB side

2024-02-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/DBCP-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dénes Bodó updated DBCP-595:

Attachment: ReproOneThread-jstack_when_create.txt
ReproOneThread-jstack_when_stuck.txt

> Connection pool can be exhausted when connections are killed on the DB side
> ---
>
> Key: DBCP-595
> URL: https://issues.apache.org/jira/browse/DBCP-595
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.11.0
>Reporter: Dénes Bodó
>Priority: Critical
>  Labels: deadlock, robustness
> Attachments: ReproOneThread-jstack_when_create.txt, 
> ReproOneThread-jstack_when_stuck.txt, 
> ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png
>
>
> Apache Oozie 5.2.1 uses OpenJPA 2.4.2 and commons-dbcp 1.4 and commons-pool 
> 1.5.4. These are ancient versions, I know.
> h1. Description
> The issue is that when network issues or "maintenance work" on the DB side 
> (especially PostgreSQL) cause the DB connection to be closed, the result is 
> an exhausted pool on the client side. Many threads are 
> waiting at this point:
> {noformat}
> "pool-2-thread-4" #20 prio=5 os_prio=31 tid=0x7faf7903b800 nid=0x8603 
> waiting on condition [0x00030f3e7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00066aca8e70> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:1324)
>  {noformat}
> According to my observation this is because the JDBC driver does not get 
> closed on the client side, nor the abstract DBCP connection 
> _org.apache.commons.dbcp2.PoolableConnection_ .
> h1. Repro
> (Un)Fortunately I can reproduce the issue using the latest and greatest 
> commons-dbcp 2.11.0 and commons-pool 2.12.0 along with OpenJPA 3.2.2.
> I've just created a Java application to reproduce the issue: 
> [https://github.com/dionusos/pool_exhausted_repro] . See README.md for 
> detailed repro steps.
> h1. Kind of solution?
> To be honest I am not really familiar with DBCP but with this change I 
> managed to make my application more robust:
> {code:java}
> diff --git a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java 
> b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> index 440cb756..678550bf 100644
> --- a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> +++ b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> @@ -214,6 +214,10 @@ public class PoolableConnection extends 
> DelegatingConnection impleme
>      @Override
>      protected void handleException(final SQLException e) throws SQLException 
> {
>          fatalSqlExceptionThrown |= isFatalException(e);
> +        if (fatalSqlExceptionThrown && getDelegate() != null) {
> +            getDelegate().close();
> +            this.close();
> +        }
>          super.handleException(e);
>      }{code}
> What do you think about this approach?
> Is it a complete dead end, or can we start working in this direction?
> Do you agree that the reported and reproduced issue is a real one and not 
> just some kind of misconfiguration?
>  
> I am lost at this point and I need to move forward so I am asking for 
> guidance here.
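
A possible mitigation on the configuration side, independent of the patch sketched 
above: enable connection validation so that connections killed on the DB side are 
detected and evicted instead of being handed out. The following is a minimal 
sketch only; the URL, credentials and sizing values are placeholders, and it does 
not address the handleException behaviour discussed in this ticket.

{code:java}
import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;

// Minimal sketch: validate pooled connections so that dead ones are evicted
// instead of exhausting the pool. All values below are illustrative.
public class ValidatingPoolSketch {
    public static DataSource newDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/testdb"); // placeholder
        ds.setUsername("test");                               // placeholder
        ds.setPassword("test");                               // placeholder
        ds.setMaxTotal(8);
        ds.setMaxWaitMillis(10_000);                 // fail fast instead of blocking forever
        ds.setValidationQuery("SELECT 1");           // cheap liveness probe
        ds.setTestOnBorrow(true);                    // validate before handing a connection out
        ds.setTestWhileIdle(true);                   // validate idle connections too
        ds.setTimeBetweenEvictionRunsMillis(30_000); // run the evictor periodically
        return ds;
    }
}
{code}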



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DBCP-595) Connection pool can be exhausted when connections are killed on the DB side

2024-02-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/DBCP-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dénes Bodó updated DBCP-595:

Attachment: 
ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png

> Connection pool can be exhausted when connections are killed on the DB side
> ---
>
> Key: DBCP-595
> URL: https://issues.apache.org/jira/browse/DBCP-595
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.11.0
>Reporter: Dénes Bodó
>Priority: Critical
>  Labels: deadlock, robustness
> Attachments: ReproOneThread-jstack_when_create.txt, 
> ReproOneThread-jstack_when_stuck.txt, 
> ReproOneThread-screenshot_when_create-newCreateCount_gt_localMaxTotal.png
>
>
> Apache Oozie 5.2.1 uses OpenJPA 2.4.2 and commons-dbcp 1.4 and commons-pool 
> 1.5.4. These are ancient versions, I know.
> h1. Description
> The issue is that when the DB connection gets closed due to some network issues 
> or "maintenance work" on the DB side (especially PostgreSQL), the result is an 
> exhausted pool on the client side. Many threads are waiting at this point:
> {noformat}
> "pool-2-thread-4" #20 prio=5 os_prio=31 tid=0x7faf7903b800 nid=0x8603 
> waiting on condition [0x00030f3e7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00066aca8e70> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:1324)
>  {noformat}
> According to my observation this is because neither the JDBC driver connection 
> nor the abstract DBCP connection _org.apache.commons.dbcp2.PoolableConnection_ 
> gets closed on the client side.
> h1. Repro
> (Un)Fortunately I can reproduce the issue using the latest and greatest 
> commons-dbcp 2.11.0 and commons-pool 2.12.0 along with OpenJPA 3.2.2.
> I've just created a Java application to reproduce the issue: 
> [https://github.com/dionusos/pool_exhausted_repro] . See README.md for 
> detailed repro steps.
> h1. Kind of solution?
> To be honest I am not really familiar with DBCP but with this change I 
> managed to make my application more robust:
> {code:java}
> diff --git a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java 
> b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> index 440cb756..678550bf 100644
> --- a/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> +++ b/src/main/java/org/apache/commons/dbcp2/PoolableConnection.java
> @@ -214,6 +214,10 @@ public class PoolableConnection extends 
> DelegatingConnection impleme
>      @Override
>      protected void handleException(final SQLException e) throws SQLException 
> {
>          fatalSqlExceptionThrown |= isFatalException(e);
> +        if (fatalSqlExceptionThrown && getDelegate() != null) {
> +            getDelegate().close();
> +            this.close();
> +        }
>          super.handleException(e);
>      }{code}
> What do you think about this approach?
> Is it a complete dead end, or can we start working on it in this direction?
> Do you agree that the reported and reproduced issue is a real one and not 
> just some kind of misconfiguration?
>  
> I am lost at this point and I need to move forward so I am asking for 
> guidance here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818792#comment-17818792
 ] 

Tilman Hausherr commented on COMPRESS-661:
--

I'm wondering whether the previous version had mark/reset features that this one 
doesn't have, or didn't have them and has them now?
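
For context: "Resetting to invalid mark" is standard BufferedInputStream behaviour. 
mark(readlimit) only guarantees that reset() succeeds as long as no more than 
readlimit bytes are read afterwards; reading past that limit may invalidate the 
mark. A self-contained sketch of just that failure mode (buffer and read sizes are 
chosen arbitrarily to force the condition; this is not Tika or Compress code):

{code:java}
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

// Sketch of the failure mode only: reading past the readlimit passed to mark()
// can invalidate the mark, after which reset() throws
// "java.io.IOException: Resetting to invalid mark".
public class MarkResetSketch {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[64 * 1024];
        try (BufferedInputStream in =
                new BufferedInputStream(new ByteArrayInputStream(data), 8)) {
            in.mark(8);                   // only 8 bytes are guaranteed to be re-readable
            in.read(new byte[32 * 1024]); // read far past the readlimit
            in.reset();                   // throws: Resetting to invalid mark
        }
    }
}
{code}

That matches the observation elsewhere in this thread that increasing the argument 
passed to mark() makes the symptom go away.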

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818775#comment-17818775
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 11:56 AM:


I'm working on it

https://github.com/apache/tika/pull/1605

The bug mentioned here is the "harmless" one, it goes away by increasing the 
parameter to mark(). However there are more test failures, I'm trying to get 
around them.


was (Author: tilman):
I'm working on it

[https://github.com/apache/pdfbox/pull/180]

The bug mentioned here is the "harmless" one, it goes away by increasing the 
parameter to mark(). However there are more test failures, I'm trying to get 
around them.

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818775#comment-17818775
 ] 

Tilman Hausherr edited comment on COMPRESS-661 at 2/20/24 11:53 AM:


I'm working on it

[https://github.com/apache/pdfbox/pull/180]

The bug mentioned here is the "harmless" one, it goes away by increasing the 
parameter to mark(). However there are more test failures, I'm trying to get 
around them.


was (Author: tilman):
I'm working on it

[https://github.com/apache/pdfbox/pull/180]

The bug mentioned is the "harmless" one, it goes away by increasing the 
parameter to mark. However there are more test failures, I'm trying to get 
around them.

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Tilman Hausherr (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818775#comment-17818775
 ] 

Tilman Hausherr commented on COMPRESS-661:
--

I'm working on it

[https://github.com/apache/pdfbox/pull/180]

The bug mentioned is the "harmless" one, it goes away by increasing the 
parameter to mark. However there are more test failures, I'm trying to get 
around them.

> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CODEC-317) ColognePhonetic: Duplicate code in some cases

2024-02-20 Thread DRUser123 (Jira)


[ 
https://issues.apache.org/jira/browse/CODEC-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818725#comment-17818725
 ] 

DRUser123 edited comment on CODEC-317 at 2/20/24 9:52 AM:
--

Hi [~ggregory] , 
thank you for your reply, the Junit test is extremely simple: 

 
{noformat}
@Test
public void testColognePhonetic() {
    ColognePhonetic colognePhonetic = new ColognePhonetic();
    String name = "Müller";   // Correct case
    String name2 = "Müleler"; // Incorrect case
    String name3 = "Mülhler"; // Incorrect case
    System.out.println(name + ": " + colognePhonetic.colognePhonetic(name));
    System.out.println(name2 + ": " + colognePhonetic.colognePhonetic(name2));
    System.out.println(name3 + ": " + colognePhonetic.colognePhonetic(name3));
}
{noformat}
 

I ran the test in debug mode and put a breakpoint on the function in question 
({*}ColognePhonetic$CologneOutputBuffer.put(code) line 275{*}). 

As I see it, the solution would be to move line 275 inside the if so that the 
lastCode variable does not change unless the code is actually inserted into the 
output. 
Essentially, the function would become the following:

 
{noformat}
public void put(final char code) {
    if (code != CHAR_IGNORE && lastCode != code && (code != '0' || length == 0)) {
        data[length] = code;
        length++;
        lastCode = code;  // Here the line moved from outside to inside the if
    }
}
{noformat}
 

I hope it can help solve the issue!


was (Author: JIRAUSER304320):
Hi [~ggregory] , 
thank you for your reply, the Junit test is extremely simple: 

@Test
public void testColognePhonetic() {
        ColognePhonetic colognePhonetic = new ColognePhonetic(); 

        String name = "Müller"; // Correct case
        String name2 = "Müleler"; // Incorrect case
        String name3 = "Mülhler"; // Incorrect case

        System.out.println(name + ": " + colognePhonetic.colognePhonetic(name));
        System.out.println(name2 + ": " + 
colognePhonetic.colognePhonetic(name2));
        System.out.println(name3 + ": " + 
colognePhonetic.colognePhonetic(name3));
}
I ran the test in debug mode and put a breakpoint on the function in question 
({*}ColognePhonetic$CologneOutputBuffer.put(code) line 275{*}). 

As I see it, the solution would be to move line 275 inside the if so that the 
lastCode variable does not change unless the code is actually inserted into the 
output. 
Essentially, the function would become the following:

public void put(final char code) {
    if (code != CHAR_IGNORE && lastCode != code && (code != '0' || length == 
0)) {
    data[length] = code;
        length++;
    *lastCode = code;*  // Here the line moved from outside to inside the if
    }
}

I hope it can help solve the issue!

> ColognePhonetic: Duplicate code in some cases
> -
>
> Key: CODEC-317
> URL: https://issues.apache.org/jira/browse/CODEC-317
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.15, 1.16.1
>Reporter: DRUser123
>Priority: Major
>
> h2. ColognePhonetic: Duplicate code in some cases
> When the character "H" or an intermediate vowel (not at the beginning of the 
> string) is encountered, no code should be added to the output; however, the 
> lastCode variable still takes the value of that ignored code, and this causes 
> duplicate codes to go undetected. 
> The piece of code in question is 
> *ColognePhonetic$CologneOutputBuffer.put(code) line 275 version 1.16.1 
> (tested also with 1.15).*
> {+}Example with Müller (correctly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 'L', code = 5, lastCode = 5, output = '65' (no duplicate codes are 
> added)
> Char = 'E', code = 0, lastCode = 5, output = '65' (no intermediate zeros are 
> added)
> Char = 'R', code = 7, lastCode = 0, output = '657' 
> {+}Example with Mülhler (incorrectly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 'H', code = -, lastCode = 5, output = '65' 
> Char = 'L', {*}code = 5, lastCode = -{*}, output = '655' ({*}Fails to 
> identify duplicate code{*})
> Char = 'E', code = 0, lastCode = 5, output = '655' (No intermediate zeros are 
> added)
> Char = 'R', code = 7, lastCode = 0, output = '6557' 
> {+}Example with Müleler (incorrectly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 

[jira] [Commented] (CODEC-317) ColognePhonetic: Duplicate code in some cases

2024-02-20 Thread DRUser123 (Jira)


[ 
https://issues.apache.org/jira/browse/CODEC-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818725#comment-17818725
 ] 

DRUser123 commented on CODEC-317:
-

Hi [~ggregory] , 
thank you for your reply, the Junit test is extremely simple: 

@Test
public void testColognePhonetic() {
        ColognePhonetic colognePhonetic = new ColognePhonetic(); 

        String name = "Müller"; // Correct case
        String name2 = "Müleler"; // Incorrect case
        String name3 = "Mülhler"; // Incorrect case

        System.out.println(name + ": " + colognePhonetic.colognePhonetic(name));
        System.out.println(name2 + ": " + 
colognePhonetic.colognePhonetic(name2));
        System.out.println(name3 + ": " + 
colognePhonetic.colognePhonetic(name3));
}
I ran the test in debug mode and put a breakpoint on the function in question 
({*}ColognePhonetic$CologneOutputBuffer.put(code) line 275{*}). 

As I see it, the solution would be to move line 275 inside the if so that the 
lastCode variable does not change unless the code is actually inserted into the 
output. 
Essentially, the function would become the following:

public void put(final char code) {
    if (code != CHAR_IGNORE && lastCode != code && (code != '0' || length == 
0)) {
    data[length] = code;
        length++;
    *lastCode = code;*  // Here the line moved from outside to inside the if
    }
}

I hope it can help solve the issue!
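
A regression test along these lines could pin down the expectation described in 
this report. It is only a sketch and it assumes the reporter's reading is correct, 
i.e. that an ignored 'H' (or an inner vowel) must not defeat duplicate-code 
suppression, so "Mülhler" should encode the same as "Müller":

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.commons.codec.language.ColognePhonetic;
import org.junit.jupiter.api.Test;

// Sketch of a regression test for the behaviour described above; it encodes
// the reporter's expectation, not the currently documented behaviour.
public class ColognePhoneticDuplicateCodeTest {

    @Test
    public void ignoredCharactersDoNotBreakDuplicateSuppression() {
        final ColognePhonetic cologne = new ColognePhonetic();
        assertEquals(cologne.colognePhonetic("Müller"),
                     cologne.colognePhonetic("Mülhler"));
    }
}
{code}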

> ColognePhonetic: Duplicate code in some cases
> -
>
> Key: CODEC-317
> URL: https://issues.apache.org/jira/browse/CODEC-317
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.15, 1.16.1
>Reporter: DRUser123
>Priority: Major
>
> h2. ColognePhonetic: Duplicate code in some cases
> When the character "H" or an intermediate vowel (not at the beginning of the 
> string) is encountered, no code should be added to the output; however, the 
> lastCode variable still takes the value of that ignored code, and this causes 
> duplicate codes to go undetected. 
> The piece of code in question is 
> *ColognePhonetic$CologneOutputBuffer.put(code) line 275 version 1.16.1 
> (tested also with 1.15).*
> {+}Example with Müller (correctly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 'L', code = 5, lastCode = 5, output = '65' (no duplicate codes are 
> added)
> Char = 'E', code = 0, lastCode = 5, output = '65' (no intermediate zeros are 
> added)
> Char = 'R', code = 7, lastCode = 0, output = '657' 
> {+}Example with Mülhler (incorrectly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 'H', code = -, lastCode = 5, output = '65' 
> Char = 'L', {*}code = 5, lastCode = -{*}, output = '655' ({*}Fails to 
> identify duplicate code{*})
> Char = 'E', code = 0, lastCode = 5, output = '655' (No intermediate zeros are 
> added)
> Char = 'R', code = 7, lastCode = 0, output = '6557' 
> {+}Example with Müleler (incorrectly coded){+}:
> Char = 'M', code = 6, lastCode = null, output = '6'
> Char = 'U', code = 0, lastCode = 6, output = '6' (no intermediate zeros are 
> added)
> Char = 'L', code = 5, lastCode = 0, output = '65'   
> Char = 'E', code = 0, lastCode = 5, output = '65' (no intermediate zeros are 
> added)
> Char = 'L', {*}code = 5, lastCode = 0{*}, output = '655' ({*}Fails to 
> identify duplicate code{*})
> Char = 'E', code = 0, lastCode = 5, output = '655' (no intermediate zeros are 
> added)
> Char = 'R', code = 7, lastCode = 0, output = '6557' 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-660) OSGi Manifest requires optional dependency commons-codec

2024-02-20 Thread Christoph Loy (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818718#comment-17818718
 ] 

Christoph Loy commented on COMPRESS-660:


[~ggregory]  Done :) See the linked PR

> OSGi Manifest requires optional dependency commons-codec
> 
>
> Key: COMPRESS-660
> URL: https://issues.apache.org/jira/browse/COMPRESS-660
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 1.26.0
>Reporter: Christoph Loy
>Priority: Major
>
> Since version 1.26, commons-compress *optionally* [requires 
> commons-codec|https://github.com/apache/commons-compress/blob/09a271dfd73e3ce01815f3f65057f92b5b7009bb/pom.xml#L134].
> In the OSGi-Manifest, the Import-Package declaration for 
> org.apache.commons.codec does not have the resolution:=optional attribute.
>  
> In our case, we have commons-compress as a Maven dependency. Since 
> commons-codec is an optional dependency, it is not downloaded automatically. 
> But when we start our application, we get an OSGi error that commons-codec 
> cannot be resolved.
>  
>  
> To fix this issue, the Import-Package declaration for org.apache.commons.codec 
> in the commons-compress bundle has to be marked with resolution:=optional in 
> META-INF/MANIFEST.MF.
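
If the manifest is generated with the maven-bundle-plugin, the fix would 
presumably be an Import-Package instruction along these lines. This is a sketch 
only; the actual commons-compress build may use different tooling, instruction 
names or version constraints.

{code:xml}
<!-- Sketch: mark the commons-codec import as optional when generating the
     OSGi manifest; plugin version and surrounding configuration are omitted. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <Import-Package>
        org.apache.commons.codec.*;resolution:=optional,
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>
{code}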



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Alexander Veit (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Veit updated COMPRESS-661:

Description: 
Apache Commons Compress 1.26.0 fixes
* https://www.cve.org/CVERecord?id=CVE-2024-25710 and
* https://www.cve.org/CVERecord?id=CVE-2024-26308.

We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
deployments in order to fix these security vulnerabilities. But unfortunately 
now Apache Tika is broken:

{noformat}
  org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
at 
app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
at app//org.apache.tika.Tika.parseToString(Tika.java:525)
at app//org.apache.tika.Tika.parseToString(Tika.java:495)
at ...
  Caused by: java.io.IOException: Resetting to invalid mark
at java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
at 
org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
... 42 more
{noformat}


  was:
Apache Commons Compress 1.26.0 fixes
* https://www.cve.org/CVERecord?id=CVE-2024-25710 and
* https://www.cve.org/CVERecord?id=CVE-2024-26308.

We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
deployments in order to fix these security vulnerabilities. But unfortunately 
now Apache Tika is broken:


{code:text}
  org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
at 
app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
at app//org.apache.tika.Tika.parseToString(Tika.java:525)
at app//org.apache.tika.Tika.parseToString(Tika.java:495)
at ...
  Caused by: java.io.IOException: Resetting to invalid mark
at java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
at 
org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
... 42 more
{code}



> commons-compress 1.26.0 breaks Apache Tika 2.9.1
> 
>
> Key: COMPRESS-661
> URL: https://issues.apache.org/jira/browse/COMPRESS-661
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.26.0
>Reporter: Alexander Veit
>Priority: Critical
>
> Apache Commons Compress 1.26.0 fixes
> * https://www.cve.org/CVERecord?id=CVE-2024-25710 and
> * https://www.cve.org/CVERecord?id=CVE-2024-26308.
> We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
> deployments in order to fix these security vulnerabilities. But unfortunately 
> now Apache Tika is broken:
> {noformat}
>   org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
> at 
> app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> at 
> app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
> at app//org.apache.tika.Tika.parseToString(Tika.java:525)
> at app//org.apache.tika.Tika.parseToString(Tika.java:495)
> at ...
>   Caused by: java.io.IOException: Resetting to invalid mark
> at 
> java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> at 
> org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
> at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
> ... 42 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (COMPRESS-661) commons-compress 1.26.0 breaks Apache Tika 2.9.1

2024-02-20 Thread Alexander Veit (Jira)
Alexander Veit created COMPRESS-661:
---

 Summary: commons-compress 1.26.0 breaks Apache Tika 2.9.1
 Key: COMPRESS-661
 URL: https://issues.apache.org/jira/browse/COMPRESS-661
 Project: Commons Compress
  Issue Type: Bug
  Components: Compressors
Affects Versions: 1.26.0
Reporter: Alexander Veit


Apache Commons Compress 1.26.0 fixes
* https://www.cve.org/CVERecord?id=CVE-2024-25710 and
* https://www.cve.org/CVERecord?id=CVE-2024-26308.

We have tried to replace Apache Commons Compress 1.25.0 with 1.26.0 in our 
deployments in order to fix these security vulnerabilities. But unfortunately 
now Apache Tika is broken:


{code:text}
  org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
org.apache.tika.parser.iwork.IWorkPackageParser@41fcb910
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:304)
at 
app//org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
at 
app//org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:203)
at app//org.apache.tika.Tika.parseToString(Tika.java:525)
at app//org.apache.tika.Tika.parseToString(Tika.java:495)
at ...
  Caused by: java.io.IOException: Resetting to invalid mark
at java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
at 
org.apache.tika.parser.iwork.IWorkPackageParser.parse(IWorkPackageParser.java:97)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:298)
... 42 more
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Fix concurrent map access exception [commons-bcel]

2024-02-20 Thread via GitHub


gnodet opened a new pull request, #275:
URL: https://github.com/apache/commons-bcel/pull/275

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@commons.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org