[jira] [Updated] (MATH-1579) Create "clustering" module

2023-07-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1579:
-
Labels: modularization pull-request-available refactoring  (was: 
modularization refactoring)

> Create "clustering" module
> --
>
> Key: MATH-1579
> URL: https://issues.apache.org/jira/browse/MATH-1579
> Project: Commons Math
>  Issue Type: Sub-task
>Reporter: Gilles Sadowski
>Priority: Major
>  Labels: modularization, pull-request-available, refactoring
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Move and enhance code from the "legacy" module that is currently in 
> package {{o.a.c.math4.legacy.ml.clustering}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MATH-1570) Redundant operation

2023-06-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1570:
-
Labels: pull-request-available  (was: )

> Redundant operation
> ---
>
> Key: MATH-1570
> URL: https://issues.apache.org/jira/browse/MATH-1570
> Project: Commons Math
>  Issue Type: Sub-task
>Reporter: Arturo Bernal
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> * Redundant String operation
>  * Redundant array creation
>  * Redundant type arguments
>  * Redundant type cast
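The listed redundancies can be illustrated with a small before/after snippet (class and values are made up for illustration; these are exactly the patterns IDE inspections flag):

```java
import java.util.ArrayList;
import java.util.List;

public class RedundancyExamples {
    public static void main(String[] args) {
        // Redundant type arguments: the diamond operator infers them.
        List<String> verbose = new ArrayList<String>(); // redundant <String>
        List<String> concise = new ArrayList<>();       // preferred

        // Redundant array creation: varargs accept the elements directly.
        String viaArray = String.format("%s-%s", new Object[] {"a", "b"});
        String direct = String.format("%s-%s", "a", "b");

        // Redundant type cast: the expression already has the target type.
        String s = "text";
        String cast = (String) s; // the cast is a no-op

        System.out.println(viaArray.equals(direct)); // true
        System.out.println(cast.equals(s));          // true
    }
}
```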





[jira] [Updated] (MATH-1397) Complex.ZERO.pow(2.0) is NaN

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1397:
-
Labels: pull-request-available  (was: )

> Complex.ZERO.pow(2.0) is NaN
> 
>
> Key: MATH-1397
> URL: https://issues.apache.org/jira/browse/MATH-1397
> Project: Commons Math
>  Issue Type: Bug
>Affects Versions: 3.6.1
> Environment: Linux, Java1.7/Java1.8
>Reporter: Mario Wenzel
>Assignee: Eric Barnhill
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0
>
>
> ```
> package complextest;
> import org.apache.commons.math3.complex.Complex;
> public class T {
>   public static void main(String[] args) {
>   System.out.println(Complex.ZERO.pow(2.0));
>   }
> }
> ```
> This is the code and the readout is `(NaN, NaN)`. This surely isn't right. 
> For one, the result should actually be zero 
> (https://www.wolframalpha.com/input/?i=(0%2B0i)%5E2) and second of all, the 
> documentation doesn't state that anything could go wrong for a Complex 
> number that has no NaNs and Infs.
> The other definition states that it doesn't work when the base is zero, but 
> it surely should. This strange corner case breaks any naive implementation 
> of Mandelbrot-set computations.
> It would be nice to not have to implement this exception myself.
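For context (not from the report): if {{pow}} is implemented via the identity z^y = exp(y·log(z)), a zero base is a genuine special case, because log(0) is infinite and the subsequent complex arithmetic can degenerate to NaN. A minimal sketch of the guard, using plain doubles and hypothetical names rather than the actual Complex internals:

```java
public class ZeroPowSketch {
    // Sketch only: assumes pow is implemented via exp(y * log(z)), which
    // needs an explicit special case at z == 0 (names here are hypothetical).
    static double[] pow(double re, double im, double exponent) {
        if (re == 0.0 && im == 0.0) {
            // 0^y: 0 for y > 0, 1 for y == 0 (by convention), NaN otherwise.
            if (exponent > 0.0) {
                return new double[] {0.0, 0.0};
            }
            return exponent == 0.0 ? new double[] {1.0, 0.0}
                                   : new double[] {Double.NaN, Double.NaN};
        }
        final double logAbs = 0.5 * Math.log(re * re + im * im); // Re(log z)
        final double arg = Math.atan2(im, re);                   // Im(log z)
        final double scale = Math.exp(exponent * logAbs);
        return new double[] {scale * Math.cos(exponent * arg),
                             scale * Math.sin(exponent * arg)};
    }

    public static void main(String[] args) {
        double[] r = pow(0.0, 0.0, 2.0);
        System.out.println(r[0] + " " + r[1]); // 0.0 0.0 with the guard in place
    }
}
```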





[jira] [Updated] (MATH-1358) Function object for "log1p(x) / x"

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1358:
-
Labels: pull-request-available  (was: )

> Function object for "log1p(x) / x"
> --
>
> Key: MATH-1358
> URL: https://issues.apache.org/jira/browse/MATH-1358
> Project: Commons Math
>  Issue Type: Task
>Reporter: Gilles Sadowski
>Assignee: Rob Tompkins
>Priority: Minor
>  Labels: pull-request-available
>
> Function object to be created in package {{o.a.c.m.analysis.function}}.
> Rationale: see MATH-1344.
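A minimal sketch of what such a function object might look like (illustrative names, not the actual Commons Math API); the interesting detail is the removable singularity at x = 0, where the limit of log1p(x)/x is 1:

```java
import java.util.function.DoubleUnaryOperator;

public class Log1pOverX {
    // Sketch of the requested function object: log1p(x) / x, with the
    // removable singularity at x = 0 filled in by its limit, 1.
    static final DoubleUnaryOperator LOG1P_OVER_X =
            x -> x == 0.0 ? 1.0 : Math.log1p(x) / x;

    public static void main(String[] args) {
        System.out.println(LOG1P_OVER_X.applyAsDouble(0.0)); // 1.0
        System.out.println(LOG1P_OVER_X.applyAsDouble(1.0)); // log(2)
    }
}
```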





[jira] [Updated] (MATH-1359) Function object for "expm1(x) / x"

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1359:
-
Labels: pull-request-available  (was: )

> Function object for "expm1(x) / x"
> --
>
> Key: MATH-1359
> URL: https://issues.apache.org/jira/browse/MATH-1359
> Project: Commons Math
>  Issue Type: Task
>Reporter: Gilles Sadowski
>Assignee: Rob Tompkins
>Priority: Minor
>  Labels: pull-request-available
>
> Function object to be created in package o.a.c.m.analysis.function.
> Rationale: see MATH-1344.
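A minimal sketch (illustrative names, not the actual Commons Math API): like the log1p case in MATH-1358, expm1(x)/x has a removable singularity at x = 0 whose limit is 1:

```java
import java.util.function.DoubleUnaryOperator;

public class Expm1OverX {
    // Sketch of the requested function object: expm1(x) / x, with the
    // removable singularity at x = 0 filled in by its limit, 1.
    static final DoubleUnaryOperator EXPM1_OVER_X =
            x -> x == 0.0 ? 1.0 : Math.expm1(x) / x;

    public static void main(String[] args) {
        System.out.println(EXPM1_OVER_X.applyAsDouble(0.0)); // 1.0
        System.out.println(EXPM1_OVER_X.applyAsDouble(1.0)); // e - 1
    }
}
```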





[jira] [Updated] (MATH-1660) FastMath/AccurateMath.scalb does not handle subnormal results properly

2023-05-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1660:
-
Labels: pull-request-available  (was: )

> FastMath/AccurateMath.scalb does not handle subnormal results properly
> --
>
> Key: MATH-1660
> URL: https://issues.apache.org/jira/browse/MATH-1660
> Project: Commons Math
>  Issue Type: Bug
>  Components: core
>Affects Versions: 3.3, 4.0-beta1
>Reporter: Fran Lattanzio
>Priority: Major
>  Labels: pull-request-available
>
> FastMath.scalb does not compute subnormal values correctly. I have run this 
> against ACM 3.3, but I see the same code in AccurateMath.scalb in 4.0. A 
> simple example:
> {code:java}
> public class ScalbTest {
>     
>     @Test
>     public void scalbSubnormal() {
>         double x = -0x1.8df4a353af3a8p-2;
>         int e = -1024;
>         double wrong = FastMath.scalb(x, e);
>         double right = StrictMath.scalb(x, e);     
>         
>         Assert.assertEquals(right, wrong, 0);
>     }
> } {code}
>  
> The code that handles subnormal outputs is incorrect in this case: Rounding 
> causes the loss of exactly 1/2 an ulp, but the resulting scaled mantissa is 
> even. Thus, it should not be rounded up. Conceptually the code needs to check 
> for the following 3 cases.
>  # Less than 1/2 ulp was lost.
>  # Exactly 1/2 ulp lost.
>  # More than 1/2 ulp lost.
> For case 1, there is nothing to do. For case 2, it needs to round up only if 
> the mantissa is odd. For case 3, always round up. 
> The code below is, I believe, a rough guide to what the fix should be.
> {code:java}
> // the input is a normal number and the result is a subnormal number
> // recover the hidden mantissa bit
> mantissa |= 1L << 52;
>
> // capture the lost bits before scaling down the mantissa.
> final long lostBits = mantissa & ((1L << (-scaledExponent + 1)) - 1);
> mantissa >>>= 1 - scaledExponent;
>
> // there are 3 cases to consider:
> // 1. We lost less than 1/2 an ulp -> nothing to do.
> // 2. We lost exactly 1/2 ulp -> round up if mantissa is odd because we are in ties-to-even.
> // 3. We lost more than 1/2 ulp -> round up.
>
> final long halfUlp = 1L << (-scaledExponent);
>
> if ((lostBits == halfUlp && (mantissa & 1L) == 1L) || lostBits > halfUlp) {
>     mantissa++;
> }
>
> return Double.longBitsToDouble(sign | mantissa); {code}
> This code is probably not perfect, and I have not tested it on a large range 
> of inputs, but it demonstrates the basic idea.
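The ties-to-even rule at the heart of the fix can be exercised independently of the floating-point plumbing. This toy helper (not from the report) applies the same three cases to plain integers being scaled down by a power of two:

```java
public class RoundHalfEvenSketch {
    // Round value / 2^shift with ties-to-even, mirroring the three cases:
    // lost < half ulp -> truncate; lost == half -> round up only if the
    // result is odd; lost > half -> always round up.
    static long scaleDown(long value, int shift) {
        long lost = value & ((1L << shift) - 1); // bits shifted out
        long result = value >>> shift;
        long half = 1L << (shift - 1);           // exactly 1/2 ulp of the result
        if (lost > half || (lost == half && (result & 1L) == 1L)) {
            result++;                            // cases 2 (odd result) and 3
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(scaleDown(0b1010, 2)); // lost == half, result even -> 2
        System.out.println(scaleDown(0b1110, 2)); // lost == half, result odd  -> 4
        System.out.println(scaleDown(0b1011, 2)); // lost  > half              -> 3
    }
}
```

The reported bug corresponds to the first case: exactly half an ulp lost with an even scaled mantissa, which the current code rounds up anyway.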





[jira] [Updated] (MATH-1654) Matrix implementations of getEntry are slow

2023-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/MATH-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated MATH-1654:
-
Labels: pull-request-available  (was: )

> Matrix implementations of getEntry are slow
> ---
>
> Key: MATH-1654
> URL: https://issues.apache.org/jira/browse/MATH-1654
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.6.1, 4.X
> Environment: master branch
> 3.6.1
>Reporter: Cyril de Catheu
>Priority: Major
>  Labels: pull-request-available
> Attachments: bobyqa_matrix_getEntry_current.txt, 
> bobyqa_matrix_getEntry_withOpti.txt, bobyqa_matrix_getentry_slow.png
>
>
> Inspecting BOBYQA performance, I see a lot of the time is spent into the 
> [getEntry|https://github.com/apache/commons-math/blob/889d27b5d7b23eaf1ee984e93b892b5128afc454/commons-math-legacy/src/main/java/org/apache/commons/math4/legacy/linear/Array2DRowRealMatrix.java#L300]
>  method of {{Array2DRowRealMatrix}}.
> At each getEntry, a costly {{MatrixUtils.checkMatrixIndex}} is performed.
> See flamegraph: !bobyqa_matrix_getentry_slow.png|width=865,height=263!
>  
> It seems other implementations of `RealMatrix` also start with 
> `checkMatrixIndex`. 
> I did not check for complex matrices.
> It is very likely that using a try/catch of IndexOutOfBoundsException will be 
> much faster in the nominal case than checking indexes before accessing the array.
> All consumers of `RealMatrix#getEntry` could benefit from this optimization.
> See 
> [benchmark|https://github.com/cyrilou242/commons-math/commit/241e89f23574660d1fbaa7cdb567552c1a26a7f6]
>  for a simple BOBYQA workload. 
> For this workload the performance gain is ~10%. 
> Before: [^bobyqa_matrix_getEntry_current.txt] --> 120 ± 5   ms/operations
> After: [^bobyqa_matrix_getEntry_withOpti.txt]  --> 106 ± 8   ms/operations
> See [example fix commit 
> here|https://github.com/cyrilou242/commons-math/commit/41b268ea6ebb165f978f2f8802c90401278653bd].
> Other options:
> 1.  replace {{matrix.getEntry(row, column)}} by 
> {{matrix.getDataRef()[row][column]}} inside the Bobyqa implementation. 
> According to my (very limited) benchmark results this is the fastest. But 
> this would be specific to Bobyqa, and this makes the code harder to read, so 
> this can be taken as a different task I guess. 
> 2.  add something like a {{fastGetEntry}} to the {{RealMatrix}} interface. 
> This method would not perform the indexes checks. But I guess changing the 
> interface is overkill.
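The try/catch idea can be sketched as follows (a simplified stand-in, not the actual Array2DRowRealMatrix code): the JVM's built-in array bounds check stands in for the explicit MatrixUtils.checkMatrixIndex on the hot path, and the exception is translated only on the rare failure:

```java
public class GetEntrySketch {
    private final double[][] data;

    GetEntrySketch(double[][] data) {
        this.data = data;
    }

    // Nominal-path optimization suggested above: let the JVM's own bounds
    // check fail instead of validating indices up front on every call.
    double getEntry(int row, int column) {
        try {
            return data[row][column];
        } catch (IndexOutOfBoundsException e) {
            throw new IllegalArgumentException(
                "row " + row + ", column " + column + " out of range", e);
        }
    }

    public static void main(String[] args) {
        GetEntrySketch m = new GetEntrySketch(new double[][] {{1, 2}, {3, 4}});
        System.out.println(m.getEntry(1, 0)); // 3.0
    }
}
```

The win comes from the exception never being thrown on valid indices; constructing it is expensive, but the nominal case pays nothing beyond the access itself.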





[jira] [Work logged] (VFS-683) Thread safety issue in VFSClassLoader - NullPointerException thrown

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-683?focusedWorklogId=840417&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840417
 ]

ASF GitHub Bot logged work on VFS-683:
--

Author: ASF GitHub Bot
Created on: 19/Jan/23 18:50
Start Date: 19/Jan/23 18:50
Worklog Time Spent: 10m 
  Work Description: dlmarion commented on issue #2775:
URL: https://github.com/apache/accumulo/issues/2775#issuecomment-1397449873

   @ivakegg submitted a [patch](https://github.com/apache/commons-vfs/pull/367) 
for [VFS-683](https://issues.apache.org/jira/projects/VFS/issues/VFS-683) that 
should be included in the VFS 2.10 release.




Issue Time Tracking
---

Worklog Id: (was: 840417)
Time Spent: 2h  (was: 1h 50m)

> Thread safety issue in VFSClassLoader - NullPointerException thrown
> ---
>
> Key: VFS-683
> URL: https://issues.apache.org/jira/browse/VFS-683
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Daryl Odnert
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: Main.java
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In my application, I have two instances of the {{VFSClassLoader}}, each of 
> which is being used in a distinct thread. Both {{VFSClassLoader}} instances 
> refer to the same compressed file resource described by a {{FileObject}} that 
> is passed to the class loader's constructor. Intermittently, the application 
> throws an exception with the stack trace shown below. So, there seems to be 
> either a race condition in the code or an undocumented assumption here. If it 
> is unsupported for two {{VFSClassLoader}} instances to refer to the same 
> resource (file), then that assumption should be documented. But if that is 
> not the case, then there is a race condition bug in the implementation.
> {noformat}
> 43789 WARN  {} c.a.e.u.PreferredPathClassLoader - While loading class 
> org.apache.hive.jdbc.HiveDatabaseMetaData, rethrowing unexpected 
> java.lang.NullPointerException: Inflater has been closed
> java.lang.NullPointerException: Inflater has been closed
>   at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
>   at java.util.zip.Inflater.inflate(Inflater.java:257)
>   at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at 
> org.apache.commons.vfs2.util.MonitorInputStream.read(MonitorInputStream.java:91)
>   at org.apache.commons.vfs2.FileUtil.getContent(FileUtil.java:47)
>   at org.apache.commons.vfs2.impl.Resource.getBytes(Resource.java:102)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.defineClass(VFSClassLoader.java:179)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.findClass(VFSClassLoader.java:150)
> at 
> com.atscale.engine.utils.PreferredPathClassLoader.findClass(PreferredPathClassLoader.scala:54)
> {noformat}
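The stack trace is consistent with one loader's cleanup calling Inflater.end() while the other is still reading the shared resource. That failure mode can be reproduced in isolation with a plain java.util.zip Inflater (a standalone demonstration, not the VFS code path); on the JDK in the trace, ensureOpen raises the same NullPointerException:

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class SharedInflaterSketch {
    public static void main(String[] args) throws DataFormatException {
        // Compress a few bytes so there is something to inflate.
        Deflater def = new Deflater();
        def.setInput("hello".getBytes());
        def.finish();
        byte[] compressed = new byte[64];
        int n = def.deflate(compressed);
        def.end();

        Inflater inf = new Inflater();
        inf.setInput(compressed, 0, n);
        inf.end(); // simulates the other thread closing the shared Inflater

        try {
            inf.inflate(new byte[16]); // fails like the stack trace above
        } catch (NullPointerException e) {
            System.out.println("NullPointerException: " + e.getMessage());
        }
    }
}
```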





[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=840283&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840283
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 19/Jan/23 13:29
Start Date: 19/Jan/23 13:29
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #418:
URL: https://github.com/apache/commons-io/pull/418#issuecomment-1396977636

   I need to look at this more closely over the weekend: I don't know why the 
HTTP providers have to be unique and different compared to all the others. Is 
the use of concepts in this PR backward? In the PR, the "free" code now _also_ 
"closes" resources and that feels backward to me. I expect the "close" code to 
also "free" resources as part of closing, not the other way around.
   Any thoughts?




Issue Time Tracking
---

Worklog Id: (was: 840283)
Time Spent: 2h  (was: 1h 50m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports to send the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to e.g. output 
> the dump to System.out.
> The HexDump utility should support to send the output to an `Appendable` 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
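A minimal sketch of the proposed Appendable-based overload (hypothetical signature and formatting, not the merged implementation): because `Appendable` is the common supertype of `StringBuilder`, `Writer`, and `PrintStream`, one method serves all three targets:

```java
import java.io.IOException;

public class HexDumpSketch {
    // Hypothetical Appendable-based dump: two hex digits per byte,
    // 16 bytes per line, trailing newline at the end of the data.
    static void dump(byte[] data, Appendable out) throws IOException {
        for (int i = 0; i < data.length; i++) {
            out.append(String.format("%02X", data[i]));
            out.append(i % 16 == 15 || i == data.length - 1 ? '\n' : ' ');
        }
    }

    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();    // any Appendable works,
        dump(new byte[] {0x0A, (byte) 0xFF}, sb);  // including System.out
        System.out.print(sb);                      // prints "0A FF"
    }
}
```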





[jira] [Work logged] (IO-552) FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with '~' (tilde)

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-552?focusedWorklogId=840267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840267
 ]

ASF GitHub Bot logged work on IO-552:
-

Author: ASF GitHub Bot
Created on: 19/Jan/23 12:26
Start Date: 19/Jan/23 12:26
Worklog Time Spent: 10m 
  Work Description: garydgregory closed pull request #297: IO-552: Honor 
tilde as a valid character for file and directory names
URL: https://github.com/apache/commons-io/pull/297




Issue Time Tracking
---

Worklog Id: (was: 840267)
Time Spent: 2h 40m  (was: 2.5h)

> FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with 
> '~' (tilde)
> -
>
> Key: IO-552
> URL: https://issues.apache.org/jira/browse/IO-552
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.2, 2.5
> Environment: Windows 7 64bit, JavaVM 1.8 32bit
>Reporter: Jochen Tümmers
>Priority: Critical
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> {{FilenameUtils.concat("c:/temp", "~abc.txt") returns "~abc.txt/" instead of 
> "c:/temp/~abc.txt".}}
> As a result, the file would be created in the user's home directory instead 
> of c:/temp.
> (Note: I had to replace all instances of double backslashes that would 
> normally appear in the java code with forward slashes as the editor cannot 
> handle backslashes properly.)
> commons io 2.2 and 2.5 behave the same. 2.3 and 2.4 not tested.
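One way to see why the tilde wins (background, not from the report): FilenameUtils assigns a prefix length to names, and a leading `~` is treated as a home-directory prefix, so the second argument looks absolute and replaces the base. A simplified stand-in for that prefix logic (not the actual commons-io code):

```java
public class TildePrefixSketch {
    // Sketch of the prefix rule that causes the report: a leading '~'
    // is classified like "/" or "C:" and makes the name look absolute.
    static boolean looksAbsolute(String name) {
        return name.startsWith("/") || name.startsWith("~")
                || (name.length() > 1 && name.charAt(1) == ':');
    }

    public static void main(String[] args) {
        System.out.println(looksAbsolute("~abc.txt")); // true  -> replaces the base path
        System.out.println(looksAbsolute("abc.txt"));  // false -> would be appended
    }
}
```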





[jira] [Work logged] (IO-552) FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with '~' (tilde)

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-552?focusedWorklogId=840266&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840266
 ]

ASF GitHub Bot logged work on IO-552:
-

Author: ASF GitHub Bot
Created on: 19/Jan/23 12:26
Start Date: 19/Jan/23 12:26
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #297:
URL: https://github.com/apache/commons-io/pull/297#issuecomment-1396901440

   I am still -1 on this one because `~` is only special in Unix-like shells, and 
`~.txt` is a legal file name on Windows 10. So I will close this PR.




Issue Time Tracking
---

Worklog Id: (was: 840266)
Time Spent: 2.5h  (was: 2h 20m)

> FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with 
> '~' (tilde)
> -
>
> Key: IO-552
> URL: https://issues.apache.org/jira/browse/IO-552
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.2, 2.5
> Environment: Windows 7 64bit, JavaVM 1.8 32bit
>Reporter: Jochen Tümmers
>Priority: Critical
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> {{FilenameUtils.concat("c:/temp", "~abc.txt") returns "~abc.txt/" instead of 
> "c:/temp/~abc.txt".}}
> As a result, the file would be created in the user's home directory instead 
> of c:/temp.
> (Note: I had to replace all instances of double backslashes that would 
> normally appear in the java code with forward slashes as the editor cannot 
> handle backslashes properly.)
> commons io 2.2 and 2.5 behave the same. 2.3 and 2.4 not tested.





[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2023-01-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=840071&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840071
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 18/Jan/23 19:58
Start Date: 18/Jan/23 19:58
Worklog Time Spent: 10m 
  Work Description: garydgregory merged PR #418:
URL: https://github.com/apache/commons-io/pull/418




Issue Time Tracking
---

Worklog Id: (was: 840071)
Time Spent: 1h 50m  (was: 1h 40m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports to send the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to e.g. output 
> the dump to System.out.
> The HexDump utility should support to send the output to an `Appendable` 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  





[jira] [Work logged] (CRYPTO-162) openSslCipher support engine

2023-01-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CRYPTO-162?focusedWorklogId=840064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-840064
 ]

ASF GitHub Bot logged work on CRYPTO-162:
-

Author: ASF GitHub Bot
Created on: 18/Jan/23 19:46
Start Date: 18/Jan/23 19:46
Worklog Time Spent: 10m 
  Work Description: markwkern commented on PR #165:
URL: https://github.com/apache/commons-crypto/pull/165#issuecomment-1387678224

   You could use the Intel rdrand engine in the unit test.  It's always 
available in OpenSSL for machines of the Ivy Bridge or later generation.
   % openssl engine -v
   (rdrand) Intel RDRAND engine
   




Issue Time Tracking
---

Worklog Id: (was: 840064)
Time Spent: 1h 50m  (was: 1h 40m)

> openSslCipher support engine
> 
>
> Key: CRYPTO-162
> URL: https://issues.apache.org/jira/browse/CRYPTO-162
> Project: Commons Crypto
>  Issue Type: New Feature
>  Components: Cipher
>Reporter: wenweijian
>Priority: Minor
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The engine is the hardware or software implementation used for performing 
> cryptographic operations.
>  
> Assume we have a hardware device with a super fast implementation of AES. Now 
> when we use AES encryption we can set the engine to that hardware device 
> (instead of {{{}NULL{}}}), which means that the operations are now computed 
> by the hardware device instead of the default OpenSSL software layer.
>  





[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=839286&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839286
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 16/Jan/23 01:09
Start Date: 16/Jan/23 01:09
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#issuecomment-1383319953

   I updated this PR, added the `@since` tag and more detailed javadoc.




Issue Time Tracking
---

Worklog Id: (was: 839286)
Time Spent: 1.5h  (was: 1h 20m)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254





[jira] [Work logged] (IO-769) FileUtils.copyFileToDirectory can lead to not accessible file when preserving the file date

2023-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-769?focusedWorklogId=839274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839274
 ]

ASF GitHub Bot logged work on IO-769:
-

Author: ASF GitHub Bot
Created on: 15/Jan/23 15:03
Start Date: 15/Jan/23 15:03
Worklog Time Spent: 10m 
  Work Description: menscikov commented on PR #377:
URL: https://github.com/apache/commons-io/pull/377#issuecomment-1383174106

   Hello, `FileUtils.copyInputStreamToFile()` has the same issue starting from 
version **2.9**.
   Please fix it also.




Issue Time Tracking
---

Worklog Id: (was: 839274)
Time Spent: 1h 20m  (was: 1h 10m)

> FileUtils.copyFileToDirectory can lead to not accessible file when preserving 
> the file date
> ---
>
> Key: IO-769
> URL: https://issues.apache.org/jira/browse/IO-769
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Jérémy Carnus
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Hi,
> The current implementation of copyFileToDirectory preserves the file date by 
> default. 
> There are 2 issues regarding this:
>  * the javadoc mentions this is done by File.setLastModified but in fact this 
> is done by the COPY_ATTRIBUTES option
>  * Under Windows, COPY_ATTRIBUTES also copies the security attributes 
> (SID and permissions) and can lead to a file not being readable after copy 
> (if, for example, you copy from a mount under docker or a shared folder)
>  
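A sketch of the behaviour the reporter expects, using java.nio directly (illustrative, not the commons-io implementation): copy without COPY_ATTRIBUTES, then restore only the modification time, so security attributes are never carried over:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;

public class CopyPreserveDateSketch {
    // Copy WITHOUT StandardCopyOption.COPY_ATTRIBUTES, then set only the
    // last-modified time, matching what the javadoc describes.
    static void copyPreservingDate(Path source, Path target) throws IOException {
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        FileTime lastModified = Files.getLastModifiedTime(source);
        Files.setLastModifiedTime(target, lastModified);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("io769-src", ".txt");
        Path dst = src.resolveSibling(src.getFileName() + ".copy");
        Files.write(src, "data".getBytes());
        copyPreservingDate(src, dst);
        System.out.println(Files.getLastModifiedTime(dst));
        Files.deleteIfExists(src);
        Files.deleteIfExists(dst);
    }
}
```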





[jira] [Work logged] (IMAGING-340) Support PNG extension

2023-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-340?focusedWorklogId=839271&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839271
 ]

ASF GitHub Bot logged work on IMAGING-340:
--

Author: ASF GitHub Bot
Created on: 15/Jan/23 13:39
Start Date: 15/Jan/23 13:39
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #269:
URL: https://github.com/apache/commons-imaging/pull/269#issuecomment-1383153151

   The javadoc has been updated to add the `@since` tag.




Issue Time Tracking
---

Worklog Id: (was: 839271)
Time Spent: 40m  (was: 0.5h)

> Support PNG extension
> -
>
> Key: IMAGING-340
> URL: https://issues.apache.org/jira/browse/IMAGING-340
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Reporter: Glavo
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Support [Extensions to the PNG 1.2 Specification, Version 
> 1.5.0|http://ftp-osl.osuosl.org/pub/libpng/documents/pngext-1.5.0.html].





[jira] [Work logged] (IMAGING-340) Support PNG extension

2023-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-340?focusedWorklogId=839270&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839270
 ]

ASF GitHub Bot logged work on IMAGING-340:
--

Author: ASF GitHub Bot
Created on: 15/Jan/23 13:31
Start Date: 15/Jan/23 13:31
Worklog Time Spent: 10m 
  Work Description: kinow commented on code in PR #269:
URL: https://github.com/apache/commons-imaging/pull/269#discussion_r1070592972


##
src/main/java/org/apache/commons/imaging/formats/png/package-info.java:
##
@@ -16,7 +16,14 @@
  */
 
 /**
- * The PNG image format.
+ * The PNG (Portable Network Graphics) image format.
+ * <p>
+ * The implementation is based on the
+ * <a href="http://www.libpng.org/pub/png/spec/1.2/">PNG specification version 1.2</a>,
+ * and supports the following extensions:
+ * <ul>
+ * <li><a href="http://ftp-osl.osuosl.org/pub/libpng/documents/pngext-1.5.0.html">Extensions to the PNG 1.2 Specification, Version 1.5.0</a></li>
+ * </ul>

Review Comment:
   Thank you! :clap: 



##
src/main/java/org/apache/commons/imaging/formats/png/PngImageParser.java:
##
@@ -282,21 +263,53 @@ public Dimension getImageSize(final ByteSource 
byteSource, final PngImagingParam
 @Override
 public ImageMetadata getMetadata(final ByteSource byteSource, final 
PngImagingParameters params)
 throws ImageReadException, IOException {
-final List<PngChunk> chunks = readChunks(byteSource, new ChunkType[] { ChunkType.tEXt, ChunkType.zTXt, ChunkType.iTXt }, false);
+final ChunkType[] chunkTypes = { ChunkType.tEXt, ChunkType.zTXt, ChunkType.iTXt, ChunkType.eXIf };
+final List<PngChunk> chunks = readChunks(byteSource, chunkTypes, false);
 
 if (chunks.isEmpty()) {
 return null;
 }
 
-final GenericImageMetadata result = new GenericImageMetadata();
+final GenericImageMetadata textual = new GenericImageMetadata();
+TiffImageMetadata exif = null;
 
 for (final PngChunk chunk : chunks) {
-final PngTextChunk textChunk = (PngTextChunk) chunk;
+if (chunk instanceof PngTextChunk) {
+final PngTextChunk textChunk = (PngTextChunk) chunk;
+textual.add(textChunk.getKeyword(), textChunk.getText());
+} else if (chunk.chunkType == ChunkType.eXIf.value) {
+if (exif != null) {
+throw new ImageReadException("Duplicate eXIf chunk");
+}
+exif = (TiffImageMetadata) new TiffImageParser().getMetadata(chunk.getBytes());
+}

Review Comment:
   I think we should either log and/or raise an error for any other type here.



##
src/main/java/org/apache/commons/imaging/formats/png/PngImageParser.java:
##
@@ -187,29 +190,7 @@ private List<PngChunk> readChunks(final InputStream is, final ChunkType[] chunkT
 final int crc = read4Bytes("CRC", is, "Not a Valid PNG File", getByteOrder());
 if (keep) {
-if (chunkType == ChunkType.iCCP.value) {
-result.add(new PngChunkIccp(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.tEXt.value) {
-result.add(new PngChunkText(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.zTXt.value) {
-result.add(new PngChunkZtxt(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.IHDR.value) {
-result.add(new PngChunkIhdr(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.PLTE.value) {
-result.add(new PngChunkPlte(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.pHYs.value) {
-result.add(new PngChunkPhys(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.sCAL.value) {
-result.add(new PngChunkScal(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.IDAT.value) {
-result.add(new PngChunkIdat(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.gAMA.value) {
-result.add(new PngChunkGama(length, chunkType, crc, bytes));
-} else if (chunkType == ChunkType.iTXt.value) {
-result.add(new PngChunkItxt(length, chunkType, crc, bytes));
-} else {
-result.add(new PngChunk(length, chunkType, crc, bytes));
-}
+result.add(ChunkType.makeChunk(length, chunkType, crc, bytes));

Review Comment:
   :ok_man: :clap:  bravo, @Glavo 



##
src/main/java/org/apache/commons/imaging/formats/png/PngImageMetadata.java:
##
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * 

[jira] [Work logged] (IMAGING-340) Support PNG extension

2023-01-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-340?focusedWorklogId=839253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839253
 ]

ASF GitHub Bot logged work on IMAGING-340:
--

Author: ASF GitHub Bot
Created on: 15/Jan/23 01:58
Start Date: 15/Jan/23 01:58
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #269:
URL: https://github.com/apache/commons-imaging/pull/269#issuecomment-1383022457

   I updated this PR, followed the requirements of IMAGING-341, and recorded 
the standard version in the document.
   
   Can someone review this PR?




Issue Time Tracking
---

Worklog Id: (was: 839253)
Time Spent: 20m  (was: 10m)

> Support PNG extension
> -
>
> Key: IMAGING-340
> URL: https://issues.apache.org/jira/browse/IMAGING-340
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Reporter: Glavo
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support [Extensions to the PNG 1.2 Specification, Version 
> 1.5.0|http://ftp-osl.osuosl.org/pub/libpng/documents/pngext-1.5.0.html].





[jira] [Work logged] (IMAGING-342) Read PNG metadata from iTXt chunk

2023-01-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-342?focusedWorklogId=839179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839179
 ]

ASF GitHub Bot logged work on IMAGING-342:
--

Author: ASF GitHub Bot
Created on: 13/Jan/23 22:52
Start Date: 13/Jan/23 22:52
Worklog Time Spent: 10m 
  Work Description: kinow commented on PR #268:
URL: https://github.com/apache/commons-imaging/pull/268#issuecomment-1382556757

   Merged!




Issue Time Tracking
---

Worklog Id: (was: 839179)
Time Spent: 0.5h  (was: 20m)

> Read PNG metadata from iTXt chunk
> -
>
> Key: IMAGING-342
> URL: https://issues.apache.org/jira/browse/IMAGING-342
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Affects Versions: 1.0-alpha2
>Reporter: Glavo
>Assignee: Bruno P. Kinoshita
>Priority: Major
> Fix For: 1.0-alpha3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In [PNG 
> specification|http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html#C.Anc-text],
>  {{iTXt}} chunk is semantically equivalent to the {{tEXt}} and {{zTXt}} 
> chunks, but {{PngImageParser::getMetadata}} does not recognize the {{iTXt}} 
> chunk.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
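The iTXt/tEXt/zTXt equivalence discussed above can be illustrated with a small sketch. PNG chunk types are four ASCII bytes stored big-endian, so a parser that treats the three text chunks uniformly only needs to compare type codes. The class and method names below are illustrative, not the commons-imaging API.

```java
public class PngTextChunks {
    // PNG chunk types are four ASCII characters packed big-endian into an int.
    static int chunkType(String name) {
        if (name.length() != 4) {
            throw new IllegalArgumentException("Chunk type must be 4 characters: " + name);
        }
        return (name.charAt(0) & 0xFF) << 24
             | (name.charAt(1) & 0xFF) << 16
             | (name.charAt(2) & 0xFF) << 8
             | (name.charAt(3) & 0xFF);
    }

    static final int TEXT = chunkType("tEXt"); // 0x74455874
    static final int ZTXT = chunkType("zTXt"); // 0x7A545874
    static final int ITXT = chunkType("iTXt"); // 0x69545874

    // All three chunk types carry textual metadata and should be treated alike,
    // which is what IMAGING-342 asks getMetadata to do for iTXt.
    static boolean isTextChunk(int type) {
        return type == TEXT || type == ZTXT || type == ITXT;
    }

    public static void main(String[] args) {
        System.out.printf("iTXt = 0x%08X, recognized = %b%n", ITXT, isTextChunk(ITXT));
    }
}
```

A metadata reader built on such a predicate handles iTXt automatically instead of special-casing tEXt and zTXt only.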


[jira] [Work logged] (IMAGING-342) Read PNG metadata from iTXt chunk

2023-01-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-342?focusedWorklogId=839178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839178
 ]

ASF GitHub Bot logged work on IMAGING-342:
--

Author: ASF GitHub Bot
Created on: 13/Jan/23 22:52
Start Date: 13/Jan/23 22:52
Worklog Time Spent: 10m 
  Work Description: kinow closed pull request #268: [IMAGING-342] Read PNG 
metadata from iTXt chunk
URL: https://github.com/apache/commons-imaging/pull/268




Issue Time Tracking
---

Worklog Id: (was: 839178)
Time Spent: 20m  (was: 10m)

> Read PNG metadata from iTXt chunk
> -
>
> Key: IMAGING-342
> URL: https://issues.apache.org/jira/browse/IMAGING-342
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Affects Versions: 1.0-alpha2
>Reporter: Glavo
>Assignee: Bruno P. Kinoshita
>Priority: Major
> Fix For: 1.0-alpha3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In [PNG 
> specification|http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html#C.Anc-text],
>  {{iTXt}} chunk is semantically equivalent to the {{tEXt}} and {{zTXt}} 
> chunks, but {{PngImageParser::getMetadata}} does not recognize the {{iTXt}} 
> chunk.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-342) Read PNG metadata from iTXt chunk

2023-01-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-342?focusedWorklogId=839165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839165
 ]

ASF GitHub Bot logged work on IMAGING-342:
--

Author: ASF GitHub Bot
Created on: 13/Jan/23 22:25
Start Date: 13/Jan/23 22:25
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #268:
URL: https://github.com/apache/commons-imaging/pull/268#issuecomment-1382460788

   I added a test to check that metadata containing Unicode characters can be 
correctly read from PNG images.




Issue Time Tracking
---

Worklog Id: (was: 839165)
Remaining Estimate: 0h
Time Spent: 10m

> Read PNG metadata from iTXt chunk
> -
>
> Key: IMAGING-342
> URL: https://issues.apache.org/jira/browse/IMAGING-342
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Reporter: Glavo
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In [PNG 
> specification|http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html#C.Anc-text],
>  {{iTXt}} chunk is semantically equivalent to the {{tEXt}} and {{zTXt}} 
> chunks, but {{PngImageParser::getMetadata}} does not recognize the {{iTXt}} 
> chunk.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-340) Support PNG extension

2023-01-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-340?focusedWorklogId=839098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839098
 ]

ASF GitHub Bot logged work on IMAGING-340:
--

Author: ASF GitHub Bot
Created on: 13/Jan/23 14:59
Start Date: 13/Jan/23 14:59
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #269:
URL: https://github.com/apache/commons-imaging/pull/269#issuecomment-1381983655

   When I created the test, I noticed that the type of `EXIF_TAG_EXIF_IMAGE_WIDTH` 
and `EXIF_TAG_EXIF_IMAGE_LENGTH` is `TagInfoShort`.
   
   However, in the [standard 
document](https://www.cipa.jp/std/documents/e/DC-X008-Translation-2019-E.pdf), 
the field type is SHORT or LONG.
   
   Unfortunately, the test image I found records `ExifImageWidth` and 
`ExifImageLength` in LONG-typed fields, which is how I discovered this problem.
   
   Should I fix this problem in this PR, or should I open a new one?




Issue Time Tracking
---

Worklog Id: (was: 839098)
Remaining Estimate: 0h
Time Spent: 10m

> Support PNG extension
> -
>
> Key: IMAGING-340
> URL: https://issues.apache.org/jira/browse/IMAGING-340
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: PNG
>Reporter: Glavo
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support [Extensions to the PNG 1.2 Specification, Version 
> 1.5.0|http://ftp-osl.osuosl.org/pub/libpng/documents/pngext-1.5.0.html].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838966
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:40
Start Date: 12/Jan/23 21:40
Worklog Time Spent: 10m 
  Work Description: Glavo commented on PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#issuecomment-1381023692

   @kinow I went back to this work and completed the missing documentation.
   
   Can you take a look at my question here at  
https://github.com/apache/commons-imaging/pull/254/files/08d6a0dff607eefa633c89d231a01623e3ad90c6#r1049312581?
 
   The usage of `ImageReadException` and `IOException` often seems to be 
conflated, so I don't know which one to choose.




Issue Time Tracking
---

Worklog Id: (was: 838966)
Time Spent: 1h 20m  (was: 1h 10m)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838965&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838965
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:37
Start Date: 12/Jan/23 21:37
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068685156


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   Which leads us to consider whether we should present the smallest API surface 
for this new code, in other words the minimal set of elements that should be 
public or protected. Once public or protected, it's set in stone within a major 
release line.





Issue Time Tracking
---

Worklog Id: (was: 838965)
Time Spent: 1h 10m  (was: 1h)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838958
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:15
Start Date: 12/Jan/23 21:15
Worklog Time Spent: 10m 
  Work Description: Glavo commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068661686


##
src/main/java/org/apache/commons/imaging/formats/webp/WebPImageParser.java:
##
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp;
+
+import org.apache.commons.imaging.ImageFormat;
+import org.apache.commons.imaging.ImageFormats;
+import org.apache.commons.imaging.ImageInfo;
+import org.apache.commons.imaging.ImageParser;
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.XmpEmbeddable;
+import org.apache.commons.imaging.common.XmpImagingParameters;
+import org.apache.commons.imaging.common.bytesource.ByteSource;
+import org.apache.commons.imaging.formats.tiff.TiffImageMetadata;
+import org.apache.commons.imaging.formats.tiff.TiffImageParser;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunk;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkANIM;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkANMF;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkEXIF;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkICCP;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkVP8;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkVP8L;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkVP8X;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkXMP;
+import org.apache.commons.imaging.formats.webp.chunks.WebPChunkXYZW;
+
+import java.awt.Dimension;
+import java.awt.image.BufferedImage;
+import java.io.Closeable;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.util.ArrayList;
+
+import static org.apache.commons.imaging.common.BinaryFunctions.read4Bytes;
+import static org.apache.commons.imaging.common.BinaryFunctions.readBytes;
+import static org.apache.commons.imaging.common.BinaryFunctions.skipBytes;
+
+public class WebPImageParser extends ImageParser 
implements XmpEmbeddable {
+
+private static final String DEFAULT_EXTENSION = 
ImageFormats.WEBP.getDefaultExtension();
+private static final String[] ACCEPTED_EXTENSIONS = 
ImageFormats.WEBP.getExtensions();
+
+@Override
+public WebPImagingParameters getDefaultParameters() {
+return new WebPImagingParameters();
+}
+
+@Override
+public String getName() {
+return "WebP-Custom";
+}
+
+@Override
+public String getDefaultExtension() {
+return DEFAULT_EXTENSION;
+}
+
+@Override
+protected String[] getAcceptedExtensions() {
+return ACCEPTED_EXTENSIONS;
+}
+
+@Override
+protected ImageFormat[] getAcceptedTypes() {
+return new ImageFormat[]{ImageFormats.WEBP};
+}
+
+static int readFileHeader(InputStream is) throws IOException, 
ImageReadException {
+byte[] buffer = new byte[4];
+if (is.read(buffer) < 4 || 
!WebPConstants.RIFF_SIGNATURE.equals(buffer)) {
+throw new IOException("Not a Valid WebP File");
+}
+
+int fileSize = read4Bytes("File Size", is, "Not a Valid WebP File", 
ByteOrder.LITTLE_ENDIAN);
+if (fileSize < 0) {
+throw new ImageReadException("File Size is too long:" + fileSize);
+}
+
+if (is.read(buffer) < 4 || 
!WebPConstants.WEBP_SIGNATURE.equals(buffer)) {
+throw new IOException("Not a Valid WebP File");

Review Comment:
   @kinow Can you take a look at this question?





Issue Time Tracking
---

Worklog Id: (was: 838958)
Time Spent: 1h  (was: 50m)

> Basic WebP support
> --
>
> Key: 
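The `readFileHeader` snippet under review reads the WebP container header: the layout is `"RIFF"`, a little-endian uint32 size, then `"WEBP"`. A standalone sketch of that layout only; names and exception types differ from the PR (which throws `ImageReadException` and uses `BinaryFunctions`). Note that `byte[]` contents must be compared with `Arrays.equals`; calling `equals` on an array is an identity comparison and would always be false.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RiffHeader {
    private static final byte[] RIFF = "RIFF".getBytes(StandardCharsets.US_ASCII);
    private static final byte[] WEBP = "WEBP".getBytes(StandardCharsets.US_ASCII);

    // Validate "RIFF" <uint32 LE size> "WEBP" and return the declared size.
    static int readFileHeader(InputStream is) throws IOException {
        byte[] buffer = new byte[4];
        if (is.read(buffer) != 4 || !Arrays.equals(buffer, RIFF)) {
            throw new IOException("Not a valid WebP file");
        }
        int fileSize = 0;
        for (int i = 0; i < 4; i++) {
            int b = is.read();
            if (b < 0) {
                throw new IOException("Truncated WebP header");
            }
            fileSize |= b << (8 * i); // assemble the little-endian size field
        }
        if (fileSize < 0) {
            // The high bit was set: the declared size overflows a signed int.
            throw new IOException("File size does not fit a signed int");
        }
        if (is.read(buffer) != 4 || !Arrays.equals(buffer, WEBP)) {
            throw new IOException("Not a valid WebP file");
        }
        return fileSize;
    }

    public static void main(String[] args) throws IOException {
        byte[] header = {'R', 'I', 'F', 'F', 0x10, 0, 0, 0, 'W', 'E', 'B', 'P'};
        System.out.println("payload size = " + readFileHeader(new ByteArrayInputStream(header)));
    }
}
```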

[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838954
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:12
Start Date: 12/Jan/23 21:12
Worklog Time Spent: 10m 
  Work Description: Glavo commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068659362


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   > You must be thinking of IllegalStateException, we should not throw Errors.
   
   In my understanding, I am asserting here that the exception-throwing path is 
unreachable.
   
   On second thought, though, throwing `IllegalArgumentException` may also be 
feasible, because a Chunk may also be constructed by users themselves.





Issue Time Tracking
---

Worklog Id: (was: 838954)
Time Spent: 50m  (was: 40m)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838953
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:09
Start Date: 12/Jan/23 21:09
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068651074


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   You must be thinking of IllegalStateException or IllegalArgumentException; we 
should not throw Errors.





Issue Time Tracking
---

Worklog Id: (was: 838953)
Time Spent: 40m  (was: 0.5h)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838952&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838952
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:08
Start Date: 12/Jan/23 21:08
Worklog Time Spent: 10m 
  Work Description: kinow commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068655185


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   I think it'd be important to be consistent, so users are not surprised when 
using the API. Have a look at the other parsers, @Glavo, and see what 
exceptions are being used. If you believe we must change to another 
exception type, then we can create a follow-up issue to discuss and plan how to 
do that in all the other parsers :+1: 





Issue Time Tracking
---

Worklog Id: (was: 838952)
Time Spent: 0.5h  (was: 20m)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838950
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:04
Start Date: 12/Jan/23 21:04
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068651074


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   You must be thinking of IllegalStateException; we should not throw Errors.





Issue Time Tracking
---

Worklog Id: (was: 838950)
Time Spent: 20m  (was: 10m)

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IMAGING-339) Basic WebP support

2023-01-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-339?focusedWorklogId=838948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838948
 ]

ASF GitHub Bot logged work on IMAGING-339:
--

Author: ASF GitHub Bot
Created on: 12/Jan/23 21:00
Start Date: 12/Jan/23 21:00
Worklog Time Spent: 10m 
  Work Description: Glavo commented on code in PR #254:
URL: https://github.com/apache/commons-imaging/pull/254#discussion_r1068648410


##
src/main/java/org/apache/commons/imaging/formats/webp/chunks/WebPChunk.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.imaging.formats.webp.chunks;
+
+import org.apache.commons.imaging.ImageReadException;
+import org.apache.commons.imaging.common.BinaryFileParser;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.nio.ByteOrder;
+import java.nio.charset.StandardCharsets;
+
+public abstract class WebPChunk extends BinaryFileParser {
+public static final int TYPE_VP8 = 0x20385056;
+public static final int TYPE_VP8L = 0x4C385056;
+public static final int TYPE_VP8X = 0x58385056;
+public static final int TYPE_ANIM = 0x4D494E41;
+public static final int TYPE_ANMF = 0x464D4E41;
+public static final int TYPE_ICCP = 0x50434349;
+public static final int TYPE_EXIF = 0x46495845;
+public static final int TYPE_XMP = 0x20504D58;
+
+private final int type;
+private final int size;
+protected final byte[] bytes;
+
+WebPChunk(int type, int size, byte[] bytes) throws ImageReadException {
+super(ByteOrder.LITTLE_ENDIAN);
+
+if (size != bytes.length) {
+throw new AssertionError("Chunk size must match bytes length");

Review Comment:
   @kinow I think `AssertionError` should be used here.
   
   I think `ImageReadException` should be thrown only when there is no error in 
commons-imaging itself, e.g. when we encounter illegal input at runtime.
   
   Here, however, illegal input data cannot cause this problem; the exception 
would be thrown only if commons-imaging itself had a logic error, so I used 
`AssertionError`.





Issue Time Tracking
---

Worklog Id: (was: 838948)
Remaining Estimate: 0h
Time Spent: 10m

> Basic WebP support
> --
>
> Key: IMAGING-339
> URL: https://issues.apache.org/jira/browse/IMAGING-339
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: Format: WebP
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Minor
> Fix For: 1.0-alpha3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/254



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
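The position the reviewers converge on above — prefer `IllegalArgumentException` over `AssertionError` when callers can construct the object directly — can be sketched as follows. This is a hypothetical `Chunk`, not the PR's `WebPChunk`:

```java
public class Chunk {
    private final int type;
    private final byte[] bytes;

    // Validate caller-supplied arguments up front. IllegalArgumentException
    // fits input a user could legitimately get wrong, whereas AssertionError
    // claims the failing path is unreachable.
    Chunk(int type, int size, byte[] bytes) {
        if (bytes == null || size != bytes.length) {
            throw new IllegalArgumentException("Chunk size must match bytes length");
        }
        this.type = type;
        this.bytes = bytes.clone(); // defensive copy keeps the chunk immutable
    }

    int type() { return type; }

    int size() { return bytes.length; }

    public static void main(String[] args) {
        Chunk ok = new Chunk(1, 3, new byte[] {1, 2, 3});
        System.out.println("size = " + ok.size());
        try {
            new Chunk(1, 2, new byte[] {1, 2, 3}); // mismatched size
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Unlike an `AssertionError`, this check stays active regardless of the `-ea` JVM flag and signals a recoverable caller mistake rather than a library bug.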


[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2023-01-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=837762&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-837762
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 08/Jan/23 18:30
Start Date: 08/Jan/23 18:30
Worklog Time Spent: 10m 
  Work Description: fkjellberg commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1064184381


##
src/test/java/org/apache/commons/io/HexDumpTest.java:
##
@@ -253,6 +253,16 @@ public void testDumpOutputStream() throws IOException {
 
 // verify proper behavior with null stream
 assertThrows(NullPointerException.class, () -> HexDump.dump(testArray, 
0x1000, null, 0));
+

Review Comment:
   @garydgregory Thanks for the pointer. I was not aware of this test utility 
class. Code updated.





Issue Time Tracking
---

Worklog Id: (was: 837762)
Time Spent: 1h 40m  (was: 1.5h)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
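
The motivation is easy to demonstrate with a minimal, hypothetical hex-dump routine targeting `Appendable` (this is a sketch, not the commons-io implementation): `StringBuilder`, any `Writer`, and `System.out` (a `PrintStream`) all implement the same interface.

```java
import java.io.IOException;

public class HexDumpSketch {
    // Minimal sketch of a hex dump that writes to any Appendable:
    // one line per 16 bytes, offset in hex followed by byte values.
    static void dump(byte[] data, Appendable out) throws IOException {
        for (int i = 0; i < data.length; i += 16) {
            out.append(String.format("%08X ", i));
            for (int j = i; j < Math.min(i + 16, data.length); j++) {
                out.append(String.format("%02X ", data[j]));
            }
            out.append(System.lineSeparator());
        }
    }

    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();      // works as a target...
        dump(new byte[] {0x0A, (byte) 0xFF}, sb);
        System.out.print(sb);
        dump(new byte[] {0x01}, System.out);         // ...and so does System.out
    }
}
```

With an `OutputStream`-only API, the `StringBuilder` case requires an intermediate `ByteArrayOutputStream` plus a charset round-trip; with `Appendable` it is a one-liner.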



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2023-01-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=837750&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-837750
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 08/Jan/23 16:38
Start Date: 08/Jan/23 16:38
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1064170937


##
src/test/java/org/apache/commons/io/HexDumpTest.java:
##
@@ -253,6 +253,16 @@ public void testDumpOutputStream() throws IOException {
 
 // verify proper behavior with null stream
 assertThrows(NullPointerException.class, () -> HexDump.dump(testArray, 
0x1000, null, 0));
+

Review Comment:
   How about reusing `ThrowOnCloseInputStream`?





Issue Time Tracking
---

Worklog Id: (was: 837750)
Time Spent: 1.5h  (was: 1h 20m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (LANG-1634) ObjectUtils - apply Consumer with non-null value

2023-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1634?focusedWorklogId=837666&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-837666
 ]

ASF GitHub Bot logged work on LANG-1634:


Author: ASF GitHub Bot
Created on: 07/Jan/23 05:22
Start Date: 07/Jan/23 05:22
Worklog Time Spent: 10m 
  Work Description: singhbaljit commented on code in PR #684:
URL: https://github.com/apache/commons-lang/pull/684#discussion_r1063955651


##
src/main/java/org/apache/commons/lang3/ObjectUtils.java:
##
@@ -226,6 +227,63 @@ public static boolean anyNotNull(final Object... values) {
 return firstNonNull(values) != null;
 }
 
+/**
+ * <p>
+ * Calls the given {@code consumer's} {@link Consumer#accept(Object)} method with the first {@code non-null} value
+ * from {@code objects}. If all the values are null, the consumer is not invoked. This is equivalent to the call
+ * {@code ObjectUtils.acceptIfNonNull(ObjectUtils.firstNonNull(objects), consumer)}
+ * </p>
+ *
+ * <p>
+ * The caller is responsible for thread-safety and exception handling of consumer.
+ * </p>
+ *
+ * <pre>
+ * ObjectUtils.acceptFirstNonNull(bean::setValue, null)                 - setValue not invoked
+ * ObjectUtils.acceptFirstNonNull(bean::setValue, null, "abc", "def")   - setValue invoked with "abc"
+ * ObjectUtils.acceptFirstNonNull(v -> bean.setValue(v), "abc")         - setValue invoked with "abc"
+ * </pre>
+ *
+ * @param <T> the type of the object
+ * @param objects  the values to test, may be {@code null} or empty
+ * @param consumer the consumer operation to invoke with the first non-null {@code objects}.
+ * @see #firstNonNull(Object...)
+ * @see #acceptIfNonNull(Object, Consumer)
+ * @since 3.12
+ */
+@SafeVarargs
+public static <T> void acceptFirstNonNull(final Consumer<T> consumer, final T... objects) {
+    acceptIfNonNull(firstNonNull(objects), consumer);
+}
+
+/**
+ * <p>
+ * Calls the given {@code consumer's} {@link Consumer#accept(Object)} method with the {@code object} if it is
+ * {@code non-null}.
+ * </p>
+ *
+ * <p>
+ * The caller is responsible for thread-safety and exception handling of consumer.
+ * </p>
+ *
+ * <pre>
+ * ObjectUtils.acceptIfNonNull(null, bean::setValue)          - setValue not invoked
+ * ObjectUtils.acceptIfNonNull("abc", bean::setValue)         - setValue invoked with "abc"
+ * ObjectUtils.acceptIfNonNull("abc", v -> bean.setValue(v))  - setValue invoked with "abc"
+ * </pre>
+ *
+ * @param <T> the type of the object
+ * @param object the {@code Object} to test, may be {@code null}
+ * @param consumer the consumer operation to invoke with {@code object} if it is {@code non-null}
+ * @see #acceptFirstNonNull(Consumer, Object...)
+ * @since 3.12
+ */
+public static <T> void acceptIfNonNull(final T object, final Consumer<T> consumer) {

Review Comment:
   more user-friendly: `Consumer<? super T> consumer`. 
   
   Also, `requireNonNull(consumer, "consumer")`.





Issue Time Tracking
---

Worklog Id: (was: 837666)
Time Spent: 3h 50m  (was: 3h 40m)

> ObjectUtils - apply Consumer with non-null value
> 
>
> Key: LANG-1634
> URL: https://issues.apache.org/jira/browse/LANG-1634
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Reporter: Bindul Bhowmik
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> There are multiple places in code where we have to check if a value is 
> {{null}} before using it in a setter or other method, like:
> {code:java}
> if (valueX != null) {
>   bean.setValue(valueX);
>   someObject.compute(valueX, "bar");
> }
> {code}
> This enhancement request is to add a couple of methods in {{ObjectUtils}} to 
> wrap this logic,  like the following:
> {code:java}
> public static <T> void applyIfNonNull(final Consumer<T> consumer, final T 
> object)
> public static <T> void applyFirstNonNull(final Consumer<T> consumer, final 
> T... objects)
> {code}
> With this the two statements above could be used as:
> {code:java}
> ObjectUtils.applyIfNonNull(bean::setValue, valueX);
> ObjectUtils.applyIfNonNull(v -> someObject.compute(v, "bar"), valueX);
> {code}
> The benefit of this should increase with more such null checks we need in the 
> code that can be replaced by single statements.
> Pull request forthcoming.
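
A minimal, self-contained sketch of the helpers the issue proposes (names follow the issue text; the merged commons-lang API discussed in the review above uses `acceptIfNonNull`/`acceptFirstNonNull` instead):

```java
import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class NullSafeApply {
    // Invoke the consumer with the first non-null value, if any.
    @SafeVarargs
    public static <T> void applyFirstNonNull(Consumer<T> consumer, T... objects) {
        if (objects != null) {
            for (T o : objects) {
                if (o != null) {
                    applyIfNonNull(consumer, o);
                    return;
                }
            }
        }
    }

    // Invoke the consumer only when the value is non-null.
    public static <T> void applyIfNonNull(Consumer<T> consumer, T object) {
        Objects.requireNonNull(consumer, "consumer");
        if (object != null) {
            consumer.accept(object);
        }
    }

    public static void main(String[] args) {
        AtomicReference<String> bean = new AtomicReference<>("unset");
        applyIfNonNull(bean::set, null);            // not invoked
        applyFirstNonNull(bean::set, null, "abc");  // invoked with "abc"
        System.out.println(bean.get());             // abc
    }
}
```

This replaces the `if (valueX != null) { ... }` pattern from the description with a single statement per call site.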



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (LANG-1634) ObjectUtils - apply Consumer with non-null value

2023-01-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1634?focusedWorklogId=837665&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-837665
 ]

ASF GitHub Bot logged work on LANG-1634:


Author: ASF GitHub Bot
Created on: 07/Jan/23 05:04
Start Date: 07/Jan/23 05:04
Worklog Time Spent: 10m 
  Work Description: singhbaljit commented on PR #684:
URL: https://github.com/apache/commons-lang/pull/684#issuecomment-1374380735

   We could really use this method. It is helpful because it removes the 
if/else branch from the application code, and therefore devs don't have 
to write trivial unit tests just to meet code coverage requirements.




Issue Time Tracking
---

Worklog Id: (was: 837665)
Time Spent: 3h 40m  (was: 3.5h)

> ObjectUtils - apply Consumer with non-null value
> 
>
> Key: LANG-1634
> URL: https://issues.apache.org/jira/browse/LANG-1634
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Reporter: Bindul Bhowmik
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> There are multiple places in code where we have to check if a value is 
> {{null}} before using it in a setter or other method, like:
> {code:java}
> if (valueX != null) {
>   bean.setValue(valueX);
>   someObject.compute(valueX, "bar");
> }
> {code}
> This enhancement request is to add a couple of methods in {{ObjectUtils}} to 
> wrap this logic,  like the following:
> {code:java}
> public static <T> void applyIfNonNull(final Consumer<T> consumer, final T 
> object)
> public static <T> void applyFirstNonNull(final Consumer<T> consumer, final 
> T... objects)
> {code}
> With this the two statements above could be used as:
> {code:java}
> ObjectUtils.applyIfNonNull(bean::setValue, valueX);
> ObjectUtils.applyIfNonNull(v -> someObject.compute(v, "bar"), valueX);
> {code}
> The benefit of this should increase with more such null checks we need in the 
> code that can be replaced by single statements.
> Pull request forthcoming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2023-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=836456&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836456
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 02/Jan/23 21:28
Start Date: 02/Jan/23 21:28
Worklog Time Spent: 10m 
  Work Description: psteitz commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1369218259

   Sorry to be late responding here and great to see you working on pool, 
Niall!  Looks good to me.  And I like it better than my previous attempt 
because it keeps the sequence of identifiers contiguous.  We do need to worry 
about concurrent access, but the only contention issue that I can see is if the 
name check succeeds for one thread and another grabs it; the code catches 
the exception that would happen in that case and increments, so it should be 
fine.  One small nit: please get rid of the System.out in the unit test.  There 
is already too much spewage from [pool] unit tests (some of it my fault I am 
sure).  Sorry again to be slow to respond.
   




Issue Time Tracking
---

Worklog Id: (was: 836456)
Time Spent: 2h 40m  (was: 2.5h)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's postfix always starts with 1, so many 
> InstanceAlreadyExistsExceptions may be thrown before registering successfully.
> Maybe a random number is a better choice, or an atomic long.
> {quote}private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         } catch (NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {quote}
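
The reporter's suggestion of an atomic counter can be sketched as follows. `nextName` and the `org.example` name base are hypothetical, and this deliberately omits the registration/retry loop around `registerMBean`; the point is only that a shared counter hands each pool a fresh suffix instead of probing 1, 2, 3, ... and eating one `InstanceAlreadyExistsException` per existing pool:

```java
import java.util.concurrent.atomic.AtomicLong;
import javax.management.ObjectName;

public class JmxNameSketch {
    // Shared counter: each caller gets a unique suffix without probing.
    private static final AtomicLong POOL_ID = new AtomicLong();

    static ObjectName nextName(String base, String prefix) throws Exception {
        long id = POOL_ID.incrementAndGet();
        // First pool keeps the clean, suffix-free name, as in the original code.
        return new ObjectName(id == 1 ? base + prefix : base + prefix + id);
    }

    public static void main(String[] args) throws Exception {
        String base = "org.example:type=Pool,name=";     // hypothetical name base
        System.out.println(nextName(base, "pool").getKeyProperty("name")); // pool
        System.out.println(nextName(base, "pool").getKeyProperty("name")); // pool2
    }
}
```

Registration can still collide with an MBean registered by someone else under the same name, so the catch-and-retry in the real code remains necessary; the counter just makes the common case O(1) instead of O(number of pools).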



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2023-01-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=836348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836348
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 01/Jan/23 15:54
Start Date: 01/Jan/23 15:54
Worklog Time Spent: 10m 
  Work Description: garydgregory merged PR #199:
URL: https://github.com/apache/commons-pool/pull/199




Issue Time Tracking
---

Worklog Id: (was: 836348)
Time Spent: 2.5h  (was: 2h 20m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's postfix always starts with 1, so many 
> InstanceAlreadyExistsExceptions may be thrown before registering successfully.
> Maybe a random number is a better choice, or an atomic long.
> {quote}private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         } catch (NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2023-01-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836346
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 01/Jan/23 13:28
Start Date: 01/Jan/23 13:28
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1368444645

   @andrebrait 
   Thank you for your updates, merged.




Issue Time Tracking
---

Worklog Id: (was: 836346)
Time Spent: 3h 20m  (was: 3h 10m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper 
> bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + 
> data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.ctime));
> } else {
> writeShort(EXTID_EXTT);
> writeShort(elenEXTT + 1);  // flag + data
> writeByte(flagEXTT);
> if (e.mtime != null)
> writeInt(umtime);
> if (e.atime != null)
> writeInt(uatime);
> if (e.ctime != null)
> 

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2023-01-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836343
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 01/Jan/23 12:46
Start Date: 01/Jan/23 12:46
Worklog Time Spent: 10m 
  Work Description: garydgregory merged PR #345:
URL: https://github.com/apache/commons-compress/pull/345




Issue Time Tracking
---

Worklog Id: (was: 836343)
Time Spent: 3h 10m  (was: 3h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper 
> bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + 
> data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.ctime));
> } else {
> writeShort(EXTID_EXTT);
> writeShort(elenEXTT + 1);  // flag + data
> writeByte(flagEXTT);
> if (e.mtime != null)
> writeInt(umtime);
> if (e.atime != null)
> writeInt(uatime);
> if (e.ctime != null)
> writeInt(uctime);
> }
> }
> writeExtra(e.extra);
> 
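
The core decision in the quoted `writeLOC` logic, choosing between the info-zip extended timestamp field and the NTFS field, can be sketched on its own. `chooseExtraField` is a hypothetical helper, not part of either library; the bound matches `UPPER_UNIXTIME_BOUND` in the quoted code:

```java
public class ZipTimeFieldSketch {
    // Upper bound of a 32-bit signed unix time (year 2038 limit).
    static final long UPPER_UNIXTIME_BOUND = 0x7FFFFFFFL;

    // The info-zip extended timestamp (header ID 0x5455) stores 32-bit unix
    // seconds, so any time past the bound forces the NTFS field (header ID
    // 0x000A) with its 64-bit, 100ns-resolution timestamps.
    static String chooseExtraField(long umtime, long uatime, long uctime) {
        if (umtime > UPPER_UNIXTIME_BOUND
                || uatime > UPPER_UNIXTIME_BOUND
                || uctime > UPPER_UNIXTIME_BOUND) {
            return "NTFS (0x000A)";
        }
        return "info-zip extended timestamp (0x5455)";
    }

    public static void main(String[] args) {
        // 2023-01-01 fits in 32 bits; a time past 2038 does not.
        System.out.println(chooseExtraField(1672531200L, -1, -1));
        System.out.println(chooseExtraField(2208988800L, -1, -1));
    }
}
```

Absent times are encoded as -1 here, mirroring the `umtime = -1` initializations in the quoted snippet, so missing fields never force the NTFS path.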

[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836264
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 16:59
Start Date: 30/Dec/22 16:59
Worklog Time Spent: 10m 
  Work Description: fkjellberg commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1059464270


##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -118,14 +148,54 @@ public static void dump(final byte[] data, final long 
offset,
 }
 }
 buffer.append(System.lineSeparator());
-// make explicit the dependency on the default encoding
-stream.write(buffer.toString().getBytes(Charset.defaultCharset()));
-stream.flush();
+appendable.append(buffer);
 buffer.setLength(0);
 display_offset += chars_read;
 }
 }
 
+/**
+ * Dumps an array of bytes to an OutputStream. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ * <p>
+ * The offset argument specifies the start offset of the data array
+ * within a larger entity like a file or an incoming stream. For example,
+ * if the data array contains the third kibibyte of a file, then the
+ * offset argument should be set to 2048. The offset value printed
+ * at the beginning of each line indicates where in that larger entity
+ * the first byte on that line is located.
+ * 
+ * </p>
+ * <p>
+ * data array are dumped.
+ * 
+ * </p>
+ * @param data  the byte array to be dumped
+ * @param offset  offset of the byte array within a larger entity
+ * @param stream  the OutputStream to which the data is to be
+ *   written
+ * @param index initial index into the byte array
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to stream
+ * @throws ArrayIndexOutOfBoundsException if the index is
+ * outside the data array's bounds
+ * @throws NullPointerException if the output stream is null
+ */
+public static void dump(final byte[] data, final long offset,
+final OutputStream stream, final int index)
+throws IOException, ArrayIndexOutOfBoundsException {
+Objects.requireNonNull(stream, "stream");
+
+try (OutputStreamWriter out = new 
OutputStreamWriter(CloseShieldOutputStream.wrap(stream), 
Charset.defaultCharset())) {
+dump(data, offset, out, index, data.length - index);
+out.flush();

Review Comment:
   Commenting both on the `CloseShieldOutputStream` comment above and this 
comment.
   
   The original implementation flushed the underlying stream. It actually 
flushed at every row written. Since the `OutputStreamWriter` will be 
automatically closed within the try-with-resources block and looking at the 
source code for `StreamEncoder` within the `OutputStreamWriter`, I now notice 
that the `StreamEncoder` will flush before closing the stream. I will remove 
the explicit flush in a follow-up commit.
   
   The original implementation left the `OutputStream` open after the call. 
Since I wrap the `OutputStream` in an `OutputStreamWriter` that is closed, it 
will propagate that close call to the underlying stream as well. I'm using the 
`CloseShieldOutputStream` to protect it from being closed and preserve the same 
behavior as before.
   
   I think we should preserve the same behavior as the original code when it 
comes to keeping the `OutputStream` open after the call.
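
The close-shield behavior described here can be sketched with a stdlib-only stand-in (commons-io's real `CloseShieldOutputStream` is more complete; `shield` below is a hypothetical minimal version): `close()` flushes and detaches from the delegate instead of closing it, so wrapping a caller-supplied stream in a try-with-resources writer leaves the caller's stream open.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseShieldSketch {
    // Minimal stand-in for CloseShieldOutputStream: on close, flush and then
    // swap the delegate for a sink that discards writes, never closing it.
    static OutputStream shield(OutputStream delegate) {
        return new FilterOutputStream(delegate) {
            @Override
            public void close() throws IOException {
                flush();
                out = OutputStream.nullOutputStream(); // detach, don't close
            }
        };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        try (OutputStream out = shield(target)) {
            out.write('x');
        }
        target.write('y'); // still usable: target was never closed
        System.out.println(target); // xy
    }
}
```

This is exactly why the PR can use try-with-resources around the `OutputStreamWriter` and still preserve the original keep-open contract for the caller's `OutputStream`.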





Issue Time Tracking
---

Worklog Id: (was: 836264)
Time Spent: 1h 20m  (was: 1h 10m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836249
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 15:05
Start Date: 30/Dec/22 15:05
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367962516

   > > The "fix" for COMPRESS-583 was to document that the behaviour changed 
when setting the entry from a file. For me this would be ok but at the same 
time reproduciblity is no concern for my use cases. But maybe someone else has 
a different opinion.
   > 
   > Creating the entry from a file is still very convenient. I guess we could 
come up with a different constructor for when that's desired without setting 
any of the optional attributes (i.e. the original behavior).
   
   I think we can work on build reproducibility issues in subsequent PRs.
   




Issue Time Tracking
---

Worklog Id: (was: 836249)
Time Spent: 3h  (was: 2h 50m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper 
> bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + 
> data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> 
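The snippet above hinges on one decision: if any timestamp exceeds the 32-bit unix-time range, the writer switches from the info-zip extended-timestamp extra field (0x5455) to the NTFS extra field (0x000A). A self-contained sketch of just that decision follows; the `UPPER_UNIXTIME_BOUND` value mirrors the JDK's private `ZipUtils` constant, and the class and method names are illustrative, not part of Commons Compress or java.util.zip:

```java
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class ExtraTimeFieldChooser {

    // Assumption: unsigned 32-bit seconds, matching the JDK's private bound.
    static final long UPPER_UNIXTIME_BOUND = 0xFFFFFFFFL;

    static long toUnixTime(FileTime t) {
        return t.to(TimeUnit.SECONDS);
    }

    /**
     * Returns "NTFS" (0x000A: 64-bit Windows times, full range) when any
     * timestamp overflows the info-zip range, else "EXTT" (0x5455:
     * 32-bit unix seconds). Null timestamps are simply absent.
     */
    static String chooseExtraField(FileTime mtime, FileTime atime, FileTime ctime) {
        long um = mtime == null ? -1 : toUnixTime(mtime);
        long ua = atime == null ? -1 : toUnixTime(atime);
        long uc = ctime == null ? -1 : toUnixTime(ctime);
        if (um > UPPER_UNIXTIME_BOUND
                || ua > UPPER_UNIXTIME_BOUND
                || uc > UPPER_UNIXTIME_BOUND) {
            return "NTFS";
        }
        return "EXTT";
    }

    public static void main(String[] args) {
        FileTime now = FileTime.fromMillis(1_672_400_000_000L); // late 2022
        FileTime far = FileTime.from(0x1_0000_0000L + 10, TimeUnit.SECONDS); // past 2106
        System.out.println(chooseExtraField(now, now, null)); // EXTT
        System.out.println(chooseExtraField(now, far, null)); // NTFS
    }
}
```

The same predicate is evaluated twice in the JDK code above (once for sizing `elen`, once when writing), which is why extracting it keeps the two passes consistent.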

[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836248
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 15:02
Start Date: 30/Dec/22 15:02
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1059422260


##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -118,14 +148,54 @@ public static void dump(final byte[] data, final long 
offset,
 }
 }
 buffer.append(System.lineSeparator());
-// make explicit the dependency on the default encoding
-stream.write(buffer.toString().getBytes(Charset.defaultCharset()));
-stream.flush();
+appendable.append(buffer);
 buffer.setLength(0);
 display_offset += chars_read;
 }
 }
 
+/**
+ * Dumps an array of bytes to an OutputStream. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ * 
+ * The offset argument specifies the start offset of the data array
+ * within a larger entity like a file or an incoming stream. For example,
+ * if the data array contains the third kibibyte of a file, then the
+ * offset argument should be set to 2048. The offset value printed
+ * at the beginning of each line indicates where in that larger entity
+ * the first byte on that line is located.
+ * 
+ * 
+ * All bytes between the given index (inclusive) and the end of the
+ * data array are dumped.
+ * 
+ *
+ * @param data  the byte array to be dumped
+ * @param offset  offset of the byte array within a larger entity
+ * @param stream  the OutputStream to which the data is to be
+ *   written
+ * @param index initial index into the byte array
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to stream
+ * @throws ArrayIndexOutOfBoundsException if the index is
+ * outside the data array's bounds
+ * @throws NullPointerException if the output stream is null
+ */
+public static void dump(final byte[] data, final long offset,
+final OutputStream stream, final int index)
+throws IOException, ArrayIndexOutOfBoundsException {
+Objects.requireNonNull(stream, "stream");
+
+try (OutputStreamWriter out = new 
OutputStreamWriter(CloseShieldOutputStream.wrap(stream), 
Charset.defaultCharset())) {
+dump(data, offset, out, index, data.length - index);
+out.flush();

Review Comment:
   Let the caller decide when to flush and/or close.
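The review comment above points at the close-shield pattern: the helper wraps the caller's stream so that closing the wrapper does not close the underlying stream. A minimal JDK-only sketch of that idea (a stand-in for Commons IO's `CloseShieldOutputStream`; the class name here is illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Sketch of the close-shield idea: close() flushes but does NOT propagate
 * to the wrapped stream, so the caller decides when to really close it.
 */
class NonClosingOutputStream extends FilterOutputStream {

    NonClosingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void close() throws IOException {
        out.flush(); // shield: flush only, leave the underlying stream open
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (OutputStream shielded = new NonClosingOutputStream(sink)) {
            shielded.write('a');
        } // try-with-resources closes the shield, not the sink
        sink.write('b'); // the underlying stream is still usable
        System.out.println(sink.toString()); // ab
    }
}
```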



##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -118,14 +148,54 @@ public static void dump(final byte[] data, final long 
offset,
 }
 }
 buffer.append(System.lineSeparator());
-// make explicit the dependency on the default encoding
-stream.write(buffer.toString().getBytes(Charset.defaultCharset()));
-stream.flush();
+appendable.append(buffer);
 buffer.setLength(0);
 display_offset += chars_read;
 }
 }
 
+/**
+ * Dumps an array of bytes to an OutputStream. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ * 
+ * The offset argument specifies the start offset of the data array
+ * within a larger entity like a file or an incoming stream. For example,
+ * if the data array contains the third kibibyte of a file, then the
+ * offset argument should be set to 2048. The offset value printed
+ * at the beginning of each line indicates where in that larger entity
+ * the first byte on that line is located.
+ * 
+ * 
+ * All bytes between the given index (inclusive) and the end of the
+ * data array are dumped.
+ * 
+ *
+ * @param data  the byte array to be dumped
+ * @param offset  offset of the byte array within a larger entity
+ * @param stream  the OutputStream to which the data is to be
+ *   written
+ * @param index initial index into the byte array
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to stream
+ * @throws 

[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836239
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 13:38
Start Date: 30/Dec/22 13:38
Worklog Time Spent: 10m 
  Work Description: fkjellberg commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1059394620


##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -118,14 +144,55 @@ public static void dump(final byte[] data, final long 
offset,
 }
 }
 buffer.append(System.lineSeparator());
-// make explicit the dependency on the default encoding
-stream.write(buffer.toString().getBytes(Charset.defaultCharset()));
-stream.flush();
+out.append(buffer);
 buffer.setLength(0);
 display_offset += chars_read;
 }
 }
 
+/**
+ * Dumps an array of bytes to an OutputStream. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ * 
+ * The offset argument specifies the start offset of the data array
+ * within a larger entity like a file or an incoming stream. For example,
+ * if the data array contains the third kibibyte of a file, then the
+ * offset argument should be set to 2048. The offset value printed
+ * at the beginning of each line indicates where in that larger entity
+ * the first byte on that line is located.
+ * 
+ * 
+ * All bytes between the given index (inclusive) and the end of the
+ * data array are dumped.
+ * 
+ *
+ * @param data  the byte array to be dumped
+ * @param offset  offset of the byte array within a larger entity
+ * @param stream  the OutputStream to which the data is to be
+ *   written
+ * @param index initial index into the byte array
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to stream
+ * @throws ArrayIndexOutOfBoundsException if the index is
+ * outside the data array's bounds
+ * @throws NullPointerException if the output stream is null
+ */
+public static void dump(final byte[] data, final long offset,
+final OutputStream stream, final int index)
+throws IOException, ArrayIndexOutOfBoundsException {
+Objects.requireNonNull(stream, "stream");
+
+// make explicit the dependency on the default encoding

Review Comment:
   This comment came from the original code. I've removed it now.



##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -53,7 +56,26 @@ public class HexDump {
 };
 
 /**
- * Dumps an array of bytes to an OutputStream. The output is formatted
+ * Dumps an array of bytes to an Appendable. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ *
+ * @param data  the byte array to be dumped
+ * @param out  the Appendable to which the data is to be written
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to appendable
+ */
+
+public static void dump(final byte[] data, final Appendable out)

Review Comment:
   Fixed





Issue Time Tracking
---

Worklog Id: (was: 836239)
Time Spent: 1h  (was: 50m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., send 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
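The gain from targeting `Appendable` is that `StringBuilder`, `Writer`, and (via a wrapper) `System.out` all satisfy one interface. A toy, self-contained hex-dump helper illustrating the proposed shape — this is a sketch, not the Commons IO implementation or its exact output format:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public final class AppendableHexDump {

    private AppendableHexDump() {
    }

    /** Dumps data to any Appendable: 8-hex-digit offset, up to 16 bytes, ASCII column. */
    public static void dump(byte[] data, Appendable out) throws IOException {
        for (int i = 0; i < data.length; i += 16) {
            out.append(String.format("%08X ", i));
            StringBuilder ascii = new StringBuilder();
            for (int j = i; j < i + 16 && j < data.length; j++) {
                out.append(String.format("%02X ", data[j]));
                char c = (char) (data[j] & 0xFF);
                ascii.append(c >= 32 && c < 127 ? c : '.');
            }
            out.append(ascii).append(System.lineSeparator());
        }
    }

    public static void main(String[] args) throws IOException {
        // Same method serves both a StringBuilder and System.out.
        StringBuilder sb = new StringBuilder();
        dump("Hello".getBytes(StandardCharsets.US_ASCII), sb);
        System.out.print(sb);
    }
}
```

Since `OutputStream` does not implement `Appendable`, the stream overload in the PR bridges via an `OutputStreamWriter`, which is where the flush/close question in the review arises.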



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836238&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836238
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 13:37
Start Date: 30/Dec/22 13:37
Worklog Time Spent: 10m 
  Work Description: fkjellberg commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1059394498


##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -53,7 +56,26 @@ public class HexDump {
 };
 
 /**
- * Dumps an array of bytes to an OutputStream. The output is formatted
+ * Dumps an array of bytes to an Appendable. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ *
+ * @param data  the byte array to be dumped
+ * @param out  the Appendable to which the data is to be written
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to appendable
+ */
+

Review Comment:
   @garydgregory Thanks for reviewing. I've fixed both.





Issue Time Tracking
---

Worklog Id: (was: 836238)
Time Spent: 50m  (was: 40m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836234&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836234
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 12:25
Start Date: 30/Dec/22 12:25
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #418:
URL: https://github.com/apache/commons-io/pull/418#discussion_r1059365488


##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -53,7 +56,26 @@ public class HexDump {
 };
 
 /**
- * Dumps an array of bytes to an OutputStream. The output is formatted
+ * Dumps an array of bytes to an Appendable. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ *
+ * @param data  the byte array to be dumped
+ * @param out  the Appendable to which the data is to be written
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to appendable
+ */
+

Review Comment:
   Remove whitespace between Javadoc and code.



##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -53,7 +56,26 @@ public class HexDump {
 };
 
 /**
- * Dumps an array of bytes to an OutputStream. The output is formatted
+ * Dumps an array of bytes to an Appendable. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ *
+ * @param data  the byte array to be dumped
+ * @param out  the Appendable to which the data is to be written
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to appendable
+ */
+

Review Comment:
   Add a Javadoc `@since` tag to new public and protected elements. 



##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -53,7 +56,26 @@ public class HexDump {
 };
 
 /**
- * Dumps an array of bytes to an OutputStream. The output is formatted
+ * Dumps an array of bytes to an Appendable. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ *
+ * @param data  the byte array to be dumped
+ * @param out  the Appendable to which the data is to be written
+ *
+ * @throws IOException is thrown if anything goes wrong writing
+ * the data to appendable
+ */
+
+public static void dump(final byte[] data, final Appendable out)

Review Comment:
   Rename `out` to `appendable` so it has no chance of being confused with an 
Output* class.



##
src/main/java/org/apache/commons/io/HexDump.java:
##
@@ -118,14 +144,55 @@ public static void dump(final byte[] data, final long 
offset,
 }
 }
 buffer.append(System.lineSeparator());
-// make explicit the dependency on the default encoding
-stream.write(buffer.toString().getBytes(Charset.defaultCharset()));
-stream.flush();
+out.append(buffer);
 buffer.setLength(0);
 display_offset += chars_read;
 }
 }
 
+/**
+ * Dumps an array of bytes to an OutputStream. The output is formatted
+ * for human inspection, with a hexadecimal offset followed by the
+ * hexadecimal values of the next 16 bytes of data and the printable ASCII
+ * characters (if any) that those bytes represent printed per each line
+ * of output.
+ * 
+ * The offset argument specifies the start offset of the data array
+ * within a larger entity like a file or an incoming stream. For example,
+ * if the data array contains the third kibibyte of a file, then the
+ * offset argument should be set to 2048. The offset value printed
+ * at the beginning of each line indicates where in that larger entity
+ * the first byte on that line is located.
+ * 
+ * 
+ * All bytes between the given index (inclusive) and the end of the
+ * data array are dumped.
+ * 
+ *
+ * @param data  the byte array to be dumped
+ * @param offset  offset of the byte array within a larger entity
+ * @param stream  the OutputStream to which the data is to be
+ *   written
+ * @param index initial index into the byte array
+ *
+ * @throws 

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836229&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836229
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 12:01
Start Date: 30/Dec/22 12:01
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367882038

   > The "fix" for COMPRESS-583 was to document that the behaviour changed when 
setting the entry from a file. For me this would be ok, but at the same time 
reproducibility is no concern for my use cases. But maybe someone else has a 
different opinion.
   
   Creating the entry from a file is still very convenient. I guess we could 
come up with a different constructor for when that's desired without setting 
any of the optional attributes (i.e. the original behavior).




Issue Time Tracking
---

Worklog Id: (was: 836229)
Time Spent: 2h 50m  (was: 2h 40m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836226&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836226
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 11:51
Start Date: 30/Dec/22 11:51
Worklog Time Spent: 10m 
  Work Description: theobisproject commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367878120

   The "fix" for COMPRESS-583 was to document that the behaviour changed when 
setting the entry from a file. For me this would be ok, but at the same time 
reproducibility is no concern for my use cases. But maybe someone else has a 
different opinion.




Issue Time Tracking
---

Worklog Id: (was: 836226)
Time Spent: 2h 40m  (was: 2.5h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836222&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836222
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 11:08
Start Date: 30/Dec/22 11:08
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367861655

   I also made some changes to TAR and 7z lately, around those dates.
   
   Maybe I can revisit all of them and fix COMPRESS-583 as well.




Issue Time Tracking
---

Worklog Id: (was: 836222)
Time Spent: 2.5h  (was: 2h 20m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836221&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836221
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 11:06
Start Date: 30/Dec/22 11:06
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367860607

   > I just want to raise awareness that a similar change I made for the 
tar format in 
[COMPRESS-404](https://issues.apache.org/jira/browse/COMPRESS-404) has led to 
problems with reproducible builds (see 
[COMPRESS-583](https://issues.apache.org/jira/browse/COMPRESS-583)). I don't 
know if something similar could happen here too.
   
   Oh, you mean files built using these? Yes, I know this could do that, but it 
was already setting modification dates. Access date would change on each 
access (provided the filesystem supports it and isn't mounted with e.g. 
noatime), though.
   
   I guess I could make setting those things optional with a parameter on the 
constructor or a static constructor for them, but other than that, there isn't 
that much that can be done if the filesystem gives something different all the 
time.
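The reproducibility concern above is usually addressed on the caller's side by pinning every timestamp to a fixed instant (the SOURCE_DATE_EPOCH convention). A JDK-only sketch showing that two builds with identical content and pinned times produce byte-identical archives; the class and helper names are illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.attribute.FileTime;
import java.util.Arrays;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ReproducibleZip {

    // Assumption: pin all times to the unix epoch, as reproducible builds do.
    static final FileTime FIXED = FileTime.fromMillis(0L);

    static byte[] build(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            ZipEntry e = new ZipEntry("data.bin");
            e.setLastModifiedTime(FIXED);
            e.setLastAccessTime(FIXED); // avoid the ever-changing atime
            e.setCreationTime(FIXED);
            zos.putNextEntry(e);
            zos.write(payload);
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] a = build(new byte[] {1, 2, 3});
        byte[] b = build(new byte[] {1, 2, 3});
        System.out.println(Arrays.equals(a, b)); // true: identical bytes
    }
}
```

If the library copied filesystem atime/ctime into extra fields implicitly, callers wanting reproducibility would have to override them like this, which is why an opt-out (or opt-in) constructor is being discussed.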




Issue Time Tracking
---

Worklog Id: (was: 836221)
Time Spent: 2h 20m  (was: 2h 10m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836213&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836213
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 10:30
Start Date: 30/Dec/22 10:30
Worklog Time Spent: 10m 
  Work Description: theobisproject commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1367845914

   I just want to raise awareness that a similar change I have done for the tar 
format in [COMPRESS-404](https://issues.apache.org/jira/browse/COMPRESS-404) 
has led to problems with reproducible builds (see 
[COMPRESS-583](https://issues.apache.org/jira/browse/COMPRESS-583)). I don't 
know if something similar could happen here too.




Issue Time Tracking
---

Worklog Id: (was: 836213)
Time Spent: 2h 10m  (was: 2h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> The actual logic will be more like 
> {{java.util.zip.ZipOutputStream#writeLOC(XEntry)}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
>     elenEXTT += 4;
>     flagEXTT |= EXTT_FLAG_LMT;
>     umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
>     elenEXTT += 4;
>     flagEXTT |= EXTT_FLAG_LAT;
>     uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
>     elenEXTT += 4;
>     flagEXTT |= EXTT_FLAG_CT;
>     uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
>     // use NTFS time if any m/a/ctime is beyond the unixtime upper bound
>     if (umtime > UPPER_UNIXTIME_BOUND ||
>         uatime > UPPER_UNIXTIME_BOUND ||
>         uctime > UPPER_UNIXTIME_BOUND) {
>         elen += 36;                // NTFS time, total 36 bytes
>     } else {
>         elen += (elenEXTT + 5);    // headid(2) + size(2) + flag(1) + data
>     }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
>     writeShort(ZIP64_EXTID);
>     writeShort(16);
>     writeLong(e.size);
>     writeLong(e.csize);
> }
> if (flagEXTT != 0) {
>     if (umtime > UPPER_UNIXTIME_BOUND ||
>         uatime > UPPER_UNIXTIME_BOUND ||
>         uctime > UPPER_UNIXTIME_BOUND) {
>         writeShort(EXTID_NTFS);    // id
>         writeShort(32);            // data size
>         writeInt(0);               // reserved
>         writeShort(0x0001);        // NTFS attr tag
>         writeShort(24);
>         writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>                                   : fileTimeToWinTime(e.mtime));
>         writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>                                   : fileTimeToWinTime(e.atime));
>         writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>                                   : fileTimeToWinTime(e.ctime));
>     } else {
>         writeShort(EXTID_EXTT);
> 
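The decision quoted above boils down to: if any timestamp is set and any of them exceeds the signed 32-bit UNIX-time range, fall back to the NTFS (0x000A) extra field; otherwise use the Info-ZIP extended timestamp (0x5455). A minimal sketch of that selection logic follows; the class and method names are illustrative, not commons-compress API, and the bound mirrors the JDK's `UPPER_UNIXTIME_BOUND`:

```java
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class ExtraFieldChoice {

    // Largest value that fits in the 32-bit signed seconds field of the
    // Info-ZIP extended timestamp (same bound java.util.zip uses).
    static final long UPPER_UNIXTIME_BOUND = 0x7fffffffL;

    /** Picks which time extra field the quoted writeLOC logic would emit. */
    static String chooseTimeExtraField(FileTime mtime, FileTime atime, FileTime ctime) {
        if (mtime == null && atime == null && ctime == null) {
            return "none"; // the DOS time in the local header suffices
        }
        for (FileTime t : new FileTime[] { mtime, atime, ctime }) {
            if (t != null && t.to(TimeUnit.SECONDS) > UPPER_UNIXTIME_BOUND) {
                return "NTFS (0x000A)"; // wider range, 100ns resolution
            }
        }
        return "Info-ZIP extended timestamp (0x5455)";
    }

    public static void main(String[] args) {
        FileTime now = FileTime.from(1_700_000_000L, TimeUnit.SECONDS);
        FileTime far = FileTime.from(4_102_444_800L, TimeUnit.SECONDS); // year 2100
        System.out.println(chooseTimeExtraField(now, null, null)); // 0x5455 case
        System.out.println(chooseTimeExtraField(now, far, null));  // 0x000A case
    }
}
```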

[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836205&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836205
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 30/Dec/22 08:37
Start Date: 30/Dec/22 08:37
Worklog Time Spent: 10m 
  Work Description: fkjellberg commented on PR #418:
URL: https://github.com/apache/commons-io/pull/418#issuecomment-1367794115

   @garydgregory I added one more test and rebased the branch




Issue Time Tracking
---

Worklog Id: (was: 836205)
Time Spent: 0.5h  (was: 20m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
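As a rough illustration of the request (this is not the commons-io implementation; the class and method names below are made up for the sketch), a hex dump that targets any Appendable works equally well with a StringBuilder or System.out:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class HexDumpSketch {

    /** Writes a simple offset-prefixed hex dump of data to any Appendable. */
    static void dump(byte[] data, Appendable out) {
        try {
            for (int i = 0; i < data.length; i += 16) {
                out.append(String.format("%08X", i)); // line offset
                for (int j = i; j < Math.min(i + 16, data.length); j++) {
                    out.append(String.format(" %02X", data[j]));
                }
                out.append(System.lineSeparator());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder(); // any Appendable works
        dump("Hi".getBytes(StandardCharsets.US_ASCII), sb);
        System.out.print(sb); // prints: 00000000 48 69
    }
}
```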



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836190&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836190
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 02:28
Start Date: 30/Dec/22 02:28
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059218171


##
src/main/java/org/apache/commons/compress/archivers/zip/ZipUtil.java:
##
@@ -30,10 +33,85 @@
  * @Immutable
  */
 public abstract class ZipUtil {
+
 /**
+ * DOS time constant for representing timestamps before 1980.
  * Smallest date/time ZIP can handle.
+ * <p>
+ * MS-DOS records file dates and times as packed 16-bit values. An MS-DOS date has the following format.
+ * </p>
+ * <ul>
+ *   <li>Bits 0-4: Day of the month (1-31).</li>
+ *   <li>Bits 5-8: Month (1 = January, 2 = February, and so on).</li>
+ *   <li>Bits 9-15: Year offset from 1980 (add 1980 to get the actual year).</li>
+ * </ul>
+ * <p>
+ * An MS-DOS time has the following format.
+ * </p>
+ * <ul>
+ *   <li>Bits 0-4: Second divided by 2.</li>
+ *   <li>Bits 5-10: Minute (0-59).</li>
+ *   <li>Bits 11-15: Hour (0-23 on a 24-hour clock).</li>
+ * </ul>
+ * <p>
+ * This constant expresses the minimum DOS date of January 1st 1980 at 00:00:00 or, bit-by-bit:
+ * </p>
+ * <ul>
+ *   <li>Year: 0000000</li>
+ *   <li>Month: 0001</li>
+ *   <li>Day: 00001</li>
+ *   <li>Hour: 00000</li>
+ *   <li>Minute: 000000</li>
+ *   <li>Seconds: 00000</li>
+ * </ul>
+ * <p>
+ * This was copied from {@link ZipEntry}.
+ * </p>
+ *
+ * @since 1.23
  */
-private static final byte[] DOS_TIME_MIN = ZipLong.getBytes(0x2100L);

Review Comment:
   @garydgregory note: this was previously the wrong value. We were lucky we 
never stumbled upon this as a bug.
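For reference, the bit layout documented in that javadoc can be checked by decoding the packed value directly. Per that layout (date in the high 16 bits, time in the low 16 bits), the minimum DOS date of January 1st 1980 at 00:00:00 packs to 0x00210000, which is why 0x2100 was wrong. The sketch below is illustrative code, not part of ZipUtil:

```java
public class DosTimeDemo {

    /** Decodes a packed 32-bit MS-DOS date/time per the layout above. */
    static String decode(long dosTime) {
        int date = (int) (dosTime >> 16) & 0xFFFF; // high 16 bits: date
        int time = (int) dosTime & 0xFFFF;         // low 16 bits: time
        int day    = date & 0x1F;                  // bits 0-4
        int month  = (date >> 5) & 0x0F;           // bits 5-8
        int year   = ((date >> 9) & 0x7F) + 1980;  // bits 9-15, offset 1980
        int second = (time & 0x1F) * 2;            // bits 0-4, halved seconds
        int minute = (time >> 5) & 0x3F;           // bits 5-10
        int hour   = (time >> 11) & 0x1F;          // bits 11-15
        return String.format("%04d-%02d-%02d %02d:%02d:%02d",
                year, month, day, hour, minute, second);
    }

    public static void main(String[] args) {
        // Minimum DOS date: year offset 0 (1980), month 1, day 1, midnight.
        System.out.println(decode(0x00210000L)); // prints 1980-01-01 00:00:00
    }
}
```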





Issue Time Tracking
---

Worklog Id: (was: 836190)
Time Spent: 2h  (was: 1h 50m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 2h
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836184
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 00:06
Start Date: 30/Dec/22 00:06
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059196161


##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.
+ *
+ * @param time UNIX timestamp
+ * @return the corresponding FileTime
+ */
+public static FileTime unixTimeToFileTime(final long time) {
+return FileTime.from(time, TimeUnit.SECONDS);
+}
+
+/**
+ * Converts {@link FileTime} to "standard UNIX time".
+ *
+ * @param time the original FileTime
+ * @return the UNIX timestamp
+ */
+public static long fileTimeToUnixTime(final FileTime time) {

Review Comment:
   The scale isn't included in the name, is it?
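To make the scale question concrete: `FileTime.to(TimeUnit.SECONDS)`, which is all `fileTimeToUnixTime` does, truncates sub-second precision — exactly the granularity loss the issue describes when only UNIX-time fields are written. A tiny self-contained demo:

```java
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class FileTimeRoundTrip {
    public static void main(String[] args) {
        // A timestamp with 123 ms of sub-second precision.
        FileTime precise = FileTime.from(1_600_000_000_123L, TimeUnit.MILLISECONDS);

        // Converting to whole UNIX seconds truncates the 123 ms...
        long unix = precise.to(TimeUnit.SECONDS); // 1600000000

        // ...so the round trip does not restore the original instant.
        FileTime restored = FileTime.from(unix, TimeUnit.SECONDS);
        System.out.println(unix);
        System.out.println(precise.equals(restored)); // prints false
    }
}
```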





Issue Time Tracking
---

Worklog Id: (was: 836184)
Time Spent: 1h 50m  (was: 1h 40m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836183&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836183
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 00:04
Start Date: 30/Dec/22 00:04
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059195870


##
src/main/java/org/apache/commons/compress/archivers/zip/X5455_ExtendedTimestamp.java:
##
@@ -130,16 +155,20 @@ private static Date zipLongToDate(final ZipLong unixTime) 
{
 return unixTime != null ? new Date(unixTime.getIntValue() * 1000L) : 
null;
 }
 
+private static FileTime zipLongToFileTime(final ZipLong unixTime) {

Review Comment:
   I get it, but it became somewhat confusing having all methods be overloaded 
like that. I can change, though. It's up to you.





Issue Time Tracking
---

Worklog Id: (was: 836183)
Time Spent: 1h 40m  (was: 1.5h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836182
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 00:04
Start Date: 30/Dec/22 00:04
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059195545


##
src/main/java/org/apache/commons/compress/archivers/zip/X000A_NTFS.java:
##
@@ -97,13 +105,13 @@ private static FileTime zipToFileTime(final 
ZipEightByteInteger z) {
 }
 return TimeUtils.ntfsTimeToFileTime(z.getLongValue());
 }
-

Review Comment:
   Removed by mistake. I'll take a look at the whole changelog again.





Issue Time Tracking
---

Worklog Id: (was: 836182)
Time Spent: 1.5h  (was: 1h 20m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836181&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836181
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 30/Dec/22 00:03
Start Date: 30/Dec/22 00:03
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059195453


##
src/main/java/org/apache/commons/compress/archivers/zip/X000A_NTFS.java:
##
@@ -68,7 +68,14 @@
  * @NotThreadSafe
  */
 public class X000A_NTFS implements ZipExtraField {
-private static final ZipShort HEADER_ID = new ZipShort(0x000a);
+
+/**
+ * The header ID for this extra field.
+ *
+ * @since 1.23
+ */
+public static final ZipShort HEADER_ID = new ZipShort(0x000a);

Review Comment:
   Will do if it works properly.
   
   The reason for making it public was that I was testing it before with my 
project that uses commons-compress and I had it public to be able to use it 
there. I was getting the extra field directly.
   
   But tbh, it makes perfect sense to let it be public if we also have a public 
method in ZipArchiveEntry to fetch an ExtraField based on a ZipLong containing 
its identifier. I know having a small API is important, but should anyone need 
to fetch that extra field (which is allowed using ZipArchiveEntry) it's quite 
frustrating, as the user, that I need to maintain a bunch of my own constants, 
copied from this class, instead of having this class expose this itself.





Issue Time Tracking
---

Worklog Id: (was: 836181)
Time Spent: 1h 20m  (was: 1h 10m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836179
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 29/Dec/22 23:59
Start Date: 29/Dec/22 23:59
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059194898


##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.
+ *
+ * @param time UNIX timestamp
+ * @return the corresponding FileTime
+ */
+public static FileTime unixTimeToFileTime(final long time) {
+return FileTime.from(time, TimeUnit.SECONDS);
+}
+
+/**
+ * Converts {@link FileTime} to "standard UNIX time".
+ *
+ * @param time the original FileTime
+ * @return the UNIX timestamp
+ */
+public static long fileTimeToUnixTime(final FileTime time) {
+return time.to(TimeUnit.SECONDS);
+}
+
+/**
+ * Converts Java time (milliseconds since Epoch) to "standard UNIX time".
+ *
+ * @param time the original Java time
+ * @return the UNIX timestamp
+ */
+public static long javaTimeToUnixTime(final long time) {
+return time / 1000L;
+}
+
+/**
+ * Checks whether a FileTime exceeds the minimum or maximum for the "standard UNIX time".
+ * If the FileTime is null, this method always returns false.
+ *
+ * @param time the FileTime to evaluate, can be null
+ * @return true if the time exceeds the minimum or maximum UNIX time, false otherwise
+ */
+public static boolean exceedsUnixTime(final FileTime time) {

Review Comment:
   I think it could be negated and called "fitsInUnixTime" or something like 
that? Or maybe "isUnixTime" (but also negated)
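Whatever the final name, the check itself compares the FileTime's whole seconds against the signed 32-bit bound that java.util.zip also uses (assumed here to be 0x7fffffff). A standalone sketch, with illustrative names:

```java
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class UnixTimeDemo {

    // Largest seconds value representable in a signed 32-bit UNIX timestamp
    // (2038-01-19T03:14:07Z); beyond this, an NTFS time field is needed.
    static final long UPPER_UNIXTIME_BOUND = 0x7fffffffL;

    /** Null-safe check mirroring the exceedsUnixTime method under review. */
    static boolean exceedsUnixTime(FileTime time) {
        return time != null && time.to(TimeUnit.SECONDS) > UPPER_UNIXTIME_BOUND;
    }

    public static void main(String[] args) {
        FileTime y2000 = FileTime.from(946_684_800L, TimeUnit.SECONDS);
        FileTime y2100 = FileTime.from(4_102_444_800L, TimeUnit.SECONDS);
        System.out.println(exceedsUnixTime(y2000)); // prints false
        System.out.println(exceedsUnixTime(y2100)); // prints true
        System.out.println(exceedsUnixTime(null));  // prints false
    }
}
```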





Issue Time Tracking
---

Worklog Id: (was: 836179)
Time Spent: 1h 10m  (was: 1h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836178
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 29/Dec/22 23:58
Start Date: 29/Dec/22 23:58
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059194715


##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.

Review Comment:
   No idea, TBH. Will change it.





Issue Time Tracking
---

Worklog Id: (was: 836178)
Time Spent: 1h  (was: 50m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.ctime));
> } else {
> writeShort(EXTID_EXTT);
> 
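The branch that decides between the two extra fields in the quoted writeLOC logic can be isolated as below. `UPPER_UNIXTIME_BOUND` is assumed to be the signed 32-bit maximum, as in java.util.zip; the class name is invented for illustration:

```java
public class ExtraTimeFieldChoice {

    // Assumed to match java.util.zip's bound: unix times that still fit a
    // signed 32-bit integer can use the Info-ZIP extended timestamp (0x5455);
    // anything beyond needs the NTFS extra field (0x000a).
    static final long UPPER_UNIXTIME_BOUND = 0x7fffffffL;

    // True when any of the modify/access/create unix times forces NTFS time.
    static boolean needsNtfsTime(final long umtime, final long uatime, final long uctime) {
        return umtime > UPPER_UNIXTIME_BOUND
                || uatime > UPPER_UNIXTIME_BOUND
                || uctime > UPPER_UNIXTIME_BOUND;
    }

    public static void main(final String[] args) {
        // A 2030 timestamp fits in 32 bits; a 2040 timestamp does not.
        System.out.println(needsNtfsTime(1_893_456_000L, -1, -1)); // prints false
        System.out.println(needsNtfsTime(2_208_988_800L, -1, -1)); // prints true
    }
}
```

Unset times are -1 in the quoted logic, so they never exceed the bound and never force NTFS time on their own.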

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836168
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 29/Dec/22 21:14
Start Date: 29/Dec/22 21:14
Worklog Time Spent: 10m 
  Work Description: andrebrait commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059150967


##
pom.xml:
##
@@ -200,7 +200,13 @@ Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, 
arj.
   slf4j-api
   ${slf4j.version}
   test
-
+

Review Comment:
   Not required, but it's only in test scope and just so we can see any logs 
produced by the application while running the tests. Without this, they get 
swallowed.





Issue Time Tracking
---

Worklog Id: (was: 836168)
Time Spent: 50m  (was: 40m)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 50m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836167
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 29/Dec/22 21:13
Start Date: 29/Dec/22 21:13
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059150686


##
pom.xml:
##
@@ -200,7 +200,13 @@ Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, 
arj.
   slf4j-api
   ${slf4j.version}
   test
-
+

Review Comment:
   Remove this dep unless required.





Issue Time Tracking
---

Worklog Id: (was: 836167)
Time Spent: 40m  (was: 0.5h)

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 40m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836166&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836166
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 29/Dec/22 21:11
Start Date: 29/Dec/22 21:11
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #345:
URL: https://github.com/apache/commons-compress/pull/345#discussion_r1059143999


##
src/main/java/org/apache/commons/compress/archivers/zip/X000A_NTFS.java:
##
@@ -68,7 +68,14 @@
  * @NotThreadSafe
  */
 public class X000A_NTFS implements ZipExtraField {
-private static final ZipShort HEADER_ID = new ZipShort(0x000a);
+
+/**
+ * The header ID for this extra field.
+ *
+ * @since 1.23
+ */
+public static final ZipShort HEADER_ID = new ZipShort(0x000a);

Review Comment:
   Why is this public now? Would package-private suffice? Let's make sure to 
avoid increasing the API surface unless we must.



##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.

Review Comment:
   Why is "standard UNIX time" in quotes?



##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.
+ *
+ * @param time UNIX timestamp
+ * @return the corresponding FileTime
+ */
+public static FileTime unixTimeToFileTime(final long time) {
+return FileTime.from(time, TimeUnit.SECONDS);
+}
+
+/**
+ * Converts {@link FileTime} to "standard UNIX time".
+ *
+ * @param time the original FileTime
+ * @return the UNIX timestamp
+ */
+public static long fileTimeToUnixTime(final FileTime time) {
+return time.to(TimeUnit.SECONDS);
+}
+
+/**
+ * Converts Java time (milliseconds since Epoch) to "standard UNIX time".
+ *
+ * @param time the original Java time
+ * @return the UNIX timestamp
+ */
+public static long javaTimeToUnixTime(final long time) {

Review Comment:
   The method name is good here because the parameter and return type are long.
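Since the diff is truncated before the method body, here is a plausible one-liner for the conversion being discussed (milliseconds since the epoch to whole seconds); the body is an assumption for illustration, not the actual commons-compress code:

```java
import java.util.concurrent.TimeUnit;

public class JavaTimeDemo {

    // Assumed implementation of TimeUtils.javaTimeToUnixTime: "Java time" is
    // milliseconds since the epoch, "standard UNIX time" is whole seconds.
    static long javaTimeToUnixTime(final long time) {
        return TimeUnit.MILLISECONDS.toSeconds(time);
    }

    public static void main(final String[] args) {
        System.out.println(javaTimeToUnixTime(1_000_000_000_123L)); // prints 1000000000
    }
}
```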



##
src/main/java/org/apache/commons/compress/archivers/zip/X5455_ExtendedTimestamp.java:
##
@@ -130,16 +155,20 @@ private static Date zipLongToDate(final ZipLong unixTime) {
 return unixTime != null ? new Date(unixTime.getIntValue() * 1000L) : null;
 }
 
+private static FileTime zipLongToFileTime(final ZipLong unixTime) {

Review Comment:
   The "zipLong" method name prefix is redundant with the argument type: 
`zipLongToFileTime` -> `toFileTime`, or probably better `zipLongToFileTime` -> 
`unixToFileTime`.



##
src/main/java/org/apache/commons/compress/archivers/zip/X000A_NTFS.java:
##
@@ -97,13 +105,13 @@ private static FileTime zipToFileTime(final ZipEightByteInteger z) {
 }
 return TimeUtils.ntfsTimeToFileTime(z.getLongValue());
 }
-
 private ZipEightByteInteger modifyTime = ZipEightByteInteger.ZERO;
 
 private ZipEightByteInteger accessTime = ZipEightByteInteger.ZERO;
 
 private ZipEightByteInteger createTime = ZipEightByteInteger.ZERO;
 
+

Review Comment:
   Remove extra empty line.



##
src/main/java/org/apache/commons/compress/archivers/zip/X5455_ExtendedTimestamp.java:
##
@@ -242,6 +285,20 @@ public Date getCreateJavaTime() {
 return zipLongToDate(createTime);
 }
 
+/**
+ * Returns the create time as a {@link FileTime}
+ * of this zip entry, or null if no such timestamp exists in the zip entry.
+ * The milliseconds are always zeroed out, since the underlying data
+ * offers only per-second precision.
+ *
+ * @return create time as {@link FileTime} or null.
+ *

Review Comment:
   Remove the extra empty line between the `@return` and `@since` Javadoc tags.



##
src/main/java/org/apache/commons/compress/utils/TimeUtils.java:
##
@@ -46,6 +46,62 @@ public final class TimeUtils {
  */
 static final long WINDOWS_EPOCH_OFFSET = -1164447360L;
 
+/**
+ * Converts "standard UNIX time" (in seconds, UTC/GMT) to {@link FileTime}.
+ *
+ * @param time UNIX timestamp
+ * @return the corresponding FileTime
+ */
+public static FileTime unixTimeToFileTime(final long time) {
+return FileTime.from(time, TimeUnit.SECONDS);
+}
+
+/**
+ * Converts {@link FileTime} to "standard UNIX time".
+ *
+ * @param time the original 

[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836163&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836163
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 29/Dec/22 20:37
Start Date: 29/Dec/22 20:37
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #418:
URL: https://github.com/apache/commons-io/pull/418#issuecomment-1367573842

The code coverage delta is negative with this PR, so you are likely missing some 
code paths in the new code.




Issue Time Tracking
---

Worklog Id: (was: 836163)
Time Spent: 20m  (was: 10m)

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
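The proposal above can be sketched with a tiny Appendable-based dump. The method name and output layout here are assumptions for illustration, not the actual commons-io HexDump API:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class HexDumpSketch {

    // Hypothetical Appendable-based dump: writes "offset: hex bytes" lines,
    // 16 bytes per row, to any Appendable (StringBuilder, System.out, ...).
    static void dump(final byte[] data, final Appendable out) {
        try {
            for (int i = 0; i < data.length; i += 16) {
                out.append(String.format("%08x:", i));
                for (int j = i; j < Math.min(i + 16, data.length); j++) {
                    out.append(String.format(" %02x", data[j]));
                }
                out.append('\n');
            }
        } catch (final IOException e) {
            // StringBuilder never throws here; stream-backed Appendables might.
            throw new UncheckedIOException(e);
        }
    }

    public static void main(final String[] args) {
        final StringBuilder sb = new StringBuilder();
        dump(new byte[] { 0x48, 0x69 }, sb); // "Hi" in ASCII
        System.out.print(sb); // prints 00000000: 48 69
    }
}
```

Because `Appendable` is the common supertype, the same call works unchanged with `System.out`.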



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (IO-784) Add support for Appendable to HexDump util

2022-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?focusedWorklogId=836138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836138
 ]

ASF GitHub Bot logged work on IO-784:
-

Author: ASF GitHub Bot
Created on: 29/Dec/22 16:18
Start Date: 29/Dec/22 16:18
Worklog Time Spent: 10m 
  Work Description: codecov-commenter commented on PR #418:
URL: https://github.com/apache/commons-io/pull/418#issuecomment-1367441946

   # 
[Codecov](https://codecov.io/gh/apache/commons-io/pull/418?src=pr=h1_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 Report
   > Merging 
[#418](https://codecov.io/gh/apache/commons-io/pull/418?src=pr=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (2b60f3a) into 
[master](https://codecov.io/gh/apache/commons-io/commit/3bd9659002d88405a60e21231cca78c3da7a7ecc?el=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (3bd9659) will **decrease** coverage by `0.04%`.
   > The diff coverage is `92.85%`.
   
   ```diff
   @@ Coverage Diff  @@
   ## master #418  +/-   ##
   
   - Coverage 86.11%   86.07%   -0.05% 
   - Complexity 3212 3214   +2 
   
 Files   215  215  
 Lines  7496 7503   +7 
 Branches906  907   +1 
   
   + Hits   6455 6458   +3 
   - Misses  794  796   +2 
   - Partials247  249   +2 
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/commons-io/pull/418?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 | Coverage Δ | |
   |---|---|---|
   | 
[src/main/java/org/apache/commons/io/HexDump.java](https://codecov.io/gh/apache/commons-io/pull/418/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvaW8vSGV4RHVtcC5qYXZh)
 | `93.47% <92.85%> (-1.12%)` | :arrow_down: |
   | 
[.../main/java/org/apache/commons/io/input/Tailer.java](https://codecov.io/gh/apache/commons-io/pull/418/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvaW8vaW5wdXQvVGFpbGVyLmphdmE=)
 | `85.57% <0.00%> (-1.50%)` | :arrow_down: |
   | 
[...ache/commons/io/input/ReversedLinesFileReader.java](https://codecov.io/gh/apache/commons-io/pull/418/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvaW8vaW5wdXQvUmV2ZXJzZWRMaW5lc0ZpbGVSZWFkZXIuamF2YQ==)
 | `87.12% <0.00%> (-0.20%)` | :arrow_down: |
   
   :mega: We’re building smart automated test selection to slash your CI/CD 
build times. [Learn 
more](https://about.codecov.io/iterative-testing/?utm_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
   




Issue Time Tracking
---

Worklog Id: (was: 836138)
Remaining Estimate: 0h
Time Spent: 10m

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836012&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836012
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 28/Dec/22 20:49
Start Date: 28/Dec/22 20:49
Worklog Time Spent: 10m 
  Work Description: codecov-commenter commented on PR #345:
URL: https://github.com/apache/commons-compress/pull/345#issuecomment-1366908609

   # 
[Codecov](https://codecov.io/gh/apache/commons-compress/pull/345?src=pr=h1_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 Report
   > Merging 
[#345](https://codecov.io/gh/apache/commons-compress/pull/345?src=pr=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (894040c) into 
[master](https://codecov.io/gh/apache/commons-compress/commit/f6dadd24b4b20f46541110b0146ce8413430e873?el=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (f6dadd2) will **increase** coverage by `0.10%`.
   > The diff coverage is `89.01%`.
   
   ```diff
   @@ Coverage Diff  @@
   ## master #345  +/-   ##
   
   + Coverage 80.33%   80.43%   +0.10% 
   - Complexity 6653 6718  +65 
   
 Files   342  342  
 Lines 2523225356 +124 
 Branches   4085 4107  +22 
   
   + Hits  2027120396 +125 
   + Misses 3382 3373   -9 
   - Partials   1579 1587   +8 
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/commons-compress/pull/345?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 | Coverage Δ | |
   |---|---|---|
   | 
[...compress/archivers/zip/ZipArchiveOutputStream.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL3ppcC9aaXBBcmNoaXZlT3V0cHV0U3RyZWFtLmphdmE=)
 | `74.28% <ø> (-0.05%)` | :arrow_down: |
   | 
[...ommons/compress/archivers/zip/ZipArchiveEntry.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL3ppcC9aaXBBcmNoaXZlRW50cnkuamF2YQ==)
 | `82.50% <85.82%> (+2.43%)` | :arrow_up: |
   | 
[...a/org/apache/commons/compress/utils/TimeUtils.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvdXRpbHMvVGltZVV0aWxzLmphdmE=)
 | `96.29% <90.00%> (-3.71%)` | :arrow_down: |
   | 
[...mons/compress/archivers/cpio/CpioArchiveEntry.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL2NwaW8vQ3Bpb0FyY2hpdmVFbnRyeS5qYXZh)
 | `73.65% <100.00%> (ø)` | |
   | 
[...ommons/compress/archivers/tar/TarArchiveEntry.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL3Rhci9UYXJBcmNoaXZlRW50cnkuamF2YQ==)
 | `71.71% <100.00%> (ø)` | |
   | 
[...compress/archivers/tar/TarArchiveOutputStream.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL3Rhci9UYXJBcmNoaXZlT3V0cHV0U3RyZWFtLmphdmE=)
 | `88.61% <100.00%> (ø)` | |
   | 
[...che/commons/compress/archivers/zip/X000A\_NTFS.java](https://codecov.io/gh/apache/commons-compress/pull/345/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvY29tcHJlc3MvYXJjaGl2ZXJzL3ppcC9YMDAwQV9OVEZTLmphdmE=)
 | `71.29% <100.00%> (+4.62%)` | :arrow_up: |
   | 

[jira] [Work logged] (COMPRESS-613) Write ZIP extra time fields automatically

2022-12-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?focusedWorklogId=836010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836010
 ]

ASF GitHub Bot logged work on COMPRESS-613:
---

Author: ASF GitHub Bot
Created on: 28/Dec/22 20:46
Start Date: 28/Dec/22 20:46
Worklog Time Spent: 10m 
  Work Description: andrebrait opened a new pull request, #345:
URL: https://github.com/apache/commons-compress/pull/345

   This adds support for extra time data (InfoZip and NTFS Extra fields) to Zip 
archives.
   This supports both reading and writing these fields automatically, 
including when creating a ZipArchiveEntry from an existing file on disk, 
with proper fallbacks.
   
   Additionally:
   
   1. Works around a bug involving Integer Overflow in Java 8 and Zip archives: 
https://bugs.openjdk.org/browse/JDK-8130914
   2. Consolidates a few more time conversions into TimeUtils
   3. Works around an oversight in Java 8 where the NTFS fields on Zip archives 
are only read to the precision of microseconds instead of their maximum of 
100ns.
   
   Currently, it's missing a few test cases for reading/writing, though I have 
tested those paths myself. Submitting this for review because the code itself is 
likely final (there are enough tests to cover most of it, and I'll submit 
the rest of the test cases in the coming days).




Issue Time Tracking
---

Worklog Id: (was: 836010)
Remaining Estimate: 0h
Time Spent: 10m

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 10m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835929
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 28/Dec/22 12:31
Start Date: 28/Dec/22 12:31
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1366617619

   @psteitz ?




Issue Time Tracking
---

Worklog Id: (was: 835929)
Time Spent: 2h 20m  (was: 2h 10m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's postfix always starts with 1, so many 
> InstanceAlreadyExistsExceptions may be thrown before it registers successfully.
> Maybe a random number is a better choice, or an atomic long.
> {code:java}
> private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException | NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {code}
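The description above proposes seeding the name suffix from a random number or an atomic counter instead of always starting at 1. A minimal sketch of the AtomicLong variant follows; the class and method names are hypothetical, and this is not the actual Commons Pool implementation.

```java
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class JmxNameSketch {
    // Process-wide counter: each new pool starts probing from a fresh
    // suffix instead of re-testing 1, 2, 3, ... every time.
    private static final AtomicLong SUFFIX = new AtomicLong(1);

    static ObjectName nextCandidate(String base, String prefix) {
        long i = SUFFIX.getAndIncrement();
        try {
            // Keep the first name suffix-free, matching the existing scheme.
            return new ObjectName(i == 1 ? base + prefix : base + prefix + i);
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException("bad JMX name", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(nextCandidate("org.example:type=Pool,name=", "pool"));
        System.out.println(nextCandidate("org.example:type=Pool,name=", "pool"));
    }
}
```

A registration could still collide and need a retry, but a shared counter means successive pools stop contending for the same low suffixes.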



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COLLECTIONS-806) Upgrade junit.framework.Test to org.junit.jupiter.api.Test

2022-12-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-806?focusedWorklogId=835772=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835772
 ]

ASF GitHub Bot logged work on COLLECTIONS-806:
--

Author: ASF GitHub Bot
Created on: 27/Dec/22 13:43
Start Date: 27/Dec/22 13:43
Worklog Time Spent: 10m 
  Work Description: garydgregory merged PR #371:
URL: https://github.com/apache/commons-collections/pull/371




Issue Time Tracking
---

Worklog Id: (was: 835772)
Time Spent: 3h  (was: 2h 50m)

> Upgrade junit.framework.Test to org.junit.jupiter.api.Test
> --
>
> Key: COLLECTIONS-806
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-806
> Project: Commons Collections
>  Issue Type: Sub-task
>Reporter: John Patrick
>Priority: Major
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Covers 57 usages of the legacy import:
> {code:java}
> import junit.framework.Test;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COLLECTIONS-806) Upgrade junit.framework.Test to org.junit.jupiter.api.Test

2022-12-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-806?focusedWorklogId=835703=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835703
 ]

ASF GitHub Bot logged work on COLLECTIONS-806:
--

Author: ASF GitHub Bot
Created on: 26/Dec/22 21:25
Start Date: 26/Dec/22 21:25
Worklog Time Spent: 10m 
  Work Description: codecov-commenter commented on PR #371:
URL: 
https://github.com/apache/commons-collections/pull/371#issuecomment-1365467714

   # 
[Codecov](https://codecov.io/gh/apache/commons-collections/pull/371?src=pr=h1_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 Report
   > Merging 
[#371](https://codecov.io/gh/apache/commons-collections/pull/371?src=pr=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (2f1b25a) into 
[master](https://codecov.io/gh/apache/commons-collections/commit/511d171516b8575a4aec55924ec39be64ccd6974?el=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (511d171) will **not change** coverage.
   > The diff coverage is `n/a`.
   
   ```diff
   @@            Coverage Diff            @@
   ##             master     #371   +/-   ##
   =========================================
     Coverage     81.19%   81.19%
     Complexity     4604     4604
   =========================================
     Files           288      288
     Lines         13424    13424
     Branches       1982     1982
   =========================================
     Hits          10900    10900
     Misses         1932     1932
     Partials        592      592
   ```
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 835703)
Time Spent: 2h 50m  (was: 2h 40m)

> Upgrade junit.framework.Test to org.junit.jupiter.api.Test
> --
>
> Key: COLLECTIONS-806
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-806
> Project: Commons Collections
>  Issue Type: Sub-task
>Reporter: John Patrick
>Priority: Major
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Covers 57 usages of the legacy import:
> {code:java}
> import junit.framework.Test;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COLLECTIONS-806) Upgrade junit.framework.Test to org.junit.jupiter.api.Test

2022-12-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-806?focusedWorklogId=835702=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835702
 ]

ASF GitHub Bot logged work on COLLECTIONS-806:
--

Author: ASF GitHub Bot
Created on: 26/Dec/22 20:58
Start Date: 26/Dec/22 20:58
Worklog Time Spent: 10m 
  Work Description: pas725 opened a new pull request, #371:
URL: https://github.com/apache/commons-collections/pull/371

   **Jira**: https://issues.apache.org/jira/browse/COLLECTIONS-806
   
   **Description**: COLLECTIONS-806 is a subtask of the main task 
[COLLECTIONS-777-Fully migrate to JUnit 
5](https://issues.apache.org/jira/browse/COLLECTIONS-777)
   
   **Subtask** **COLLECTIONS-806 description**: Upgrade junit.framework.Test to 
org.junit.jupiter.api.Test
   
   
   **Changes**
   - Removed references of `junit.framework.Test` from tests
   - Removed unused method `BulkTest.makeSuite()`




Issue Time Tracking
---

Worklog Id: (was: 835702)
Time Spent: 2h 40m  (was: 2.5h)

> Upgrade junit.framework.Test to org.junit.jupiter.api.Test
> --
>
> Key: COLLECTIONS-806
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-806
> Project: Commons Collections
>  Issue Type: Sub-task
>Reporter: John Patrick
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Covers 57 usages of the legacy import:
> {code:java}
> import junit.framework.Test;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835611=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835611
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 24/Dec/22 22:27
Start Date: 24/Dec/22 22:27
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1364590761

   Hi all,
   
   I don't think we need to address tests since this PR just tweaks the existing 
allocation. Unless we go with random numbers, we just have to accept that one 
app can allocate names 1 through 100, and that all other apps have to test 1 
through 100 before settling on 101, 102, and so on. I think there can still 
be a race condition, but we can fix that separately. 
   
   Curious to see what @psteitz thinks.




Issue Time Tracking
---

Worklog Id: (was: 835611)
Time Spent: 2h 10m  (was: 2h)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835588=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835588
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 24/Dec/22 17:36
Start Date: 24/Dec/22 17:36
Worklog Time Spent: 10m 
  Work Description: niallkp commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1364561311

   Firstly I would say that this PR doesn't change the logic of the method - it 
just checks first whether there's already an MBean with the same name, rather 
than trying to register an MBean and catching the InstanceAlreadyExistsException 
it throws. This appears to improve performance roughly 10x despite all the 
additional calls to the **_isRegistered()_** method.
   
   On the issue of multiple threads registering multiple pools at the same 
time: they will iterate until a free name is found, and the only possibility I 
see for an infinite loop would be if all possible names had already been 
registered. It's hard to conceive of that being the case, and I suspect other 
issues (such as running out of memory) would occur first.
   
   So locally I created a test case with two Callable instances which each 
create 3 pools in separate threads and then print out the JMX name of each 
pool. It produced the following results:
   
   **First Task:**
   
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool2
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool4
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool6
   
   **Second Task:**
   
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool3
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool5
   - org.apache.commons.pool2:type=GenericObjectPool,name=pool7
   
   If you want I can add this test case to the PR?
   
   (**Note**: the reason there isn't an MBean with **_name=pool_** is that 
**TestBaseGenericObjectPool** sets up a GenericObjectPool in its **setup()** 
method, which is already registered.) 
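The check-before-register idea described in this comment can be sketched roughly as follows. The names are illustrative and the exception handling is simplified; this is not the actual PR code.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class RegisterSketch {
    // Probe isRegistered() instead of letting registerMBean() throw
    // InstanceAlreadyExistsException for every taken suffix.
    static ObjectName findFreeName(MBeanServer mbs, String base, String prefix) {
        try {
            int i = 1;
            while (true) {
                // First candidate has no numeric suffix, as in the original code.
                ObjectName candidate = new ObjectName(
                        i == 1 ? base + prefix : base + prefix + i);
                if (!mbs.isRegistered(candidate)) {
                    return candidate;
                }
                i++;
            }
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException("bad JMX name", e);
        }
    }

    public static void main(String[] args) {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        System.out.println(findFreeName(mbs, "org.example:type=Pool,name=", "pool"));
    }
}
```

As discussed later in the thread, a race between isRegistered() and the subsequent registerMBean() is still possible, so a real implementation would keep the exception handling as a fallback.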
   
   
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 835588)
Time Spent: 2h  (was: 1h 50m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835586
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 24/Dec/22 15:27
Start Date: 24/Dec/22 15:27
Worklog Time Spent: 10m 
  Work Description: kinow commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1364545324

   The class docs state it's intended to be thread-safe, and the `i` variable's 
scope is limited to the method, so I think it should be fine with regard to 
other threads instantiating pools.
   
   I think @garydgregory is right to ask whether there is a scenario where it 
just keeps looping, trying to register `jmxbean`, `jmxbean1`, `jmxbean2`, and 
so on. But if that's a possible scenario, it deserves a separate issue, as I 
think it affects `master` too (or at least deserves a comment in the javadocs 
or in the code body telling others why it wouldn't happen).




Issue Time Tracking
---

Worklog Id: (was: 835586)
Time Spent: 1h 50m  (was: 1h 40m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835584
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 24/Dec/22 15:13
Start Date: 24/Dec/22 15:13
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1364543702

   What if another thread, somewhere, is also using Pool, also wanting JMX...




Issue Time Tracking
---

Worklog Id: (was: 835584)
Time Spent: 1h 40m  (was: 1.5h)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835577=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835577
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 24/Dec/22 14:13
Start Date: 24/Dec/22 14:13
Worklog Time Spent: 10m 
  Work Description: niallkp commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1364536206

   Sorry Gary, not sure what you mean about retries. When it retries, it 
changes the name by incrementing the count, since the count is appended to the 
name of the MBean being registered. So I'm not sure how this could get into an 
infinite loop.




Issue Time Tracking
---

Worklog Id: (was: 835577)
Time Spent: 1.5h  (was: 1h 20m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (LANG-1682) Adding StringUtils.startsWithAnyIgnoreCase method

2022-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1682?focusedWorklogId=835504=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835504
 ]

ASF GitHub Bot logged work on LANG-1682:


Author: ASF GitHub Bot
Created on: 23/Dec/22 12:07
Start Date: 23/Dec/22 12:07
Worklog Time Spent: 10m 
  Work Description: Enigo commented on PR #848:
URL: https://github.com/apache/commons-lang/pull/848#issuecomment-1363899271

   I'm not sure that would be user-friendly, as most users are used to a single 
class for all operations with strings, numbers, lists, sets, etc. That would be 
quite a drastic change. But I do agree that this class is quite crowded already. 




Issue Time Tracking
---

Worklog Id: (was: 835504)
Time Spent: 50m  (was: 40m)

> Adding StringUtils.startsWithAnyIgnoreCase method
> -
>
> Key: LANG-1682
> URL: https://issues.apache.org/jira/browse/LANG-1682
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Reporter: Ruslan Sibgatullin
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Adding `StringUtils.startsWithAnyIgnoreCase` to have more flexibility.
> Based on the existing `startsWith` method
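A sketch of what such a method could look like, mirroring the pattern of the existing startsWithAny/startsWithIgnoreCase methods. The class name is hypothetical and this is not the actual Commons Lang code.

```java
public class StartsWithSketch {
    // Hypothetical helper: case-insensitive variant of startsWithAny,
    // using String.regionMatches with ignoreCase = true.
    public static boolean startsWithAnyIgnoreCase(String str, String... prefixes) {
        if (str == null || prefixes == null) {
            return false;
        }
        for (String prefix : prefixes) {
            if (prefix != null
                    && str.regionMatches(true, 0, prefix, 0, prefix.length())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(startsWithAnyIgnoreCase("ABCdef", "xyz", "abc")); // true
    }
}
```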



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (LANG-1682) Adding StringUtils.startsWithAnyIgnoreCase method

2022-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1682?focusedWorklogId=835503=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835503
 ]

ASF GitHub Bot logged work on LANG-1682:


Author: ASF GitHub Bot
Created on: 23/Dec/22 11:58
Start Date: 23/Dec/22 11:58
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #848:
URL: https://github.com/apache/commons-lang/pull/848#issuecomment-1363892443

   Needs further thought: I don't think it makes sense to overcrowd this class 
with more "ignore case" versions of methods, and also "any" or "all" versions 
of methods. Instead, we should consider another design, maybe a string utility 
class with a subclass for case-sensitive matches and another for 
case-insensitive matches, for example.
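The alternative design floated above could look roughly like this, here sketched with two singleton instances rather than subclasses. All names are entirely hypothetical.

```java
public class CaseStrings {
    private final boolean ignoreCase;

    private CaseStrings(boolean ignoreCase) {
        this.ignoreCase = ignoreCase;
    }

    // One instance per matching mode instead of ever more *IgnoreCase methods.
    public static final CaseStrings CS = new CaseStrings(false); // case-sensitive
    public static final CaseStrings CI = new CaseStrings(true);  // case-insensitive

    public boolean startsWith(String str, String prefix) {
        return str != null && prefix != null
                && str.regionMatches(ignoreCase, 0, prefix, 0, prefix.length());
    }

    public boolean startsWithAny(String str, String... prefixes) {
        if (prefixes != null) {
            for (String p : prefixes) {
                if (startsWith(str, p)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(CI.startsWithAny("ABCdef", "abc")); // true
        System.out.println(CS.startsWithAny("ABCdef", "abc")); // false
    }
}
```

One call site then reads `CI.startsWithAny(str, "http:", "https:")`, and no new method name is needed per case-sensitivity variant.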




Issue Time Tracking
---

Worklog Id: (was: 835503)
Time Spent: 40m  (was: 0.5h)

> Adding StringUtils.startsWithAnyIgnoreCase method
> -
>
> Key: LANG-1682
> URL: https://issues.apache.org/jira/browse/LANG-1682
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Reporter: Ruslan Sibgatullin
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Adding `StringUtils.startsWithAnyIgnoreCase` to have more flexibility.
> Based on the existing `startsWith` method



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (LANG-1682) Adding StringUtils.startsWithAnyIgnoreCase method

2022-12-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1682?focusedWorklogId=835446=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835446
 ]

ASF GitHub Bot logged work on LANG-1682:


Author: ASF GitHub Bot
Created on: 23/Dec/22 05:38
Start Date: 23/Dec/22 05:38
Worklog Time Spent: 10m 
  Work Description: Enigo commented on PR #848:
URL: https://github.com/apache/commons-lang/pull/848#issuecomment-1363635919

   Hey @garydgregory 
   any chance for this PR to be reviewed and merged?
   thanks!
   




Issue Time Tracking
---

Worklog Id: (was: 835446)
Time Spent: 0.5h  (was: 20m)

> Adding StringUtils.startsWithAnyIgnoreCase method
> -
>
> Key: LANG-1682
> URL: https://issues.apache.org/jira/browse/LANG-1682
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Reporter: Ruslan Sibgatullin
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Adding `StringUtils.startsWithAnyIgnoreCase` to have more flexibility.
> Based on the existing `startsWith` method



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835415=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835415
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 22/Dec/22 23:45
Start Date: 22/Dec/22 23:45
Worklog Time Spent: 10m 
  Work Description: codecov-commenter commented on PR #199:
URL: https://github.com/apache/commons-pool/pull/199#issuecomment-1363443034

   # 
[Codecov](https://codecov.io/gh/apache/commons-pool/pull/199?src=pr=h1_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 Report
   > Merging 
[#199](https://codecov.io/gh/apache/commons-pool/pull/199?src=pr=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (3026c61) into 
[master](https://codecov.io/gh/apache/commons-pool/commit/eb2cf8eb2b7984e7300cb6875ad3882508ff56f3?el=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (eb2cf8e) will **increase** coverage by `0.10%`.
   > The diff coverage is `100.00%`.
   
   ```diff
   @@             Coverage Diff              @@
   ##             master     #199      +/-   ##
   ============================================
   + Coverage     81.83%   81.94%   +0.10%
   - Complexity      760      763       +3
   ============================================
     Files            42       42
     Lines          3066     3068       +2
     Branches        308      309       +1
   ============================================
   + Hits           2509     2514       +5
   + Misses          450      449       -1
   + Partials        107      105       -2
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/commons-pool/pull/199?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 | Coverage Δ | |
   |---|---|---|
   | 
[...ache/commons/pool2/impl/BaseGenericObjectPool.java](https://codecov.io/gh/apache/commons-pool/pull/199/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvcG9vbDIvaW1wbC9CYXNlR2VuZXJpY09iamVjdFBvb2wuamF2YQ==)
 | `88.36% <100.00%> (-0.43%)` | :arrow_down: |
   | 
[...g/apache/commons/pool2/impl/GenericObjectPool.java](https://codecov.io/gh/apache/commons-pool/pull/199/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvcG9vbDIvaW1wbC9HZW5lcmljT2JqZWN0UG9vbC5qYXZh)
 | `85.41% <0.00%> (+1.30%)` | :arrow_up: |
   




Issue Time Tracking
---

Worklog Id: (was: 835415)
Time Spent: 1h 20m  (was: 1h 10m)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before a pool is registered successfully.
> Maybe a random number would be a better choice, or an atomic long.

[jira] [Work logged] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2022-12-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?focusedWorklogId=835413=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-835413
 ]

ASF GitHub Bot logged work on POOL-393:
---

Author: ASF GitHub Bot
Created on: 22/Dec/22 23:16
Start Date: 22/Dec/22 23:16
Worklog Time Spent: 10m 
  Work Description: niallkp opened a new pull request, #199:
URL: https://github.com/apache/commons-pool/pull/199

   The algorithm for generating the JMX name for newly created pools can be 
very slow if the number of pools is large. This PR makes a 10x improvement 
without changing the naming sequence.
   
   I tried a couple of approaches - first retrieving all the registered pool 
names using the MBeanServer's **_queryNames(ObjectName, QueryExp)_** method, 
and then using the MBeanServer's **_isRegistered(ObjectName)_** method. The 
latter involved many more JMX calls but was slightly faster and gave simpler 
code - so this PR uses that approach.
   
   This PR seems to provide the performance improvement without the behavior 
change that Phil didn't like in 
https://github.com/apache/commons-pool/pull/115
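For illustration, the `isRegistered(ObjectName)` probing described above can be sketched as follows. The class and method names are invented for this example; this is not the actual commons-pool code, but it preserves the original naming sequence (first pool suffix-free, then 2, 3, …):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxNameProbe {

    // Probe candidate names with isRegistered() instead of attempting a
    // registration and catching InstanceAlreadyExistsException for every
    // suffix that is already taken.
    static ObjectName nextFreeName(MBeanServer mbs, String base, String prefix)
            throws Exception {
        // The first pool keeps the clean, suffix-free name.
        ObjectName candidate = new ObjectName(base + prefix);
        for (int i = 2; mbs.isRegistered(candidate); i++) {
            candidate = new ObjectName(base + prefix + i);
        }
        return candidate;
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        System.out.println(nextFreeName(mbs,
                "org.apache.commons.pool2:type=GenericObjectPool,name=", "pool"));
    }
}
```

Each probe is one JMX call, which is why this loop beats registering-and-catching even though it may issue more calls overall.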




Issue Time Tracking
---

Worklog Id: (was: 835413)
Time Spent: 1h 10m  (was: 1h)

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX.
> In the code, the ObjectName's suffix always starts at 1, so many
> InstanceAlreadyExistsExceptions may be thrown before registration succeeds.
> Maybe a random number is a better choice, or an atomic long.
> {quote}private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         } catch (NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {quote}
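The reporter's alternative - seeding names from an atomic counter so the first registration attempt almost always succeeds - could look roughly like this. The class and method names are invented for illustration; this is not what the PR implements:

```java
import java.util.concurrent.atomic.AtomicLong;

public class PoolNameSequence {

    // Process-wide counter: each pool gets a unique suffix up front instead
    // of rediscovering the next free suffix via repeated failed registrations.
    private static final AtomicLong POOL_COUNTER = new AtomicLong();

    static String nextName(String base, String prefix) {
        long id = POOL_COUNTER.incrementAndGet();
        // Keep the original convention: the first pool has no numeric suffix.
        return id == 1 ? base + prefix : base + prefix + id;
    }

    public static void main(String[] args) {
        System.out.println(nextName("org.example:name=", "pool"));
        System.out.println(nextName("org.example:name=", "pool"));
    }
}
```

Unlike a probing loop, a counter never re-uses a suffix freed by a closed pool, which changes the naming sequence - the trade-off the merged PR deliberately avoids.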



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=834739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834739
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 20/Dec/22 11:32
Start Date: 20/Dec/22 11:32
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1359219490

   Nope, we have other components to work through and Commons Net just had a 
release.




Issue Time Tracking
---

Worklog Id: (was: 834739)
Time Spent: 3h 10m  (was: 3h)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
> Fix For: 1.23
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
>  The purpose is to provide password-based encryption for 7z compression, in 
> the same way as the decryption already supported, thereby removing [one known 
> limitation|https://commons.apache.org/proper/commons-compress/limitations.html]
> ☝️In this way, I would like to submit my contribution based on the existing 
> implementation of decryption and the [C++ implementation of 
> 7z|https://github.com/kornelski/7z/blob/main/CPP/7zip/Crypto/7zAes.cpp]
> ✅ I added one unit test
>  
> I prepared a [Pull Request on 
> GitHub|https://github.com/apache/commons-compress/pull/332]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=834631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834631
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 19/Dec/22 23:57
Start Date: 19/Dec/22 23:57
Worklog Time Spent: 10m 
  Work Description: Dougniel commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1358641774

   Hi @garydgregory, an idea of next release date ?




Issue Time Tracking
---

Worklog Id: (was: 834631)
Time Spent: 3h  (was: 2h 50m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
> Fix For: 1.23
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
>  The purpose is to provide password-based encryption for 7z compression, in 
> the same way as the decryption already supported, thereby removing [one known 
> limitation|https://commons.apache.org/proper/commons-compress/limitations.html]
> ☝️In this way, I would like to submit my contribution based on the existing 
> implementation of decryption and the [C++ implementation of 
> 7z|https://github.com/kornelski/7z/blob/main/CPP/7zip/Crypto/7zAes.cpp]
> ✅ I added one unit test
>  
> I prepared a [Pull Request on 
> GitHub|https://github.com/apache/commons-compress/pull/332]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834329
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 17/Dec/22 21:14
Start Date: 17/Dec/22 21:14
Worklog Time Spent: 10m 
  Work Description: rzo1 commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1356473724

   Closing, see  
https://lists.apache.org/thread/kzoox2hkb484gyj1z13rn42xko2bp29w for details.




Issue Time Tracking
---

Worklog Id: (was: 834329)
Time Spent: 1h 40m  (was: 1.5h)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834328&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834328
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 17/Dec/22 21:12
Start Date: 17/Dec/22 21:12
Worklog Time Spent: 10m 
  Work Description: rzo1 closed pull request #248: DBCP-589 - Provide 
Jakarta namespace ready artifact of DBCP2
URL: https://github.com/apache/commons-dbcp/pull/248




Issue Time Tracking
---

Worklog Id: (was: 834328)
Time Spent: 1.5h  (was: 1h 20m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834097&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834097
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 09:21
Start Date: 16/Dec/22 09:21
Worklog Time Spent: 10m 
  Work Description: rmannibucau commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354435743

   > impossible to debug, and again, untested
   
   So factually you can debug it like any artifact - the sources are shaded 
too; it is used and has been proven to work fine by multiple ASF projects.
   And testing is just a matter of writing a test, or re-executing the suite 
once if you prefer (but technically/theoretically speaking you need just a 
smoke test covering the javax integration code to ensure it works, since 
functionally the rest is already covered).
   
   So overall both points are not blockers, and are almost not even true.
   
   > the sanest way to do that is changing the source
   
   So that means jakarta is used on master, and by Commons rules it becomes 
dbcp3. So, as mentioned, the question is: do we abandon dbcp2 and consider 
javax no longer used? If so I join you, but if not (and I think we still have 
2-3 years of javax) then we should aim to enable jakarta (a dbcp2.jakarta 
package) and maintain a single code base - it would be a pain for no gain at 
all to fork ourselves for that.




Issue Time Tracking
---

Worklog Id: (was: 834097)
Time Spent: 1h 20m  (was: 1h 10m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834095
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 09:06
Start Date: 16/Dec/22 09:06
Worklog Time Spent: 10m 
  Work Description: rzo1 commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354420435

   @garydgregory So let's start a discussion on dev@ how the Commons wants to 
deal with Jakarta?




Issue Time Tracking
---

Worklog Id: (was: 834095)
Time Spent: 1h 10m  (was: 1h)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834093&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834093
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 09:01
Start Date: 16/Dec/22 09:01
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354415101

   Please, no, -1: impossible to debug and, again, untested. Also, we ideally 
should have a consistent way to deal with Jakarta naming throughout Apache 
Commons, and the sanest way to do that is changing the source; if that breaks 
binary compatibility, then it can be done in a major version.




Issue Time Tracking
---

Worklog Id: (was: 834093)
Time Spent: 1h  (was: 50m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834083&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834083
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 08:27
Start Date: 16/Dec/22 08:27
Worklog Time Spent: 10m 
  Work Description: rmannibucau commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354381686

   +1 to get it in or switch now (I know it is a bit early and it is not 
justified to have 2 "main" branches).
   Note that, if it helps, adding a test is very feasible - we do it at 
OpenWebBeans, for example - and enforcing binary compatibility can be done 
too, even if it is quite useless for this particular project (I guess a test 
is a very good compromise since it will validate both at once).
   
   The only change I would make is to relocate `org.apache.commons.dbcp2` to 
`org.apache.commons.dbcp2.jakarta` to ensure both can run together - yes, it 
happens.




Issue Time Tracking
---

Worklog Id: (was: 834083)
Time Spent: 50m  (was: 40m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834081&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834081
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 08:20
Start Date: 16/Dec/22 08:20
Worklog Time Spent: 10m 
  Work Description: rzo1 commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354373025

   If we change the source code (which is indeed an option), it might be 
required to maintain two versions of dbcp2, as the javax namespace will be 
around for a long time. In the end, it would be great to have a Jakarta-ready 
version of DBCP2 available - otherwise, people are forced to do some "hacky" 
things to achieve that. 
   
   I cannot follow regarding "binary compatibility" - maybe you can explain 
that point a bit more?
   




Issue Time Tracking
---

Worklog Id: (was: 834081)
Time Spent: 40m  (was: 0.5h)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834079&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834079
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 08:14
Start Date: 16/Dec/22 08:14
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354364907

   While clever, this feels like a hack, untested as well. We should update the 
sources when we want to do this. We can't even tell if such a change would 
break binary compatibility.




Issue Time Tracking
---

Worklog Id: (was: 834079)
Time Spent: 0.5h  (was: 20m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834061&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834061
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 07:14
Start Date: 16/Dec/22 07:14
Worklog Time Spent: 10m 
  Work Description: codecov-commenter commented on PR #248:
URL: https://github.com/apache/commons-dbcp/pull/248#issuecomment-1354314617

   # 
[Codecov](https://codecov.io/gh/apache/commons-dbcp/pull/248?src=pr=h1_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 Report
   > Merging 
[#248](https://codecov.io/gh/apache/commons-dbcp/pull/248?src=pr=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (ae20eda) into 
[master](https://codecov.io/gh/apache/commons-dbcp/commit/2008276a484d101d789f01bba68b444103a0eb4f?el=desc_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 (2008276) will **decrease** coverage by `0.01%`.
   > The diff coverage is `n/a`.
   
   ```diff
   @@ Coverage Diff  @@
   ## master #248  +/-   ##
   
   - Coverage 59.72%   59.71%   -0.02% 
 Complexity 1783 1783  
   
 Files57   57  
 Lines  7417 7417  
 Branches421  421  
   
   - Hits   4430 4429   -1 
   - Misses 2770 2771   +1 
 Partials217  217  
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/commons-dbcp/pull/248?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
 | Coverage Δ | |
   |---|---|---|
   | 
[...ache/commons/dbcp2/managed/TransactionContext.java](https://codecov.io/gh/apache/commons-dbcp/pull/248/diff?src=pr=tree_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation#diff-c3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2NvbW1vbnMvZGJjcDIvbWFuYWdlZC9UcmFuc2FjdGlvbkNvbnRleHQuamF2YQ==)
 | `72.22% <0.00%> (-1.86%)` | :arrow_down: |
   
   :mega: We’re building smart automated test selection to slash your CI/CD 
build times. [Learn 
more](https://about.codecov.io/iterative-testing/?utm_medium=referral_source=github_content=comment_campaign=pr+comments_term=The+Apache+Software+Foundation)
   




Issue Time Tracking
---

Worklog Id: (was: 834061)
Time Spent: 20m  (was: 10m)

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 this would require some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (DBCP-589) Provide Jakarta namespace ready artifact of DBCP2

2022-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-589?focusedWorklogId=834060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-834060
 ]

ASF GitHub Bot logged work on DBCP-589:
---

Author: ASF GitHub Bot
Created on: 16/Dec/22 07:10
Start Date: 16/Dec/22 07:10
Worklog Time Spent: 10m 
  Work Description: rzo1 opened a new pull request, #248:
URL: https://github.com/apache/commons-dbcp/pull/248

   # What does this PR do?
   
   This PR provides a Jakarta namespace ready artifact of DBCP2 by relocating 
the related `javax.transaction.*` imports to `jakarta.transaction.*` and 
provides a related (attached) artifact with a `jakarta` classifier to be 
consumed by user projects.
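For context, attaching a shaded "jakarta" artifact of the kind this PR describes would normally be done with a maven-shade-plugin execution along these lines. This is a hedged sketch: the element names follow the shade plugin's schema, but the PR's actual configuration may differ.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <!-- Attach the shaded jar with a "jakarta" classifier instead of
             replacing the main artifact. -->
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>jakarta</shadedClassifierName>
        <relocations>
          <relocation>
            <pattern>javax.transaction</pattern>
            <shadedPattern>jakarta.transaction</shadedPattern>
            <excludes>
              <!-- javax.transaction.xa stayed in the JDK/javax namespace. -->
              <exclude>javax.transaction.xa.**</exclude>
            </excludes>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```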




Issue Time Tracking
---

Worklog Id: (was: 834060)
Remaining Estimate: 0h
Time Spent: 10m

> Provide Jakarta namespace ready artifact of DBCP2
> -
>
> Key: DBCP-589
> URL: https://issues.apache.org/jira/browse/DBCP-589
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Richard Zowalla
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we are using a shaded and relocated version of DBCP2 in TomEE to 
> have a Jakarta namespace variant of DBCP2.
> For DBCP2 it would require to do some small relocations:
> {code:xml}
> <relocations>
>   <relocation>
>     <pattern>javax.transaction</pattern>
>     <shadedPattern>jakarta.transaction</shadedPattern>
>     <excludes>
>       <exclude>javax.transaction.xa.**</exclude>
>     </excludes>
>   </relocation>
> </relocations>
> {code}
> Geronimo and other EE related projects are using the relocation / shade 
> approach to provide artifacts via a "jakarta" classifier. 
> I will open a related PR soon.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COLLECTIONS-803) CaseInsensitiveMap prevent duplicate key conversion on put

2022-12-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-803?focusedWorklogId=833311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-833311
 ]

ASF GitHub Bot logged work on COLLECTIONS-803:
--

Author: ASF GitHub Bot
Created on: 14/Dec/22 07:49
Start Date: 14/Dec/22 07:49
Worklog Time Spent: 10m 
  Work Description: Simulant87 commented on PR #276:
URL: 
https://github.com/apache/commons-collections/pull/276#issuecomment-1350566817

   @garydgregory May I request another review to get my PR merged? I think the 
PR is complete, with a test covering the new code, no conflicts with the main 
branch, and a green pipeline.




Issue Time Tracking
---

Worklog Id: (was: 833311)
Time Spent: 1h 50m  (was: 1h 40m)

> CaseInsensitiveMap prevent duplicate key conversion on put
> --
>
> Key: COLLECTIONS-803
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-803
> Project: Commons Collections
>  Issue Type: Improvement
>  Components: Map
>Affects Versions: 4.4
>Reporter: Simulant
>Priority: Minor
>  Labels: performance
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When adding a new item into a {{CaseInsensitiveMap}}, the {{convertKey(key)}} 
> method is called twice: once in the {{put(key, value)}} method and again in 
> the {{createEntry(next, hashCode, key, value)}} method. The result could be 
> re-used, resulting in better performance.
> Depending on the {{toString()}} implementation of the key and the resulting 
> length of the key before the lower-case conversion, the operation can get 
> expensive and should not be called twice: the {{CaseInsensitiveMap}} 
> overrides the {{convertKey(key)}} method and makes it more expensive and 
> input-dependent, unlike the implementation in {{AbstractHashedMap}}.
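The double conversion described above, and the fix of converting once and re-using the result, can be sketched in isolation. This is a hypothetical stand-alone class, not the Commons Collections source:

```java
import java.util.HashMap;
import java.util.Map;

public class CaseInsensitiveCache {

    private final Map<Object, Object> data = new HashMap<>();

    // Potentially expensive for long keys: toString() plus per-character
    // lower-casing. This is the step the issue wants called only once per put.
    private Object convertKey(Object key) {
        return key == null ? "null" : key.toString().toLowerCase();
    }

    public Object put(Object key, Object value) {
        Object converted = convertKey(key); // convert exactly once...
        return data.put(converted, value);  // ...and re-use the result
    }

    public Object get(Object key) {
        return data.get(convertKey(key));
    }
}
```

The actual fix in the PR threads the already-converted key from `put()` into `createEntry()` instead of converting again there.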



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832605&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832605
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 12/Dec/22 00:22
Start Date: 12/Dec/22 00:22
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1345706828

   @Dougniel 
   Right. We can always document it and make it public later. But then we 
should also make sure the API is right.




Issue Time Tracking
---

Worklog Id: (was: 832605)
Time Spent: 2h 50m  (was: 2h 40m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
> Fix For: 1.23
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
>  The purpose is to provide password-based encryption for 7z compression, in 
> the same way as the decryption already supported, thereby removing [one known 
> limitation|https://commons.apache.org/proper/commons-compress/limitations.html]
> ☝️In this way, I would like to submit my contribution based on the existing 
> implementation of decryption and the [C++ implementation of 
> 7z|https://github.com/kornelski/7z/blob/main/CPP/7zip/Crypto/7zAes.cpp]
> ✅ I added one unit test
>  
> I prepared a [Pull Request on 
> GitHub|https://github.com/apache/commons-compress/pull/332]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832598&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832598
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 11/Dec/22 22:05
Start Date: 11/Dec/22 22:05
Worklog Time Spent: 10m 
  Work Description: Dougniel commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1345669609

   > @Dougniel
   > Merged. Thank you for your work and patience. Please verify.
   
   Thank you for your rigor, all is OK 
   
   I noticed that you switched `AES256Options` to package-private. I hesitated 
over that because, as public, it offers more possibilities for advanced uses 
(a per-file initialization vector or a per-file password when setting it at 
the `SevenZArchiveEntry` level). But this increases the public API surface and 
is not documented.
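For background on what an `AES256Options`-style holder encapsulates: 7z derives the AES-256 key from the password, a salt, and a cycle count via iterated SHA-256 (see the 7zAes.cpp reference linked in the issue). A stdlib-only sketch of that derivation follows; the class and method names are illustrative and this is not the Commons Compress API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SevenZKeyDerivation {

    // 7z-style KDF: SHA-256 over 2^numCyclesPower rounds of
    // salt || password (UTF-16LE) || round counter (8 bytes, little-endian).
    static byte[] deriveKey(char[] password, byte[] salt, int numCyclesPower)
            throws Exception {
        byte[] pass = new String(password).getBytes(StandardCharsets.UTF_16LE);
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        long rounds = 1L << numCyclesPower;
        byte[] counter = new byte[8];
        for (long i = 0; i < rounds; i++) {
            sha.update(salt);
            sha.update(pass);
            // 64-bit little-endian round counter
            for (int j = 0; j < 8; j++) {
                counter[j] = (byte) (i >>> (8 * j));
            }
            sha.update(counter);
        }
        return sha.digest(); // 32-byte AES-256 key
    }

    public static void main(String[] args) throws Exception {
        byte[] key = deriveKey("secret".toCharArray(), new byte[16], 19);
        System.out.println(key.length); // 32
    }
}
```

The iteration count (commonly 2^19) is what makes brute-forcing the password expensive; the IV and salt are the per-archive (or, per the discussion above, potentially per-file) parameters such a holder would carry.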




Issue Time Tracking
---

Worklog Id: (was: 832598)
Time Spent: 2h 40m  (was: 2.5h)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
> Fix For: 1.23
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
>  The purpose is to provide password-based encryption for 7z compression, in 
> the same way as the decryption already supported, thereby removing [one known 
> limitation|https://commons.apache.org/proper/commons-compress/limitations.html]
> ☝️In this way, I would like to submit my contribution based on the existing 
> implementation of decryption and the [C++ implementation of 
> 7z|https://github.com/kornelski/7z/blob/main/CPP/7zip/Crypto/7zAes.cpp]
> ✅ I added one unit test
>  
> I prepared a [Pull Request on 
> GitHub|https://github.com/apache/commons-compress/pull/332]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832518&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832518
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 10/Dec/22 15:57
Start Date: 10/Dec/22 15:57
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1345293519

   @Dougniel 
   Thank you for your work and patience. Please verify.




Issue Time Tracking
---

Worklog Id: (was: 832518)
Time Spent: 2.5h  (was: 2h 20m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832517&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832517
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 10/Dec/22 15:55
Start Date: 10/Dec/22 15:55
Worklog Time Spent: 10m 
  Work Description: garydgregory merged PR #332:
URL: https://github.com/apache/commons-compress/pull/332




Issue Time Tracking
---

Worklog Id: (was: 832517)
Time Spent: 2h 20m  (was: 2h 10m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832511&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832511
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 10/Dec/22 12:36
Start Date: 10/Dec/22 12:36
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #332:
URL: https://github.com/apache/commons-compress/pull/332#discussion_r1037641840


##
src/main/java/org/apache/commons/compress/archivers/sevenz/AES256SHA256Decoder.java:
##
@@ -126,4 +115,121 @@ public void close() throws IOException {
 }
 };
 }
+
+@Override
+OutputStream encode(OutputStream out, Object options) throws IOException {
+    AES256Options opts = (AES256Options) options;
+    final byte[] aesKeyBytes = sha256Password(opts.password, opts.numCyclesPower, opts.salt);
+    final SecretKey aesKey = new SecretKeySpec(aesKeyBytes, "AES");
+
+    final Cipher cipher;
+    try {
+        cipher = Cipher.getInstance("AES/CBC/NoPadding");
+        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(opts.iv));
+    } catch (final GeneralSecurityException generalSecurityException) {
+        throw new IOException(
+            "Encryption error " + "(do you have the JCE Unlimited Strength Jurisdiction Policy Files installed?)",

Review Comment:
   Why is a new Random instance allocated each time?
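For readers following the snippet above: `sha256Password(opts.password, opts.numCyclesPower, opts.salt)` is the 7z key-derivation step that produces the AES-256 key bytes. A stand-alone sketch of that loop, based on my reading of the C++ reference implementation linked in the issue (an illustration, not the library's actual code): SHA-256 is fed the salt, the UTF-16LE password bytes, and an 8-byte little-endian round counter, repeated 2^numCyclesPower times.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SevenZKdfSketch {

    // Sketch of the 7z AES256SHA256 key derivation (an assumption based on the
    // C++ reference implementation): SHA-256 over
    // (salt || UTF-16LE password || 8-byte little-endian round counter),
    // iterated 2^numCyclesPower times; the final digest is the AES-256 key.
    static byte[] deriveKey(char[] password, byte[] salt, int numCyclesPower) {
        try {
            byte[] pw = new String(password).getBytes(StandardCharsets.UTF_16LE);
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] counter = new byte[8];
            long rounds = 1L << numCyclesPower;
            for (long i = 0; i < rounds; i++) {
                sha256.update(salt);
                sha256.update(pw);
                sha256.update(counter);
                // increment the little-endian counter, propagating the carry
                for (int j = 0; j < 8 && ++counter[j] == 0; j++) {
                    // empty: the loop condition does the increment
                }
            }
            return sha256.digest(); // 32 bytes, i.e. an AES-256 key
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        byte[] key = deriveKey("secret".toCharArray(), new byte[] {1, 2, 3, 4}, 19);
        System.out.println(key.length); // 32
    }
}
```

A large `numCyclesPower` (7-Zip commonly uses 19, i.e. 524288 rounds) is what makes brute-forcing the password expensive.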





Issue Time Tracking
---

Worklog Id: (was: 832511)
Time Spent: 2h 10m  (was: 2h)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832369
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 09/Dec/22 14:48
Start Date: 09/Dec/22 14:48
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on code in PR #332:
URL: https://github.com/apache/commons-compress/pull/332#discussion_r1044525053


##
src/main/java/org/apache/commons/compress/archivers/sevenz/AES256Options.java:
##
@@ -0,0 +1,100 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ *
+ */
+package org.apache.commons.compress.archivers.sevenz;
+
+import java.security.GeneralSecurityException;
+import java.security.NoSuchAlgorithmException;
+import java.security.SecureRandom;
+import javax.crypto.Cipher;
+import javax.crypto.SecretKey;
+import javax.crypto.spec.IvParameterSpec;
+import javax.crypto.spec.SecretKeySpec;
+
+/**
+ * Options for {@link SevenZMethod#AES256SHA256} encoder
+ * 
+ * @since 1.23
+ * @see AES256SHA256Decoder
+ */
+public class AES256Options {
+
+private final byte[] salt;
+private final byte[] iv;
+private final int numCyclesPower;
+private final Cipher cipher;
+
+/**

Review Comment:
   Please complete the Javadoc comments. You need a starting sentence.
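For context on what the options class under review wires together: judging from its imports, the salt and IV come from `SecureRandom` and the cipher is `AES/CBC/NoPadding`. A minimal round-trip sketch with plain JDK crypto, under those assumptions (my illustration, not the class itself):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CbcRoundTripSketch {

    // Encrypts and then decrypts with AES/CBC/NoPadding. NoPadding means the
    // caller must supply block-aligned (multiple-of-16-byte) input; a container
    // format like 7z records the real payload length separately in its headers.
    static byte[] roundTrip(byte[] key, byte[] iv, byte[] blockAlignedInput) throws Exception {
        Cipher enc = Cipher.getInstance("AES/CBC/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] cipherText = enc.doFinal(blockAlignedInput);

        Cipher dec = Cipher.getInstance("AES/CBC/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return dec.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        byte[] key = new byte[32]; // AES-256 key (would come from the SHA-256 KDF)
        byte[] iv = new byte[16];  // fresh per-archive IV, as the options class generates
        rnd.nextBytes(key);
        rnd.nextBytes(iv);
        byte[] plain = new byte[32];
        System.out.println(Arrays.equals(plain, roundTrip(key, iv, plain))); // true
    }
}
```

Reusing the same `SecureRandom` instance for both salt and IV, rather than allocating a new generator per call, is the usual recommendation and relates to the reviewer's question above.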





Issue Time Tracking
---

Worklog Id: (was: 832369)
Time Spent: 2h  (was: 1h 50m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 2h
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832191&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832191
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 08/Dec/22 21:35
Start Date: 08/Dec/22 21:35
Worklog Time Spent: 10m 
  Work Description: garydgregory commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1343395856

   Just rebase on git master to test with the latest; that will help us get 
closer ;-)




Issue Time Tracking
---

Worklog Id: (was: 832191)
Time Spent: 1h 50m  (was: 1h 40m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832186&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832186
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 08/Dec/22 21:15
Start Date: 08/Dec/22 21:15
Worklog Time Spent: 10m 
  Work Description: Dougniel commented on PR #332:
URL: https://github.com/apache/commons-compress/pull/332#issuecomment-1343364000

   It seems that there is an issue with `actions/setup-java`: 
https://github.com/actions/setup-java/issues/422




Issue Time Tracking
---

Worklog Id: (was: 832186)
Time Spent: 1h 40m  (was: 1.5h)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>


[jira] [Work logged] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?focusedWorklogId=832185&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-832185
 ]

ASF GitHub Bot logged work on COMPRESS-633:
---

Author: ASF GitHub Bot
Created on: 08/Dec/22 21:08
Start Date: 08/Dec/22 21:08
Worklog Time Spent: 10m 
  Work Description: Dougniel commented on code in PR #332:
URL: https://github.com/apache/commons-compress/pull/332#discussion_r1043827496


##
.vscode/settings.json:
##
@@ -0,0 +1,3 @@
+{
+"java.configuration.updateBuildConfiguration": "automatic"
+}

Review Comment:
   Right, it was a mistake.





Issue Time Tracking
---

Worklog Id: (was: 832185)
Time Spent: 1.5h  (was: 1h 20m)

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>


  1   2   3   4   5   6   7   8   9   10   >