[jira] [Commented] (COMPRESS-394) [Zip] Local `Version Needed To Extract` does not match Central Directory

2017-05-13 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009525#comment-16009525
 ] 

Gary Gregory commented on COMPRESS-394:
---

Hi Plamen,

We are in the middle of releasing version 1.14 at the moment; it should hit the 
site in a couple of days at most.

We welcome patches with unit tests :-)

Gary

> [Zip] Local `Version Needed To Extract` does not match Central Directory
> 
>
> Key: COMPRESS-394
> URL: https://issues.apache.org/jira/browse/COMPRESS-394
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Reporter: Plamen Totev
>Priority: Minor
>
> Hi,
> This is a follow-up to an issue reported against Plexus Archiver: 
> https://github.com/codehaus-plexus/plexus-archiver/issues/57
> Plexus Archiver uses {{ZipArchiveOutputStream}} to create zip archives. It 
> constructs the {{ZipArchiveOutputStream}} using {{BufferedOutputStream}}. As 
> a result, the output does not provide random access, and additional data 
> descriptor records are added. Unfortunately, this leads to different values 
> being set for the {{version needed to extract}} field in the local file header 
> and in the central directory. It looks like the root cause is the way 
> the local header {{version needed to extract}} field value is calculated:
> {code:java}
> if (phased && !isZip64Required(entry.entry, zip64Mode)) {
>     putShort(INITIAL_VERSION, buf, LFH_VERSION_NEEDED_OFFSET);
> } else {
>     putShort(versionNeededToExtract(zipMethod, hasZip64Extra(ze)),
>             buf, LFH_VERSION_NEEDED_OFFSET);
> }
> {code}
> As you can see, the need for data descriptors is not taken into account. On 
> the other hand, when the central directory is created, the following is used 
> to determine the minimum required version:
> {code:java}
> private int versionNeededToExtract(final int zipMethod, final boolean zip64) {
>     if (zip64) {
>         return ZIP64_MIN_VERSION;
>     }
>     // requires version 2 as we are going to store length info
>     // in the data descriptor
>     return isDeflatedToOutputStream(zipMethod)
>             ? DATA_DESCRIPTOR_MIN_VERSION
>             : INITIAL_VERSION;
> }
> {code}
> As a side note: I'm not a zip expert by any means, so I could be wrong, but my 
> understanding is that if Deflate compression is used, then the minimum 
> required version should be 2.0 regardless of whether data descriptors are used.
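To make the suggested fix concrete, here is a minimal, self-contained sketch. The class and method names are hypothetical (this is not the actual Commons Compress code); the constant values follow the encoding used by the ZIP specification (APPNOTE.TXT), where version 1.0 is stored as 10, 2.0 as 20, and ZIP64 support as 45. The point is that the local-header calculation would also consider the data-descriptor case, so it agrees with the central directory:

```java
// Hypothetical sketch of a "version needed to extract" calculation for the
// local file header that accounts for data descriptors.
public class VersionNeeded {
    static final int INITIAL_VERSION = 10;             // version 1.0
    static final int DATA_DESCRIPTOR_MIN_VERSION = 20; // version 2.0
    static final int ZIP64_MIN_VERSION = 45;           // version 4.5

    /** Minimum version for the local file header, considering data descriptors. */
    static int localHeaderVersion(boolean zip64, boolean usesDataDescriptor) {
        if (zip64) {
            return ZIP64_MIN_VERSION;
        }
        // Unlike the snippet quoted above, the data-descriptor case is taken
        // into account here, matching what the central directory will record.
        return usesDataDescriptor ? DATA_DESCRIPTOR_MIN_VERSION : INITIAL_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(localHeaderVersion(false, true));  // streamed entry -> 20
        System.out.println(localHeaderVersion(false, false)); // sizes known -> 10
        System.out.println(localHeaderVersion(true, false));  // zip64 entry -> 45
    }
}
```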



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (COMPRESS-395) [Zip] Do not add data descriptor record when CRC and size values are known

2017-05-13 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated COMPRESS-395:
--
Summary: [Zip] Do not add data descriptor record when CRC and size values 
are known  (was: Zip - Do not add data descriptor record when CRC and size 
values are known)

> [Zip] Do not add data descriptor record when CRC and size values are known
> --
>
> Key: COMPRESS-395
> URL: https://issues.apache.org/jira/browse/COMPRESS-395
> Project: Commons Compress
>  Issue Type: Improvement
>Reporter: Plamen Totev
>Priority: Minor
>
> Hi,
> Currently, {{ZipArchiveOutputStream}} adds a data descriptor record when the 
> output does not provide random access. But if you add an entry using 
> {{addRawArchiveEntry}}, then the CRC, compressed size, and uncompressed size 
> may already be known, and there is no need for a data descriptor record, as 
> those values can be set in the local file header. The current implementation 
> does both: it sets the correct values in the local file header and adds an 
> additional data descriptor record. Here is the relevant code from 
> {{ZipArchiveOutputStream#putArchiveEntry}}:
> {code:java}
> // just a placeholder, real data will be in data
> // descriptor or inserted later via SeekableByteChannel
> ZipEightByteInteger size = ZipEightByteInteger.ZERO;
> ZipEightByteInteger compressedSize = ZipEightByteInteger.ZERO;
> if (phased) {
>     size = new ZipEightByteInteger(entry.entry.getSize());
>     compressedSize = new ZipEightByteInteger(entry.entry.getCompressedSize());
> } else if (entry.entry.getMethod() == STORED
>         && entry.entry.getSize() != ArchiveEntry.SIZE_UNKNOWN) {
>     // actually, we already know the sizes
>     size = new ZipEightByteInteger(entry.entry.getSize());
>     compressedSize = size;
> }
> z64.setSize(size);
> z64.setCompressedSize(compressedSize);
> {code}
> Maybe {{ZipArchiveOutputStream}} could be improved to not add a data 
> descriptor record when the CRC and size values are known in advance.
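The improvement can be sketched as a small decision helper. The names below are hypothetical (not the Commons Compress API); the sketch only captures the rule the report argues for: a data descriptor is needed only when the output is not seekable and the CRC/size values are not known before the entry data is written.

```java
// Hypothetical sketch of when a data descriptor record is actually required.
public class DataDescriptorPolicy {
    static boolean needsDataDescriptor(boolean seekableOutput,
                                       boolean crcKnown,
                                       boolean sizesKnown) {
        if (seekableOutput) {
            // The local file header can be patched in place after writing.
            return false;
        }
        // If CRC and sizes are known up front (e.g. a raw entry copied from
        // another archive), the local file header can carry the real values
        // and no descriptor is needed.
        return !(crcKnown && sizesKnown);
    }

    public static void main(String[] args) {
        System.out.println(needsDataDescriptor(false, true, true));   // raw entry -> false
        System.out.println(needsDataDescriptor(false, false, false)); // streamed deflate -> true
    }
}
```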





[jira] [Updated] (COMPRESS-394) [Zip] Local `Version Needed To Extract` does not match Central Directory

2017-05-13 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated COMPRESS-394:
--
Summary: [Zip] Local `Version Needed To Extract` does not match Central 
Directory  (was: Zip - Local `Version Needed To Extract` does not match Central 
Directory)



[jira] [Updated] (COMPRESS-395) Zip - Do not add data descriptor record when CRC and size values are known

2017-05-13 Thread Plamen Totev (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Totev updated COMPRESS-395:
--
Summary: Zip - Do not add data descriptor record when CRC and size values 
are known  (was: Zip - Do not add data descriptor record when CRC and size 
value are known)



[jira] [Created] (COMPRESS-395) Zip - Do not add data descriptor record when CRC and size value are known

2017-05-13 Thread Plamen Totev (JIRA)
Plamen Totev created COMPRESS-395:
-

 Summary: Zip - Do not add data descriptor record when CRC and size 
value are known
 Key: COMPRESS-395
 URL: https://issues.apache.org/jira/browse/COMPRESS-395
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Plamen Totev
Priority: Minor




[jira] [Updated] (COMPRESS-394) Zip - Local `Version Needed To Extract` does not match Central Directory

2017-05-13 Thread Plamen Totev (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Totev updated COMPRESS-394:
--
Priority: Minor  (was: Major)



[jira] [Created] (COMPRESS-394) Zip - Local `Version Needed To Extract` does not match Central Directory

2017-05-13 Thread Plamen Totev (JIRA)
Plamen Totev created COMPRESS-394:
-

 Summary: Zip - Local `Version Needed To Extract` does not match 
Central Directory
 Key: COMPRESS-394
 URL: https://issues.apache.org/jira/browse/COMPRESS-394
 Project: Commons Compress
  Issue Type: Bug
  Components: Archivers
Reporter: Plamen Totev




[jira] [Created] (NUMBERS-39) Move "Beta" and "Erf" from "Commons Math"

2017-05-13 Thread Gilles (JIRA)
Gilles created NUMBERS-39:
-

 Summary: Move "Beta" and "Erf" from "Commons Math"
 Key: NUMBERS-39
 URL: https://issues.apache.org/jira/browse/NUMBERS-39
 Project: Commons Numbers
  Issue Type: Task
Reporter: Gilles
Priority: Minor
 Fix For: 1.0


Classes {{Beta}} and {{Erf}} (in package {{o.a.c.math4.special}}) are to be 
moved to module {{commons-numbers-gamma}}.






[jira] [Created] (NUMBERS-38) No unit tests for "LanczosApproximation" class

2017-05-13 Thread Gilles (JIRA)
Gilles created NUMBERS-38:
-

 Summary: No unit tests for "LanczosApproximation" class
 Key: NUMBERS-38
 URL: https://issues.apache.org/jira/browse/NUMBERS-38
 Project: Commons Numbers
  Issue Type: Test
Reporter: Gilles
 Fix For: 1.0


The computation performed by the {{LanczosApproximation}} function (package 
{{o.a.c.numbers.gamma}} in module {{commons-numbers-gamma}}) is not checked by 
unit tests.
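Such tests could check the approximation against known gamma-function identities. Below is a minimal, self-contained sketch of the kind of oracle a unit test could assert; it uses the widely published g = 7, n = 9 Lanczos coefficient table, which is an assumption here and is not taken from the Commons Numbers source:

```java
// Sketch of identity-based checks for a Lanczos approximation of the gamma
// function: gamma(5) = 4! = 24 and gamma(1/2) = sqrt(pi) are natural oracles.
public class LanczosGammaSketch {
    // Commonly published Lanczos coefficients for g = 7 (assumed table).
    private static final double[] C = {
        0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7
    };

    static double gamma(double z) {
        if (z < 0.5) {
            // Reflection formula: gamma(z) * gamma(1 - z) = pi / sin(pi * z)
            return Math.PI / (Math.sin(Math.PI * z) * gamma(1 - z));
        }
        z -= 1;
        double sum = C[0];
        for (int i = 1; i < C.length; i++) {
            sum += C[i] / (z + i);
        }
        double t = z + 7.5; // g + 1/2
        return Math.sqrt(2 * Math.PI) * Math.pow(t, z + 0.5) * Math.exp(-t) * sum;
    }

    public static void main(String[] args) {
        System.out.println(Math.abs(gamma(5) - 24) < 1e-9);
        System.out.println(Math.abs(gamma(0.5) - Math.sqrt(Math.PI)) < 1e-9);
    }
}
```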






[jira] [Resolved] (MATH-1284) Vector is-not-a Point

2017-05-13 Thread Raymond DeCampo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond DeCampo resolved MATH-1284.
---
Resolution: Fixed
  Assignee: Raymond DeCampo

Resolution applied in commit 7a59c0af26177cf69e702eaac85471e54762f664

> Vector is-not-a Point
> -
>
> Key: MATH-1284
> URL: https://issues.apache.org/jira/browse/MATH-1284
> Project: Commons Math
>  Issue Type: Bug
>Affects Versions: 3.5
>Reporter: Roman Werpachowski
>Assignee: Raymond DeCampo
>Priority: Minor
> Fix For: 4.0
>
>
> The class hierarchy for geometry claims that Vector is-a Point: 
> https://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/geometry/Point.html
> This is mathematically incorrect; see e.g. 
> http://math.stackexchange.com/a/645827
> Just because they share the same numerical representation, Point and Vector 
> shouldn't be crammed into a common class hierarchy.
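The distinction the report asks for can be illustrated with separate types (hypothetical classes, not the Commons Math API): points support point-minus-point, yielding a vector, and point-plus-vector, yielding a point, while vectors form a vector space of their own; point-plus-point is simply not expressible.

```java
// Hypothetical sketch of an affine-space design keeping Point and Vector apart.
public class AffineSketch {
    record Point2(double x, double y) {
        // point - point = displacement vector
        Vector2 minus(Point2 o) { return new Vector2(x - o.x, y - o.y); }
        // point + vector = translated point
        Point2 plus(Vector2 v) { return new Point2(x + v.dx(), y + v.dy()); }
    }

    record Vector2(double dx, double dy) {
        Vector2 plus(Vector2 o) { return new Vector2(dx + o.dx(), dy + o.dy()); }
        Vector2 scale(double k) { return new Vector2(k * dx, k * dy); }
    }

    public static void main(String[] args) {
        Point2 p = new Point2(1, 2);
        Point2 q = new Point2(4, 6);
        Vector2 v = q.minus(p); // (3, 4)
        System.out.println(p.plus(v).equals(q)); // true
        // p.plus(q) would not compile: adding two points is meaningless.
    }
}
```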


