[
https://issues.apache.org/jira/browse/COMPRESS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16315498#comment-16315498
]
Christopher Hunt edited comment on COMPRESS-407 at 1/7/18 10:30 PM:
--------------------------------------------------------------------
Thanks so much for responding, particularly given my vague comment. There's a
good chance that the problem is mine. The error I see is:
{code:java}
org.apache.commons.compress.archivers.ArchiveException: IOException while reading signature.
	at org.apache.commons.compress.archivers.ArchiveStreamFactory.detect(ArchiveStreamFactory.java:502)
	at org.apache.commons.compress.archivers.ArchiveStreamFactory.createArchiveInputStream(ArchiveStreamFactory.java:476)
{code}
...where I wasn't seeing that before. Here's how I write my tar stream (Scala):
{code:scala}
val tar = {
  val bos = new ByteArrayOutputStream()
  val tos =
    new ArchiveStreamFactory()
      .createArchiveOutputStream(ArchiveStreamFactory.TAR, bos)
      .asInstanceOf[TarArchiveOutputStream]
  try {
    {
      val te = new TarArchiveEntry("classes/")
      tos.putArchiveEntry(te)
      tos.closeArchiveEntry()
    }
    {
      val te = new TarArchiveEntry("classes/example/")
      tos.putArchiveEntry(te)
      tos.closeArchiveEntry()
    }
    {
      val te = new TarArchiveEntry("classes/example/Hello.class")
      val classFile =
        Paths.get(getClass.getResource("/example/Hello.class").toURI)
      val data = Files.readAllBytes(classFile)
      te.setSize(data.length.toLong)
      tos.putArchiveEntry(te)
      tos.write(data)
      tos.closeArchiveEntry()
    }
    tos.flush()
    tos.finish()
  } finally {
    tos.close()
  }
  bos.toByteArray
}
{code}
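For what it's worth, every entry written above should land on 512-byte record boundaries. A minimal sketch of the arithmetic in plain Scala (the `TarSizing` object and `tarSize` helper are hypothetical, assuming standard POSIX tar geometry: one 512-byte header record per entry, data padded up to a 512-byte multiple, plus two zero-filled 512-byte end-of-archive records):

```scala
object TarSizing {
  val RecordSize = 512L // fixed tar record size; COMPRESS-407 rejects anything else

  // Round n up to the next multiple of RecordSize.
  def padded(n: Long): Long = ((n + RecordSize - 1) / RecordSize) * RecordSize

  // Bytes an archive occupies before blocking-factor padding: one header
  // record per entry plus its padded data, then two end-of-archive records.
  def tarSize(entrySizes: Seq[Long]): Long =
    entrySizes.map(s => RecordSize + padded(s)).sum + 2L * RecordSize
}

// Two directory entries (size 0) and one hypothetical 1234-byte class file:
// 512 + 512 + (512 + 1536) + 1024 = 4096 bytes.
println(TarSizing.tarSize(Seq(0L, 0L, 1234L)))
```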
Here's how I'm reading it (just the opening section):
{code:scala}
val TarBlocksize = 512
val TarBlockingFactor = 20
val TarBufferSize = TarBlocksize * TarBlockingFactor * 2
val TarInputMaxBlockingTime = 3.seconds
val FileBufferSize = 8192

val is =
  new BufferedInputStream(
    source.runWith(StreamConverters.asInputStream(TarInputMaxBlockingTime)),
    TarBufferSize)
try {
  val tarInput =
    new ArchiveStreamFactory()
      .createArchiveInputStream(is)
      .asInstanceOf[TarArchiveInputStream]
  ....
{code}
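Incidentally, the buffer constants above already follow the rule this ticket enforces. A minimal sketch of that check in plain Scala, assuming the rule as stated in the issue (record size exactly 512, block size a positive multiple of 512) — `isValidBlockSize` is a hypothetical helper here, not Commons Compress API:

```scala
object BlockSizeCheck {
  val RecordSize = 512

  // COMPRESS-407 rule: a tar block size must be a positive multiple
  // of the fixed 512-byte record size.
  def isValidBlockSize(blockSize: Int): Boolean =
    blockSize > 0 && blockSize % RecordSize == 0

  // The reader's buffer above: 512 * 20 * 2 = 20480 bytes.
  val TarBufferSize = RecordSize * 20 * 2
}

// A blocking-factor-derived buffer passes; 511 or 0 would be rejected.
println(BlockSizeCheck.isValidBlockSize(BlockSizeCheck.TarBufferSize))
```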
Full project: https://github.com/huntc/landlord
To reproduce:
1. `cd landlordd`
2. `sbt`
3. `testOnly com.github.huntc.landlord.JvmExecutorSpec -- -z "start a process that then outputs stdin, ends and shuts everything down"`
> Validate Block and Record Sizes
> -------------------------------
>
> Key: COMPRESS-407
> URL: https://issues.apache.org/jira/browse/COMPRESS-407
> Project: Commons Compress
> Issue Type: Sub-task
> Components: Archivers
> Reporter: Simon Spero
> Fix For: 1.15
>
>
> Reject record sizes not equal to 512 bytes; require block sizes to be
> multiples of 512 bytes.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)