[
https://issues.apache.org/jira/browse/JCR-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amit Jain resolved JCR-4369.
----------------------------
Resolution: Fixed
Thanks [~woon_san].
Patch committed with
[http://svn.apache.org/viewvc?view=revision&revision=1840272]
I changed the package versions to those suggested.
> Avoid S3 Incomplete Read Warning
> --------------------------------
>
> Key: JCR-4369
> URL: https://issues.apache.org/jira/browse/JCR-4369
> Project: Jackrabbit Content Repository
> Issue Type: Improvement
> Components: jackrabbit-aws-ext
> Affects Versions: 2.16.3, 2.17.5
> Reporter: Woonsan Ko
> Assignee: Amit Jain
> Priority: Minor
> Fix For: 2.18, 2.17.6
>
>
> While using S3DataStore, the following logs are observed occasionally:
> {noformat}
> WARN [com.amazonaws.services.s3.internal.S3AbortableInputStream.close():178]
> Not all bytes were read from the S3ObjectInputStream,
> aborting HTTP connection. This is likely an error and may result in
> sub-optimal behavior. Request only the bytes you need via a ranged
> GET or drain the input stream after use.
> {noformat}
> The warning logs are left not only by HTTP processing threads, but also
> by background threads, which suggests a possible issue in the
> {{S3DataStore}} implementation itself, not just a broken HTTP connection
> on the client side.
> By the way, this is not a major issue: the AWS toolkit appears to log the
> warning only as a _recommendation_ in that case, and still closes the
> underlying HttpRequest object properly. So, for the record, there is no
> functional problem. It is only about the 'warning' message and possibly
> sub-optimal HTTP request handling under the hood (on the AWS toolkit side).
> After looking at the code, I noticed that
> {{CachingDataStore#proactiveCaching}} is enabled by default, which means
> {{S3DataStore}} tries to _proactively_ download the binary content,
> asynchronously in a new thread, even when only metadata is accessed through
> {{#getLastModified(...)}} and {{#getLength(...)}}.
> Anyway, the _minor_ problem is that whenever {{S3DataStore}} reads
> content (in other words, gets an input stream on an {{S3Object}}), it is
> recommended either to _read_ all the data or to _abort_ the input stream.
> Merely _closing_ the input stream is not good enough from the AWS SDK's
> perspective and results in the warning. See the
> {{S3AbortableInputStream#close()}} method. \[1\]
> Therefore, some S3-related classes (such as
> {{org.apache.jackrabbit.core.data.LocalCache#store(String, InputStream)}},
> {{CachingDataStore#getStream(DataIdentifier)}}, etc.) should be improved as
> follows:
> - If the local cache file doesn't exist or the cache is in purge mode, keep
> the current behavior: copy everything to the local cache file and close it.
> - Otherwise, it should {{abort}} the underlying {{S3ObjectInputStream}}.
> The issue is a known one in the AWS toolkit. \[2,3\] Clients using the
> toolkit need to _abort_ the input stream if they don't want to read the
> data fully.
> \[1\]
> https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/internal/S3AbortableInputStream.java#L174-L187
> \[2\] https://github.com/aws/aws-sdk-java/issues/1211
> \[3\] https://github.com/aws/aws-sdk-java/issues/1657
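The close-vs-abort behavior described above can be sketched with a self-contained example. {{AbortableStream}} below is a hypothetical stand-in that mimics the warning logic of the SDK's {{S3AbortableInputStream}} (it is not the real class, and the real SDK also aborts the HTTP connection); {{readSome}} mirrors the proposed fix of aborting on a cache hit instead of just closing:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Stand-in mimicking S3AbortableInputStream: close() flags a warning when
// unread bytes remain, unless abort() was called first.
class AbortableStream extends FilterInputStream {
    private boolean aborted = false;
    boolean warned = false;

    AbortableStream(InputStream in) {
        super(in);
    }

    // In the real SDK, abort() drops the underlying HTTP connection.
    void abort() {
        aborted = true;
    }

    @Override
    public void close() throws IOException {
        // The real SDK logs the "Not all bytes were read" WARN here.
        if (!aborted && in.available() > 0) {
            warned = true;
        }
        super.close();
    }
}

public class AbortDemo {
    // Hypothetical helper mirroring the proposed improvement: when the
    // content is already cached (cacheHit), abort instead of only closing.
    static boolean readSome(boolean cacheHit) throws IOException {
        AbortableStream s = new AbortableStream(new ByteArrayInputStream(new byte[16]));
        s.read(); // consume only one byte, leaving unread data behind
        if (cacheHit) {
            s.abort(); // tell the stream we intentionally stop reading
        }
        s.close();
        return s.warned;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("close without abort -> warned=" + readSome(false));
        System.out.println("abort before close  -> warned=" + readSome(true));
    }
}
```

Running the demo shows the warning flag set when the stream is closed with unread bytes and suppressed once {{abort()}} is called first, which is the behavior the patch relies on.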
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)