[ https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=248277&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248277 ]
ASF GitHub Bot logged work on HDDS-1496:
----------------------------------------
Author: ASF GitHub Bot
Created on: 24/May/19 20:27
Start Date: 24/May/19 20:27
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on issue #804: HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-495778406
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 29 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 74 | Maven dependency ordering for branch |
| +1 | mvninstall | 541 | trunk passed |
| +1 | compile | 264 | trunk passed |
| +1 | checkstyle | 78 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 944 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 158 | trunk passed |
| 0 | spotbugs | 312 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 525 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 29 | Maven dependency ordering for patch |
| +1 | mvninstall | 503 | the patch passed |
| +1 | compile | 273 | the patch passed |
| +1 | javac | 273 | the patch passed |
| -0 | checkstyle | 40 | hadoop-hdds: The patch generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) |
| -0 | checkstyle | 41 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| -1 | whitespace | 0 | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 | shadedclient | 738 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 148 | the patch passed |
| -1 | findbugs | 205 | hadoop-hdds generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
| -1 | findbugs | 297 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
||| _ Other Tests _ |
| -1 | unit | 162 | hadoop-hdds in the patch failed. |
| -1 | unit | 1182 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
| | | 6470 | |
| Reason | Tests |
|-------:|:------|
| FindBugs | module:hadoop-hdds |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkIndex; locked 88% of time. Unsynchronized access at BlockInputStream.java:[line 379] |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkOffsets; locked 88% of time. Unsynchronized access at BlockInputStream.java:[line 336] |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkStreams; locked 92% of time. Unsynchronized access at BlockInputStream.java:[line 336] |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.ChunkInputStream.allocated; locked 50% of time. Unsynchronized access at ChunkInputStream.java:[line 501] |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.ChunkInputStream.bufferLength; locked 50% of time. Unsynchronized access at ChunkInputStream.java:[line 491] |
| | Inconsistent synchronization of org.apache.hadoop.hdds.scm.storage.ChunkInputStream.bufferOffset; locked 53% of time. Unsynchronized access at ChunkInputStream.java:[line 491] |
| FindBugs | module:hadoop-ozone |
| | Inconsistent synchronization of org.apache.hadoop.ozone.client.io.KeyInputStream.blockIndex; locked 92% of time. Unsynchronized access at KeyInputStream.java:[line 238] |
| | Inconsistent synchronization of org.apache.hadoop.ozone.client.io.KeyInputStream.blockOffsets; locked 66% of time. Unsynchronized access at KeyInputStream.java:[line 91] |
| Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
| | hadoop.ozone.client.rpc.TestBCSID |
| | hadoop.ozone.container.common.impl.TestContainerPersistence |
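For context, the "Inconsistent synchronization" warnings above are raised when a field is accessed both with and without holding the lock that normally guards it. The following is a minimal, hypothetical illustration of that pattern; it is not code from the patch, and the class and method names are invented for the example.

```java
// Hypothetical illustration of the pattern FindBugs flags as
// "inconsistent synchronization"; not code from the patch.
class ChunkCursor {
  private int chunkIndex; // usually guarded by "this"

  synchronized void advance() {
    chunkIndex++;           // locked access
  }

  int peek() {
    return chunkIndex;      // unlocked access -> triggers the FindBugs warning
  }

  synchronized int peekSafe() {
    return chunkIndex;      // consistent: the lock is held on every access
  }
}
```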
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/804 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 909f42f9c8ba 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6d0e79c |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/diff-checkstyle-hadoop-hdds.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/diff-checkstyle-hadoop-ozone.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/whitespace-eol.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/new-findbugs-hadoop-hdds.html |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/new-findbugs-hadoop-ozone.html |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/testReport/ |
| Max. process+thread count | 4425 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-804/8/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
This message was automatically generated.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 248277)
Time Spent: 5h (was: 4h 50m)
> Support partial chunk reads and checksum verification
> -----------------------------------------------------
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Hanisha Koneru
> Assignee: Hanisha Koneru
> Priority: Major
> Labels: pull-request-available
> Time Spent: 5h
> Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even when only a part of the chunk is needed.
> This Jira aims to improve readChunkFromContainer so that it reads only the
> part of the chunk file the client needs, plus whatever additional bytes are
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes, i.e. the first checksum
> covers bytes 0 to 99, the next covers bytes 100 to 199, and so on. To verify
> bytes 120 to 450, we would need to read bytes 100 to 499 so that checksum
> verification can be done.
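
The range arithmetic in the example above can be sketched as follows. This is an illustrative snippet only, not code from the patch; the class name, method name, and the fixed 100-byte checksum unit are assumptions made for the example.

```java
// Minimal sketch of widening a requested byte range to checksum-unit
// boundaries; illustrative only, not the actual ChunkInputStream code.
public final class ChecksumAlignedRange {

  /** Returns {alignedStart, alignedEndExclusive} for [start, endInclusive]. */
  static long[] align(long start, long endInclusive, long bytesPerChecksum) {
    long alignedStart = (start / bytesPerChecksum) * bytesPerChecksum;
    long alignedEndExclusive =
        ((endInclusive / bytesPerChecksum) + 1) * bytesPerChecksum;
    return new long[] {alignedStart, alignedEndExclusive};
  }

  public static void main(String[] args) {
    // Example from the description: requested bytes 120..450 with a checksum
    // per 100 bytes means the read must cover bytes 100..499.
    long[] r = align(120, 450, 100);
    System.out.println(r[0] + " to " + (r[1] - 1)); // prints "100 to 499"
  }
}
```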
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]