[jira] [Commented] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671147#comment-16671147
 ] 

Hadoop QA commented on HDDS-786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-hdds/server-scm generated 0 new + 0 unchanged 
- 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-786 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946482/HDDS-786.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7fefdeec8a38 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5eb237 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1585/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1585/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 

[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671145#comment-16671145
 ] 

Hadoop QA commented on HDFS-12257:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12257 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889633/HDFS-12257.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25408/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin; I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671143#comment-16671143
 ] 

Hadoop QA commented on HDFS-11885:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-11885 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880136/HDFS-11885.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25407/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
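
A minimal sketch of the direction the report points at - warming the EDEK cache on a background thread so the createZone RPC returns immediately (the class and wiring below are illustrative, not the attached patch):

{code:java}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: warm up the EDEK cache asynchronously instead of
// blocking the createEncryptionZone RPC on a slow or unavailable KMS.
public class EdekCacheWarmer {
  private static final Logger LOG =
      LoggerFactory.getLogger(EdekCacheWarmer.class);
  private final ExecutorService executor =
      Executors.newSingleThreadExecutor();

  /** Returns immediately; the warm-up proceeds in the background. */
  public void warmUpAsync(KeyProviderCryptoExtension provider,
      String keyName) {
    executor.submit(() -> {
      try {
        provider.warmUpEncryptedKeys(keyName);
      } catch (IOException e) {
        // A failed warm-up is not fatal: EDEKs are fetched lazily on demand.
        LOG.warn("Failed to warm up EDEK cache for key " + keyName, e);
      }
    });
  }
}
{code}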



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671140#comment-16671140
 ] 

Hudson commented on HDDS-697:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15342/])
HDDS-697. update and validate the BCSID for PutSmallFile/GetSmallFile 
(shashikant: rev b13c56742a6fc0f6cb1ddd63e1afd51eb216e052)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/SmallFileUtils.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/BlockUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java


> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch, 
> HDDS-697.002.patch
>
>
> Similar to putBlock/getBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the bcsId while reading the block, similar to getBlock.
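
A rough sketch of the read-side check described above (the method and exception wiring are illustrative; the actual HDDS container code differs):

{code:java}
import java.io.IOException;

// Hypothetical sketch: reject a read whose requested bcsId is newer than
// the highest block commit sequence id the container has committed.
final class BcsIdValidator {
  static void validate(long requestedBcsId, long containerBcsId)
      throws IOException {
    if (requestedBcsId > containerBcsId) {
      throw new IOException("Unknown bcsId " + requestedBcsId
          + ": container has only committed up to " + containerBcsId);
    }
  }
}
{code}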



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-31 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671139#comment-16671139
 ] 

CR Hota commented on HDFS-14024:


[~elgoiri] This change is fairly simple and isolated. Maybe we can resolve 
this as is and open a separate ticket to add extensive tests for the whole of 
the jmx params. It may need a couple of modifications to the code itself to 
make it testable.

Thoughts?

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14024-HDFS-13891.0.patch, HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream name node that has NOT been migrated 
> to understand "ProvidedCapacityTotal". The updateJMXParameters method in 
> NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper that gives back some default 
> if the JSON node is not found (see the sketch below).
>  
>  
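
A minimal sketch of such a wrapper, assuming the jettison {{JSONObject}} API used by the snippet above (the helper class and method names are hypothetical):

{code:java}
import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;

// Hypothetical helper: fall back to a default when a metric such as
// "ProvidedCapacityTotal" is absent on an older downstream namenode.
final class JmxJson {
  static long getLongOrDefault(JSONObject json, String name,
      long defaultValue) {
    try {
      return json.has(name) ? json.getLong(name) : defaultValue;
    } catch (JSONException e) {
      return defaultValue;
    }
  }
}
{code}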



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-10-31 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671136#comment-16671136
 ] 

CR Hota commented on HDFS-13834:


[~elgoiri] Thanks for looking into the details.

ConnectionManager is not a thread; what uses it is the RPC handler thread, 
which doesn't die, because this issue gets caught in the RouterRpcClient 
getConnection method in scenarios where a connection is created synchronously 
(this only happens when a pool is not available and is created, which 
internally creates the minimum connections synchronously). However, what died 
earlier, before the patch, was the ConnectionCreator thread, which is a single 
async thread that can't be allowed to die.

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834-HDFS-13891.0.patch, HDFS-13834.0.patch, 
> HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that's responsible for 
> creating all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems where the thread died, leaving the 
> router process in a bad state.
> The thread should also catch a generic error/exception, as sketched after the 
> code block below.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}
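
A sketch of the guard the title asks for - the same loop with an extra catch so an unexpected error is logged instead of killing this single creator thread (illustrative, not the attached patch):

{code:java}
@Override
public void run() {
  while (this.running) {
    try {
      ConnectionPool pool = this.queue.take();
      // ... create and add the connection as in the snippet above ...
    } catch (InterruptedException e) {
      LOG.error("The connection creator was interrupted");
      this.running = false;
    } catch (Throwable e) {
      // Catch everything else: this async thread must not be allowed to die.
      LOG.error("Unexpected error in the connection creator", e);
    }
  }
}
{code}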



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3455) Add docs for NameNode initializeSharedEdits and bootstrapStandby commands

2018-10-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671127#comment-16671127
 ] 

Dinesh Chitlangia commented on HDFS-3455:
-

[~tlipcon] Currently, the documentation is available in 
HDFSHighAvailabilityWithQJM.md and HDFSHighAvailabilityWithNFS.md

If there are no further concerns, I think we can close this.

> Add docs for NameNode initializeSharedEdits and bootstrapStandby commands
> -
>
> Key: HDFS-3455
> URL: https://issues.apache.org/jira/browse/HDFS-3455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> We've made the HA setup easier by adding new flags to the namenode to 
> automatically set up the standby. But, we didn't document them yet. We should 
> amend the HDFSHighAvailability.apt.vm docs to include this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-3455) Add docs for NameNode initializeSharedEdits and bootstrapStandby commands

2018-10-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDFS-3455:
---

Assignee: Dinesh Chitlangia

> Add docs for NameNode initializeSharedEdits and bootstrapStandby commands
> -
>
> Key: HDFS-3455
> URL: https://issues.apache.org/jira/browse/HDFS-3455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> We've made the HA setup easier by adding new flags to the namenode to 
> automatically set up the standby. But, we didn't document them yet. We should 
> amend the HDFSHighAvailability.apt.vm docs to include this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-10-31 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-709:
-
Status: Open  (was: Patch Available)

> Modify Close Container handling sequence on datanodes
> -
>
> Key: HDDS-709
> URL: https://issues.apache.org/jira/browse/HDDS-709
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-709.000.patch
>
>
> With the quasi-closed container state for handling majority node failures, the 
> close container handling sequence on Datanodes needs to change. Once the 
> datanodes receive a close container command from SCM, the open container 
> replicas are individually marked in the closing state. In the closing state, 
> only the transactions coming from the Ratis leader are allowed; all other 
> write transactions will fail. A close container transaction will be queued via 
> Ratis on the leader and replayed to the followers, which makes the replicas 
> transition to the CLOSED/QUASI CLOSED state.
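
A toy sketch of the sequence described above (the real datanode container state machine differs; this only captures the transitions named in the description):

{code:java}
// Illustrative only: replica states named in the description.
enum ReplicaState { OPEN, CLOSING, CLOSED, QUASI_CLOSED }

final class CloseSequenceSketch {
  ReplicaState state = ReplicaState.OPEN;

  // SCM close command: each open replica is individually marked CLOSING.
  void onScmCloseCommand() { state = ReplicaState.CLOSING; }

  // In CLOSING, only Ratis-leader transactions are applied; the queued close
  // transaction, once replayed on the followers, completes the transition.
  void onCloseTxnApplied(boolean quorumAvailable) {
    state = quorumAvailable ? ReplicaState.CLOSED : ReplicaState.QUASI_CLOSED;
  }
}
{code}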



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13678) StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13678:
-
Target Version/s: 2.10.0, 2.9.3  (was: 2.10.0, 2.9.2)

> StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions
> -
>
> Key: HDFS-13678
> URL: https://issues.apache.org/jira/browse/HDFS-13678
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.5.0
>Reporter: Yiqun Lin
>Priority: Major
>
> In version 2.6.0, we supported more storage types in HDFS, implemented in 
> HDFS-6584. But this seems to be an incompatible change: when we rolling-upgrade 
> our cluster from 2.5.0 to 2.6.0, it throws the following error.
> {noformat}
> 2018-06-14 11:43:39,246 ERROR [DataNode: 
> [[[DISK]file:/home/vipshop/hard_disk/dfs/, [DISK]file:/data1/dfs/, 
> [DISK]file:/data2/dfs/]] heartbeating to xx.xx.xx.xx:8022] 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
> for Block pool BP-670256553-xx.xx.xx.xx-1528795419404 (Datanode Uuid 
> ab150e05-fcb7-49ed-b8ba-f05c27593fee) service to xx.xx.xx.xx:8022
> java.lang.ArrayStoreException
>  at java.util.ArrayList.toArray(ArrayList.java:412)
>  at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1034)
>  at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1030)
>  at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:836)
>  at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:146)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:566)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:664)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:835)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The scenario is that the old DN fails to parse the StorageType it got from 
> the new NN. This error takes place while sending heartbeats to the NN, so 
> blocks won't be reported to the NN successfully. This will lead to subsequent 
> errors.
> Corresponding logic in 2.5.0:
> {code}
>   public static BlockCommand convert(BlockCommandProto blkCmd) {
> ...
> StorageType[][] targetStorageTypes = new StorageType[targetList.size()][];
> List<StorageTypesProto> targetStorageTypesList = 
> blkCmd.getTargetStorageTypesList();
> if (targetStorageTypesList.isEmpty()) { // missing storage types
>   for(int i = 0; i < targetStorageTypes.length; i++) {
> targetStorageTypes[i] = new StorageType[targets[i].length];
> Arrays.fill(targetStorageTypes[i], StorageType.DEFAULT);
>   }
> } else {
>   for(int i = 0; i < targetStorageTypes.length; i++) {
> List<StorageTypeProto> p = 
> targetStorageTypesList.get(i).getStorageTypesList();
> targetStorageTypes[i] = p.toArray(new StorageType[p.size()]);  <-- 
> error here
>   }
> }
> {code}
> But given the current logic, it would be better to return the 
> default type instead of an exception in case StorageType changed (new fields 
> added or new types) in new versions during a rolling upgrade.
> {code:java}
> public static StorageType convertStorageType(StorageTypeProto type) {
> switch(type) {
> case DISK:
>   return StorageType.DISK;
> case SSD:
>   return StorageType.SSD;
> case ARCHIVE:
>   return StorageType.ARCHIVE;
> case RAM_DISK:
>   return StorageType.RAM_DISK;
> case PROVIDED:
>   return StorageType.PROVIDED;
> default:
>   throw new IllegalStateException(
>   "BUG: StorageTypeProto not found, type=" + type);
> }
>   }
> {code}
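
A sketch of the suggested change, with the throw replaced by the default (the cases are kept from the snippet above; only the default branch differs):

{code:java}
public static StorageType convertStorageType(StorageTypeProto type) {
  switch (type) {
  case DISK:
    return StorageType.DISK;
  case SSD:
    return StorageType.SSD;
  case ARCHIVE:
    return StorageType.ARCHIVE;
  case RAM_DISK:
    return StorageType.RAM_DISK;
  case PROVIDED:
    return StorageType.PROVIDED;
  default:
    // An unrecognized proto value (e.g. a type added in a newer release,
    // seen mid rolling upgrade): fall back instead of failing the heartbeat.
    return StorageType.DEFAULT;
  }
}
{code}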



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12257:
-
Target Version/s: 2.8.3, 3.2.0, 2.9.3  (was: 2.8.3, 3.2.0, 2.9.2)

> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Huafeng Wang
>Priority: Major
> Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, 
> HDFS-12257.003.patch
>
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin; I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11885:
-
Target Version/s: 2.8.3, 3.2.0, 2.9.3  (was: 2.8.3, 3.2.0, 2.9.2)

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-771) ChunkGroupOutputStream stream entries need to be properly updated on closed container exception

2018-10-31 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-771:
--
Description: Currently ChunkGroupOutputStream does not increment the 
currentStreamIndex when a chunk write completes but there is no data in the 
buffer. This leads to overwriting of the stream entry.  (was: Currently 
ChunkGroupOutputStream does not increment the currentStreamIndex when a chunk 
write completes but there is no data in the buffer. This leads to overwriting 
of the stream entry.

We also need to update the bcsid in case of a closed container exception. The 
stream entry's bcsid needs to be updated with the bcsid of the committed block.)

> ChunkGroupOutputStream stream entries need to be properly updated on closed 
> container exception
> ---
>
> Key: HDDS-771
> URL: https://issues.apache.org/jira/browse/HDDS-771
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
>
> Currently ChunkGroupOutputStream does not increment the currentStreamIndex 
> when a chunk write completes but there is no data in the buffer. This leads 
> to overwriting of the stream entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-31 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~jnp] for the review. I have fixed the checkstyle issues and committed 
this change to trunk.

The test failures and ASF license warnings are not related to the patch.

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch, 
> HDDS-697.002.patch
>
>
> Similar to putBlock/getBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the bcsId while reading the block, similar to getBlock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-31 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-697:
-
Fix Version/s: 0.4.0

> update and validate the BCSID for PutSmallFile/GetSmallFile command
> ---
>
> Key: HDDS-697
> URL: https://issues.apache.org/jira/browse/HDDS-697
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-697.000.patch, HDDS-697.001.patch, 
> HDDS-697.002.patch
>
>
> Similar to putBlock/getBlock, the putSmallFile transaction in Ratis needs to 
> update the BCSID in the container db on the datanode. getSmallFile should 
> validate the bcsId while reading the block, similar to getBlock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies

2018-10-31 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671118#comment-16671118
 ] 

Xiao Chen commented on HDFS-12946:
--

Thanks for the patch Kitti! I think we're getting close.

Some review comments:
- FSNamesystem: We generally need fsn/fsd locks when accessing internal 
state. In this case, I think DNManager is fine, but ECPManager should be 
protected with a read lock.
- ErasureCodingClusterSetupVerifier: I think we should extract the logic more 
finely. In the NN, we don't need to loop through the datanodes to get the 
number of racks - we can just get it from {{NetworkTopology}} (e.g. via 
DNManager; see the sketch after this list). IMO the 'highly uneven rack' check 
feels like something we can do as a future improvement. It's more subjective, 
and the problem will be visible whether the data is EC'ed or not.
- Following the above, there would be no need to {{reportSet.toArray}} in the 
NN. With thousands of DNs in a cluster, this could be perf-heavy.
- EcClusterSetupVerifyResult: A private class doesn't have to define an 
{{InterfaceStability}}; they're Unstable by default.
- Naming: {{ErasureCodingClusterSetupVerifier}} feels a bit long. How about 
{{ECTopologyVerifier}}? We can assume EC is a known concept since this is an 
HDFS-private class. If it confuses future developers, the class javadoc should 
make it fairly clear. Similarly for the method names: instead of 
{{getVerifyClusterSetupSupportsEnabledEcPoliciesResult}}, I think 
{{getECTopologyVerifierResult}} should be ok, or even {{verifyECWithTopology}}.
- There's an unnecessary change in {{ECBlockGroupsMBean}}.
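
A sketch of the rack-count lookup suggested above ({{NetworkTopology#getNumOfRacks}} is an existing method; the accessor chain around it is an assumption about the wiring):

{code:java}
// Illustrative fragment: read rack and datanode counts from the topology
// instead of iterating over every datanode report.
int numRacks = namesystem.getBlockManager().getDatanodeManager()
    .getNetworkTopology().getNumOfRacks();
int numDataNodes = namesystem.getBlockManager().getDatanodeManager()
    .getNumLiveDataNodes();
{code}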

> Add a tool to check rack configuration against EC policies
> --
>
> Key: HDFS-12946
> URL: https://issues.apache.org/jira/browse/HDFS-12946
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-12946.01.patch, HDFS-12946.02.patch, 
> HDFS-12946.03.patch, HDFS-12946.04.fsck.patch, HDFS-12946.05.patch
>
>
> From testing we have seen setups with problematic racks / datanodes that 
> would not suffice for basic EC usage. These are usually found out only after 
> the tests have failed.
> We should provide a way to check this beforehand.
> Some scenarios:
> - not enough datanodes compared to EC policy's highest data+parity number
> - not enough racks to satisfy BPPRackFaultTolerant
> - highly uneven racks to satisfy BPPRackFaultTolerant
> - highly uneven racks (so that BPP's considerLoad logic may exclude some busy 
> nodes on the rack, resulting in #2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-777) Fix missing jenkins issue in s3gateway module

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671100#comment-16671100
 ] 

Hudson commented on HDDS-777:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15341 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15341/])
HDDS-777. Fix missing jenkins issue in s3gateway module. Contributed by 
(bharat: rev c5eb237e3e951e27565d40a47b6f55e7eb399f5c)
* (edit) hadoop-ozone/s3gateway/pom.xml
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java


> Fix missing jenkins issue in s3gateway module
> -
>
> Key: HDDS-777
> URL: https://issues.apache.org/jira/browse/HDDS-777
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-777.00.patch
>
>
> There were some issues missed in the commits from HDDS-659, and also a 
> spelling mistake in the s3gateway pom.xml. Thank you [~arpitagarwal] for 
> reporting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-785) Ozone shell put key does not create parent directories

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671088#comment-16671088
 ] 

Hadoop QA commented on HDDS-785:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 27s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946466/HDDS-785.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5bb0e258fdd2 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 

[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Attachment: HDFS-13996.004.patch
Status: Patch Available  (was: In Progress)

Uploaded patch rev 004. Addressing checkstyle warnings.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch, HDFS-13996.004.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it's 
> not configurable yet in HttpFS. For now in HttpFS, the ACL permission pattern 
> is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
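
A hedged sketch of what making the pattern configurable could look like on the HttpFS side (the config key and the {{DFSConfigKeys}} import location are assumptions; only the default constant name comes from the description):

{code:java}
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Hypothetical: read the ACL permission pattern from configuration rather
// than hard-coding the WebHDFS default.
final class AclPatternCheck {
  static void validate(Configuration conf, String aclSpec) {
    Pattern aclPattern = Pattern.compile(conf.get(
        "httpfs.acl.permission.pattern",  // hypothetical key name
        DFSConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT));
    if (!aclPattern.matcher(aclSpec).matches()) {
      throw new IllegalArgumentException("Invalid ACL spec: " + aclSpec);
    }
  }
}
{code}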



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13996:
--
Status: In Progress  (was: Patch Available)

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.7.7, 3.0.3, 2.6.5
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it's 
> not configurable yet in HttpFS. For now in HttpFS, the ACL permission pattern 
> is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671078#comment-16671078
 ] 

Bharat Viswanadham commented on HDDS-786:
-

+1 LGTM.
Thank you @yiqun lin for fixing this issue.

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently 
> (https://builds.apache.org/job/PreCommit-HDDS-Build/1517/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html).
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}
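
For context, DLS_DEAD_LOCAL_STORE flags a value assigned to a local variable that is never read afterwards; the fix discussed later in this thread is to delete the unused assignment. A minimal illustration (not the actual SCM code):

{code:java}
// Illustrative only.
class DeadStoreExample {
  String getRemoteUser() { return "user"; }

  void before() {
    String remoteUser = getRemoteUser(); // written, never read: dead store
    System.out.println("serving request");
  }

  void after() {
    // Fix: the unused assignment is simply removed.
    System.out.println("serving request");
  }
}
{code}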



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Target Version/s: 0.3.0, 0.4.0  (was: 0.4.0)

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Description: 
There is a findbugs warning that appeared recently 
(https://builds.apache.org/job/PreCommit-HDDS-Build/1517/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html).
{noformat}
Dead store to remoteUser in 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Bug type DLS_DEAD_LOCAL_STORE (click for details) 
In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
In method 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Local variable named remoteUser
At SCMClientProtocolServer.java:[line 192]
{noformat}

  was:
There is a findbugs warning that appeared recently.
{noformat}
Dead store to remoteUser in 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Bug type DLS_DEAD_LOCAL_STORE (click for details) 
In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
In method 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Local variable named remoteUser
At SCMClientProtocolServer.java:[line 192]
{noformat}


> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently 
> (https://builds.apache.org/job/PreCommit-HDDS-Build/1517/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html).
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671069#comment-16671069
 ] 

Hadoop QA commented on HDDS-753:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
55s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
57s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/server-scm in HDDS-4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} root: The patch generated 0 new + 6 unchanged - 1 
fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 14s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.hdds.scm.pipeline.TestNodeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-753 |
| JIRA Patch URL | 

[jira] [Comment Edited] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671064#comment-16671064
 ] 

Yiqun Lin edited comment on HDDS-786 at 11/1/18 3:31 AM:
-

This is related to JIRA HDDS-608. Removing this line should be the correct 
fix.
Attaching the patch.


was (Author: linyiqun):
I think here we planned to use \{{remoteUser}} to do the access control rather 
than removing it. Attaching the patch.

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Attachment: HDDS-786.001.patch

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Attachment: (was: HDDS-786.001.patch)

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> There is a findbugs warning that appeared recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Status: Patch Available  (was: Open)

I think here we planned to use \{{remoteUser}} to do the access control rather 
than removing it. Attaching the patch.

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> A findbugs warning has shown up recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-786:
---
Attachment: HDDS-786.001.patch

> Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline
> -
>
> Key: HDDS-786
> URL: https://issues.apache.org/jira/browse/HDDS-786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-786.001.patch
>
>
> A findbugs warning has shown up recently.
> {noformat}
> Dead store to remoteUser in 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
> In method 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
> Local variable named remoteUser
> At SCMClientProtocolServer.java:[line 192]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-786) Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline

2018-10-31 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDDS-786:
--

 Summary: Fix the findbugs for 
SCMClientProtocolServer#getContainerWithPipeline
 Key: HDDS-786
 URL: https://issues.apache.org/jira/browse/HDDS-786
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin


A findbugs warning has shown up recently.
{noformat}
Dead store to remoteUser in 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Bug type DLS_DEAD_LOCAL_STORE (click for details) 
In class org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer
In method 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long)
Local variable named remoteUser
At SCMClientProtocolServer.java:[line 192]
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-751) Replace usage of Guava Optional with Java Optional

2018-10-31 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671050#comment-16671050
 ] 

Yiqun Lin commented on HDDS-751:


Rebase the patch.

> Replace usage of Guava Optional with Java Optional
> --
>
> Key: HDDS-751
> URL: https://issues.apache.org/jira/browse/HDDS-751
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-751.002.patch, HDDS-751.003.patch, 
> HDFS-751.001.patch
>
>
> Ozone and HDDS code uses {{com.google.common.base.Optional}} in multiple 
> places.
> Let's replace it with the Java Optional, since we only target JDK 8+.
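
For illustration, a minimal sketch of the mechanical replacement (generic names, 
assuming the common {{fromNullable}}/{{or}} usages):

{code:java}
import java.util.Optional;

class GuavaToJdkOptional {
  static String pick(String rawName) {
    // Guava (before):
    //   com.google.common.base.Optional.fromNullable(rawName).or("default");
    // JDK 8+ (after):
    return Optional.ofNullable(rawName).orElse("default");
  }
}
{code}

Other common mappings: Guava {{Optional.absent()}} becomes {{Optional.empty()}}; 
{{of}}, {{isPresent}} and {{get}} carry over unchanged.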



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-751) Replace usage of Guava Optional with Java Optional

2018-10-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-751:
---
Attachment: HDDS-751.003.patch

> Replace usage of Guava Optional with Java Optional
> --
>
> Key: HDDS-751
> URL: https://issues.apache.org/jira/browse/HDDS-751
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-751.002.patch, HDDS-751.003.patch, 
> HDFS-751.001.patch
>
>
> Ozone and HDDS code uses {{com.google.common.base.Optional}} in multiple 
> places.
> Let's replace it with the Java Optional, since we only target JDK 8+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-31 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved HDFS-12026.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   3.2.0

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently, multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using the flag:
> -std=c++11
> and also the warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2018-10-31 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671035#comment-16671035
 ] 

Sunil Govindan commented on HDFS-12026:
---

HDFS-14033 is committed. Closing this.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Blocker
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch, 
> HDFS-12026.HDFS-8707.007.patch, HDFS-12026.HDFS-8707.008.patch, 
> HDFS-12026.HDFS-8707.009.patch, HDFS-12026.HDFS-8707.010.patch
>
>
> Currently, multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using the flag:
> -std=c++11
> and also the warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-10-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671032#comment-16671032
 ] 

Íñigo Goiri commented on HDFS-13404:


I'm also curious why the regular WebHDFS test doesn't have timing issues.
In any case, I would prefer adding some delay over skipping the whole thing.
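
For instance, a polling wait along these lines (just a sketch; {{fs}} and 
{{renamedPath}} stand in for the test's fixtures):

{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Poll instead of skipping: wait up to 30s for the rename to become visible.
GenericTestUtils.waitFor(() -> {
  try {
    return fs.exists(renamedPath);
  } catch (IOException e) {
    return false;
  }
}, 100, 30000);
{code}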

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-777) Fix missing jenkins issue in s3gateway module

2018-10-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-777:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

The findbugs issue is already fixed (not sure why Jenkins reported it again; the 
variable is no longer used on that line).

Thank you, Arpit Agarwal, for the review.

I have committed this to trunk and ozone-0.3.

> Fix missing jenkins issue in s3gateway module
> -
>
> Key: HDDS-777
> URL: https://issues.apache.org/jira/browse/HDDS-777
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDDS-777.00.patch
>
>
> Some issues were missed in the commits from HDDS-659, along with a spelling 
> mistake in the s3gateway pom.xml. Thank you [~arpitagarwal] for reporting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-777) Fix missing jenkins issue in s3gateway module

2018-10-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-777:

Fix Version/s: 0.4.0
   0.3.0

> Fix missing jenkins issue in s3gateway module
> -
>
> Key: HDDS-777
> URL: https://issues.apache.org/jira/browse/HDDS-777
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-777.00.patch
>
>
> Some issues were missed in the commits from HDDS-659, along with a spelling 
> mistake in the s3gateway pom.xml. Thank you [~arpitagarwal] for reporting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-10-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671018#comment-16671018
 ] 

Íñigo Goiri commented on HDFS-13834:


I think I get it: ConnectionPool is the one failing, but then ConnectionManager 
is not failing anymore, correct?
Then, in addition to this, we should have a ConnectionManager that 
doesn't die even when we ask for an unknown host, right?

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834-HDFS-13891.0.patch, HDFS-13834.0.patch, 
> HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for creating 
> all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems, where the thread died and left the 
> router process in a bad state.
> The thread should also catch a generic error/exception, as sketched after the 
> code block below.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}
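
A minimal sketch of the proposed hardening (illustrative; the actual patch may 
structure it differently):

{code:java}
} catch (InterruptedException e) {
  LOG.error("The connection creator was interrupted");
  this.running = false;
} catch (Throwable t) {
  // Catch everything else so this critical thread never dies silently.
  LOG.error("Unexpected error in connection creator", t);
}
{code}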



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-10-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671015#comment-16671015
 ] 

Íñigo Goiri commented on HDFS-13834:


The unit tests seem to pass here:
https://builds.apache.org/job/PreCommit-HDFS-Build/25400/testReport/org.apache.hadoop.hdfs.server.federation.router/TestConnectionManager/testGetConnectionWithException/

However, I don't fully understand what's going on.
Now, when we get one of these errors, we will just log them and not throw an 
exception, correct?
How is the unit test expecting an exception, not getting it (as it's now 
swallowed), and still passing?
I'm obviously missing something.

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Attachments: HDFS-13834-HDFS-13891.0.patch, HDFS-13834.0.patch, 
> HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for creating 
> all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems, where the thread died and left the 
> router process in a bad state.
> The thread should also catch a generic error/exception.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16671014#comment-16671014
 ] 

Hadoop QA commented on HDDS-779:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-ozone/s3gateway in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-779 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946445/HDDS-779.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 848074fc2f0e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1581/artifact/out/branch-findbugs-hadoop-ozone_s3gateway-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1581/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1581/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670983#comment-16670983
 ] 

Hadoop QA commented on HDFS-12946:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  0s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 481 unchanged - 
0 fixed = 482 total (was 481) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 17 new + 322 unchanged - 0 fixed = 339 total (was 322) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946414/HDFS-12946.05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4bcdf14b233b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25404/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-13752) fs.Path stores file path in java.net.URI causes big memory waste

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670976#comment-16670976
 ] 

Hadoop QA commented on HDFS-13752:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-13752 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13752 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25405/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> fs.Path stores file path in java.net.URI causes big memory waste
> 
>
> Key: HDFS-13752
> URL: https://issues.apache.org/jira/browse/HDFS-13752
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.6
> Environment: Hive 2.1.1 and hadoop 2.7.6 
>Reporter: Barnabas Maidics
>Priority: Major
> Attachments: HDFS-13752.001.patch, HDFS-13752.002.patch, 
> HDFS-13752.003.patch, HDFSbenchmark.pdf, Screen Shot 2018-07-20 at 
> 11.12.38.png, heapdump-10partitions.html, measurement.pdf
>
>
> I was looking at HiveServer2 memory usage, and a big percentage of it was 
> due to org.apache.hadoop.fs.Path, which stores file paths in a 
> java.net.URI object. The URI implementation stores the same string in 3 
> different objects (see the attached image). In Hive, when there are many 
> partitions, this causes high memory usage. In my particular case 42% of memory 
> was used by java.net.URI, so it could be reduced to 14%. 
> I wonder if the community is open to replacing it with a more memory-efficient 
> implementation, and what other things should be considered here? It can be a 
> huge memory improvement for Hadoop and for Hive as well.
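
To make the duplication concrete, a small demo (the overlapping copies live in 
java.net.URI internals such as its full string form, scheme-specific part and 
path; the exact fields vary by JDK version):

{code:java}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class PathMemoryDemo {
  public static void main(String[] args) {
    Path p = new Path("hdfs://nn:8020/warehouse/db/table/part=1/file");
    URI u = p.toUri();
    // Three views over largely the same characters, each backed by its
    // own String inside URI:
    System.out.println(u.toString());               // the full URI
    System.out.println(u.getSchemeSpecificPart());  // //nn:8020/warehouse/...
    System.out.println(u.getPath());                // /warehouse/db/table/...
  }
}
{code}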



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-778) Add an interface for CA and Clients for Certificate operations

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670975#comment-16670975
 ] 

Hadoop QA commented on HDDS-778:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
41s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
20s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 20s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
46s{color} | {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdds_common generated 4 new + 0 unchanged - 0 
fixed = 4 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  9s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946436/HDDS-778-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d210fec99ae0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 5df4129 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1580/artifact/out/patch-mvninstall-hadoop-hdds_common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1580/artifact/out/patch-compile-hadoop-hdds_common.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1580/artifact/out/patch-compile-hadoop-hdds_common.txt
 |
| mvnsite | 

[jira] [Commented] (HDFS-14008) NN should log snapshotdiff report

2018-10-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670966#comment-16670966
 ] 

Wei-Chiu Chuang commented on HDFS-14008:


+1

> NN should log snapshotdiff report
> -
>
> Key: HDFS-14008
> URL: https://issues.apache.org/jira/browse/HDFS-14008
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 3.1.1, 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-14008.001.patch, HDFS-14008.002.patch, 
> HDFS-14008.003.patch
>
>
> It will be helpful to log a message for snapshotdiff, to correlate snapshotdiff 
> operations against memory spikes in the NN heap. It will be good to log the below 
> details at the end of a snapshot diff operation; this will help us know the 
> time spent in the snapshotdiff operation and the number of 
> files/directories processed and compared.
> a) Total dirs processed
> b) Total dirs compared
> c) Total files processed
> d) Total files compared
> e) Total children listing time



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14008) NN should log snapshotdiff report

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670928#comment-16670928
 ] 

Hadoop QA commented on HDFS-14008:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946215/HDFS-14008.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2d27f563db03 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HDFS-14039) ec -listPolicies doesn't show correct state for the default policy when the default is not RS(6,3)

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670919#comment-16670919
 ] 

Hadoop QA commented on HDFS-14039:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14039 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946408/HDFS-14039.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b18a7f87d206 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25403/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25403/testReport/ |
| Max. process+thread count | 3180 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Updated] (HDDS-785) Ozone shell put key does not create parent directories

2018-10-31 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-785:

Status: Patch Available  (was: Open)

> Ozone shell put key does not create parent directories
> --
>
> Key: HDDS-785
> URL: https://issues.apache.org/jira/browse/HDDS-785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-785.001.patch
>
>
> When we create a key in ozone through Ozone Shell, the parent directory 
> structure is not created. 
> {code:java}
> $ ./ozone sh key put /volume1/bucket1/o3sh/t1/dir1/file1 /etc/hosts -r=ONE 
> $ ./ozone sh key list /volume1/bucket1 
> [ { 
>    ….
>    "size" : 5898, 
>    "keyName" : "o3sh/t1/dir1/file1” 
> } ] 
> $ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/ 
> ls: `o3fs://bucket1.volume1/o3sh/t1/dir1/': No such file or directory 
> $ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/file1 
> -rw-rw-rw- 1 hk hk       5898 2018-10-23 18:02 
> o3fs://bucket1.volume1/o3sh/t1/dir1/file1{code}
> OzoneFileSystem and S3AFileSystem, when creating files, create the parent 
> directories if they do not exist. We should match this behavior in Ozone 
> shell as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-785) Ozone shell put key does not create parent directories

2018-10-31 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-785:

Attachment: HDDS-785.001.patch

> Ozone shell put key does not create parent directories
> --
>
> Key: HDDS-785
> URL: https://issues.apache.org/jira/browse/HDDS-785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-785.001.patch
>
>
> When we create a key in ozone through Ozone Shell, the parent directory 
> structure is not created. 
> {code:java}
> $ ./ozone sh key put /volume1/bucket1/o3sh/t1/dir1/file1 /etc/hosts -r=ONE 
> $ ./ozone sh key list /volume1/bucket1 
> [ { 
>    ….
>    "size" : 5898, 
>    "keyName" : "o3sh/t1/dir1/file1” 
> } ] 
> $ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/ 
> ls: `o3fs://bucket1.volume1/o3sh/t1/dir1/': No such file or directory 
> $ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/file1 
> -rw-rw-rw- 1 hk hk       5898 2018-10-23 18:02 
> o3fs://bucket1.volume1/o3sh/t1/dir1/file1{code}
> OzoneFileSystem and S3AFileSystem, when creating files, create the parent 
> directories if they do not exist. We should match this behavior in Ozone 
> shell as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-785) Ozone shell put key does not create parent directories

2018-10-31 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-785:
---

 Summary: Ozone shell put key does not create parent directories
 Key: HDDS-785
 URL: https://issues.apache.org/jira/browse/HDDS-785
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


When we create a key in ozone through Ozone Shell, the parent directory 
structure is not created. 
{code:java}
$ ./ozone sh key put /volume1/bucket1/o3sh/t1/dir1/file1 /etc/hosts -r=ONE 
$ ./ozone sh key list /volume1/bucket1 
[ { 
   ….
   "size" : 5898, 
   "keyName" : "o3sh/t1/dir1/file1” 
} ] 

$ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/ 
ls: `o3fs://bucket1.volume1/o3sh/t1/dir1/': No such file or directory 

$ ./ozone fs -ls o3fs://bucket1.volume1/o3sh/t1/dir1/file1 
-rw-rw-rw- 1 hk hk       5898 2018-10-23 18:02 
o3fs://bucket1.volume1/o3sh/t1/dir1/file1{code}
OzoneFileSystem and S3AFileSystem, when creating files, create the parent 
directories if they do not exist. We should match this behavior in Ozone shell 
as well.
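
A minimal sketch of the shape of such a fix, assuming the o3fs convention of 
modeling directories as zero-length keys ending in "/" (the helper name is 
illustrative):

{code:java}
// For "o3sh/t1/dir1/file1" this yields ["o3sh/", "o3sh/t1/", "o3sh/t1/dir1/"];
// each would be created as an empty key before the file key is written.
static java.util.List<String> parentDirKeys(String keyName) {
  java.util.List<String> parents = new java.util.ArrayList<>();
  int idx = keyName.indexOf('/');
  while (idx >= 0) {
    parents.add(keyName.substring(0, idx + 1)); // trailing '/' marks a dir key
    idx = keyName.indexOf('/', idx + 1);
  }
  return parents;
}
{code}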



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-777) Fix missing jenkins issue in s3gateway module

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670905#comment-16670905
 ] 

Hadoop QA commented on HDDS-777:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-ozone/s3gateway in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-ozone/s3gateway generated 0 new + 0 unchanged 
- 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-777 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946430/HDDS-777.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux a5caf3bcb052 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1579/artifact/out/branch-findbugs-hadoop-ozone_s3gateway-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1579/testReport/ |
| Max. 

[jira] [Commented] (HDDS-592) Fix ozone-secure.robot test

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670883#comment-16670883
 ] 

Hadoop QA commented on HDDS-592:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 8s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 40s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestSecureOzoneCluster |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-592 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946412/HDDS-592-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  unit  compile  javac  javadoc  mvninstall  
mvnsite  shadedclient  shellcheck  shelldocs  |
| uname | Linux 9b6f441d3862 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 5df4129 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1578/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1578/testReport/ |
| Max. process+thread count | 2756 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDDS-697) update and validate the BCSID for PutSmallFile/GetSmallFile command

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670880#comment-16670880
 ] 

Hadoop QA commented on HDDS-697:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 42s{color} | {color:orange} root: The patch generated 4 new + 0 unchanged - 
0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 46s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
46s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670854#comment-16670854
 ] 

Hadoop QA commented on HDFS-13794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
19s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} HDFS-12090 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 17m 
25s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 1 new + 461 unchanged 
- 0 fixed = 462 total (was 461) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
22s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946236/HDFS-13794-HDFS-12090.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux db817e0f75cc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12090 / 06477ab |
| maven | version: Apache Maven 3.3.9 |
| Default Java 

[jira] [Resolved] (HDDS-688) Hive Query hangs, if DN's are restarted before the query is submitted

2018-10-31 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari resolved HDDS-688.
---
Resolution: Fixed

This is fixed with the recent changes. Resolving it.

> Hive Query hangs, if DN's are restarted before the query is submitted
> -
>
> Key: HDDS-688
> URL: https://issues.apache.org/jira/browse/HDDS-688
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> Run a Hive Insert Query. It runs fine as below:
> {code:java}
> 0: jdbc:hive2://ctr-e138-1518143905142-510793> insert into testo3 values(1, 
> "aa", 3.0);
> INFO : Compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Semantic Analysis Completed (retrial = false)
> INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, 
> type:int, comment:null), FieldSchema(name:_col1, type:string, comment:null), 
> FieldSchema(name:_col2, type:float, comment:null)], properties:null)
> INFO : Completed compiling 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607); 
> Time taken: 0.52 seconds
> INFO : Executing 
> command(queryId=hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607): 
> insert into testo3 values(1, "aa", 3.0)
> INFO : Query ID = hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [] for queryId: 
> hive_20181018005729_fe644ab2-f8cc-41c3-b2d8-ffe1022de607
> INFO : Session is already open
> INFO : Dag name: insert into testo3 values(1, "aa", 3.0) (Stage-1)
> INFO : Status: Running (Executing on YARN cluster with App id 
> application_1539383731490_0073)
> --
> VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
> --
> Map 1 .. container SUCCEEDED 1 1 0 0 0 0
> Reducer 2 .. container SUCCEEDED 1 1 0 0 0 0
> --
> VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 11.95 s
> --
> INFO : Status: DAG finished successfully in 10.68 seconds
> INFO :
> INFO : Query Execution Summary
> INFO : 
> --
> INFO : OPERATION DURATION
> INFO : 
> --
> INFO : Compile Query 0.52s
> INFO : Prepare Plan 0.23s
> INFO : Get Query Coordinator (AM) 0.00s
> INFO : Submit Plan 0.11s
> INFO : Start DAG 0.57s
> INFO : Run DAG 10.68s
> INFO : 
> --
> INFO :
> INFO : Task Execution Summary
> INFO : 
> --
> INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS 
> OUTPUT_RECORDS
> INFO : 
> --
> INFO : Map 1 7074.00 11,280 276 3 1
> INFO : Reducer 2 1074.00 2,040 0 1 0
> INFO : 
> --
> INFO :
> INFO : org.apache.tez.common.counters.DAGCounter:
> INFO : NUM_SUCCEEDED_TASKS: 2
> INFO : TOTAL_LAUNCHED_TASKS: 2
> INFO : AM_CPU_MILLISECONDS: 1390
> INFO : AM_GC_TIME_MILLIS: 0
> INFO : File System Counters:
> INFO : FILE_BYTES_READ: 135
> INFO : FILE_BYTES_WRITTEN: 135
> INFO : HDFS_BYTES_WRITTEN: 199
> INFO : HDFS_READ_OPS: 3
> INFO : HDFS_WRITE_OPS: 2
> INFO : HDFS_OP_CREATE: 1
> INFO : HDFS_OP_GET_FILE_STATUS: 3
> INFO : HDFS_OP_RENAME: 1
> INFO : org.apache.tez.common.counters.TaskCounter:
> INFO : SPILLED_RECORDS: 0
> INFO : NUM_SHUFFLED_INPUTS: 1
> INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
> INFO : GC_TIME_MILLIS: 276
> INFO : TASK_DURATION_MILLIS: 8474
> INFO : CPU_MILLISECONDS: 13320
> INFO : PHYSICAL_MEMORY_BYTES: 4294967296
> INFO : VIRTUAL_MEMORY_BYTES: 11205029888
> INFO : COMMITTED_HEAP_BYTES: 4294967296
> INFO : INPUT_RECORDS_PROCESSED: 5
> INFO : INPUT_SPLIT_LENGTH_BYTES: 1
> INFO : OUTPUT_RECORDS: 1
> INFO : OUTPUT_LARGE_RECORDS: 0
> INFO : OUTPUT_BYTES: 94
> INFO : OUTPUT_BYTES_WITH_OVERHEAD: 102
> INFO : OUTPUT_BYTES_PHYSICAL: 127
> INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
> INFO : 

[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670848#comment-16670848
 ] 

Wei-Chiu Chuang commented on HDFS-13996:


OK, that's fine if you didn't touch it.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not yet configurable in HttpFS. For now, HttpFS fixes the ACL permission 
> pattern to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.

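As a rough illustration of the kind of change being asked for, the sketch below reads the ACL permission pattern from configuration instead of hard-coding the default; the configuration key and the fallback regex are placeholders invented here, not the actual HDFS/HttpFS names.
{code:java}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: read the ACL permission pattern from configuration
// instead of hard-coding a *_DEFAULT constant. The key and the fallback
// regex below are illustrative placeholders only.
public class AclPatternSketch {
  private static final String ACL_PATTERN_KEY =
      "example.httpfs.acl.permission.pattern";            // placeholder key
  private static final String ACL_PATTERN_DEFAULT =
      "(default:)?(user|group|mask|other):[^:]*:[rwx-]*"; // placeholder regex

  static Pattern loadAclPattern(Configuration conf) {
    return Pattern.compile(conf.get(ACL_PATTERN_KEY, ACL_PATTERN_DEFAULT));
  }
}
{code}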


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-778) Add an interface for CA and Clients for Certificate operations

2018-10-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670843#comment-16670843
 ] 

Anu Engineer commented on HDDS-778:
---

bq. Having component in function parameters gives an impression that one 
component can override/write private keys for other one which we would like to 
avoid.

It is the same process, so we have the same security boundary. We need this 
because we will want to store the certificates of the different components we 
have talked to, or the certs we have fetched from SCM.

For components like SCM, there will be CA certs and non-CA certs, so overall 
this helps.

bq. Do we need api to get certificate for given component/client or check for 
it?
Yes, that will be the handler for the QueryCertificate call on the server 
side, so we do need it.

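To make the shape of the discussion concrete, here is a minimal sketch of a per-component certificate interface; all names (CertificateClientSketch, storeCertificate, queryCertificate) are invented for illustration and are not taken from the HDDS-778 patch.
{code:java}
import java.io.IOException;
import java.security.cert.X509Certificate;

/**
 * Hypothetical sketch of a certificate interface keyed by component,
 * illustrating the idea discussed above; not the actual HDDS-778 API.
 */
public interface CertificateClientSketch {

  /** Store a certificate obtained from SCM on behalf of a component. */
  void storeCertificate(String component, X509Certificate certificate)
      throws IOException;

  /**
   * Look up a stored certificate for a component; on the server side this
   * would back the QueryCertificate call. Returns null if none is stored.
   */
  X509Certificate queryCertificate(String component, String serialId)
      throws IOException;
}
{code}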

> Add an interface for CA and Clients for Certificate operations
> --
>
> Key: HDDS-778
> URL: https://issues.apache.org/jira/browse/HDDS-778
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, SCM Client
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-778-HDDS-4.001.patch
>
>
> This JIRA proposes to add an interface specification that can be programmed 
> against by Datanodes and Ozone Manager and other clients that want to use the 
> certificate-based security features of HDDS.
> We will also add a Certificate Server interface; this interface can be used 
> with a non-SCM-based CA, or when we need to use HSM-based secret storage 
> services. 
> At this point, it is simply an interface and nothing more. Thanks to [~xyao] 
> for suggesting this idea.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-779:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Thank you [~dineshchitlangia] for reporting and fixing this issue.

This has been taken care of under HDDS-777.

> Fix ASF License violation in S3Consts and S3Utils
> -
>
> Key: HDDS-779
> URL: https://issues.apache.org/jira/browse/HDDS-779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.3.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-779.001.patch
>
>
> Spotted this issue during one of the Jenkins runs for HDDS-120.
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1569/artifact/out/patch-asflicense-problems.txt]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-782) MR example pi job runs 5 min for 1 Map/1 Sample

2018-10-31 Thread Soumitra Sulav (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670828#comment-16670828
 ] 

Soumitra Sulav commented on HDDS-782:
-

Similar observation after running a TeraSort job on a 10 KB file with 2 splits.

Total time was 6 minutes.

> MR example pi job runs 5 min for 1 Map/1 Sample
> ---
>
> Key: HDDS-782
> URL: https://issues.apache.org/jira/browse/HDDS-782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Major
> Attachments: ozoneMRJob.log
>
>
> Running the hadoop examples pi job takes 250+ seconds, whereas it generally 
> runs in a few seconds on an HDFS cluster.
> The service/job logs show a few _SocketTimeoutException_ occurrences in 
> between, and YARN keeps _Waiting for AsyncDispatcher to drain_. The thread 
> state is _WAITING_ for a very long interval.
>  
> Refer attached log for further details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14041) NegativeArraySizeException when PROVIDED replication >1

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670825#comment-16670825
 ] 

Hadoop QA commented on HDFS-14041:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946307/HDFS-14041.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cd0aae80ad11 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 478b2cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670822#comment-16670822
 ] 

Dinesh Chitlangia commented on HDDS-779:


cc: [~elek], [~bharatviswa]

> Fix ASF License violation in S3Consts and S3Utils
> -
>
> Key: HDDS-779
> URL: https://issues.apache.org/jira/browse/HDDS-779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.3.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-779.001.patch
>
>
> Spotted this issue during one of the Jenkins runs for HDDS-120.
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1569/artifact/out/patch-asflicense-problems.txt]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14042) NPE when PROVIDED storage is missing

2018-10-31 Thread JIRA
Íñigo Goiri created HDFS-14042:
--

 Summary: NPE when PROVIDED storage is missing
 Key: HDFS-14042
 URL: https://issues.apache.org/jira/browse/HDFS-14042
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Íñigo Goiri
Assignee: Virajith Jalaparti


java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateStorageStats(DatanodeDescriptor.java:460)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.updateHeartbeatState(DatanodeDescriptor.java:390)
at 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager.updateLifeline(HeartbeatManager.java:254)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.handleLifeline(DatanodeManager.java:1789)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleLifeline(FSNamesystem.java:3997)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendLifeline(NameNodeRpcServer.java:1666)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeLifelineProtocolServerSideTranslatorPB.sendLifeline(DatanodeLifelineProtocolServerSideTranslatorPB.java:62)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeLifelineProtocolProtos$DatanodeLifelineProtocolService$2.callBlockingMethod(DatanodeLifelineProtocolProtos.java:409)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:898)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:844)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2727)
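
The patch is not attached here, so the following is only an illustration of the defensive shape that avoids this kind of NPE when a report refers to a storage the descriptor does not know about; all names are invented stand-ins, not the actual DatanodeDescriptor code.
{code:java}
import java.util.HashMap;
import java.util.Map;

// Invented stand-in for per-storage state: skip reports for storages the
// descriptor does not know about instead of dereferencing a null entry.
public class StorageStatsSketch {
  private final Map<String, long[]> statsByStorageId = new HashMap<>();

  void updateStorageStats(String storageId, long capacity, long used) {
    long[] stats = statsByStorageId.get(storageId);
    if (stats == null) {
      // e.g. a PROVIDED storage that was never registered: ignore, don't NPE
      System.err.println("Skipping report for unknown storage " + storageId);
      return;
    }
    stats[0] = capacity;
    stats[1] = used;
  }
}
{code}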




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670818#comment-16670818
 ] 

Siyao Meng commented on HDFS-13996:
---

I'm not sure about one checkstyle warning:
{code:java}
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java:57:
  public Statement apply(Statement statement, FrameworkMethod frameworkMethod, 
Object o) {:36: 'statement' hides a field. [HiddenField]
{code}
I didn't change it at all. Should I ignore it? [~jojochuang]
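
For context, checkstyle's HiddenField rule fires whenever a method or constructor parameter shadows a field of the enclosing class, as in this stripped-down sketch (names invented for illustration):
{code:java}
// Minimal illustration of checkstyle's HiddenField warning: the method
// parameter 'statement' shadows the field 'statement' of the class.
public class HiddenFieldExample {
  private String statement;

  // checkstyle flags this line: 'statement' hides a field. [HiddenField]
  public void apply(String statement) {
    this.statement = statement; // the shadowing assignment is intentional
  }
}
{code}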

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously, in HDFS-11421, WebHDFS' ACLs RegEx was made configurable, but it 
> is not yet configurable in HttpFS. For now, HttpFS fixes the ACL permission 
> pattern to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-784) ozone fs volume created with non-existing unix user

2018-10-31 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-784:
---

 Summary: ozone fs volume created with non-existing unix user
 Key: HDDS-784
 URL: https://issues.apache.org/jira/browse/HDDS-784
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


The ozone command to create a volume runs successfully with any username as 
the owner, even if that user does not exist on the unix system.

The command logs a security warning _(security.ShellBasedUnixGroupsMapping)_ 
but still creates the volume.

As a result we can't list the volume; a volume listing as root returns an 
empty list.

ozone CLI command run:
{code:java}
ozone sh volume create testvolume -u=hdfs{code}
WARNING thrown:
{code:java}
2018-10-30 10:19:38,268 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-30 10:19:39,061 WARN security.ShellBasedUnixGroupsMapping: unable to 
return groups for user hdfs
PartialGroupNameException The user name 'hdfs' is not found. id: hdfs: no such 
user
id: hdfs: no such user
at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
 at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
 at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
 at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
 at 
org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:387)
 at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:321)
 at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:270)
 at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
 at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
 at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
 at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
 at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:228)
 at 
org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1588)
 at 
org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1576)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:187)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy15.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:82)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:103)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:41)
 at picocli.CommandLine.execute(CommandLine.java:919)
 at picocli.CommandLine.access$700(CommandLine.java:104)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
 at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
 at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
 at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
 at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
 at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:80)
2018-10-30 10:19:39,073 INFO rpc.RpcClient: Creating Volume: testvolume, with 
hdfs as owner and quota set to 1152921504606846976 bytes.
{code}
Empty volume list returned:
{code:java}
[root@ctr-e138-1518143905142-552728-01-02 ~]# ozone sh volume list
2018-10-30 10:20:03,275 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
[ ]{code}
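
A minimal sketch of the kind of owner check the CLI could perform before creating the volume is shown below; it assumes that failing group resolution is an acceptable proxy for a non-existent user, and it is not the actual RpcClient code.
{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical guard: reject owners whose groups cannot be resolved,
// mirroring the ShellBasedUnixGroupsMapping warning seen above.
public final class OwnerCheckSketch {
  static void validateOwner(String owner) throws IOException {
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser(owner);
    if (ugi.getGroupNames().length == 0) {
      throw new IOException("Owner '" + owner + "' is not a known user");
    }
  }
}
{code}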
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-783) writeStateMachineData times out (tracking RATIS-382)

2018-10-31 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-783:
---
Summary: writeStateMachineData times out (tracking RATIS-382)  (was: 
writeStateMachineData times out (tracking RATIS fix))

> writeStateMachineData times out (tracking RATIS-382)
> 
>
> Key: HDDS-783
> URL: https://issues.apache.org/jira/browse/HDDS-783
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.3.0
>
>
> Tracking jira to address RATIS-382.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-783) writeStateMachineData times out (tracking RATIS fix)

2018-10-31 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-783:
---
Summary: writeStateMachineData times out (tracking RATIS fix)  (was: 
writeStateMachineData times out)

> writeStateMachineData times out (tracking RATIS fix)
> 
>
> Key: HDDS-783
> URL: https://issues.apache.org/jira/browse/HDDS-783
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.3.0
>
>
> Tracking jira to address RATIS-382.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-783) writeStateMachineData times out

2018-10-31 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-783:
---
Description: Tracking jira to address RATIS-382.  (was: datanode stopped 
due to following error :

datanode.log
{noformat}
2018-10-31 09:12:04,517 INFO org.apache.ratis.server.impl.RaftServerImpl: 
9fab9937-fbcd-4196-8014-cb165045724b: set configuration 169: 
[9fab9937-fbcd-4196-8014-cb165045724b:172.27.15.131:9858, 
ce0084c2-97cd-4c97-9378-e5175daad18b:172.27.15.139:9858, 
f0291cb4-7a48-456a-847f-9f91a12aa850:172.27.38.9:9858], old=null at 169
2018-10-31 09:12:22,187 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
Terminating with exit status 1: 
9fab9937-fbcd-4196-8014-cb165045724b-RaftLogWorker failed.
org.apache.ratis.protocol.TimeoutIOException: Timeout: WriteLog:182: (t:10, 
i:182), STATEMACHINELOGENTRY, client-611073BBFA46, cid=127-writeStateMachineData
 at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:87)
 at 
org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:310)
 at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:182)
 at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
 at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
 at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:79)
 ... 3 more{noformat})

> writeStateMachineData times out
> ---
>
> Key: HDDS-783
> URL: https://issues.apache.org/jira/browse/HDDS-783
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.3.0
>
>
> Tracking jira to address RATIS-382.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-783) writeStateMachineData times out

2018-10-31 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved RATIS-384 to HDDS-783:
--

Fix Version/s: (was: 0.3.0)
   0.3.0
Affects Version/s: (was: 0.3.0)
   0.3.0
 Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
  Key: HDDS-783  (was: RATIS-384)
  Project: Hadoop Distributed Data Store  (was: Ratis)

> writeStateMachineData times out
> ---
>
> Key: HDDS-783
> URL: https://issues.apache.org/jira/browse/HDDS-783
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.3.0
>
>
> datanode stopped due to following error :
> datanode.log
> {noformat}
> 2018-10-31 09:12:04,517 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 9fab9937-fbcd-4196-8014-cb165045724b: set configuration 169: 
> [9fab9937-fbcd-4196-8014-cb165045724b:172.27.15.131:9858, 
> ce0084c2-97cd-4c97-9378-e5175daad18b:172.27.15.139:9858, 
> f0291cb4-7a48-456a-847f-9f91a12aa850:172.27.38.9:9858], old=null at 169
> 2018-10-31 09:12:22,187 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
> Terminating with exit status 1: 
> 9fab9937-fbcd-4196-8014-cb165045724b-RaftLogWorker failed.
> org.apache.ratis.protocol.TimeoutIOException: Timeout: WriteLog:182: (t:10, 
> i:182), STATEMACHINELOGENTRY, client-611073BBFA46, 
> cid=127-writeStateMachineData
>  at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:87)
>  at 
> org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:310)
>  at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:182)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.TimeoutException
>  at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
>  at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:79)
>  ... 3 more{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-781) Ambari HDP NoClassDefFoundError for MR jobs

2018-10-31 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HDDS-781:

Description: 
HDP integrated with Ambari has a 
_/*usr/hdp//hadoop/mapreduce.tar.gz*_ file containing all the libraries 
needed for an MR job to run; it is copied into the YARN containers at 
execution time.

With the introduction of the Ozone filesystem, the relevant jars need to be 
packaged as part of this tar; the tar itself is placed by the _yum install 
hadoop_ step that Ambari performs during cluster setup.

During an MR job run, I faced the java.lang.*NoClassDefFoundError* exceptions 
below:
{code:java}
org/apache/hadoop/fs/ozone/OzoneFileSystem
org/apache/ratis/proto/RaftProtos$ReplicationLevel
org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum
{code}
 

Adding the relevant jar in the mentioned tar file resolves the exception.

Complete stacktrace for one of the *NoClassDefFoundError* exception :
{code:java}
2018-10-31 10:03:05,191 ERROR [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoClassDefFoundError: 
org/apache/ratis/proto/RaftProtos$ReplicationLevel
 at org.apache.hadoop.hdds.scm.ScmConfigKeys.(ScmConfigKeys.java:64)
 at org.apache.hadoop.ozone.OzoneConfigKeys.(OzoneConfigKeys.java:221)
 at org.apache.hadoop.ozone.client.OzoneBucket.(OzoneBucket.java:116)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getBucketDetails(RpcClient.java:421)
 at org.apache.hadoop.ozone.client.OzoneVolume.getBucket(OzoneVolume.java:214)
 at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:127)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
 at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
 at 
org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
 at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.proto.RaftProtos$ReplicationLevel
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 29 more
2018-10-31 10:03:05,203 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
with status 1: java.lang.NoClassDefFoundError: 
org/apache/ratis/proto/RaftProtos$ReplicationLevel{code}
 

  was:
HDP integrated with Ambari has a 
_/*usr/hdp//hadoop/mapreduce.tar.gz*_ file containing all the 
libraries needed for a MR job to run and is copied in the yarn containers at 
time of execution.

As introducing ozone filesystem, relevant jars need to be packaged as part of 
the tar, also the tar is placed as part of _yum install hadoop_ components done 
by Ambari during cluster setup.

During an MR Job run, I faced below java.lang.*NoClassDefFoundError* exceptions 
:

 
{code:java}
org/apache/hadoop/fs/ozone/OzoneFileSystem
org/apache/ratis/proto/RaftProtos$ReplicationLevel
org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum
{code}
 

Adding the relevant jar in the mentioned tar file resolves the exception.

 

Complete stacktrace for one of the *NoClassDefFoundError* exception :
{code:java}

[jira] [Commented] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670802#comment-16670802
 ] 

Hadoop QA commented on HDFS-13834:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
57s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:3e39f4f |
| JIRA Issue | HDFS-13834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946339/HDFS-13834-HDFS-13891.0.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e6720ff7a435 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 39114c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25400/testReport/ |
| Max. process+thread count | 961 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25400/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Connection creator thread should catch Throwable
> -

[jira] [Commented] (HDDS-781) Ambari HDP NoClassDefFoundError for MR jobs

2018-10-31 Thread Soumitra Sulav (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670800#comment-16670800
 ] 

Soumitra Sulav commented on HDDS-781:
-

The above exceptions were resolved by adding the jars containing the missing 
classes to the *hadoop/share/hadoop/mapreduce/lib/* folder of mapreduce.tar.gz:
{code:java}
org/apache/hadoop/fs/ozone/OzoneFileSystem - 
hadoop-ozone-filesystem-0.3.0-SNAPSHOT.jar
org/apache/ratis/proto/RaftProtos$ReplicationLevel - 
ratis-proto-0.3.0-9b2d7b6-SNAPSHOT.jar
org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum - 
ratis-thirdparty-0.1.0-SNAPSHOT.jar{code}
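
For anyone hitting the same issue, a quick way to confirm the repacked tar 
actually carries the classes is to probe them from a small driver before 
resubmitting the job. A minimal sketch, using only the class names from the 
exceptions above; everything else is illustrative:
{code:java}
// Minimal classpath probe: fails fast if any of the jars above are still
// missing from the repacked mapreduce.tar.gz.
public class ClasspathProbe {
  public static void main(String[] args) throws Exception {
    String[] required = {
        "org.apache.hadoop.fs.ozone.OzoneFileSystem",
        "org.apache.ratis.proto.RaftProtos$ReplicationLevel",
        "org.apache.ratis.thirdparty.com.google.protobuf.ProtocolMessageEnum"
    };
    for (String name : required) {
      // Class.forName throws ClassNotFoundException when the jar is absent.
      Class.forName(name);
      System.out.println("Found " + name);
    }
  }
}
{code}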

> Ambari HDP NoClassDefFoundError for MR jobs
> ---
>
> Key: HDDS-781
> URL: https://issues.apache.org/jira/browse/HDDS-781
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Major
>
> HDP integrated with Ambari has a 
> _/*usr/hdp//hadoop/mapreduce.tar.gz*_ file containing all the 
> libraries needed for an MR job to run; it is copied into the YARN containers 
> at execution time.
> With the introduction of the ozone filesystem, the relevant jars need to be 
> packaged as part of that tar. The tar is also placed as part of the _yum 
> install hadoop_ components installed by Ambari during cluster setup.
> During an MR job run, I faced the java.lang.*NoClassDefFoundError* 
> exceptions below:
>  
> {code:java}
> org/apache/hadoop/fs/ozone/OzoneFileSystem
> org/apache/ratis/proto/RaftProtos$ReplicationLevel
> org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum
> {code}
>  
> Adding the relevant jars to the mentioned tar file resolves the exceptions.
>  
> Complete stacktrace for one of the *NoClassDefFoundError* exceptions:
> {code:java}
> 2018-10-31 10:03:05,191 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.NoClassDefFoundError: 
> org/apache/ratis/proto/RaftProtos$ReplicationLevel
>  at org.apache.hadoop.hdds.scm.ScmConfigKeys.(ScmConfigKeys.java:64)
>  at org.apache.hadoop.ozone.OzoneConfigKeys.(OzoneConfigKeys.java:221)
>  at org.apache.hadoop.ozone.client.OzoneBucket.(OzoneBucket.java:116)
>  at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getBucketDetails(RpcClient.java:421)
>  at org.apache.hadoop.ozone.client.OzoneVolume.getBucket(OzoneVolume.java:214)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:127)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
>  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
>  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
>  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
>  at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
>  at 
> org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
>  at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.ratis.proto.RaftProtos$ReplicationLevel
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>  at 

[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670799#comment-16670799
 ] 

Wei-Chiu Chuang commented on HDFS-13996:


Thanks [~smeng]. Would you please also take care of the checkstyle warnings? 
After that, I think the patch is ready.

> Make HttpFS' ACLs RegEx configurable
> 
>
> Key: HDFS-13996
> URL: https://issues.apache.org/jira/browse/HDFS-13996
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.5, 3.0.3, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13996.001.patch, HDFS-13996.002.patch, 
> HDFS-13996.003.patch
>
>
> Previously, in HDFS-11421, the WebHDFS ACLs RegEx was made configurable, but 
> it's not yet configurable in HttpFS. For now, the HttpFS ACL permission 
> pattern is fixed to DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT.
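
As a rough illustration of the proposed change (not the actual patch), the 
pattern would be read from the server configuration and fall back to the 
current fixed default, assuming the default constant from HDFS-11421 is 
visible to HttpFS. The config key name {{httpfs.acl.permission.pattern}} 
below is a hypothetical placeholder:
{code:java}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Sketch only: make the ACL permission RegEx configurable instead of
// hard-coding it. The key name "httpfs.acl.permission.pattern" is
// hypothetical, not taken from the patch.
public class AclPatternConfig {
  private static final String ACL_PATTERN_KEY =
      "httpfs.acl.permission.pattern";

  static Pattern aclPermissionPattern(Configuration conf) {
    // Fall back to the currently fixed default when the key is unset.
    return Pattern.compile(conf.get(ACL_PATTERN_KEY,
        DFSConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT));
  }
}
{code}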



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-117) Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.

2018-10-31 Thread Danilo Perez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danilo Perez reassigned HDDS-117:
-

Assignee: Danilo Perez

> Wrapper for set/get Standalone, Ratis and Rest Ports in DatanodeDetails.
> 
>
> Key: HDDS-117
> URL: https://issues.apache.org/jira/browse/HDDS-117
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Danilo Perez
>Priority: Major
>  Labels: newbie
>
> It will be very helpful to have a wrapper for the set/get of the Standalone, 
> Ratis and REST ports in DatanodeDetails.
> Search for and replace direct usage of DatanodeDetails#newPort in the 
> current code.
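
A hedged sketch of what such a wrapper could look like, so callers stop 
constructing ports by hand; the class and method names below are 
illustrative, not the final API:
{code:java}
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.DatanodeDetails.Port;

// Illustrative wrapper (names are hypothetical): centralizes set/get of the
// well-known ports instead of calling DatanodeDetails#newPort everywhere.
final class DatanodePorts {
  private DatanodePorts() { }

  static void setRatisPort(DatanodeDetails dn, int port) {
    dn.setPort(DatanodeDetails.newPort(Port.Name.RATIS, port));
  }

  static int getRatisPort(DatanodeDetails dn) {
    // Assumes the port has been set; a real wrapper would handle absence.
    return dn.getPort(Port.Name.RATIS).getValue();
  }
}
{code}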



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-782) MR example pi job runs 5 min for 1 Map/1 Sample

2018-10-31 Thread Soumitra Sulav (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670797#comment-16670797
 ] 

Soumitra Sulav commented on HDDS-782:
-

Attached the SCM and datanode logs from one of the nodes, covering the job 
runtime.

> MR example pi job runs 5 min for 1 Map/1 Sample
> ---
>
> Key: HDDS-782
> URL: https://issues.apache.org/jira/browse/HDDS-782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Major
> Attachments: ozoneMRJob.log
>
>
> Running a hadoop-examples pi job takes 250+ seconds, whereas it generally 
> runs in a few seconds on an HDFS cluster.
> Looking at the service/job logs, there are a few _SocketTimeoutException_ 
> occurrences in between, and YARN keeps _Waiting for AsyncDispatcher to 
> drain_; the thread state is _WAITING_ for a very long interval.
>  
> Refer attached log for further details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-781) Ambari HDP NoClassDefFoundError for MR jobs

2018-10-31 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HDDS-781:

Description: 
HDP integrated with Ambari has a 
_/*usr/hdp//hadoop/mapreduce.tar.gz*_ file containing all the 
libraries needed for an MR job to run; it is copied into the YARN containers 
at execution time.

With the introduction of the ozone filesystem, the relevant jars need to be 
packaged as part of that tar. The tar is also placed as part of the _yum 
install hadoop_ components installed by Ambari during cluster setup.

During an MR job run, I faced the java.lang.*NoClassDefFoundError* exceptions 
below:

 
{code:java}
org/apache/hadoop/fs/ozone/OzoneFileSystem
org/apache/ratis/proto/RaftProtos$ReplicationLevel
org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum
{code}
 

Adding the relevant jars to the mentioned tar file resolves the exceptions.

 

Complete stacktrace for one of the *NoClassDefFoundError* exceptions:
{code:java}
2018-10-31 10:03:05,191 ERROR [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoClassDefFoundError: 
org/apache/ratis/proto/RaftProtos$ReplicationLevel
 at org.apache.hadoop.hdds.scm.ScmConfigKeys.(ScmConfigKeys.java:64)
 at org.apache.hadoop.ozone.OzoneConfigKeys.(OzoneConfigKeys.java:221)
 at org.apache.hadoop.ozone.client.OzoneBucket.(OzoneBucket.java:116)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getBucketDetails(RpcClient.java:421)
 at org.apache.hadoop.ozone.client.OzoneVolume.getBucket(OzoneVolume.java:214)
 at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:127)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
 at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:160)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.(FileOutputCommitter.java:116)
 at 
org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory.createFileOutputCommitter(PathOutputCommitterFactory.java:134)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitterFactory.createOutputCommitter(FileOutputCommitterFactory.java:35)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:338)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:552)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$3.call(MRAppMaster.java:534)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1802)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:534)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:311)
 at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1760)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
Caused by: java.lang.ClassNotFoundException: 
org.apache.ratis.proto.RaftProtos$ReplicationLevel
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 ... 29 more
2018-10-31 10:03:05,203 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
with status 1: java.lang.NoClassDefFoundError: 
org/apache/ratis/proto/RaftProtos$ReplicationLevel{code}
 

  was:
HDP integrated with Ambari has a 
_/usr/hdp//hadoop/mapreduce.tar.gz_ file containing all the 
libraries needed for an MR job to run; it is copied into the YARN containers 
at execution time.

With the introduction of the ozone filesystem, the relevant jars need to be 
packaged as part of that tar. The tar is also placed as part of the _yum 
install hadoop_ components installed by Ambari during cluster setup.

During an MR job run, I faced the below java.lang.NoClassDefFoundError exceptions:

org/apache/hadoop/fs/ozone/OzoneFileSystem

org/apache/ratis/proto/RaftProtos$ReplicationLevel

org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum

Adding the relevant jars to the mentioned tar file resolves the exceptions.

 


> Ambari HDP NoClassDefFoundError for MR jobs
> ---
>
>  

[jira] [Commented] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670789#comment-16670789
 ] 

Ajay Kumar commented on HDDS-753:
-

Patch v2 adds the ASF license header to the test class.

> Fix failure in TestSecureOzoneCluster
> -
>
> Key: HDDS-753
> URL: https://issues.apache.org/jira/browse/HDDS-753
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-753-HDDS-4.00.patch, HDDS-753-HDDS-4.01.patch, 
> HDDS-753-HDDS-4.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-782) MR example pi job runs 5 min for 1 Map/1 Sample

2018-10-31 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HDDS-782:

Attachment: ozoneMRJob.log

> MR example pi job runs 5 min for 1 Map/1 Sample
> ---
>
> Key: HDDS-782
> URL: https://issues.apache.org/jira/browse/HDDS-782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Major
> Attachments: ozoneMRJob.log
>
>
> Running a hadoop-examples pi job takes 250+ seconds, whereas it generally 
> runs in a few seconds on an HDFS cluster.
> Looking at the service/job logs, there are a few _SocketTimeoutException_ 
> occurrences in between, and YARN keeps _Waiting for AsyncDispatcher to 
> drain_; the thread state is _WAITING_ for a very long interval.
>  
> Refer attached log for further details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-753) Fix failure in TestSecureOzoneCluster

2018-10-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-753:

Attachment: HDDS-753-HDDS-4.02.patch

> Fix failure in TestSecureOzoneCluster
> -
>
> Key: HDDS-753
> URL: https://issues.apache.org/jira/browse/HDDS-753
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-753-HDDS-4.00.patch, HDDS-753-HDDS-4.01.patch, 
> HDDS-753-HDDS-4.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-782) MR example pi job runs 5 min for 1 Map/1 Sample

2018-10-31 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-782:
---

 Summary: MR example pi job runs 5 min for 1 Map/1 Sample
 Key: HDDS-782
 URL: https://issues.apache.org/jira/browse/HDDS-782
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


Running a hadoop-examples pi job takes 250+ seconds, whereas it generally runs 
in a few seconds on an HDFS cluster.

Looking at the service/job logs, there are a few _SocketTimeoutException_ 
occurrences in between, and YARN keeps _Waiting for AsyncDispatcher to drain_; 
the thread state is _WAITING_ for a very long interval.

 

Refer attached log for further details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13996) Make HttpFS' ACLs RegEx configurable

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670782#comment-16670782
 ] 

Hadoop QA commented on HDFS-13996:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
502 unchanged - 0 fixed = 507 total (was 502) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
27s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13996 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946292/HDFS-13996.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d5330d1c4bdc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk 

[jira] [Commented] (HDDS-755) ContainerInfo and ContainerReplica protobuf changes

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670748#comment-16670748
 ] 

Hudson commented on HDDS-755:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-755. ContainerInfo and ContainerReplica protobuf changes. (nanda: rev 
e4f22b08e0d1074c315680ba20d8666be21a25db)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsTestUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDataYaml.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerSet.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/InfoSubcommand.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto


> ContainerInfo and ContainerReplica protobuf changes
> ---
>
> Key: HDDS-755
> URL: https://issues.apache.org/jira/browse/HDDS-755
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-755.000.patch, HDDS-755.001.patch
>
>
> We have different classes that maintain container-related information; we can 
> consolidate them so that the code is easier to read.
> Proposal:
> In SCM (used in communication between SCM and client, and also for storing 
> in the db):
> * ContainerInfoProto
> * ContainerInfo
>  
> In Datanode: Used in 
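
To make the consolidation concrete, here is a rough, self-contained sketch of 
the SCM-side split: one serialization form for the wire and the db, and one 
immutable wrapper for in-memory use, with explicit conversions between them. 
Field and method names are illustrative, not the committed API:
{code:java}
// Hedged sketch of the proposed consolidation. In the real change the
// wire/db form is the generated ContainerInfoProto; it is modeled here
// with a plain string so the example stays self-contained.
public final class ContainerInfo {
  private final long containerId;
  private final long usedBytes;

  public ContainerInfo(long containerId, long usedBytes) {
    this.containerId = containerId;
    this.usedBytes = usedBytes;
  }

  public long getContainerId() { return containerId; }
  public long getUsedBytes() { return usedBytes; }

  // Encode to the serialization form used on the wire and in the db.
  public String toWireForm() {
    return containerId + ":" + usedBytes;
  }

  // Decode from the serialization form back into the in-memory wrapper.
  public static ContainerInfo fromWireForm(String s) {
    String[] parts = s.split(":");
    return new ContainerInfo(Long.parseLong(parts[0]),
        Long.parseLong(parts[1]));
  }
}
{code}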

[jira] [Commented] (HDDS-773) Loading ozone s3 bucket browser could be failed

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670753#comment-16670753
 ] 

Hudson commented on HDDS-773:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-773. Loading ozone s3 bucket browser could be failed. Contributed (bharat: 
rev 478b2cba0de5aadf655ac0b5a607760d46cc2a1e)
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
* (edit) hadoop-ozone/s3gateway/pom.xml
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/README.md


> Loading ozone s3 bucket browser could be failed
> ---
>
> Key: HDDS-773
> URL: https://issues.apache.org/jira/browse/HDDS-773
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-773-ozone-0.3.001.patch, 
> HDDS-773-ozone-0.3.002.patch
>
>
> The Ozone S3 gateway supports an internal bucket browser to display the 
> content of the ozone s3 buckets in the browser.
> You can check the content of any bucket using the url 
> http://localhost:9878/bucket?browser=true
> This endpoint sometimes fails with the following error:
> {code}
> 2018-10-31 11:26:55 WARN  HttpChannel:486 - //localhost:9878/blist?browser=x
> javax.servlet.ServletException: javax.servlet.ServletException: 
> org.glassfish.jersey.server.ContainerException: java.io.IOException: Stream 
> closed
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.servlet.ServletException: 
> org.glassfish.jersey.server.ContainerException: java.io.IOException: Stream 
> closed
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1610)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> 

[jira] [Commented] (HDDS-762) Fix unit test failure for TestContainerSQLCli & TestSCMMetrics

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670746#comment-16670746
 ] 

Hudson commented on HDDS-762:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-762. Fix unit test failure for TestContainerSQLCli & (aengineer: rev 
e33b61f3351c09b00717f6eef32ff7d24345d06e)
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-ozone/pom.xml
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java


> Fix unit test failure for TestContainerSQLCli & TestSCMMetrics
> --
>
> Key: HDDS-762
> URL: https://issues.apache.org/jira/browse/HDDS-762
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-762-ozone-0.3.001.patch, HDDS-762.001.patch
>
>
> TestContainerSQLCli & TestCSMMetrics are currently failing consistently 
> because of a mismatch in the metrics register name. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-751) Replace usage of Guava Optional with Java Optional

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670758#comment-16670758
 ] 

Hadoop QA commented on HDDS-751:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-751 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-751 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946212/HDDS-751.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1577/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace usage of Guava Optional with Java Optional
> --
>
> Key: HDDS-751
> URL: https://issues.apache.org/jira/browse/HDDS-751
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-751.002.patch, HDFS-751.001.patch
>
>
> Ozone and HDDS code uses {{com.google.common.base.Optional}} in multiple 
> places.
> Let's replace it with java.util.Optional since we only target JDK 8+.
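
For anyone picking this up, the mechanical shape of the migration is small; a 
before/after sketch with illustrative names:
{code:java}
import java.util.Optional;

// Illustrative migration sketch (names are hypothetical).
public class OptionalMigration {
  // Before, with com.google.common.base.Optional:
  //   Optional<String> host = Optional.fromNullable(value);
  //   if (host.isPresent()) { use(host.get()); }
  //   String effective = host.or("localhost");

  // After, with java.util.Optional:
  static void after(String value) {
    Optional<String> host = Optional.ofNullable(value);
    host.ifPresent(h -> System.out.println("host = " + h));
    // Guava's .or(default) becomes .orElse(default).
    String effective = host.orElse("localhost");
    System.out.println(effective);
  }
}
{code}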



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-712) Use x-amz-storage-class to specify replication type and replication factor

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670751#comment-16670751
 ] 

Hudson commented on HDDS-712:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-712. Use x-amz-storage-class to specify replication type and (elek: rev 
ecac351aac1702194c56743ced5a66242643f28c)
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectdelete.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectputget.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectmultidelete.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPutObject.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3StorageType.java
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/package-info.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java


> Use x-amz-storage-class to specify replication type and replication factor
> --
>
> Key: HDDS-712
> URL: https://issues.apache.org/jira/browse/HDDS-712
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-712.00.patch, HDDS-712.01.patch
>
>
>  
> This was a comment from [~anu] on HDDS-693:
> @DefaultValue("STAND_ALONE") @QueryParam("replicationType")
> Just an opportunistic comment, not part of this patch: this query param will 
> not be sent by S3, hence it will always default to Stand_Alone. At some 
> point we need to move to RATIS; perhaps we have to read this via 
> x-amz-storage-class.
> *I propose the below solution for this:*
> Currently, in code we take the query params replicationType and 
> replicationFactor and default them to Stand_Alone and 1, but these query 
> params cannot be passed from the aws cli.
> We want to use the x-amz-storage-class header to pass the values. In S3, if 
> you don't specify this header, it defaults to Standard. Correspondingly, in 
> Ozone over S3 we want to default to RATIS with replication factor three.
> We can map Standard to RATIS and REDUCED_REDUNDANCY to Stand_Alone.
>  
> There are two more values, STANDARD_IA and ONEZONE_IA; how we want to use 
> them needs to be considered later. Initially we are considering only 
> Standard and Reduced_Redundancy.
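
The committed patch adds an S3StorageType class for this; purely as an 
illustration of the mapping described above (not the patch itself), the shape 
could be:
{code:java}
// Illustrative mapping for x-amz-storage-class handling:
// STANDARD -> RATIS with factor 3 (the new default),
// REDUCED_REDUNDANCY -> STAND_ALONE with factor 1.
enum S3StorageType {
  STANDARD("RATIS", 3),
  REDUCED_REDUNDANCY("STAND_ALONE", 1);

  private final String replicationType;
  private final int replicationFactor;

  S3StorageType(String replicationType, int replicationFactor) {
    this.replicationType = replicationType;
    this.replicationFactor = replicationFactor;
  }

  // An absent header behaves like STANDARD, matching S3 semantics;
  // an unknown value throws IllegalArgumentException via valueOf.
  static S3StorageType fromHeader(String headerValue) {
    return headerValue == null || headerValue.isEmpty()
        ? STANDARD : valueOf(headerValue);
  }

  String getReplicationType() { return replicationType; }
  int getReplicationFactor() { return replicationFactor; }
}
{code}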



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-677) Create documentation for s3 gateway to the docs

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670755#comment-16670755
 ] 

Hudson commented on HDDS-677:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-677. Create documentation for s3 gateway to the docs. Contributed (bharat: 
rev 6668c19dafde530c43ccacb23de11455ae1813b5)
* (add) hadoop-ozone/docs/content/S3.md


> Create documentation for s3 gateway to the docs
> ---
>
> Key: HDDS-677
> URL: https://issues.apache.org/jira/browse/HDDS-677
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Critical
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-677.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-754) VolumeInfo#getScmUsed throws NPE

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670747#comment-16670747
 ] 

Hudson commented on HDDS-754:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-754. VolumeInfo#getScmUsed throws NPE. Contributed by Hanisha (aengineer: 
rev 773f0d1519715e3ddf77c139998cc12d7447da66)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestHddsVolume.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java


> VolumeInfo#getScmUsed throws NPE
> 
>
> Key: HDDS-754
> URL: https://issues.apache.org/jira/browse/HDDS-754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-754.001.patch, HDDS-754.002.patch, 
> HDDS-754.003.patch, HDDS-754.004.patch
>
>
> The failure can be seen at the following jenkins run
> https://builds.apache.org/job/PreCommit-HDDS-Build/1540/testReport/org.apache.hadoop.hdds.scm.pipeline/TestNodeFailure/testPipelineFail/
> {code}
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(50)) - Execution exception 
> when running task in Datanode ReportManager Thread - 3
> 2018-10-29 13:44:11,984 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 3: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:379)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:225)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
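
Judging from the stack trace alone, the report thread reads a usage value 
that has not been initialized yet. A defensive sketch under that assumption 
(illustrative names, not the committed fix):
{code:java}
// Sketch only: guard the not-yet-initialized usage value instead of
// dereferencing it unconditionally in the report thread.
final class VolumeUsageSketch {
  private volatile Long scmUsed; // null until the first usage scan completes

  long getScmUsed() {
    Long used = scmUsed;
    if (used == null) {
      // Report zero (or defer the report) until usage is known, rather
      // than letting the Datanode ReportManager thread die on an NPE.
      return 0L;
    }
    return used;
  }

  void setScmUsed(long used) {
    this.scmUsed = used;
  }
}
{code}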



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-781) Ambari HDP NoClassDefFoundError for MR jobs

2018-10-31 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-781:
---

 Summary: Ambari HDP NoClassDefFoundError for MR jobs
 Key: HDDS-781
 URL: https://issues.apache.org/jira/browse/HDDS-781
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


HDP integrated with Ambari has a 
_/usr/hdp//hadoop/mapreduce.tar.gz_ file containing all the 
libraries needed for an MR job to run; it is copied into the YARN containers 
at execution time.

With the introduction of the ozone filesystem, the relevant jars need to be 
packaged as part of that tar. The tar is also placed as part of the _yum 
install hadoop_ components installed by Ambari during cluster setup.

During an MR job run, I faced the below java.lang.NoClassDefFoundError exceptions:

org/apache/hadoop/fs/ozone/OzoneFileSystem

org/apache/ratis/proto/RaftProtos$ReplicationLevel

org/apache/ratis/thirdparty/com/google/protobuf/ProtocolMessageEnum

Adding the relevant jars to the mentioned tar file resolves the exceptions.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13942) [JDK10] Fix javadoc errors in hadoop-hdfs module

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670749#comment-16670749
 ] 

Hudson commented on HDFS-13942:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDFS-13942. [JDK10] Fix javadoc errors in hadoop-hdfs module. (aajisaka: rev 
fac9f91b2944cee641049fffcafa6b65e0cf68f2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerDataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/XMLUtils.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StartupProgressView.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/OutlierDetector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/NameDistributionVisitor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/GreedyPlanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FileIoProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* (edit) 

[jira] [Commented] (HDDS-759) Create config settings for SCM and OM DB directories

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670754#comment-16670754
 ] 

Hudson commented on HDDS-759:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-759. Create config settings for SCM and OM DB directories. (arp: rev 
08bb0362e0c57f562e2f2e366cba725649d1d9c8)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/om/TestOmSQLCli.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMStorage.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/SCMTestUtils.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestHddsServerUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMStorage.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/TestOmUtils.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java


> Create config settings for SCM and OM DB directories
> 
>
> Key: HDDS-759
> URL: https://issues.apache.org/jira/browse/HDDS-759
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-759.01.patch, HDDS-759.02.patch
>
>
> Currently SCM, OM and DN all use {{ozone.metadata.dirs}} for storing 
> metadata. 
> Looking more closely, it appears that SCM and OM have no option to choose 
> separate locations. We should provide custom config settings. For most 
> production clusters, admins will want to carefully choose where they place OM 
> and SCM metadata, similar to how they choose locations for NN metadata.
> To avoid 
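
A hedged sketch of how such a setting could resolve, preferring the 
service-specific key and falling back to the shared one; {{ozone.metadata.dirs}} 
is from the issue text, while the key {{ozone.scm.db.dirs}} and the method 
shape are assumptions for illustration:
{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;

// Sketch only: prefer the SCM-specific db dir, else fall back to the
// shared metadata dirs. The key "ozone.scm.db.dirs" is an assumption.
final class DbDirResolver {
  static File scmDbDir(Configuration conf) {
    String dir = conf.get("ozone.scm.db.dirs");
    if (dir == null || dir.isEmpty()) {
      dir = conf.get("ozone.metadata.dirs");
    }
    if (dir == null || dir.isEmpty()) {
      throw new IllegalArgumentException(
          "Neither ozone.scm.db.dirs nor ozone.metadata.dirs is set");
    }
    return new File(dir);
  }
}
{code}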

[jira] [Commented] (HDDS-659) Implement pagination in GET bucket (object list) endpoint

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670752#comment-16670752
 ] 

Hudson commented on HDDS-659:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDDS-659. Implement pagination in GET bucket (object list) endpoint. (elek: rev 
b519f3f2a0ae960391ce7bff59f1fdd21a22e030)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ListObjectResponse.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestBucketGet.java


> Implement pagination in GET bucket (object list) endpoint
> -
>
> Key: HDDS-659
> URL: https://issues.apache.org/jira/browse/HDDS-659
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Attachments: HDDS-659.00-WIP.patch, HDDS-659.01.patch, 
> HDDS-659.02.patch, HDDS-659.03.patch
>
>
> The current implementation always returns all the elements. We need to 
> support paging via the following parameters:
>  * {{start-after}}
>  * {{continuation-token}}
>  * {{max-keys}}
>  
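
For context, the paging contract behind these parameters can be sketched over 
a sorted key list: the token is the exclusive resume point and max-keys caps 
the page size (a self-contained sketch, not the patch):
{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Sketch of S3-style paging: skip keys up to and including the resume
// point (start-after or continuation-token), then cap at max-keys. The
// response is truncated when more keys remain, and the last returned key
// becomes the next continuation token.
final class ListPageSketch {
  static List<String> page(List<String> sortedKeys, String resumeAfter,
      int maxKeys) {
    return sortedKeys.stream()
        .filter(k -> resumeAfter == null || k.compareTo(resumeAfter) > 0)
        .limit(maxKeys)
        .collect(Collectors.toList());
  }
}
{code}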



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14033) [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670750#comment-16670750
 ] 

Hudson commented on HDFS-14033:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HDFS-14033. [libhdfs++] Disable libhdfs++ build on systems that do not (sunilg: 
rev 9c438abe52d4ee0b25345a4b7ec1697dd66f85e9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
* (edit) hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt


> [libhdfs++] Disable libhdfs++ build on systems that do not support 
> thread_local
> ---------------------------------------------------------------
>
> Key: HDFS-14033
> URL: https://issues.apache.org/jira/browse/HDFS-14033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>Priority: Major
> Fix For: 3.2.0, 3.3.0
>
> Attachments: HDFS-14033.000.patch, HDFS-14033.001.patch
>
>
> In order to still be able to build Hadoop on older systems (such as RHEL 6), 
> we need to disable the libhdfs++ build on systems that do not support 
> thread_local. We should also emit a warning that libhdfs++ was not built.






[jira] [Commented] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670728#comment-16670728
 ] 

Hadoop QA commented on HDFS-14024:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
51s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:3e39f4f |
| JIRA Issue | HDFS-14024 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946243/HDFS-14024-HDFS-13891.0.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a47a7de46d6d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 39114c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25396/testReport/ |
| Max. process+thread count | 1038 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25396/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDDS-780) ozone client daemon start/stop fails if triggered from different host

2018-10-31 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HDDS-780:

Summary: ozone client daemon start/stop fails if triggered from different 
host  (was: ozone client daemon restart fails if triggered from different host)

> ozone client daemon start/stop fails if triggered from different host
> ----------------------------------------------------------------------
>
> Key: HDDS-780
> URL: https://issues.apache.org/jira/browse/HDDS-780
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Priority: Major
>
> Ozone client operations throw a *java.net.BindException: Cannot assign 
> requested address* if the OM and SCM are not located on the same node from 
> which the CLI command is run.
> Command triggered from a node which has SCM but not OM:
> {code:java}
> ozone --daemon start om{code}
> Complete stacktrace of Exception :
> {code:java}
> 2018-10-30 10:17:22,675 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
> callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 
> 2000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: 
> false.
> 2018-10-30 10:17:22,683 ERROR org.apache.hadoop.ozone.om.OzoneManager: Failed 
> to start the OzoneManager.
> java.net.BindException: Problem binding to 
> [ctr-e138-1518143905142-552728-01-03.hwx.site:9889] 
> java.net.BindException: Cannot assign requested address; For more details 
> see: http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736)
> at org.apache.hadoop.ipc.Server.bind(Server.java:566)
> at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
> at 
> org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:241)
> at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:156)
> at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:339)
> at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:265)
> Caused by: java.net.BindException: Cannot assign requested address
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.apache.hadoop.ipc.Server.bind(Server.java:549)
> ... 10 more
> 2018-10-30 10:17:22,687 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.net.BindException: Problem binding to 
> [ctr-e138-1518143905142-552728-01-03.hwx.site:9889] 
> java.net.BindException: Cannot assign requested address; For more details 
> see: http://wiki.apache.org/hadoop/BindException
> {code}
> The same applies to daemon stop operations: no exception is thrown, but the 
> daemon is not stopped either.
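A self-contained illustration of the failure mode (the hostname below is a
placeholder for an address that resolves but is not local to the node, as
happens when the OM's configured RPC address is resolved on a different host):
binding a socket to a non-local address fails with the same error, while the
wildcard address binds anywhere.
{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public final class BindCheck {
  public static void main(String[] args) throws IOException {
    // Placeholder host: assumed to resolve to an address that is not
    // configured on any local interface of this node.
    try (ServerSocket socket = new ServerSocket()) {
      socket.bind(new InetSocketAddress("om-host.example.com", 9889));
    } catch (IOException e) {
      // Expected here: java.net.BindException: Cannot assign requested address
      System.err.println("Bind failed: " + e);
    }
    // The wildcard address is local on every node; port 0 picks a free port.
    try (ServerSocket socket = new ServerSocket()) {
      socket.bind(new InetSocketAddress("0.0.0.0", 0));
      System.out.println("Bound to " + socket.getLocalSocketAddress());
    }
  }
}
{code}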






[jira] [Created] (HDDS-780) ozone client daemon restart fails if triggered from different host

2018-10-31 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-780:
-------------------------------

 Summary: ozone client daemon restart fails if triggered from 
different host
 Key: HDDS-780
 URL: https://issues.apache.org/jira/browse/HDDS-780
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


Ozone client operations throw a *java.net.BindException: Cannot assign requested 
address* if the OM and SCM are not located on the same node from which the 
CLI command is run.

Command triggered from a node which has SCM but not OM:
{code:java}
ozone --daemon start om{code}
Complete stacktrace of Exception :
{code:java}
2018-10-30 10:17:22,675 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 2000, 
scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2018-10-30 10:17:22,683 ERROR org.apache.hadoop.ozone.om.OzoneManager: Failed 
to start the OzoneManager.
java.net.BindException: Problem binding to 
[ctr-e138-1518143905142-552728-01-03.hwx.site:9889] java.net.BindException: 
Cannot assign requested address; For more details see: 
http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:736)
at org.apache.hadoop.ipc.Server.bind(Server.java:566)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
at org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:241)
at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:156)
at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:339)
at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:265)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:549)
... 10 more
2018-10-30 10:17:22,687 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: java.net.BindException: Problem binding to 
[ctr-e138-1518143905142-552728-01-03.hwx.site:9889] java.net.BindException: 
Cannot assign requested address; For more details see: 
http://wiki.apache.org/hadoop/BindException
{code}
The same applies to daemon stop operations: no exception is thrown, but the 
daemon is not stopped either.






[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670713#comment-16670713
 ] 

Hadoop QA commented on HDFS-14035:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
46s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946240/HDFS-14035-HDFS-12943.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2a99fc166a90 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25398/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25398/testReport/ |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670693#comment-16670693
 ] 

Hadoop QA commented on HDFS-13507:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13507 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13507 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921383/HDFS-13507.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25401/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Remove update functionality from routeradmin's add cmd
> -----------------------------------------------------------
>
> Key: HDFS-13507
> URL: https://issues.apache.org/jira/browse/HDFS-13507
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
>  Labels: incompatible
> Attachments: HDFS-13507.000.patch, HDFS-13507.001.patch, 
> HDFS-13507.002.patch
>
>
> Following up on the discussion in HDFS-13326: we should remove the "update" 
> functionality from routeradmin's add cmd, to make it consistent with the RPC 
> calls.
> Note that this is an incompatible change.
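A minimal sketch of the intended semantics after the change (class and method
names are illustrative, not the actual RouterAdmin code): add refuses to touch
an existing mount point instead of silently updating it.
{code:java}
import java.util.HashMap;
import java.util.Map;

public final class MountTableSketch {
  private final Map<String, String> mounts = new HashMap<>();

  /** Add-only: never overwrites an existing entry, matching the RPC call. */
  public boolean add(String src, String dest) {
    if (mounts.containsKey(src)) {
      System.err.println("Mount point " + src
          + " already exists; use the update command instead.");
      return false;
    }
    mounts.put(src, dest);
    return true;
  }
}
{code}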






[jira] [Updated] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-779:
-----------------------------------
Status: Patch Available  (was: In Progress)

> Fix ASF License violation in S3Consts and S3Utils
> -------------------------------------------------
>
> Key: HDDS-779
> URL: https://issues.apache.org/jira/browse/HDDS-779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.3.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-779.001.patch
>
>
> Spotted this issue during one of the Jenkins runs for HDDS-120.
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1569/artifact/out/patch-asflicense-problems.txt]
>  
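For reference, the asflicense check looks for the standard Apache license header
at the top of each source file; in a Java file it reads:
{code:java}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.  See the License for the specific language governing
 * permissions and limitations under the License.
 */
{code}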






[jira] [Updated] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-779:
-----------------------------------
Attachment: HDDS-779.001.patch

> Fix ASF License violation in S3Consts and S3Utils
> -------------------------------------------------
>
> Key: HDDS-779
> URL: https://issues.apache.org/jira/browse/HDDS-779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.3.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-779.001.patch
>
>
> Spotted this issue during one of the Jenkins runs for HDDS-120.
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1569/artifact/out/patch-asflicense-problems.txt]
>  






[jira] [Updated] (HDDS-779) Fix ASF License violation in S3Consts and S3Utils

2018-10-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-779:
-----------------------------------
Component/s: S3

> Fix ASF License violation in S3Consts and S3Utils
> -------------------------------------------------
>
> Key: HDDS-779
> URL: https://issues.apache.org/jira/browse/HDDS-779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.3.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: HDDS-779.001.patch
>
>
> Spotted this issue during one of the Jenkins runs for HDDS-120.
> [https://builds.apache.org/job/PreCommit-HDDS-Build/1569/artifact/out/patch-asflicense-problems.txt]
>  






[jira] [Updated] (HDFS-13752) fs.Path stores file path in java.net.URI causes big memory waste

2018-10-31 Thread Barnabas Maidics (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barnabas Maidics updated HDFS-13752:

Attachment: (was: HDFS-13752 - HDFS benchmark .pdf)

> fs.Path stores file path in java.net.URI causes big memory waste
> ----------------------------------------------------------------
>
> Key: HDFS-13752
> URL: https://issues.apache.org/jira/browse/HDFS-13752
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.6
> Environment: Hive 2.1.1 and hadoop 2.7.6 
>Reporter: Barnabas Maidics
>Priority: Major
> Attachments: HDFS-13752.001.patch, HDFS-13752.002.patch, 
> HDFS-13752.003.patch, HDFSbenchmark.pdf, Screen Shot 2018-07-20 at 
> 11.12.38.png, heapdump-10partitions.html, measurement.pdf
>
>
> I was looking at HiveServer2 memory usage, and a large share of it was 
> due to org.apache.hadoop.fs.Path, which stores file paths in a 
> java.net.URI object. The URI implementation stores the same string in 3 
> different objects (see the attached image). In Hive, when there are many 
> partitions, this causes high memory usage. In my particular case 42% of memory 
> was used by java.net.URI, so it could be reduced to 14%. 
> I wonder whether the community is open to replacing it with a more 
> memory-efficient implementation, and what else should be considered here? It 
> could be a big memory improvement for Hadoop and for Hive as well.
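A minimal sketch of the duplication and one possible compact representation
(CompactPath illustrates the idea only; it is not the approach taken in the
attached patches):
{code:java}
import java.net.URI;

public final class PathMemoryDemo {
  /** Stores the path once and derives components on demand. */
  static final class CompactPath {
    private final String path;
    CompactPath(String path) { this.path = path; }
    String getName() {
      int slash = path.lastIndexOf('/');
      return slash < 0 ? path : path.substring(slash + 1);
    }
  }

  public static void main(String[] args) {
    URI uri = URI.create("hdfs://nn:8020/warehouse/db/table/part=1/data.orc");
    // URI keeps overlapping character data in several String fields:
    System.out.println(uri.toString());              // the full string
    System.out.println(uri.getSchemeSpecificPart()); // duplicates most of it
    System.out.println(uri.getPath());               // duplicates the path
    System.out.println(new CompactPath(uri.getPath()).getName());
  }
}
{code}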





