Re: branch-2.9.2 is almost closed for commit

2018-11-12 Thread Akira Ajisaka
Recently I hit the gpg-agent cache TTL issue while creating 2.9.2 RC0,
and the issue was fixed by HADOOP-15923.
I'll create RC0 and start the vote by the end of this week. Sorry for the delay.

Thanks,
Akira
Tue, Nov 6, 2018, 13:51 Akira Ajisaka:
>
> Hi folks,
>
> Now there is only one critical issue targeted for 2.9.2 (YARN-8233), so
> I'd like to close branch-2.9.2 except for YARN-8233.
> I'll create RC0 as soon as YARN-8233 is committed to branch-2.9.2.
>
> Thanks,
> Akira

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-734) Remove create container logic from OzoneClient

2018-11-12 Thread Lokesh Jain (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain resolved HDDS-734.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove create container logic from OzoneClient
> --
>
> Key: HDDS-734
> URL: https://issues.apache.org/jira/browse/HDDS-734
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> After HDDS-733, the container is created as part of the first chunk
> write, so we no longer need explicit container creation code in
> {{OzoneClient}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Resolved] (HDDS-735) Remove ALLOCATED and CREATING state from ContainerStateManager

2018-11-12 Thread Lokesh Jain (JIRA)


 [ https://issues.apache.org/jira/browse/HDDS-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain resolved HDDS-735.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove ALLOCATED and CREATING state from ContainerStateManager
> --
>
> Key: HDDS-735
> URL: https://issues.apache.org/jira/browse/HDDS-735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>
> After HDDS-733 and HDDS-734, we don't need the ALLOCATED and CREATING states
> for containers in SCM. The container will move to the OPEN state as soon as
> it is allocated in SCM. Since container creation happens as part of the first
> chunk write and the container creation operation in the datanode is
> idempotent, we don't have to worry about giving out the same container to
> multiple clients as soon as it is allocated.
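The idempotency argument above can be sketched as follows. This is a hypothetical simplified example, not the actual HDDS container code; {{ContainerStore}}, {{createIfAbsent}} and the {{State}} enum are illustrative names.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: an idempotent container-create operation. Because
// creating an already-existing container is a no-op, the same container
// can be handed to multiple clients immediately after allocation.
class ContainerStore {
    enum State { OPEN }

    private final ConcurrentMap<Long, State> containers = new ConcurrentHashMap<>();

    // The first chunk write from any client may attempt creation; repeated
    // calls for the same id always yield the same OPEN container.
    State createIfAbsent(long containerId) {
        return containers.computeIfAbsent(containerId, id -> State.OPEN);
    }
}
```

Two clients racing to write the first chunk both observe the container in the OPEN state, which is why the intermediate ALLOCATED and CREATING states become unnecessary.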







[jira] [Created] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)
CR Hota created HDFS-14070:
--

 Summary: Refactor NameNodeWebHdfsMethods to allow better 
extensibility
 Key: HDFS-14070
 URL: https://issues.apache.org/jira/browse/HDFS-14070
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: CR Hota
Assignee: CR Hota


Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
cancelDelegationToken and generateDelegationTokens should be extensible so 
that Router can provide its own implementations.
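The requested refactoring pattern can be sketched as follows. The class and method names echo the JIRA, but the signatures are hypothetical simplifications of the real WebHDFS code.

```java
// Hypothetical sketch: widen the token methods' visibility so a subclass
// can override them.
class NamenodeWebHdfsMethods {
    protected String renewDelegationToken(String token) {
        return "namenode-renewed:" + token;
    }
}

// Router supplies its own implementation by overriding the now-extensible method.
class RouterWebHdfsMethods extends NamenodeWebHdfsMethods {
    @Override
    protected String renewDelegationToken(String token) {
        return "router-renewed:" + token;
    }
}
```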







[jira] [Created] (HDFS-14069) Add BlockIds to JMX info

2018-11-12 Thread Danny Becker (JIRA)
Danny Becker created HDFS-14069:
---

 Summary: Add BlockIds to JMX info
 Key: HDFS-14069
 URL: https://issues.apache.org/jira/browse/HDFS-14069
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs, namenode
Reporter: Danny Becker










[jira] [Created] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-832:
-

 Summary: Docs folder is missing from the distribution package
 Key: HDDS-832
 URL: https://issues.apache.org/jira/browse/HDDS-832
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


After the 0.2.1 release, the dist package creation (together with the 
classpath generation) was changed. 

Problems: 
1. The /docs folder is missing from the dist package
2. /docs is missing from the SCM/OM UI







[jira] [Created] (HDFS-14068) Allow manual transition from Standby to Observer

2018-11-12 Thread Plamen Jeliazkov (JIRA)
Plamen Jeliazkov created HDFS-14068:
---

 Summary: Allow manual transition from Standby to Observer
 Key: HDFS-14068
 URL: https://issues.apache.org/jira/browse/HDFS-14068
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Plamen Jeliazkov


With automatic failover enabled, I am unable to make use of the new 
transitionToObserver HAAdmin command. This JIRA is to remove the limitation and 
allow manual transition between Standby and Observer.







[jira] [Resolved] (HDFS-14068) Allow manual transition from Standby to Observer

2018-11-12 Thread Plamen Jeliazkov (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Plamen Jeliazkov resolved HDFS-14068.
-
Resolution: Duplicate

> Allow manual transition from Standby to Observer
> 
>
> Key: HDFS-14068
> URL: https://issues.apache.org/jira/browse/HDFS-14068
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Plamen Jeliazkov
>Priority: Major
>
> With automatic failover enabled, I am unable to make use of the new 
> transitionToObserver HAAdmin command. This JIRA is to remove the limitation 
> and allow manual transition between Standby and Observer.







[jira] [Created] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Chao Sun (JIRA)
Chao Sun created HDFS-14067:
---

 Summary: Allow manual failover between standby and observer
 Key: HDFS-14067
 URL: https://issues.apache.org/jira/browse/HDFS-14067
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


Currently, if automatic failover is enabled in an HA environment, the 
transition from standby to observer is blocked:
{code}
[hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
Automatic failover is enabled for NameNode at 
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state.
If you are very sure you know what you are doing, please
specify the --forcemanual flag.
{code}

We should allow manual transition between standby and observer in this case.
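One way to express the proposed relaxation is a transition policy like the following. This is a hypothetical sketch, not the actual HAAdmin logic; {{HAState}} and {{TransitionPolicy}} are illustrative names.

```java
// Hypothetical sketch: with automatic failover on, still allow
// standby <-> observer transitions, since an observer never becomes
// active and so cannot cause a split-brain.
enum HAState { ACTIVE, STANDBY, OBSERVER }

class TransitionPolicy {
    boolean allow(HAState from, HAState to, boolean autoFailover, boolean forceManual) {
        if (!autoFailover || forceManual) {
            return true; // manual HA management is permitted
        }
        // Proposed: standby <-> observer is safe even under automatic failover.
        return (from == HAState.STANDBY && to == HAState.OBSERVER)
            || (from == HAState.OBSERVER && to == HAState.STANDBY);
    }
}
```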







[jira] [Created] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-831:


 Summary: TestOzoneShell in integration-test is flaky
 Key: HDDS-831
 URL: https://issues.apache.org/jira/browse/HDDS-831
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt







[jira] [Created] (HDDS-830) Datanode should not start XceiverServerRatis before getting version information from SCM

2018-11-12 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-830:


 Summary: Datanode should not start XceiverServerRatis before 
getting version information from SCM
 Key: HDDS-830
 URL: https://issues.apache.org/jira/browse/HDDS-830
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Nanda kumar


If a datanode restarts quickly, before SCM detects the restart, it will rejoin 
the Ratis ring (existing pipeline). Since SCM didn't detect the restart, the 
pipeline is not closed. There is now a time gap between when the datanode 
starts and when it gets the version information from SCM. During this window, 
the SCM ID in the datanode is not set (null). If a client tries to use this 
pipeline during that window, the container state machine will throw 
{{java.lang.NullPointerException: scmId cannot be null}}. This causes 
{{RaftLogWorker}} to terminate, resulting in a datanode crash.

{code}
2018-11-12 19:45:31,811 ERROR storage.RaftLogWorker 
(ExitUtils.java:terminate(86)) - Terminating with exit status 1: 
407fd181-2ff7-4651-9a47-a0927ede4c51-RaftLogWorker failed.
java.io.IOException: java.lang.NullPointerException: scmId cannot be null
  at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
  at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
  at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:83)
  at 
org.apache.ratis.server.storage.RaftLogWorker$StateMachineDataPolicy.getFromFuture(RaftLogWorker.java:76)
  at 
org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:344)
  at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:216)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: scmId cannot be null
  at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
  at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:106)
  at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:242)
  at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:165)
  at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:206)
  at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:124)
  at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:274)
  at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:280)
  at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$1(ContainerStateMachine.java:301)
  at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  ... 1 more
{code}
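The fix direction suggested by the summary can be sketched as follows. This is a hypothetical simplification, not the actual datanode code; the class and method names are illustrative.

```java
// Hypothetical sketch: defer starting XceiverServerRatis until the SCM
// version response has supplied the scmId, so a rejoined pipeline can
// never reach a datanode with a null scmId.
class DatanodeStartupGate {
    private volatile String scmId; // null until the SCM version response arrives
    private boolean ratisServerStarted;

    void onVersionResponse(String scmIdFromScm) {
        this.scmId = scmIdFromScm;
        startRatisServerIfReady();
    }

    private synchronized void startRatisServerIfReady() {
        if (scmId != null && !ratisServerStarted) {
            ratisServerStarted = true; // XceiverServerRatis would start here
        }
    }

    boolean isRatisServerStarted() {
        return ratisServerStarted;
    }
}
```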







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/

[Nov 11, 2018 7:12:53 PM] (botong) YARN-8933. [AMRMProxy] Fix potential empty 
fields in allocation




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [176K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [332K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/955/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
  [104K]
   

[jira] [Created] (HDFS-14066) upgradeDomain: Datanode stopped when re-configuring the datanodes in the upgrade domain script file

2018-11-12 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14066:


 Summary: upgradeDomain: Datanode stopped when re-configuring the 
datanodes in the upgrade domain script file
 Key: HDFS-14066
 URL: https://issues.apache.org/jira/browse/HDFS-14066
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{{Steps:-}}

{noformat}
1. Create 3 upgrade domain groups with 2 datanodes in each upgrade domain:
   UD1 -> DN1, DN2
   UD2 -> DN3, DN4
   UD3 -> DN5, DN6
2. Remove DN4 and DN6 from the JSON script file
3. Verify the status of DN4 and DN6
4. Add those 2 datanodes back to their respective upgrade domains
5. Verify the status of DN4 and DN6 again
{noformat}

{{Actual Output:-}}
{noformat}
Datanode status is shown as stopped on the Datanode UI page, but the Datanode
service is running on that node
{noformat}

{{Expected Output:-}}
{noformat}
The Datanodes should be in the Running state, and when we re-configure those 2
Datanodes the change should take effect and work properly
{noformat}







[jira] [Created] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14065:
---

 Summary: Failed Storage Locations shows nothing in the Datanode 
Volume Failures
 Key: HDFS-14065
 URL: https://issues.apache.org/jira/browse/HDFS-14065
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


The failed storage locations in the *DataNode Volume Failure* UI show nothing, 
despite there being failed storages.



