[jira] [Created] (HDFS-14736) Starting the datanode unsuccessfully because of the corrupted sub dir in the data directory

2019-08-14 Thread liying (JIRA)
liying created HDFS-14736:
-

 Summary: Starting the datanode unsuccessfully because of the 
corrupted sub dir in the data directory
 Key: HDFS-14736
 URL: https://issues.apache.org/jira/browse/HDFS-14736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.2
Reporter: liying
Assignee: liying


If a subdirectory in the datanode data directory is corrupted for some reason 
(for example, after a sudden power failure in the computer room), the datanode 
fails to restart. The error information in the datanode log is as follows:

2019-08-09 10:01:06,703 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data06/block/current...
2019-08-09 10:01:06,703 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data07/block/current...
2019-08-09 10:01:06,704 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data08/block/current...
2019-08-09 10:01:06,704 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data09/block/current...
2019-08-09 10:01:06,704 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data10/block/current...
2019-08-09 10:01:06,704 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data11/block/current...
2019-08-09 10:01:06,704 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-518068284-10.252.12.3-1523416911512 on volume /data12/block/current...
2019-08-09 10:01:06,707 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Caught exception while scanning /data05/block/current. Will throw later.
*java.io.IOException: Mkdirs failed to create /data05/block/current/BP-518068284-10.252.12.3-1523416911512/tmp*
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:138)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:837)
 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:406)
2019-08-09 10:01:15,330 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-518068284-10.252.12.3-1523416911512 on /data06/block/current: 8627ms
2019-08-09 10:01:15,348 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-518068284-10.252.12.3-1523416911512 on /data11/block/current: 8645ms
2019-08-09 10:01:15,352 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-518068284-10.252.12.3-1523416911512 on /data01/block/current: 8649ms
2019-08-09 10:01:15,361 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-518068284-10.252.12.3-1523416911512 on /data12/block/current: 8658ms
2019-08-09 10:01:15,362 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-518068284-10.252.12.3-1523416911512 on /data03/block/current: 8659ms

 

 

I checked the code of the whole process and found some suspicious code in 
#DataNode# and #FsVolumeImpl#, as follows:
{code:java}
void initBlockPool(BPOfferService bpos) throws IOException {
  NamespaceInfo nsInfo = bpos.getNamespaceInfo();
  if (nsInfo == null) {
    throw new IOException("NamespaceInfo not found: Block pool " + bpos
        + " should have retrieved namespace info before initBlockPool.");
  }
  
  setClusterId(nsInfo.clusterID, nsInfo.getBlockPoolID());

  // Register the new block pool with the BP manager.
  blockPoolManager.addBlockPool(bpos);
  
  // In the case that this is the first block pool to connect, initialize
  // the dataset, block scanners, etc.
  initStorage(nsInfo);

  // Exclude failed disks before initializing the block pools to avoid startup
  // failures.
  checkDiskError();

  data.addBlockPool(nsInfo.getBlockPoolID(), conf);
  blockScanner.enableBlockPoolId(bpos.getBlockPoolId());
  initDirectoryScanner(conf);
}
{code}
{code:java}
void checkDirs() throws DiskErrorException {
  // TODO:FEDERATION valid synchronization
  for(BlockPoolSlice s : bpSlices.values()) {
    s.checkDirs();
  }
}
{code}
During datanode restart, BPServiceActor invokes initBlockPool to initialize the 
data storage for this block pool. It will execute checkDirs
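
For illustration only (this is not the actual patch), a self-contained sketch of probing each volume's current directory up front, mirroring the mkdirs call that fails in the BlockPoolSlice constructor, so that a volume with a corrupted subdirectory could be excluded before addBlockPool; the class and method names are hypothetical:
{code:java}
// Illustrative sketch only, not DataNode/FsVolumeImpl code: probe each
// volume's current directory before block pool initialization, so a volume
// whose <bpid>/tmp directory cannot be created is excluded up front instead
// of failing inside BlockPoolSlice's constructor.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

class VolumeProbeSketch {
  /** Return the volumes whose current/<bpid>/tmp directory exists or can be created. */
  static List<File> usableVolumes(List<File> volumeCurrentDirs, String bpid) {
    List<File> usable = new ArrayList<>();
    for (File current : volumeCurrentDirs) {
      File tmp = new File(new File(current, bpid), "tmp");
      // Mirrors the mkdirs call that throws in the log above.
      if (tmp.isDirectory() || tmp.mkdirs()) {
        usable.add(current);
      } else {
        System.err.println("Excluding volume with unusable directory: " + tmp);
      }
    }
    return usable;
  }
}
{code}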

[jira] [Created] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-14 Thread Tatyana Alexeyev (JIRA)
Tatyana Alexeyev created HDFS-14735:
---

 Summary: File could only be replicated to 0 nodes instead of 
minReplication (=1)
 Key: HDFS-14735
 URL: https://issues.apache.org/jira/browse/HDFS-14735
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tatyana Alexeyev


Hello, I have an intermittent error when running my EMR Hadoop cluster:

"Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/sphdadm/_sqoop/00501bd7b05e4182b5006b9d51 
bafb7f_f405b2f3/_temporary/1/_temporary/attempt_1565136887564_20057_m_00_0/part-m-0.snappy
 could only be replicated to 0 nodes instead of minReplication (=1). There are 
5 datanode(s) running and no node(s) are excluded in this operation."

I am running Hadoop version:

[sphdadm@ip-10-6-15-108 hadoop]$ hadoop version

Hadoop 2.8.5-amzn-4

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1970) Upgrade Bootstrap and jQuery versions of Ozone web UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1970:


 Summary: Upgrade Bootstrap and jQuery versions of Ozone web UIs 
 Key: HDDS-1970
 URL: https://issues.apache.org/jira/browse/HDDS-1970
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: website
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


The current versions of Bootstrap and jQuery used by the Ozone web UIs are 
reported to have known medium-severity CVEs and need to be updated to the 
latest versions.

 

I suggest updating Bootstrap and jQuery to 3.4.1.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14734) [FGL] Introduce Latch Lock to replace Namesystem global lock.

2019-08-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14734:
--

 Summary: [FGL] Introduce Latch Lock to replace Namesystem global 
lock.
 Key: HDFS-14734
 URL: https://issues.apache.org/jira/browse/HDFS-14734
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Konstantin Shvachko


The concept of a Latch Lock associates a separate lock with each partition of 
PartitionedGSet.
An order of acquiring locks on the partitions must be defined, since some 
operations will require holding locks on multiple partitions.
It is preferable to retain the global lock for some operations, such as rename.
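
Not part of the proposal itself, but as a minimal illustration of the acquisition-order point, a hypothetical sketch with one lock per partition, always acquired in ascending index order to avoid deadlock, and a global lock retained for cross-partition operations such as rename:
{code:java}
// Hypothetical sketch, not the proposed implementation.
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class LatchLockSketch {
  private final ReentrantReadWriteLock globalLock = new ReentrantReadWriteLock();
  private final ReentrantReadWriteLock[] partitionLocks;

  LatchLockSketch(int partitions) {
    partitionLocks = new ReentrantReadWriteLock[partitions];
    for (int i = 0; i < partitions; i++) {
      partitionLocks[i] = new ReentrantReadWriteLock();
    }
  }

  /** Acquire write locks on the given partitions in ascending order to avoid deadlock. */
  void lockPartitions(int... indexes) {
    int[] sorted = indexes.clone();
    Arrays.sort(sorted);
    for (int i : sorted) {
      partitionLocks[i].writeLock().lock();
    }
  }

  void unlockPartitions(int... indexes) {
    for (int i : indexes) {
      partitionLocks[i].writeLock().unlock();
    }
  }

  /** Operations spanning arbitrary partitions (e.g. rename) keep using the global lock. */
  void lockGlobal()   { globalLock.writeLock().lock(); }
  void unlockGlobal() { globalLock.writeLock().unlock(); }
}
{code}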




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14733) [FGL] Introduce INode key.

2019-08-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14733:
--

 Summary: [FGL] Introduce INode key.
 Key: HDFS-14733
 URL: https://issues.apache.org/jira/browse/HDFS-14733
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Konstantin Shvachko


INode keys should satisfy the locality requirement.
Keys should be pluggable via a configuration parameter.
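
For illustration only, a minimal sketch of what a pluggable key hook could look like; the interface name and the configuration parameter below are hypothetical, not part of the proposal:
{code:java}
// Hypothetical sketch of a pluggable INode key function.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

interface INodeKeyFunction {
  /** Map an inode (its id and its parent's id) to a partition key that preserves locality. */
  Comparable<?> keyOf(long inodeId, long parentId);
}

final class INodeKeyFunctions {
  // Illustrative configuration parameter name.
  static final String KEY_FUNCTION_CLASS = "dfs.namenode.inode.key.function.class";

  /** Load the configured implementation, falling back to the given default. */
  static INodeKeyFunction load(Configuration conf,
      Class<? extends INodeKeyFunction> defaultImpl) {
    Class<? extends INodeKeyFunction> clazz =
        conf.getClass(KEY_FUNCTION_CLASS, defaultImpl, INodeKeyFunction.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }

  private INodeKeyFunctions() { }
}
{code}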



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14732) [FGL] Introduce PartitionedGSet a new implementation of GSet.

2019-08-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14732:
--

 Summary: [FGL] Introduce PartitionedGSet a new implementation of 
GSet.
 Key: HDFS-14732
 URL: https://issues.apache.org/jira/browse/HDFS-14732
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Konstantin Shvachko


INodeMap and BlocksMap are currently represented by hash tables implemented as 
LightWeightGSet. For fine-grained locking this should be replaced by 
PartitionedGSet, a new implementation of the GSet interface that partitions 
INodes into ranges based on a key.
We should target static partitioning into a configurable number of ranges. This 
should allow avoiding the high-level lock for the RangeMap. It should not 
compromise efficiency, because parallelism on a single node is bounded by the 
number of CPU cores.
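
As a purely illustrative sketch (not the actual PartitionedGSet), static range-based partitioning with a configurable number of partitions, each backed by an ordinary map, could look like this:
{code:java}
// Illustrative sketch only: a fixed number of partitions, each backed by a
// plain HashMap, with the partition chosen by where the key falls in a
// statically defined range table.
import java.util.HashMap;
import java.util.Map;

class PartitionedMapSketch<K extends Comparable<K>, V> {
  private final K[] rangeStarts;          // sorted lower bounds, one per partition
  private final Map<K, V>[] partitions;   // one map per range

  @SuppressWarnings("unchecked")
  PartitionedMapSketch(K[] rangeStarts) {
    this.rangeStarts = rangeStarts;
    this.partitions = new Map[rangeStarts.length];
    for (int i = 0; i < rangeStarts.length; i++) {
      partitions[i] = new HashMap<>();
    }
  }

  /** Index of the last range whose lower bound is <= key; falls back to the first range. */
  private int partitionOf(K key) {
    for (int i = rangeStarts.length - 1; i >= 0; i--) {
      if (key.compareTo(rangeStarts[i]) >= 0) {
        return i;
      }
    }
    return 0;
  }

  V get(K key)          { return partitions[partitionOf(key)].get(key); }
  V put(K key, V value) { return partitions[partitionOf(key)].put(key, value); }
  V remove(K key)       { return partitions[partitionOf(key)].remove(key); }

  int size() {
    int total = 0;
    for (Map<K, V> p : partitions) {
      total += p.size();
    }
    return total;
  }
}
{code}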



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14731) [FGL] Remove redundant locking on NameNode.

2019-08-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14731:
--

 Summary: [FGL] Remove redundant locking on NameNode.
 Key: HDFS-14731
 URL: https://issues.apache.org/jira/browse/HDFS-14731
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Konstantin Shvachko


Currently the NameNode has two global locks: FSNamesystemLock and 
FSDirectoryLock. Analysis shows that a single FSNamesystemLock is sufficient to 
guarantee consistency of the NameNode state, so FSDirectoryLock can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1969) Implement OM GetDelegationToken request to use Cache and DoubleBuffer

2019-08-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1969:


 Summary: Implement OM GetDelegationToken request to use Cache and 
DoubleBuffer
 Key: HDDS-1969
 URL: https://issues.apache.org/jira/browse/HDDS-1969
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham
 Fix For: 0.5.0


Implement the OM GetDelegationToken request to use the OM cache and double buffer.

 

This Jira will add the changes to implement this request. HA and non-HA will 
have different code paths, but once all requests are implemented there will be 
a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1968) Add an RPC endpoint in SCM to publish UNHEALTHY containers.

2019-08-14 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1968:
---

 Summary: Add an RPC endpoint in SCM to publish UNHEALTHY 
containers.
 Key: HDDS-1968
 URL: https://issues.apache.org/jira/browse/HDDS-1968
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Aravindan Vijayan
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1967) TestBlockOutputStreamWithFailures is flaky

2019-08-14 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1967:
-

 Summary: TestBlockOutputStreamWithFailures is flaky
 Key: HDDS-1967
 URL: https://issues.apache.org/jira/browse/HDDS-1967
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestBlockOutputStreamWithFailures}} is flaky. 
{noformat}
[ERROR] 
test2DatanodesFailure(org.apache.hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures)
  Time elapsed: 23.816 s  <<< FAILURE!
java.lang.AssertionError: expected:<4> but was:<8>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures.test2DatanodesFailure(TestBlockOutputStreamWithFailures.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{noformat}

{noformat}
[ERROR] 
testWatchForCommitDatanodeFailure(org.apache.hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures)
  Time elapsed: 30.895 s  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures.testWatchForCommitDatanodeFailure(TestBlockOutputStreamWithFailures.java:366)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 

[jira] [Resolved] (HDDS-1923) static/docs/start.html page doesn't render correctly on Firefox

2019-08-14 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-1923.
-
Resolution: Invalid

Thanks for looking into this [~adoroszlai]. I just started a Docker instance 
and the rendering looks fine. Resolving this.

> static/docs/start.html page doesn't render correctly on Firefox
> ---
>
> Key: HDDS-1923
> URL: https://issues.apache.org/jira/browse/HDDS-1923
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Anu Engineer
>Priority: Blocker
>
> static/docs/start.html page doesn't render correctly on Firefox



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14730) Deprecate configuration dfs.web.authentication.filter

2019-08-14 Thread Chen Zhang (JIRA)
Chen Zhang created HDFS-14730:
-

 Summary: Deprecate configuration dfs.web.authentication.filter 
 Key: HDFS-14730
 URL: https://issues.apache.org/jira/browse/HDFS-14730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chen Zhang


After HADOOP-16314, this configuration is not used anywhere, so I propose 
deprecating it to avoid misuse.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1957) MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load generator

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved HDDS-1957.
-
Resolution: Duplicate

> MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load 
> generator
> ---
>
> Key: HDDS-1957
> URL: https://issues.apache.org/jira/browse/HDDS-1957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load 
> generator.
> It is exiting because of the following exception.
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:153)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:216)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:242)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1228/

[Aug 13, 2019 2:57:20 AM] (ayushsaxena) HDFS-14708.
[Aug 13, 2019 10:08:55 AM] (31469764+bshashikant) HDDS-1908. 
TestMultiBlockWritesWithDnFailures is failing (#1282)
[Aug 13, 2019 1:34:00 PM] (abmodi) YARN-9744. 
RollingLevelDBTimelineStore.getEntityByTime fails with NPE.
[Aug 13, 2019 1:39:01 PM] (nanda) HDDS-1952. Disable TestMiniChaosOzoneCluster 
in integration.sh. (#1284)
[Aug 13, 2019 1:47:10 PM] (ayushsaxena) HDFS-13505. Turn on HDFS ACLs by 
default. Contributed by Siyao Meng.
[Aug 13, 2019 3:52:59 PM] (xkrogen) HDFS-14717. [Dynamometer] Remove explicit 
search for JUnit dependency
[Aug 13, 2019 4:21:18 PM] (ebadger) YARN-9442. container working directory has 
group read permissions.
[Aug 13, 2019 9:37:32 PM] (aengineer) HDDS-1886. Use ArrayList#clear to address 
audit failure scenario
[Aug 13, 2019 9:57:05 PM] (aengineer) HDDS-1488. Scm cli command to start/stop 
replication manager.
[Aug 13, 2019 10:30:53 PM] (aengineer) HDDS-1891. Ozone fs shell command should 
work with default port when
[Aug 13, 2019 10:57:24 PM] (aengineer) HDDS-1961. 
TestStorageContainerManager#testScmProcessDatanodeHeartbeat
[Aug 13, 2019 11:03:31 PM] (aengineer) HDDS-1917. TestOzoneRpcClientAbstract is 
failing.
[Aug 13, 2019 11:27:57 PM] (weichiu) HDFS-14665. HttpFS: LISTSTATUS response is 
missing HDFS-specific fields
[Aug 13, 2019 11:39:40 PM] (iwasakims) HDFS-14423. Percent (%) and plus (+) 
characters no longer work in
[Aug 13, 2019 11:50:49 PM] (weichiu) HDFS-14625. Make DefaultAuditLogger class 
in FSnamesystem to Abstract.
[Aug 13, 2019 11:56:59 PM] (aengineer) HDDS-1916. Only contract tests are run 
in ozonefs module
[Aug 14, 2019 12:10:36 AM] (aengineer) HDDS-1659. Define the process to add 
proposal/design docs to the Ozone
[Aug 14, 2019 12:15:26 AM] (weichiu) HDFS-14491. More Clarity on Namenode UI 
Around Blocks and Replicas.
[Aug 14, 2019 12:27:04 AM] (aengineer) HDDS-1928. Cannot run ozone-recon 
compose due to syntax error




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.mapreduce.v2.hs.webapp.TestHSWebApp 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1228/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1228/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/

[Aug 13, 2019 3:30:33 PM] (ekrogen) HDFS-14370. Add exponential backoff to the 
edit log tailer to avoid
[Aug 13, 2019 5:27:43 PM] (ebadger) YARN-9442. container working directory has 
group read permissions.
[Aug 13, 2019 9:28:18 PM] (xkrogen) HADOOP-16459. Backport of HADOOP-16266. Add 
more fine-grained processing




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.tools.TestDFSAdminWithHA 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.TestHdfsAdmin 
   hadoop.yarn.client.cli.TestRMAdminCLI 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/413/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [316K]
   

[jira] [Created] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1966:
---

 Summary: Wrong expected key ACL in acceptance test
 Key: HDDS-1966
 URL: https://issues.apache.org/jira/browse/HDDS-1966
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Acceptance test fails at ACL checks:

{code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
[ {
  "type" : "USER",
  "name" : "testuser/s...@example.com",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "GROUP",
  "name" : "root",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "GROUP",
  "name" : "superuser1",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "USER",
  "name" : "superuser1",
  "aclScope" : "ACCESS",
  "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
} ]' does not match '"type" : "GROUP",
.*"name" : "superuser1*",
.*"aclScope" : "ACCESS",
.*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
{code}

The test [sets user 
ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
 but [checks group 
ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
  I think this passed previously due to a bug that was 
[fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
 by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1965:
---

 Summary: Compile error due to leftover 
ScmBlockLocationTestIngClient file
 Key: HDDS-1965
 URL: https://issues.apache.org/jira/browse/HDDS-1965
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
 class ScmBlockLocationTestingClient is public, should be declared in a file 
named ScmBlockLocationTestingClient.java
[ERROR] 
/var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
 duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
[INFO] 2 errors 
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1964:
---

 Summary: TestOzoneClientProducer fails with ConnectException
 Key: HDDS-1964
 URL: https://issues.apache.org/jira/browse/HDDS-1964
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila


{code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
---
Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
---
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
Time elapsed: 111.036 s  <<< FAILURE!
java.lang.AssertionError: 
 Expected to find 'Couldn't create protocol ' but got unexpected exception: 
java.net.ConnectException: Your endpoint configuration is wrong; For more 
details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
{code}

Log output (with local log4j config) reveals that connection is attempted to 
0.0.0.0:9862:

{code:title=log output}
2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
(Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
{code}

The address 0.0.0.0:9862 was added as default in 
[HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1963) OM DB Schema definition in OmMetadataManagerImpl and OzoneConsts are not consistent

2019-08-14 Thread Sammi Chen (JIRA)
Sammi Chen created HDDS-1963:


 Summary: OM DB Schema definition in OmMetadataManagerImpl and OzoneConsts are not consistent
 Key: HDDS-1963
 URL: https://issues.apache.org/jira/browse/HDDS-1963
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Sammi Chen


OzoneConsts.java

 * OM DB Schema:
   *  --------------------------------------------------------------
   *  | KEY                                       | VALUE           |
   *  --------------------------------------------------------------
   *  | $userName                                 | VolumeList      |
   *  --------------------------------------------------------------
   *  | /#volumeName                              | VolumeInfo      |
   *  --------------------------------------------------------------
   *  | /#volumeName/#bucketName                  | BucketInfo      |
   *  --------------------------------------------------------------
   *  | /volumeName/bucketName/keyName            | KeyInfo         |
   *  --------------------------------------------------------------
   *  | #deleting#/volumeName/bucketName/keyName  | KeyInfo         |
   *  --------------------------------------------------------------

OmMetadataManagerImpl.java

/**
   * OM RocksDB Structure.
   *
   * OM DB stores metadata as KV pairs in different column families.
   *
   * OM DB Schema:
   * |------------------|------------------------------------------------|
   * | Column Family    | VALUE                                          |
   * |------------------|------------------------------------------------|
   * | userTable        | user -> VolumeList                             |
   * |------------------|------------------------------------------------|
   * | volumeTable      | /volume -> VolumeInfo                          |
   * |------------------|------------------------------------------------|
   * | bucketTable      | /volume/bucket -> BucketInfo                   |
   * |------------------|------------------------------------------------|
   * | keyTable         | /volumeName/bucketName/keyName -> KeyInfo      |
   * |------------------|------------------------------------------------|
   * | deletedTable     | /volumeName/bucketName/keyName -> KeyInfo      |
   * |------------------|------------------------------------------------|
   * | openKey          | /volumeName/bucketName/keyName/id -> KeyInfo   |
   * |------------------|------------------------------------------------|
   * | s3Table          | s3BucketName -> /volumeName/bucketName         |
   * |------------------|------------------------------------------------|
   * | s3SecretTable    | s3g_access_key_id -> s3Secret                  |
   * |------------------|------------------------------------------------|
   * | dTokenTable      | s3g_access_key_id -> s3Secret                  |
   * |------------------|------------------------------------------------|
   * | prefixInfoTable  | prefix -> PrefixInfo                           |
   * |------------------|------------------------------------------------|
   */

It's better to put the OM DB Schema definition in one place to remove the 
information redundancy that leads to this inconsistency.
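
For illustration only, one way to keep the schema in a single place would be an enum of column families that both OzoneConsts and OmMetadataManagerImpl (and their javadoc) reference; the enum below is a hypothetical sketch whose table names and mappings simply mirror the javadoc quoted above:
{code:java}
// Hypothetical sketch, not existing Ozone code.
enum OmDbColumnFamily {
  USER_TABLE("userTable", "user -> VolumeList"),
  VOLUME_TABLE("volumeTable", "/volume -> VolumeInfo"),
  BUCKET_TABLE("bucketTable", "/volume/bucket -> BucketInfo"),
  KEY_TABLE("keyTable", "/volumeName/bucketName/keyName -> KeyInfo"),
  DELETED_TABLE("deletedTable", "/volumeName/bucketName/keyName -> KeyInfo"),
  OPEN_KEY_TABLE("openKey", "/volumeName/bucketName/keyName/id -> KeyInfo"),
  S3_TABLE("s3Table", "s3BucketName -> /volumeName/bucketName"),
  S3_SECRET_TABLE("s3SecretTable", "s3g_access_key_id -> s3Secret"),
  DELEGATION_TOKEN_TABLE("dTokenTable", "s3g_access_key_id -> s3Secret"),
  PREFIX_INFO_TABLE("prefixInfoTable", "prefix -> PrefixInfo");

  private final String tableName;
  private final String mapping;

  OmDbColumnFamily(String tableName, String mapping) {
    this.tableName = tableName;
    this.mapping = mapping;
  }

  String getTableName() { return tableName; }
  String getMapping()   { return mapping; }
}
{code}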





--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1962) Reduce the compilation times for Ozone

2019-08-14 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1962:
--

 Summary: Reduce the compilation times for Ozone 
 Key: HDDS-1962
 URL: https://issues.apache.org/jira/browse/HDDS-1962
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer


Due to the introduction of some JavaScript libraries, the npm/yarn processing 
is adding too much to the build time of Ozone. This Jira is to track and solve 
that issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org