[jira] [Created] (HDFS-13873) ObserverNode should reject read requests when it is too far behind.

2018-08-27 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13873:
--

 Summary: ObserverNode should reject read requests when it is too 
far behind.
 Key: HDFS-13873
 URL: https://issues.apache.org/jira/browse/HDFS-13873
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko


Add a server-side threshold for ObserverNode to reject read requests when it is 
too far behind.



[jira] [Created] (HDFS-13872) Only ClientProtocol should perform msync wait

2018-08-27 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13872:
--

 Summary: Only ClientProtocol should perform msync wait
 Key: HDFS-13872
 URL: https://issues.apache.org/jira/browse/HDFS-13872
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Erik Krogen


Currently the implementation of msync added in HDFS-13767 waits until the 
server has caught up to the client-specified transaction ID regardless of what 
the inbound RPC is. This particularly causes problems for 
ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state from 
an observer/standby; this should be a quick operation, but it has to wait for 
the node to catch up to the most current state. I initially thought all 
{{HAServiceProtocol}} methods should thus be excluded from the wait period, but 
actually I think the right approach is that _only_ {{ClientProtocol}} methods 
should be subject to the wait period. I propose that we can do this via an 
annotation on {{ClientProtocol}} which can then be checked within {{ipc.Server}}.
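
As an illustration only (the annotation name and server hook below are made up, not the actual implementation), the idea could look roughly like this:

{code}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for protocols whose calls should wait for the client-specified state ID.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface RequiresStateAlignment {}

// Only ClientProtocol would carry the marker.
@RequiresStateAlignment
interface ClientProtocol { /* ... */ }

// Inside the RPC server, the catch-up wait is applied only to annotated protocols.
class AlignmentCheck {
  void maybeWait(Class<?> protocol, long clientStateId) throws InterruptedException {
    if (protocol.isAnnotationPresent(RequiresStateAlignment.class)) {
      waitUntilCaughtUp(clientStateId);
    }
  }

  private void waitUntilCaughtUp(long txId) throws InterruptedException {
    // placeholder for the existing wait logic in ipc.Server
  }
}
{code}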




[jira] [Created] (HDFS-13871) HttpFS: Implement APIs that are in WebHDFS but not in HttpFS

2018-08-27 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13871:
-

 Summary: HttpFS: Implement APIs that are in WebHDFS but not in 
HttpFS
 Key: HDFS-13871
 URL: https://issues.apache.org/jira/browse/HDFS-13871
 Project: Hadoop HDFS
  Issue Type: Task
  Components: httpfs
Affects Versions: 3.0.3, 3.1.0
Reporter: Siyao Meng


As of now, HttpFS doesn't have the following APIs that exist in WebHDFS:
ALLOWSNAPSHOT, DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057), GETSNAPSHOTDIFF 
(since 3.0.3, HDFS-13052), and GETSNAPSHOTTABLEDIRECTORYLIST (HDFS-13141).

We might want to port these APIs from WebHDFS to HttpFS to keep the two interfaces 
consistent.
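
For reference, a minimal client call such as the one below (host, port and path are placeholders) works against WebHDFS but is not yet served by HttpFS; the operation and parameter names follow the WebHDFS snapshot operations:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SnapshotDiffViaWebHdfs {
  public static void main(String[] args) throws Exception {
    // GETSNAPSHOTDIFF is a read-only op, so a plain GET against the REST API is enough.
    URL url = new URL("http://namenode.example.com:9870/webhdfs/v1/data"
        + "?op=GETSNAPSHOTDIFF&oldsnapshotname=s1&snapshotname=s2&user.name=hdfs");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      in.lines().forEach(System.out::println);  // prints the JSON diff report
    }
  }
}
{code}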





[jira] [Created] (HDFS-13870) WebHDFS: Document new APIs

2018-08-27 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13870:
-

 Summary: WebHDFS: Document new APIs
 Key: HDFS-13870
 URL: https://issues.apache.org/jira/browse/HDFS-13870
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Siyao Meng


ALLOWSNAPSHOT, DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057), GETSNAPSHOTDIFF 
(since 3.0.3, HDFS-13052), and GETSNAPSHOTTABLEDIRECTORYLIST (HDFS-13141) don't 
have their API usage documentation in the 
[official doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
 yet.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field: FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192] 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged 
   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.yarn.service.TestServiceAM 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/881/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [20K]

[jira] [Created] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-379:
-

 Summary: Simplify and improve the cli arg parsing of ozone scmcli
 Key: HDDS-379
 URL: https://issues.apache.org/jira/browse/HDDS-379
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
 Fix For: 0.2.1


SCMCLI is a useful tool to test SCM. It can create/delete/close/list containers.

There are multiple problems with the current scmcli.

The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
very hard to get the help for a specific subcommand.

The other problem is that a big part of the code is argument handling, which is 
mixed with the business logic.

I propose to use a more modern argument-handling library to simplify the 
argument handling (and improve the user experience).

I propose to use [picocli|https://github.com/remkop/picocli].

1.) It supports subcommands, with both subcommand-specific and general arguments.
2.) It works via annotations with very little additional boilerplate code.
3.) It's very well documented and easy to use.
4.) It's licensed under the Apache License.
5.) It supports tab autocompletion for bash and zsh, and colorful output.
6.) It's an actively maintained project.
7.) It has been adopted by other big projects (Groovy, JUnit, Log4j).

In this patch I would like to demonstrate how the cli handling could be 
simplified. If it's accepted, we can start to use a similar approach for the 
other Ozone CLIs as well.
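
As a rough illustration (the class, subcommand and option names below are made up for this sketch, not the actual patch), a picocli-based subcommand could look like this:

{code}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Parsing, validation, help text and error messages are all generated from the
// annotations; the method body contains only the business logic.
@Command(name = "list", description = "List existing containers",
    mixinStandardHelpOptions = true)
class ListContainers implements Callable<Void> {

  @Option(names = {"-c", "--count"},
      description = "Maximum number of containers to list")
  private int count = 20;

  @Override
  public Void call() throws Exception {
    System.out.println("Listing up to " + count + " containers");
    return null;
  }

  public static void main(String[] args) {
    CommandLine.call(new ListContainers(), args);
  }
}
{code}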

The patch also fixes the cli (the name of the main class was wrong). 

It also requires HDDS-377 to be compiled.






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/571/

[Aug 27, 2018 6:55:46 AM] (yqlin) HDFS-13831. Make block increment deletion 
number configurable.
[Aug 27, 2018 9:41:08 AM] (elek) HDDS-334. Update GettingStarted page to 
mention details about Ozone


ERROR: File 'out/email-report.txt' does not exist


[jira] [Created] (HDDS-378) Remove dependencies between hdds/ozone and hdfs proto files

2018-08-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-378:
-

 Summary: Remove dependencies between hdds/ozone and hdfs proto 
files
 Key: HDDS-378
 URL: https://issues.apache.org/jira/browse/HDDS-378
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


It would be great to make the hdds/ozone proto files independent from the hdfs 
proto files. It would help us run Ozone against multiple Hadoop versions.

It would also help to create separate artifacts from the hdds protos: HDDS-220.

Currently we have a few unused "hdfs.proto" imports in the proto files, and we 
use StorageTypeProto from hdfs:

{code}
cd hadoop-hdds
grep -r "hdfs" --include="*.proto"
common/src/main/proto/ScmBlockLocationProtocol.proto:import "hdfs.proto";
common/src/main/proto/StorageContainerLocationProtocol.proto:import 
"hdfs.proto";

 cd ../hadoop-ozone
grep -r "hdfs" --include="*.proto"
common/src/main/proto/OzoneManagerProtocol.proto:import "hdfs.proto";
common/src/main/proto/OzoneManagerProtocol.proto:required 
hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK];
common/src/main/proto/OzoneManagerProtocol.proto:optional 
hadoop.hdfs.StorageTypeProto storageType = 6;
{code}

I propose to:

1.) Remove the hdfs import statements from the proto files.
2.) Copy StorageTypeProto and create an HDDS version of it (without PROVIDED).






[jira] [Created] (HDDS-377) Make the ScmClient closable and stop the started threads

2018-08-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-377:
-

 Summary: Make the ScmClient closable and stop the started threads
 Key: HDDS-377
 URL: https://issues.apache.org/jira/browse/HDDS-377
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM Client
Reporter: Elek, Marton
 Fix For: 0.2.1


The current ScmClient class opens additional threads that are never closed. For 
example, SCMCLI can't exit because of this running thread:

{code}
"nioEventLoopGroup-2-1" #15 prio=10 os_prio=0 tid=0x7f1c84c74800 nid=0x77f4 
runnable [0x7f1c52238000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x000771adf7b0> (a 
org.apache.ratis.shaded.io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x000771ae12d8> (a java.util.Collections$UnmodifiableSet)
- locked <0x000771ae1010> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at 
org.apache.ratis.shaded.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at 
org.apache.ratis.shaded.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
at 
org.apache.ratis.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
at 
org.apache.ratis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at 
org.apache.ratis.shaded.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:745)

{code}

(Note this is Netty, but the gRPC-based xceiver also has some additional threads.)

I propose making ScmClient auto-closeable and stopping the XceiverClientManager 
when the client is closed. 
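
A sketch of the proposed shape (the manager below is a stub; the real XceiverClientManager shutdown method is not verified here):

{code}
import java.io.Closeable;
import java.io.IOException;

// Stand-in for the real XceiverClientManager; its only job here is to show where
// the background netty/grpc event-loop threads would be stopped.
class XceiverClientManagerStub implements Closeable {
  @Override
  public void close() {
    // shut down cached xceiver clients and their event loops
  }
}

interface ScmClient extends Closeable {
  // existing create/delete/close/list container operations ...
}

class ContainerOperationClient implements ScmClient {
  private final XceiverClientManagerStub clientManager = new XceiverClientManagerStub();

  @Override
  public void close() throws IOException {
    // releasing the manager stops the background threads, so SCMCLI can exit
    clientManager.close();
  }
}
{code}

Callers like SCMCLI could then wrap the client in try-with-resources so the threads are always stopped.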


