[jira] [Created] (HDDS-1033) Add FSStatistics for OzoneFileSystem

2019-01-29 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1033:
---

 Summary: Add FSStatistics for OzoneFileSystem
 Key: HDDS-1033
 URL: https://issues.apache.org/jira/browse/HDDS-1033
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
 Fix For: 0.4.0


This jira proposes to add FS Statistics for Ozone File System.
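
A minimal sketch of how a FileSystem implementation typically feeds these
counters (illustrative only, not the actual patch): the protected "statistics"
field inherited from org.apache.hadoop.fs.FileSystem is bumped in each
read/write entry point. The helpers openKeyStream/deleteKey below are
hypothetical stand-ins for the existing Ozone code paths.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;

// Inside a FileSystem subclass such as OzoneFileSystem:
@Override
public FSDataInputStream open(Path f, int bufferSize) throws IOException {
  statistics.incrementReadOps(1);       // count each open() as one read op
  return openKeyStream(f, bufferSize);  // hypothetical existing read path
}

@Override
public boolean delete(Path f, boolean recursive) throws IOException {
  statistics.incrementWriteOps(1);      // deletes count as write operations
  return deleteKey(f, recursive);       // hypothetical existing delete path
}
{code}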






[jira] [Created] (HDDS-1032) Package builds are failing with missing org.mockito:mockito-core dependency version

2019-01-29 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1032:
---

 Summary: Package builds are failing with missing 
org.mockito:mockito-core dependency version
 Key: HDDS-1032
 URL: https://issues.apache.org/jira/browse/HDDS-1032
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.4.0


Package builds using "mvn package -Pdist -DskipTests -Dtar 
-Dmaven.javadoc.skip=true -Phdds" are failing with the following error.

{code}
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
missing. @ line 36, column 17
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar is 
missing. @ line 89, column 17
 @ 
[ERROR] The build could not read 2 projects -> [Help 1]
[ERROR]   
[ERROR]   The project 
org.apache.hadoop:hadoop-hdds-server-framework:0.4.0-SNAPSHOT 
(/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/framework/pom.xml) has 1 
error
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar 
is missing. @ line 36, column 17
[ERROR]   
[ERROR]   The project org.apache.hadoop:hadoop-hdds-server-scm:0.4.0-SNAPSHOT 
(/Users/msingh/code/apache/ozone/oz_new3/hadoop-hdds/server-scm/pom.xml) has 1 
error
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-core:jar 
is missing. @ line 89, column 17
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
{code}
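
The usual fix for this class of Maven error is to declare the version once in
the parent pom's dependencyManagement section so that child modules can omit
it. A hedged sketch (the parent module and the version number are illustrative,
not necessarily what the Ozone tree uses):

{code:xml}
<!-- in the parent pom.xml, e.g. hadoop-hdds/pom.xml -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>2.15.0</version> <!-- illustrative version only -->
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}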






[jira] [Created] (HDFS-14243) Modify HDFS permissions to allow writes, reads but No Deletes for specific directory

2019-01-29 Thread Tanmoy (JIRA)
Tanmoy created HDFS-14243:
-

 Summary: Modify HDFS permissions to allow writes, reads but No 
Deletes for specific directory
 Key: HDFS-14243
 URL: https://issues.apache.org/jira/browse/HDFS-14243
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Tanmoy


Currently, using HDFS permissions and ACLs, we can represent a variety of access 
permissions. There can be scenarios where the owner of a particular directory 
should always be able to write to HDFS, but, due to audit/security requirements, 
deletes should not be allowed.
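
For context, the closest existing mechanism is the directory sticky bit, which
allows writes by everyone but restricts delete/rename to the file owner (it
does not restrain the owner themselves, which is the gap this feature targets).
A minimal sketch, assuming a hypothetical /audit/logs directory:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class StickyBitExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Mode 1777: world-writable, but the sticky bit means only a file's
    // owner (or the superuser) may delete or rename entries inside.
    // Shell equivalent: hdfs dfs -chmod 1777 /audit/logs
    fs.setPermission(new Path("/audit/logs"), new FsPermission((short) 01777));
  }
}
{code}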






[jira] [Created] (HDDS-1031) Update ratis version to fix a DN restart Bug

2019-01-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1031:


 Summary: Update ratis version to fix a DN restart Bug
 Key: HDDS-1031
 URL: https://issues.apache.org/jira/browse/HDDS-1031
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This is related to RATIS-460.

When the datanode is restarted after ratis has taken a snapshot, we see the 
stack trace below, and the DN won't boot up. For more info, refer to RATIS-460.

 
{code:java}
java.io.IOException: java.lang.IllegalStateException: lastEntry = 72856=72856: 
[77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
lastEntry.index >= logIndex = 0
        at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:54)
        at org.apache.ratis.util.IOUtils.toIOException(IOUtils.java:61)
        at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:70)
        at 
org.apache.ratis.server.impl.RaftServerProxy.getImpls(RaftServerProxy.java:283)
        at 
org.apache.ratis.server.impl.RaftServerProxy.start(RaftServerProxy.java:295)
        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.start(XceiverServerRatis.java:427)
        at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.start(OzoneContainer.java:149)
        at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:165)
        at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:334)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: lastEntry = 72856=72856: 
[77969640-aad9-4678-813b-8fb35bd5f568:172.27.37.0:9858, 
7c6ae4fe-7db5-4e97-a407-0a9edff70c2c:172.27.35.192:9858, 
add14303-ecdf-4aed-84b7-abc3152177f6:172.27.37.128:9858], old=null, 
lastEntry.index >= logIndex = 0
        at org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:72)
        at 
org.apache.ratis.server.impl.ConfigurationManager.addConfiguration(ConfigurationManager.java:54)
        at 
org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:352)
        at 
org.apache.ratis.server.impl.ServerState.setRaftConf(ServerState.java:347)
        at 
org.apache.ratis.server.storage.RaftLog.lambda$open$6(RaftLog.java:237)
        at 
org.apache.ratis.server.storage.LogSegment.lambda$loadSegment$0(LogSegment.java:140)
        at 
org.apache.ratis.server.storage.LogSegment.readSegmentFile(LogSegment.java:121)
        at 
org.apache.ratis.server.storage.LogSegment.loadSegment(LogSegment.java:137)
        at 
org.apache.ratis.server.storage.RaftLogCache.loadSegment(RaftLogCache.java:272)
        at 
org.apache.ratis.server.storage.SegmentedRaftLog.loadLogSegments(SegmentedRaftLog.java:159)
        at 
org.apache.ratis.server.storage.SegmentedRaftLog.openImpl(SegmentedRaftLog.java:129)
        at org.apache.ratis.server.storage.RaftLog.open(RaftLog.java:233)
        at 
org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:191)
        at org.apache.ratis.server.impl.ServerState.(ServerState.java:114)
        at 
org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103)
        at 
org.apache.ratis.server.impl.RaftServerProxy.lambda$newRaftServerImpl$2(RaftServerProxy.java:207)
        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        at 
java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2019-01-29 01:43:41,137 [main] ERROR      - Exception in HddsDatanodeService.
java.lang.NullPointerException
        at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.join(DatanodeStateMachine.java:363)
        at 
org.apache.hadoop.ozone.HddsDatanodeService.join(HddsDatanodeService.java:270)
        at 
org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:127)
{code}
 






[jira] [Created] (HDDS-1030) Move auditparser robot tests under ozone basic

2019-01-29 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-1030:
---

 Summary: Move auditparser robot tests under ozone basic
 Key: HDDS-1030
 URL: https://issues.apache.org/jira/browse/HDDS-1030
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.0
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Based on a [review 
comment|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16753848&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16753848]
 from [~elek] in HDDS-1007, this Jira aims to move the audit parser robot tests 
to the basic tests folder so that they can use the ozone env.

 






Re: proposed new repository for hadoop/ozone docker images (+update on docker works)

2019-01-29 Thread Anu Engineer
Marton, please correct me if I am wrong, but I believe that without this branch 
it is hard for us to push to the Apache DockerHub. This allows for integration 
between the Apache account and DockerHub.
Does YARN publish to Docker Hub via the Apache account?


Thanks
Anu


On 1/29/19, 4:54 PM, "Eric Yang"  wrote:

    Separating the Hadoop docker-related build into a separate git repository 
has some slippery slopes.  It is harder to synchronize changes between two 
separate source trees, and the multi-step process to build the jar, tarball, 
and docker images might be problematic to reproduce.

    It would be best to arrange the code so that the docker image build process 
can be invoked as part of the maven build.  The profile would be activated only 
if docker is installed and running in the environment.  This allows producing 
the jar, tarball, and docker images all at once without hindering the existing 
build procedure.
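
A hedged sketch of such docker-gated activation (not the actual YARN-7129
patch; Maven activation can only test for file existence, so the presence of
the docker client binary stands in for "docker is installed"):

{code:xml}
<profile>
  <id>docker-build</id>
  <activation>
    <file>
      <!-- proxy for a docker installation; does not prove the daemon runs -->
      <exists>/usr/bin/docker</exists>
    </file>
  </activation>
  <!-- bind the image build here, e.g. via exec-maven-plugin or
       dockerfile-maven-plugin -->
</profile>
{code}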

    YARN-7129 is one example of making a subproject in YARN that builds a 
docker image which can run in YARN.  It automatically detects the presence of 
docker and builds the docker image when docker is available.  If docker is not 
running, the subproject is skipped and the build proceeds to the next 
sub-project.  Please try out the YARN-7129 style of build process, and see 
whether it is a possible solution to the docker image generation issue.  Thanks

Regards,
Eric

On 1/29/19, 3:44 PM, "Arpit Agarwal"  wrote:

I’ve requested a new repo hadoop-docker-ozone.git in gitbox.


> On Jan 22, 2019, at 4:59 AM, Elek, Marton  wrote:
> 
> 
> 
> TLDR;
> 
> I proposed to create a separate git repository for ozone docker images
> in HDDS-851 (hadoop-docker-ozone.git)
> 
> If there are no objections in the next 3 days I will ask an Apache Member
> to create the repository.
> 
> 
> 
> 
> LONG VERSION:
> 
> In HADOOP-14898 multiple docker containers and helper scripts were
> created for Hadoop.
> 
> The main goals were to:
> 
> 1.) help development with easy-to-use docker images
> 2.) provide official hadoop images to make it easy to test new features
> 
> As of now we have:
> 
> - the apache/hadoop-runner image (which contains the required dependencies
> but no hadoop)
> - apache/hadoop:2 and apache/hadoop:3 images (to try out the latest hadoop
> from the 2/3 lines)
> 
> The base image to run hadoop (apache/hadoop-runner) is also heavily used
> for Ozone distribution/development.
> 
> The Ozone distribution contains docker-compose based cluster definitions
> to start various types of clusters and scripts to do smoketesting. (See
> HADOOP-16063 for more details.)
> 
> Note: I personally believe that these definitions help a lot to start
> different types of clusters. For example, it can be tricky to try out
> router based federation as it requires multiple HA clusters. But with a
> simple docker-compose definition [1] it can be started in under 3
> minutes. (HADOOP-16063 is about creating these definitions for various
> hdfs/yarn use cases.)
> 
> As of now we have dedicated branches in the hadoop git repository for
> the docker images (docker-hadoop-runner, docker-hadoop-2,
> docker-hadoop-3). It turns out that a separate repository would be more
> effective, as dockerhub can use only full branch names as tags.
> 
> We would like to provide ozone docker images to make evaluation as
> easy as 'docker run -d apache/hadoop-ozone:0.3.0', therefore in HDDS-851
> we agreed to create a separate repository for the hadoop-ozone docker
> images.
> 
> If this approach works well, we can also move the existing
> docker-hadoop-2/docker-hadoop-3/docker-hadoop-runner branches out of
> hadoop.git into another separate hadoop-docker.git repository.
> 
> Please let me know if you have any comments.
> 
> Thanks,
> Marton
> 
> 1: see
> https://github.com/flokkr/runtime-compose/tree/master/hdfs/routerfeder
> as an example
> 








[jira] [Created] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-29 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1029:


 Summary: Allow option for force in DeleteContainerCommand
 Key: HDDS-1029
 URL: https://issues.apache.org/jira/browse/HDDS-1029
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Right now, we check that the container state is not open, and only then do we 
delete the container.

We need a way to delete containers which are open, so adding a force flag will 
allow deleting a container without any state checks. (This is required for 
deleting replicas when SCM detects over-replication, and the container to be 
deleted can be in the open state.)
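
A hypothetical sketch of the intended check (the handler, enum, and helper
names here are illustrative, not the actual HDDS classes):

{code:java}
// Skip the open-state check only when the new force flag is set.
void deleteContainer(Container container, boolean force) throws IOException {
  if (!force && container.getContainerState() == State.OPEN) {
    throw new IOException(
        "Deleting an open container is not allowed without the force flag");
  }
  // With force=true the state check is bypassed entirely, e.g. when SCM
  // asks to remove an over-replicated replica that is still open.
  deleteContainerInternal(container);
}
{code}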









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/

[Jan 28, 2019 9:03:12 AM] (yqlin) HDDS-974. Add getServiceAddress method to 
ServiceInfo and use it in
[Jan 28, 2019 11:05:53 PM] (eyang) YARN-9074. Consolidate docker removal logic 
in ContainerCleanup.
[Jan 28, 2019 11:10:33 PM] (eyang) YARN-8901. Fixed restart policy 
NEVER/ON_FAILURE with component




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestClientMetrics 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-patch-pylint.txt
  [88K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [176K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1031/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [8.0K]
   

Re: [DISCUSS] Moving branch-2 to java 8

2019-01-29 Thread Steve Loughran
branch-2 is the JDK 7 branch, but for a long time I (and presumably others) 
have relied on jenkins to keep us honest by doing that build and test.

Right now, we can't do that any more, due to jdk7 bugs which will never be 
fixed by oracle, or at least not in a public release.

If we can still compile at the java 7 language level and link against the 
java 7 JDK, then that bit of the release is good - then java 8 can be used 
for the tests.

Ultimately, we're going to be forced onto java 8 just because all our 
dependencies have moved onto it, and some CVE will force us to move.

At which point, I think it's time to declare branch-2 dead. It's had a great 
life, but trying to keep java 7 support alive isn't sustainable. Not just in 
this testing, but cherrypicking patches back gets more and more difficult: 
branch-3 has moved on both in its use of the java 8 language and in the 
codebase in general.
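
One way to express that split (compiling and linking at the java 7 level while 
the build itself runs on a java 8 JDK) is the standard maven-compiler-plugin 
source/target settings. A hedged sketch, not the actual branch-2 pom; strict 
java 7 API checking would additionally need a java 7 bootclasspath or the 
animal-sniffer plugin:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- pin the language level and bytecode target to java 7,
         independent of the JDK running the build -->
    <source>1.7</source>
    <target>1.7</target>
  </configuration>
</plugin>
{code}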

> On 28 Jan 2019, at 20:18, Vinod Kumar Vavilapalli  wrote:
> 
> The community made a decision a long time ago that we'd like to keep 
> compatibility & so tie branch-2 to Java 7, but do Java 8+ work only on 3.x.
> 
> I always assumed that most (all?) downstream users build branch-2 on JDK 7 
> only, can anyone confirm? If so, there may be an easier way to address these 
> test issues.
> 
> +Vinod
> 
>> On Jan 28, 2019, at 11:24 AM, Jonathan Hung  wrote:
>> 
>> Hi folks,
>> 
>> Forking a discussion based on HADOOP-15711. To summarize, there are issues
>> with branch-2 tests running on java 7 (openjdk) which don't exist on java
>> 8. From our testing, the build can pass with openjdk 8.
>> 
>> For branch-3, the work to move the build to use java 8 was done in
>> HADOOP-14816 as part of the Dockerfile OS version change. HADOOP-16053 was
>> filed to backport this OS version change to branch-2 (but without the java
>> 7 -> java 8 change). So my proposal is to also make the java 7 -> java 8
>> version change in branch-2.
>> 
>> As mentioned in HADOOP-15711, the main issue is around source and binary
>> compatibility. I don't currently have a great answer, but one initial
>> thought is to build source/binary against java 7 to ensure compatibility
>> and run the rest of the build as java 8.
>> 
>> Thoughts?
>> 
>> Jonathan Hung
> 
> 



[Ozone][status] 2019.01.29

2019-01-29 Thread Elek, Marton


Quick summary from the Ozone Community Call of yesterday [1]


1. 0.4.0 release status: The security branch is merged. The release is expected
in the next 2-3 weeks. (TODOs: a few security-related issues, connecting the s3
api with security, and reliability issues; see the testing below.)

2. Release managers:

 * Ajay Kumar would volunteer as a release manager for 0.4.0
 * Hanisha Koneru would do 0.5.0 (HA release)

3. There will be a quick Ozone presentation/update at the upcoming Apache
Hadoop Contributors Meetup (Bay Area) [2]

4. Some ongoing work was discussed:
  * Hanisha is working on HA,
  * Classpath issues with ozonefs (short term: jar files can be added,
long term: HDDS-922)

5. Different kinds of tests have been planned or started recently:

  * Blockade tests: failover/partitioning tests. Similar to the
legendary Jepsen tests, but using the docker-based blockade [3]
framework. Check the open jiras to try it out.

  * A 1TB TPCDS test is planned. Stabilization work is expected.

  * Additional upstream tests are planned based on realistic workloads,
using ozonefs from upstream bigdata projects

6. Ozone is experimental. It contains new solutions which could be
backported to the regular hadoop distribution as well (for example the
real classpath separation). A new wiki page has been started [4] to collect
candidates which can be moved to the hadoop-common/hdfs projects.

7. A distributed tracing PoC was demonstrated in Ozone Manager / Storage
Container Manager. It is a blocking requirement for the e2e performance
tests (HDDS-1017).

As usual, all feedback is welcome. The next call will be on
04/02/2019, 9am PST. Please join if you are interested.

Thanks,
Marton


[1] https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls

[2] https://www.meetup.com/Hadoop-Contributors/events/257793743/

[3] https://github.com/worstcase/blockade

[4]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103088227




[jira] [Created] (HDDS-1028) Improve logging in SCMPipelineManager

2019-01-29 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1028:
-

 Summary: Improve logging in SCMPipelineManager
 Key: HDDS-1028
 URL: https://issues.apache.org/jira/browse/HDDS-1028
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Lokesh Jain
 Fix For: 0.4.0


Currently, SCMPipelineManager does not log events like pipeline creation and 
deletion. It would be a good idea to log such events.
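
A minimal sketch of the kind of logging this could add (SLF4J parameterized
logging, as used across the code base; the exact call sites and variable names
in SCMPipelineManager are illustrative):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

private static final Logger LOG =
    LoggerFactory.getLogger(SCMPipelineManager.class);

// at the end of pipeline creation:
LOG.info("Created pipeline {} with replication factor {}",
    pipeline.getId(), factor);

// at the end of pipeline removal:
LOG.info("Removed pipeline {}", pipeline.getId());
{code}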






[jira] [Created] (HDDS-1027) Add blockade Tests for datanode isolation and scm failures

2019-01-29 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-1027:


 Summary: Add blockade Tests for datanode isolation and scm failures
 Key: HDDS-1027
 URL: https://issues.apache.org/jira/browse/HDDS-1027
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi





