[hadoop] branch branch-3.2 updated: YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.

2019-01-30 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0e7060a  YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.
0e7060a is described below

commit 0e7060a1d57cf80bcd07ddc5b238cbfd3149be75
Author: Sunil G 
AuthorDate: Thu Jan 31 09:25:29 2019 +0530

    YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.

(cherry picked from commit 71c49fa60faad2504b0411979a6e46e595b97a85)
---
 .../containermanager/linux/resources/gpu/GpuResourceAllocator.java   | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
index 81a9655..49aac6d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
@@ -258,10 +258,7 @@ public class GpuResourceAllocator {
 
   private synchronized long getReleasingGpus() {
     long releasingGpus = 0;
-    Iterator<Map.Entry<GpuDevice, ContainerId>> iter = usedDevices.entrySet()
-        .iterator();
-    while (iter.hasNext()) {
-      ContainerId containerId = iter.next().getValue();
+    for (ContainerId containerId : ImmutableSet.copyOf(usedDevices.values())) {
       Container container;
       if ((container = nmContext.getContainers().get(containerId)) != null) {
         if (container.isContainerInFinalStates()) {
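The bug fixed above: `usedDevices` maps each GPU device to the container holding it, so iterating its entries visits a container once per GPU it owns, and that container's GPU count is summed multiple times. Deduplicating the values first counts each container once. A minimal, dependency-free sketch (device and container names and counts are hypothetical; the real code derives GPU counts from the container's resource via `nmContext`, and the patch dedupes with Guava's `ImmutableSet.copyOf`):

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReleasingGpusDemo {

    // Old, buggy behaviour: one pass per map entry (one entry per GPU),
    // so a container holding N GPUs has its count added N times.
    static long countPerEntry(Map<String, String> usedDevices,
                              Map<String, Integer> gpusPerContainer) {
        long total = 0;
        for (Map.Entry<String, String> e : usedDevices.entrySet()) {
            total += gpusPerContainer.get(e.getValue());
        }
        return total;
    }

    // The fix: deduplicate containers before summing (ImmutableSet.copyOf
    // in the patch; a plain HashSet keeps this sketch dependency-free).
    static long countPerContainer(Map<String, String> usedDevices,
                                  Map<String, Integer> gpusPerContainer) {
        long total = 0;
        for (String containerId : new HashSet<>(usedDevices.values())) {
            total += gpusPerContainer.get(containerId);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical state: container "c1" holds two GPUs and is releasing.
        Map<String, String> usedDevices = new LinkedHashMap<>();
        usedDevices.put("gpu0", "c1");
        usedDevices.put("gpu1", "c1");
        Map<String, Integer> gpusPerContainer = Map.of("c1", 2);

        System.out.println(countPerEntry(usedDevices, gpusPerContainer));     // 4
        System.out.println(countPerContainer(usedDevices, gpusPerContainer)); // 2
    }
}
```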


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.

2019-01-30 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 3b03ff6  YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.
3b03ff6 is described below

commit 3b03ff6fdde39d98e893109501c42e0d659cd36b
Author: Sunil G 
AuthorDate: Thu Jan 31 09:25:29 2019 +0530

    YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.

(cherry picked from commit 71c49fa60faad2504b0411979a6e46e595b97a85)
---
 .../containermanager/linux/resources/gpu/GpuResourceAllocator.java   | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
index 81a9655..49aac6d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
@@ -258,10 +258,7 @@ public class GpuResourceAllocator {
 
   private synchronized long getReleasingGpus() {
     long releasingGpus = 0;
-    Iterator<Map.Entry<GpuDevice, ContainerId>> iter = usedDevices.entrySet()
-        .iterator();
-    while (iter.hasNext()) {
-      ContainerId containerId = iter.next().getValue();
+    for (ContainerId containerId : ImmutableSet.copyOf(usedDevices.values())) {
       Container container;
       if ((container = nmContext.getContainers().get(containerId)) != null) {
         if (container.isContainerInFinalStates()) {





[hadoop] branch trunk updated: YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.

2019-01-30 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 71c49fa  YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.
71c49fa is described below

commit 71c49fa60faad2504b0411979a6e46e595b97a85
Author: Sunil G 
AuthorDate: Thu Jan 31 09:25:29 2019 +0530

    YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.
---
 .../containermanager/linux/resources/gpu/GpuResourceAllocator.java   | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
index 28584b5..2496ac8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
@@ -258,10 +258,7 @@ public class GpuResourceAllocator {
 
   private synchronized long getReleasingGpus() {
     long releasingGpus = 0;
-    Iterator<Map.Entry<GpuDevice, ContainerId>> iter = usedDevices.entrySet()
-        .iterator();
-    while (iter.hasNext()) {
-      ContainerId containerId = iter.next().getValue();
+    for (ContainerId containerId : ImmutableSet.copyOf(usedDevices.values())) {
       Container container;
       if ((container = nmContext.getContainers().get(containerId)) != null) {
         if (container.isContainerInFinalStates()) {





[hadoop] branch trunk updated: HDDS-1035. Intermittent TestRootList failure. Contributed by Doroszlai Attila.

2019-01-30 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5372927  HDDS-1035. Intermittent TestRootList failure. Contributed by Doroszlai Attila.
5372927 is described below

commit 53729279c7ee6ff8cf0ee69089c18d84417a4a7f
Author: Bharat Viswanadham 
AuthorDate: Wed Jan 30 16:21:42 2019 -0800

    HDDS-1035. Intermittent TestRootList failure. Contributed by Doroszlai Attila.
---
 .../java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java| 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java
index 4f76067..8759fff 100644
--- a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java
+++ b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestRootList.java
@@ -24,7 +24,6 @@ import org.apache.hadoop.ozone.client.ObjectStore;
 import org.apache.hadoop.ozone.client.OzoneClientStub;
 import org.apache.hadoop.ozone.s3.header.AuthenticationHeaderParser;
 
-import org.apache.commons.lang3.RandomStringUtils;
 import static org.junit.Assert.assertEquals;
 import org.junit.Before;
 import org.junit.Test;
@@ -58,14 +57,13 @@ public class TestRootList {
   @Test
   public void testListBucket() throws Exception {
 
-    // List operation should success even there is no bucket.
+    // List operation should succeed even there is no bucket.
     ListBucketResponse response = rootEndpoint.get();
     assertEquals(0, response.getBucketsNum());
 
-    String bucketBaseName = "bucket-";
+    String bucketBaseName = "bucket-" + getClass().getName();
     for(int i = 0; i < 10; i++) {
-      objectStoreStub.createS3Bucket(userName,
-          bucketBaseName + RandomStringUtils.randomNumeric(3));
+      objectStoreStub.createS3Bucket(userName, bucketBaseName + i);
     }
     response = rootEndpoint.get();
     assertEquals(10, response.getBucketsNum());
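The flakiness fixed above: the old test drew bucket-name suffixes from only 1000 possible random values, so two of the ten buckets could occasionally receive the same name and the `assertEquals(10, ...)` check would fail intermittently. Deterministic `base + index` names cannot collide. A small sketch of the difference (names are hypothetical; a `HashSet` stands in for the bucket store):

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class BucketNameDemo {

    // Deterministic naming (the fix): base + loop index is always unique,
    // so creating n buckets always yields n distinct names.
    static int distinctDeterministic(String base, int n) {
        Set<String> names = new HashSet<>();
        for (int i = 0; i < n; i++) {
            names.add(base + i);
        }
        return names.size();
    }

    // Random 3-digit suffixes (the old code) draw from only 1000 values,
    // so two iterations can produce the same name; the duplicate collapses
    // in the store and the "exactly n buckets" assertion fails.
    static int distinctRandom(String base, int n, long seed) {
        Random rnd = new Random(seed);  // fixed seed, for illustration only
        Set<String> names = new HashSet<>();
        for (int i = 0; i < n; i++) {
            names.add(base + String.format("%03d", rnd.nextInt(1000)));
        }
        return names.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctDeterministic("bucket-", 10)); // always 10
        System.out.println(distinctRandom("bucket-", 10, 42L));   // at most 10
    }
}
```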





[hadoop] branch trunk updated: HDDS-549. Add support for key rename in Ozone Shell. Contributed by Doroszlai Attila.

2019-01-30 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 945a61c  HDDS-549. Add support for key rename in Ozone Shell. Contributed by Doroszlai Attila.
945a61c is described below

commit 945a61c1649a615cb1d8c510efd3f291b0fe2571
Author: Bharat Viswanadham 
AuthorDate: Wed Jan 30 16:00:18 2019 -0800

    HDDS-549. Add support for key rename in Ozone Shell. Contributed by Doroszlai Attila.
---
 hadoop-hdds/docs/content/KeyCommands.md| 17 +
 .../src/main/smoketest/basic/ozone-shell.robot |  5 +-
 .../hadoop/ozone/ozShell/TestOzoneShell.java   | 54 +---
 .../hadoop/ozone/web/ozShell/keys/KeyCommands.java |  1 +
 .../ozone/web/ozShell/keys/RenameKeyHandler.java   | 73 ++
 5 files changed, 142 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdds/docs/content/KeyCommands.md b/hadoop-hdds/docs/content/KeyCommands.md
index 1a77762..427ee46 100644
--- a/hadoop-hdds/docs/content/KeyCommands.md
+++ b/hadoop-hdds/docs/content/KeyCommands.md
@@ -29,6 +29,7 @@ Ozone shell supports the following key commands.
   * [delete](#delete)
   * [info](#info)
   * [list](#list)
+  * [rename](#rename)
 
 
 ### Get
@@ -119,6 +120,22 @@ ozone sh key list /hive/jan
 
 This command will list all keys in the bucket _/hive/jan_.
 
+### Rename
+
+The `key rename` command changes the name of an existing key in the specified bucket.
+
+***Params:***
+
+| Arguments  |  Comment                                                |
+|------------|---------------------------------------------------------|
+|  Uri       | The name of the bucket in **/volume/bucket** format.
+|  FromKey   | The existing key to be renamed
+|  ToKey     | The new desired name of the key
+
+{{< highlight bash >}}
+ozone sh key rename /hive/jan sales.orc new_name.orc
+{{< /highlight >}}
+The above command will rename `sales.orc` to `new_name.orc` in the bucket `/hive/jan`.
 
 
 
diff --git a/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot b/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
index 14a5761..574c50b 100644
--- a/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
@@ -79,4 +79,7 @@ Test key handling
 Should contain  ${result}   createdOn
 ${result} = Execute ozone sh key list ${protocol}${server}/${volume}/bb1 | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.keyName=="key1") | .keyName'
 Should Be Equal ${result}   key1
-Execute ozone sh key delete ${protocol}${server}/${volume}/bb1/key1
+Execute ozone sh key rename ${protocol}${server}/${volume}/bb1 key1 key2
+${result} = Execute ozone sh key list ${protocol}${server}/${volume}/bb1 | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[].keyName'
+Should Be Equal ${result}   key2
+Execute ozone sh key delete ${protocol}${server}/${volume}/bb1/key2
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index f00c756..4bf7d52 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -950,13 +950,7 @@ public class TestOzoneShell {
 execute(shell, args);
 
 // verify if key has been deleted in the bucket
-try {
-  bucket.getKey(keyName);
-  fail("Get key should have thrown.");
-} catch (IOException e) {
-  GenericTestUtils.assertExceptionContains(
-  "Lookup key failed, error:KEY_NOT_FOUND", e);
-}
+assertKeyNotExists(bucket, keyName);
 
 // test delete key in a non-exist bucket
 args = new String[] {"key", "delete",
@@ -972,6 +966,29 @@ public class TestOzoneShell {
   }
 
   @Test
+  public void testRenameKey() throws Exception {
+LOG.info("Running testRenameKey");
+OzoneBucket bucket = creatBucket();
+OzoneKey oldKey = createTestKey(bucket);
+
+String oldName = oldKey.getName();
+String newName = oldName + ".new";
+String[] args = new String[]{
+"key", "rename",
+String.format("%s/%s/%s",
+url, oldKey.getVolumeName(), oldKey.getBucketName()),
+oldName,
+newName
+};
+execute(shell, args);
+
+OzoneKey newKey = bucket.getKey(newName);
+assertEquals(oldKey.getCreationTime(), newKey.getCreationTime());
+   

[hadoop] branch trunk updated: HDDS-1016. Allow marking containers as unhealthy. Contributed by Arpit Agarwal.

2019-01-30 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c354195  HDDS-1016. Allow marking containers as unhealthy. Contributed by Arpit Agarwal.
c354195 is described below

commit c35419579b5c5b315c5b62d8b89149924416b480
Author: Arpit Agarwal 
AuthorDate: Wed Jan 30 11:40:50 2019 -0800

    HDDS-1016. Allow marking containers as unhealthy. Contributed by Arpit Agarwal.
---
 .../container/common/interfaces/Container.java |   5 +
 .../container/keyvalue/KeyValueContainer.java  |  60 +-
 .../ozone/container/keyvalue/KeyValueHandler.java  |  94 -
 .../TestKeyValueContainerMarkUnhealthy.java| 172 
 .../TestKeyValueHandlerWithUnhealthyContainer.java | 227 +
 5 files changed, 538 insertions(+), 20 deletions(-)

diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
index 405cac3..58e3383 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
@@ -85,6 +85,11 @@ public interface Container<CONTAINERDATA extends ContainerData> extends RwLock {
 
   void markContainerForClose() throws StorageContainerException;
 
   /**
+   * Marks the container replica as unhealthy.
+   */
+  void markContainerUnhealthy() throws StorageContainerException;
+
+  /**
    * Quasi Closes a open container, if it is already closed or does not exist a
    * StorageContainerException is thrown.
    *
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
index e737a53..ba559e9 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
@@ -64,6 +64,7 @@ import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.CONTAINER_FILES_CREATE_ERROR;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.CONTAINER_INTERNAL_ERROR;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_NOT_OPEN;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.DISK_OUT_OF_SPACE;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
@@ -72,6 +73,7 @@ import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.INVALID_CONTAINER_STATE;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.UNSUPPORTED_REQUEST;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -109,8 +111,8 @@ public class KeyValueContainer implements Container<KeyValueContainerData> {
 
     File containerMetaDataPath = null;
     //acquiring volumeset read lock
-    volumeSet.readLock();
     long maxSize = containerData.getMaxSize();
+    volumeSet.readLock();
     try {
       HddsVolume containerVolume = volumeChoosingPolicy.chooseVolume(volumeSet
           .getVolumesList(), maxSize);
@@ -270,28 +272,67 @@ public class KeyValueContainer implements Container<KeyValueContainerData> {
 
   @Override
   public void markContainerForClose() throws StorageContainerException {
-    updateContainerData(() ->
-        containerData.setState(ContainerDataProto.State.CLOSING));
+    writeLock();
+    try {
+      if (getContainerState() != ContainerDataProto.State.OPEN) {
+        throw new StorageContainerException(
+            "Attempting to close a " + getContainerState() + " container.",
+            CONTAINER_NOT_OPEN);
+      }
+      updateContainerData(() ->
+          containerData.setState(ContainerDataProto.State.CLOSING));
+    } finally {
+      writeUnlock();
+    }
+  }
+
+  @Override
+  public void markContainerUnhealthy() throws StorageContainerException {
+    writeLock();
+    try {
+      updateContainerData(() ->
+          containerData.setState(ContainerDataProto.State.UNHEALTHY));
+    } finally {
+      writeUnlock();
+    }
   }
 
   @Override
   public void quasiClose() throws StorageContainerException {
-    updateContainerData(containerData::quasiCloseContainer);
+    writeLock();
+    try {
+      updateContainerData(containerData::quasiCloseContainer);
+    } finally {
+      writeUnlock();
+    }
   }
 
   @Override
   public void close() throws StorageContainerException {
-    updateContainerData(containerData::closeContainer);
+
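The pattern the patch introduces wraps each state transition in a write lock, releases the lock in `finally`, rejects closing a non-OPEN container, and allows marking UNHEALTHY from any state. A simplified, self-contained sketch (the enum and unchecked exception are stand-ins for the real Ozone types `ContainerDataProto.State` and `StorageContainerException`):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ContainerStateDemo {
    enum State { OPEN, CLOSING, UNHEALTHY }

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private State state = State.OPEN;

    State getState() {
        lock.readLock().lock();
        try {
            return state;
        } finally {
            lock.readLock().unlock();
        }
    }

    void markForClose() {
        lock.writeLock().lock();
        try {
            // Only an OPEN container may move to CLOSING
            // (CONTAINER_NOT_OPEN in the real code).
            if (state != State.OPEN) {
                throw new IllegalStateException(
                    "Attempting to close a " + state + " container.");
            }
            state = State.CLOSING;
        } finally {
            lock.writeLock().unlock();  // released even if the check throws
        }
    }

    void markUnhealthy() {
        lock.writeLock().lock();
        try {
            state = State.UNHEALTHY;    // permitted from any state
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Drives the transitions end-to-end and reports the outcome.
    static String demoSequence() {
        ContainerStateDemo c = new ContainerStateDemo();
        c.markForClose();               // OPEN -> CLOSING
        boolean rejected = false;
        try {
            c.markForClose();           // second close attempt: rejected
        } catch (IllegalStateException expected) {
            rejected = true;
        }
        c.markUnhealthy();              // CLOSING -> UNHEALTHY
        return (rejected ? "rejected:" : "allowed:") + c.getState();
    }

    public static void main(String[] args) {
        System.out.println(demoSequence()); // prints "rejected:UNHEALTHY"
    }
}
```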

[hadoop] branch trunk updated: HDDS-1031. Update ratis version to fix a DN restart Bug. Contributed by Bharat Viswanadham.

2019-01-30 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7456fc9  HDDS-1031. Update ratis version to fix a DN restart Bug. Contributed by Bharat Viswanadham.
7456fc9 is described below

commit 7456fc99ee01562b92d36e56b93081a0d7af6514
Author: Bharat Viswanadham 
AuthorDate: Wed Jan 30 11:14:02 2019 -0800

    HDDS-1031. Update ratis version to fix a DN restart Bug. Contributed by Bharat Viswanadham.
---
 hadoop-hdds/pom.xml  | 2 +-
 hadoop-ozone/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 4c8def5..d9ecedd 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -46,7 +46,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 0.4.0-SNAPSHOT
 
 
-    <ratis.version>0.4.0-a8c4ca0-SNAPSHOT</ratis.version>
+    <ratis.version>0.4.0-f283ffa-SNAPSHOT</ratis.version>
 
 1.60
 
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index 8de7ef7..0dc70b6 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -33,7 +33,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 3.2.0
 0.4.0-SNAPSHOT
 0.4.0-SNAPSHOT
-    <ratis.version>0.4.0-a8c4ca0-SNAPSHOT</ratis.version>
+    <ratis.version>0.4.0-f283ffa-SNAPSHOT</ratis.version>
 1.60
 Badlands
 ${ozone.version}





[hadoop] branch trunk updated: HDDS-1030. Move auditparser robot tests under ozone basic. Contributed by Dinesh Chitlangia.

2019-01-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0e95ae4  HDDS-1030. Move auditparser robot tests under ozone basic. Contributed by Dinesh Chitlangia.
0e95ae4 is described below

commit 0e95ae402ce108c88cb66b3d8e18df1943b0ff33
Author: Márton Elek 
AuthorDate: Wed Jan 30 17:33:38 2019 +0100

    HDDS-1030. Move auditparser robot tests under ozone basic. Contributed by Dinesh Chitlangia.
---
 .../smoketest/{auditparser/parser.robot => basic/auditparser.robot} | 0
 hadoop-ozone/dist/src/main/smoketest/test.sh| 2 --
 2 files changed, 2 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/smoketest/auditparser/parser.robot b/hadoop-ozone/dist/src/main/smoketest/basic/auditparser.robot
similarity index 100%
rename from hadoop-ozone/dist/src/main/smoketest/auditparser/parser.robot
rename to hadoop-ozone/dist/src/main/smoketest/basic/auditparser.robot
diff --git a/hadoop-ozone/dist/src/main/smoketest/test.sh b/hadoop-ozone/dist/src/main/smoketest/test.sh
index 04f5abd..b447481 100755
--- a/hadoop-ozone/dist/src/main/smoketest/test.sh
+++ b/hadoop-ozone/dist/src/main/smoketest/test.sh
@@ -140,8 +140,6 @@ if [ "$RUN_ALL" = true ]; then
 #
 # We select the test suites and execute them on multiple type of clusters
 #
-   DEFAULT_TESTS=("auditparser")
-   execute_tests auditparser "${DEFAULT_TESTS[@]}"
DEFAULT_TESTS=("security")
execute_tests ozonesecure "${DEFAULT_TESTS[@]}"
DEFAULT_TESTS=("basic")





[hadoop] branch trunk updated: YARN-9251. Build failure for -Dhbase.profile=2.0. Contributed by Rohith Sharma K S.

2019-01-30 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a3a9ae3  YARN-9251. Build failure for -Dhbase.profile=2.0. Contributed by Rohith Sharma K S.
a3a9ae3 is described below

commit a3a9ae3cea077c55113ae722e69f09b07c81cc27
Author: Akira Ajisaka 
AuthorDate: Wed Jan 30 05:16:24 2019 -0800

    YARN-9251. Build failure for -Dhbase.profile=2.0. Contributed by Rohith Sharma K S.
---
 .../hadoop-yarn-server-timelineservice-hbase-tests/pom.xml  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
index 4c8767d..01eeec9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
@@ -484,7 +484,7 @@
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
+      <artifactId>mockito-core</artifactId>
       <scope>test</scope>
     </dependency>





[hadoop] branch trunk updated: HDDS-1032. Package builds are failing with missing org.mockito:mockito-core dependency version. Contributed by Doroszlai, Attila.

2019-01-30 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 14441cc  HDDS-1032. Package builds are failing with missing org.mockito:mockito-core dependency version. Contributed by Doroszlai, Attila.
14441cc is described below

commit 14441ccbc67f653d22cf40be98f3d31a054301e4
Author: Márton Elek 
AuthorDate: Wed Jan 30 13:40:45 2019 +0100

    HDDS-1032. Package builds are failing with missing org.mockito:mockito-core dependency version. Contributed by Doroszlai, Attila.
---
 hadoop-hdds/framework/pom.xml  | 2 +-
 hadoop-hdds/server-scm/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdds/framework/pom.xml b/hadoop-hdds/framework/pom.xml
index 40d039a..650442d 100644
--- a/hadoop-hdds/framework/pom.xml
+++ b/hadoop-hdds/framework/pom.xml
@@ -35,7 +35,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-core</artifactId>
+      <artifactId>mockito-all</artifactId>
       <scope>test</scope>
     </dependency>
diff --git a/hadoop-hdds/server-scm/pom.xml b/hadoop-hdds/server-scm/pom.xml
index 298dfc3..aff0d29 100644
--- a/hadoop-hdds/server-scm/pom.xml
+++ b/hadoop-hdds/server-scm/pom.xml
@@ -88,7 +88,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-core</artifactId>
+      <artifactId>mockito-all</artifactId>
       <scope>test</scope>
     </dependency>

