[jira] [Created] (HDFS-14229) Nonblocking HDFS create|write

2019-01-24 Thread Zheng Shao (JIRA)
Zheng Shao created HDFS-14229:
-

 Summary: Nonblocking HDFS create|write
 Key: HDFS-14229
 URL: https://issues.apache.org/jira/browse/HDFS-14229
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zheng Shao


Right now, the create call on HDFS is blocking.  The write call can also block 
if the write buffer has reached its limit.

However, for most applications, the only requirement is that when "close" is 
called on a file, the file is persisted and visible in HDFS.  There is no need 
for the file to be visible as soon as the "create" call returns.

A particular use case is using HDFS as a place to store shuffle data 
(in Spark, MapReduce, or other loosely coupled applications).

 

This Jira proposes that we add a new "async-hdfs://" protocol that maps to a 
new AsyncDistributedFileSystem class, whose create call is nonblocking but 
still returns an FSOutputStream that never blocks on write (even when the 
file has not yet been physically created on HDFS).  The close call on the 
FSOutputStream will block until the creation and all previous writes have 
completed and the file is closed.
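
To make the contract concrete, here is a minimal sketch of how client code might use the proposed API. Neither AsyncDistributedFileSystem nor the "async-hdfs://" scheme exists yet; every name below is illustrative, taken from the proposal text rather than from real Hadoop code.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical usage of the proposed nonblocking client (sketch only).
Configuration conf = new Configuration();
// "async-hdfs://" would map to the proposed AsyncDistributedFileSystem.
FileSystem fs = FileSystem.get(URI.create("async-hdfs://namenode:8020"), conf);

// create() returns immediately, before the NameNode RPC has completed.
FSDataOutputStream out = fs.create(new Path("/shuffle/part-00000"));

// write() only buffers locally; it never blocks on the pending create.
byte[] shuffleBytes = new byte[64 * 1024];  // placeholder payload
out.write(shuffleBytes);

// close() is the single blocking point: it waits until the create and all
// buffered writes have completed and the file is visible in HDFS.
out.close();
{code}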

 






[VOTE] Release Apache Hadoop 3.1.2 - RC0

2019-01-24 Thread Wangda Tan
Hi folks,

With tons of help from Sunil, we have created RC0 for Apache Hadoop 3.1.2.
The artifacts are available here:

http://home.apache.org/~sunilg/hadoop-3.1.2-RC0/

The RC tag in git is release-3.1.2-RC0:
https://github.com/apache/hadoop/commits/release-3.1.2-RC0

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1212/

This vote will run for 5 days from now.

3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.

I have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Best,
Wangda Tan and Sunil Govind

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
ORDER BY priority DESC


[jira] [Created] (HDDS-1010) ContainerSet#getContainerMap should be renamed

2019-01-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1010:
---

 Summary: ContainerSet#getContainerMap should be renamed
 Key: HDDS-1010
 URL: https://issues.apache.org/jira/browse/HDDS-1010
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


ContainerSet#getContainerMap should be renamed to something like 
getContainerMapCopy to make it explicit that it creates a copy of the entire 
container map. It should also be tagged with {{@VisibleForTesting}}.
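
A sketch of what the renamed accessor might look like, assuming the current method already returns a copy of the backing map (the field name and the Guava copy below are assumptions based on this description):

{code:java}
// Hypothetical rename: the name now advertises the copy semantics.
@VisibleForTesting
public Map<Long, Container> getContainerMapCopy() {
  // Copies the entire backing map on every call, hence the explicit name.
  return ImmutableMap.copyOf(containerMap);
}
{code}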







[jira] [Created] (HDDS-1009) TestAbortMultipartUpload is missing the apache license text

2019-01-24 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-1009:
---

 Summary: TestAbortMultipartUpload is missing the apache license 
text
 Key: HDDS-1009
 URL: https://issues.apache.org/jira/browse/HDDS-1009
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3, test
Affects Versions: 0.4.0
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia
 Attachments: HDDS-1009.00.patch

This was flagged by the [Jenkins 
run|https://issues.apache.org/jira/browse/HDDS-1007?focusedCommentId=16751692&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16751692]
 in HDDS-1007.







[jira] [Created] (HDDS-1008) Invalidate closed container replicas on a failed volume

2019-01-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1008:
---

 Summary: Invalidate closed container replicas on a failed volume
 Key: HDDS-1008
 URL: https://issues.apache.org/jira/browse/HDDS-1008
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


When a volume is detected as failed, all closed containers on the volume should 
be marked as invalid.

Open containers will be handled separately.
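
A minimal sketch of the intended handling, assuming a ContainerSet-style iteration API (all class, method, and state names below are illustrative, not the actual HDDS code):

{code:java}
// Hypothetical handling of a volume failure (illustrative names only).
void handleVolumeFailure(HddsVolume failedVolume) {
  for (Container container : containerSet.getContainers()) {
    ContainerData data = container.getContainerData();
    // Closed replicas on the failed volume are no longer trustworthy.
    if (data.getVolume() == failedVolume
        && data.getState() == LifeCycleState.CLOSED) {
      data.setState(LifeCycleState.INVALID);
    }
    // Open containers are handled separately, per the description above.
  }
}
{code}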






[jira] [Created] (HDDS-1007) Add robot test for AuditParser

2019-01-24 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-1007:
---

 Summary: Add robot test for AuditParser
 Key: HDDS-1007
 URL: https://issues.apache.org/jira/browse/HDDS-1007
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test, Tools
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


This jira aims to add a Robot test for the AuditParser tool.

The robot test must run freon in order to generate audit logs and then test 
the auditparser commands.

We have separate audit logs for OM, SCM, and DN. However, for the robot test, 
testing just OM is sufficient since the logs are generated using a common 
mechanism.






[jira] [Created] (HDDS-1006) AuditParser assumes incorrect log format

2019-01-24 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-1006:
---

 Summary: AuditParser assumes incorrect log format
 Key: HDDS-1006
 URL: https://issues.apache.org/jira/browse/HDDS-1006
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.4.0
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


While creating AuditParser, I had mistakenly used an incorrect test sample 
for verification.
Thus, due to improper column positions, AuditParser would yield incorrect 
query results for the columns Result, Exception, and Params.

This jira aims to fix this issue and the sample test data.
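
To illustrate why column positions matter, here is a sketch of position-sensitive parsing. The pipe-delimited layout and the column indices are assumptions for illustration only, not the actual Ozone audit format:

{code:java}
// Illustrative only: if the assumed column order is off by one, queries on
// Params, Result, and Exception silently return values from the wrong column.
String[] cols = auditLine.split("\\s*\\|\\s*");
String params    = cols[4];  // hypothetical indices
String result    = cols[5];
String exception = cols[6];
{code}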






[jira] [Created] (HDDS-1005) Implement ozone omcli for ContainerMapper

2019-01-24 Thread sarun singla (JIRA)
sarun singla created HDDS-1005:
--

 Summary: Implement ozone omcli for ContainerMapper
 Key: HDDS-1005
 URL: https://issues.apache.org/jira/browse/HDDS-1005
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Reporter: sarun singla


Add the ContainerMapper implementation to an 'ozone omcli' command. This Jira 
is a continuation of [HDDS-936|https://issues.apache.org/jira/browse/HDDS-936].






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/

[Jan 23, 2019 6:14:49 AM] (wwei) YARN-9205. When using custom resource type, 
application will fail to run
[Jan 23, 2019 8:35:49 AM] (shashikant) HDDS-932. Add blockade Tests for Network 
partition. Contributed by
[Jan 23, 2019 9:59:36 AM] (wwei) YARN-8101. Add UT to verify node-attributes in 
RM nodes rest API.
[Jan 23, 2019 10:31:07 AM] (elek) HDDS-982. Fix 
TestContainerDataYaml#testIncorrectContainerFile.
[Jan 23, 2019 11:30:37 AM] (surendralilhore) HDFS-14153. [SPS] : Add Support 
for Storage Policy Satisfier in WEBHDFS.
[Jan 23, 2019 7:37:49 PM] (bharat) HDDS-764. Run S3 smoke tests with 
replication STANDARD. (#462)
[Jan 23, 2019 10:40:57 PM] (weichiu) HDFS-14061. Check if the cluster topology 
supports the EC policy before
[Jan 23, 2019 11:34:20 PM] (ajay) HDDS-975. Manage ozone security tokens with 
ozone shell cli. Contributed
[Jan 23, 2019 11:57:39 PM] (templedf) HDFS-14185. Cleanup method calls to 
static Assert methods in




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.yarn.server.resourcemanager.TestCapacitySchedulerMetrics 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-patch-pylint.txt
  [88K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1026/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [332K]
   

[jira] [Created] (HDFS-14228) Incorrect getSnapshottableDirListing() javadoc

2019-01-24 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14228:
--

 Summary: Incorrect getSnapshottableDirListing() javadoc
 Key: HDFS-14228
 URL: https://issues.apache.org/jira/browse/HDFS-14228
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.1.0-beta
Reporter: Wei-Chiu Chuang


The Javadoc for {{DistributedFileSystem#getSnapshottableDirListing()}} is not 
consistent with {{FSNamesystem#getSnapshottableDirListing()}}.

{code:title=ClientProtocol#getSnapshottableDirListing()}
/**
   * Get listing of all the snapshottable directories.
   *
   * @return Information about all the current snapshottable directory
   * @throws IOException If an I/O error occurred
   */
  @Idempotent
  @ReadOnly(isCoordinated = true)
  SnapshottableDirectoryStatus[] getSnapshottableDirListing()
  throws IOException;
{code}

{code:title=DistributedFileSystem#getSnapshottableDirListing()}
/**
   * @return All the snapshottable directories
   * @throws IOException
   */
  public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
{code}

But the implementation on the NameNode side is:
{code:title=FSNamesystem#getSnapshottableDirListing()}
/**
   * Get the list of snapshottable directories that are owned 
   * by the current user. Return all the snapshottable directories if the 
   * current user is a super user.
   * @return The list of all the current snapshottable directories
   * @throws IOException
   */
  public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
{code}

That is, if this method is called by a non-super user, it does not return all 
snapshottable directories. Filing this jira to get the javadoc corrected and 
avoid confusion.
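
One possible corrected javadoc, aligned with the NameNode-side behavior (the wording is a suggestion, not the committed fix):

{code:java}
/**
 * Get the list of snapshottable directories owned by the current user.
 * Returns all snapshottable directories if the current user is a
 * super user.
 *
 * @return The snapshottable directories visible to the caller
 * @throws IOException If an I/O error occurred
 */
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
{code}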






[jira] [Created] (HDDS-1004) SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events

2019-01-24 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1004:
-

 Summary: SCMContainerManager#updateContainerStateInternal fails 
for QUASI_CLOSE and FORCE_CLOSE events
 Key: HDDS-1004
 URL: https://issues.apache.org/jira/browse/HDDS-1004
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


SCMContainerManager#updateContainerStateInternal currently fails for 
QUASI_CLOSE and FORCE_CLOSE events.






[jira] [Created] (HDDS-1003) Intermittent IO exceptions encountered during pre-commit tests

2019-01-24 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1003:
---

 Summary: Intermittent IO exceptions encountered during pre-commit 
tests
 Key: HDDS-1003
 URL: https://issues.apache.org/jira/browse/HDDS-1003
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Supratim Deka


Stack trace from 
https://builds.apache.org/job/PreCommit-HDDS-Build/2095/testReport/org.apache.hadoop.ozone.client.rpc/TestOzoneRpcClient/testPutKey/


java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
    at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
    at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:481)
    at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:314)
    at org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
    at org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
    at java.io.OutputStream.write(OutputStream.java:75)
    at org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:522)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)







[jira] [Created] (HDFS-14227) RBF:HDFS "dfsadmin -printTopology" not displaying the rack details properly

2019-01-24 Thread venkata ramkumar (JIRA)
venkata ramkumar created HDFS-14227:
---

 Summary: RBF:HDFS "dfsadmin -printTopology" not displaying the 
rack details properly
 Key: HDFS-14227
 URL: https://issues.apache.org/jira/browse/HDFS-14227
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: venkata ramkumar
Assignee: venkata ramkumar


Namespaces: hacluster1, hacluster2
Under hacluster1: (IP1, IP2)
Under hacluster2: (IP3, IP4)

Command and actual output:
{noformat}
/router/bin> ./hdfs dfsadmin -printTopology
19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Rack: /hacluster1/default-rack
   IP1:9866 (BLR121217)
   IP2:9866 (linux-110)
   IP3:9866 (linux111)
   IP4:9866 (linux112)
{noformat}

Expected output:
{noformat}
/router/bin> ./hdfs dfsadmin -printTopology
19/01/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Rack: /hacluster1/default-rack
   IP1:9866 (BLR121217)
   IP2:9866 (linux-110)
Rack: /hacluster2/default-rack
   IP3:9866 (linux111)
   IP4:9866 (linux112)
{noformat}
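
For reference, a minimal sketch of the grouping the expected output implies: datanodes bucketed under a nameservice-qualified rack rather than collapsed into a single rack. The types and accessors below are illustrative, not the actual dfsadmin code:

{code:java}
// Illustrative grouping: one bucket per (nameservice, rack) pair.
Map<String, List<String>> byRack = new TreeMap<>();
for (DatanodeRecord dn : datanodes) {  // DatanodeRecord: hypothetical type
  String rack = "/" + dn.getNameserviceId() + dn.getNetworkLocation();
  byRack.computeIfAbsent(rack, r -> new ArrayList<>())
      .add(dn.getIpAddr() + ":" + dn.getXferPort()
          + " (" + dn.getHostName() + ")");
}
byRack.forEach((rack, nodes) -> {
  System.out.println("Rack: " + rack);
  nodes.forEach(n -> System.out.println("   " + n));
});
{code}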







[jira] [Created] (HDDS-1002) ozonesecure compose incompatible with smoke test

2019-01-24 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1002:
---

 Summary: ozonesecure compose incompatible with smoke test
 Key: HDDS-1002
 URL: https://issues.apache.org/jira/browse/HDDS-1002
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: docker
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{code:title=hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/smoketest/test.sh 
--keep --env ozonesecure security}
Creating network "ozonesecure_default" with the default driver
Creating ozonesecure_kdc_1  ... done
Creating ozonesecure_scm_1  ... done
Creating ozonesecure_datanode_1 ... done
Creating ozonesecure_datanode_2 ... done
Creating ozonesecure_datanode_3 ... done
Creating ozonesecure_om_1   ... done
0 datanode is up and healhty (until now)
3 datanodes are up and registered to the scm
ERROR: No such service: ozoneManager
[ ERROR ] Reading XML source 'smoketest/result/robot-*.xml' failed: No such 
file or directory

Try --help for usage information.
{code}






[jira] [Resolved] (HDDS-907) Use WAITFOR environment variable to handle dependencies between ozone containers

2019-01-24 Thread Doroszlai, Attila (JIRA)


[ https://issues.apache.org/jira/browse/HDDS-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doroszlai, Attila resolved HDDS-907.

Resolution: Done
  Assignee: Supratim Deka  (was: Doroszlai, Attila)

> Use WAITFOR environment variable to handle dependencies between ozone 
> containers
> 
>
> Key: HDDS-907
> URL: https://issues.apache.org/jira/browse/HDDS-907
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Supratim Deka
>Priority: Major
>  Labels: newbie
>
> Until HDDS-839 we had a hard-coded 15-second sleep before we started 
> ozoneManager with the docker-compose files 
> (hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose).
> For initialization of the OzoneManager we need the SCM. OM will retry the 
> connection if SCM is not available, but the DNS resolution is cached: if the 
> DNS entry for SCM is not available at OM startup, OM can't be initialized.
> Before HDDS-839 we handled this dependency with the 15-second sleep, which 
> was usually slower than what we need.
> Now we can use the WAITFOR environment variable from HDDS-839 to handle this 
> dependency (like WAITFOR=scm:9876), which can be added to all the 
> docker-compose files.
>  


