[jira] [Created] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter

2019-08-08 Thread Siyao Meng (JIRA)
Siyao Meng created HDDS-1938:


 Summary: Change omPort parameter type from String to int in 
BasicOzoneFileSystem#createAdapter
 Key: HDDS-1938
 URL: https://issues.apache.org/jira/browse/HDDS-1938
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Filesystem
Reporter: Siyao Meng
Assignee: Siyao Meng
 Attachments: HDDS-1938.001.patch

The diff will be based on HDDS-1891.

Goals:
1. Change the omPort type to int, because it is eventually used as an int anyway.
2. Refactor the parsing code (a sketch follows).
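
A minimal sketch of what the refactor could look like (class and method names here are hypothetical, not the actual patch):

{code}
import java.io.IOException;

// Hypothetical helper, assuming the authority string looks like "host:port":
// parse the port once at the boundary and pass an int downstream, so
// createAdapter can take an int omPort directly.
final class OmAddressParser {
  static int parseOmPort(String authority, int defaultPort) throws IOException {
    int idx = authority.indexOf(':');
    if (idx < 0) {
      return defaultPort; // no explicit port given
    }
    try {
      return Integer.parseInt(authority.substring(idx + 1));
    } catch (NumberFormatException e) {
      throw new IOException("Invalid OM port in " + authority, e);
    }
  }
}
{code}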






[jira] [Created] (HDFS-14710) RBF:Improve some RPC performances

2019-08-08 Thread xuzq (JIRA)
xuzq created HDFS-14710:
---

 Summary: RBF:Improve some RPC performances
 Key: HDFS-14710
 URL: https://issues.apache.org/jira/browse/HDFS-14710
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: xuzq


We can improve the performance of some RPCs when the extendedBlock is not
null, such as addBlock, getAdditionalDatanode and complete.

Since HDFS encourages users to write large files, the extendedBlock is not
null in most cases.

In the scenario of multiple destinations and large files, the effect is more
obvious.
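
A minimal sketch of the idea (class and method names here are illustrative, not the actual RouterRpcServer code): when the client passes a non-null extendedBlock, its block pool id identifies the owning downstream namespace, so the router can invoke a single NameNode instead of trying all of them.

{code}
import java.util.Map;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

// Hypothetical routing helper: the blockPoolId -> nameservice mapping is
// assumed to be maintained by the router from namespace reports.
final class BlockAwareRouting {
  static String pickNameservice(ExtendedBlock previous,
      Map<String, String> bpIdToNameservice) {
    if (previous == null) {
      return null; // fall back to invoking every namespace
    }
    // Only the namespace that owns this block pool can serve the call.
    return bpIdToNameservice.get(previous.getBlockPoolId());
  }
}
{code}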






[jira] [Resolved] (HDFS-14696) Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a Util class)

2019-08-08 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14696.

   Resolution: Fixed
Fix Version/s: 2.10.0

Merged the PR and resolved this jira. Thanks [~smeng]

> Backport HDFS-11273 to branch-2 (Move TransferFsImage#doGetUrl function to a 
> Util class)
> 
>
> Key: HDFS-14696
> URL: https://issues.apache.org/jira/browse/HDFS-14696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HDFS-14696-branch-2.003.patch
>
>
> Backporting HDFS-11273 (Move TransferFsImage#doGetUrl function to a Util
> class) to branch-2.
> To avoid confusion with the branch-2 patches in HDFS-11273, the patch
> revision number will continue from 003.
> *HDFS-14696-branch-2.003.patch* is the same as 
> *HDFS-11273-branch-2.003.patch*.






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/

[Aug 7, 2019 3:01:51 PM] (yqlin) HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo
[Aug 7, 2019 8:46:56 PM] (ekrogen) HDFS-14631. The DirectoryScanner doesn't fix 
the wrongly placed replica.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in
   org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean)
   At ColumnRWHelper.java:[line 335]
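
   For context, the pattern this warning flags typically looks like the
   following illustrative Java snippet (not the actual ColumnRWHelper code):

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a cast through the primitive type unboxes the value
// and immediately re-boxes it, which FindBugs reports.
class ReboxDemo {
  public static void main(String[] args) {
    Map<String, Object> results = new HashMap<>();
    results.put("ts", 42L);                      // stored as a boxed Long
    Long bad  = (long) (Long) results.get("ts"); // unboxes, then re-boxes
    Long good = (Long) results.get("ts");        // keeps the boxed instance
    System.out.println(bad + " " + good);
  }
}
{code}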

Failed junit tests :

   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestSafeMode 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.mapreduce.v2.TestMROldApiJobs 
   hadoop.mapreduce.v2.TestNonExistentJob 
   hadoop.mapreduce.v2.TestRMNMInfo 
   hadoop.mapreduce.v2.TestMRJobsWithHistoryService 
   hadoop.fs.azurebfs.services.TestAbfsClientThrottlingAnalyzer 
   hadoop.mapred.gridmix.TestLoadJob 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/407/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   

[jira] [Resolved] (HDDS-1829) On OM reload/restart OmMetrics#numKeys should be updated

2019-08-08 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1829.
-
   Resolution: Fixed
Fix Version/s: 0.5.0

Committed to trunk. Thanks for the contribution [~smeng].

> On OM reload/restart OmMetrics#numKeys should be updated
> 
>
> Key: HDDS-1829
> URL: https://issues.apache.org/jira/browse/HDDS-1829
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When OM is restarted or the state is reloaded, OM Metrics is re-initialized. 
> The saved numKeys value might not be valid as the DB state could have 
> changed. Hence, the numKeys metric must be updated with the correct value on 
> metrics re-initialization.
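
A minimal sketch of the idea (the helper names are hypothetical; the actual patch may differ):

{code}
// Hypothetical sketch: recompute numKeys from the DB on metrics
// re-initialization instead of trusting the value saved before the restart.
// countKeyTableRows() and omMetrics.setNumKeys(long) are assumed helpers.
void initializeOmMetrics() throws IOException {
  long actualNumKeys = countKeyTableRows(); // scan/estimate of the key table
  omMetrics.setNumKeys(actualNumKeys);      // overwrite the stale saved value
}
{code}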






Please file a JIRA for your PR

2019-08-08 Thread Wei-Chiu Chuang
The Hadoop community welcomes your patch contributions, and increasingly,
patches are submitted via GitHub Pull Requests.

That is great, as it reduces the friction of reviewing and committing code.

However, please make sure to file a jira for your PR, as described in the How
to Contribute wiki. The fact is, if your PR isn't associated with a JIRA ID,
with the JIRA ID added in the title of your PR, it is not likely to be noticed
by committers. Most Hadoop committers use Apache JIRA to track issues, and
folks usually find it easier to have in-depth technical discussions over JIRAs
than PRs.

Thank you and happy patching!
Wei-Chiu


[jira] [Created] (HDDS-1937) Acceptance tests fail if scm webui shows invalid json

2019-08-08 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1937:
--

 Summary: Acceptance tests fail if scm webui shows invalid json
 Key: HDDS-1937
 URL: https://issues.apache.org/jira/browse/HDDS-1937
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


The acceptance test of a nightly build failed with the following error:

{code}
Creating ozonesecure_datanode_3 ...
Creating ozonesecure_kdc_1  ... done
Creating ozonesecure_om_1   ... done
Creating ozonesecure_scm_1  ... done
Creating ozonesecure_datanode_3 ... done
Creating ozonesecure_kms_1  ... done
Creating ozonesecure_s3g_1  ... done
Creating ozonesecure_datanode_2 ... done
Creating ozonesecure_datanode_1 ... done
parse error: Invalid numeric literal at line 2, column 0
{code}

https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-5b87q/acceptance/output.log

The problem is in the script which checks the number of available datanodes.

If the HTTP endpoint of the SCM is already started but not ready yet, it may
return a simple HTML error message instead of json, which cannot be parsed by
jq.

In testlib.sh:

{code}
if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
  docker-compose -f "${compose_file}" exec -T scm bash -c "kinit -k HTTP/scm@EXAMPLE.COM -t /etc/security/keytabs/HTTP.keytab && curl --negotiate -u : -s '${jmx_url}'"
else
  docker-compose -f "${compose_file}" exec -T scm curl -s "${jmx_url}"
fi \
  | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
{code}

One possible fix is to adjust the error handling (set +e / set -e) per method
instead of using a generic set -e at the beginning. That would provide more
predictable behavior. In our case count_datanode should never fail (the caller
method, wait_for_datanodes, can retry anyway).






[jira] [Created] (HDFS-14709) Add encryption zone related REST APIs to WebHDFS

2019-08-08 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14709:
--

 Summary: Add encryption zone related REST APIs to WebHDFS
 Key: HDFS-14709
 URL: https://issues.apache.org/jira/browse/HDFS-14709
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


WebHDFS doesn't handle the encryption zone related REST APIs:
createEncryptionZone,
getEZForPath,
listEncryptionZones,
reencryptEncryptionZone,
listReencryptionStatus,
getFileEncryptionInfo,
provisionEZTrash.

This is unrelated to HDFS-12355.
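
For reference, a minimal sketch of the Java client calls these REST endpoints would need to mirror, using the existing HdfsAdmin API (the NameNode URI and key name below are illustrative):

{code}
import java.net.URI;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

public class EzClientDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://nn:8020"), conf);
    // Java-side equivalents of the missing WebHDFS operations:
    admin.createEncryptionZone(new Path("/secure"), "myKey",
        EnumSet.noneOf(CreateEncryptionZoneFlag.class));
    System.out.println(admin.getEncryptionZoneForPath(new Path("/secure")));
    RemoteIterator<EncryptionZone> zones = admin.listEncryptionZones();
    while (zones.hasNext()) {
      System.out.println(zones.next());
    }
  }
}
{code}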






[jira] [Created] (HDDS-1936) ozonesecure s3 test fails intermittently

2019-08-08 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1936:
---

 Summary: ozonesecure s3 test fails intermittently
 Key: HDDS-1936
 URL: https://issues.apache.org/jira/browse/HDDS-1936
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila


Sometimes acceptance tests fail at the ozonesecure s3 test, starting with:

{code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s1-t1-k3-k1-k2}
Completed 29 Bytes/29 Bytes (6 Bytes/s) with 1 file(s) remaining
upload failed: ../../tmp/testfile to s3://bucket-07853/testfile An error 
occurred (500) when calling the PutObject operation (reached max retries: 4): 
Internal Server Error
{code}

followed by:

{code:title=https://ci.anzix.net/job/ozone/17607/artifact/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/result/log.html#s1-s18-s5-t1}
('Connection aborted.', error(32, 'Broken pipe'))
{code}

in subsequent test cases.






[jira] [Created] (HDDS-1935) Improve the visibility with Ozone Insight tool

2019-08-08 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1935:
--

 Summary: Improve the visibility with Ozone Insight tool
 Key: HDDS-1935
 URL: https://issues.apache.org/jira/browse/HDDS-1935
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Elek, Marton
Assignee: Elek, Marton




Visibility is a key aspect of operating any Ozone cluster. We need better
visibility to improve correctness and performance. While distributed tracing
is a good tool for improving the visibility of performance, we have no
powerful tool which can be used to check the internal state of the Ozone
cluster and debug certain correctness issues.

To improve the visibility of the internal components I propose to introduce a 
new command line application `ozone insight`.

The new tool will show the selected metrics / logs / configuration for any of 
the internal components (like replication-manager, pipeline, etc.).

For each insight point we can define the required logs and log levels, metrics
and configuration, and the tool can display only the component-specific
information during debugging.
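
A sketch of what an insight point definition could look like (the interface and method names are illustrative, not a final API):

{code}
import java.util.List;
import java.util.Map;

// Hypothetical shape of an insight point: each component declares the
// loggers, metrics and configuration keys the tool should display.
interface InsightPoint {
  String getDescription();
  List<String> getRelatedLoggers(boolean verbose); // loggers to stream/adjust
  List<String> getMetrics();                       // prometheus metric names
  Map<String, String> getConfigurations();         // config keys to display
}
{code}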

h2. Usage

First we can check the available insight points:

{code}
bash-4.2$ ozone insight list
Available insight points:


  scm.node-manager                  SCM Datanode management related information.
  scm.replica-manager               SCM closed container replication manager
  scm.event-queue                   Information about the internal async event delivery
  scm.protocol.block-location       SCM Block location protocol endpoint
  scm.protocol.container-location   Planned insight point which is not yet implemented.
  scm.protocol.datanode             Planned insight point which is not yet implemented.
  scm.protocol.security             Planned insight point which is not yet implemented.
  scm.http                          Planned insight point which is not yet implemented.
  om.key-manager                    OM Key Manager
  om.protocol.client                Ozone Manager RPC endpoint
  om.http                           Planned insight point which is not yet implemented.
  datanode.pipeline[id]             More information about one ratis datanode ring.
  datanode.rocksdb                  More information about one ratis datanode ring.
  s3g.http                          Planned insight point which is not yet implemented.
{code}

Insight points can define configuration, metrics and/or logs. Configuration can 
be displayed based on the configuration objects:

{code}
ozone insight config scm.protocol.block-location
Configuration for `scm.protocol.block-location` (SCM Block location protocol 
endpoint)

>>> ozone.scm.block.client.bind.host
   default: 0.0.0.0
   current: 0.0.0.0

The hostname or IP address used by the SCM block client  endpoint to bind


>>> ozone.scm.block.client.port
   default: 9863
   current: 9863

The port number of the Ozone SCM block client service.


>>> ozone.scm.block.client.address
   default: ${ozone.scm.client.address}
   current: scm

The address of the Ozone SCM block client service. If not defined value of 
ozone.scm.client.address is used

{code}

Metrics can be retrieved from the prometheus entrypoint:

{code}
ozone insight metrics scm.protocol.block-location
Metrics for `scm.protocol.block-location` (SCM Block location protocol endpoint)

RPC connections

  Open connections: 0
  Dropped connections: 0
  Received bytes: 0
  Sent bytes: 0


RPC queue

  RPC average queue time: 0.0
  RPC call queue length: 0


RPC performance

  RPC processing time average: 0.0
  Number of slow calls: 0


Message type counters

  Number of AllocateScmBlock: 0
  Number of DeleteScmKeyBlocks: 0
  Number of GetScmInfo: 2
  Number of SortDatanodes: 0
{code}

Log levels can be adjusted with the existing logLevel servlet, and logs can be
collected / streamed via a simple logstream servlet:

{code}
ozone insight log scm.node-manager
[SCM] 2019-08-08 12:42:37,392 
[DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
Processing node report from [datanode=ozone_datanode_1.ozone_default]
[SCM] 2019-08-08 12:43:37,392 
[DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
Processing node report from [datanode=ozone_datanode_1.ozone_default]
[SCM] 2019-08-08 12:44:37,392 
[DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
Processing node report from [datanode=ozone_datanode_1.ozone_default]
[SCM] 2019-08-08 12:45:37,393 
[DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
Processing node report from [datanode=ozone_datanode_1.ozone_default]
[SCM] 2019-08-08 12:46:37,392 
[DEBUG|org.apache.hadoop.hdds.scm.node.SCMNodeManager|SCMNodeManager] 
Processing node report from [datanode=ozone_datanode_1.ozone_default]
{code}

The verbose mode can display the raw messages.

[jira] [Resolved] (HDFS-5656) add some configuration keys to hdfs-default.xml

2019-08-08 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5656.

Resolution: Duplicate

> add some configuration keys to hdfs-default.xml
> ---
>
> Key: HDFS-5656
> URL: https://issues.apache.org/jira/browse/HDFS-5656
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Colin P. McCabe
>Priority: Minor
>
> Some configuration keys like {{dfs.client.read.shortcircuit}} are not present 
> in {{hdfs-default.xml}} as they should be.






[jira] [Created] (HDDS-1934) TestSecureOzoneCluster may fail due to port conflict

2019-08-08 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1934:
---

 Summary: TestSecureOzoneCluster may fail due to port conflict
 Key: HDDS-1934
 URL: https://issues.apache.org/jira/browse/HDDS-1934
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{{TestSecureOzoneCluster}} fails if SCM is already running on the same host.

Steps to reproduce:

# Start {{ozone}} docker compose cluster
# Run {{TestSecureOzoneCluster}} test

{noformat:title=https://ci.anzix.net/job/ozone/17602/consoleText}
[ERROR] Tests run: 10, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 49.821 
s <<< FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
[ERROR] testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster) 
 Time elapsed: 6.59 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:277)
...

[ERROR] testSecureOmReInit(org.apache.hadoop.ozone.TestSecureOzoneCluster)  
Time elapsed: 5.312 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmReInit(TestSecureOzoneCluster.java:743)
...

[ERROR] testSecureOmInitSuccess(org.apache.hadoop.ozone.TestSecureOzoneCluster) 
 Time elapsed: 5.312 s  <<< ERROR!
java.net.BindException: Port in use: 0.0.0.0:9876
at 
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1203)
at 
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1225)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1284)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:181)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.start(StorageContainerManager.java:779)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSecureOmInitSuccess(TestSecureOzoneCluster.java:789)
...
{noformat}
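
One common mitigation (a sketch only, not necessarily the actual fix; the config key name below is illustrative) is to let the test bind to an ephemeral free port instead of the fixed default 9876:

{code}
import java.net.ServerSocket;

// Hypothetical test helper: reserve a free ephemeral port, then pass it to
// the SCM HTTP server configuration before starting the cluster.
final class PortUtil {
  static int freePort() throws Exception {
    try (ServerSocket s = new ServerSocket(0)) {
      return s.getLocalPort();
    }
  }
}
// e.g. conf.set("ozone.scm.http-address", "0.0.0.0:" + PortUtil.freePort());
{code}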






[jira] [Created] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2019-08-08 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1933:
---

 Summary: Datanode should use hostname in place of ip addresses to 
allow DN's to work when ipaddress change
 Key: HDDS-1933
 URL: https://issues.apache.org/jira/browse/HDDS-1933
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode, SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


This was noticed by [~elek] while deploying Ozone on Kubernetes based 
environment.

When the datanode IP address changes on restart, the datanode details cease to
be correct for the datanode, and this prevents the cluster from functioning
after a restart.






[jira] [Created] (HDDS-1932) Add support for object expiration in the s3 api

2019-08-08 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1932:
---

 Summary: Add support for object expiration in the s3 api
 Key: HDDS-1932
 URL: https://issues.apache.org/jira/browse/HDDS-1932
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


This jira proposes to add support for object expiration in the s3 API. Objects
are deleted once the object's lifecycle time has elapsed.

https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/
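
For reference, this is how a client configures expiration against the real S3 API today with the AWS SDK for Java v1 (a sketch; the endpoint below points at a hypothetical Ozone s3g address):

{code}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;

public class ExpirationDemo {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration("http://s3g:9878", "us-east-1"))
        .build();
    // Expire all objects in the bucket 30 days after creation.
    BucketLifecycleConfiguration.Rule rule =
        new BucketLifecycleConfiguration.Rule()
            .withId("expire-after-30-days")
            .withFilter(new LifecycleFilter()) // empty filter = whole bucket
            .withExpirationInDays(30)
            .withStatus(BucketLifecycleConfiguration.ENABLED);
    s3.setBucketLifecycleConfiguration("bucket",
        new BucketLifecycleConfiguration().withRules(rule));
  }
}
{code}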


