[jira] [Created] (HDDS-2466) Split OM Key into a Prefix Part and a Name Part

2019-11-12 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2466:
---

 Summary: Split OM Key into a Prefix Part and a Name Part
 Key: HDDS-2466
 URL: https://issues.apache.org/jira/browse/HDDS-2466
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Supratim Deka
Assignee: Supratim Deka


OM stores every key in a key table, which maps the key to a KeyInfo.

Splitting each key into a prefix part and a name part, stored in separate 
tables, serves two purposes:
1. OzoneFS operations can be made efficient by deriving a prefix-tree 
representation of the pathnames (prefixes) - details of this are outside the 
current scope. The prefix table can also get preferential treatment when it 
comes to caching.
2. PutKey is not penalised by having to parse the key into each path component 
- this matters for cases where the dataset is a pure object store. Splitting 
into a prefix and a name is the minimal work to be done inline during the 
putKey operation.
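
To illustrate the intent, a minimal sketch of the inline split (a hypothetical 
helper, not the actual OM code): the prefix is everything up to and including 
the last path separator, and the name is the final component.
{code:java}
// Hypothetical illustration of the minimal inline work during putKey:
// split "a/b/c" into prefix "a/b/" and name "c".
public final class KeySplitter {
  private KeySplitter() { }

  public static String[] split(String key) {
    int idx = key.lastIndexOf('/');
    // A key with no separator is pure object-store style: empty prefix.
    String prefix = idx >= 0 ? key.substring(0, idx + 1) : "";
    String name = key.substring(idx + 1);
    return new String[] { prefix, name };
  }
}
{code}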







[jira] [Created] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-11-12 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-14983:


 Summary: RBF: Add dfsrouteradmin 
-refreshSuperUserGroupsConfiguration command option
 Key: HDFS-14983
 URL: https://issues.apache.org/jira/browse/HDFS-14983
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: rbf
Reporter: Akira Ajisaka


The NameNode can refresh its proxyuser configuration via 
-refreshSuperUserGroupsConfiguration without a restart, but DFSRouter cannot.
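
For comparison, the existing NameNode command and the proposed Router 
equivalent (the dfsrouteradmin option is the proposal here, not an existing 
flag):
{noformat}
# Existing: refresh proxyuser configuration on the NameNode without a restart
hdfs dfsadmin -refreshSuperUserGroupsConfiguration

# Proposed: the equivalent for DFSRouter
hdfs dfsrouteradmin -refreshSuperUserGroupsConfiguration
{noformat}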






[jira] [Created] (HDDS-2465) S3 Multipart upload failing

2019-11-12 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2465:


 Summary: S3 Multipart upload failing
 Key: HDDS-2465
 URL: https://issues.apache.org/jira/browse/HDDS-2465
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


When I run the attached Java program, I get the error below during 
completeMultipartUpload.
{code:java}
ERROR StatusLogger No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
ERROR StatusLogger No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: c7b87393-955b-4c93-85f6-b02945e293ca; S3 Extended Request ID: 7tnVbqgc4bgb), S3 Extended Request ID: 7tnVbqgc4bgb
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
 at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4921)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4867)
 at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:3464)
 at org.apache.hadoop.ozone.freon.MPU.main(MPU.java:96)
{code}
When I debug, it appears the request is never received by the S3 Gateway, and I 
don't see any trace of it in the audit log.






[jira] [Resolved] (HDFS-14792) [SBN read] StandbyNode does not come out of safemode while adding new blocks.

2019-11-12 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-14792.

Fix Version/s: 2.10.1
   Resolution: Fixed

This turned out to be related to the same race condition between 
{{OP_ADD_BLOCK}} edits and IBRs as in HDFS-14941. We do not see any delays in 
leaving safemode on StandbyNode after the HDFS-14941 fix.
Closing this as fixed.

> [SBN read] StandbyNode does not come out of safemode while adding new blocks.
> 
>
> Key: HDFS-14792
> URL: https://issues.apache.org/jira/browse/HDFS-14792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Priority: Major
> Fix For: 2.10.1
>
>
> During startup StandbyNode reports that it needs an additional X blocks to 
> reach the threshold 1.., where X keeps changing up and down.
> This is because, with fast tailing, SBN adds new blocks from edits while DNs 
> have not yet reported the replicas. Being in SafeMode, SBN counts new blocks 
> towards the threshold and can stay in SafeMode for a long time.
> By design, the purpose of startup SafeMode is to disallow modifications of 
> the namespace and blocks map until all DN replicas are reported.






[jira] [Created] (HDFS-14982) Backport HADOOP-16152 to branch-3.1

2019-11-12 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-14982:
-

 Summary: Backport HADOOP-16152 to branch-3.1
 Key: HDFS-14982
 URL: https://issues.apache.org/jira/browse/HDFS-14982
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.3
Reporter: Siyao Meng
Assignee: Siyao Meng


HADOOP-16152. Upgrade Eclipse Jetty version to 9.4.x






[jira] [Created] (HDDS-2464) Avoid unnecessary allocations for FileChannel.open call

2019-11-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2464:
--

 Summary: Avoid unnecessary allocations for FileChannel.open call
 Key: HDDS-2464
 URL: https://issues.apache.org/jira/browse/HDDS-2464
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ChunkUtils}} calls {{FileChannel#open(Path, OpenOption...)}}. Vararg array 
elements are then added to a new {{HashSet}} to call 
{{FileChannel#open(Path, Set<? extends OpenOption>, FileAttribute<?>...)}}. 
We can call the latter directly instead.
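
For illustration, a sketch of the direct call (class and option set are made 
up for the example, not the actual ChunkUtils code):
{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.EnumSet;
import java.util.Set;

class ChunkOpenSketch {
  // Build the option set once; passing it to the Set-based overload avoids
  // the vararg-to-HashSet copy done by FileChannel#open(Path, OpenOption...).
  private static final Set<StandardOpenOption> WRITE_OPTIONS =
      EnumSet.of(StandardOpenOption.CREATE, StandardOpenOption.WRITE);

  static FileChannel openForWrite(Path file) throws IOException {
    return FileChannel.open(file, WRITE_OPTIONS);
  }
}
{code}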






[jira] [Resolved] (HDDS-2462) Add jq dependency in Contribution guideline

2019-11-12 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2462.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master branch.

> Add jq dependency in Contribution guideline
> ---
>
> Key: HDDS-2462
> URL: https://issues.apache.org/jira/browse/HDDS-2462
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Docker-based tests use jq to parse the JMX pages of different processes, 
> but the documentation does not mention it as a dependency.
> Add it to CONTRIBUTION.MD in the "Additional requirements to execute 
> different type of tests" section.






[jira] [Created] (HDDS-2463) Remove unnecessary getServiceInfo calls

2019-11-12 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HDDS-2463:


 Summary: Remove unnecessary getServiceInfo calls
 Key: HDDS-2463
 URL: https://issues.apache.org/jira/browse/HDDS-2463
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


OzoneManagerProtocolClientSideTranslatorPB.java lines 766-772 contain multiple 
impl.getServiceInfo() calls, which can be reduced to one by adding a local 
variable (see the sketch after the snippet).
{code:java}
resp.addAllServiceInfo(impl.getServiceInfo().getServiceInfoList().stream()
    .map(ServiceInfo::getProtobuf)
    .collect(Collectors.toList()));
if (impl.getServiceInfo().getCaCertificate() != null) {
  resp.setCaCertificate(impl.getServiceInfo().getCaCertificate());
}
{code}
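
A sketch of the suggested refactoring; the local variable's type is an 
assumption here (something like ServiceInfoEx, exposing the same 
getServiceInfoList() and getCaCertificate() accessors used above):
{code:java}
// Call getServiceInfo() once and reuse the result.
ServiceInfoEx serviceInfo = impl.getServiceInfo();  // type name is an assumption
resp.addAllServiceInfo(serviceInfo.getServiceInfoList().stream()
    .map(ServiceInfo::getProtobuf)
    .collect(Collectors.toList()));
if (serviceInfo.getCaCertificate() != null) {
  resp.setCaCertificate(serviceInfo.getCaCertificate());
}
{code}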






[jira] [Resolved] (HDFS-14959) [SBNN read] access time should be turned off

2019-11-12 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14959.

Resolution: Fixed

Merged the PR to trunk and cherry-picked the commit to branch-3.2 and branch-3.1.
Thanks [~csun]!

> [SBNN read] access time should be turned off
> 
>
> Key: HDFS-14959
> URL: https://issues.apache.org/jira/browse/HDFS-14959
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation
>Reporter: Wei-Chiu Chuang
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> Both Uber and Didi shared that access time has to be switched off to avoid 
> spiky NameNode RPC processing time. If access time is not off entirely, 
> getBlockLocations RPCs have to update access time and must go to the active 
> NameNode (that's my understanding; I haven't checked the code).
> We should record this as a best practice in our doc.
> (If you are on the ASF slack, check out this thread:
> https://the-asf.slack.com/archives/CAD7C52Q3/p1572033324008600)
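
For reference, access time tracking is governed by the standard 
{{dfs.namenode.accesstime.precision}} setting; 0 disables access time updates 
entirely:
{code:xml}
<!-- hdfs-site.xml: disable access time updates (standard HDFS setting) -->
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>0</value>
</property>
{code}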






[jira] [Resolved] (HDFS-14981) BlockStateChange logging is exceedingly verbose

2019-11-12 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HDFS-14981.
-
Resolution: Duplicate

Yep, I think you're right. Thanks for the pointer [~weichiu].

> BlockStateChange logging is exceedingly verbose
> ---
>
> Key: HDFS-14981
> URL: https://issues.apache.org/jira/browse/HDFS-14981
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Reporter: Nick Dimiduk
>Priority: Major
>
> On a moderately loaded cluster, NameNode logs are flooded with entries of 
> {{INFO BlockStateChange...}}, to the tune of ~30 lines per millisecond. This 
> provides operators with little to no usable information. I suggest reducing 
> this log message to {{DEBUG}}. Perhaps this information (and other logging 
> related to it) should be directed to a dedicated block-audit.log file that 
> can be queried and rotated on a separate schedule from the main process log.






[jira] [Created] (HDFS-14981) BlockStateChange logging is exceedingly verbose

2019-11-12 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HDFS-14981:
---

 Summary: BlockStateChange logging is exceedingly verbose
 Key: HDFS-14981
 URL: https://issues.apache.org/jira/browse/HDFS-14981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: logging
Reporter: Nick Dimiduk


On a moderately loaded cluster, NameNode logs are flooded with entries of 
{{INFO BlockStateChange...}}, to the tune of ~30 lines per millisecond. This 
provides operators with little to no usable information. I suggest reducing 
this log message to {{DEBUG}}. Perhaps this information (and other logging 
related to it) should be directed to a dedicated block-audit.log file that can 
be queried and rotated on a separate schedule from the main process log.
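
For context, a standard Log4j mitigation in the meantime is to raise the level 
of the dedicated BlockStateChange logger (the NameNode routes these messages 
through a logger named {{BlockStateChange}}):
{noformat}
# log4j.properties: quiet block state change chatter without touching other NN logging
log4j.logger.BlockStateChange=WARN
{noformat}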






[jira] [Created] (HDDS-2462) Add jq dependency in how to contribute docs

2019-11-12 Thread Istvan Fajth (Jira)
Istvan Fajth created HDDS-2462:
--

 Summary: Add jq dependency in how to contribute docs
 Key: HDDS-2462
 URL: https://issues.apache.org/jira/browse/HDDS-2462
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Istvan Fajth


Docker-based tests use jq to parse the JMX pages of different processes, but 
the documentation does not mention it as a dependency.

Add it to CONTRIBUTION.MD in the "Additional requirements to execute different 
type of tests" section.
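
For illustration, the kind of usage involved (the endpoint and JMX path are 
made-up examples, not taken from the tests):
{noformat}
# jq extracting a value from a process's JMX servlet (URL is illustrative)
curl -s http://localhost:9874/jmx | jq -r '.beans[0].name'
{noformat}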






[jira] [Created] (HDDS-2461) Logging by ChunkUtils is misleading

2019-11-12 Thread Marton Elek (Jira)
Marton Elek created HDDS-2461:
-

 Summary: Logging by ChunkUtils is misleading
 Key: HDDS-2461
 URL: https://issues.apache.org/jira/browse/HDDS-2461
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Marton Elek


During a k8s-based test I found a lot of log messages like:
{code:java}
2019-11-12 14:27:13 WARN  ChunkManagerImpl:209 - Duplicate write chunk request. 
Chunk overwrite without explicit request. 
ChunkInfo{chunkName='A9UrLxiEUN_testdata_chunk_4465025, offset=0, len=1024} 
{code}
I was very surprised, as there were no similar lines at ChunkManagerImpl:209.

It turned out that the message is logged by ChunkUtils, but using the logger of 
ChunkManagerImpl.
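
A minimal illustration of the mix-up, reusing the class names from the report 
(the bodies are schematic, not the actual Ozone code):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class ChunkUtils {
  // Misleading: every message gets attributed to ChunkManagerImpl.
  private static final Logger LOG =
      LoggerFactory.getLogger(ChunkManagerImpl.class);

  // Fix: the class should use its own logger instead:
  // private static final Logger LOG =
  //     LoggerFactory.getLogger(ChunkUtils.class);
}
{code}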






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-11-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/503/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   

[jira] [Created] (HDDS-2460) Default checksum type is wrong in description

2019-11-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2460:
--

 Summary: Default checksum type is wrong in description
 Key: HDDS-2460
 URL: https://issues.apache.org/jira/browse/HDDS-2460
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Attila Doroszlai


The default client checksum type is CRC32, but the config item's description 
says it is SHA256 (a leftover from HDDS-1149). The description should be 
updated to match the actual default value.

{code:title=https://github.com/apache/hadoop-ozone/blob/a6f80c096b5320f50b6e9e9b4ba5f7c7e3544385/hadoop-hdds/common/src/main/resources/ozone-default.xml#L1489-L1497}
  <property>
    <name>ozone.client.checksum.type</name>
    <value>CRC32</value>
    <tag>OZONE, CLIENT, MANAGEMENT</tag>
    <description>The checksum type [NONE/ CRC32/ CRC32C/ SHA256/ MD5] determines
      which algorithm would be used to compute checksum for chunk data.
      Default checksum type is SHA256.
    </description>
  </property>
{code}
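
The fix would be a one-word change at the end of the description, e.g.:
{code:xml}
<!-- Suggested corrected wording; rest of the description unchanged -->
<description>The checksum type [NONE/ CRC32/ CRC32C/ SHA256/ MD5] determines
  which algorithm would be used to compute checksum for chunk data.
  Default checksum type is CRC32.
</description>
{code}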






[jira] [Created] (HDFS-14980) diskbalancer query command always tries to contact port 9867

2019-11-12 Thread Nilotpal Nandi (Jira)
Nilotpal Nandi created HDFS-14980:
-

 Summary: diskbalancer query command always tries to contact port 9867
 Key: HDFS-14980
 URL: https://issues.apache.org/jira/browse/HDFS-14980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Nilotpal Nandi


The diskbalancer query command always tries to connect to port 9867 even when 
the datanode IPC port is different.

In this setup, the datanode IPC port is set to 20001.

The diskbalancer report command works fine and connects to IPC port 20001:
{noformat}
hdfs diskbalancer -report -node 172.27.131.193
19/11/12 08:58:55 INFO command.Command: Processing report command
19/11/12 08:58:57 INFO balancer.KeyManager: Block token params received from 
NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
19/11/12 08:58:57 INFO block.BlockTokenSecretManager: Setting block keys
19/11/12 08:58:57 INFO balancer.KeyManager: Update block keys every 2hrs, 
30mins, 0sec
19/11/12 08:58:58 INFO command.Command: Reporting volume information for 
DataNode(s). These DataNode(s) are parsed from '172.27.131.193'.
Processing report command
Reporting volume information for DataNode(s). These DataNode(s) are parsed from 
'172.27.131.193'.
[172.27.131.193:20001] - : 3 
volumes with node data density 0.05.
[DISK: volume-/dataroot/ycloud/dfs/NEW_DISK1/] - 0.15 used: 
39343871181/259692498944, 0.85 free: 220348627763/259692498944, isFailed: 
False, isReadOnly: False, isSkip: False, isTransient: False.
[DISK: volume-/dataroot/ycloud/dfs/NEW_DISK2/] - 0.15 used: 
39371179986/259692498944, 0.85 free: 220321318958/259692498944, isFailed: 
False, isReadOnly: False, isSkip: False, isTransient: False.
[DISK: volume-/dataroot/ycloud/dfs/dn/] - 0.19 used: 49934903670/259692498944, 
0.81 free: 209757595274/259692498944, isFailed: False, isReadOnly: False, 
isSkip: False, isTransient: False.
 
{noformat}

But the diskbalancer query command fails, trying to connect to port 9867 (the 
default):
{noformat}
hdfs diskbalancer -query 172.27.131.193
19/11/12 06:37:15 INFO command.Command: Executing "query plan" command.
19/11/12 06:37:16 INFO ipc.Client: Retrying connect to server: 
/172.27.131.193:9867. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
19/11/12 06:37:17 INFO ipc.Client: Retrying connect to server: 
/172.27.131.193:9867. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
..
..
..

19/11/12 06:37:25 ERROR tools.DiskBalancerCLI: Exception thrown while running 
DiskBalancerCLI.

{noformat}

Expectation: the diskbalancer query command should work without explicitly 
specifying the datanode IPC port.
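
In the meantime, a possible workaround (assuming -query accepts an explicit 
host:port, which is an untested assumption here) is to pass the IPC port 
directly:
{noformat}
# Workaround sketch: specify the datanode IPC port explicitly
hdfs diskbalancer -query 172.27.131.193:20001
{noformat}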


