[jira] [Issue Comment Deleted] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-25 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Comment: was deleted

(was: Sure [~xkrogen])

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}
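
The stack trace points at the excluded-node lookup. As a purely illustrative sketch (assuming the NPE comes from dereferencing an excluded node that is not present in the topology; this is not taken from the actual HDFS-14853 patch), a null guard of this shape avoids it:

{code:java}
// Hedged sketch, not the actual HDFS-14853 patch: guard against an excluded
// node that cannot be resolved in the topology before using it in the count.
import java.util.HashMap;
import java.util.Map;

public class ExcludedNodeGuard {
  // Illustrative stand-in for the topology's node map.
  private final Map<String, Object> nodes = new HashMap<>();

  int availableNodes(int total, String excludedNodePath) {
    Object excluded =
        excludedNodePath == null ? null : nodes.get(excludedNodePath);
    // Without this null check, dereferencing 'excluded' throws an NPE like the
    // one reported in DFSNetworkTopology#chooseRandomWithStorageType().
    return excluded == null ? total : total - 1;
  }

  public static void main(String[] args) {
    // The excluded node is not present in the topology: no NPE, full count.
    System.out.println(new ExcludedNodeGuard().availableNodes(3, "/rack0/dn1"));
  }
}
{code}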



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938293#comment-16938293
 ] 

Hadoop QA commented on HDFS-14814:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDFS-14814 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14814 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981401/HDFS-14814.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27967/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of each remote location is set to match its corresponding 
> MountTable; if the MountTable has no quota, the quota is inherited from the 
> nearest parent MountTable that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.
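
To make the inheritance rule concrete, here is a minimal, self-contained sketch; MountEntry and the TreeMap layout are illustrative stand-ins, not the actual RouterQuotaUpdateService types:

{code:java}
// Illustrative sketch of the "inherit quota from the nearest parent" rule.
import java.util.TreeMap;

public class NearestParentQuota {
  static class MountEntry {
    final long nsQuota;   // -1 means "not set"
    final long ssQuota;   // -1 means "not set"
    MountEntry(long nsQuota, long ssQuota) {
      this.nsQuota = nsQuota;
      this.ssQuota = ssQuota;
    }
  }

  /** Walk up the mount path until an ancestor with a namespace quota is found. */
  static long effectiveNsQuota(TreeMap<String, MountEntry> table, String path) {
    for (String p = path; p != null; p = parent(p)) {
      MountEntry e = table.get(p);
      if (e != null && e.nsQuota != -1) {
        return e.nsQuota;
      }
    }
    return -1; // no ancestor sets a namespace quota
  }

  static String parent(String path) {
    int i = path.lastIndexOf('/');
    return i <= 0 ? null : path.substring(0, i);
  }

  public static void main(String[] args) {
    TreeMap<String, MountEntry> table = new TreeMap<>();
    table.put("/dir-a", new MountEntry(10, 20));
    table.put("/dir-a/dir-b", new MountEntry(-1, 30));
    table.put("/dir-a/dir-b/dir-c", new MountEntry(-1, -1));
    // Matches the example: /dir-a/dir-b/dir-c inherits nquota=10 from /dir-a.
    System.out.println(effectiveNsQuota(table, "/dir-a/dir-b/dir-c")); // 10
  }
}
{code}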






[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-25 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938292#comment-16938292
 ] 

Jinglun commented on HDFS-14814:


Thanks [~ayushtkn] for your nice comments. I agree with your suggestions! Uploaded 
v06, pending jenkins.

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of each remote location is set to match its corresponding 
> MountTable; if the MountTable has no quota, the quota is inherited from the 
> nearest parent MountTable that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.






[jira] [Updated] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-25 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14814:
---
Attachment: HDFS-14814.006.patch

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of each remote location is set to match its corresponding 
> MountTable; if the MountTable has no quota, the quota is inherited from the 
> nearest parent MountTable that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.






[jira] [Updated] (HDFS-14869) Data loss in case of distcp using snapshot diff.

2019-09-25 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HDFS-14869:
---
Description: 
This issue arises when a directory or file is excluded during distcp replication 
because of an exclusion filter. Even if the directory is later renamed to a name 
that is not excluded by the filter, the snapshot diff reports only a rename 
operation. The directory is never copied to the target even though it is no 
longer excluded. No error is thrown either, so there is no way to detect the 
issue.
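
A minimal illustration of the gap described above (plain java.util.regex, not distcp code): the old name matches the exclusion regex, so it is skipped when snapshot s1 is copied; the new name does not match, yet the s1->s2 diff only carries a rename, so the data is never copied.

{code:java}
import java.util.regex.Pattern;

public class FilterRenameGap {
  public static void main(String[] args) {
    Pattern staging = Pattern.compile(".*\\.staging.*");
    // Excluded at s1 copy time, so it never reaches the target.
    System.out.println(staging.matcher("/tmp/tocopy/.staging").matches()); // true
    // Not excluded any more, but the diff only says "R ./.staging -> ./final".
    System.out.println(staging.matcher("/tmp/tocopy/final").matches());    // false
  }
}
{code}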

Steps to reproduce
 * Create a directory in hdfs to copy using distcp.
 * Include a staging folder in the directory.

{code:java}
[hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop fs -ls 
/tmp/tocopy
Found 4 items
-rw-r--r--   3 hdfs hdfs 16 2019-09-12 10:32 /tmp/tocopy/.b.txt
drwxr-xr-x   - hdfs hdfs  0 2019-09-23 09:18 /tmp/tocopy/.staging
-rw-r--r--   3 hdfs hdfs 12 2019-09-12 10:32 /tmp/tocopy/a.txt
-rw-r--r--   3 hdfs hdfs  4 2019-09-20 08:23 /tmp/tocopy/foo.txt{code}
 * The exclusion filter is set to exclude any staging directory

{code:java}
[hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ cat /tmp/filter
.*\.Trash.*
.*\.staging.*{code}
 * Do a copy using distcp snapshots, the staging directory is not replicated.

{code:java}
hadoop jar hadoop-distcp-3.3.0-SNAPSHOT.jar 
-Dmapreduce.job.user.classpath.first=true -filters /tmp/filter 
/tmp/tocopy/.snapshot/s1 /tmp/target

[hdfs@ctr-e141-1563959304486-33995-01-03 root]$ hadoop fs -ls /tmp/target
Found 3 items
-rw-r--r--   3 hdfs hdfs 16 2019-09-24 06:56 /tmp/target/.b.txt
-rw-r--r--   3 hdfs hdfs 12 2019-09-24 06:56 /tmp/target/a.txt
-rw-r--r--   3 hdfs hdfs  4 2019-09-24 06:56 /tmp/target/foo.txt{code}
 * Rename the staging directory to final

{code:java}
[hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop fs -mv 
/tmp/tocopy/.staging /tmp/tocopy/final{code}
 * Do a copy using snapshot diff.

{code:java}
[hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hdfs snapshotDiff /tmp/tocopy s1 s2
Difference between snapshot s1 and snapshot s2 under directory /tmp/tocopy:
M       .
R       ./.staging -> ./final

{code}
 * The diff report just has a rename record and the new final directory is 
never copied.

{code:java}
[hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop jar 
hadoop-distcp-3.3.0-SNAPSHOT.jar -Dmapreduce.job.user.classpath.first=true 
-filters /tmp/filter -diff s1 s2 -update /tmp/tocopy /tmp/target
19/09/24 07:05:32 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
ignoreFailures=false, overwrite=false, append=false, useDiff=true, 
useRdiff=false, fromSnapshot=s1, toSnapshot=s2, skipCRC=false, blocking=true, 
numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, 
copyStrategy='uniformsize', preserveStatus=[BLOCKSIZE], atomicWorkPath=null, 
logPath=null, sourceFileListing=null, sourcePaths=[/tmp/tocopy], 
targetPath=/tmp/target, filtersFile='/tmp/filter', blocksPerChunk=0, 
copyBufferSize=8192, verboseLog=false, directWrite=false}, 
sourcePaths=[/tmp/tocopy], targetPathExists=true, preserveRawXattrs=false
19/09/24 07:05:32 INFO client.RMProxy: Connecting to ResourceManager at 
ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:8050
19/09/24 07:05:33 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:10200
19/09/24 07:05:33 INFO tools.DistCp: Number of paths in the copy list: 0
19/09/24 07:05:33 INFO client.RMProxy: Connecting to ResourceManager at 
ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:8050
19/09/24 07:05:33 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:10200
19/09/24 07:05:33 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1568647978682_0010
19/09/24 07:05:34 INFO mapreduce.JobSubmitter: number of splits:0
19/09/24 07:05:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1568647978682_0010
19/09/24 07:05:34 INFO mapreduce.JobSubmitter: Executing with tokens: []
19/09/24 07:05:34 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.1.4.0-272/0/resource-types.xml
19/09/24 07:05:34 INFO impl.YarnClientImpl: Submitted application 
application_1568647978682_0010
19/09/24 07:05:34 INFO mapreduce.Job: The url to track the job: 
http://ctr-e141-1563959304486-33995-01-03.hwx.site:8088/proxy/application_1568647978682_0010/
19/09/24 07:05:34 INFO tools.DistCp: DistCp job-id: job_1568647978682_0010
19/09/24 07:05:34 INFO mapreduce.Job: Running job: job_1568647978682_0010
19/09/24 

[jira] [Updated] (HDFS-14874) Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir

2019-09-25 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14874:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged PR.
Thanx [~gabor.bota] for the fix.

> Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir
> --
>
> Key: HDFS-14874
> URL: https://issues.apache.org/jira/browse/HDFS-14874
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
>
> The change in HADOOP-16138 breaks TestHDFSCLI and TestDFSShell, since it 
> changed the text in the exception:
> {code:java}
> -throw new PathNotFoundException(itemParentPath.toString());
> +throw new PathNotFoundException(String.format(
> +"mkdir failed for path: %s. Item parent path not found: %s.",
> +itemPath.toString(), itemParentPath.toString()));
>}
> {code}
> For reference :
> https://builds.apache.org/job/PreCommit-HDFS-Build/27958/testReport/
> The way I plan to fix it: create this jira where I `revert` this change, in 
> the sense that I will create a PR with the original log. There is no need for 
> the additional logging that we added.
> Thanks [~ayushtkn] for finding this issue.
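
For context, TestHDFSCLI matches command output against expected patterns, so changing the exception text fails the comparison. A hedged illustration follows; the expected pattern shown is an assumption for demonstration, not the actual testConf.xml entry.

{code:java}
import java.util.regex.Pattern;

public class ExpectedOutputCheck {
  public static void main(String[] args) {
    // Hypothetical expected-output pattern, for illustration only.
    Pattern expected = Pattern.compile("mkdir: `.*': No such file or directory");
    String oldMessage = "mkdir: `/parent/dir': No such file or directory";
    String newMessage =
        "mkdir: mkdir failed for path: /parent/dir. Item parent path not found: /parent.";
    System.out.println(expected.matcher(oldMessage).matches()); // true  -> test passes
    System.out.println(expected.matcher(newMessage).matches()); // false -> test breaks
  }
}
{code}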






[jira] [Commented] (HDFS-14874) Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir

2019-09-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938271#comment-16938271
 ] 

Hudson commented on HDFS-14874:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17391 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17391/])
HDFS-14874. Fix TestHDFSCLI and TestDFSShell test break because of 
(ayushsaxena: rev 587a8eeec8145a8831a36e66b3c45fbff1e3c4c9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java


> Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir
> --
>
> Key: HDFS-14874
> URL: https://issues.apache.org/jira/browse/HDFS-14874
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The change in HADOOP-16138 breaks TestHDFSCLI and TestDFSShell, since it 
> changed the text in the exception:
> {code:java}
> -throw new PathNotFoundException(itemParentPath.toString());
> +throw new PathNotFoundException(String.format(
> +"mkdir failed for path: %s. Item parent path not found: %s.",
> +itemPath.toString(), itemParentPath.toString()));
>}
> {code}
> For reference :
> https://builds.apache.org/job/PreCommit-HDFS-Build/27958/testReport/
> The way I plan to fix it: create this jira where I `revert` this change, in 
> the sense that I will create a PR with the original log. There is no need for 
> the additional logging that we added.
> Thanks [~ayushtkn] for finding this issue.






[jira] [Assigned] (HDFS-14869) Data loss in case of distcp using snapshot diff.

2019-09-25 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi reassigned HDFS-14869:
--

Assignee: Aasha Medhi

> Data loss in case of distcp using snapshot diff.
> 
>
> Key: HDFS-14869
> URL: https://issues.apache.org/jira/browse/HDFS-14869
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>
> Steps to reproduce
>  * Create a directory in hdfs to copy using distcp.
>  * Include a staging folder in the directory.
> {code:java}
> [hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop fs -ls 
> /tmp/tocopy
> Found 4 items
> -rw-r--r--   3 hdfs hdfs 16 2019-09-12 10:32 /tmp/tocopy/.b.txt
> drwxr-xr-x   - hdfs hdfs  0 2019-09-23 09:18 /tmp/tocopy/.staging
> -rw-r--r--   3 hdfs hdfs 12 2019-09-12 10:32 /tmp/tocopy/a.txt
> -rw-r--r--   3 hdfs hdfs  4 2019-09-20 08:23 /tmp/tocopy/foo.txt{code}
>  * The exclusion filter is set to exclude any staging directory
> {code:java}
> [hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ cat 
> /tmp/filter
> .*\.Trash.*
> .*\.staging.*{code}
>  * Do a copy using distcp snapshots, the staging directory is not replicated.
> {code:java}
> hadoop jar hadoop-distcp-3.3.0-SNAPSHOT.jar 
> -Dmapreduce.job.user.classpath.first=true -filters /tmp/filter 
> /tmp/tocopy/.snapshot/s1 /tmp/target
> [hdfs@ctr-e141-1563959304486-33995-01-03 root]$ hadoop fs -ls /tmp/target
> Found 3 items
> -rw-r--r--   3 hdfs hdfs 16 2019-09-24 06:56 /tmp/target/.b.txt
> -rw-r--r--   3 hdfs hdfs 12 2019-09-24 06:56 /tmp/target/a.txt
> -rw-r--r--   3 hdfs hdfs  4 2019-09-24 06:56 /tmp/target/foo.txt{code}
>  * Rename the staging directory to final
> {code:java}
> [hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop fs -mv 
> /tmp/tocopy/.staging /tmp/tocopy/final{code}
>  * Do a copy using snapshot diff.
> {code:java}
> [hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hdfs snapshotDiff /tmp/tocopy s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/tocopy:
> M       .
> R       ./.staging -> ./final
> {code}
>  * The diff report just has a rename record and the new final directory is 
> never copied.
> {code:java}
> [hdfs@ctr-e141-1563959304486-33995-01-03 hadoop-mapreduce]$ hadoop jar 
> hadoop-distcp-3.3.0-SNAPSHOT.jar -Dmapreduce.job.user.classpath.first=true 
> -filters /tmp/filter -diff s1 s2 -update /tmp/tocopy /tmp/target
> 19/09/24 07:05:32 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=true, 
> useRdiff=false, fromSnapshot=s1, toSnapshot=s2, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, 
> copyStrategy='uniformsize', preserveStatus=[BLOCKSIZE], atomicWorkPath=null, 
> logPath=null, sourceFileListing=null, sourcePaths=[/tmp/tocopy], 
> targetPath=/tmp/target, filtersFile='/tmp/filter', blocksPerChunk=0, 
> copyBufferSize=8192, verboseLog=false, directWrite=false}, 
> sourcePaths=[/tmp/tocopy], targetPathExists=true, preserveRawXattrs=false
> 19/09/24 07:05:32 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:8050
> 19/09/24 07:05:33 INFO client.AHSProxy: Connecting to Application History 
> server at ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:10200
> 19/09/24 07:05:33 INFO tools.DistCp: Number of paths in the copy list: 0
> 19/09/24 07:05:33 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:8050
> 19/09/24 07:05:33 INFO client.AHSProxy: Connecting to Application History 
> server at ctr-e141-1563959304486-33995-01-03.hwx.site/172.27.68.128:10200
> 19/09/24 07:05:33 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: /user/hdfs/.staging/job_1568647978682_0010
> 19/09/24 07:05:34 INFO mapreduce.JobSubmitter: number of splits:0
> 19/09/24 07:05:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1568647978682_0010
> 19/09/24 07:05:34 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 19/09/24 07:05:34 INFO conf.Configuration: found resource resource-types.xml 
> at file:/etc/hadoop/3.1.4.0-272/0/resource-types.xml
> 19/09/24 07:05:34 INFO impl.YarnClientImpl: Submitted application 
> application_1568647978682_0010
> 19/09/24 07:05:34 INFO mapreduce.Job: The url to track the job: 
> 

[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=318785=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318785
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 26/Sep/19 04:56
Start Date: 26/Sep/19 04:56
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1431: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-535335518
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318785)
Time Spent: 4h 10m  (was: 4h)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures






[jira] [Updated] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-09-25 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13901:
---
Attachment: HDFS-13901.006.patch

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch, HDFS-13901.003.patch, HDFS-13901.004.patch, 
> HDFS-13901.005.patch, HDFS-13901.006.patch
>
>
> The access time is ignored because in getBlockLocations there is a gap 
> between the readUnlock and re-acquiring the write lock (to update the access 
> time). If a rename operation occurs in the gap, the access time update is 
> ignored. We can calculate the new path from the inode and use the new path to 
> update the access time.
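
A hedged sketch of the proposed idea (not the NameNode code): re-resolve the path from the inode before writing the access time, so a rename that happened between releasing the read lock and re-taking the write lock is still reflected.

{code:java}
// Illustrative only: a map-based stand-in for "resolve current path from inode".
import java.util.HashMap;
import java.util.Map;

public class AccessTimeByInode {
  static final Map<Long, String> inodeToPath = new HashMap<>();
  static final Map<String, Long> accessTime = new HashMap<>();

  static void setTimes(long inodeId, long atime) {
    // Look the path up again while holding the (write) lock; a rename that
    // happened in the gap is then reflected instead of being silently ignored.
    String currentPath = inodeToPath.get(inodeId);
    if (currentPath != null) {
      accessTime.put(currentPath, atime);
    }
  }

  public static void main(String[] args) {
    inodeToPath.put(1001L, "/user/a/file");
    inodeToPath.put(1001L, "/user/b/renamed");   // rename happened in the gap
    setTimes(1001L, System.currentTimeMillis());
    System.out.println(accessTime.keySet());     // [/user/b/renamed]
  }
}
{code}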






[jira] [Commented] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-09-25 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938253#comment-16938253
 ] 

Jinglun commented on HDFS-13901:


Thanks [~jojochuang] for your nice comments! I deleted the old comment and 
changed to DFSTestUtil.createFile; it's cleaner now.
{quote}Can you use something else other than sleep 1ms? Like CountDownLatch or 
semaphore? Using sleep 1ms to control order of threads is almost always going 
to create flaky tests.
{quote}
Very thoughtful! Using sleep 1ms is a bad idea, so I changed to a Semaphore.

Uploaded v05, pending jenkins.
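
A minimal sketch of the ordering idea from the review comment, assuming a Semaphore released by the rename thread; the thread bodies are placeholders, not the actual test code.

{code:java}
import java.util.concurrent.Semaphore;

public class OrderedThreads {
  public static void main(String[] args) throws InterruptedException {
    Semaphore renameDone = new Semaphore(0);

    Thread rename = new Thread(() -> {
      // ... perform the rename while the reader has released the read lock ...
      renameDone.release();            // signal that the rename has happened
    });

    Thread reader = new Thread(() -> {
      try {
        renameDone.acquire();          // block until the rename is guaranteed
        // ... continue getBlockLocations and update the access time ...
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    reader.start();
    rename.start();
    rename.join();
    reader.join();
  }
}
{code}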

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch, HDFS-13901.003.patch, HDFS-13901.004.patch, 
> HDFS-13901.005.patch
>
>
> The access time is ignored because in getBlockLocations there is a gap 
> between the readUnlock and re-acquiring the write lock (to update the access 
> time). If a rename operation occurs in the gap, the access time update is 
> ignored. We can calculate the new path from the inode and use the new path to 
> update the access time.






[jira] [Commented] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938246#comment-16938246
 ] 

Hadoop QA commented on HDFS-13901:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDFS-13901 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981387/HDFS-13901.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27965/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch, HDFS-13901.003.patch, HDFS-13901.004.patch, 
> HDFS-13901.005.patch
>
>
> The access time is ignored because in getBlockLocations there is a gap 
> between the readUnlock and re-acquiring the write lock (to update the access 
> time). If a rename operation occurs in the gap, the access time update is 
> ignored. We can calculate the new path from the inode and use the new path to 
> update the access time.






[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318764=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318764
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 26/Sep/19 04:00
Start Date: 26/Sep/19 04:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-535322831
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 318764)
Time Spent: 3h 10m  (was: 3h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support keys suffixed with the 
> service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  
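
A minimal sketch of HA-style key suffixing, assuming the usual "<key>.<serviceId>.<nodeId>" convention; this is not the actual OM configuration helper.

{code:java}
public class ConfKeys {
  // Append an optional suffix to a base configuration key.
  static String addSuffix(String key, String suffix) {
    return (suffix == null || suffix.isEmpty()) ? key : key + "." + suffix;
  }

  public static void main(String[] args) {
    String base = "ozone.om.kerberos.principal";   // illustrative key name
    String serviceId = "om-service-1";
    String nodeId = "om1";
    // e.g. ozone.om.kerberos.principal.om-service-1.om1
    System.out.println(addSuffix(base, serviceId + "." + nodeId));
  }
}
{code}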






[jira] [Updated] (HDFS-13901) INode access time is ignored because of race between open and rename

2019-09-25 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-13901:
---
Attachment: HDFS-13901.005.patch

> INode access time is ignored because of race between open and rename
> 
>
> Key: HDFS-13901
> URL: https://issues.apache.org/jira/browse/HDFS-13901
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13901.000.patch, HDFS-13901.001.patch, 
> HDFS-13901.002.patch, HDFS-13901.003.patch, HDFS-13901.004.patch, 
> HDFS-13901.005.patch
>
>
> The access time is ignored because in getBlockLocations there is a gap 
> between the readUnlock and re-acquiring the write lock (to update the access 
> time). If a rename operation occurs in the gap, the access time update is 
> ignored. We can calculate the new path from the inode and use the new path to 
> update the access time.






[jira] [Commented] (HDDS-1590) Add per pipeline metrics

2019-09-25 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938216#comment-16938216
 ] 

Xiaoyu Yao commented on HDDS-1590:
--

For now, we still have a sync path that creates pipelines in the key write path. 
When adding metrics, we should also track how many pipelines are created in the 
sync write path and how many by the async path.

> Add per pipeline metrics
> 
>
> Key: HDDS-1590
> URL: https://issues.apache.org/jira/browse/HDDS-1590
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Priority: Major
>
> Add metrics on per pipeline basis:
> - bytes read/written
> - container metrics (state changes)






[jira] [Updated] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-25 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2034:
-
Status: Patch Available  (was: Open)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send the pipeline creation and 
> destroy actions through heartbeat commands to each datanode.
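
A hedged sketch of the heartbeat-command idea: instead of SCM opening a gRPC channel per datanode, it queues a command that is piggybacked on the next heartbeat response. Class and field names below are illustrative, not the real SCM types.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

public class HeartbeatCommandQueue {
  enum CommandType { CREATE_PIPELINE, CLOSE_PIPELINE }

  static class Command {
    final CommandType type;
    final String pipelineId;
    Command(CommandType type, String pipelineId) {
      this.type = type;
      this.pipelineId = pipelineId;
    }
  }

  private final Queue<Command> pending = new ArrayDeque<>();

  /** SCM side: queue a command for a datanode instead of calling it directly. */
  public synchronized void enqueue(Command c) {
    pending.add(c);
  }

  /** Called while building the heartbeat response for that datanode. */
  public synchronized Queue<Command> drainForHeartbeat() {
    Queue<Command> out = new ArrayDeque<>(pending);
    pending.clear();
    return out;
  }

  public static void main(String[] args) {
    HeartbeatCommandQueue q = new HeartbeatCommandQueue();
    q.enqueue(new Command(CommandType.CREATE_PIPELINE, "pipeline-1"));
    // The next heartbeat response for this datanode carries the queued command.
    System.out.println(q.drainForHeartbeat().size()); // 1
  }
}
{code}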






[jira] [Updated] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-25 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1569:
-
Status: Patch Available  (was: Open)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures






[jira] [Commented] (HDFS-14873) Fix dfsadmin doc for triggerBlockReport

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938200#comment-16938200
 ] 

Hadoop QA commented on HDFS-14873:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14873 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981383/HDFS-14873.002.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 610d15a65615 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 606e341 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 342 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27964/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix dfsadmin doc for triggerBlockReport
> ---
>
> Key: HDFS-14873
> URL: https://issues.apache.org/jira/browse/HDFS-14873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14873.001.patch, HDFS-14873.002.patch
>
>
> Doc for dfsadmin triggerBlockReport has a small issue in HDFSCommands.md
> {quote}
> hdfs dfsadmin [-triggerBlockReport [-incremental]  
> [-namenode] ]
> {quote}
> *-namenode * is optional. It should be *[-namenode 
> ]*, not *[-namenode] *






[jira] [Commented] (HDFS-14874) Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir

2019-09-25 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938194#comment-16938194
 ] 

Ayush Saxena commented on HDFS-14874:
-

Yeps, seems flaky. Anyway the test passed for me too.

> Fix TestHDFSCLI and TestDFSShell test break because of logging change in mkdir
> --
>
> Key: HDFS-14874
> URL: https://issues.apache.org/jira/browse/HDFS-14874
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The change in HADOOP-16138 breaks TestHDFSCLI and TestDFSShell, since it 
> changed the text in the exception:
> {code:java}
> -throw new PathNotFoundException(itemParentPath.toString());
> +throw new PathNotFoundException(String.format(
> +"mkdir failed for path: %s. Item parent path not found: %s.",
> +itemPath.toString(), itemParentPath.toString()));
>}
> {code}
> For reference :
> https://builds.apache.org/job/PreCommit-HDFS-Build/27958/testReport/
> The way I plan to fix it: create this jira where I `revert` this change, in 
> the sense that I will create a PR with the original log. There is no need for 
> the additional logging that we added.
> Thanks [~ayushtkn] for finding this issue.






[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-25 Thread Yuxuan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938192#comment-16938192
 ] 

Yuxuan Wang commented on HDFS-14509:


OK, I'll update my PR later.
Thanks [~shv] and [~vagarychen] for your comments.

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> gets a block token from the NN and then delivers the token to the DN, which 
> verifies the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute a wrong password.
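
An illustrative sketch (not the Hadoop implementation) of why the passwords diverge: the DN recomputes the HMAC over the identifier bytes it was able to parse and re-serialize, so any fields added by a newer NN are lost and the digests no longer match. The serialized identifier strings below are hypothetical.

{code:java}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class TokenPasswordMismatch {
  static byte[] hmac(byte[] key, byte[] data) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(data);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "block-key".getBytes("UTF-8");
    // Hypothetical serialized identifiers, for illustration only.
    byte[] identifierFromNn = "userId,blockId,modes,newField".getBytes("UTF-8");
    byte[] identifierSeenByOldDn = "userId,blockId,modes".getBytes("UTF-8");
    byte[] passwordInToken = hmac(key, identifierFromNn);
    byte[] recomputedByDn = hmac(key, identifierSeenByOldDn);
    System.out.println(Arrays.equals(passwordInToken, recomputedByDn)); // false -> InvalidToken
  }
}
{code}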






[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=318729=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318729
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 26/Sep/19 02:07
Start Date: 26/Sep/19 02:07
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r328408887
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
 ##
 @@ -75,69 +66,59 @@ public OneReplicaPipelineSafeModeRule(String ruleName, 
EventQueue eventQueue,
 HDDS_SCM_SAFEMODE_ONE_NODE_REPORTED_PIPELINE_PCT  +
 " value should be >= 0.0 and <= 1.0");
 
+// Exclude CLOSED pipeline
 int totalPipelineCount =
 pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
-HddsProtos.ReplicationFactor.THREE).size();
+HddsProtos.ReplicationFactor.THREE, Pipeline.PipelineState.OPEN)
+.size() +
+pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.THREE,
+Pipeline.PipelineState.ALLOCATED).size();
 
 Review comment:
   Hi @xiaoyuyao, there are two cases:
   1. A new cluster: the ALLOCATED pipeline number is 0 at cluster start-up.
   2. A running cluster after an SCM restart: all the pipelines loaded from the DB 
are marked as ALLOCATED (SCMPipelineManager#initializePipelineState(), current 
logic). So in this case, an ALLOCATED pipeline actually means an OPEN pipeline.
 
   
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 318729)
Time Spent: 7h  (was: 6h 50m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send the pipeline creation and 
> destroy actions through heartbeat commands to each datanode.






[jira] [Updated] (HDFS-14873) Fix dfsadmin doc for triggerBlockReport

2019-09-25 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14873:
---
Attachment: HDFS-14873.002.patch

> Fix dfsadmin doc for triggerBlockReport
> ---
>
> Key: HDFS-14873
> URL: https://issues.apache.org/jira/browse/HDFS-14873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14873.001.patch, HDFS-14873.002.patch
>
>
> Doc for dfsadmin triggerBlockReport has a small issue in HDFSCommands.md
> {quote}
> hdfs dfsadmin [-triggerBlockReport [-incremental]  
> [-namenode] ]
> {quote}
> *-namenode * is optional. It should be *[-namenode 
> ]*, not *[-namenode] *






[jira] [Commented] (HDFS-14873) Fix dfsadmin doc for triggerBlockReport

2019-09-25 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938179#comment-16938179
 ] 

Fei Hui commented on HDFS-14873:


[~ayushtkn] Thanks for your comments.
Uploaded v002, fixing the format issue. I viewed the change in an online markdown 
editor and it looks as expected.
Please review.

> Fix dfsadmin doc for triggerBlockReport
> ---
>
> Key: HDFS-14873
> URL: https://issues.apache.org/jira/browse/HDFS-14873
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14873.001.patch, HDFS-14873.002.patch
>
>
> Doc for dfsadmin triggerBlockReport has a small issue in HDFSCommands.md
> {quote}
> hdfs dfsadmin [-triggerBlockReport [-incremental]  
> [-namenode] ]
> {quote}
> *-namenode * is optional. It should be *[-namenode 
> ]*, not *[-namenode] *






[jira] [Commented] (HDFS-10648) Expose Balancer metrics through Metrics2

2019-09-25 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938168#comment-16938168
 ] 

Chen Zhang commented on HDFS-10648:
---

Hi [~LeonG], yes, I have a draft patch locally, but it has been delayed by work on 
other Jiras. I'll refine the draft and hopefully upload the initial patch for 
review within a week. Thanks for the ping.

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover, metrics
>Reporter: Mark Wagner
>Assignee: Chen Zhang
>Priority: Major
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 






[jira] [Commented] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938164#comment-16938164
 ] 

Hadoop QA commented on HDFS-14785:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981372/HDFS-14785.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6644ecfe9d2e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bdaaa3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27963/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27963/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [SBN read] Change client logging to 

[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938155#comment-16938155
 ] 

Iñigo Goiri commented on HDFS-14284:


As I mentioned before, I think routerId should be a field stored in 
RouterIOException, with a getRouterId method to retrieve it.
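
A minimal sketch of that suggestion, assuming a simple IOException subclass; the real RouterIOException may look different:

{code:java}
import java.io.IOException;

public class RouterIOException extends IOException {
  private final String routerId;

  public RouterIOException(String routerId, String message) {
    super(message);
    this.routerId = routerId;
  }

  /** Identifies which Router reported the failure. */
  public String getRouterId() {
    return routerId;
  }
}
{code}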

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch, HDFS-14284.002.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.






[jira] [Commented] (HDFS-10648) Expose Balancer metrics through Metrics2

2019-09-25 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938151#comment-16938151
 ] 

Leon Gao commented on HDFS-10648:
-

[~zhangchen] Just checking in: are you actively working on this?

It is a useful feature for us as well.

> Expose Balancer metrics through Metrics2
> 
>
> Key: HDFS-10648
> URL: https://issues.apache.org/jira/browse/HDFS-10648
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer  mover, metrics
>Reporter: Mark Wagner
>Assignee: Chen Zhang
>Priority: Major
>  Labels: metrics
>
> The Balancer currently prints progress information to the console. For 
> deployments that run the balancer frequently, it would be helpful to collect 
> those metrics for publishing to the available sinks. 






[jira] [Commented] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938148#comment-16938148
 ] 

Hudson commented on HDDS-2067:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17388 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17388/])
HDDS-2067. Create generic service facade with tracing/metrics/logging 
(aengineer: rev f647185905f6047fc9734b8aa37d6ef59b6082c2)
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/OzoneProtocolMessageDispatcher.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/function/FunctionWithServiceException.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMBlockProtocolServer.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/function/package-info.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolBlockLocationInsight.java


> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example in OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages in the server side.
> ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the logic 
> of:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updated metrics
>  * Handle open tracing context propagation.
> These functions are generic. For example 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (=similar) code.
> The goal in this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation to a 
> common utility which can be used from all the ServerSide translators.
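
For illustration, a minimal sketch of such a facade (assumed shape only; the 
committed OzoneProtocolMessageDispatcher may differ in signature and details):

{code:java}
import java.util.function.Function;
import org.slf4j.Logger;

// Hypothetical sketch: one reusable wrapper that handles trace logging and
// timing for every request type, delegating the real work to a handler.
public class MessageDispatcherSketch<REQ, RESP> {
  private final Logger log;

  public MessageDispatcherSketch(Logger log) {
    this.log = log;
  }

  public RESP processRequest(String type, REQ request,
      Function<REQ, RESP> handler) {
    // a tracing span would be opened and closed around this block
    log.trace("{} request: {}", type, request);
    long startNs = System.nanoTime();
    RESP response = handler.apply(request);   // per-type business logic
    log.trace("{} response ({} ns): {}", type, System.nanoTime() - startNs,
        response);
    // metrics for the request type would be updated here
    return response;
  }
}
{code}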



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?focusedWorklogId=318666&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318666
 ]

ASF GitHub Bot logged work on HDDS-2067:


Author: ASF GitHub Bot
Created on: 26/Sep/19 00:32
Start Date: 26/Sep/19 00:32
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1501: HDDS-2067. Create 
generic service facade with tracing/metrics/logging support
URL: https://github.com/apache/hadoop/pull/1501#issuecomment-535277607
 
 
   @bharatviswa504  @adoroszlai  Thanks for the reviews. I have committed this 
patch to the trunk branch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318666)
Time Spent: 2h 10m  (was: 2h)

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example in OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages in the server side.
> ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the logic 
> of:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updated metrics
>  * Handle open tracing context propagation.
> These functions are generic. For example 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (=similar) code.
> The goal in this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation to a 
> common utility which can be used from all the ServerSide translators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?focusedWorklogId=318667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318667
 ]

ASF GitHub Bot logged work on HDDS-2067:


Author: ASF GitHub Bot
Created on: 26/Sep/19 00:32
Start Date: 26/Sep/19 00:32
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1501: HDDS-2067. 
Create generic service facade with tracing/metrics/logging support
URL: https://github.com/apache/hadoop/pull/1501
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318667)
Time Spent: 2h 20m  (was: 2h 10m)

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example in OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages in the server side.
> ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the logic 
> of:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updated metrics
>  * Handle open tracing context propagation.
> These functions are generic. For example 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (=similar) code.
> The goal in this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation to a 
> common utility which can be used from all the ServerSide translators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-25 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2067:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. I have committed this patch to the 
trunk branch.

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We started to use a message-based gRPC approach. We have only one method, and 
> the requests are routed based on a "type" field in the proto message. 
> For example in OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages in the server side.
> ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the logic 
> of:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updated metrics
>  * Handle open tracing context propagation.
> These functions are generic. For example 
> OzoneManagerProtocolServerSideTranslatorPB uses the same (=similar) code.
> The goal in this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation to a 
> common utility which can be used from all the ServerSide translators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-25 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14785:
--
Status: Patch Available  (was: Open)

> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.1.2, 3.2.0, 2.10.0, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable, but {{ObserverReadProxyProvider}} still logs an 
> overwhelmingly large number of messages. One example: if some NN runs an older 
> version, the lack of the {{getHAServiceState}} method in that older NN leads to 
> an exception being printed on every single call.
> We can change these to debug-level logging. This should be minimal risk because 
> it is client-side only; we can always re-enable the logging by switching the 
> client-side log level to DEBUG.
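
For illustration, the kind of change being proposed (a sketch, not the actual 
patch): guard the per-call message behind DEBUG so it is silent by default but 
can be re-enabled from the client-side log configuration.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ObserverProbeLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ObserverProbeLoggingSketch.class);

  // Hypothetical sketch: the message is demoted from info/warn to debug.
  static void logProbeFailure(String namenode, Exception e) {
    LOG.debug("Failed to get HA service state from {}; treating it as a "
        + "non-observer", namenode, e);
  }
}
{code}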



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-25 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2181 started by Vivek Ratnavel Subramanian.

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-25 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2181:


 Summary: Ozone Manager should send correct ACL type in ACL 
requests to Authorizer
 Key: HDDS-2181
 URL: https://issues.apache.org/jira/browse/HDDS-2181
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key delete 
and bucket create operations. Fix the ACL type in all requests to the authorizer.
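
For illustration, a sketch of the intended behavior (the enum values and 
operation names below are assumptions, not the exact Ozone Manager code):

{code:java}
// Hypothetical sketch: pick the ACL type that matches the operation instead
// of always sending WRITE to the authorizer.
enum AclTypeSketch { READ, WRITE, CREATE, DELETE, LIST }

final class AclTypeForOperationSketch {
  static AclTypeSketch forOperation(String op) {
    switch (op) {
      case "CREATE_KEY":
      case "CREATE_BUCKET":
        return AclTypeSketch.CREATE;   // previously reported as WRITE
      case "DELETE_KEY":
        return AclTypeSketch.DELETE;   // previously reported as WRITE
      default:
        return AclTypeSketch.WRITE;
    }
  }
}
{code}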



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318659&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318659
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 23:29
Start Date: 25/Sep/19 23:29
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1511: HDDS-2162. Make Kerberos 
related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-535264311
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318659)
Time Spent: 3h  (was: 2h 50m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> To have a single configuration usable across the OM cluster, a few of the 
> configs, such as 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY, need to support keys suffixed with the 
> service id and node id.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  
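
For illustration, a sketch of how an HA-suffixed key could be resolved (the 
helper below and the example key are assumptions, not the actual Ozone code):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class HaSuffixedConfigSketch {
  // Hypothetical sketch: prefer the key suffixed with serviceId.nodeId,
  // e.g. ozone.om.kerberos.principal.omservice1.om1, and fall back to the
  // plain key when no suffixed value is set.
  static String resolve(Configuration conf, String baseKey,
      String serviceId, String nodeId) {
    String suffixedKey = baseKey + "." + serviceId + "." + nodeId;
    String value = conf.get(suffixedKey);
    return value != null ? value : conf.get(baseKey);
  }
}
{code}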



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318657
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 23:26
Start Date: 25/Sep/19 23:26
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328380337
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   Can you file a jira to add that? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318657)
Time Spent: 2h 50m  (was: 2h 40m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> 

[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318651
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 23:08
Start Date: 25/Sep/19 23:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1526: HDDS-2180. Add 
Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535258351
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 24 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 943 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 24 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2356 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1526 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux becb47a90b3b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Updated] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-25 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14785:
--
Attachment: HDFS-14785.001.patch

> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
> Attachments: HDFS-14785.001.patch
>
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable but {{ObserverReadProxyProvider}} still log an 
> overwhelmingly large amount of messages. One example is that, if some NN runs 
> an older version, the lack of {{getHAServiceState}} method in older version 
> NN will lead to a Exception prints on every single call.
> We can change them to debug log. This should be minimum risk, because this is 
> only client side, we can always enable the log back by changing to DEBUG log 
> level on client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-25 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938117#comment-16938117
 ] 

Arpit Agarwal edited comment on HDFS-14305 at 9/25/19 10:54 PM:


Hi Konstantin, I am not in favor of reverting this change now. The alternate 
approach sounds risky to me.
{quote}I actually would prefer to go back to computing ranges depending on the 
number of configured NameNodes as in HDFS-6440, just fix the issue with 
negative initial serial number. [~csun] you are right this can cause collisions 
when adding/removing NameNodes to the existing cluster. But there are 
techniques to avoid collisions by starting NNs in a certain order.
{quote}
HDFS is resilient to starting services in arbitrary order. It's not a good idea 
to break that.


was (Author: arpitagarwal):
Hi Konstantin, I am not in favor of reverting this change now. The alternate 
approach sounds risky to me.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
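
For illustration, a small self-contained example of the overlap described above, 
using the toy MAX of 100 (this demonstrates the problem, not the committed fix):

{code:java}
public class SerialRangeOverlapDemo {
  public static void main(String[] args) {
    final int max = 100;                 // stand-in for Integer.MAX_VALUE
    final int numNNs = 2;
    final int intRange = max / numNNs;   // 50
    final int randomSerialNo = -75;      // the initial serial may be negative
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex;                    // 0 or 50
      int serialNo = (randomSerialNo % intRange) + nnRangeStart; // -25 or 25
      System.out.println("nn" + (nnIndex + 1) + " -> " + serialNo);
    }
    // nn1 can end up anywhere in [-49, 49] and nn2 in [1, 99],
    // so the two ranges overlap in [1, 49].
  }
}
{code}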



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-25 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938117#comment-16938117
 ] 

Arpit Agarwal commented on HDFS-14305:
--

Hi Konstantin, I am not in favor of reverting this change now. The alternate 
approach sounds risky to me.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-25 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938114#comment-16938114
 ] 

Chen Liang commented on HDFS-14509:
---

I prefer the fix from [~John Smith]; it's a pretty clever fix, I would say. One 
nit: the line {{this.cache = null;}} at the beginning of 
{{readFields(DataInput in)}} seems to be no longer needed.

I was a bit concerned that there is code where the cache gets reset to null 
(i.e. {{setExpiryDate}} and {{setKeyId}}), and if {{getBytes}} gets called after 
the cache was reset to null, {{cache = super.getBytes();}} will be called and we 
run into this same issue again. But after more checking, it looks like 
{{setExpiryDate}} and {{setKeyId}} are only called on the NN side when creating 
the token, so on the DN side, once the cache is set, it stays at those bytes. So 
this should not be an issue.
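
For illustration, a rough sketch of the caching idea being discussed (class and 
method names are placeholders, not the actual patch):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch: keep the exact serialized bytes the identifier was
// parsed from, so the DN recomputes the password over the same bytes the NN
// signed even when the NN's identifier carries fields this DN cannot parse.
public class CachedBytesIdentifierSketch {
  private byte[] cache;

  public void readFrom(byte[] rawIdentifierBytes) throws IOException {
    this.cache = rawIdentifierBytes.clone();
    DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(rawIdentifierBytes));
    // ... parse only the fields this version understands (omitted) ...
    in.close();
  }

  public byte[] getBytes() {
    // used for password computation; must match what the NN signed
    return cache != null ? cache.clone() : new byte[0];
  }
}
{code}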

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, so there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> gets a block token from the NN and then delivers the token to the DN, which 
> verifies it. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier(;
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=318639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318639
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:38
Start Date: 25/Sep/19 22:38
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1524: HDDS-2020. Remove 
mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1524#issuecomment-535251313
 
 
   The acceptance test and integration test failures seem unrelated to this 
change. Here is the result from yesterday, which shows similar failures: 
   
https://github.com/elek/ozone-ci/tree/master/byscane/byscane-nightly-20190925-4z549
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318639)
Time Spent: 4.5h  (was: 4h 20m)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Generic gRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire encryption. 
> Removing the mTLS support also simplifies the gRPC server/client configuration.
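
For illustration, a sketch of one-way TLS with grpc-netty, i.e. server 
authentication and wire encryption without client certificates (illustrative 
only, not the exact Ozone wiring):

{code:java}
import io.grpc.Server;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NettyServerBuilder;
import io.netty.handler.ssl.ClientAuth;
import io.netty.handler.ssl.SslContext;
import java.io.File;

final class OneWayTlsServerSketch {
  static Server build(File certChain, File privateKey, int port)
      throws Exception {
    SslContext sslContext = GrpcSslContexts
        .forServer(certChain, privateKey)
        .clientAuth(ClientAuth.NONE)   // no client certificates: not mTLS
        .build();
    return NettyServerBuilder.forPort(port)
        .sslContext(sslContext)
        .build();
  }
}
{code}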



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-25 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko reopened HDFS-14305:


Reopening this.
I think we should revert it before it gets into a release and becomes a liability 
causing an incompatible change.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNode could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318633&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318633
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:27
Start Date: 25/Sep/19 22:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1526: HDDS-2180. Add 
Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#issuecomment-535248354
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 834 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 930 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 24 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 56 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 738 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2364 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1526 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 19a38fab1e05 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1526/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318626&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318626
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:13
Start Date: 25/Sep/19 22:13
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1526: HDDS-2180. 
Add Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#discussion_r328362976
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -95,13 +100,18 @@ protected VolumeList addVolumeToOwnerList(VolumeList 
volumeList,
 }
 
 List<String> prevVolList = new ArrayList<>();
+long objectID = 100;
 if (volumeList != null) {
   prevVolList.addAll(volumeList.getVolumeNamesList());
+  objectID = volumeList.getObjectID();
 
 Review comment:
   Yes, for some reason, when we replace a VolumeList, we do it like this: we 
read the current list, remove or add a volume, create a new object and then 
write it back. This is in reality nothing but a small change in the list of 
volumes, so I keep the same object ID, as if the object were the same.
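
For illustration, a minimal sketch of that rebuild pattern (a plain class standing 
in for the protobuf-generated VolumeList; not the actual patch code):

{code:java}
import java.util.List;

final class VolumeListSketch {
  final List<String> volumeNames;
  final long objectId;   // identity of the logical list object
  final long updateId;   // bumped on every rewrite

  VolumeListSketch(List<String> volumeNames, long objectId, long updateId) {
    this.volumeNames = volumeNames;
    this.objectId = objectId;
    this.updateId = updateId;
  }

  // Rewriting the list keeps the same objectId and only advances updateId.
  VolumeListSketch withVolumes(List<String> newNames, long newUpdateId) {
    return new VolumeListSketch(newNames, this.objectId, newUpdateId);
  }
}
{code}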
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318626)
Time Spent: 50m  (was: 40m)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318625&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318625
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:12
Start Date: 25/Sep/19 22:12
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1526: HDDS-2180. 
Add Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#discussion_r328362600
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -95,13 +100,18 @@ protected VolumeList addVolumeToOwnerList(VolumeList 
volumeList,
 }
 
 List<String> prevVolList = new ArrayList<>();
+long objectID = 100;
 
 Review comment:
   No, thanks, I will fix that. I discovered it from a test code path, and it is 
left over from the fix. I will get this fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318625)
Time Spent: 40m  (was: 0.5h)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14785) [SBN read] Change client logging to be less aggressive

2019-09-25 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14785:
---
Target Version/s: 2.10.0
  Labels: release-blocker  (was: )

Adding to blockers for 2.10

> [SBN read] Change client logging to be less aggressive
> --
>
> Key: HDFS-14785
> URL: https://issues.apache.org/jira/browse/HDFS-14785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 2.10.0, 3.2.0, 3.1.2, 3.3.0
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
>  Labels: release-blocker
>
> Currently {{ObserverReadProxyProvider}} logs a lot of information. There are 
> states that are acceptable but {{ObserverReadProxyProvider}} still log an 
> overwhelmingly large amount of messages. One example is that, if some NN runs 
> an older version, the lack of {{getHAServiceState}} method in older version 
> NN will lead to a Exception prints on every single call.
> We can change them to debug log. This should be minimum risk, because this is 
> only client side, we can always enable the log back by changing to DEBUG log 
> level on client side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318623&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318623
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:09
Start Date: 25/Sep/19 22:09
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1526: HDDS-2180. 
Add Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#discussion_r328361911
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -95,13 +100,18 @@ protected VolumeList addVolumeToOwnerList(VolumeList 
volumeList,
 }
 
 List<String> prevVolList = new ArrayList<>();
+long objectID = 100;
 if (volumeList != null) {
   prevVolList.addAll(volumeList.getVolumeNamesList());
+  objectID = volumeList.getObjectID();
 
 Review comment:
   Here we always use the objectID of the new volumeList to replace the 
existing one on line 113. Is this expected?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318623)
Time Spent: 0.5h  (was: 20m)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318619&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318619
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 22:00
Start Date: 25/Sep/19 22:00
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1526: HDDS-2180. 
Add Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526#discussion_r328359463
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeRequest.java
 ##
 @@ -95,13 +100,18 @@ protected VolumeList addVolumeToOwnerList(VolumeList 
volumeList,
 }
 
 List<String> prevVolList = new ArrayList<>();
+long objectID = 100;
 
 Review comment:
   Is there a reason for initializing objectID to 100 here for a new volume?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318619)
Time Spent: 20m  (was: 10m)

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938093#comment-16938093
 ] 

Hadoop QA commented on HDFS-14284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981361/HDFS-14284.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8b500c5752ab 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bdaaa3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| whitespace | 

[jira] [Updated] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2180:
---
Target Version/s: 0.5.0

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2180:
-
Labels: pull-request-available  (was: )

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?focusedWorklogId=318615=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318615
 ]

ASF GitHub Bot logged work on HDDS-2180:


Author: ASF GitHub Bot
Created on: 25/Sep/19 21:47
Start Date: 25/Sep/19 21:47
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1526: HDDS-2180. 
Add Object ID and update ID on VolumeList Object.
URL: https://github.com/apache/hadoop/pull/1526
 
 
   https://issues.apache.org/jira/browse/HDDS-2180
   Adds Object ID and Update ID to VolumeList object.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318615)
Remaining Estimate: 0h
Time Spent: 10m

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2180:
--

 Summary: Add Object ID and update ID on VolumeList Object
 Key: HDDS-2180
 URL: https://issues.apache.org/jira/browse/HDDS-2180
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-25 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938086#comment-16938086
 ] 

Konstantin Shvachko commented on HDFS-14509:


So [~John Smith], do you want to update your PR or submit a patch here? With 
your current PR it seems that some tests are failing, including the one that you 
added to {{TestBlockToken}}. Also worth looking at the javac and checkstyle warnings.
As for unit tests, I think we need two:
# one that verifies the upgrade from 2.x to 3.x is possible
# one that verifies the upgrade from 2.x-1 to 2.x is still possible

You should be able to cover both using your mocking approach in 
{{testRetrievePasswordWithUnknownFields()}}.

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need to 
> upgrade the NN first. There will then be an intermediate state in which the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> will get a block token from the NN and deliver that token to the DN, which 
> verifies it. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier(;
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
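
For readers skimming the thread, a minimal, self-contained sketch of the failure mode described above. The identifier layout below is made up for illustration (it is not the real BlockTokenIdentifier wire format); the point is that an old reader's readFields()/getBytes() round trip silently drops fields it does not know about, so the password recomputed by the DN no longer matches the one the NN embedded in the token.

{code:java}
import java.io.*;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class StaleIdentifierDemo {

  // The password is an HMAC over the serialized identifier, in the same spirit
  // as createPassword(identifier.getBytes(), key.getKey()).
  static byte[] password(byte[] identifierBytes, byte[] secret) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(secret, "HmacSHA1"));
    return mac.doFinal(identifierBytes);
  }

  // "3.x NN" identifier: writes an extra field the old reader does not know about.
  static byte[] newIdentifier(long blockId, String user, int extraField) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeLong(blockId);
    out.writeUTF(user);
    out.writeInt(extraField);            // unknown to a 2.x DN
    return bos.toByteArray();
  }

  // "2.x DN" behaviour: readFields() consumes only the fields it knows about,
  // then getBytes() re-serializes what it read -- the extra field is gone.
  static byte[] reserializedByOldDn(byte[] wire) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
    long blockId = in.readLong();
    String user = in.readUTF();          // extraField is left unread and dropped
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeLong(blockId);
    out.writeUTF(user);
    return bos.toByteArray();
  }

  public static void main(String[] args) throws Exception {
    byte[] secret = "shared-block-key".getBytes("UTF-8");
    byte[] wire = newIdentifier(42L, "alice", 7);

    byte[] nnPassword = password(wire, secret);                      // stored in the token
    byte[] dnPassword = password(reserializedByOldDn(wire), secret); // recomputed by the DN

    System.out.println("passwords equal? " + Arrays.equals(nnPassword, dnPassword)); // false
  }
}
{code}

Running the sketch prints {{passwords equal? false}}, which is the InvalidToken situation above; the two unit tests suggested in the comments would pin down that both upgrade paths keep this check passing.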



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318606=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318606
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 21:18
Start Date: 25/Sep/19 21:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535225524
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 941 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   | -0 | patch | 978 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 28 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2348 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1519 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c017e978e19c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/5/artifact/out/branch-findbugs-hadoop-ozone.txt

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318607=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318607
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 21:18
Start Date: 25/Sep/19 21:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-535225523
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 23 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 994 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 15 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1073 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 15 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 24 | hadoop-ozone in the patch failed. |
   | -1 | compile | 19 | hadoop-hdds in the patch failed. |
   | -1 | compile | 14 | hadoop-ozone in the patch failed. |
   | -1 | javac | 19 | hadoop-hdds in the patch failed. |
   | -1 | javac | 14 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 49 | the patch passed |
   | +1 | mvnsite | 1 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 15 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 27 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 23 | hadoop-hdds in the patch failed. |
   | -1 | unit | 19 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2450 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1511 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3bb453cc6bdc 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=318605=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318605
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 25/Sep/19 21:15
Start Date: 25/Sep/19 21:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535224605
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1833 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 46 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 16 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 14 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 925 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 15 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 23 | hadoop-ozone in the patch failed. |
   | -1 | compile | 19 | hadoop-hdds in the patch failed. |
   | -1 | compile | 13 | hadoop-ozone in the patch failed. |
   | -1 | javac | 19 | hadoop-hdds in the patch failed. |
   | -1 | javac | 13 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 47 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 16 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 14 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 25 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 15 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 22 | hadoop-hdds in the patch failed. |
   | -1 | unit | 18 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3963 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1525 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a939772e15d9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1525/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318591=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318591
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:43
Start Date: 25/Sep/19 20:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1511: HDDS-2162. 
Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-535212610
 
 
   Thank You @arp7 for the review.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318591)
Time Spent: 2.5h  (was: 2h 20m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> To have a single configuration that can be used across the OM cluster, a few of 
> the configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service ID and node ID.
>  
> OM_DB_DIRS and OZONE_OM_ADDRESS_KEY are also addressed in this patch.
>  
> This Jira is to fix the above configs.
>  
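
To make the intent concrete, the resolution pattern this implies looks roughly like the sketch below. The helper names and exact key strings are assumptions for illustration only, not the actual OmUtils/OMConfigKeys API: a per-node key wins when present, and the plain key remains as a fallback, so one shared ozone-site.xml can serve every OM in the service.

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class HaConfKeysSketch {

  private HaConfKeysSketch() { }

  // Builds "<key>.<serviceId>.<nodeId>" style names, e.g.
  // ozone.om.kerberos.principal.omservice1.om2 (illustrative only).
  static String suffixed(String key, String serviceId, String nodeId) {
    return key + "." + serviceId + "." + nodeId;
  }

  // Prefer the per-node key, fall back to the cluster-wide key.
  static String resolve(Configuration conf, String key,
                        String serviceId, String nodeId) {
    String value = conf.get(suffixed(key, serviceId, nodeId));
    return value != null ? value : conf.get(key);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("ozone.om.kerberos.principal.omservice1.om2",
        "om/om2.example.com@EXAMPLE.COM");

    System.out.println(
        resolve(conf, "ozone.om.kerberos.principal", "omservice1", "om2"));
  }
}
{code}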



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318587=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318587
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:36
Start Date: 25/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328328247
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and its own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured, this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServiceId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peer
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+ 

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318585=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318585
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:36
Start Date: 25/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328328143
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and its own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured, this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServiceId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peer
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+ 

[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318586
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:36
Start Date: 25/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535209252
 
 
   > Overall LGTM. One minor comment inline.
   @bharatviswa504 
   Most recent commit addresses this. Thank you for flagging and detailed 
review!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318586)
Time Spent: 2.5h  (was: 2h 20m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> As advised by [~arp]  & [~aengineer], when a deleteKey command is executed, 
> delete the gdpr encryption key details from key metadata before moving it to 
> deletedTable
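
A rough sketch of what that scrubbing could look like, using a plain metadata map and made-up constant names rather than the actual OmKeyInfo/OzoneConsts API; the only point is that the secret and algorithm entries are dropped before the record is moved to deletedTable:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class GdprMetadataScrubberSketch {

  // Hypothetical metadata keys; the real constants live elsewhere in Ozone.
  static final String GDPR_FLAG = "gdprEnabled";
  static final String GDPR_SECRET = "gdprSecret";
  static final String GDPR_ALGORITHM = "gdprAlgorithm";

  // Remove GDPR encryption details so the secret does not outlive the key
  // once its record lands in the deleted-keys table.
  static Map<String, String> scrub(Map<String, String> keyMetadata) {
    Map<String, String> cleaned = new HashMap<>(keyMetadata);
    if (Boolean.parseBoolean(cleaned.get(GDPR_FLAG))) {
      cleaned.remove(GDPR_SECRET);
      cleaned.remove(GDPR_ALGORITHM);
    }
    return cleaned;
  }

  public static void main(String[] args) {
    Map<String, String> metadata = new HashMap<>();
    metadata.put(GDPR_FLAG, "true");
    metadata.put(GDPR_SECRET, "s3cr3t");
    metadata.put(GDPR_ALGORITHM, "AES");

    System.out.println(scrub(metadata)); // only the gdprEnabled flag remains
  }
}
{code}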



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318584
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:36
Start Date: 25/Sep/19 20:36
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535209252
 
 
   > Overall LGTM. One minor comment inline.
   
   Most recent commit addresses this. Thank you for flagging and detailed 
review!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318584)
Time Spent: 2h 20m  (was: 2h 10m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> As advised by [~arp]  & [~aengineer], when a deleteKey command is executed, 
> delete the gdpr encryption key details from key metadata before moving it to 
> deletedTable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318583=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318583
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:35
Start Date: 25/Sep/19 20:35
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328327822
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and its own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured, this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServiceId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peer
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   This is generally used in MiniOzoneClusterHA testing. For each OM this is 
   set with a different value. (As we don't have a federated OM setup now, it is not 
   really required at this point; I agree we will need it at some point for testing.)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318583)
Time Spent: 2h  (was: 1h 50m)

> Make Kerberos related configuration 

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318580=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318580
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:32
Start Date: 25/Sep/19 20:32
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328326246
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and its own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured, this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServiceId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peer
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+ 

[jira] [Updated] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-2179:

Status: Patch Available  (was: In Progress)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}
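
Given that root cause, the compatible handling is to treat both exception types as "the generated file does not exist yet". A sketch of that shape only, not the actual ConfigFileGenerator code:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigFileReadSketch {

  // Older JDKs surfaced a missing file as FileNotFoundException, newer ones
  // (going through java.nio) as NoSuchFileException; handle both the same way.
  static boolean tryLoadExisting(Path generatedXml) throws IOException {
    try (InputStream in = Files.newInputStream(generatedXml)) {
      // ... parse the previously generated ozone-default-generated.xml ...
      return true;
    } catch (FileNotFoundException | NoSuchFileException e) {
      // First round of annotation processing: the file simply is not there yet.
      return false;
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Paths.get("target/test-classes/ozone-default-generated.xml");
    System.out.println("existing config found: " + tryLoadExisting(p));
  }
}
{code}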



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318567=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318567
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:09
Start Date: 25/Sep/19 20:09
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328316564
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and its own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   Oh there should be one then. We cannot assume nodeID is unique across 
services. We can do so in a separate jira.
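   
   For readers skimming this thread, a minimal sketch of the HA-style key naming the patch is about, assuming the usual "<key>.<serviceId>.<nodeId>" suffixing convention; the base key and ids below are illustrative assumptions, not taken from the patch:
   
   ```java
   // Hypothetical helper: build an HA-style config key for one OM node.
   // The suffix order (serviceId, then nodeId) is an assumption for illustration.
   final class HaKeySketch {
     static String haStyleKey(String baseKey, String serviceId, String nodeId) {
       // e.g. "ozone.om.kerberos.principal.omService1.omNode-1"
       return baseKey + "." + serviceId + "." + nodeId;
     }
   }
   ```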
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318567)
Time Spent: 1h 40m  (was: 1.5h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> 

[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=318565=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318565
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:08
Start Date: 25/Sep/19 20:08
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525#issuecomment-535191139
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318565)
Time Spent: 20m  (was: 10m)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2179:
-
Labels: pull-request-available  (was: )

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?focusedWorklogId=318564=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318564
 ]

ASF GitHub Bot logged work on HDDS-2179:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:08
Start Date: 25/Sep/19 20:08
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1525: HDDS-2179. 
ConfigFileGenerator fails with Java 10 or newer
URL: https://github.com/apache/hadoop/pull/1525
 
 
   ## What changes were proposed in this pull request?
   
   Allow building HDDS Config (and Ozone in general) with newer JDKs.
   
   https://issues.apache.org/jira/browse/HDDS-2179
   
   ## How was this patch tested?
   
   Tested HDDS Config build with Java 8, 10, 11, 13.
   
   ```
   $ mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config clean package
   ...
   [INFO] Apache Hadoop Ozone Main ... SUCCESS [  0.472 
s]
   [INFO] Apache Hadoop HDDS . SUCCESS [  1.718 
s]
   [INFO] Apache Hadoop HDDS Config .. SUCCESS [  1.651 
s]
   [INFO] 

   [INFO] BUILD SUCCESS
   
   $ wc hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
 33  631060 
hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318564)
Remaining Estimate: 0h
Time Spent: 10m

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318561=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318561
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:06
Start Date: 25/Sep/19 20:06
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328315566
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+  

[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318560=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318560
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:06
Start Date: 25/Sep/19 20:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1519: 
HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#discussion_r328315416
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
 ##
 @@ -498,13 +501,35 @@ public static File createOMDir(String dirPath) {
   }
 
   /**
-   * Returns the DB key name of a deleted key in OM metadata store. The
-   * deleted key name is the _.
-   * @param key Original key name
-   * @param timestamp timestamp of deletion
-   * @return Deleted key name
+   * Prepares key info to be moved to deletedTable.
+   * 1. It strips GDPR metadata from key info
+   * 2. Check if an entry exists in deletedTable for given objectKey, if yes,
 
 Review comment:
   The 2nd point should be updated; it still refers to the old logic.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318560)
Time Spent: 2h 10m  (was: 2h)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As advised by [~arp]  & [~aengineer], when a deleteKey command is executed, 
> delete the gdpr encryption key details from key metadata before moving it to 
> deletedTable
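
A minimal sketch of the cleanup described above, assuming the GDPR details live as entries in the key's metadata map; the literal key names below are assumptions for illustration, not the exact OzoneConsts constants:

{code:java}
import java.util.Map;

final class GdprMetadataSketch {
  // Drop GDPR-related entries from a key's metadata before the key info is
  // moved to the deletedTable; the key names here are illustrative assumptions.
  static void stripGdprMetadata(Map<String, String> keyMetadata) {
    keyMetadata.remove("gdprEnabled");
    keyMetadata.remove("gdprSecret");
    keyMetadata.remove("gdprAlgorithm");
  }
}
{code}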



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318558=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318558
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 20:05
Start Date: 25/Sep/19 20:05
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328315080
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+  

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318541=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318541
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 19:56
Start Date: 25/Sep/19 19:56
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328311433
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
+int found = 0;
+boolean isOMAddressSet = false;
+
+for (String serviceId : omServiceIds) {
+  Collection<String> omNodeIds = OmUtils.getOMNodeIds(conf, serviceId);
+
+  if (omNodeIds.size() == 0) {
+String msg = "Configuration does not have any value set for " +
+OZONE_OM_NODES_KEY + " for service ID " + serviceId + ". List of " 
+
+"OM Node ID's should be specified for the service ID";
+throw new OzoneIllegalArgumentException(msg);
+  }
+
+  List<OMNodeDetails> peerNodesList = new ArrayList<>();
+  boolean isPeer;
+  for (String nodeId : omNodeIds) {
+if (knownOMNodeId != null && !knownOMNodeId.equals(nodeId)) {
+  isPeer = true;
+} else {
+  

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318533=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318533
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 19:51
Start Date: 25/Sep/19 19:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328309300
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   Isn't there also a key for our own service ID?
   No, we don't have one.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318533)
Time Spent: 1h  (was: 50m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: 

[jira] [Commented] (HDFS-14863) Remove Synchronization From BlockPlacementPolicyDefault

2019-09-25 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938012#comment-16938012
 ] 

David Mollitor commented on HDFS-14863:
---

Different unit tests failed on the second Yetus run, so they appear to be flaky tests.

This particular data structure is accessed in a few places, but this is the 
only place that synchronizes on it. I just don't see a reason for it, and it is 
not documented anywhere why it would be needed.
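
For illustration, a minimal sketch of the idea, assuming the read-only queries can rely on NetworkTopology's internal locking (a sketch of the intent, not the attached patch):

{code:java}
import java.util.Collection;

import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;

class PlacementSketch {
  // Call chooseRandom() directly instead of wrapping it in an external
  // synchronized (clusterMap) block; NetworkTopology does its own locking.
  Node pickWithoutExternalLock(NetworkTopology clusterMap, String scope,
      Collection<Node> excludedNodes) {
    return clusterMap.chooseRandom(scope, excludedNodes);
  }
}
{code}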

> Remove Synchronization From BlockPlacementPolicyDefault
> ---
>
> Key: HDFS-14863
> URL: https://issues.apache.org/jira/browse/HDFS-14863
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14863.1.patch, HDFS-14863.2.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1010
> The {{clusterMap}} has its own internal synchronization.  Also, these are 
> only read operations so any changes applied to the {{clusterMap}} from 
> another thread will be applied since no other thread synchronizes on the 
> {{clusterMap}} itself (that I could find).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318530=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318530
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 19:44
Start Date: 25/Sep/19 19:44
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328306380
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   Also you should trim the string here after getting from conf.
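   
   One way to apply this, assuming Configuration#getTrimmed is the preferred helper (a sketch, not the committed change):
   
   ```java
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;
   
   import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
   
   final class NodeIdLookupSketch {
     // getTrimmed() strips surrounding whitespace from the configured value.
     static String knownNodeId(OzoneConfiguration conf) {
       return conf.getTrimmed(OZONE_OM_NODE_ID_KEY);
     }
   }
   ```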
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318530)
Time Spent: 50m  (was: 40m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: 

[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=318529=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318529
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 25/Sep/19 19:42
Start Date: 25/Sep/19 19:42
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r328305846
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
 ##
 @@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ha;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.OzoneIllegalArgumentException;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODES_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_NODE_ID_KEY;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_PORT_KEY;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY;
+
+/**
+ * Class which maintains peer information and it's own OM node information.
+ */
+public class OMHANodeDetails {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(OMHANodeDetails.class);
+  private final OMNodeDetails localNodeDetails;
+  private final List<OMNodeDetails> peerNodeDetails;
+
+  public OMHANodeDetails(OMNodeDetails localNodeDetails,
+  List<OMNodeDetails> peerNodeDetails) {
+this.localNodeDetails = localNodeDetails;
+this.peerNodeDetails = peerNodeDetails;
+  }
+
+  public OMNodeDetails getLocalNodeDetails() {
+return localNodeDetails;
+  }
+
+  public List< OMNodeDetails > getPeerNodeDetails() {
+return peerNodeDetails;
+  }
+
+
+  /**
+   * Inspects and loads OM node configurations.
+   *
+   * If {@link OMConfigKeys#OZONE_OM_SERVICE_IDS_KEY} is configured with
+   * multiple ids and/ or if {@link OMConfigKeys#OZONE_OM_NODE_ID_KEY} is not
+   * specifically configured , this method determines the omServiceId
+   * and omNodeId by matching the node's address with the configured
+   * addresses. When a match is found, it sets the omServicId and omNodeId from
+   * the corresponding configuration key. This method also finds the OM peers
+   * nodes belonging to the same OM service.
+   *
+   * @param conf
+   */
+  public static OMHANodeDetails loadOMHAConfig(OzoneConfiguration conf) {
+InetSocketAddress localRpcAddress = null;
+String localOMServiceId = null;
+String localOMNodeId = null;
+int localRatisPort = 0;
+Collection<String> omServiceIds = conf.getTrimmedStringCollection(
+OZONE_OM_SERVICE_IDS_KEY);
+
+String knownOMNodeId = conf.get(OZONE_OM_NODE_ID_KEY);
 
 Review comment:
   Isn't there also a key for our own service ID?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318529)
Time Spent: 40m  (was: 0.5h)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
>

[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318527=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318527
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 19:40
Start Date: 25/Sep/19 19:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535179736
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 25 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 967 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   | -0 | patch | 1004 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2373 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1519 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a69c2fc543e5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 

[jira] [Created] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2179:
---

 Summary: ConfigFileGenerator fails with Java 10 or newer
 Key: HDDS-2179
 URL: https://issues.apache.org/jira/browse/HDDS-2179
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config clean 
package}
...
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-hdds-config ---
[INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdds-config: Compilation failure
[ERROR] Can't generate the config file from annotation: 
hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
{code}

The root cause is that new Java (I guess it's 9+, but tried only on 10+) throws 
a different {{IOException}} subclass: {{NoSuchFileException}} instead of 
{{FileNotFoundException}}.

{code}
java.nio.file.NoSuchFileException: 
hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
at 
java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
at java.base/java.nio.file.Files.newInputStream(Files.java:159)
at 
jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
at 
java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
at 
org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
{code}
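
A minimal sketch of a version-tolerant pattern, assuming the fix is to treat the NIO exception the same as the legacy one when probing for the existing generated file (illustrative only, not necessarily the committed patch):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.NoSuchFileException;

import javax.tools.FileObject;

final class ConfigFileReadSketch {
  // Returns null when the generated file does not exist yet, regardless of
  // which IOException subclass the JDK's Filer-backed FileObject throws.
  static InputStream openIfPresent(FileObject existing) throws IOException {
    try {
      return existing.openInputStream();
    } catch (FileNotFoundException | NoSuchFileException e) {
      return null;
    }
  }
}
{code}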




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-25 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2179 started by Doroszlai, Attila.
---
> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-25 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14284:
-
Attachment: HDFS-14284.002.patch

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch, HDFS-14284.002.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception and it is 
> hard to know which was the one.
> We should have a way to identify which Router/Namenode was the one triggering 
> the exception.
> This would also apply with Observer Namenodes.
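
A minimal sketch of one way to surface the Router identity, assuming the Router can decorate exceptions before returning them to the client; the helper below is hypothetical, not the attached patch:

{code:java}
import java.io.IOException;

final class RouterExceptionSketch {
  // Hypothetical helper: prepend the reporting Router's identifier so clients
  // can tell which Router (or Namenode behind it) produced the failure.
  static IOException tagWithRouterId(String routerId, IOException cause) {
    return new IOException("Router " + routerId + ": " + cause.getMessage(), cause);
  }
}
{code}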



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318513=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318513
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 18:52
Start Date: 25/Sep/19 18:52
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535161698
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318513)
Time Spent: 1h 50m  (was: 1h 40m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving it 
> to the deletedTable.
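
A minimal sketch of the idea, assuming the GDPR details are plain entries in the key's metadata map; the constant names are illustrative stand-ins, not the real OzoneConsts values:

{code:java}
import java.util.Map;

/** Illustrative sketch: drop GDPR secret material before the key info is archived. */
public final class GdprMetadataStripper {
  // Hypothetical metadata key names, standing in for the OzoneConsts constants.
  private static final String GDPR_FLAG = "gdprEnabled";
  private static final String GDPR_SECRET = "gdprSecret";
  private static final String GDPR_ALGORITHM = "gdprAlgorithm";

  private GdprMetadataStripper() {
  }

  public static void stripGdprMetadata(Map<String, String> keyMetadata) {
    if (Boolean.parseBoolean(keyMetadata.get(GDPR_FLAG))) {
      keyMetadata.remove(GDPR_FLAG);
      keyMetadata.remove(GDPR_SECRET);
      keyMetadata.remove(GDPR_ALGORITHM);
    }
  }
}
{code}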



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-25 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937972#comment-16937972
 ] 

Ranith Sardar commented on HDFS-14853:
--

Sure [~xkrogen]

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-25 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937966#comment-16937966
 ] 

Erik Krogen commented on HDFS-14853:


Hi [~ayushtkn] and [~RANith], this seems like a good candidate for a bug fix in 
other release lines; are you interested in helping to backport it?

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=318495=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318495
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 25/Sep/19 18:08
Start Date: 25/Sep/19 18:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1524: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1524#issuecomment-535143449
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 774 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 14 new or modified test 
files. |
   ||| _ ozone-0.4.1 Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 598 | ozone-0.4.1 passed |
   | +1 | compile | 380 | ozone-0.4.1 passed |
   | +1 | checkstyle | 84 | ozone-0.4.1 passed |
   | +1 | mvnsite | 0 | ozone-0.4.1 passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | ozone-0.4.1 passed |
   | 0 | spotbugs | 422 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 619 | ozone-0.4.1 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 551 | the patch passed |
   | +1 | compile | 392 | the patch passed |
   | +1 | cc | 392 | the patch passed |
   | +1 | javac | 392 | the patch passed |
   | +1 | checkstyle | 43 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 10 fixed = 0 total (was 10) |
   | +1 | checkstyle | 46 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 687 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 632 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 326 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1719 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 8410 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1524/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1524 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 147290bd764b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4.1 / 2eb41fb |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1524/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1524/1/testReport/ |
   | Max. process+thread count | 4915 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-hdds/framework hadoop-hdds/server-scm 
hadoop-ozone hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1524/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318495)
Time Spent: 4h 20m  (was: 4h 10m)

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  

[jira] [Comment Edited] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-25 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937952#comment-16937952
 ] 

Siddharth Wagle edited comment on HDDS-1868 at 9/25/19 6:00 PM:


Hi [~ljain], regarding 3., the _setLeaderId_ is called for leader as well so 
doing anything for  _notifyLeader_ is not needed, right? Agree to trigger HB 
right away.
4. Good catch, I missed the multi-raft point. 
" Can we keep it simple so that we call pipeline.reportDatanode(dn) once a 
pipeline report with leaderId set is received?" -> Not sure I follow, when we 
receive HB from 3rd DN with leaderID we need to open pipeline, so we do need to 
maintain PipelineID -> LeaderIDSet map to know if all 3 reported. That's what 
you meant right? 


was (Author: swagle):
Hi [~ljain], regarding 3., the setLeaderId is called for leader as well so 
doing anything for  _notifyLeader_ is not needed, right? Agree to trigger HB 
right away.
4. Good catch, I missed the multi-raft point. 
" Can we keep it simple so that we call pipeline.reportDatanode(dn) once a 
pipeline report with leaderId set is received?" -> Not sure I follow, when we 
receive HB from 3rd DN with leaderID we need to open pipeline, so we do need to 
maintain PipelineID -> LeaderIDSet map to know if all 3 reported. That's what 
you meant right? 

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into 
> the open state once all the datanodes in the pipeline have reported. However, 
> this can lead to a situation where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-25 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937952#comment-16937952
 ] 

Siddharth Wagle edited comment on HDDS-1868 at 9/25/19 6:00 PM:


Hi [~ljain], regarding 3., the setLeaderId is called for leader as well so 
doing anything for  _notifyLeader_ is not needed, right? Agree to trigger HB 
right away.
4. Good catch, I missed the multi-raft point. 
" Can we keep it simple so that we call pipeline.reportDatanode(dn) once a 
pipeline report with leaderId set is received?" -> Not sure I follow, when we 
receive HB from 3rd DN with leaderID we need to open pipeline, so we do need to 
maintain PipelineID -> LeaderIDSet map to know if all 3 reported. That's what 
you meant right? 


was (Author: swagle):
Hi [~ljain], regarding 3., the setLeaderId is called for leader as well so 
_notifyLeader_ is not needed, right? Agree to trigger HB right away.
4. Good catch, I missed the multi-raft point. 
" Can we keep it simple so that we call pipeline.reportDatanode(dn) once a 
pipeline report with leaderId set is received?" -> Not sure I follow, when we 
receive HB from 3rd DN with leaderID we need to open pipeline, so we do need to 
maintain PipelineID -> LeaderIDSet map to know if all 3 reported. That's what 
you meant right? 

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into 
> the open state once all the datanodes in the pipeline have reported. However, 
> this can lead to a situation where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-25 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937952#comment-16937952
 ] 

Siddharth Wagle commented on HDDS-1868:
---

Hi [~ljain], regarding 3., the setLeaderId is called for leader as well so 
_notifyLeader_ is not needed, right? Agree to trigger HB right away.
4. Good catch, I missed the multi-raft point. 
" Can we keep it simple so that we call pipeline.reportDatanode(dn) once a 
pipeline report with leaderId set is received?" -> Not sure I follow, when we 
receive HB from 3rd DN with leaderID we need to open pipeline, so we do need to 
maintain PipelineID -> LeaderIDSet map to know if all 3 reported. That's what 
you meant right? 
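
A rough sketch of the bookkeeping being discussed, assuming SCM keeps a PipelineID -> reported-datanodes map and opens the pipeline once every member has reported a leader; all names are hypothetical:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical PipelineID -> reported-datanodes map; not the actual SCM code. */
public class LeaderReportTracker {
  private final Map<UUID, Set<UUID>> reportedByPipeline = new ConcurrentHashMap<>();

  /**
   * Records that a datanode reported a leader for the pipeline and returns true
   * once every expected member (e.g. 3 for a factor-THREE pipeline) has reported.
   */
  public boolean reportDatanode(UUID pipelineId, UUID datanodeId, int expectedMembers) {
    Set<UUID> reported = reportedByPipeline.computeIfAbsent(
        pipelineId, id -> ConcurrentHashMap.newKeySet());
    reported.add(datanodeId);
    return reported.size() >= expectedMembers;
  }
}
{code}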

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> On restart, Ozone pipelines start in the allocated state and are moved into 
> the open state once all the datanodes in the pipeline have reported. However, 
> this can lead to a situation where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318489=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318489
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 17:51
Start Date: 25/Sep/19 17:51
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1519: 
HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#discussion_r328258067
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
 ##
 @@ -497,14 +500,27 @@ public static File createOMDir(String dirPath) {
 return dirFile;
   }
 
-  /**
-   * Returns the DB key name of a deleted key in OM metadata store. The
> -   * deleted key name is the <key name>_<deletion timestamp>.
-   * @param key Original key name
-   * @param timestamp timestamp of deletion
-   * @return Deleted key name
-   */
-  public static String getDeletedKeyName(String key, long timestamp) {
-return key + "_" + timestamp;
+  public static RepeatedOmKeyInfo stripGdprMetadata(
 
 Review comment:
   Thanks for checking @bharatviswa504 . I have updated PR to address this and 
also renamed the method appropriately while I was writing the javadoc.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318489)
Time Spent: 1h 40m  (was: 1.5h)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving it 
> to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14863) Remove Synchronization From BlockPlacementPolicyDefault

2019-09-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937940#comment-16937940
 ] 

Hadoop QA commented on HDFS-14863:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14863 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981339/HDFS-14863.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dcc5feb17596 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bdaaa3b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27961/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27961/testReport/ |
| Max. process+thread count | 2795 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Assigned] (HDFS-14495) RBF: Duplicate FederationRPCMetrics

2019-09-25 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14495:


Assignee: hemanthboyina

> RBF: Duplicate FederationRPCMetrics
> ---
>
> Key: HDFS-14495
> URL: https://issues.apache.org/jira/browse/HDFS-14495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: hemanthboyina
>Priority: Major
>
> There are two FederationRPCMetrics displayed in the Web UI (http://<hostname>:<port>/jmx) and most of the metrics are the same.
> * FederationRPCMetrics via {{@Metrics}} and {{@Metric}} annotations
> * FederationRPCMetrics via registering FederationRPCMBean
> Can we remove {{@Metrics}} and {{@Metric}} annotations to remove duplication?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11934) Add assertion to TestDefaultNameNodePort#testGetAddressFromConf

2019-09-25 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937928#comment-16937928
 ] 

Ayush Saxena commented on HDFS-11934:
-

Thanx [~nikhil.navadiya] for the patch.
v002 LGTM +1
Will push this by EOD if no further comments.

> Add assertion to TestDefaultNameNodePort#testGetAddressFromConf
> ---
>
> Key: HDFS-11934
> URL: https://issues.apache.org/jira/browse/HDFS-11934
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: legend
>Assignee: Nikhil Navadiya
>Priority: Minor
> Attachments: HDFS-11934.002.patch, HDFS-11934.patch
>
>
> Add an additional assertion to TestDefaultNameNodePort to verify that 
> testGetAddressFromConf returns 555 when setDefaultUri(conf, "foo:555") is used.
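
A rough sketch of what the added assertion could look like; the helper used to resolve the address is an assumption, not necessarily what the attached patch does:

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.junit.Test;

public class TestDefaultNameNodePortSketch {
  @Test
  public void testGetAddressFromConfWithExplicitPort() {
    Configuration conf = new Configuration();
    // "foo:555" is normalized to hdfs://foo:555, so the parsed port should be 555.
    FileSystem.setDefaultUri(conf, "foo:555");
    assertEquals(555, DFSUtilClient.getNNAddress(conf).getPort());
  }
}
{code}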



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14865) Reduce Synchronization in DatanodeManager

2019-09-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937924#comment-16937924
 ] 

Íñigo Goiri commented on HDFS-14865:


Thanks [~belugabehr], it would be nice to have some benchmark to highlight the 
improvement here.

> Reduce Synchronization in DatanodeManager
> -
>
> Key: HDFS-14865
> URL: https://issues.apache.org/jira/browse/HDFS-14865
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14865.1.patch, HDFS-14865.2.patch, 
> HDFS-14865.3.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937923#comment-16937923
 ] 

Íñigo Goiri commented on HDFS-14850:


+1 on [^HDFS-14850.005.patch].

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch, 
> HDFS-14850.005.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration creates a new 
> Configuration.
> That is unnecessary and hurts performance. I think the Configuration only 
> needs to be created once in FileSystemAccessService#init, and 
> FileSystemAccessService#getFileSystemConfiguration can then return it.
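
As a rough illustration of the proposal, a standalone sketch (not the attached patch; the configuration keys stand in for the constants used in the snippet above):

{code:java}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch: build the Configuration once and reuse it on every call. */
public class CachedFileSystemConfiguration {
  private final Configuration cached;

  public CachedFileSystemConfiguration(Configuration serviceHadoopConf) {
    Configuration conf = new Configuration(true);      // load defaults once, at init time
    for (Map.Entry<String, String> entry : serviceHadoopConf) {
      conf.set(entry.getKey(), entry.getValue());      // copy the service-level overrides
    }
    conf.setBoolean("httpfs.fs.created", true);        // placeholder for FILE_SYSTEM_SERVICE_CREATED
    conf.set("fs.permissions.umask-mode", "000");      // force-clear umask, as in the snippet above
    this.cached = conf;
  }

  /** Replaces the per-call construction shown in the description. */
  public Configuration getFileSystemConfiguration() {
    return cached;
  }
}
{code}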



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318459=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318459
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 17:02
Start Date: 25/Sep/19 17:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1519: HDDS-2174. 
Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#issuecomment-535117197
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 23 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 943 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1030 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 25 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 804 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 27 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 19 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2511 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1519 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a94f3d24c8c3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bdaaa3b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1519/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=318456=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318456
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 25/Sep/19 17:00
Start Date: 25/Sep/19 17:00
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r328235365
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
 ##
 @@ -75,69 +66,59 @@ public OneReplicaPipelineSafeModeRule(String ruleName, 
EventQueue eventQueue,
 HDDS_SCM_SAFEMODE_ONE_NODE_REPORTED_PIPELINE_PCT  +
 " value should be >= 0.0 and <= 1.0");
 
+// Exclude CLOSED pipeline
 int totalPipelineCount =
 pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
-HddsProtos.ReplicationFactor.THREE).size();
+HddsProtos.ReplicationFactor.THREE, Pipeline.PipelineState.OPEN)
+.size() +
+pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.THREE,
+Pipeline.PipelineState.ALLOCATED).size();
 
 Review comment:
    An allocated pipeline is not guaranteed to be usable on the DNs. Should we 
exclude those from the calculation here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318456)
Time Spent: 6h 50m  (was: 6h 40m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send the pipeline creation and 
> destruction actions through heartbeat commands to each datanode.
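
A very rough sketch of the queueing idea; the types and names are hypothetical, not the actual SCM command protos:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Hypothetical sketch: pipeline actions ride on heartbeat responses instead of direct gRPC calls. */
public class HeartbeatCommandQueue {
  public enum PipelineAction { CREATE, CLOSE }

  public static final class PipelineCommand {
    public final UUID pipelineId;
    public final PipelineAction action;

    public PipelineCommand(UUID pipelineId, PipelineAction action) {
      this.pipelineId = pipelineId;
      this.action = action;
    }
  }

  private final Map<UUID, Queue<PipelineCommand>> pending = new ConcurrentHashMap<>();

  /** SCM side: enqueue a command instead of opening a gRPC channel to the datanode. */
  public void enqueue(UUID datanodeId, PipelineCommand command) {
    pending.computeIfAbsent(datanodeId, id -> new ConcurrentLinkedQueue<>()).add(command);
  }

  /** Called while building the heartbeat response for the given datanode. */
  public List<PipelineCommand> drainFor(UUID datanodeId) {
    Queue<PipelineCommand> queue = pending.remove(datanodeId);
    return queue == null ? new ArrayList<>() : new ArrayList<>(queue);
  }
}
{code}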



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?focusedWorklogId=318455=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318455
 ]

ASF GitHub Bot logged work on HDDS-2174:


Author: ASF GitHub Bot
Created on: 25/Sep/19 16:58
Start Date: 25/Sep/19 16:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1519: 
HDDS-2174. Delete GDPR Encryption Key from metadata when a Key is deleted
URL: https://github.com/apache/hadoop/pull/1519#discussion_r328234500
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
 ##
 @@ -497,14 +500,27 @@ public static File createOMDir(String dirPath) {
 return dirFile;
   }
 
-  /**
-   * Returns the DB key name of a deleted key in OM metadata store. The
-   * deleted key name is the <key name>_<deletion timestamp>.
-   * @param key Original key name
-   * @param timestamp timestamp of deletion
-   * @return Deleted key name
-   */
-  public static String getDeletedKeyName(String key, long timestamp) {
-return key + "_" + timestamp;
+  public static RepeatedOmKeyInfo stripGdprMetadata(
 
 Review comment:
   Can you add javadoc for this?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 318455)
Time Spent: 1h 20m  (was: 1h 10m)

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving it 
> to the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1228) Chunk Scanner Checkpoints

2019-09-25 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1228 started by Doroszlai, Attila.
---
> Chunk Scanner Checkpoints
> -
>
> Key: HDDS-1228
> URL: https://issues.apache.org/jira/browse/HDDS-1228
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Doroszlai, Attila
>Priority: Critical
>
> Checkpoint the progress of the chunk verification scanner.
> Save the checkpoint persistently to support scanner resume from checkpoint - 
> after a datanode restart.
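
One way such a checkpoint could be persisted, sketched with hypothetical names (not the actual datanode scanner code):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/**
 * Hypothetical sketch: persist the last fully scanned container id so the
 * chunk scanner can resume from it after a datanode restart.
 */
public class ScannerCheckpoint {
  private final Path checkpointFile;

  public ScannerCheckpoint(Path checkpointFile) {
    this.checkpointFile = checkpointFile;
  }

  public void save(long lastScannedContainerId) throws IOException {
    Path tmp = checkpointFile.resolveSibling(checkpointFile.getFileName() + ".tmp");
    Files.write(tmp, Long.toString(lastScannedContainerId).getBytes(StandardCharsets.UTF_8));
    // Rename atomically so a crash never leaves a half-written checkpoint behind.
    Files.move(tmp, checkpointFile, StandardCopyOption.ATOMIC_MOVE);
  }

  public long loadOrDefault(long defaultId) throws IOException {
    if (!Files.exists(checkpointFile)) {
      return defaultId;
    }
    String text = new String(Files.readAllBytes(checkpointFile), StandardCharsets.UTF_8).trim();
    return Long.parseLong(text);
  }
}
{code}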



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1228) Chunk Scanner Checkpoints

2019-09-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1228:
---

Assignee: Doroszlai, Attila  (was: Hrishikesh Gadre)

> Chunk Scanner Checkpoints
> -
>
> Key: HDDS-1228
> URL: https://issues.apache.org/jira/browse/HDDS-1228
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Doroszlai, Attila
>Priority: Critical
>
> Checkpoint the progress of the chunk verification scanner.
> Save the checkpoint persistently to support scanner resume from checkpoint - 
> after a datanode restart.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


