[jira] [Resolved] (HDDS-2512) Sonar TraceAllMethod NPE Could be Thrown

2019-11-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2512.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~MatthewSharp] Thanks for the contribution. [~adoroszlai] Thanks for the 
reviews. I have committed this patch to the master branch.

> Sonar TraceAllMethod NPE Could be Thrown
> ----------------------------------------
>
> Key: HDDS-2512
> URL: https://issues.apache.org/jira/browse/HDDS-2512
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Matthew Sharp
>Assignee: Matthew Sharp
>Priority: Minor
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sonar cleanup: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2WKcVY8lQ4ZsNQ=AW5md-2WKcVY8lQ4ZsNQ]
>  
>  
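For context, the flagged pattern is the classic one where a map lookup can return null and the result is dereferenced unchecked. A minimal sketch of that pattern and a fail-fast fix (class, field, and method names here are illustrative, not the actual TraceAllMethod code):

```java
import java.util.HashMap;
import java.util.Map;

public class TraceAllMethodSketch {
    private final Map<String, Runnable> methods = new HashMap<>();

    public void register(String name, Runnable body) {
        methods.put(name, body);
    }

    public void invoke(String name) throws NoSuchMethodException {
        Runnable delegate = methods.get(name);
        if (delegate == null) {
            // Fail fast with a descriptive exception instead of the NPE
            // that methods.get(name).run() would produce for unknown names.
            throw new NoSuchMethodException("No delegate registered for: " + name);
        }
        delegate.run();
    }
}
```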



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2541) CI builds should use merged code state instead of the forked branch

2019-11-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2541.

Fix Version/s: 0.5.0
   Resolution: Won't Fix

Closing based on comments in the PR. Please re-open if needed.

> CI builds should use merged code state instead of the forked branch
> -------------------------------------------------------------------
>
> Key: HDDS-2541
> URL: https://issues.apache.org/jira/browse/HDDS-2541
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now, the github actions based CI runs use the branch of the PR, which 
> is in the forked repo most of the time.
> It would be better to force a rebase/merge (without push) before the builds, 
> to test the state after the merge rather than before it.
> For example if a PR branch uses elek/hadoop-ozone:HDDS-1234 and request a 
> merge to apache/hadoop-ozone:master then the build should download the 
> HDDS-1234 from elek/hadoop-ozone AND *rebase/merge* to the 
> apache/hadoop-ozone *before* the build.
> This merge is temporary just for the build/checks (no push at all).






[jira] [Resolved] (HDDS-2543) Format specifiers should be used instead of string concatenation

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2543.

Fix Version/s: 0.5.0
   Resolution: Fixed

committed to the master branch.

> Format specifiers should be used instead of string concatenation
> ----------------------------------------------------------------
>
> Key: HDDS-2543
> URL: https://issues.apache.org/jira/browse/HDDS-2543
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Related to : 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV-=AW5md_AGKcVY8lQ4ZsV-]
>  
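The rule in question prefers a single template string over concatenating values into a message. A hedged sketch using `String.format` (with SLF4J-style loggers the analogous form is `LOG.info("Container {} is in state {}", id, state)`); the method name is illustrative:

```java
public class FormatSpecifierSketch {
    // Before (flagged): return "Container " + id + " is in state " + state;
    // After: one format string keeps the template readable, and for logger
    // calls the parameterized form also skips formatting entirely when the
    // log level is disabled.
    public static String describe(long id, String state) {
        return String.format("Container %d is in state %s", id, state);
    }
}
```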






[jira] [Resolved] (HDDS-2442) Add ServiceName support for getting Signed Cert.

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2442.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~apurohit] Thank you for the contribution. I have committed this patch to the 
master.

> Add ServiceName support for getting Signed Cert.
> ------------------------------------------------
>
> Key: HDDS-2442
> URL: https://issues.apache.org/jira/browse/HDDS-2442
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to add support for adding Service name into the Certificate Signing 
> Request.






[jira] [Resolved] (HDDS-2544) No need to call "toString()" method as formatting and string conversion is done by the Formatter.

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2544.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thank you for the contribution. I have committed this patch to the master 
branch.

> No need to call "toString()" method as formatting and string conversion is 
> done by the Formatter.
> ---------------------------------------------------------------------------
>
> Key: HDDS-2544
> URL: https://issues.apache.org/jira/browse/HDDS-2544
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Related to: 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWC=AW5md_AGKcVY8lQ4ZsWC
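The rule here is that `%s` already invokes `toString()`, so the explicit call is redundant. A small sketch (method names are illustrative):

```java
public class RedundantToStringSketch {
    // Flagged form: the explicit toString() is redundant, and it can throw
    // NullPointerException where the Formatter would simply print "null".
    public static String before(Object value) {
        return String.format("value=%s", value.toString());
    }

    // Fixed form: let the Formatter do the string conversion.
    public static String after(Object value) {
        return String.format("value=%s", value);
    }
}
```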






[jira] [Resolved] (HDDS-2546) Reorder the modifiers to comply with the Java Language Specification

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2546.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thank you for the contribution. I have committed this patch to the master.

> Reorder the modifiers to comply with the Java Language Specification
> --------------------------------------------------------------------
>
> Key: HDDS-2546
> URL: https://issues.apache.org/jira/browse/HDDS-2546
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Related to : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AbKcVY8lQ4ZsWo=AW5md_AbKcVY8lQ4ZsWo
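For reference, the Java Language Specification recommends the order accessibility modifier, then `abstract`, `static`, `final`, `transient`, `volatile`, `synchronized`, `native`, `strictfp`. A minimal sketch (the declarations are illustrative):

```java
public class ModifierOrderSketch {
    // Non-compliant order (flagged): final static public String NAME = "ozone";
    // Compliant with the JLS-recommended order:
    public static final String NAME = "ozone";

    // Same idea for methods: "synchronized public static" becomes:
    public static synchronized long now() {
        return System.currentTimeMillis();
    }
}
```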






[jira] [Resolved] (HDDS-2548) The return type of this method should be an interface such as "ConcurrentMap" rather than the implementation "ConcurrentHashMap"

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2548.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thank you for the contribution. I have committed this to the master branch.

> The return type of this method should be an interface such as "ConcurrentMap" 
> rather than the implementation "ConcurrentHashMap"
> ----------------------------------------------------------------------------
>
> Key: HDDS-2548
> URL: https://issues.apache.org/jira/browse/HDDS-2548
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Related to : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AKKcVY8lQ4ZsWH=AW5md_AKKcVY8lQ4ZsWH
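A sketch of the rule in the title: declare the return type as the `ConcurrentMap` interface so callers do not couple to one implementation. The field and method names are illustrative, not the actual Ozone code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReturnInterfaceSketch {
    private final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

    // Before (flagged): public ConcurrentHashMap<String, Long> getCache()
    // After: the interface still advertises the thread-safety contract
    // (putIfAbsent, replace, ...) without tying callers to ConcurrentHashMap.
    public ConcurrentMap<String, Long> getCache() {
        return cache;
    }
}
```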






[jira] [Resolved] (HDDS-2547) Sonar: remove volatile keyword from BlockOutputStream blockID field (#79)

2019-11-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2547.

Resolution: Fixed

[~MohammadJKhan] Thanks for the contribution. I have committed this patch to 
the master branch.

> Sonar: remove volatile keyword from BlockOutputStream blockID field (#79)
> -------------------------------------------------------------------------
>
> Key: HDDS-2547
> URL: https://issues.apache.org/jira/browse/HDDS-2547
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Mohammad
>Assignee: Mohammad
>Priority: Minor
>  Labels: pull-request-available, pull-requests-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sonar report :
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVd=false=BUG|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4jKcVY8lQ4ZsPQ=AW5md-4jKcVY8lQ4ZsPQ]
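A hedged sketch of why the keyword can be dropped, assuming (as the title suggests) that every access to the field already happens under a lock, which makes `volatile` redundant. This is an illustrative class, not the real BlockOutputStream:

```java
public class BlockIdHolderSketch {
    // Before (flagged): private volatile long blockId;
    // Every read and write below holds the same monitor, so the
    // synchronization already guarantees visibility.
    private long blockId;

    public synchronized void setBlockId(long id) {
        this.blockId = id;
    }

    public synchronized long getBlockId() {
        return blockId;
    }
}
```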






[jira] [Resolved] (HDDS-2473) Fix code reliability issues found by Sonar in Ozone Recon module.

2019-11-14 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2473.

Resolution: Fixed

Thank you for an excellent patch. Much appreciated.  I have committed this to 
the Master.

> Fix code reliability issues found by Sonar in Ozone Recon module.
> -----------------------------------------------------------------
>
> Key: HDDS-2473
> URL: https://issues.apache.org/jira/browse/HDDS-2473
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> sonarcloud.io has flagged a number of code reliability issues in Ozone recon 
> (https://sonarcloud.io/code?id=hadoop-ozone=hadoop-ozone%3Ahadoop-ozone%2Frecon%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fhadoop%2Fozone%2Frecon).
> The following issues will be triaged / fixed.
> * Double Brace Initialization should not be used
> * Resources should be closed
> * InterruptedException should not be ignored
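Hedged sketches of the three rules listed above; class and method names are illustrative, not the actual Recon code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

public class ReconFixesSketch {

    // 1. Double-brace initialization creates an anonymous subclass that holds
    //    a reference to the enclosing instance; build the map normally instead.
    public static Map<String, String> buildMap() {
        Map<String, String> m = new HashMap<>();
        m.put("k", "v");
        return m;
    }

    // 2. Resources should be closed: try-with-resources guarantees close().
    public static String firstLine(String text) throws IOException {
        try (BufferedReader r = new BufferedReader(new StringReader(text))) {
            return r.readLine();
        }
    }

    // 3. InterruptedException should not be ignored: restore the interrupt
    //    flag so callers up the stack can still observe the interruption.
    public static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // keep the interrupt status
        }
    }
}
```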






[jira] [Resolved] (HDDS-2479) Sonar : replace instanceof with catch block in XceiverClientGrpc.sendCommandWithRetry

2019-11-14 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2479.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Sonar : replace instanceof with catch block in 
> XceiverClientGrpc.sendCommandWithRetry
> -------------------------------------------------------------------------
>
> Key: HDDS-2479
> URL: https://issues.apache.org/jira/browse/HDDS-2479
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sonar issue:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV_=AW5md_AGKcVY8lQ4ZsV_
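A sketch of the refactoring the title names: instead of catching a broad exception type and branching with `instanceof`, declare a dedicated catch block and let the JVM dispatch on the exception type. The names below are illustrative, not the actual XceiverClientGrpc code:

```java
public class CatchBlockSketch {

    static class RetryableException extends Exception { }

    // Before:
    //   try { attempt(...); }
    //   catch (Exception e) {
    //     if (e instanceof RetryableException) { /* retry */ } else { throw e; }
    //   }
    // After: the catch clause itself selects the retry path.
    public static String callWithOneRetry(boolean failFirst) {
        try {
            return attempt(failFirst);
        } catch (RetryableException e) {
            try {
                return attempt(false); // single retry, enough for the sketch
            } catch (RetryableException retryFailed) {
                return "failed";
            }
        }
    }

    private static String attempt(boolean fail) throws RetryableException {
        if (fail) {
            throw new RetryableException();
        }
        return "ok";
    }
}
```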






[jira] [Resolved] (HDDS-2480) Sonar : remove log spam for exceptions inside XceiverClientGrpc.reconnect

2019-11-14 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2480.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~sdeka] Thank you for the contribution. I have committed this patch to the 
master branch.

> Sonar : remove log spam for exceptions inside XceiverClientGrpc.reconnect
> -------------------------------------------------------------------------
>
> Key: HDDS-2480
> URL: https://issues.apache.org/jira/browse/HDDS-2480
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sonar issue:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWE=AW5md_AGKcVY8lQ4ZsWE
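A hedged sketch of the log-spam pattern such issues usually target: an exception is logged and then rethrown, so one failure shows up several times in the log. The fix is to propagate it once, with context, and let a single call site decide how to log. All names here are illustrative:

```java
public class ReconnectSketch {

    static class ConnectFailedException extends Exception {
        ConnectFailedException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    // Before:
    //   } catch (IOException e) {
    //     LOG.error("Failed to reconnect to {}", node, e);  // logged here...
    //     throw e;                                           // ...and by the caller
    //   }
    // After: wrap once with context and do not log locally.
    public static String reconnect(boolean up, String node) throws ConnectFailedException {
        if (!up) {
            throw new ConnectFailedException("Failed to reconnect to " + node,
                new java.io.IOException("connection refused"));
        }
        return "connected:" + node;
    }
}
```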






[jira] [Resolved] (HDDS-2308) Switch to centos with the apache/ozone-build docker image

2019-11-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2308.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the build branch.

> Switch to centos with the apache/ozone-build docker image
> ---------------------------------------------------------
>
> Key: HDDS-2308
> URL: https://issues.apache.org/jira/browse/HDDS-2308
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: hs_err_pid16346.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I realized multiple JVM crashes in the daily builds:
>  
> {code:java}
> [ERROR] ExecutionException The forked VM terminated without properly saying 
> goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
> /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
> -XX:+HeapDumpOnOutOfMemoryError -jar 
> /workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter9018689154779946208.jar
>  /workdir/hadoop-ozone/ozonefs/target/surefire 
> 2019-10-06T14-52-40_697-jvmRun1 surefire7569723928289175829tmp 
> surefire_947955725320624341206tmp
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 139
> [ERROR] Crashed tests:
> [ERROR] org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [ERROR] ExecutionException The forked VM terminated without properly 
> saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
> /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
> -XX:+HeapDumpOnOutOfMemoryError -jar 
> /workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter5429192218879128313.jar
>  /workdir/hadoop-ozone/ozonefs/target/surefire 
> 2019-10-06T14-52-40_697-jvmRun1 surefire7227403571189445391tmp 
> surefire_1011197392458143645283tmp
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 139
> [ERROR] Crashed tests:
> [ERROR] org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
> /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
> -XX:+HeapDumpOnOutOfMemoryError -jar 
> /workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter1355604543311368443.jar
>  /workdir/hadoop-ozone/ozonefs/target/surefire 
> 2019-10-06T14-52-40_697-jvmRun1 surefire3938612864214747736tmp 
> surefire_933162535733309260236tmp
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 139
> [ERROR] ExecutionException The forked VM terminated without properly 
> saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd /workdir/hadoop-ozone/ozonefs && 
> /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Xmx2048m 
> -XX:+HeapDumpOnOutOfMemoryError -jar 
> /workdir/hadoop-ozone/ozonefs/target/surefire/surefirebooter9018689154779946208.jar
>  /workdir/hadoop-ozone/ozonefs/target/surefire 
> 2019-10-06T14-52-40_697-jvmRun1 surefire7569723928289175829tmp 
> surefire_947955725320624341206tmp
> [ERROR] Error occurred in starting fork, check output in log
> [ERROR] Process Exit Code: 139
> {code}
>  
> Based on the crash log (uploaded), it appears to be related to the RocksDB 
> JNI interface.
> The current ozone-build docker image (which provides the build environment) 
> is based on Alpine, where musl libc is used instead of glibc. It would be 
> safer to use the same glibc that is used in production.
> I tested with a centos based docker image and it seems to be more stable; I 
> didn't see any more JVM crashes.






[jira] [Resolved] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-11-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1847.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~chris.t...@gmail.com] Thanks for the contribution. [~elek] Thanks for 
retesting this patch. I have committed this change to the master branch.

> Datanode Kerberos principal and keytab config key looks inconsistent
> --------------------------------------------------------------------
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configuration keys. 
> It would be nice to have some consistency for the datanode.






[jira] [Resolved] (HDDS-2364) Add a OM metrics to find the false positive rate for the keyMayExist

2019-11-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2364.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~avijayan] Thanks for the contribution. [~bharat] Thanks for the reviews. I 
have committed this to the master branch.

> Add a OM metrics to find the false positive rate for the keyMayExist
> --------------------------------------------------------------------
>
> Key: HDDS-2364
> URL: https://issues.apache.org/jira/browse/HDDS-2364
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add a OM metrics to find the false positive rate for the keyMayExist.
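A hedged sketch of how such a metric could be computed: count the probes where `keyMayExist` answered "maybe present" but the follow-up lookup missed. The counter and method names are hypothetical, not the actual OM metrics code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class KeyMayExistMetricsSketch {
    private final AtomicLong maybeCount = new AtomicLong();
    private final AtomicLong falsePositiveCount = new AtomicLong();

    // Record one probe: keyMayExist said "maybe present", and the real
    // lookup either found the key or did not.
    public void record(boolean keyMayExist, boolean actuallyExists) {
        if (keyMayExist) {
            maybeCount.incrementAndGet();
            if (!actuallyExists) {
                // Filter said maybe, DB said no: a false positive.
                falsePositiveCount.incrementAndGet();
            }
        }
    }

    public double falsePositiveRate() {
        long probes = maybeCount.get();
        return probes == 0 ? 0.0 : (double) falsePositiveCount.get() / probes;
    }
}
```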






[jira] [Resolved] (HDDS-2412) Define description/topics/merge strategy for the github repository with .asf.yaml

2019-11-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2412.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks, I have committed this patch to the master. [~elek] Thanks for the 
contribution. [~adoroszlai] Thanks for the reviews.

> Define description/topics/merge strategy for the github repository with 
> .asf.yaml
> --------------------------------------------------------------------------
>
> Key: HDDS-2412
> URL: https://issues.apache.org/jira/browse/HDDS-2412
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> .asf.yaml helps to set different parameters on github repositories without 
> admin privileges:
> [https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories]
> This basic .asf.yaml defines description/url/topics and the allowed merge 
> buttons.






[jira] [Resolved] (HDDS-2400) Enable github actions based builds for Ozone

2019-11-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2400.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks, Committed to the master.

> Enable github actions based builds for Ozone
> --------------------------------------------
>
> Key: HDDS-2400
> URL: https://issues.apache.org/jira/browse/HDDS-2400
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Current PR checks are executed in a private branch based on the scripts in 
> [https://github.com/elek/argo-ozone]
> but the results are stored in public repositories:
> [https://github.com/elek/ozone-ci-q4|https://github.com/elek/ozone-ci-q3]
> [https://github.com/elek/ozone-ci-03]
>  
> As we discussed during the community calls, it would be great to use github 
> actions (or any other cloud based build) to make all the build definitions 
> more accessible for the community.
> [~vivekratnavel] checked CircleCI which has better reporting capabilities. 
> But INFRA has concerns about the permission model of circle-ci:
> {quote}it is highly unlikley we will allow a bot to be able to commit code 
> (whether or not that is the intention, allowing circle-ci will make this 
> possible, and is a complete no)
> {quote}
> See:
> https://issues.apache.org/jira/browse/INFRA-18131
> [https://lists.apache.org/thread.html/af52e2a3e865c01596d46374e8b294f2740587dbd59d85e132429b6c@%3Cbuilds.apache.org%3E]
>  
> Fortunately we have a clear contract: our build scripts are stored under 
> _hadoop-ozone/dev-support/checks_ (the return code shows the result, details 
> are printed out to the console output). It's very easy to experiment with 
> different build systems.
>  
> GitHub Actions seems to be an obvious choice: it's well integrated with 
> GitHub and it has more generous resource limitations.
>  
> With this Jira I propose to enable github actions based PR checks for a few 
> tests (author, rat, unit, acceptance, checkstyle, findbugs) as an experiment.
>  






[jira] [Resolved] (HDDS-2462) Add jq dependency in Contribution guideline

2019-11-12 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2462.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to Master branch.

> Add jq dependency in Contribution guideline
> -------------------------------------------
>
> Key: HDDS-2462
> URL: https://issues.apache.org/jira/browse/HDDS-2462
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Docker based tests are using JQ to parse JMX pages of different processes, 
> but the documentation does not mention it as a dependency.
> Add it to CONTRIBUTION.MD in the "Additional requirements to execute 
> different type of tests" section.






[jira] [Resolved] (HDDS-2404) Add support for Registered id as service identifier for CSR.

2019-11-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2404.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master.

> Add support for Registered id as service identifier for CSR.
> ------------------------------------------------------------
>
> Key: HDDS-2404
> URL: https://issues.apache.org/jira/browse/HDDS-2404
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Anu Engineer
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The SCM HA needs the ability to represent a group as a single entity, so 
> that tokens from each OM that is part of an HA group can be honored by the 
> datanodes. 
> This patch adds the notion of a service group ID to the Certificate 
> Infrastructure. In the next JIRAs, we will use this capability when issuing 
> certificates to OM -- especially when they are in HA mode.






[jira] [Created] (HDDS-2442) Add ServiceName support for Certificate Signing Request.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2442:
--

 Summary: Add ServiceName support for Certificate Signing Request.
 Key: HDDS-2442
 URL: https://issues.apache.org/jira/browse/HDDS-2442
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Anu Engineer
Assignee: Abhishek Purohit


We need to add support for adding Service name into the Certificate Signing 
Request.






[jira] [Created] (HDDS-2441) Add documentation for Empty-Trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2441:
--

 Summary: Add documentation for Empty-Trash command.
 Key: HDDS-2441
 URL: https://issues.apache.org/jira/browse/HDDS-2441
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: documentation
Reporter: Anu Engineer


Add documentation for empty-trash command.






[jira] [Created] (HDDS-2440) Add empty-trash to ozone shell.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2440:
--

 Summary: Add empty-trash to ozone shell.
 Key: HDDS-2440
 URL: https://issues.apache.org/jira/browse/HDDS-2440
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone CLI
Reporter: Anu Engineer


Add empty-trash command to Ozone shell. We should decide if we want to add 
this to the admin shell or normal shell.






[jira] [Created] (HDDS-2439) Add robot tests for empty-trash as owner.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2439:
--

 Summary: Add robot tests for empty-trash as owner.
 Key: HDDS-2439
 URL: https://issues.apache.org/jira/browse/HDDS-2439
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


We need to make sure that only Owner or Admins can execute the empty-trash 
command. We need to verify this using end-to-end tests, for example, robot 
tests.






[jira] [Created] (HDDS-2438) Add the core logic for empty-trash

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2438:
--

 Summary: Add the core logic for empty-trash
 Key: HDDS-2438
 URL: https://issues.apache.org/jira/browse/HDDS-2438
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer









[jira] [Created] (HDDS-2437) Restrict empty-trash to admins and owners only

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2437:
--

 Summary: Restrict empty-trash to admins and owners only
 Key: HDDS-2437
 URL: https://issues.apache.org/jira/browse/HDDS-2437
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Make sure that only the owner of a key/administrator can empty-trash. The 
delete ACL is not enough for empty-trash. This is because a shared bucket can 
have deletes, but the owner should be able to recover them. Once empty-trash 
is executed, not even the owner will be able to recover the deleted keys.







[jira] [Created] (HDDS-2436) Add security profile support for empty-trash command

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2436:
--

 Summary: Add security profile support for empty-trash command
 Key: HDDS-2436
 URL: https://issues.apache.org/jira/browse/HDDS-2436
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add support for certain groups to have the ability to run empty-trash. It 
might be the case that we want this command to be run only by admins.






[jira] [Created] (HDDS-2435) Add the ability to disable empty-trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2435:
--

 Summary: Add the ability to disable empty-trash command.
 Key: HDDS-2435
 URL: https://issues.apache.org/jira/browse/HDDS-2435
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


Add a configuration key to disable the empty-trash command. We can discuss 
whether this should be a system-wide setting or per bucket; a system-wide 
setting is likely simpler.






[jira] [Created] (HDDS-2434) Add server side support for empty-trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2434:
--

 Summary: Add server side support for empty-trash command.
 Key: HDDS-2434
 URL: https://issues.apache.org/jira/browse/HDDS-2434
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add server side support for empty-trash command.






[jira] [Created] (HDDS-2433) Add client side support for the empty-trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2433:
--

 Summary: Add client side support for the empty-trash command.
 Key: HDDS-2433
 URL: https://issues.apache.org/jira/browse/HDDS-2433
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add client side support for the empty-trash command.






[jira] [Created] (HDDS-2432) Add documentation for the recover-trash

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2432:
--

 Summary: Add documentation for the recover-trash
 Key: HDDS-2432
 URL: https://issues.apache.org/jira/browse/HDDS-2432
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: documentation
Reporter: Anu Engineer


Add documentation for the recover-trash command in Ozone Documentation.






[jira] [Created] (HDDS-2431) Add recover-trash command to the ozone shell.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2431:
--

 Summary: Add recover-trash command to the ozone shell.
 Key: HDDS-2431
 URL: https://issues.apache.org/jira/browse/HDDS-2431
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone CLI
Reporter: Anu Engineer


Add recover-trash command to the Ozone CLI.






[jira] [Created] (HDDS-2430) Recover-trash should warn and skip if at-rest encryption is enabled and keys are missing.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2430:
--

 Summary: Recover-trash should warn and skip if at-rest encryption 
is enabled and keys are missing.
 Key: HDDS-2430
 URL: https://issues.apache.org/jira/browse/HDDS-2430
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


If TDE is enabled, recovering a key is useful only if the encryption keys used 
to protect it are still available. We should warn and fail the recovery if those 
keys are missing.






[jira] [Created] (HDDS-2429) Recover-trash should warn and skip if the key is GDPR-ed key that recovery is pointless since the encryption keys are lost.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2429:
--

 Summary: Recover-trash should warn and skip if the key is GDPR-ed 
key that recovery is pointless since the encryption keys are lost.
 Key: HDDS-2429
 URL: https://issues.apache.org/jira/browse/HDDS-2429
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


If a bucket has GDPR enabled, the encryption keys needed to read the deleted 
data from the blocks are irrecoverably lost once a key is deleted. In that case, 
a recover from trash is pointless. The recover-trash command should detect this 
case and let the users know about it.






[jira] [Created] (HDDS-2428) Rename a recovered file as .recovered if the file already exists in the target bucket.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2428:
--

 Summary: Rename a recovered file as .recovered if the file already 
exists in the target bucket.
 Key: HDDS-2428
 URL: https://issues.apache.org/jira/browse/HDDS-2428
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


During recovery, if the file name already exists in the target bucket, the key 
being recovered should be renamed automatically. The proposal is to rename it 
as key.recovered.
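The proposed collision handling could be sketched as below. The class and method names are illustrative only; the sketch also assumes a numeric suffix is appended when even the `.recovered` name is taken, which goes beyond what the proposal above specifies.

```java
import java.util.Set;

/** Illustrative sketch of name-collision handling for recover-trash. */
public final class RecoveredKeyNamer {
  /**
   * Keep the key name if it is free in the target bucket; otherwise append
   * ".recovered" (and a counter if even that candidate name is taken).
   */
  public static String resolve(String keyName, Set<String> existingKeys) {
    if (!existingKeys.contains(keyName)) {
      return keyName;
    }
    String candidate = keyName + ".recovered";
    int i = 1;
    while (existingKeys.contains(candidate)) {
      candidate = keyName + ".recovered." + i++;
    }
    return candidate;
  }
}
```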






[jira] [Created] (HDDS-2426) Support recover-trash to an existing bucket.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2426:
--

 Summary:  Support recover-trash to an existing bucket.
 Key: HDDS-2426
 URL: https://issues.apache.org/jira/browse/HDDS-2426
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Support recovering trash to an existing bucket. We should also add a config key 
that disables this mode, so admins can force recovery to always go to a new 
bucket.






[jira] [Created] (HDDS-2425) Support the ability to recover-trash to a new bucket.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2425:
--

 Summary: Support the ability to recover-trash to a new bucket.
 Key: HDDS-2425
 URL: https://issues.apache.org/jira/browse/HDDS-2425
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


recover-trash can be run to recover to an existing bucket or to a new bucket. 
If the bucket does not exist, the recover-trash command should create that 
bucket automatically.






[jira] [Created] (HDDS-2424) Add the recover-trash command server side handling.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2424:
--

 Summary: Add the recover-trash command server side handling.
 Key: HDDS-2424
 URL: https://issues.apache.org/jira/browse/HDDS-2424
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add the standard server side code for command handling.






[jira] [Created] (HDDS-2423) Add the recover-trash command client side code

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2423:
--

 Summary: Add the recover-trash command client side code
 Key: HDDS-2423
 URL: https://issues.apache.org/jira/browse/HDDS-2423
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add protobuf, RpcClient, and ClientSideTranslator code for the recover-trash 
command.






[jira] [Created] (HDDS-2422) Add robot tests for list-trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2422:
--

 Summary: Add robot tests for list-trash command.
 Key: HDDS-2422
 URL: https://issues.apache.org/jira/browse/HDDS-2422
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: test
Reporter: Anu Engineer


Add robot tests for list-trash command and add those tests to integration.sh so 
these commands are run as part of CI.






[jira] [Created] (HDDS-2421) Add documentation for list trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2421:
--

 Summary: Add documentation for list trash command.
 Key: HDDS-2421
 URL: https://issues.apache.org/jira/browse/HDDS-2421
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: documentation
Reporter: Anu Engineer


Add documentation about the list-trash command.






[jira] [Created] (HDDS-2420) Add the Ozone shell support for list-trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2420:
--

 Summary: Add the Ozone shell support for list-trash command.
 Key: HDDS-2420
 URL: https://issues.apache.org/jira/browse/HDDS-2420
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone CLI
Reporter: Anu Engineer


Add support for list-trash command in Ozone CLI. Please see the attached design 
doc.






[jira] [Created] (HDDS-2419) Add the core logic to process list trash command.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2419:
--

 Summary: Add the core logic to process list trash command.
 Key: HDDS-2419
 URL: https://issues.apache.org/jira/browse/HDDS-2419
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add the core logic of reading from the deleted table and returning the entries 
that match the user query.
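The filtering step could look roughly like the following. This is a simplified sketch: the real deleted-key table is a RocksDB-backed table, modelled here as a plain map keyed by `/volume/bucket/key` paths, and the class and method names are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative sketch of list-trash filtering over a deleted-key table. */
public final class TrashLister {
  /**
   * Scan the deleted-key table (modelled as a map keyed by /volume/bucket/key)
   * and return the entries under the given volume/bucket matching the prefix.
   */
  public static List<String> listTrash(Map<String, String> deletedTable,
      String volume, String bucket, String keyPrefix) {
    String tablePrefix = "/" + volume + "/" + bucket + "/" + keyPrefix;
    List<String> matches = new ArrayList<>();
    for (String dbKey : deletedTable.keySet()) {
      if (dbKey.startsWith(tablePrefix)) {
        matches.add(dbKey);
      }
    }
    return matches;
  }
}
```

In the real implementation a prefix seek on the table iterator would avoid the full scan shown here.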






[jira] [Created] (HDDS-2418) Add the list trash command server side handling.

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2418:
--

 Summary: Add the list trash command server side handling.
 Key: HDDS-2418
 URL: https://issues.apache.org/jira/browse/HDDS-2418
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add the standard server-side command-handling code.






[jira] [Created] (HDDS-2417) Add the list trash command to the client side

2019-11-07 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2417:
--

 Summary: Add the list trash command to the client side
 Key: HDDS-2417
 URL: https://issues.apache.org/jira/browse/HDDS-2417
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Anu Engineer


Add the list-trash command to the protobuf files and to the client side 
translator.






[jira] [Created] (HDDS-2404) Add support for Registered id as service identifier for CSR.

2019-11-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2404:
--

 Summary: Add support for Registered id as service identifier for 
CSR.
 Key: HDDS-2404
 URL: https://issues.apache.org/jira/browse/HDDS-2404
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Anu Engineer


SCM HA needs the ability to represent a group as a single entity, so that 
tokens issued for any OM that is part of an HA group can be honored by the 
datanodes.

This patch adds the notion of a service group ID to the certificate 
infrastructure. In the next JIRAs, we will use this capability when issuing 
certificates to OMs, especially when they are in HA mode.






[jira] [Resolved] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-31 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1847.

Fix Version/s: 0.5.0
   Resolution: Fixed

I have committed this patch to the master branch. 

> Datanode Kerberos principal and keytab config key looks inconsistent
> 
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configuration keys.  
> It would be nice to have some consistency for the datanode.






[jira] [Resolved] (HDDS-2366) Remove ozone.enabled flag

2019-10-30 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2366.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master branch. [~swagle] Thank you for the contribution.

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and the check can now be removed.
>  
> This was needed when Ozone was part of Hadoop and we did not want to start 
> Ozone services by default. There is no such requirement anymore.






[jira] [Reopened] (HDDS-426) Add field modificationTime for Volume and Bucket

2019-10-29 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDDS-426:
---

> Add field modificationTime for Volume and Bucket
> 
>
> Key: HDDS-426
> URL: https://issues.apache.org/jira/browse/HDDS-426
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
>
> There are update operations that can be performed for Volume, Bucket and Key.
> While Key records the modification time, Volume and Bucket do not capture 
> this.
>  
> This Jira proposes to add the required field to Volume and Bucket in order to 
> capture the modificationTime.
>  
> Current Status:
> {noformat}
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoVolume /dummyvol
> 2018-09-10 17:16:12 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "owner" : {
> "name" : "bilbo"
> },
> "quota" : {
> "unit" : "TB",
> "size" : 1048576
> },
> "volumeName" : "dummyvol",
> "createdOn" : "Mon, 10 Sep 2018 17:11:32 GMT",
> "createdBy" : "bilbo"
> }
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoBucket /dummyvol/mybuck
> 2018-09-10 17:15:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "volumeName" : "dummyvol",
> "bucketName" : "mybuck",
> "createdOn" : "Mon, 10 Sep 2018 17:12:09 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
> }, {
> "type" : "USER",
> "name" : "spark",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> }
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoKey /dummyvol/mybuck/myk1
> 2018-09-10 17:19:43 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 10 Sep 2018 17:19:04 GMT",
> "modifiedOn" : "Mon, 10 Sep 2018 17:19:04 GMT",
> "size" : 0,
> "keyName" : "myk1",
> "keyLocations" : [ ]
> }{noformat}






[jira] [Resolved] (HDDS-426) Add field modificationTime for Volume and Bucket

2019-10-29 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-426.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

Looks like HDDS-1551 added Creation Time to bucketInfo and HDDS-1620 added 
creationTime to VolumeInfo. 

> Add field modificationTime for Volume and Bucket
> 
>
> Key: HDDS-426
> URL: https://issues.apache.org/jira/browse/HDDS-426
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
>
> There are update operations that can be performed for Volume, Bucket and Key.
> While Key records the modification time, Volume and Bucket do not capture 
> this.
>  
> This Jira proposes to add the required field to Volume and Bucket in order to 
> capture the modificationTime.
>  
> Current Status:
> {noformat}
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoVolume /dummyvol
> 2018-09-10 17:16:12 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "owner" : {
> "name" : "bilbo"
> },
> "quota" : {
> "unit" : "TB",
> "size" : 1048576
> },
> "volumeName" : "dummyvol",
> "createdOn" : "Mon, 10 Sep 2018 17:11:32 GMT",
> "createdBy" : "bilbo"
> }
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoBucket /dummyvol/mybuck
> 2018-09-10 17:15:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "volumeName" : "dummyvol",
> "bucketName" : "mybuck",
> "createdOn" : "Mon, 10 Sep 2018 17:12:09 GMT",
> "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
> }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
> }, {
> "type" : "USER",
> "name" : "spark",
> "rights" : "READ_WRITE"
> } ],
> "versioning" : "DISABLED",
> "storageType" : "DISK"
> }
> hadoop@1987b5de4203:~$ ./bin/ozone oz -infoKey /dummyvol/mybuck/myk1
> 2018-09-10 17:19:43 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Mon, 10 Sep 2018 17:19:04 GMT",
> "modifiedOn" : "Mon, 10 Sep 2018 17:19:04 GMT",
> "size" : 0,
> "keyName" : "myk1",
> "keyLocations" : [ ]
> }{noformat}






[jira] [Resolved] (HDDS-2374) Make Ozone Readme.txt point to the Ozone websites instead of Hadoop.

2019-10-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2374.

Fix Version/s: 0.5.0
   Resolution: Fixed

Merged to the master branch.

> Make Ozone Readme.txt point to the Ozone websites instead of Hadoop.
> 
>
> Key: HDDS-2374
> URL: https://issues.apache.org/jira/browse/HDDS-2374
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See the title.






[jira] [Created] (HDDS-2374) Make Ozone Readme.txt point to the Ozone websites instead of Hadoop.

2019-10-28 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2374:
--

 Summary: Make Ozone Readme.txt point to the Ozone websites instead 
of Hadoop.
 Key: HDDS-2374
 URL: https://issues.apache.org/jira/browse/HDDS-2374
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


See the title.






[jira] [Resolved] (HDDS-2254) Fix flaky unit testTestContainerStateMachine#testRatisSnapshotRetention

2019-10-17 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2254.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master branch.

> Fix flaky unit testTestContainerStateMachine#testRatisSnapshotRetention
> ---
>
> Key: HDDS-2254
> URL: https://issues.apache.org/jira/browse/HDDS-2254
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Test always fails with assertion error:
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestContainerStateMachine.testRatisSnapshotRetention(TestContainerStateMachine.java:188)
> {code}






[jira] [Resolved] (HDDS-2302) Manage common pom versions in one common place

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2302.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Manage common pom versions in one common place
> --
>
> Key: HDDS-2302
> URL: https://issues.apache.org/jira/browse/HDDS-2302
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some of the versions (e.g. ozone.version, hdds.version, ratis.version) are 
> required for both the ozone and hdds subprojects. As we have a common pom.xml, 
> it is safer to manage them in one place, the root pom.xml, instead of 
> managing them multiple times.






[jira] [Resolved] (HDDS-2289) Put testing information and a problem description to the github PR template

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2289.

Resolution: Fixed

> Put testing information and a problem description to the github PR template
> ---
>
> Key: HDDS-2289
> URL: https://issues.apache.org/jira/browse/HDDS-2289
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This was suggested by [~aengineer] during an offline discussion: add more 
> information to the github PR template, based on the Ambari template (by 
> Vivek):
> https://github.com/apache/ambari/commit/579cec8cf5bcfe1a1a0feacf055ed6569f674e6a






[jira] [Resolved] (HDDS-2316) Support to skip recon and/or ozonefs during the build

2019-10-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2316.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the master.

> Support to skip recon and/or ozonefs during the build
> -
>
> Key: HDDS-2316
> URL: https://issues.apache.org/jira/browse/HDDS-2316
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> (I almost use this Jira summary: "Fast-lane to ozone build" It was very hard 
> to resist...)
>  
>  The two slowest parts of the Ozone build as of now:
>  # The (multiple) shading of ozonefs
>  # And the frontend build/obfuscation of ozone recon
> [~aengineer] suggested introducing options to skip them, as they are not 
> required for every build.
> This patch introduces '-DskipRecon' and '-DskipShade' options to provide a 
> faster way to create a *partial* build.






[jira] [Created] (HDDS-2256) Checkstyle issues in CheckSumByteBuffer.java

2019-10-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2256:
--

 Summary: Checkstyle issues in CheckSumByteBuffer.java
 Key: HDDS-2256
 URL: https://issues.apache.org/jira/browse/HDDS-2256
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


HDDS- added some checkstyle failures in ChecksumByteBuffer.java. This JIRA 
tracks and fixes those checkstyle issues.

{code}
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 84: Inner assignments should be avoided.
 85: Inner assignments should be avoided.
 101: child has incorrect indentation level 8, expected level should be 6.
 102: child has incorrect indentation level 8, expected level should be 6.
 103:  child has incorrect indentation level 8, expected level should be 6.
 104:  child has incorrect indentation level 8, expected level should be 6.
 105: child has incorrect indentation level 8, expected level should be 6.
 106:  child has incorrect indentation level 8, expected level should be 6.
 107:  child has incorrect indentation level 8, expected level should be 6.
 108: child has incorrect indentation level 8, expected level should be 6.
{code}






[jira] [Resolved] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2020.

Fix Version/s: 0.4.1
   Resolution: Fixed

Committed to both 0.4.1 and trunk

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire 
> encryption. 
> Removing mTLS support also simplifies the GRPC server/client configuration.






[jira] [Resolved] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2200.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk. Thanks for the contribution.

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}
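The NullPointerException above comes from the reprocess task dereferencing an OM DB snapshot that was never fetched. A minimal sketch of the defensive pattern (illustrative names; ContainerKeyMapperTask's real signature differs) is to fail fast on the null snapshot instead of letting the NPE surface from inside the task executor:

```java
// Sketch of the defensive pattern: treat a missing OM DB snapshot as an
// explicit "skip this run" signal rather than letting a NullPointerException
// escape from deep inside the reprocess task. Names are illustrative.
public class SnapshotGuard {

    /** Returns true if reprocessing ran, false if it was skipped. */
    public static boolean reprocess(Object omSnapshotDb) {
        if (omSnapshotDb == null) {
            // Log and skip instead of NPE; the next sync attempt will retry.
            System.err.println("Null snapshot got from OM; skipping reprocess.");
            return false;
        }
        // ... iterate the snapshot's key table and rebuild container mappings ...
        return true;
    }
}
```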






[jira] [Resolved] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2226.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk.

> S3 Secrets should use a strong RNG
> --
>
> Key: HDDS-2226
> URL: https://issues.apache.org/jira/browse/HDDS-2226
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The S3 token generation under Ozone should use a strong RNG.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
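A sketch of what "strong RNG" means in practice: derive the secret bytes from java.security.SecureRandom rather than java.util.Random. The secret length and hex encoding here are illustrative assumptions, not necessarily what Ozone's S3 secret store uses.

```java
import java.security.SecureRandom;

// Sketch: generate an S3-style secret from a cryptographically strong RNG.
// java.util.Random is seeded predictably and must not be used for secrets;
// SecureRandom draws from the platform's CSPRNG.
public class S3SecretSketch {

    private static final SecureRandom RNG = new SecureRandom();

    /** Returns a hex-encoded secret of the requested byte length. */
    public static String generateSecret(int numBytes) {
        byte[] raw = new byte[numBytes];
        RNG.nextBytes(raw);
        StringBuilder hex = new StringBuilder(numBytes * 2);
        for (byte b : raw) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```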






[jira] [Resolved] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2227.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk branch.

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the GDPR symmetric key. While GDPR is not a 
> security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
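A minimal sketch of generating the symmetric key from SecureRandom via the JCA KeyGenerator. AES-128 is an assumption here for illustration; the cipher and key size Ozone's GDPR feature actually uses may differ.

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;

// Sketch: derive the GDPR per-key symmetric key from SecureRandom via the
// JCA KeyGenerator, instead of a weaker RNG. AES-128 is an assumption.
public class GdprKeySketch {

    public static SecretKey newSymmetricKey() {
        try {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(128, new SecureRandom()); // strong seed source
            return gen.generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```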






[jira] [Created] (HDDS-2234) Running rat.sh without any parameter on mac fails due to the following files.

2019-10-02 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2234:
--

 Summary: Running rat.sh without any parameter on mac fails due to 
the following files.
 Key: HDDS-2234
 URL: https://issues.apache.org/jira/browse/HDDS-2234
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn  -rf :hadoop-ozone-recon
[INFO] Build failures were ignored.
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/index.html
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/main.96eebd44.chunk.css
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/main.5bb53989.chunk.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/precache-manifest.1d05d7a103ee9d6b280ef7adfcab3c01.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/service-worker.js






[jira] [Resolved] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2201.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk.

> Rename VolumeList to UserVolumeInfo
> ---
>
> Key: HDDS-2201
> URL: https://issues.apache.org/jira/browse/HDDS-2201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Under Ozone Manager, a Volume points to a structure called VolumeInfo, a 
> Bucket points to BucketInfo, and a Key points to KeyInfo. However, a User 
> points to VolumeList, which is inconsistent.
> This JIRA proposes to refactor VolumeList as UserVolumeInfo. Why not 
> UserInfo? Because that name is already taken by the security work of Ozone 
> Manager.






[jira] [Created] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-01 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2227:
--

 Summary: GDPR key generation could benefit from secureRandom
 Key: HDDS-2227
 URL: https://issues.apache.org/jira/browse/HDDS-2227
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


SecureRandom can be used for the GDPR symmetric key. While GDPR is not a 
security feature, this is a good-to-have optional feature.






[jira] [Created] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-01 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2226:
--

 Summary: S3 Secrets should use a strong RNG
 Key: HDDS-2226
 URL: https://issues.apache.org/jira/browse/HDDS-2226
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: S3
Reporter: Anu Engineer
Assignee: Anu Engineer


The S3 token generation under Ozone should use a strong RNG.






[jira] [Created] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-09-27 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2201:
--

 Summary: Rename VolumeList to UserVolumeInfo
 Key: HDDS-2201
 URL: https://issues.apache.org/jira/browse/HDDS-2201
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Anu Engineer
Assignee: Anu Engineer


Under Ozone Manager, a Volume points to a structure called VolumeInfo, a Bucket 
points to BucketInfo, and a Key points to KeyInfo. However, a User points to 
VolumeList, which is inconsistent.

This JIRA proposes to refactor VolumeList as UserVolumeInfo. Why not UserInfo? 
Because that name is already taken by the security work of Ozone Manager.






[jira] [Resolved] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2193.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.
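The five counters can be sketched with plain AtomicLongs. Ozone's real implementation registers metrics through the Hadoop metrics2 framework, so the actual names and wiring differ; this only illustrates the shape of the change.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the five SCM container counters using plain AtomicLongs.
// The actual code would expose these through Hadoop's metrics2 framework.
public class ScmContainerMetricsSketch {

    private final AtomicLong successfulCreates = new AtomicLong();
    private final AtomicLong failedCreates = new AtomicLong();
    private final AtomicLong successfulDeletes = new AtomicLong();
    private final AtomicLong failedDeletes = new AtomicLong();
    private final AtomicLong listOps = new AtomicLong();

    public void incrSuccessfulCreate() { successfulCreates.incrementAndGet(); }
    public void incrFailedCreate()     { failedCreates.incrementAndGet(); }
    public void incrSuccessfulDelete() { successfulDeletes.incrementAndGet(); }
    public void incrFailedDelete()     { failedDeletes.incrementAndGet(); }
    public void incrListOps()          { listOps.incrementAndGet(); }

    public long getSuccessfulCreates() { return successfulCreates.get(); }
    public long getListOps()           { return listOps.get(); }
}
```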






[jira] [Resolved] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2180.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.






[jira] [Created] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2180:
--

 Summary: Add Object ID and update ID on VolumeList Object
 Key: HDDS-2180
 URL: https://issues.apache.org/jira/browse/HDDS-2180
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This JIRA proposes to add Object ID and Update IDs to the Volume List Object.






[jira] [Resolved] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2159.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet: the servlet member field pid 
> should not be used for per-request assignment, because concurrent requests 
> can overwrite each other's value.
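The bug pattern can be sketched as follows (illustrative; the real ProfileServlet fields differ). A servlet is a singleton, so storing per-request state in a member field lets concurrent requests overwrite each other, while a local variable is confined to the handling thread.

```java
// Sketch of the ProfileServlet bug pattern. Servlets are singletons, so a
// member field written per-request is shared by all concurrent requests.
public class PidHolderSketch {

    private Integer pid; // BUGGY: shared mutable per-request state

    public int handleRacy(int requestPid) {
        pid = requestPid;          // another thread may overwrite this...
        return pid;                // ...before we read it back
    }

    public int handleSafe(int requestPid) {
        int localPid = requestPid; // FIX: thread-confined local variable
        return localPid;
    }
}
```

The safe version is trivially correct under concurrency because nothing escapes the stack frame; the racy version is only correct when a single request is in flight.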






[jira] [Created] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2170:
--

 Summary: Add Object IDs and Update ID to Volume Object
 Key: HDDS-2170
 URL: https://issues.apache.org/jira/browse/HDDS-2170
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This patch proposes to add object ID and update ID when a volume is created. 






[jira] [Resolved] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2128.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.






[jira] [Resolved] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1982.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~sodonnell] Thank you for the contribution. I have committed this patch to the 
HDDS-1880-Decom branch.

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node decommissioning or entering 
> maintenance can also be HEALTHY, STALE or go DEAD.
> Therefore in this Jira I propose we should model a node state with two 
> different sets of values. The first, is effectively the liveliness of the 
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the total number of states for a node is the cross-product of the 
> two lists above; however, it probably makes sense to keep the two states 
> separate internally.
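The two-axis model in the description can be sketched with a pair of enums. The enum and class names here are illustrative, not the actual SCM types.

```java
// Sketch of modelling node state on two independent axes, per the proposal:
// liveliness (health) and operational state are kept separate, and the full
// state space is their cross-product (3 x 5 = 15 combinations).
public class NodeStatusSketch {

    public enum Health { HEALTHY, STALE, DEAD }

    public enum OperationalState {
        IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
        ENTERING_MAINTENANCE, IN_MAINTENANCE
    }

    private final Health health;
    private final OperationalState opState;

    public NodeStatusSketch(Health health, OperationalState opState) {
        this.health = health;
        this.opState = opState;
    }

    /** A decommissioning node can still be healthy, stale, or dead. */
    public boolean isDecommissioning() {
        return opState == OperationalState.DECOMMISSIONING;
    }

    public Health getHealth() { return health; }
    public OperationalState getOperationalState() { return opState; }
}
```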






[jira] [Created] (HDDS-2093) Add Ranger specific information to documentation

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2093:
--

 Summary: Add Ranger specific information to documentation
 Key: HDDS-2093
 URL: https://issues.apache.org/jira/browse/HDDS-2093
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Apache Ranger version 2.0 supports an Ozone Manager plug-in, which allows Ozone 
policies to be controlled via Ranger. We need to update the Ozone documentation 
to explain how to configure and use Apache Ranger as Ozone's policy engine.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2092) Support groups in administrators in SCM

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2092:
--

 Summary: Support groups in administrators in SCM
 Key: HDDS-2092
 URL: https://issues.apache.org/jira/browse/HDDS-2092
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Today, SCM administrators are a set of users specified by a key in Ozone. We 
should add support for groups, so that groups, instead of individual users, can 
be specified as SCM administrators.
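A sketch of the user-or-group admin check. In Hadoop, the caller's groups would come from UserGroupInformation; here the caller supplies them directly, and all names are illustrative.

```java
import java.util.Set;

// Sketch: an admin check that accepts either an explicit user name or
// membership in an admin group. The group list would normally come from
// Hadoop's UserGroupInformation; here it is passed in by the caller.
public class AdminCheckSketch {

    private final Set<String> adminUsers;
    private final Set<String> adminGroups;

    public AdminCheckSketch(Set<String> adminUsers, Set<String> adminGroups) {
        this.adminUsers = adminUsers;
        this.adminGroups = adminGroups;
    }

    public boolean isAdmin(String user, Set<String> userGroups) {
        if (adminUsers.contains(user)) {
            return true;
        }
        // Behaviour proposed by this JIRA: any shared group grants admin.
        for (String g : userGroups) {
            if (adminGroups.contains(g)) {
                return true;
            }
        }
        return false;
    }
}
```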






[jira] [Created] (HDDS-2091) Document who the administrators are under Ozone.

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2091:
--

 Summary: Document who the administrators are under Ozone. 
 Key: HDDS-2091
 URL: https://issues.apache.org/jira/browse/HDDS-2091
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Ozone uses ozone.administrators as a key to indicate who the administrators 
are. This information is missing from the documentation. We need to add it to 
both the security pages and the CLI pages.






[jira] [Resolved] (HDDS-1708) Expose metrics for unhealthy containers

2019-09-05 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1708.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~hgadre] Thank you for the contribution. I have committed this patch to the 
trunk branch.

> Expose metrics for unhealthy containers
> ---
>
> Key: HDDS-1708
> URL: https://issues.apache.org/jira/browse/HDDS-1708
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1201 introduced a capability for datanode to report unhealthy containers 
> to SCM. This Jira is to expose this information as a metric for user 
> visibility.






[jira] [Created] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2087:
--

 Summary: Remove the hard coded config key in ChunkManager
 Key: HDDS-2087
 URL: https://issues.apache.org/jira/browse/HDDS-2087
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


We have a hard-coded config key in {{ChunkManagerFactory.java}}.

 
{code}
boolean scrubber = config.getBoolean(
    "hdds.containerscrub.enabled",
    false);
{code}
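The usual fix is to hoist the literal key and its default into shared constants that every call site references. The constant names below are illustrative, not Ozone's actual ones, and java.util.Properties stands in for Hadoop's Configuration.

```java
import java.util.Properties;

// Sketch of the fix: the literal key and its default live in one place, and
// call sites reference the constants. Names are illustrative; the stand-in
// for Hadoop's Configuration#getBoolean here is java.util.Properties.
public class ScrubberConfigSketch {

    public static final String HDDS_CONTAINERSCRUB_ENABLED =
        "hdds.containerscrub.enabled";
    public static final boolean HDDS_CONTAINERSCRUB_ENABLED_DEFAULT = false;

    public static boolean isScrubberEnabled(Properties config) {
        String value = config.getProperty(HDDS_CONTAINERSCRUB_ENABLED);
        return value == null
            ? HDDS_CONTAINERSCRUB_ENABLED_DEFAULT
            : Boolean.parseBoolean(value);
    }
}
```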






[jira] [Resolved] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-09-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1200.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~hgadre] Thank you for the contribution. I have committed this patch to the 
trunk.

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.
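The scrubber's inner verification step can be sketched with a CRC32: recompute the checksum from the bytes on disk and compare it with the stored value. CRC32 here is an illustrative stand-in for whichever checksum type the chunk metadata actually records.

```java
import java.util.zip.CRC32;

// Sketch of the scrubber's inner loop: recompute a chunk's checksum from
// the bytes on disk and compare with the stored value. CRC32 stands in for
// whichever checksum algorithm the chunk metadata records.
public class ChunkScrubSketch {

    public static long checksum(byte[] chunkData) {
        CRC32 crc = new CRC32();
        crc.update(chunkData, 0, chunkData.length);
        return crc.getValue();
    }

    /** Returns true when the on-disk bytes still match the stored checksum. */
    public static boolean verifyChunk(byte[] chunkData, long storedChecksum) {
        return checksum(chunkData) == storedChecksum;
    }
}
```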






[jira] [Resolved] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1881.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thank you for all the comments, discussions and contributions to this design.  
I have committed this design doc, since we have not had any more comments for 
the last 30 days.

> Design doc: decommissioning in Ozone
> 
>
> Key: HDDS-1881
> URL: https://issues.apache.org/jira/browse/HDDS-1881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: design, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 43h
>  Remaining Estimate: 0h
>
> Design doc can be attached to the documentation. In this jira the design doc 
> will be attached and merged to the documentation page.






[jira] [Created] (HDDS-2049) Fix Ozone Rest client documentation

2019-08-28 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2049:
--

 Summary: Fix Ozone Rest client documentation
 Key: HDDS-2049
 URL: https://issues.apache.org/jira/browse/HDDS-2049
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 0.5.0


We have removed the Ozone REST protocol support and moved to using S3 as the 
standard REST protocol. The Ozone documentation needs to be updated for the 
0.5.0 release.






[jira] [Resolved] (HDDS-2003) Ozone - Easy simple start does not work as expected.

2019-08-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2003.

Resolution: Later

> Ozone - Easy simple start does not work as expected.
> 
>
> Key: HDDS-2003
> URL: https://issues.apache.org/jira/browse/HDDS-2003
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: Anu Engineer
>Priority: Blocker
>
> During the verification of Ozone documentation, I followed the instructions 
> on the easy start page.
>  
> Run this instruction:
> {noformat}
> docker run -p 9878:9878 -p 9876:9876 apache/ozone {noformat}
> Followed by this command in another window:
> {noformat}
>   aws s3api --endpoint http://localhost:9878/ create-bucket 
> --bucket=bucket1{noformat}
>  
> The S3Gateway is probably crashing since it is not able to find the Ozone 
> client. This is on trunk; I will re-verify on Ozone-0.4.1. FYI: [~elek], 
> [~bharatviswa], [~nandakumar131].
>  
> Here is the crash stack:
> {noformat}
>  2019-08-21 20:01:44 INFO  SCMChillModeManager:274 - SCM in chill mode. 1 
> DataNodes registered, 1 required.2019-08-21 20:01:44 INFO  
> SCMChillModeManager:274 - SCM in chill mode. 1 DataNodes registered, 1 
> required.2019-08-21 20:01:44 INFO  SCMChillModeManager:110 - SCM exiting 
> chill mode.2019-08-21 20:02:40 ERROR OzoneClientFactory:294 - Couldn't create 
> protocol class org.apache.hadoop.ozone.client.rpc.RpcClient 
> exception:java.lang.reflect.InvocationTargetException at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:45)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  {noformat}
>  






[jira] [Created] (HDDS-2006) Autogenerated docker config fails with space in the file name issue.

2019-08-21 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2006:
--

 Summary: Autogenerated docker config fails with space in the file 
name issue.
 Key: HDDS-2006
 URL: https://issues.apache.org/jira/browse/HDDS-2006
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


If you follow the instructions in the "Local multi-container cluster" section, 
generate docker-config, and later try to use it, the docker-compose up -d 
command will fail with:

*ERROR: In file ~/testOzoneInstructions/docker-config: environment variable 
name 'Setting up environment!' may not contains whitespace.*






[jira] [Created] (HDDS-2005) Local Multi-Container Cluster Example generates spurious lines in the config

2019-08-21 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2005:
--

 Summary: Local Multi-Container Cluster Example generates spurious 
lines in the config
 Key: HDDS-2005
 URL: https://issues.apache.org/jira/browse/HDDS-2005
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: om
Reporter: Anu Engineer


The instructions in our documentation, under Home > Getting Started > "Simple 
Single Ozone", say that the user can generate a local multi-container cluster 
using the following commands.
{code:java}

docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
docker run apache/ozone cat docker-config > docker-config

{code}
However, running these commands on *Docker version 19.03.1, build 74b1e89* on 
Mac generates configs with this spurious first line in both docker-compose.yaml 
and docker-config:
{noformat}
Setting up environment!
{noformat}
 

This causes a failure when you run docker-compose up -d.

 






[jira] [Created] (HDDS-2004) Ozone documentation issues

2019-08-21 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2004:
--

 Summary: Ozone documentation issues 
 Key: HDDS-2004
 URL: https://issues.apache.org/jira/browse/HDDS-2004
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


This is an umbrella Jira to collect all Ozone-0.4.1 documentation related 
issues. We will need to test and fix all the issues found in the current 
documentation.

 






[jira] [Created] (HDDS-2003) Ozone - Easy simple start does not work as expected.

2019-08-21 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2003:
--

 Summary: Ozone - Easy simple start does not work as expected.
 Key: HDDS-2003
 URL: https://issues.apache.org/jira/browse/HDDS-2003
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Affects Versions: 0.5.0
Reporter: Anu Engineer


During the verification of Ozone documentation, I followed the instructions on 
the easy start page.

 

Run this instruction:
{noformat}
docker run -p 9878:9878 -p 9876:9876 apache/ozone
{noformat}
Followed by this command in another window:
{noformat}
aws s3api --endpoint http://localhost:9878/ create-bucket --bucket=bucket1
{noformat}
 

The S3 Gateway is probably crashing since it is not able to find the Ozone 
client.

This is on trunk; I will re-verify on Ozone-0.4.1. FYI: [~elek], 
[~bharatviswa], [~nandakumar131].

 

Here is the crash stack:
{noformat}
 2019-08-21 20:01:44 INFO  SCMChillModeManager:274 - SCM in chill mode. 1 
DataNodes registered, 1 required.2019-08-21 20:01:44 INFO  
SCMChillModeManager:274 - SCM in chill mode. 1 DataNodes registered, 1 
required.2019-08-21 20:01:44 INFO  SCMChillModeManager:110 - SCM exiting chill 
mode.2019-08-21 20:02:40 ERROR OzoneClientFactory:294 - Couldn't create 
protocol class org.apache.hadoop.ozone.client.rpc.RpcClient 
exception:java.lang.reflect.InvocationTargetException at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
 at 
org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:45)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 {noformat}
 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1962) Reduce the compilation times for Ozone

2019-08-14 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1962:
--

 Summary: Reduce the compilation times for Ozone 
 Key: HDDS-1962
 URL: https://issues.apache.org/jira/browse/HDDS-1962
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer


Due to the introduction of some Javascript libraries, the build time and all 
the npm/yarn processing is adding too much to the build time of Ozone. This 
Jira is to track and solve that issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1366.

   Resolution: Fixed
Fix Version/s: 0.5.0

[~shwetayakkali] Thanks for the contribution and [~arp] Thanks for commit. I am 
just resolving this JIRA. Please reopen if needed.

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 
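The bucketing described above could be sketched as follows; the bucket bounds and class name are illustrative assumptions, not Recon's actual implementation:

```java
import java.util.TreeMap;

// Hypothetical sketch of the bucketing Recon could do while iterating the
// OM key table; bucket bounds and names are assumptions, not Recon code.
public class FileSizeHistogram {

    // Upper bound of each bucket (bytes) -> number of keys in that bucket.
    private final TreeMap<Long, Integer> buckets = new TreeMap<>();

    public FileSizeHistogram() {
        for (long bound : new long[] {
                1024L, 1024L * 1024, 1024L * 1024 * 1024, Long.MAX_VALUE}) {
            buckets.put(bound, 0);
        }
    }

    // Count the key in the smallest bucket whose bound covers its size.
    public void add(long keySize) {
        buckets.merge(buckets.ceilingKey(keySize), 1, Integer::sum);
    }

    // Number of keys in the bucket identified by its upper bound.
    public int bucketCount(long bound) {
        return buckets.get(buckets.ceilingKey(bound));
    }

    public static void main(String[] args) {
        FileSizeHistogram h = new FileSizeHistogram();
        h.add(500);                 // a "small file" under 1 KB
        h.add(5L * 1024 * 1024);    // a 5 MB key
        System.out.println(h.bucketCount(1024));
    }
}
```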



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1891) Ozone fs shell command should work with default port when port number is not specified

2019-08-13 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1891.

   Resolution: Fixed
Fix Version/s: 0.5.0

I have committed this patch to the trunk branch. [~nandakumar131] This is 
tagged for 0.4.1; please let me know if you would like this to be committed 
into ozone-0.4.1.

> Ozone fs shell command should work with default port when port number is not 
> specified
> --
>
> Key: HDDS-1891
> URL: https://issues.apache.org/jira/browse/HDDS-1891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:bash|title=Without port number -> Error}
> $ ozone fs -ls o3fs://bucket.volume.localhost/
> -ls: Ozone file system url should be either one of the two forms: 
> o3fs://bucket.volume/key  OR o3fs://bucket.volume.om-host.example.com:5678/key
> ...
> {code}
> {code:bash|title=With port number -> Success}
> $ ozone fs -ls o3fs://bucket.volume.localhost:9862/
> Found 1 items
> -rw-rw-rw-   1 hadoop hadoop   1485 2019-08-01 21:14 
> o3fs://bucket.volume.localhost:9862/README.txt
> {code}
> We expect the first command to attempt port 9862 by default.
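Default-port handling of this kind can be sketched as below; the class name and the hard-coded 9862 are illustrative assumptions rather than the actual OzoneFileSystem code:

```java
import java.net.URI;

// Hedged sketch of default-port resolution for o3fs URIs; the 9862 constant
// is assumed here for illustration (the real default comes from Ozone's
// configuration), and this is not the actual url-parsing code.
public class O3fsDefaultPort {

    static final int DEFAULT_OM_PORT = 9862;  // assumed OM RPC default

    // Return host:port, filling in the default when the URI omits the port.
    static String withDefaultPort(String o3fsUri) {
        URI uri = URI.create(o3fsUri);
        int port = uri.getPort() == -1 ? DEFAULT_OM_PORT : uri.getPort();
        return uri.getHost() + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(withDefaultPort("o3fs://bucket.volume.localhost/"));
        System.out.println(withDefaultPort("o3fs://bucket.volume.localhost:5678/"));
    }
}
```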



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1858) mTLS support for Ozone is incorrect

2019-07-24 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1858:
--

 Summary: mTLS support for Ozone is incorrect
 Key: HDDS-1858
 URL: https://issues.apache.org/jira/browse/HDDS-1858
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Josh Elser


Thanks to Josh for reporting that we have missing 'Not' in the if condition 
check.
{code}
if (conf.isGrpcMutualTlsRequired()) {
return new GrpcTlsConfig(
null, null, conf.getTrustStoreFile(), false);
  } else {
return new GrpcTlsConfig(conf.getClientPrivateKeyFile(),
conf.getClientCertChainFile(), conf.getTrustStoreFile(), true);
  }
{code}

it should have been
{code}
if (!conf.isGrpcMutualTlsRequired()) {
return new GrpcTlsConfig(
null, null, conf.getTrustStoreFile(), false);
  } else {
return new GrpcTlsConfig(conf.getClientPrivateKeyFile(),
conf.getClientCertChainFile(), conf.getTrustStoreFile(), true);
  }
{code}







--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14664) [Dynamometer] Build fails

2019-07-23 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-14664.
-
Resolution: Not A Problem

> [Dynamometer] Build fails
> -
>
> Key: HDFS-14664
> URL: https://issues.apache.org/jira/browse/HDFS-14664
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>
> Failed to execute goal on project hadoop-dynamometer-infra: Could not resolve 
> dependencies for project 
> org.apache.hadoop:hadoop-dynamometer-infra:jar:3.3.0-SNAPSHOT: Failure to 
> find org.apache.hadoop:hadoop-dynamometer-workload:jar:tests:3.3.0-SNAPSHOT 
> in https://repository.apache.org/content/repositories/snapshots was cached in 
> the local repository, resolution will not be reattempted until the update 
> interval of apache.snapshots.https has elapsed or updates are forced 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1799) Add goofyfs to the ozone-runner docker image

2019-07-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1799.

   Resolution: Fixed
Fix Version/s: 0.4.1
   0.5.0

I have committed this patch to trunk and ozone-0.4.1. Thanks for your 
contribution.

[~arp] Thanks for the reviews and earlier commits.

> Add goofyfs to the ozone-runner docker image
> 
>
> Key: HDDS-1799
> URL: https://issues.apache.org/jira/browse/HDDS-1799
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Goofys is an S3 FUSE driver which is required for the Ozone CSI setup.
> As of now it's installed in hadoop-ozone/dist/src/main/docker/Dockerfile from 
> a non-standard location (because it couldn't be part of hadoop-runner earlier, 
> as it's Ozone specific).
> It should be installed into ozone-runner from a canonical Goofys release 
> URL.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1585.

   Resolution: Fixed
Fix Version/s: 0.4.1
   0.5.0

[~vivekratnavel] Thank you for the contribution. I have committed this patch to 
trunk and 0.4.1. [~elek] [~eyang] Thank you for the review comments.

> Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
> -
>
> Key: HDDS-1585
> URL: https://issues.apache.org/jira/browse/HDDS-1585
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 45.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-16 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1712.

Resolution: Not A Problem

Docker images are examples; we have clearly documented that in the Ozone-0.4.1 
documentation. Therefore this discussion is pointless, and I am resolving this 
JIRA.

 

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone docker image is given unlimited sudo access to hadoop user.  This poses 
> a security risk where host level user uid 1000 can attach a debugger to the 
> container process to obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-545) NullPointerException error thrown while trying to close container

2019-07-11 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-545.
---
Resolution: Cannot Reproduce

We have not seen this issue for a while. Resolving for now; please reopen if needed.

> NullPointerException error thrown while trying to close container
> -
>
> Key: HDDS-545
> URL: https://issues.apache.org/jira/browse/HDDS-545
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Critical
> Attachments: all-node-ozone-logs-1537875436.tar.gz
>
>
> Seen following null pointer in ozone.log while trying to close the container 
> on receiving SCM container close request'.
>  
> ozone version:
> --
>  
> {noformat}
> Source code repository g...@github.com:apache/hadoop.git -r 
> 968082ffa5d9e50ed8538f653c375edd1b8feea5
> Compiled by elek on 2018-09-19T20:57Z
> Compiled with protoc 2.5.0
> From source with checksum efbdeabb5670d69d9efde85846e4ee98
> Using HDDS 0.2.1-alpha
> Source code repository g...@github.com:apache/hadoop.git -r 
> 968082ffa5d9e50ed8538f653c375edd1b8feea5
> Compiled by elek on 2018-09-19T20:56Z
> Compiled with protoc 2.5.0
> From source with checksum 8bf78cff4b73c95d486da5b21053ef
> {noformat}
>  
> ozone.log
> {noformat}
> 2018-09-24 11:32:55,910 [Thread-2921] DEBUG (XceiverServerRatis.java:401) - 
> pipeline Action CLOSE on pipeline 
> pipelineId=eabdcbe2-da3b-41be-a281-f0ea8d4120f7.Reason : 
> 7d1c7be2-7882-4446-be61-be868d2e188a is in candidate state for 1074164ms
> 2018-09-24 11:32:56,343 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 54
> 2018-09-24 11:32:56,347 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 42
> 2018-09-24 11:32:56,347 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 44
> 2018-09-24 11:32:56,354 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 46
> 2018-09-24 11:32:56,355 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 48
> 2018-09-24 11:32:56,357 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 50
> 2018-09-24 11:32:56,357 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 52
> 2018-09-24 11:32:56,548 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,636 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 54
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.submitContainerRequest(OzoneContainer.java:192)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:91)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:382)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-09-24 11:32:56,726 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,728 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 42
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.submitContainerRequest(OzoneContainer.java:192)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:91)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:382)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-09-24 11:32:56,787 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,814 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 44
> java.lang.NullPointerException
>  

[jira] [Resolved] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2019-07-11 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-622.
---
Resolution: Not A Problem

I don't think this is a problem anymore, since we are going to support only 
later versions of Hadoop. Closing for now.

> Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError
> -
>
> Key: HDDS-622
> URL: https://issues.apache.org/jira/browse/HDDS-622
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Critical
>  Labels: beta1
>
> Datanodes are registered fine on a Hadoop + Ozone cluster.
> While running jobs against Ozone, Datanode shuts down as below:
> {code:java}
> 2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
> (RaftLogWorker.java:rollLogSegment(263)) - Rolling 
> segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
> 2018-10-10 21:50:42,714 INFO impl.RaftServerImpl 
> (ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: 
> set configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, 
> ee
> 20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, 
> b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
> 2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: org.apache..
> ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: org.apache..
> ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl 
> (RaftServerImpl.java:applyLogToStateMachine(1153)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 
> proto:(t:2, i:1)SMLOGENTRY,,
> client-894EC0846FDF, cid=0
> 2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater 
> (ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
> StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the 
> StateMachineUpdater hii
> ts Throwable
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
> at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:74)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
> at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
> at java.lang.Thread.run(Thread.java:748)
> 2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DataNode at 
> ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
> /
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Resolved] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1611.

Resolution: Fixed

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1611-fix-trunk.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDDS-1611:


> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1611.

   Resolution: Fixed
Fix Version/s: 0.4.1
   0.5.0

Thanks for the patch. I have committed it to trunk and will cherry-pick it to 
the 0.4.1 branch shortly.

> Evaluate ACL on volume bucket key and prefix to authorize access 
> -
>
> Key: HDDS-1611
> URL: https://issues.apache.org/jira/browse/HDDS-1611
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-373) Genconf tool must generate ozone-site.xml with sample values

2019-07-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-373.
---
   Resolution: Fixed
Fix Version/s: 0.4.1

[~dineshchitlangia] Thank you for the contribution. [~bharatviswa] Thanks for 
the review. I have committed this patch to the trunk.

> Genconf tool must generate ozone-site.xml with sample values
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-373.001.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml which is leveraged by 
> genconf tool to generate ozone-site.xml
>  
> Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option 
> to generate configs for starting pseudo-cluster. This should be useful for 
> quick dev-testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1733) Fix Ozone documentation

2019-06-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1733.

   Resolution: Fixed
Fix Version/s: 0.4.1

[~dineshchitlangia] Thanks for the contribution. I have committed this patch to 
the trunk.

> Fix Ozone documentation
> ---
>
> Key: HDDS-1733
> URL: https://issues.apache.org/jira/browse/HDDS-1733
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JIRA to fix various typo, image and other issues in the ozone documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1672) Improve locking in OzoneManager

2019-06-25 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1672.

   Resolution: Fixed
Fix Version/s: 0.4.1

[~bharatviswa] Thanks for the contribution. I have committed this patch to the 
trunk.

> Improve locking in OzoneManager
> ---
>
> Key: HDDS-1672
> URL: https://issues.apache.org/jira/browse/HDDS-1672
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: Ozone Locks in OM.pdf
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, in volume 
> requests we can solve the acquire/release/reacquire problem, as well as a few 
> bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This is causing an issue in the Volume request implementation: 
> acquire/release/reacquire of the volume lock.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get Volume Info from DB
>  # Release Volume lock. (We are releasing the lock because, while holding the 
> volume lock, we cannot acquire the user lock.)
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Acquire volume lock
>  # Do delete logic
>  # release volume lock
>  # release user lock
>  
> We can avoid this acquire/release/reacquire lock issue by making the volume 
> lock a low-weight lock.
>  
> In this way, the above deleteVolume request will change as below
>  # Acquire volume lock
>  # Get Volume Info from DB
>  # Get owner from volume Info read from DB
>  # Acquire owner lock
>  # Do delete logic
>  # release owner lock
>  # release volume lock. 
> Same issue is seen with SetOwner for Volume request also.
> During HDDS-1620 [~arp] brought up this issue. 
> I am proposing the above solution to solve this issue. Any other 
> idea/suggestions are welcome.
> This also resolves a bug in setOwner for Volume request.
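The proposed ordering can be sketched as follows; every name here is illustrative, not the actual OzoneManager lock API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the proposed ordering: once the volume lock is treated
// as a low-weight lock, the owner (user) lock can be taken while holding it,
// so the release/reacquire dance disappears. All names are illustrative and
// not the actual OzoneManager lock implementation.
public class OrderedVolumeDelete {

    private final ReentrantLock volumeLock = new ReentrantLock();
    private final ReentrantLock ownerLock = new ReentrantLock();
    private final Map<String, String> volumeOwners = new ConcurrentHashMap<>();

    public OrderedVolumeDelete() {
        volumeOwners.put("vol1", "hadoop");  // stand-in for the OM volume table
    }

    public boolean deleteVolume(String volume) {
        volumeLock.lock();                            // 1. acquire volume lock
        try {
            String owner = volumeOwners.get(volume);  // 2-3. read volume info, find owner
            if (owner == null) {
                return false;                         // volume does not exist
            }
            ownerLock.lock();                         // 4. acquire owner lock (now legal)
            try {
                return volumeOwners.remove(volume) != null;  // 5. delete logic
            } finally {
                ownerLock.unlock();                   // 6. release owner lock
            }
        } finally {
            volumeLock.unlock();                      // 7. release volume lock
        }
    }

    public static void main(String[] args) {
        OrderedVolumeDelete om = new OrderedVolumeDelete();
        System.out.println(om.deleteVolume("vol1"));  // first delete succeeds
        System.out.println(om.deleteVolume("vol1"));  // volume already gone
    }
}
```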



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org


