[jira] [Commented] (HDDS-4020) ACL commands like getacl and setacl should return a response only when Native Authorizer is enabled

2020-07-28 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166748#comment-17166748
 ] 

Bharat Viswanadham commented on HDDS-4020:
--

Hi [~pifta]
Thanks for the suggestion. 
This Jira is to resolve the confusion around how the ACL commands work: it turns 
out they don't take effect when an external authorizer is configured. So, this 
Jira is to print a message telling users that the ACL shell commands are not 
supported when an external authorizer is configured.

Yes, I agree with your suggestion on the improvement. Currently, Ranger does not 
support the ACLTypes READ_ACL and WRITE_ACL; whenever Ranger does not support a 
given AclType, it returns false, and that is why we see the error on the getAcl 
operation. Ranger Authorizer code: [link
|https://github.com/apache/ranger/blob/master/plugin-ozone/src/main/java/org/apache/ranger/authorization/ozone/authorizer/RangerOzoneAuthorizer.java#L109]

I have a question: to support this, we need to change IAccessAuthorizer to 
support getAcl. If that can be supported, why can't we also support set/add 
ACL? I am just trying to understand why this is only for readAcl operations and 
not for all ACL operations.

In this Jira, I will target just fixing the usability issue.
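To make the intended usability fix concrete, here is a minimal, hypothetical sketch of guarding an ACL shell command on the configured authorizer. The class and method names are illustrative, not Ozone's actual API; the fully-qualified authorizer class name is an assumption.

```java
// Hypothetical sketch (not Ozone's real API): guard ACL shell commands on
// the configured authorizer so users get a clear message instead of a
// misleading result when an external authorizer such as Ranger is in use.
public class AclCommandGuard {

  // Assumed fully-qualified name of the native authorizer implementation.
  static final String NATIVE_AUTHORIZER =
      "org.apache.hadoop.ozone.security.acl.OzoneNativeAuthorizer";

  /** ACL shell commands are only meaningful with the native authorizer. */
  static boolean isNativeAuthorizer(String configuredAuthorizer) {
    return configuredAuthorizer == null  // unset means the default, native one
        || NATIVE_AUTHORIZER.equals(configuredAuthorizer);
  }

  static String runGetAcl(String configuredAuthorizer, String objectPath) {
    if (!isNativeAuthorizer(configuredAuthorizer)) {
      return "getacl is not supported: ACLs are managed by the external "
          + "authorizer (" + configuredAuthorizer + ")";
    }
    return "acls of " + objectPath;  // placeholder for the real ACL lookup
  }

  public static void main(String[] args) {
    System.out.println(runGetAcl(NATIVE_AUTHORIZER, "volume1/bucket1"));
    System.out.println(runGetAcl("org.apache.ranger.RangerOzoneAuthorizer",
        "volume1/bucket1"));
  }
}
```

The same guard would apply to setacl/addacl, which is why supporting getAcl alone in IAccessAuthorizer looks asymmetric.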

> ACL commands like getacl and setacl should return a response only when Native 
> Authorizer is enabled
> ---
>
> Key: HDDS-4020
> URL: https://issues.apache.org/jira/browse/HDDS-4020
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone CLI, Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Currently, the getacl and setacl commands return wrong information when an 
> external authorizer such as Ranger is enabled. There should be a check to 
> verify that the Native Authorizer is enabled before returning any response for 
> these two commands.
> If an external authorizer is enabled, these commands should show a clear 
> message about managing ACLs in the external authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4034) Add Unit Test for HadoopNestedDirGenerator

2020-07-28 Thread Aryan Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aryan Gupta updated HDDS-4034:
--
Labels: https://github.com/apache/hadoop-ozone/pull/1266  (was: )

> Add Unit Test for HadoopNestedDirGenerator
> --
>
> Key: HDDS-4034
> URL: https://issues.apache.org/jira/browse/HDDS-4034
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Aryan Gupta
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: https://github.com/apache/hadoop-ozone/pull/1266
>
> Unit test: checks the span and depth of nested directories created by the 
> HadoopNestedDirGenerator tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4044:
-
Component/s: S3

> Deprecate ozone.s3g.volume.name
> ---
>
> Key: HDDS-4044
> URL: https://issues.apache.org/jira/browse/HDDS-4044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> HDDS-3612 introduced bucket links.
> With this feature, this parameter is no longer needed; any volume/bucket can be 
> exposed to S3 using bucket links:
> ozone bucket link srcvol/srcbucket destvol/destbucket
> For example, if a user wants to expose a bucket named bucket1 under volume1 to 
> S3G, they can run the command below:
> {code:java}
> ozone bucket link volume1/bucket1 s3v/bucket2
> {code}
> Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
> also ingest data into volume1/bucket1 using s3v/bucket2.
> This Jira is opened to remove the config from ozone-default.xml
> and also to log a warning message pointing to bucket links when the config is 
> not set to the default value s3v.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4044:
-
Target Version/s: 0.6.0

> Deprecate ozone.s3g.volume.name
> ---
>
> Key: HDDS-4044
> URL: https://issues.apache.org/jira/browse/HDDS-4044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> HDDS-3612 introduced bucket links.
> With this feature, this parameter is no longer needed; any volume/bucket can be 
> exposed to S3 using bucket links:
> ozone bucket link srcvol/srcbucket destvol/destbucket
> For example, if a user wants to expose a bucket named bucket1 under volume1 to 
> S3G, they can run the command below:
> {code:java}
> ozone bucket link volume1/bucket1 s3v/bucket2
> {code}
> Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
> also ingest data into volume1/bucket1 using s3v/bucket2.
> This Jira is opened to remove the config from ozone-default.xml
> and also to log a warning message pointing to bucket links when the config is 
> not set to the default value s3v.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-4044:


Assignee: Bharat Viswanadham

> Deprecate ozone.s3g.volume.name
> ---
>
> Key: HDDS-4044
> URL: https://issues.apache.org/jira/browse/HDDS-4044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> HDDS-3612 introduced bucket links.
> With this feature, this parameter is no longer needed; any volume/bucket can be 
> exposed to S3 using bucket links:
> ozone bucket link srcvol/srcbucket destvol/destbucket
> For example, if a user wants to expose a bucket named bucket1 under volume1 to 
> S3G, they can run the command below:
> {code:java}
> ozone bucket link volume1/bucket1 s3v/bucket2
> {code}
> Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
> also ingest data into volume1/bucket1.
> This Jira is opened to remove the config from ozone-default.xml
> and also to log a warning message pointing to bucket links when the config is 
> not set to the default value s3v.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4044:
-
Priority: Blocker  (was: Major)

> Deprecate ozone.s3g.volume.name
> ---
>
> Key: HDDS-4044
> URL: https://issues.apache.org/jira/browse/HDDS-4044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> HDDS-3612 introduced bucket links.
> With this feature, this parameter is no longer needed; any volume/bucket can be 
> exposed to S3 using bucket links:
> ozone bucket link srcvol/srcbucket destvol/destbucket
> For example, if a user wants to expose a bucket named bucket1 under volume1 to 
> S3G, they can run the command below:
> {code:java}
> ozone bucket link volume1/bucket1 s3v/bucket2
> {code}
> Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
> also ingest data into volume1/bucket1.
> This Jira is opened to remove the config from ozone-default.xml
> and also to log a warning message pointing to bucket links when the config is 
> not set to the default value s3v.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4044:


 Summary: Deprecate ozone.s3g.volume.name
 Key: HDDS-4044
 URL: https://issues.apache.org/jira/browse/HDDS-4044
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


HDDS-3612 introduced bucket links.
With this feature, this parameter is no longer needed; any volume/bucket can be 
exposed to S3 using bucket links:

ozone bucket link srcvol/srcbucket destvol/destbucket

For example, if a user wants to expose a bucket named bucket1 under volume1 to 
S3G, they can run the command below:

{code:java}
ozone bucket link volume1/bucket1 s3v/bucket2
{code}

Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
also ingest data into volume1/bucket1.

This Jira is opened to remove the config from ozone-default.xml
and also to log a warning message pointing to bucket links when the config is 
not set to the default value s3v.
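The bucket-link indirection described above can be modeled in a few lines. This is a toy model, not OM's actual implementation: a link bucket simply records its source, and a lookup follows the chain to the real bucket.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of bucket links (illustrative, not OM's real code): a link
// stores "destVol/destBucket" -> "srcVol/srcBucket", and resolving a bucket
// follows links until a real (non-link) bucket is reached.
public class BucketLinks {

  private final Map<String, String> links = new HashMap<>();

  /** Models: ozone bucket link <source> <destination> */
  void link(String source, String destination) {
    links.put(destination, source);
  }

  /** Follows links (possibly chained) to the underlying bucket. */
  String resolve(String bucket) {
    String current = bucket;
    while (links.containsKey(current)) {
      current = links.get(current);
    }
    return current;
  }

  public static void main(String[] args) {
    BucketLinks om = new BucketLinks();
    om.link("volume1/bucket1", "s3v/bucket2");
    // S3G clients address s3v/bucket2; the keys live in volume1/bucket1.
    System.out.println(om.resolve("s3v/bucket2"));  // prints volume1/bucket1
  }
}
```

Because any bucket can be linked into the s3v volume this way, a fixed ozone.s3g.volume.name setting adds nothing.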




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4044) Deprecate ozone.s3g.volume.name

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4044:
-
Description: 
HDDS-3612 introduced bucket links.
With this feature, this parameter is no longer needed; any volume/bucket can be 
exposed to S3 using bucket links:

ozone bucket link srcvol/srcbucket destvol/destbucket

For example, if a user wants to expose a bucket named bucket1 under volume1 to 
S3G, they can run the command below:

{code:java}
ozone bucket link volume1/bucket1 s3v/bucket2
{code}

Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
also ingest data into volume1/bucket1 using s3v/bucket2.

This Jira is opened to remove the config from ozone-default.xml
and also to log a warning message pointing to bucket links when the config is 
not set to the default value s3v.


  was:
HDDS-3612 introduced bucket links.
With this feature, this parameter is no longer needed; any volume/bucket can be 
exposed to S3 using bucket links:

ozone bucket link srcvol/srcbucket destvol/destbucket

For example, if a user wants to expose a bucket named bucket1 under volume1 to 
S3G, they can run the command below:

{code:java}
ozone bucket link volume1/bucket1 s3v/bucket2
{code}

Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
also ingest data into volume1/bucket1.

This Jira is opened to remove the config from ozone-default.xml
and also to log a warning message pointing to bucket links when the config is 
not set to the default value s3v.



> Deprecate ozone.s3g.volume.name
> ---
>
> Key: HDDS-4044
> URL: https://issues.apache.org/jira/browse/HDDS-4044
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> HDDS-3612 introduced bucket links.
> With this feature, this parameter is no longer needed; any volume/bucket can be 
> exposed to S3 using bucket links:
> ozone bucket link srcvol/srcbucket destvol/destbucket
> For example, if a user wants to expose a bucket named bucket1 under volume1 to 
> S3G, they can run the command below:
> {code:java}
> ozone bucket link volume1/bucket1 s3v/bucket2
> {code}
> Now the user can access all the keys in volume1/bucket1 using s3v/bucket2 and 
> also ingest data into volume1/bucket1 using s3v/bucket2.
> This Jira is opened to remove the config from ozone-default.xml
> and also to log a warning message pointing to bucket links when the config is 
> not set to the default value s3v.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3955) Unable to list intermediate paths on keys created using S3G.

2020-07-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3955:
-
Fix Version/s: 0.6.0

> Unable to list intermediate paths on keys created using S3G.
> 
>
> Key: HDDS-3955
> URL: https://issues.apache.org/jira/browse/HDDS-3955
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> Keys created via the S3 Gateway currently use the createKey OM API to create 
> the Ozone key. Hence, when using an HDFS client to list intermediate 
> directories in the key, OM returns a key-not-found error. This was encountered 
> while using fluentd to write Hive logs to Ozone via the S3 gateway.
> cc [~bharat]
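The listing gap above can be illustrated by deriving the directory entries a flat key implies. This is illustrative code, not the OM API: it only shows which intermediate paths a filesystem-style listing expects to exist but createKey never stored.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative only: a flat key like "logs/2020/app.log" created through
// createKey has no directory entries, so listing "logs/2020" with an HDFS
// client finds nothing. Deriving the implied parents shows what a
// filesystem-aware create (or list) has to account for.
public class ImpliedDirs {

  /** All intermediate directory paths implied by a flat key name. */
  static Set<String> parents(String key) {
    Set<String> dirs = new LinkedHashSet<>();
    int slash = key.indexOf('/');
    while (slash > 0) {
      dirs.add(key.substring(0, slash));       // each prefix is a directory
      slash = key.indexOf('/', slash + 1);
    }
    return dirs;
  }

  public static void main(String[] args) {
    // Two directory levels are implied but never stored by createKey:
    System.out.println(parents("logs/2020/app.log"));  // [logs, logs/2020]
  }
}
```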



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Description: 
This Jira is to use deleteObjects in OFS delete now that .

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.

According to [~bharat], this should be a blocker for 0.6.0.


  was:
This Jira is to use deleteObjects in OFS delete.

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.

According to [~bharat], this should be a blocker for 0.6.0.



> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Blocker
>
> This Jira is to use deleteObjects in OFS delete now that .
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.
> According to [~bharat], this should be a blocker for 0.6.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Description: 
This Jira is to use deleteObjects in OFS delete now that HDDS-3286 is committed.

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.

According to [~bharat], this should be a blocker for 0.6.0.


  was:
This Jira is to use deleteObjects in OFS delete now that .

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.

According to [~bharat], this should be a blocker for 0.6.0.



> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Blocker
>
> This Jira is to use deleteObjects in OFS delete now that HDDS-3286 is 
> committed.
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.
> According to [~bharat], this should be a blocker for 0.6.0.
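The intended change can be sketched as follows (illustrative, not the real OFS/OM API): collect every key at or under the directory prefix and hand them to one batched delete, instead of issuing a deleteKey per key against normalized paths.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: gather the keys a directory delete must remove, so
// they can go to a single batched deleteObjects-style call instead of one
// deleteKey per key.
public class DirDelete {

  /** Keys at or under the given directory (prefix match on "dir/"). */
  static List<String> keysUnder(List<String> allKeys, String dir) {
    String prefix = dir.endsWith("/") ? dir : dir + "/";
    List<String> batch = new ArrayList<>();
    for (String key : allKeys) {
      if (key.startsWith(prefix)) {
        batch.add(key);
      }
    }
    return batch;
  }

  public static void main(String[] args) {
    List<String> keys = List.of("dir1/", "dir1/a", "dir1/sub/b", "dir2/c");
    // One batched delete covers the directory marker and its children:
    System.out.println(keysUnder(keys, "dir1"));  // [dir1/, dir1/a, dir1/sub/b]
  }
}
```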



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Priority: Blocker  (was: Major)

> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Blocker
>
> This Jira is to use deleteObjects in OFS delete.
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Target Version/s: 0.6.0
 Description: 
This Jira is to use deleteObjects in OFS delete.

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.

According to [~bharat], this should be a blocker for 0.6.0.


  was:
This Jira is to use deleteObjects in OFS delete.

Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
normalized, so using deleteKey to delete a directory will fail.



> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Blocker
>
> This Jira is to use deleteObjects in OFS delete.
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.
> According to [~bharat], this should be a blocker for 0.6.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Affects Version/s: 0.6.0

> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Blocker
>
> This Jira is to use deleteObjects in OFS delete.
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4040) OFS should use deleteObjects when delete directory

2020-07-28 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4040:
-
Summary: OFS should use deleteObjects when delete directory  (was: OFS use 
deleteObjects when delete directory)

> OFS should use deleteObjects when delete directory
> --
>
> Key: HDDS-4040
> URL: https://issues.apache.org/jira/browse/HDDS-4040
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Siyao Meng
>Priority: Major
>
> This Jira is to use deleteObjects in OFS delete.
> Currently, when ozone.om.enable.filesystem.paths is enabled the path is 
> normalized, so using deleteKey to delete a directory will fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4043) allow deletion from Trash directory without -skipTrash option

2020-07-28 Thread Nilotpal Nandi (Jira)
Nilotpal Nandi created HDDS-4043:


 Summary: allow deletion from Trash directory without -skipTrash 
option
 Key: HDDS-4043
 URL: https://issues.apache.org/jira/browse/HDDS-4043
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Nilotpal Nandi


The "skipTrash" option is mandatory while deleting from "Trash".

Deletion from Trash should be allowed even when the skipTrash option is not used.

 

ozone fs -rm -r o3fs://bucket3.s3v.ozone1/.Trash
20/07/28 14:50:46 INFO Configuration.deprecation: io.bytes.per.checksum is 
deprecated. Instead, use dfs.bytes-per-checksum
rm: Failed to move to trash: o3fs://bucket3.s3v.ozone1/.Trash: rename from 
o3fs://bucket3.s3v.ozone1/.Trash to /.Trash/hrt_qa/Current/.Trash failed.. 
Consider using -skipTrash option

 

ozone fs -rm -r -skipTrash o3fs://bucket3.s3v.ozone1/.Trash
Deleted o3fs://bucket3.s3v.ozone1/.Trash
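A minimal sketch of the requested behavior, under stated assumptions: the path check and names below are hypothetical (the real logic lives in the Hadoop trash handling invoked by `fs -rm`). When the target is already inside .Trash, the move-to-trash rename cannot succeed, so deletion should proceed directly.

```java
// Hypothetical sketch: `fs -rm` normally moves the path into the per-user
// trash directory, but when the path is already under .Trash that rename
// fails, so the delete should happen directly without requiring -skipTrash.
public class TrashCheck {

  /** True when the path is already inside a trash directory. */
  static boolean isInsideTrash(String path) {
    return path.contains("/.Trash/") || path.endsWith("/.Trash");
  }

  static String remove(String path, boolean skipTrash) {
    if (skipTrash || isInsideTrash(path)) {
      return "deleted " + path;             // delete directly
    }
    return "moved " + path + " to trash";   // normal rm behavior
  }

  public static void main(String[] args) {
    // The failing case from the report: deleting .Trash itself.
    System.out.println(remove("o3fs://bucket3.s3v.ozone1/.Trash", false));
  }
}
```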



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3413) Ozone documentation to be revised for OM HA Support

2020-07-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3413.
---
Resolution: Duplicate

Closing this one, as I opened a new one to update the documentation, covering 
more than OM HA:

HDDS-4042

> Ozone documentation to be revised for OM HA Support
> ---
>
> Key: HDDS-3413
> URL: https://issues.apache.org/jira/browse/HDDS-3413
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Srinivasu Majeti
>Assignee: Marton Elek
>Priority: Major
>  Labels: TriagePending
>
> As OM HA is now supported in the current version, we might need to update all 
> documentation pages wherever a service id is applicable, along with any new 
> parameters that we might need to configure in core-site.xml for service id 
> access for remote clusters, etc. All volume/bucket/key CLI syntaxes, including 
> the service id for OM HA enabled clusters, should be included in the 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4042) Update documentation for the GA release

2020-07-28 Thread Marton Elek (Jira)
Marton Elek created HDDS-4042:
-

 Summary: Update documentation for the GA release
 Key: HDDS-4042
 URL: https://issues.apache.org/jira/browse/HDDS-4042
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: documentation
Reporter: Marton Elek
Assignee: Marton Elek


HDDS-3413 was opened to add OM HA related documentation to the Ozone docs, but 
it turned out that the docs contain additional out-of-date (and missing) 
information.

This issue is opened to track a big documentation update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4041) Ozone /conf endpoint triggers kerberos replay error when SPNEGO is enabled

2020-07-28 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-4041:
-
Summary: Ozone /conf endpoint triggers kerberos replay error when SPNEGO is 
enabled   (was: Ozone /conf endpoint trigger kerberos replay error when SPNEGO 
is enabled )

> Ozone /conf endpoint triggers kerberos replay error when SPNEGO is enabled 
> ---
>
> Key: HDDS-4041
> URL: https://issues.apache.org/jira/browse/HDDS-4041
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> curl -k --negotiate -X GET -u : 
> "https://quasar-jsajkc-8.quasar-jsajkc.root.hwx.site:9877/conf"
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> URI: /conf
> STATUS: 403
> MESSAGE: GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> SERVLET: conf
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3413) Ozone documentation to be revised for OM HA Support

2020-07-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-3413:
-

Assignee: Marton Elek  (was: Bharat Viswanadham)

> Ozone documentation to be revised for OM HA Support
> ---
>
> Key: HDDS-3413
> URL: https://issues.apache.org/jira/browse/HDDS-3413
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Srinivasu Majeti
>Assignee: Marton Elek
>Priority: Major
>  Labels: TriagePending
>
> As OM HA is now supported in the current version, we might need to update all 
> documentation pages wherever a service id is applicable, along with any new 
> parameters that we might need to configure in core-site.xml for service id 
> access for remote clusters, etc. All volume/bucket/key CLI syntaxes, including 
> the service id for OM HA enabled clusters, should be included in the 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4041) Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled

2020-07-28 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166233#comment-17166233
 ] 

Xiaoyu Yao commented on HDDS-4041:
--

The root cause is that the default /conf servlet has been overwritten by Ozone, 
but the authentication filter has been attached twice, which triggers the 
Kerberos replay error.

The fix is to remove the previously attached filter, as we have done to remove 
the previously defined servlet for the same path spec "conf".
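The double-attachment described above can be modeled with a toy filter-mapping table. Plain strings stand in for real filter mappings here; the actual fix manipulates the HTTP server's handler, which this sketch does not attempt to reproduce.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: if the SPNEGO authentication filter ends up mapped twice on
// the same path spec, each request is authenticated twice and the second
// use of the Kerberos ticket is rejected as a replay. Re-registration must
// therefore drop any earlier mapping for the path spec first.
public class FilterDedup {

  /** Adds a filter mapping for a path spec, dropping any previous one. */
  static List<String> addFilter(List<String> mappings, String pathSpec) {
    List<String> result = new ArrayList<>(mappings);
    result.removeIf(m -> m.equals(pathSpec));  // remove earlier attachment
    result.add(pathSpec);
    return result;
  }

  public static void main(String[] args) {
    List<String> mappings = List.of("/conf");   // default servlet's filter
    mappings = addFilter(mappings, "/conf");    // Ozone re-registers /conf
    System.out.println(mappings);               // [/conf] -- attached once
  }
}
```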

> Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled 
> --
>
> Key: HDDS-4041
> URL: https://issues.apache.org/jira/browse/HDDS-4041
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> curl -k --negotiate -X GET -u : 
> "https://quasar-jsajkc-8.quasar-jsajkc.root.hwx.site:9877/conf"
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> URI: /conf
> STATUS: 403
> MESSAGE: GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> SERVLET: conf
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-4041) Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled

2020-07-28 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HADOOP-17162 to HDDS-4041:
---

 Key: HDDS-4041  (was: HADOOP-17162)
Target Version/s: 0.6.0  (was: 0.6.0)
Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
  Issue Type: Bug  (was: Improvement)
 Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled 
> --
>
> Key: HDDS-4041
> URL: https://issues.apache.org/jira/browse/HDDS-4041
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Major
>
> {code}
> curl -k --negotiate -X GET -u : 
> "https://quasar-jsajkc-8.quasar-jsajkc.root.hwx.site:9877/conf"
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> URI: /conf
> STATUS: 403
> MESSAGE: GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> SERVLET: conf
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org