[jira] [Assigned] (HDFS-14614) Add Secure Flag for DataNode Web UI Cookies

2022-01-05 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDFS-14614:
-

Assignee: (was: Vivek Ratnavel Subramanian)

> Add Secure Flag for DataNode Web UI Cookies
> ---
>
> Key: HDFS-14614
> URL: https://issues.apache.org/jira/browse/HDFS-14614
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It looks like HDFS-7279 removed the Secure flag for DataNode Web UI cookies.
> I think we should add it back.
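
For illustration, a minimal sketch of what the fix amounts to at the servlet
layer (hypothetical wiring, not the actual HttpServer2 code): when TLS is
enabled, cookies issued by the web UI should carry the Secure attribute so
browsers only send them over HTTPS.

{code:java}
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;

/** Hypothetical listener that marks web UI session cookies Secure under TLS. */
public class SecureCookieListener implements ServletContextListener {
  @Override
  public void contextInitialized(ServletContextEvent sce) {
    SessionCookieConfig cookieConfig =
        sce.getServletContext().getSessionCookieConfig();
    cookieConfig.setHttpOnly(true); // cookie not readable from JavaScript
    cookieConfig.setSecure(true);   // cookie only sent over HTTPS connections
  }

  @Override
  public void contextDestroyed(ServletContextEvent sce) {
  }
}
{code}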



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-03-17 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-15850:
--
Attachment: HDFS-15850.v2.patch

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-15850.v1.patch, HDFS-15850.v2.patch
>
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports produced by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" that is invoked for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-03-12 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-15850:
--
Status: Patch Available  (was: In Progress)

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-15850.v1.patch
>
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports produced by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" that is invoked for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-03-12 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-15850:
--
Attachment: HDFS-15850.v1.patch

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-15850.v1.patch
>
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports produced by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" that is invoked for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-03-11 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15850 started by Vivek Ratnavel Subramanian.
-
> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports produced by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" that is invoked for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-02-22 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDFS-15850:
-

 Summary: Superuser actions should be reported to external enforcers
 Key: HDFS-15850
 URL: https://issues.apache.org/jira/browse/HDFS-15850
 Project: Hadoop HDFS
  Issue Type: Task
  Components: security
Affects Versions: 3.3.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, HDFS superuser checks and actions are not reported to external 
enforcers like Ranger, so the audit reports produced by such external enforcers 
are incomplete and miss the superuser actions. To fix this, add a new method to 
"AccessControlEnforcer" that is invoked for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15531) Namenode UI: List snapshots in separate table for each snapshottable directory

2020-08-27 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDFS-15531.
---
Resolution: Fixed

> Namenode UI: List snapshots in separate table for each snapshottable directory
> --
>
> Key: HDFS-15531
> URL: https://issues.apache.org/jira/browse/HDFS-15531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15531) Namenode UI: List snapshots in separate table for each snapshottable directory

2020-08-13 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15531 started by Vivek Ratnavel Subramanian.
-
> Namenode UI: List snapshots in separate table for each snapshottable directory
> --
>
> Key: HDFS-15531
> URL: https://issues.apache.org/jira/browse/HDFS-15531
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15531) Namenode UI: List snapshots in separate table for each snapshottable directory

2020-08-13 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDFS-15531:
-

 Summary: Namenode UI: List snapshots in separate table for each 
snapshottable directory
 Key: HDFS-15531
 URL: https://issues.apache.org/jira/browse/HDFS-15531
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ui
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15496) Add UI for deleted snapshots

2020-08-10 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-15496:
--
Status: Patch Available  (was: In Progress)

> Add UI for deleted snapshots
> 
>
> Key: HDFS-15496
> URL: https://issues.apache.org/jira/browse/HDFS-15496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Add UI for deleted snapshots
> a) Show the list of snapshots per snapshottable directory
> b) Add deleted status in the JMX output for the snapshot along with a snap ID
> c) The NN UI should sort the snapshots by snap ID. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15496) Add UI for deleted snapshots

2020-08-10 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15496 started by Vivek Ratnavel Subramanian.
-
> Add UI for deleted snapshots
> 
>
> Key: HDFS-15496
> URL: https://issues.apache.org/jira/browse/HDFS-15496
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mukul Kumar Singh
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Add UI for deleted snapshots
> a) Show the list of snapshots per snapshottable directory
> b) Add deleted status in the JMX output for the snapshot along with a snap ID
> c) The NN UI should sort the snapshots by snap ID. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14614) Add Secure Flag for DataNode Web UI Cookies

2020-02-26 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDFS-14614:
-

Assignee: Vivek Ratnavel Subramanian  (was: Lisheng Sun)

> Add Secure Flag for DataNode Web UI Cookies
> ---
>
> Key: HDFS-14614
> URL: https://issues.apache.org/jira/browse/HDFS-14614
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> It looks like HDFS-7279 removed the Secure flag for DataNode Web UI cookies.
> I think we should add it back.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2394) Ozone S3 Gateway allows bucket name with underscore to be created but throws an error during put key operation

2019-11-20 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2394:
-
Summary: Ozone S3 Gateway allows bucket name with underscore to be created 
but throws an error during put key operation  (was: Ozone allows bucket name 
with underscore to be created but throws an error during put key operation)

> Ozone S3 Gateway allows bucket name with underscore to be created but throws 
> an error during put key operation
> --
>
> Key: HDDS-2394
> URL: https://issues.apache.org/jira/browse/HDDS-2394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Steps to reproduce:
> aws s3api --endpoint http://localhost:9878 create-bucket --bucket ozone_test
> aws s3api --endpoint http://localhost:9878 put-object --bucket ozone_test 
> --key ozone-site.xml --body /etc/hadoop/conf/ozone-site.xml
> S3 gateway throws a warning:
> {code:java}
> javax.servlet.ServletException: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:539)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1628)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   ... 13 more
> {code}



--
This message was sent by Atlassian Jira

[jira] [Created] (HDDS-2394) Ozone allows bucket name with underscore to be created but throws an error during put key operation

2019-10-31 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2394:


 Summary: Ozone allows bucket name with underscore to be created 
but throws an error during put key operation
 Key: HDDS-2394
 URL: https://issues.apache.org/jira/browse/HDDS-2394
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Steps to reproduce:
aws s3api --endpoint http://localhost:9878 create-bucket --bucket ozone_test

aws s3api --endpoint http://localhost:9878 put-object --bucket ozone_test --key 
ozone-site.xml --body /etc/hadoop/conf/ozone-site.xml

S3 gateway throws a warning:
{code:java}
javax.servlet.ServletException: javax.servlet.ServletException: 
java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
character : _
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.servlet.ServletException: java.lang.IllegalArgumentException: 
Bucket or Volume name has an unsupported character : _
at 
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1628)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
... 13 more
{code}
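
The message comes from resource-name validation on the key path. A simplified
sketch of an underscore-rejecting check (illustrative, not the actual
HddsClientUtils code) shows why applying the same rule at bucket creation
would reject "ozone_test" up front instead of failing later on put-object:

{code:java}
public final class BucketNameCheck {
  private BucketNameCheck() {
  }

  /** Rejects any character outside the S3-style lowercase/digit/dash/dot set. */
  public static void verifyResourceName(String name) {
    for (char c : name.toCharArray()) {
      boolean allowed = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
          || c == '-' || c == '.';
      if (!allowed) {
        throw new IllegalArgumentException(
            "Bucket or Volume name has an unsupported character : " + c);
      }
    }
  }
}
{code}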



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2385) Ozone shell list volume command lists only user owned volumes and not all the volumes

2019-10-30 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2385:


 Summary: Ozone shell list volume command lists only user owned 
volumes and not all the volumes
 Key: HDDS-2385
 URL: https://issues.apache.org/jira/browse/HDDS-2385
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone CLI
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


The command `ozone sh volume ls` lists only the volumes owned by the current 
user.

Expected behavior: the command should list all the volumes in the system if the 
user is an Ozone administrator. 
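
A minimal sketch of the expected dispatch (illustrative pseudocode, not Ozone
Manager's actual implementation; isAdmin() and the two list helpers are
hypothetical stand-ins):

{code:java}
import java.io.IOException;
import java.util.List;

abstract class VolumeListing {

  /** Admins see every volume; regular users see only the ones they own. */
  List<String> listVolumes(String caller) throws IOException {
    if (isAdmin(caller)) {
      return listAllVolumes();
    }
    return listVolumesByUser(caller);
  }

  abstract boolean isAdmin(String user);

  abstract List<String> listAllVolumes() throws IOException;

  abstract List<String> listVolumesByUser(String owner) throws IOException;
}
{code}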



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1505) Remove "ozone.enabled" parameter from ozone configs

2019-10-26 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDDS-1505.
--
Resolution: Duplicate

> Remove "ozone.enabled" parameter from ozone configs
> ---
>
> Key: HDDS-1505
> URL: https://issues.apache.org/jira/browse/HDDS-1505
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> Remove "ozone.enabled" config as it is no longer needed



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-17 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2181:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-16 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2181:
-
Status: Patch Available  (was: Reopened)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2310) Add support to add ozone ranger plugin to Ozone Manager classpath

2019-10-15 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2310 started by Vivek Ratnavel Subramanian.

> Add support to add ozone ranger plugin to Ozone Manager classpath
> -
>
> Key: HDDS-2310
> URL: https://issues.apache.org/jira/browse/HDDS-2310
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Currently, there is no way to add the Ozone Ranger plugin to the Ozone Manager 
> classpath. 
> We should be able to set an environment variable that is respected by Ozone 
> and appended to the Ozone Manager classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2310) Add support to add ozone ranger plugin to Ozone Manager classpath

2019-10-15 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2310:


 Summary: Add support to add ozone ranger plugin to Ozone Manager 
classpath
 Key: HDDS-2310
 URL: https://issues.apache.org/jira/browse/HDDS-2310
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Manager
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, there is no way to add the Ozone Ranger plugin to the Ozone Manager 
classpath. 

We should be able to set an environment variable that is respected by Ozone 
and appended to the Ozone Manager classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-10-14 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2181:
-
Status: Patch Available  (was: In Progress)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2257) Fix checkstyle issues in ChecksumByteBuffer

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2257:
-
Status: Patch Available  (was: Open)

> Fix checkstyle issues in ChecksumByteBuffer
> ---
>
> Key: HDDS-2257
> URL: https://issues.apache.org/jira/browse/HDDS-2257
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: newbie
>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
>  84: Inner assignments should be avoided.
>  85: Inner assignments should be avoided.
>  101: case child has incorrect indentation level 8, expected 
> level should be 6.
>  102: case child has incorrect indentation level 8, expected 
> level should be 6.
>  103: case child has incorrect indentation level 8, expected 
> level should be 6.
>  104: case child has incorrect indentation level 8, expected 
> level should be 6.
>  105: case child has incorrect indentation level 8, expected 
> level should be 6.
>  106: case child has incorrect indentation level 8, expected 
> level should be 6.
>  107: case child has incorrect indentation level 8, expected 
> level should be 6.
>  108: case child has incorrect indentation level 8, expected 
> level should be 6.
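
For readers unfamiliar with these checkstyle rules: "inner assignment" flags
assignments nested inside another expression or assignment, and the fix is to
split them into separate statements; the indentation findings just mean the
case bodies were indented 8 spaces where the style expects 6. A small
illustrative example (not the actual ChecksumByteBuffer code):

{code:java}
public class InnerAssignmentExample {

  private int crc;

  // Before (violation): this.crc = next = table[index];
  // After: each assignment gets its own statement.
  void update(int[] table, int index) {
    int next = table[index];
    this.crc = next;
  }
}
{code}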



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2258 started by Vivek Ratnavel Subramanian.

> Fix checkstyle issues introduced by HDDS-2222
> -
>
> Key: HDDS-2258
> URL: https://issues.apache.org/jira/browse/HDDS-2258
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2258:


 Summary: Fix checkstyle issues introduced by HDDS-2222
 Key: HDDS-2258
 URL: https://issues.apache.org/jira/browse/HDDS-2258
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2192) Optimize Ozone CLI commands to send one ACL request to authorizers instead of sending multiple requests

2019-09-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2192:


 Summary: Optimize Ozone CLI commands to send one ACL request to 
authorizers instead of sending multiple requests
 Key: HDDS-2192
 URL: https://issues.apache.org/jira/browse/HDDS-2192
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone CLI
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, when trying to read a key, three requests are sent to the authorizer:
volume read, bucket read, and key read.

It should instead be just one request to the authorizer.
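
A sketch of the intended shape (hypothetical; checkAcl() stands in for the
real authorizer call and the real request objects carry more context):

{code:java}
abstract class SingleAclCheck {

  /** One authorizer request for the whole path instead of three. */
  boolean canReadKey(String volume, String bucket, String key, String user) {
    String fullPath = volume + "/" + bucket + "/" + key;
    // The authorizer resolves volume, bucket, and key in a single check.
    return checkAcl(fullPath, "READ", user);
  }

  abstract boolean checkAcl(String resourcePath, String aclType, String user);
}
{code}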



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2191) Handle bucket create request in OzoneNativeAuthorizer

2019-09-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2191:


 Summary: Handle bucket create request in OzoneNativeAuthorizer
 Key: HDDS-2191
 URL: https://issues.apache.org/jira/browse/HDDS-2191
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Security
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


OzoneNativeAuthorizer should handle bucket create requests even when the bucket 
object does not yet exist.
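
The crux is that bucket ACLs cannot be loaded for a bucket that does not exist
yet, so the create check has to fall back to the parent volume. A hypothetical
sketch (all helper names are illustrative stand-ins):

{code:java}
abstract class BucketCreateAuthz {

  boolean checkBucketCreate(String volume, String bucket, String user) {
    if (!bucketExists(volume, bucket)) {
      // No bucket object yet: authorize CREATE against the parent volume.
      return checkVolumeAcl(volume, user, "CREATE");
    }
    return checkBucketAcl(volume, bucket, user, "CREATE");
  }

  abstract boolean bucketExists(String volume, String bucket);

  abstract boolean checkVolumeAcl(String volume, String user, String aclType);

  abstract boolean checkBucketAcl(String volume, String bucket, String user,
      String aclType);
}
{code}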



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2190) Ozone administrators should be able to list all the volumes

2019-09-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2190:


 Summary: Ozone administrators should be able to list all the 
volumes
 Key: HDDS-2190
 URL: https://issues.apache.org/jira/browse/HDDS-2190
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, Ozone administrators are not able to list all the volumes in the 
system; `ozone sh volume ls` only lists the volumes owned by the admin user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2182) Fix checkstyle violations introduced by HDDS-1738

2019-09-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2182:


 Summary: Fix checkstyle violations introduced by HDDS-1738
 Key: HDDS-2182
 URL: https://issues.apache.org/jira/browse/HDDS-2182
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-25 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2181 started by Vivek Ratnavel Subramanian.

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete, and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-25 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2181:


 Summary: Ozone Manager should send correct ACL type in ACL 
requests to Authorizer
 Key: HDDS-2181
 URL: https://issues.apache.org/jira/browse/HDDS-2181
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key delete, 
and bucket create operations. Fix the ACL type in all requests to the authorizer.
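
An illustrative mapping of what "correct ACL type" means here (a simplified
stand-in for Ozone's IAccessAuthorizer.ACLType, not the committed patch):

{code:java}
class AclTypeMapping {

  enum AclType { READ, WRITE, CREATE, DELETE }

  static AclType aclTypeFor(String operation) {
    switch (operation) {
    case "CreateKey":
    case "CreateBucket":
      return AclType.CREATE; // previously sent as WRITE
    case "DeleteKey":
      return AclType.DELETE; // previously sent as WRITE
    default:
      return AclType.WRITE;
    }
  }
}
{code}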



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2168 started by Vivek Ratnavel Subramanian.

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails 
> intermittently with OutOfMemoryError on dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2168:


 Summary: TestOzoneManagerDoubleBufferWithOMResponse sometimes 
fails with out of memory error
 Key: HDDS-2168
 URL: https://issues.apache.org/jira/browse/HDDS-2168
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails 
intermittently with OutOfMemoryError on dev machines.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2163) Add "Replication factor" to the output of list keys

2019-09-20 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2163:


 Summary: Add "Replication factor" to the output of list keys 
 Key: HDDS-2163
 URL: https://issues.apache.org/jira/browse/HDDS-2163
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone CLI
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


The output of "ozone sh key list /vol1/bucket1" does not include replication 
factor and it will be good to have it in the output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2101:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
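
For reference, a ServiceLoader provider file of this kind just lists the fully
qualified implementation class, one per line, so the missing file would contain
something like the following (class name shown as an assumed example):

{code}
# hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.ozone.OzoneFileSystem
{code}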



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2156) Fix alignment issues in HDDS doc pages

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2156:
-
Status: Patch Available  (was: In Progress)

> Fix alignment issues in HDDS doc pages
> --
>
> Key: HDDS-2156
> URL: https://issues.apache.org/jira/browse/HDDS-2156
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The cards on the HDDS doc pages don't align properly and need to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2156) Fix alignment issues in HDDS doc pages

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2156 started by Vivek Ratnavel Subramanian.

> Fix alignment issues in HDDS doc pages
> --
>
> Key: HDDS-2156
> URL: https://issues.apache.org/jira/browse/HDDS-2156
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The cards on the HDDS doc pages don't align properly and need to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2156) Fix alignment issues in HDDS doc pages

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2156:


 Summary: Fix alignment issues in HDDS doc pages
 Key: HDDS-2156
 URL: https://issues.apache.org/jira/browse/HDDS-2156
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


The cards on the HDDS doc pages don't align properly and need to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2101:
-
Status: Patch Available  (was: In Progress)

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2101 started by Vivek Ratnavel Subramanian.

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2107) Datanodes should retry forever to connect to SCM in an unsecure environment

2019-09-13 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2107:
-
Description: 
In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
milliseconds between attempts, before throwing this error:
{code:java}
Unable to communicate to SCM server at scm:9861 for past 0 seconds.
java.net.ConnectException: Call From scm/10.65.36.118 to scm:9861 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 13 more
{code}
The datanodes should retry forever to connect to SCM and not throw any errors.

  was:
In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
milliseconds between attempts, before throwing this error:
{code:java}
Unable to communicate to SCM server at scm:9861 for past 0 seconds.
java.net.ConnectException: Call From scm/10.65.36.118 to scm:9861 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
at 

[jira] [Updated] (HDDS-2107) Datanodes should retry forever to connect to SCM in an unsecure environment

2019-09-10 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2107:
-
Description: 
In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
milliseconds between attempts, before throwing this error:
{code:java}
Unable to communicate to SCM server at scm:9861 for past 0 seconds.
java.net.ConnectException: Call From scm/10.65.36.118 to scm:9861 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 13 more
{code}
The datanodes should retry forever to connect to SCM and not fail immediately 
after 10 retries.

  was:
In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
milliseconds between attempts, before throwing this error:
{code:java}
Unable to communicate to SCM server at 
jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 for past 0 seconds.
java.net.ConnectException: Call From 
jmccarthy-ozone-unsecure2-4.vpc.cloudera.com/10.65.36.118 to 
jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 failed on connection 
exception: java.net.ConnectException: Connection refused; For more details see: 
 http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
at 

[jira] [Work started] (HDDS-2107) Datanodes should retry forever to connect to SCM in an unsecure environment

2019-09-10 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2107 started by Vivek Ratnavel Subramanian.

> Datanodes should retry forever to connect to SCM in an unsecure environment
> ---
>
> Key: HDDS-2107
> URL: https://issues.apache.org/jira/browse/HDDS-2107
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
> milliseconds between attempts, before throwing this error:
> {code:java}
> Unable to communicate to SCM server at 
> jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 for past 0 seconds.
> java.net.ConnectException: Call From 
> jmccarthy-ozone-unsecure2-4.vpc.cloudera.com/10.65.36.118 to 
> jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
>   at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   ... 13 more
> {code}
> The datanodes should keep retrying to connect to SCM indefinitely instead of 
> failing after 10 retries.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2041) Don't depend on DFSUtil to check HTTP policy

2019-09-10 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2041 started by Vivek Ratnavel Subramanian.

> Don't depend on DFSUtil to check HTTP policy
> 
>
> Key: HDDS-2041
> URL: https://issues.apache.org/jira/browse/HDDS-2041
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: website
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Currently, BaseHttpServer uses DFSUtil to get the HTTP policy. As a result, 
> when the HTTP policy is set to HTTPS in hdfs-site.xml, Ozone HTTP servers try 
> to come up with HTTPS and fail if SSL certificates are not present in the 
> required location.
> Ozone web UIs should not depend on HDFS config to determine the HTTP policy. 
> Instead, Ozone should have its own config to determine the policy.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2107) Datanodes should retry forever to connect to SCM in an unsecure environment

2019-09-10 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927049#comment-16927049
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2107:
--

cc [~xyao]

> Datanodes should retry forever to connect to SCM in an unsecure environment
> ---
>
> Key: HDDS-2107
> URL: https://issues.apache.org/jira/browse/HDDS-2107
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
> milliseconds between attempts, before throwing this error:
> {code:java}
> Unable to communicate to SCM server at 
> jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 for past 0 seconds.
> java.net.ConnectException: Call From 
> jmccarthy-ozone-unsecure2-4.vpc.cloudera.com/10.65.36.118 to 
> jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1457)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
>   at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
>   at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   ... 13 more
> {code}
> The datanodes should keep retrying to connect to SCM indefinitely instead of 
> failing after 10 retries.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2107) Datanodes should retry forever to connect to SCM in an unsecure environment

2019-09-10 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2107:


 Summary: Datanodes should retry forever to connect to SCM in an 
unsecure environment
 Key: HDDS-2107
 URL: https://issues.apache.org/jira/browse/HDDS-2107
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


In an unsecure environment, the datanodes try up to 10 times, waiting 1000 
milliseconds between attempts, before throwing this error:
{code:java}
Unable to communicate to SCM server at 
jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 for past 0 seconds.
java.net.ConnectException: Call From 
jmccarthy-ozone-unsecure2-4.vpc.cloudera.com/10.65.36.118 to 
jmccarthy-ozone-unsecure2-2.vpc.cloudera.com:9861 failed on connection 
exception: java.net.ConnectException: Connection refused; For more details see: 
 http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy33.getVersion(Unknown Source)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 13 more
{code}
The datanodes should keep retrying to connect to SCM indefinitely instead of 
failing after 10 retries.
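
A minimal sketch of the requested behavior, assuming the datanode's raw SCM 
proxy can be wrapped with Hadoop's standard retry utilities (the helper below 
is illustrative, not the actual fix):
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;

// Wrap a raw RPC proxy so every failed call sleeps 1000 ms and retries
// forever, instead of the datanode giving up after 10 attempts.
static StorageContainerDatanodeProtocol retryForever(
    StorageContainerDatanodeProtocol rawProxy) {
  RetryPolicy policy =
      RetryPolicies.retryForeverWithFixedSleep(1000, TimeUnit.MILLISECONDS);
  return RetryProxy.create(
      StorageContainerDatanodeProtocol.class, rawProxy, policy);
}
{code}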



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1505) Remove "ozone.enabled" parameter from ozone configs

2019-09-09 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDDS-1505:


Assignee: Vivek Ratnavel Subramanian

> Remove "ozone.enabled" parameter from ozone configs
> ---
>
> Key: HDDS-1505
> URL: https://issues.apache.org/jira/browse/HDDS-1505
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> Remove "ozone.enabled" config as it is no longer needed



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-07 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDDS-2101:


Assignee: Vivek Ratnavel Subramanian

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>
> We don't have a filesystem provider in META-INF. 
> i.e., the following file doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
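
For reference, a provider file of this kind contains a single line naming the 
FileSystem implementation so that ServiceLoader can discover it (the class 
name below is the one used in the ozonefs module; verify against the source):
{code}
# META-INF/services/org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.ozone.OzoneFileSystem
{code}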



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1970) Upgrade Bootstrap and jQuery versions of Ozone web UIs

2019-09-06 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian resolved HDDS-1970.
--
Resolution: Fixed

> Upgrade Bootstrap and jQuery versions of Ozone web UIs 
> ---
>
> Key: HDDS-1970
> URL: https://issues.apache.org/jira/browse/HDDS-1970
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: website
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of Bootstrap and jQuery used by Ozone web UIs are 
> reported to have known medium-severity CVEs and need to be updated to the 
> latest versions.
>  
> I suggest updating Bootstrap and jQuery to 3.4.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-05 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2087 started by Vivek Ratnavel Subramanian.

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> We have a hard-coded config key in {{ChunkManagerFactory.java}}.
>  
> {code}
> boolean scrubber = config.getBoolean(
>  "hdds.containerscrub.enabled",
>  false);
> {code}
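
A sketch of the likely cleanup, assuming constants for the key and its default 
are added to {{HddsConfigKeys}} (the constant names below are illustrative):
{code:java}
// In HddsConfigKeys: define the key and its default exactly once.
public static final String HDDS_CONTAINER_SCRUB_ENABLED =
    "hdds.containerscrub.enabled";
public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;

// In ChunkManagerFactory: reference the constants instead of the literal.
boolean scrubber = config.getBoolean(
    HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED,
    HddsConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT);
{code}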



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-05 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDDS-2087:


Assignee: Vivek Ratnavel Subramanian  (was: Siddharth Wagle)

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> We have a hard-coded config key in {{ChunkManagerFactory.java}}.
>  
> {code}
> boolean scrubber = config.getBoolean(
>  "hdds.containerscrub.enabled",
>  false);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-29 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2050:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918192#comment-16918192
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2050:
--

I have a patch available to fix these errors - 
[https://github.com/apache/hadoop/pull/1374]

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}

[jira] [Updated] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2050:
-
Status: Patch Available  (was: In Progress)

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work started] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2050 started by Vivek Ratnavel Subramanian.

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Created] (HDDS-2052) Separate the metadata directories to store security certificates and keys for different services

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2052:


 Summary: Separate the metadata directories to store security 
certificates and keys for different services
 Key: HDDS-2052
 URL: https://issues.apache.org/jira/browse/HDDS-2052
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Security
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


Currently, certificates and keys are stored in ozone.metadata.dirs, and they 
need to be moved to a separate metadata directory for each service.
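
A hedged sketch of the intended lookup order, with a hypothetical per-service 
key falling back to the shared directory (the key name below is illustrative, 
not a finalized config):
{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Prefer a service-specific directory for certificates and keys, falling
// back to the shared ozone.metadata.dirs. "hdds.datanode.security.dir" is
// a hypothetical key used only for illustration.
static String getSecurityDir(OzoneConfiguration conf) {
  String serviceDir = conf.get("hdds.datanode.security.dir");
  if (serviceDir != null && !serviceDir.isEmpty()) {
    return serviceDir;
  }
  return conf.get("ozone.metadata.dirs");
}
{code}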



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918031#comment-16918031
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2050:
--

I am looking at ways to fix this and will update my findings here

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918029#comment-16918029
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2050:
--

More about optional dependencies can be found here - 
[https://yarnpkg.com/lang/en/docs/dependency-types/#toc-optionaldependencies] 

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-28 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918020#comment-16918020
 ] 

Vivek Ratnavel Subramanian commented on HDDS-2050:
--

It is an optional module, so we can safely ignore this error. The errors shown 
here will not cause any kind of build failure and will not affect the 
compilation in any way. [~nandakumar131] Did you get a build failure due to 
this error?

> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.<anonymous> 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\"
>  \"install\" \"--fallback-to-build\"
> [ERROR] node-pre-gyp ERR! cwd 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] node-pre-gyp ERR! node -v v12.1.0
> [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0
> [ERROR] node-pre-gyp ERR! not ok
> [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)"
> [INFO] Done in 102.54s.
> {noformat}

[jira] [Created] (HDDS-2047) Datanodes fail to come up after 10 retries in a secure environment

2019-08-27 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2047:


 Summary: Datanodes fail to come up after 10 retries in a secure 
environment
 Key: HDDS-2047
 URL: https://issues.apache.org/jira/browse/HDDS-2047
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode, Security
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


{code:java}
10:06:36.585 PM ERROR HddsDatanodeService
Error while storing SCM signed certificate.
java.net.ConnectException: Call From 
jmccarthy-ozone-secure-2.vpc.cloudera.com/10.65.50.127 to 
jmccarthy-ozone-secure-1.vpc.cloudera.com:9961 failed on connection exception: 
java.net.ConnectException: Connection refused; For more details see:  
http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy15.getDataNodeCertificate(Unknown Source)
at 
org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB.getDataNodeCertificateChain(SCMSecurityProtocolClientSideTranslatorPB.java:156)
at 
org.apache.hadoop.ozone.HddsDatanodeService.getSCMSignedCert(HddsDatanodeService.java:278)
at 
org.apache.hadoop.ozone.HddsDatanodeService.initializeCertificateClient(HddsDatanodeService.java:248)
at 
org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:211)
at 
org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:168)
at 
org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:143)
at 
org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:70)
at picocli.CommandLine.execute(CommandLine.java:1173)
at picocli.CommandLine.access$800(CommandLine.java:141)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
at 
org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:126)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 21 more
{code}

Datanodes try to get the SCM-signed certificate only 10 times, with an 
interval of 1 second between attempts. When SCM takes a little longer to come 
up, datanodes throw an exception and fail.
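
A rough sketch of the desired behavior, retrying the certificate request until 
SCM responds instead of capping at 10 attempts ({{getSCMSignedCert}} is the 
call from the stack trace above; the loop itself is illustrative):
{code:java}
// Keep requesting the SCM-signed certificate until SCM is reachable,
// sleeping one second between attempts instead of stopping after 10 tries.
void getCertificateWithUnboundedRetry(OzoneConfiguration config)
    throws InterruptedException {
  while (true) {
    try {
      getSCMSignedCert(config);
      return;
    } catch (IOException e) {
      LOG.warn("SCM not reachable yet, retrying in 1s", e);
      Thread.sleep(1000);
    }
  }
}
{code}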



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2041) Don't depend on DFSUtil to check HTTP policy

2019-08-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2041:


 Summary: Don't depend on DFSUtil to check HTTP policy
 Key: HDDS-2041
 URL: https://issues.apache.org/jira/browse/HDDS-2041
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: website
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Currently, BaseHttpServer uses DFSUtil to get the HTTP policy. As a result, when the 
HTTP policy is set to HTTPS in hdfs-site.xml, Ozone HTTP servers try to come up with 
HTTPS and fail if SSL certificates are not present in the required location.

Ozone web UIs should not depend on HDFS config to determine the HTTP policy. 
Instead, they should have their own config to determine the policy. 
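
A minimal sketch of what an Ozone-specific lookup could look like; the key name 
"ozone.http.policy" is an assumption for illustration, not the committed config key.

{code:java}
// Hypothetical sketch: resolve the HTTP policy from an Ozone-specific key
// instead of reading the HDFS one; "ozone.http.policy" is an assumed name.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpConfig;

public final class OzoneHttpPolicyUtil {
  public static HttpConfig.Policy getHttpPolicy(Configuration conf) {
    String value = conf.get("ozone.http.policy", "HTTP_ONLY");
    HttpConfig.Policy policy = HttpConfig.Policy.fromString(value);
    // Fall back to plain HTTP when the configured value is unrecognized.
    return policy == null ? HttpConfig.Policy.HTTP_ONLY : policy;
  }
}
{code}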



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2040) Fix TestSecureContainerServer.testClientServerRatisGrpc integration test failure

2019-08-26 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2040:


 Summary: Fix TestSecureContainerServer.testClientServerRatisGrpc 
integration test failure
 Key: HDDS-2040
 URL: https://issues.apache.org/jira/browse/HDDS-2040
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Security
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


The integration test TestSecureContainerServer.testClientServerRatisGrpc fails 
with the following error in trunk:


{code:java}
Caused by: org.apache.ratis.protocol.StateMachineException: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Block token verification failed. Fail to find any token (empty or null.)
{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2023) Fix rat check failures in trunk

2019-08-23 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2023:
-
Status: Patch Available  (was: In Progress)

> Fix rat check failures in trunk
> ---
>
> Key: HDDS-2023
> URL: https://issues.apache.org/jira/browse/HDDS-2023
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Several files in hadoop-ozone do not have Apache license headers and cause a 
> failure in trunk. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2023) Fix rat check failures in trunk

2019-08-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2023:


 Summary: Fix rat check failures in trunk
 Key: HDDS-2023
 URL: https://issues.apache.org/jira/browse/HDDS-2023
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Several files in hadoop-ozone do not have Apache license headers and cause a 
failure in trunk. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2023) Fix rat check failures in trunk

2019-08-23 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2023 started by Vivek Ratnavel Subramanian.

> Fix rat check failures in trunk
> ---
>
> Key: HDDS-2023
> URL: https://issues.apache.org/jira/browse/HDDS-2023
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Several files in hadoop-ozone do not have Apache license headers and cause a 
> failure in trunk. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2000) Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot

2019-08-22 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2000:
-
Status: Patch Available  (was: In Progress)

> Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot
> 
>
> Key: HDDS-2000
> URL: https://issues.apache.org/jira/browse/HDDS-2000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, SCM
>Reporter: Elek, Marton
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The OM/SCM web pages are broken due to the upgrade in HDFS-14729 (which is a 
> great patch on the Hadoop side). To have more stability, I propose to use our 
> own copies of jquery/bootstrap instead of copying the actual versions from 
> Hadoop trunk, which is a SNAPSHOT build.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2000) Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot

2019-08-22 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2000:
-
Status: In Progress  (was: Patch Available)

> Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot
> 
>
> Key: HDDS-2000
> URL: https://issues.apache.org/jira/browse/HDDS-2000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, SCM
>Reporter: Elek, Marton
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The OM/SCM web pages are broken due to the upgrade in HDFS-14729 (which is a 
> great patch on the Hadoop side). To have more stability, I propose to use our 
> own copies of jquery/bootstrap instead of copying the actual versions from 
> Hadoop trunk, which is a SNAPSHOT build.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-22 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1946:
-
Status: Patch Available  (was: In Progress)

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the keys/certs from OM will collide with SCM's.
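
One way to avoid the collision would be to give each component its own subdirectory 
under the shared metadata dir. A minimal sketch, with the path layout purely an 
assumption for illustration:

{code:java}
// Hypothetical sketch: derive a per-component keys/certs location so OM and
// SCM on one host don't collide; the directory layout here is an assumption.
import java.nio.file.Path;
import java.nio.file.Paths;

public final class SecurityDirs {
  // componentName would be "om" or "scm"; metadataDir comes from config.
  public static Path certDir(String metadataDir, String componentName) {
    return Paths.get(metadataDir, componentName, "certs");
  }

  public static Path keyDir(String metadataDir, String componentName) {
    return Paths.get(metadataDir, componentName, "keys");
  }
}
{code}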



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2009) scm web ui should publish the list of scm pipeline by type and factor

2019-08-22 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDDS-2009:


Assignee: Vivek Ratnavel Subramanian

> scm web ui should publish the list of scm pipeline by type and factor
> -
>
> Key: HDDS-2009
> URL: https://issues.apache.org/jira/browse/HDDS-2009
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Mukul Kumar Singh
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The SCM web UI should publish the list of SCM pipelines by type and factor; this 
> helps in monitoring the cluster in real time.
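
The grouping itself is straightforward; a minimal sketch of counting pipelines per 
type/factor key, where the Pipeline shape is a stand-in rather than the real SCM 
class:

{code:java}
// Minimal sketch: group pipelines by a "type/factor" key and count each
// group for display in the web UI. Pipeline is a stand-in type here.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public final class PipelineReport {

  public static final class Pipeline {
    final String type;  // e.g. "RATIS" or "STAND_ALONE"
    final int factor;   // e.g. 1 or 3
    Pipeline(String type, int factor) { this.type = type; this.factor = factor; }
  }

  public static Map<String, Long> countByTypeAndFactor(List<Pipeline> pipelines) {
    return pipelines.stream().collect(Collectors.groupingBy(
        p -> p.type + "/" + p.factor, Collectors.counting()));
  }
}
{code}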



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1157) TestOzoneContainerWithTLS is failing due to the missing native libraries

2019-08-21 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912599#comment-16912599
 ] 

Vivek Ratnavel Subramanian commented on HDDS-1157:
--

cc [~xyao]

> TestOzoneContainerWithTLS is failing due to the missing native libraries
> 
>
> Key: HDDS-1157
> URL: https://issues.apache.org/jira/browse/HDDS-1157
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> When we use an alpine-based (docker-in-docker) container to build, the native 
> TLS library can't be found:
> {code:java}
> java.lang.UnsatisfiedLinkError: failed to load the required native library
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.OpenSsl.ensureAvailability(OpenSsl.java:346)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:202)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.OpenSslContext.<init>(OpenSslContext.java:43)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:347)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:335)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:422)
>   at org.apache.ratis.thirdparty.io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:447)
>   at org.apache.ratis.grpc.server.GrpcService.<init>(GrpcService.java:123)
>   at org.apache.ratis.grpc.server.GrpcService.<init>(GrpcService.java:85)
>   at org.apache.ratis.grpc.server.GrpcService.<init>(GrpcService.java:47){code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Vivek Ratnavel Subramanian (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1694#comment-1694
 ] 

Vivek Ratnavel Subramanian commented on HDFS-14729:
---

Except for the whitespace errors, all other issues reported by the Jenkins job are 
not related to the patch. In this case, it is best to exclude the 
`hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-editable.css`
 file from whitespace checks, since it comes from a third-party vendor.

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Status: Patch Available  (was: Open)

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Status: Open  (was: Patch Available)

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Attachment: HDFS-14729.v1.patch

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Status: Patch Available  (was: In Progress)

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: HDFS-14729.v1.patch
>
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1979 started by Vivek Ratnavel Subramanian.

> Fix checkstyle errors
> -
>
> Key: HDDS-1979
> URL: https://issues.apache.org/jira/browse/HDDS-1979
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
> fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1979) Fix checkstyle errors

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1979:


 Summary: Fix checkstyle errors
 Key: HDDS-1979
 URL: https://issues.apache.org/jira/browse/HDDS-1979
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: SCM
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


There are checkstyle errors in ListPipelinesSubcommand.java that need to be 
fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1946 started by Vivek Ratnavel Subramanian.

> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> For example, when OM and SCM are deployed on the same host with 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, because 
> the keys/certs from OM will collide with SCM's.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1976) Ozone manager init fails when certificate is missing in a kerberized cluster

2019-08-16 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1976:


 Summary: Ozone manager init fails when certificate is missing in a 
kerberized cluster
 Key: HDDS-1976
 URL: https://issues.apache.org/jira/browse/HDDS-1976
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Security
Reporter: Vivek Ratnavel Subramanian
Assignee: Anu Engineer


When Ozone Manager gets into a state where the certificate is missing, it does not 
try to recover by creating a new certificate.


{code:java}
3:30:48.620 PM INFO OzoneManager Initializing secure OzoneManager.
3:30:49.788 PM INFO OMCertificateClient Loading certificate from location:/var/lib/hadoop-ozone/om/data/certs.
3:30:49.896 PM INFO OMCertificateClient Added certificate from file:/var/lib/hadoop-ozone/om/data/certs/8136899895890.crt.
3:30:49.904 PM INFO OMCertificateClient Added certificate from file:/var/lib/hadoop-ozone/om/data/certs/CA-1.crt.
3:30:49.930 PM ERROR OMCertificateClient Default certificate serial id is not set. Can't locate the default certificate for this client.
3:30:49.930 PM INFO OMCertificateClient Certificate client init case: 6
3:30:49.932 PM INFO OMCertificateClient Found private and public key but certificate is missing.
3:30:50.194 PM INFO OzoneManager Init response: RECOVER
3:30:50.230 PM ERROR OzoneManager OM security initialization failed. OM certificate is missing.
{code}
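
A minimal sketch of the missing recovery path, keyed off the certificate client's 
init response; all names below are illustrative assumptions, not the actual 
OzoneManager code.

{code:java}
// Hypothetical sketch: on RECOVER (keys present, certificate missing),
// re-run the CSR flow against SCM instead of aborting OM init.
public class OmSecurityInit {

  enum InitResponse { SUCCESS, GETCERT, RECOVER, FAILURE }

  void handleSecurityInit(InitResponse response) {
    switch (response) {
      case SUCCESS:
        break; // keys and certificate already in place
      case GETCERT:
      case RECOVER:
        // In the RECOVER case the key pair already exists; request a fresh
        // certificate from SCM rather than failing with "certificate missing".
        requestCertificateFromScm();
        break;
      default:
        throw new RuntimeException("OM security initialization failed.");
    }
  }

  void requestCertificateFromScm() {
    // Assumed helper: builds a CSR from the existing keys and calls SCM.
  }
}
{code}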



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1970) Upgrade Bootstrap and jQuery versions of Ozone web UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1970:


 Summary: Upgrade Bootstrap and jQuery versions of Ozone web UIs 
 Key: HDDS-1970
 URL: https://issues.apache.org/jira/browse/HDDS-1970
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: website
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


The current versions of bootstrap and jquery used by Ozone web UIs are reported 
to have known medium-severity CVEs and need to be updated to the latest 
versions.

 

I suggest updating bootstrap and jQuery to 3.4.1.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Summary: Upgrade Bootstrap and jQuery versions used in HDFS UIs  (was: 
Upgrade Bootstrap and jquery versions used in HDFS UIs)

> Upgrade Bootstrap and jQuery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jquery versions used in HDFS UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Description: 
The current versions of bootstrap and jquery have multiple medium-severity CVEs 
reported to date and need to be updated to the latest versions with no 
reported CVEs.

 

I suggest updating the following libraries:
||Library||From version||To version||
|Bootstrap|3.3.7|3.4.1|
|jQuery|3.3.1|3.4.1|

  was:
The current versions of bootstrap, jquery and wildfly have multiple 
medium-severity CVEs reported to date and need to be updated to the latest 
versions with no reported CVEs.

 

I suggest updating the following libraries:
||Library||From version||To version||
|Bootstrap|3.3.7|3.4.1|
|jQuery|3.3.1|3.4.1|
|Wildfly|11.0.0.Beta1|12.0.0|


> Upgrade Bootstrap and jquery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-14729) Upgrade Bootstrap and jquery versions used in HDFS UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14729 started by Vivek Ratnavel Subramanian.
-
> Upgrade Bootstrap and jquery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap and jquery have multiple medium-severity 
> CVEs reported to date and need to be updated to the latest versions with 
> no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14729) Upgrade Bootstrap and jquery versions used in HDFS UIs

2019-08-14 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDFS-14729:
--
Summary: Upgrade Bootstrap and jquery versions used in HDFS UIs  (was: 
Upgrade Bootstrap, jquery and wildfly)

> Upgrade Bootstrap and jquery versions used in HDFS UIs
> --
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap, jquery and wildfly have multiple 
> medium-severity CVEs reported to date and need to be updated to the latest 
> versions with no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|
> |Wildfly|11.0.0.Beta1|12.0.0|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14729) Upgrade Bootstrap, jquery and wildfly

2019-08-13 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDFS-14729:
-

Assignee: Vivek Ratnavel Subramanian

> Upgrade Bootstrap, jquery and wildfly
> -
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap, jquery and wildfly have multiple 
> medium-severity CVEs reported to date and need to be updated to the latest 
> versions with no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|
> |Wildfly|11.0.0.Beta1|12.0.0|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14729) Upgrade Bootstrap, jquery and wildfly

2019-08-13 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDFS-14729:
-

   Assignee: (was: Vivek Ratnavel Subramanian)
Component/s: (was: website)
 ui
 Issue Type: Task  (was: Bug)
Key: HDFS-14729  (was: HADOOP-16513)
Project: Hadoop HDFS  (was: Hadoop Common)

> Upgrade Bootstrap, jquery and wildfly
> -
>
> Key: HDFS-14729
> URL: https://issues.apache.org/jira/browse/HDFS-14729
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ui
>Reporter: Vivek Ratnavel Subramanian
>Priority: Major
>
> The current versions of bootstrap, jquery and wildfly have multiple 
> medium-severity CVEs reported to date and need to be updated to the latest 
> versions with no reported CVEs.
>  
> I suggest updating the following libraries:
> ||Library||From version||To version||
> |Bootstrap|3.3.7|3.4.1|
> |jQuery|3.3.1|3.4.1|
> |Wildfly|11.0.0.Beta1|12.0.0|



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1335) Basic Recon UI for serving up container key mapping.

2019-08-02 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1335:
-
Target Version/s: 0.5.0

> Basic Recon UI for serving up container key mapping.
> 
>
> Key: HDDS-1335
> URL: https://issues.apache.org/jira/browse/HDDS-1335
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1335) Basic Recon UI for serving up container key mapping.

2019-08-02 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1335:
-
Fix Version/s: (was: 0.4.1)

> Basic Recon UI for serving up container key mapping.
> 
>
> Key: HDDS-1335
> URL: https://issues.apache.org/jira/browse/HDDS-1335
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1335) Basic Recon UI for serving up container key mapping.

2019-08-02 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1335 started by Vivek Ratnavel Subramanian.

> Basic Recon UI for serving up container key mapping.
> 
>
> Key: HDDS-1335
> URL: https://issues.apache.org/jira/browse/HDDS-1335
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Fix For: 0.4.1
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1788) Fix kerberos principal error in Ozone Recon

2019-08-02 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1788:
-
Summary: Fix kerberos principal error in Ozone Recon  (was: Add kerberos 
support to Ozone Recon)

> Fix kerberos principal error in Ozone Recon
> ---
>
> Key: HDDS-1788
> URL: https://issues.apache.org/jira/browse/HDDS-1788
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Recon fails to start up in a kerberized cluster with the following error:
> {code:java}
> Failed startup of context o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
> javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
>   at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
>   at org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
>   at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
>   at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
>   at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
>   at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
>   at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>   at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>   at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
>   at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>   at org.eclipse.jetty.server.Server.start(Server.java:427)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>   at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   at org.eclipse.jetty.server.Server.doStart(Server.java:394)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140)
>   at org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175)
>   at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102)
>   at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:1173)
>   at picocli.CommandLine.access$800(CommandLine.java:141)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
>   at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>   at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
>   at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
>   at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
>   at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
>   at org.apache.hadoop.ozone.recon.ReconServer.main(ReconServer.java:61)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1873) Recon should store last successful run timestamp for each task

2019-07-29 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1873:


 Summary: Recon should store last successful run timestamp for each 
task
 Key: HDDS-1873
 URL: https://issues.apache.org/jira/browse/HDDS-1873
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian


Recon should store the timestamp at which the last Ozone Manager snapshot was 
received, along with the timestamp of the last successful run of each task.

This is important to give users a sense of how fresh the data they are looking 
at is. We need this per task because some tasks might fail to run, or might 
take much longer to run than others, and this needs to be reflected in the UI 
for a better and more consistent user experience.
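
A minimal sketch of such tracking; the class and method names are assumptions, and 
a real implementation would likely persist these timestamps rather than hold them 
in memory.

{code:java}
// Minimal sketch (names are assumptions): track the last successful run per
// Recon task, plus the last OM snapshot time, for display in the UI.
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class TaskStatusTracker {
  private final Map<String, Instant> lastSuccessfulRun = new ConcurrentHashMap<>();
  private volatile Instant lastSnapshotReceived = Instant.EPOCH;

  public void recordSnapshotReceived() {
    lastSnapshotReceived = Instant.now();
  }

  public void recordTaskSuccess(String taskName) {
    lastSuccessfulRun.put(taskName, Instant.now());
  }

  public Instant getLastSuccessfulRun(String taskName) {
    // EPOCH signals "never ran successfully" to the UI.
    return lastSuccessfulRun.getOrDefault(taskName, Instant.EPOCH);
  }

  public Instant getLastSnapshotReceived() {
    return lastSnapshotReceived;
  }
}
{code}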



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1788) Add kerberos support to Ozone Recon

2019-07-11 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1788:


 Summary: Add kerberos support to Ozone Recon
 Key: HDDS-1788
 URL: https://issues.apache.org/jira/browse/HDDS-1788
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Recon fails to start up in a kerberized cluster with the following error:


{code:java}
Failed startup of context o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
  at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
  at org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
  at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
  at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
  at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
  at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
  at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
  at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
  at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
  at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
  at org.eclipse.jetty.server.Server.start(Server.java:427)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
  at org.eclipse.jetty.server.Server.doStart(Server.java:394)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140)
  at org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175)
  at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102)
  at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50)
  at picocli.CommandLine.execute(CommandLine.java:1173)
  at picocli.CommandLine.access$800(CommandLine.java:141)
  at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
  at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
  at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
  at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
  at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
  at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
  at org.apache.hadoop.ozone.recon.ReconServer.main(ReconServer.java:61)
{code}
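
The fix amounts to feeding the hadoop-auth filter a principal and keytab from 
Recon's own configuration. A minimal sketch follows; the ozone.recon.* key names 
are assumptions, while "kerberos.principal" and "kerberos.keytab" are the standard 
hadoop-auth filter parameters.

{code:java}
// Hypothetical sketch: KerberosAuthenticationHandler fails when the filter's
// "kerberos.principal" parameter is empty, so Recon's HTTP server must pass
// the principal and keytab from its own config. Key names are assumptions.
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public final class ReconAuthParams {
  public static Map<String, String> build(Configuration conf) {
    Map<String, String> params = new HashMap<>();
    params.put("type", "kerberos");
    params.put("kerberos.principal",
        conf.get("ozone.recon.authentication.kerberos.principal", ""));
    params.put("kerberos.keytab",
        conf.get("ozone.recon.authentication.kerberos.keytab", ""));
    return params;
  }
}
{code}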



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1788) Add kerberos support to Ozone Recon

2019-07-11 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1788 started by Vivek Ratnavel Subramanian.

> Add kerberos support to Ozone Recon
> ---
>
> Key: HDDS-1788
> URL: https://issues.apache.org/jira/browse/HDDS-1788
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Recon fails to start up in a kerberized cluster with the following error:
> {code:java}
> Failed startup of context o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
> javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
>   at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
>   at org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
>   at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
>   at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
>   at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)
>   at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>   at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)
>   at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)
>   at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>   at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>   at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
>   at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>   at org.eclipse.jetty.server.Server.start(Server.java:427)
>   at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>   at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   at org.eclipse.jetty.server.Server.doStart(Server.java:394)
>   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140)
>   at org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175)
>   at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102)
>   at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:1173)
>   at picocli.CommandLine.access$800(CommandLine.java:141)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
>   at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
>   at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>   at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
>   at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
>   at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
>   at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
>   at org.apache.hadoop.ozone.recon.ReconServer.main(ReconServer.java:61)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1585) Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-08 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1585 started by Vivek Ratnavel Subramanian.

> Add LICENSE.txt and NOTICE.txt to Ozone Recon Web
> -
>
> Key: HDDS-1585
> URL: https://issues.apache.org/jira/browse/HDDS-1585
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-07-01 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1705 started by Vivek Ratnavel Subramanian.

> Recon: Add estimatedTotalCount to the response of containers and 
> containers/{id} endpoints
> --
>
> Key: HDDS-1705
> URL: https://issues.apache.org/jira/browse/HDDS-1705
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1705) Recon: Add estimatedTotalCount to the response of containers and containers/{id} endpoints

2019-06-18 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1705:


 Summary: Recon: Add estimatedTotalCount to the response of 
containers and containers/{id} endpoints
 Key: HDDS-1705
 URL: https://issues.apache.org/jira/browse/HDDS-1705
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian
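
A note on implementation: RocksDB itself exposes an estimated key count, which 
could back the estimatedTotalCount field without a full scan. A minimal sketch, 
assuming the RocksDB Java API; the class name is illustrative only.

{code:java}
// Minimal sketch: "rocksdb.estimate-num-keys" is a built-in RocksDB property
// that returns an approximate key count without iterating the whole DB.
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public final class KeyCountEstimator {
  public static long estimate(RocksDB db) throws RocksDBException {
    return Long.parseLong(db.getProperty("rocksdb.estimate-num-keys"));
  }
}
{code}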






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-18 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1685:
-
Description: * Support "start" query param to seek to the given key in 
RocksDB.  (was: * Support "start" query param to seek to the given key in 
RocksDB.
 * Add estimatedTotalCount to the response)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> * Support "start" query param to seek to the given key in RocksDB.
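
A minimal sketch of the seek, assuming the RocksDB Java API; the class and method 
names are illustrative rather than the actual Recon code, and wiring into the REST 
endpoint is omitted.

{code:java}
// Minimal sketch: seek() positions the iterator at the first key >= startKey,
// so a "start" query param maps directly onto a RocksDB seek-and-scan.
import java.util.ArrayList;
import java.util.List;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public final class ContainerKeyScanner {
  public static List<String> scanFrom(RocksDB db, String startKey, int limit) {
    List<String> keys = new ArrayList<>();
    try (RocksIterator it = db.newIterator()) {
      it.seek(startKey.getBytes());
      for (; it.isValid() && keys.size() < limit; it.next()) {
        keys.add(new String(it.key()));
      }
    }
    return keys;
  }
}
{code}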



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-18 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1685 started by Vivek Ratnavel Subramanian.

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> * Support "start" query param to seek to the given key in RocksDB.
>  * Add estimatedTotalCount to the response



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints

2019-06-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1685:
-
Summary: Recon: Add support for "start" query param to containers and 
containers/{id} endpoints  (was: Add support for "start" query param to 
containers and containers/{id} API in Recon)

> Recon: Add support for "start" query param to containers and containers/{id} 
> endpoints
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> * Support "start" query param to seek to the given key in RocksDB.
>  * Add estimatedTotalCount to the response



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1685) Add support for "start" query param to containers and containers/{id} API in Recon

2019-06-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1685:
-
Summary: Add support for "start" query param to containers and 
containers/{id} API in Recon  (was: Add "start" support to containers and 
containers/{id} API in Recon)

> Add support for "start" query param to containers and containers/{id} API in 
> Recon
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> * Support "start" query param to seek to the given key in RocksDB.
>  * Add estimatedTotalCount to the response



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1685) Add "start" support to containers and containers/{id} API in Recon

2019-06-17 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1685:
-
Issue Type: Sub-task  (was: Task)
Parent: HDDS-1084

> Add "start" support to containers and containers/{id} API in Recon
> --
>
> Key: HDDS-1685
> URL: https://issues.apache.org/jira/browse/HDDS-1685
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> * Support "start" query param to seek to the given key in RocksDB.
>  * Add estimatedTotalCount to the response



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


