[jira] [Updated] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-15159:
--
Priority: Trivial  (was: Major)

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>Priority: Trivial
>
> If a user provides a value for HADOOP_WORKERS in hadoop-env.sh, 
> hadoop_connect_to_hosts shouldn't fail when the sbin/start and sbin/stop 
> commands are used.  Instead, it should just use the HADOOP_WORKERS_NAMES 
> value (probably with no warning, since it is a fairly common thing to do).
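The requested behavior can be sketched as a small bash function (hypothetical names; the real implementation lives in libexec/hadoop-functions.sh and may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the requested behavior: when both variables are
# defined, prefer the explicit HADOOP_WORKER_NAMES list instead of aborting.
connect_to_hosts_sketch() {
  if [[ -n "${HADOOP_WORKER_NAMES:-}" ]]; then
    # An explicit name list wins, even if HADOOP_WORKERS is also set.
    echo "connecting to: ${HADOOP_WORKER_NAMES}"
  elif [[ -n "${HADOOP_WORKERS:-}" ]]; then
    echo "reading worker list from: ${HADOOP_WORKERS}"
  else
    echo "no workers configured" >&2
    return 1
  fi
}

# Both defined: previously an abort, here the name list is simply used.
HADOOP_WORKERS="/etc/hadoop/workers"
HADOOP_WORKER_NAMES="host1 host2"
connect_to_hosts_sketch
```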



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-15159:
-

Assignee: Allen Wittenauer

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>Assignee: Allen Wittenauer
>Priority: Trivial
>
> If a user provides a value for HADOOP_WORKERS in hadoop-env.sh, 
> hadoop_connect_to_hosts shouldn't fail when the sbin/start and sbin/stop 
> commands are used.  Instead, it should just use the HADOOP_WORKERS_NAMES 
> value (probably with no warning, since it is a fairly common thing to do).






[jira] [Updated] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-15159:
--
Component/s: (was: common)
 scripts

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>Priority: Trivial
>
> If a user provides a value for HADOOP_WORKERS in hadoop-env.sh, 
> hadoop_connect_to_hosts shouldn't fail when the sbin/start and sbin/stop 
> commands are used.  Instead, it should just use the HADOOP_WORKERS_NAMES 
> value (probably with no warning, since it is a fairly common thing to do).






[jira] [Updated] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-15159:
--
Description: If a user provides a value for HADOOP_WORKERS in 
hadoop-env.sh, hadoop_connect_to_hosts shouldn't fail when the sbin/start and 
sbin/stop commands are used.  Instead, it should just use the 
HADOOP_WORKERS_NAMES value (probably with no warning, since it is a fairly 
common thing to do).  (was: run   ./stop-dfs.sh 

ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping datanodes
10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
10.50.132.147]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
)

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>
> If a user provides a value for HADOOP_WORKERS in hadoop-env.sh, 
> hadoop_connect_to_hosts shouldn't fail when the sbin/start and sbin/stop 
> commands are used.  Instead, it should just use the HADOOP_WORKERS_NAMES 
> value (probably with no warning, since it is a fairly common thing to do).






[jira] [Commented] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314375#comment-16314375
 ] 

Allen Wittenauer commented on HADOOP-15159:
---

There is a (trivial) bug here. That said, the line you've got uncommented is 
the default.  So re-commenting it will get you working again.
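In hadoop-env.sh terms, the workaround amounts to restoring the comment marker (stock layout assumed; adjust the path for your install):

```shell
# etc/hadoop/hadoop-env.sh -- re-comment this line; the built-in default is
# the same value, so nothing else needs to change:
#
# export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
```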

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>
> run   ./stop-dfs.sh 
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping datanodes
> 10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
> 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.






[jira] [Updated] (HADOOP-15159) hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and HADOOP_WORKERS_NAMES are defined

2018-01-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-15159:
--
Summary: hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
HADOOP_WORKERS_NAMES are defined  (was: hadoop 3.0 Operation and maintenance 
shell script ERROR)

> hadoop_connect_to_hosts shouldn't fail if both HADOOP_WORKERS and 
> HADOOP_WORKERS_NAMES are defined
> --
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>
> run   ./stop-dfs.sh 
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping datanodes
> 10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
> 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.






[jira] [Commented] (HADOOP-15159) hadoop 3.0 Operation and maintenance shell script ERROR

2018-01-05 Thread gehaijiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314351#comment-16314351
 ] 

gehaijiang commented on HADOOP-15159:
-

hadoop-env.sh includes: export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"

> hadoop 3.0 Operation and maintenance shell script ERROR
> ---
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>
> run   ./stop-dfs.sh 
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping datanodes
> 10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
> 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.






[jira] [Commented] (HADOOP-15159) hadoop 3.0 Operation and maintenance shell script ERROR

2018-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314213#comment-16314213
 ] 

Allen Wittenauer commented on HADOOP-15159:
---

Is HADOOP_WORKERS defined in hadoop-env.sh?

> hadoop 3.0 Operation and maintenance shell script ERROR
> ---
>
> Key: HADOOP-15159
> URL: https://issues.apache.org/jira/browse/HADOOP-15159
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0
> Environment: hadoop 3.0.0
>Reporter: gehaijiang
>
> run   ./stop-dfs.sh 
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping datanodes
> 10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> 10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
> HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
> Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
> Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
> 10.50.132.147]
> ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.






[jira] [Created] (HADOOP-15162) UserGroupInformation.createRemoteUser hardcodes authentication method to SIMPLE

2018-01-05 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-15162:
--

 Summary: UserGroupInformation.createRemoteUser hardcodes 
authentication method to SIMPLE
 Key: HADOOP-15162
 URL: https://issues.apache.org/jira/browse/HADOOP-15162
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Eric Yang


{{UserGroupInformation.createRemoteUser(String user)}} has its authentication 
method hard-coded to SIMPLE by HADOOP-10683.  This bypasses the proxyuser ACL 
check and the isSecurityEnabled check, and allows the caller to impersonate 
anyone.  This method could be abused in the main code base, which can cause 
parts of Hadoop to become insecure, with no proxyuser check, in both SIMPLE 
and Kerberos-enabled environments.






[jira] [Commented] (HADOOP-15006) Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS

2018-01-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314171#comment-16314171
 ] 

Aaron Fabbri commented on HADOOP-15006:
---

Thanks again for writing this up [~moist]--it is very helpful. I'm in general 
agreement with the discussion here.

The length / seek issue is interesting.

Do you have any good links for further reading on the crypto algorithms, 
particularly the NoPadding variant you mention?  (How do lengths and byte 
offsets map from the user data to the encrypted stream?)

What are the actual atomicity requirements? Specifically, how do we handle 
multiple clients racing to create the same path?

Option 5 (store encryption metadata in Dynamo, but in its own separate table) 
sounds good to me. As we discussed offline, data in S3Guard has a different 
lifetime (it is not required to be retained, and that policy offers multiple 
benefits for S3Guard but would cause data loss for CSE). Also since the scope 
of the encryption zone is the bucket, we could get by with a very low 
provisioned I/O budget on the Dynamo table and save money, no?

I'm available any time to give a walkthrough of S3Guard's DynamoDB logic or 
answer any questions about it.

Also thanks [~xiaochen] and Steve for taking time to look over this.

> Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS
> ---
>
> Key: HADOOP-15006
> URL: https://issues.apache.org/jira/browse/HADOOP-15006
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3, kms
>Reporter: Steve Moist
>Priority: Minor
> Attachments: S3-CSE Proposal.pdf
>
>
> This is for the proposal to introduce Client Side Encryption to S3 in such a 
> way that it can leverage HDFS transparent encryption, use the Hadoop KMS to 
> manage keys, use the `hdfs crypto` command line tools to manage encryption 
> zones in the cloud, and enable distcp to copy from HDFS to S3 (and 
> vice-versa) with data still encrypted.






[jira] [Created] (HADOOP-15161) s3a: Stream and common statistics missing from metrics

2018-01-05 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15161:
--

 Summary: s3a: Stream and common statistics missing from metrics
 Key: HADOOP-15161
 URL: https://issues.apache.org/jira/browse/HADOOP-15161
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.0
Reporter: Sean Mackrory
Assignee: Sean Mackrory


Input stream statistics aren't being passed through to metrics once merged. 
Also, the following "common statistics" are not being incremented or tracked by 
metrics:
{code}
OP_APPEND
OP_CREATE
OP_CREATE_NON_RECURSIVE
OP_DELETE
OP_GET_CONTENT_SUMMARY
OP_GET_FILE_CHECKSUM
OP_GET_STATUS
OP_MODIFY_ACL_ENTRIES
OP_OPEN
OP_REMOVE_ACL
OP_REMOVE_ACL_ENTRIES
OP_REMOVE_DEFAULT_ACL
OP_SET_ACL
OP_SET_OWNER
OP_SET_PERMISSION
OP_SET_TIMES
OP_TRUNCATE
{code}

Most of those make sense, but we can easily add OP_CREATE (and its 
non-recursive cousin), OP_DELETE, and OP_OPEN.






[jira] [Commented] (HADOOP-15160) Confusing text in http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

2018-01-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314068#comment-16314068
 ] 

Ajay Kumar commented on HADOOP-15160:
-

Agree, I think those two sections with incompatible changes should be merged. 
Thoughts on {{Delete an optional field as long as the optional field has 
reasonable defaults to allow deletions}}? To me this looks like a compatible 
change.

> Confusing text in 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
> 
>
> Key: HADOOP-15160
> URL: https://issues.apache.org/jira/browse/HADOOP-15160
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Jim Showalter
>Priority: Minor
>
> The text in wire formats, policy, is confusing.
> First, there are two subsections with the same heading:
> The following changes to a .proto file SHALL be considered incompatible:
> The following changes to a .proto file SHALL be considered incompatible:
> Second, one of the items listed under the first of those two headings seems 
> like it is a compatible change, not an incompatible change:
> Delete an optional field as long as the optional field has reasonable 
> defaults to allow deletions






[jira] [Commented] (HADOOP-15107) Prove the correctness of the new committers, or fix where they are not correct

2018-01-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313776#comment-16313776
 ] 

Steve Loughran commented on HADOOP-15107:
-

Committers can reduce load on shards by shuffling their requests a bit

* Staging task commit: schedule the largest file first, then shuffle the rest. 
Ensures that the biggest file isn't the straggler, and the rest go wherever.
* All job commit: shuffle the list of pending files
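The two scheduling tweaks above can be sketched in shell (illustrative only; the real committers are Java, and `schedule_uploads` is a made-up name). Input is one `size<TAB>path` line per pending file; output leads with the largest file, then the remainder in random order:

```shell
#!/usr/bin/env bash
# Emit the largest pending file first, then the rest shuffled, so the
# biggest upload starts immediately and the remaining requests spread load.
schedule_uploads() {
  sort -t $'\t' -k1,1 -rn | {
    IFS= read -r largest        # biggest file leads the schedule
    printf '%s\n' "$largest"
    shuf                        # remaining requests in random order
  }
}

printf '10\tpart-00\n300\tpart-01\n20\tpart-02\n' | schedule_uploads
```

The first output line is deterministic (the largest file); the order of the remaining lines varies run to run, which is the point.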


> Prove the correctness of the new committers, or fix where they are not correct
> --
>
> Key: HADOOP-15107
> URL: https://issues.apache.org/jira/browse/HADOOP-15107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I'm writing the paper on the committers, which, being a proper paper, 
> requires me to show that the committers work.
> # define the requirements of a "Correct" committed job (this applies to the 
> FileOutputCommitter too)
> # show that the Staging committer meets these requirements (most of this is 
> implicit in that it uses the V1 FileOutputCommitter to marshall .pendingset 
> lists from committed tasks to the final destination, where they are read and 
> committed.)
> # Show the magic committer also works.
> I'm now not sure that the magic committer works.






[jira] [Created] (HADOOP-15160) Confusing text in http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

2018-01-05 Thread Jim Showalter (JIRA)
Jim Showalter created HADOOP-15160:
--

 Summary: Confusing text in 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
 Key: HADOOP-15160
 URL: https://issues.apache.org/jira/browse/HADOOP-15160
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Jim Showalter
Priority: Minor


The text in wire formats, policy, is confusing.

First, there are two subsections with the same heading:

The following changes to a .proto file SHALL be considered incompatible:
The following changes to a .proto file SHALL be considered incompatible:

Second, one of the items listed under the first of those two headings seems 
like it is a compatible change, not an incompatible change:

Delete an optional field as long as the optional field has reasonable defaults 
to allow deletions






[jira] [Commented] (HADOOP-15157) Zookeeper authentication related properties to support CredentialProviders

2018-01-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313476#comment-16313476
 ] 

genericqa commented on HADOOP-15157:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 15m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 18s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 26s | trunk passed |
| +1 | compile | 13m 17s | trunk passed |
| +1 | checkstyle | 2m 5s | trunk passed |
| +1 | mvnsite | 2m 11s | trunk passed |
| +1 | shadedclient | 14m 43s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 25s | trunk passed |
| +1 | javadoc | 1m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 41s | the patch passed |
| +1 | compile | 11m 44s | the patch passed |
| +1 | javac | 11m 44s | the patch passed |
| -0 | checkstyle | 2m 3s | root: The patch generated 2 new + 83 unchanged - 0 fixed = 85 total (was 83) |
| +1 | mvnsite | 2m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 59s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 42s | the patch passed |
| +1 | javadoc | 1m 50s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 40s | hadoop-common in the patch passed. |
| -1 | unit | 85m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 197m 14s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15157 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904800/HADOOP-15157.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a9fcb8052d28 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0c75d06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13925/artifact/out/diff-checkstyle-root.txt |

[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-05 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313216#comment-16313216
 ] 

wujinhu commented on HADOOP-15027:
--

Hi [~Sammi],
Thanks for your review. I have attached some performance data; you can find it 
in the comments above.

> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> Aliyun OSS, so we can refactor it to use multi-thread pre-read to improve 
> read performance.
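The pattern the ticket describes can be sketched as follows: split the object into fixed-size ranges, fetch the ranges concurrently, and consume them in order. This is a self-contained illustration only; class and method names are hypothetical, and the real AliyunOSSInputStream patch differs in its details (in the real stream each task would issue a ranged GET against OSS instead of copying from a local buffer).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class PreReadSketch {
    static byte[] readAll(byte[] source, int partSize) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<byte[]>> parts = new ArrayList<>();
        for (int off = 0; off < source.length; off += partSize) {
            final int start = off;
            final int len = Math.min(partSize, source.length - start);
            // Each task pre-reads one range concurrently; in the real
            // stream this would be a ranged GET against OSS.
            parts.add(pool.submit(() -> {
                byte[] buf = new byte[len];
                System.arraycopy(source, start, buf, 0, len);
                return buf;
            }));
        }
        try {
            byte[] out = new byte[source.length];
            int pos = 0;
            // Consume the ranges in submission order so the caller
            // still sees a sequential stream of bytes.
            for (Future<byte[]> f : parts) {
                byte[] buf = f.get();
                System.arraycopy(buf, 0, out, pos, buf.length);
                pos += buf.length;
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

The win comes from overlapping network latency across ranges while keeping delivery order sequential for the reader.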



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15159) hadoop 3.0 Operation and maintenance shell script ERROR

2018-01-05 Thread gehaijiang (JIRA)
gehaijiang created HADOOP-15159:
---

 Summary: hadoop 3.0 Operation and maintenance shell script ERROR
 Key: HADOOP-15159
 URL: https://issues.apache.org/jira/browse/HADOOP-15159
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0
 Environment: hadoop 3.0.0

Reporter: gehaijiang


run   ./stop-dfs.sh 

ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping datanodes
10.50.132.147: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.151: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.146: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.150: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.154: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.145: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.152: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.148: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.149: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
10.50.132.153: WARNING: HADOOP_DATANODE_OPTS has been replaced by 
HDFS_DATANODE_OPTS. Using value of HADOOP_DATANODE_OPTS.
Stopping journal nodes [10.50.132.145 10.50.132.146 10.50.132.147]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping ZK Failover Controllers on NN hosts [10.50.132.145 10.50.132.146 
10.50.132.147]
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
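The precedence the reporter expects from hadoop_connect_to_hosts can be sketched as below: when both variables are set, prefer HADOOP_WORKER_NAMES silently instead of aborting, since defining HADOOP_WORKERS in hadoop-env.sh is common. The variable and function names mirror hadoop-functions.sh, but the body is purely illustrative, not the actual patch.

```shell
hadoop_connect_to_hosts() {
  if [ -n "${HADOOP_WORKER_NAMES}" ]; then
    # HADOOP_WORKER_NAMES wins; HADOOP_WORKERS is ignored without a warning.
    echo "using names: ${HADOOP_WORKER_NAMES}"
  elif [ -n "${HADOOP_WORKERS}" ]; then
    echo "using workers file: ${HADOOP_WORKERS}"
  else
    echo "ERROR: no workers defined" >&2
    return 1
  fi
}

# Both defined, as in the report above: no abort, names take precedence.
HADOOP_WORKERS=/etc/hadoop/workers
HADOOP_WORKER_NAMES="host1 host2"
hadoop_connect_to_hosts   # prints: using names: host1 host2
```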







[jira] [Updated] (HADOOP-15157) Zookeeper authentication related properties to support CredentialProviders

2018-01-05 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated HADOOP-15157:
-
Description: 
The hadoop.zk.auth and ha.zookeeper.auth properties currently support either 
plain-text authentication info (in scheme:value format), or a @/path/to/file 
notation which points to a plain-text file.

This ticket proposes that the hadoop.zk.auth and ha.zookeeper.auth properties 
can be retrieved via the CredentialProvider API that's been configured using 
credential.provider.path, with fallback provided to the clear-text value or 
@/path/to/file notation.

  was:
The hadoop.zk.auth and ha.zookeeper.auth properties currently support either 
plain-text authentication info (in scheme:value format), or a @/path/to/file 
notation which points to a plain-text file.

This ticket proposes that the value of these properties can also be 
CredentialProvider URIs (such as a jceks or localjceks URI). This allows users 
to point to an encrypted store containing the authentication info.


> Zookeeper authentication related properties to support CredentialProviders
> --
>
> Key: HADOOP-15157
> URL: https://issues.apache.org/jira/browse/HADOOP-15157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Minor
> Attachments: HADOOP-15157.000.patch, HADOOP-15157.001.patch
>
>
> The hadoop.zk.auth and ha.zookeeper.auth properties currently support either 
> plain-text authentication info (in scheme:value format), or a 
> @/path/to/file notation which points to a plain-text file.
> This ticket proposes that the hadoop.zk.auth and ha.zookeeper.auth properties 
> can be retrieved via the CredentialProvider API that's been configured using 
> credential.provider.path, with fallback provided to the clear-text value 
> or @/path/to/file notation.
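The proposed lookup order can be illustrated with a rough, self-contained sketch: consult the configured credential providers first (e.g. a jceks keystore listed on credential.provider.path), and fall back to the clear-text property value. The class and method names here are hypothetical, not Hadoop's actual CredentialProvider API.

```java
import java.util.Map;

// Illustrative sketch of the lookup order the ticket proposes for
// hadoop.zk.auth / ha.zookeeper.auth. The maps stand in for a real
// credential store and a Configuration object.
class ZkAuthLookup {
    static String resolveAuth(Map<String, String> providerStore,
                              Map<String, String> config,
                              String key) {
        // 1. A value from the credential provider store wins.
        String fromProvider = providerStore.get(key);
        if (fromProvider != null) {
            return fromProvider;
        }
        // 2. Fallback: the clear-text value (or @/path/to/file
        //    notation) from the configuration itself.
        return config.get(key);
    }
}
```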






[jira] [Updated] (HADOOP-15157) Zookeeper authentication related properties to support CredentialProviders

2018-01-05 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated HADOOP-15157:
-
Attachment: HADOOP-15157.001.patch

Thanks [~lmccay] for the valuable feedback.
1 - thanks for the correction - indeed credential.provider.path should be used. 
Actually, even patch 000 was relying on credential.provider.path, and was not 
working with a URI that's not present in credential.provider.path. I'll edit 
the ticket description to reflect this: the hadoop.zk.auth and 
ha.zookeeper.auth properties can be retrieved via the CredentialProvider API 
that's been configured using credential.provider.path, with fallback to the 
clear-text value or @/path/to/file notation.
2 - due to the changes in point 1), this does not need to be addressed anymore
3 - fixed it
4 - I've adjusted the documentation.

> Zookeeper authentication related properties to support CredentialProviders
> --
>
> Key: HADOOP-15157
> URL: https://issues.apache.org/jira/browse/HADOOP-15157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Minor
> Attachments: HADOOP-15157.000.patch, HADOOP-15157.001.patch
>
>
> The hadoop.zk.auth and ha.zookeeper.auth properties currently support either 
> plain-text authentication info (in scheme:value format), or a 
> @/path/to/file notation which points to a plain-text file.
> This ticket proposes that the value of these properties can also be 
> CredentialProvider URIs (such as a jceks or localjceks URI). This allows 
> users to point to an encrypted store containing the authentication info.






[jira] [Updated] (HADOOP-15145) Remove the CORS related code in JMXJsonServlet

2018-01-05 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-15145:

Target Version/s: 2.8.4

> Remove the CORS related code in JMXJsonServlet
> --
>
> Key: HADOOP-15145
> URL: https://issues.apache.org/jira/browse/HADOOP-15145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-15145.01.patch
>
>
> {{JMXJsonServlet.java}} uses a hardcoded value for 
> "Access-Control-Allow-Origin"; this was added in HADOOP-11385.
> However, this change is no longer required after YARN-4009, which added a 
> new filter for CORS support.
> Please refer to the 
> [CORS|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/HttpAuthentication.html]
>  section of the HttpAuthentication document.
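The filter-based replacement is driven by configuration rather than hardcoded headers. As a rough sketch, the cross-origin filter is enabled with core-site.xml properties along these lines (property names per the Hadoop HTTP cross-origin support added by YARN-4009; the values below are examples, not recommendations):

```xml
<!-- Enable the CORS filter instead of a hardcoded
     Access-Control-Allow-Origin header in JMXJsonServlet. -->
<property>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-methods</name>
  <value>GET,POST,HEAD</value>
</property>
```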






[jira] [Commented] (HADOOP-15145) Remove the CORS related code in JMXJsonServlet

2018-01-05 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312946#comment-16312946
 ] 

Vinayakumar B commented on HADOOP-15145:


It's a good security improvement to have.
Will this be an incompatible change for existing users? Should it be 
considered incompatible?

> Remove the CORS related code in JMXJsonServlet
> --
>
> Key: HADOOP-15145
> URL: https://issues.apache.org/jira/browse/HADOOP-15145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-15145.01.patch
>
>
> {{JMXJsonServlet.java}} uses a hardcoded value for 
> "Access-Control-Allow-Origin"; this was added in HADOOP-11385.
> However, this change is no longer required after YARN-4009, which added a 
> new filter for CORS support.
> Please refer to the 
> [CORS|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/HttpAuthentication.html]
>  section of the HttpAuthentication document.






[jira] [Commented] (HADOOP-15027) AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to Aliyun OSS performance

2018-01-05 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312897#comment-16312897
 ] 

SammiChen commented on HADOOP-15027:


Hi [~wujinhu], thanks for refining the patch. Can you add some performance 
comparison data here, comparing the current multi-thread pre-read with the 
previous single-thread implementation?

> AliyunOSS: Support multi-thread pre-read to improve read from Hadoop to 
> Aliyun OSS performance
> --
>
> Key: HADOOP-15027
> URL: https://issues.apache.org/jira/browse/HADOOP-15027
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15027.001.patch, HADOOP-15027.002.patch, 
> HADOOP-15027.003.patch, HADOOP-15027.004.patch, HADOOP-15027.005.patch, 
> HADOOP-15027.006.patch, HADOOP-15027.007.patch, HADOOP-15027.008.patch, 
> HADOOP-15027.009.patch
>
>
> Currently, AliyunOSSInputStream uses a single thread to read data from 
> Aliyun OSS, so we can refactor it to use multi-thread pre-read to improve 
> read performance.






[jira] [Commented] (HADOOP-15145) Remove the CORS related code in JMXJsonServlet

2018-01-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312739#comment-16312739
 ] 

Brahma Reddy Battula commented on HADOOP-15145:
---

Nice catch [~surendrasingh].
 
Patch LGTM.

> Remove the CORS related code in JMXJsonServlet
> --
>
> Key: HADOOP-15145
> URL: https://issues.apache.org/jira/browse/HADOOP-15145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-15145.01.patch
>
>
> {{JMXJsonServlet.java}} uses a hardcoded value for 
> "Access-Control-Allow-Origin"; this was added in HADOOP-11385.
> However, this change is no longer required after YARN-4009, which added a 
> new filter for CORS support.
> Please refer to the 
> [CORS|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/HttpAuthentication.html]
>  section of the HttpAuthentication document.






[jira] [Updated] (HADOOP-15158) AliyunOSS: Supports role based credential

2018-01-05 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15158:
---
Fix Version/s: (was: 3.0.1)
   (was: 2.9.1)

> AliyunOSS: Supports role based credential
> -
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15158.001.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via configuration 
> (core-site.xml). Sometimes an admin wants to create different temporary 
> credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So, our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get the user info (role) from the URI.
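The idea of deriving the role from the filesystem URI can be sketched as below. This is an illustrative stub only: the class name and the convention of encoding the role in the URI's user-info part (e.g. oss://role-a@my-bucket/path) are hypothetical, not the actual AliyunOSS classes.

```java
import java.net.URI;

// Sketch: pass the filesystem URI into the credentials provider so
// per-role temporary credentials can be resolved from it.
class RoleAwareCredentialsProvider {
    private final String role;

    RoleAwareCredentialsProvider(URI fsUri) {
        // e.g. oss://role-a@my-bucket/path -> user info "role-a"
        this.role = fsUri.getUserInfo();
    }

    String resolveRole() {
        // Fall back to a default role when the URI carries no user info.
        return role == null ? "default" : role;
    }
}
```

A real provider would then exchange the resolved role for a temporary key/secret/token instead of reading a single static credential from core-site.xml.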


