[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-05-19 Thread Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16017265#comment-16017265 ]

Steve Loughran commented on HIVE-16446:
---

# try with s3a URLs and the fs.s3a secret and access keys
# do not put secrets in your URIs; it's a security leak waiting to be 
discovered. That's why you get told off about it. See HADOOP-3733
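
A minimal sketch of that first suggestion, with placeholder credentials, 
bucket, and path (none of these values come from this issue), and with the 
caveat from the later comments that per-session {{SET}} of these properties 
may not take effect on every CDH build:

{code:sql}
-- Switch from s3n to s3a and use the s3a credential properties.
-- YOUR_ACCESS_KEY, YOUR_SECRET_KEY, and my-bucket are placeholders.
SET fs.s3a.access.key=YOUR_ACCESS_KEY;
SET fs.s3a.secret.key=YOUR_SECRET_KEY;

ALTER TABLE hive_1k_partitions ADD IF NOT EXISTS
  PARTITION (year='2014', month='2014-01', dt='2014-01-01',
             hours='00', minutes='16', seconds='22')
  LOCATION 's3a://my-bucket/path/to/partition';
{code}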

> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID 
> and Secret Access Key must be specified by setting the fs.s3n.awsAccessKeyId 
> and fs.s3n.awsSecretAccessKey properties
> -
>
> Key: HIVE-16446
> URL: https://issues.apache.org/jira/browse/HIVE-16446
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Kalexin Baoerjiin
>Assignee: Vihang Karajgaonkar
>
> After upgrading our Cloudera cluster to CDH 5.10.1 we are experiencing the 
> following problem during some Hive DDL.
> 
> SET fs.s3n.awsSecretAccessKey=;
> SET fs.s3n.awsAccessKeyId=;
> 
> ALTER TABLE hive_1k_partitions ADD IF NOT EXISTS partition (year='2014', 
> month='2014-01', dt='2014-01-01', hours='00', minutes='16', seconds='22') 
> location 's3n://'
> 
> Stack trace I was able to recover: 
> [ Message content over the limit has been removed. ]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:416)
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:432)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:726)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Job Submission failed with exception ‘java.lang.IllegalArgumentException(AWS 
> Access Key ID and Secret Access Key must be specified by setting the 
> fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties 
> (respectively).)’
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Logging initialized using configuration in 
> jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/hive-common-1.1.0-cdh5.10.1.jar!/hive-log4j.properties
> In the past we did not have to set the S3 key and ID in core-site.xml 
> because we were setting them dynamically inside our Hive DDL scripts.
> After setting the S3 secret key and access key ID in core-site.xml this 
> problem goes away. However, this is an incompatible change from the previous 
> Hive shipped in CDH 5.9.
> The only Hive-related change mentioned in the Cloudera 5.10.x release notes 
> is HIVE-14269 (Enhanced write performance for Hive tables stored on Amazon 
> S3):
> https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_new_in_cdh_510.html
> https://issues.apache.org/jira/browse/HIVE-14269





[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-05-18 Thread Kalexin Baoerjiin (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016735#comment-16016735 ]

Kalexin Baoerjiin commented on HIVE-16446:
--

[~vihangk1] We are using the Hive CLI.

Have you tried including the access key and secret key in the Hive query 
itself, like this: `LOCATION 's3n://AWS_ACCESS_KEY:AWS_SECRET_KEY@`?

Is it possible to set multiple pairs of S3 keys and secrets in core-site.xml?

Thanks for the heads-up about s3a, [~steve_l].
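
On the multiple-credentials question: newer versions of the s3a connector 
(Hadoop 2.8+) support per-bucket configuration, which would allow a distinct 
key pair per bucket; whether a given CDH 5.x build carries that feature would 
need checking. A sketch, with bucket names and values as placeholders:

{code:xml}
<!-- Per-bucket s3a credentials (Hadoop 2.8+). All values below are placeholders. -->
<property>
  <name>fs.s3a.bucket.bucket-one.access.key</name>
  <value>ACCESS_KEY_FOR_BUCKET_ONE</value>
</property>
<property>
  <name>fs.s3a.bucket.bucket-one.secret.key</name>
  <value>SECRET_KEY_FOR_BUCKET_ONE</value>
</property>
<property>
  <name>fs.s3a.bucket.bucket-two.access.key</name>
  <value>ACCESS_KEY_FOR_BUCKET_TWO</value>
</property>
<property>
  <name>fs.s3a.bucket.bucket-two.secret.key</name>
  <value>SECRET_KEY_FOR_BUCKET_TWO</value>
</property>
{code}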






[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-05-08 Thread Vihang Karajgaonkar (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16001686#comment-16001686 ]

Vihang Karajgaonkar commented on HIVE-16446:


Hi [~kalexin], I tried this on CDH 5.8 through 5.10 and could not make it work 
using {{set fs.s3a.secret.key=}} and {{set fs.s3a.access.key=}}. Can you tell 
me the exact CDH version where it works? Are you using Beeline or the Hive 
CLI? As far as I know, you have to add the keys to core-site.xml to make it 
work.
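
For reference, a minimal sketch of what those core-site.xml entries look like 
(the values are placeholders, not from this issue):

{code:xml}
<!-- Cluster-wide s3a credentials in core-site.xml. Values are placeholders. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
{code}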






[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-04-22 Thread Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979869#comment-15979869 ]

Steve Loughran commented on HIVE-16446:
---

You should switch to using s3a:// URLs in anything based on Hadoop 2.7+, which 
the latest CDH versions are.

Set up security as per: 
https://hortonworks.github.io/hdp-aws/s3-security/index.html
Then test on the command line before worrying about Hive: 
https://hortonworks.github.io/hdp-aws/s3-s3aclient/index.html

Setting up core-site.xml and testing via the hadoop fs commands will let you 
get up and running faster.
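
A command-line smoke test of that kind might look like the following (the 
bucket name is a placeholder):

{code}
# Verify the s3a credentials work before involving Hive at all.
# 'my-bucket' is a placeholder bucket name.
hadoop fs -ls s3a://my-bucket/
hadoop fs -mkdir s3a://my-bucket/s3a-smoke-test
hadoop fs -rm -r s3a://my-bucket/s3a-smoke-test
{code}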







[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-04-21 Thread Kalexin Baoerjiin (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979224#comment-15979224 ]

Kalexin Baoerjiin commented on HIVE-16446:
--

[~vihangk1] No, it doesn't work with fs.s3a.secret.key and fs.s3a.access.key.






[jira] [Commented] (HIVE-16446) org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting t

2017-04-13 Thread Vihang Karajgaonkar (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15968672#comment-15968672 ]

Vihang Karajgaonkar commented on HIVE-16446:


I can take a look at this. Does it work if you use fs.s3a.secret.key and 
fs.s3a.access.key?



