We are now facing another problem setting custom parameters. How do we
set these parameters in beeline at runtime? These are our custom
parameters:

SET airflow_cluster=${env:CLUSTER};
SET default_date=unix_timestamp('1970-01-01 00:00:00');
SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
SET default_future_date=unix_timestamp('2099-12-31 00:00:00');

We get errors like the following when we set these parameters:

0: jdbc:hive2://usw2dbdpmn01:10000/> SET default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
Error: Error while processing statement: Cannot modify default_timestamp at
runtime. It is not in list of params that are allowed to be modified at
runtime (state=42000,code=1)
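
Two workarounds we are looking at (sketches only, not yet verified on our
cluster): either append our parameter names to the same whitelist property
that fixed the fs.s3a.* settings below, or define the values in the hivevar:
namespace, which as far as we understand is a session variable and is not
checked against the SQL-standard-auth confwhitelist.

    -- Option A (assumption: the names are added to
    -- hive.security.authorization.sqlstd.confwhitelist.append, e.g.
    -- |airflow_cluster|default_date|default_timestamp|default_future_date,
    -- and HiveServer2 is restarted); the existing scripts stay unchanged:
    SET airflow_cluster=${env:CLUSTER};
    SET default_date=unix_timestamp('1970-01-01 00:00:00');

    -- Option B (assumption: hive.variable.substitute is enabled, the default):
    -- keep the values as Hive variables and reference them via ${hivevar:...}.
    SET hivevar:default_date=unix_timestamp('1970-01-01 00:00:00');
    SET hivevar:default_timestamp=CAST('1970-01-01 00:00:00' AS TIMESTAMP);
    -- some_table / created_ts are hypothetical, for illustration only:
    SELECT * FROM some_table WHERE created_ts >= ${hivevar:default_timestamp};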


Thanks
Anand


On Mon, Dec 19, 2016 at 5:34 PM, Anandha L Ranganathan <
analog.s...@gmail.com> wrote:

> Cool. After adding the configuration, it is working fine.
>
> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
> +------------------------------------------------------------------------------------+--+
> |                                         set                                        |
> +------------------------------------------------------------------------------------+--+
> | hive.security.authorization.sqlstd.confwhitelist.append=|fs\.s3a\..*|fs\.s3n\..*   |
> +------------------------------------------------------------------------------------+--+
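>
> With that append value in place and HiveServer2 restarted, the runtime sets
> that failed earlier are accepted again, for example (a sketch with
> placeholder values, not pasted from our session):
>
>     -- the access/secret key values below are placeholders
>     SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
>     SET fs.s3a.access.key=xxxxxxx;
>     SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;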
>
>
> Thanks Selva for the quick help.
>
>
>
> On Mon, Dec 19, 2016 at 5:29 PM, Selvamohan Neethiraj <sneet...@apache.org
> > wrote:
>
>> Hi,
>>
>> Can you try appending the following string to the existing value of
>> hive.security.authorization.sqlstd.confwhitelist:
>>
>> |fs\.s3a\..*
>>
>> and restart HiveServer2 to see if that fixes the issue?
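>>
>> On an Ambari/HDP setup this is typically applied as a HiveServer2 config
>> property and pushed out with a restart; a sketch of the equivalent
>> hive-site.xml / hiveserver2-site.xml entry (the exact file depends on your
>> setup):
>>
>>     <property>
>>       <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
>>       <value>|fs\.s3a\..*</value>
>>     </property>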
>>
>> Thanks,
>> Selva-
>> From: Anandha L Ranganathan <analog.s...@gmail.com>
>> Reply-To: "user@ranger.incubator.apache.org" <
>> user@ranger.incubator.apache.org>
>> Date: Monday, December 19, 2016 at 6:27 PM
>>
>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
>> Subject: Re: Unable to connect to S3 after enabling Ranger with Hive
>>
>> Selva,
>>
>> Please find the results.
>>
>> set hive.security.authorization.sqlstd.confwhitelist;
>>
>> hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*|hive\.cbo\..*
>>   |hive\.convert\..*|hive\.exec\.dynamic\.partition.*
>>   |hive\.exec\..*\.dynamic\.partitions\..*|hive\.exec\.compress\..*
>>   |hive\.exec\.infer\..*|hive\.exec\.mode.local\..*|hive\.exec\.orc\..*
>>   |hive\.exec\.parallel.*|hive\.explain\..*|hive\.fetch.task\..*
>>   |hive\.groupby\..*|hive\.hbase\..*|hive\.index\..*|hive\.index\..*
>>   |hive\.intermediate\..*|hive\.join\..*|hive\.limit\..*|hive\.log\..*
>>   |hive\.mapjoin\..*|hive\.merge\..*|hive\.optimize\..*|hive\.orc\..*
>>   |hive\.outerjoin\..*|hive\.parquet\..*|hive\.ppd\..*|hive\.prewarm\..*
>>   |hive\.server2\.proxy\.user|hive\.skewjoin\..*|hive\.smbjoin\..*
>>   |hive\.stats\..*|hive\.tez\..*|hive\.vectorized\..*|mapred\.map\..*
>>   |mapred\.reduce\..*|mapred\.output\.compression\.codec
>>   |mapred\.job\.queuename|mapred\.output\.compression\.type
>>   |mapred\.min\.split\.size|mapreduce\.job\.reduce\.slowstart\.completedmaps
>>   |mapreduce\.job\.queuename|mapreduce\.job\.tags
>>   |mapreduce\.input\.fileinputformat\.split\.minsize|mapreduce\.map\..*
>>   |mapreduce\.reduce\..*|mapreduce\.output\.fileoutputformat\.compress\.codec
>>   |mapreduce\.output\.fileoutputformat\.compress\.type|tez\.am\..*
>>   |tez\.task\..*|tez\.runtime\..*|tez.queue.name
>>   |hive\.exec\.reducers\.bytes\.per\.reducer|hive\.client\.stats\.counters
>>   |hive\.exec\.default\.partition\.name|hive\.exec\.drop\.ignorenonexistent
>>   |hive\.counters\.group\.name|hive\.default\.fileformat\.managed
>>   |hive\.enforce\.bucketing|hive\.enforce\.bucketmapjoin|hive\.enforce\.sorting
>>   |hive\.enforce\.sortmergebucketmapjoin|hive\.cache\.expr\.evaluation
>>   |hive\.hashtable\.loadfactor|hive\.hashtable\.initialCapacity
>>   |hive\.ignore\.mapjoin\.hint|hive\.limit\.row\.max\.size|hive\.mapred\.mode
>>   |hive\.map\.aggr|hive\.compute\.query\.using\.stats|hive\.exec\.rowoffset
>>   |hive\.variable\.substitute|hive\.variable\.substitute\.depth
>>   |hive\.autogen\.columnalias\.prefix\.includefuncname
>>   |hive\.autogen\.columnalias\.prefix\.label|hive\.exec\.check\.crossproducts
>>   |hive\.compat|hive\.exec\.concatenate\.check\.index
>>   |hive\.display\.partition\.cols\.separately|hive\.error\.on\.empty\.partition
>>   |hive\.execution\.engine|hive\.exim\.uri\.scheme\.whitelist
>>   |hive\.file\.max\.footer|hive\.mapred\.supports\.subdirectories
>>   |hive\.insert\.into\.multilevel\.dirs
>>   |hive\.localize\.resource\.num\.wait\.attempts
>>   |hive\.multi\.insert\.move\.tasks\.share\.dependencies
>>   |hive\.support\.quoted\.identifiers|hive\.resultset\.use\.unique\.column\.names
>>   |hive\.analyze\.stmt\.collect\.partlevel\.stats
>>   |hive\.server2\.logging\.operation\.level|hive\.support\.sql11\.reserved\.keywords
>>   |hive\.exec\.job\.debug\.capture\.stacktraces|hive\.exec\.job\.debug\.timeout
>>   |hive\.exec\.max\.created\.files|hive\.exec\.reducers\.max
>>   |hive\.reorder\.nway\.joins|hive\.output\.file\.extension
>>   |hive\.exec\.show\.job\.failure\.debug\.info|hive\.exec\.tasklog\.debug\.timeout
>>
>>
>>
>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.authorization.sqlstd.confwhitelist.append;
>> +------------------------------------------------------------------------+--+
>> |                                  set                                   |
>> +------------------------------------------------------------------------+--+
>> | hive.security.authorization.sqlstd.confwhitelist.append is undefined   |
>> +------------------------------------------------------------------------+--+
>>
>>
>> On Mon, Dec 19, 2016 at 3:12 PM, Selvamohan Neethiraj <
>> sneet...@apache.org> wrote:
>>
>>> Hi,
>>>
>>> Can you also post here the value for the following two parameters:
>>>
>>> hive.security.authorization.sqlstd.confwhitelist
>>>
>>> hive.security.authorization.sqlstd.confwhitelist.append
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Selva-
>>>
>>> From: Anandha L Ranganathan <analog.s...@gmail.com>
>>> Reply-To: "user@ranger.incubator.apache.org" <
>>> user@ranger.incubator.apache.org>
>>> Date: Monday, December 19, 2016 at 5:54 PM
>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org
>>> >
>>> Subject: Re: Unable to connect to S3 after enabling Ranger with Hive
>>>
>>> Selva,
>>>
>>> We are using HDP and here are versions and results.
>>>
>>> Hive :  1.2.1.2.4
>>> Ranger: 0.5.0.2.4
>>>
>>>
>>>
>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.conf.restricted.list;
>>> +----------------------------------------------------------------------------------------------------------------------------------------+--+
>>> |                                                                  set                                                                   |
>>> +----------------------------------------------------------------------------------------------------------------------------------------+--+
>>> | hive.conf.restricted.list=hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager  |
>>> +----------------------------------------------------------------------------------------------------------------------------------------+--+
>>> 1 row selected (0.006 seconds)
>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set hive.security.command.whitelist;
>>> +--------------------------------------------------------------------------------+--+
>>> |                                      set                                       |
>>> +--------------------------------------------------------------------------------+--+
>>> | hive.security.command.whitelist=set,reset,dfs,add,list,delete,reload,compile  |
>>> +--------------------------------------------------------------------------------+--+
>>> 1 row selected (0.008 seconds)
>>>
>>>
>>>
>>>
>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key=xxxxxxxxxxxxxxx;
>>> Error: Error while processing statement: Cannot modify fs.s3a.access.key
>>> at runtime. It is not in list of params that are allowed to be modified at
>>> runtime (state=42000,code=1)
>>>
>>> On Mon, Dec 19, 2016 at 2:47 PM, Selvamohan Neethiraj <
>>> sneet...@apache.org> wrote:
>>>
>>>> Hi,
>>>>
>>>> Which versions of Hive and Ranger are you using? Can you check whether
>>>> Ranger has added the HiveServer2 parameters hive.conf.restricted.list and
>>>> hive.security.command.whitelist to the Hive configuration file(s)?
>>>> Can you please list those parameter values here?
>>>>
>>>> Thanks,
>>>> Selva-
>>>>
>>>> From: Anandha L Ranganathan <analog.s...@gmail.com>
>>>> Reply-To: "user@ranger.incubator.apache.org" <
>>>> user@ranger.incubator.apache.org>
>>>> Date: Monday, December 19, 2016 at 5:30 PM
>>>> To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.
>>>> org>
>>>> Subject: Unable to connect to S3 after enabling Ranger with Hive
>>>>
>>>> Hi,
>>>>
>>>>
>>>> We are unable to create a database pointing to S3 after enabling Ranger.
>>>>
>>>> This is the database we created before enabling Ranger:
>>>>
>>>>
>>>> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
>>>> SET fs.s3a.access.key=xxxxxxx;
>>>> SET fs.s3a.secret.key=yyyyyyyyyyyyyyy;
>>>>
>>>> CREATE DATABASE IF NOT EXISTS backup_s3a1
>>>> COMMENT "s3a schema test"
>>>> LOCATION "s3a://gd-de-dp-db-hcat-backup-schema/";
>>>>
>>>> After Ranger was enabled, we tried to create another database, but it
>>>> throws an error:
>>>>
>>>>
>>>> 0: jdbc:hive2://usw2dxdpmn01.local:> SET fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem;
>>>> Error: Error while processing statement: Cannot modify fs.s3a.impl at runtime.
>>>> It is not in list of params that are allowed to be modified at runtime
>>>> (state=42000,code=1)
>>>>
>>>>
>>>>
>>>> I configured the credentials in core-site.xml, but the values always come
>>>> back as "undefined" when I inspect them with the commands below. This is in
>>>> our "dev" environment where Ranger is enabled; in other environments where
>>>> Ranger is not installed, we do not face this problem.
>>>>
>>>>
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.impl;
>>>> +-----------------------------------------------------+--+
>>>> |                         set                         |
>>>> +-----------------------------------------------------+--+
>>>> | fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  |
>>>> +-----------------------------------------------------+--+
>>>> 1 row selected (0.006 seconds)
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.access.key;
>>>> +---------------------------------+--+
>>>> |               set               |
>>>> +---------------------------------+--+
>>>> | fs.s3a.access.key is undefined  |
>>>> +---------------------------------+--+
>>>> 1 row selected (0.005 seconds)
>>>> 0: jdbc:hive2://usw2dxdpmn01:10010> set fs.s3a.secret.key;
>>>> +---------------------------------+--+
>>>> |               set               |
>>>> +---------------------------------+--+
>>>> | fs.s3a.secret.key is undefined  |
>>>> +---------------------------------+--+
>>>> 1 row selected (0.005 seconds)
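>>>>
>>>> For reference, the core-site.xml entries we added look roughly like this
>>>> (keys redacted; a sketch assuming the standard s3a property names):
>>>>
>>>>     <property>
>>>>       <name>fs.s3a.access.key</name>
>>>>       <value>xxxxxxx</value>
>>>>     </property>
>>>>     <property>
>>>>       <name>fs.s3a.secret.key</name>
>>>>       <value>yyyyyyyyyyyyyyy</value>
>>>>     </property>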
>>>>
>>>>
>>>> Any help or pointers would be appreciated.
>>>>
>>>>
>>>
>>
>
