[jira] [Commented] (HIVE-18090) acid heartbeat fails when metastore is connected via hadoop credential

2017-11-21 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261997#comment-16261997
 ] 

anishek commented on HIVE-18090:


Thanks for the review [~ekoifman]. Fixed the typos in "Description". Going to 
take a quick look at the test failures before I commit; can't access the Apache 
logs.

> acid heartbeat fails when metastore is connected via hadoop credential
> --
>
> Key: HIVE-18090
> URL: https://issues.apache.org/jira/browse/HIVE-18090
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Transactions
>Affects Versions: 1.3.0, 2.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-18090.0.patch
>
>
> Steps to recreate the issue, assuming two users:
> * test
> * another
> Create two jceks files, one per user, and place them on HDFS so that each 
> file is accessible only to its owner. HDFS locations with permissions:
> {code}
> -rwx------   1 another another   492 2017-11-16 13:06 /user/another/another.jceks
> -rwx------   1 test    test      489 2017-11-16 13:05 /user/test/test.jceks
> {code}
> Passwords used to create the files:
> * /user/another/another.jceks -- another
> * /user/test/test.jceks -- test
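> A hedged sketch of one way such a provider file could be populated via 
> Hadoop's CredentialProvider API (the hadoop credential CLI does the 
> equivalent); the alias name and stored value are assumptions here, matching 
> the metastore password property removed from hive-site.xml below:
> {code}
> import java.util.List;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.security.alias.CredentialProvider;
> import org.apache.hadoop.security.alias.CredentialProviderFactory;
>
> // Sketch only: writes a single credential entry into a per-user jceks file on HDFS.
> public class CreateJceksSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
>         "jceks://hdfs/user/test/test.jceks");
>     // resolve the provider configured above (the jceks file itself)
>     List<CredentialProvider> providers = CredentialProviderFactory.getProviders(conf);
>     CredentialProvider provider = providers.get(0);
>     // alias and value are placeholders/assumptions for this sketch
>     provider.createCredentialEntry("javax.jdo.option.ConnectionPassword",
>         "metastore-db-password".toCharArray());
>     provider.flush(); // persists the keystore to HDFS
>   }
> }
> {code}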
> In core-site.xml:
> {code}
> <property>
>   <name>hadoop.proxyuser.[superuser].hosts</name>
>   <value>*</value>
> </property>
> <property>
>   <name>hadoop.proxyuser.[superuser].groups</name>
>   <value>*</value>
> </property>
> {code}
> Then restart HDFS.
> Enable ACID on HS2 (change the required properties). Additional changes to 
> the HiveServer2 configs:
> {code}
> * hive.metastore.warehouse.dir=file:///tmp/hive/test-warehouse
> * hive.server2.enable.doAs=true
> * remove javax.jdo.option.ConnectionPassword property from hive-site.xml
> {code}
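> With the clear-text property removed, the metastore password is expected to 
> be resolved through the per-session credential provider. A minimal sketch of 
> the Hadoop-side lookup this relies on (not Hive's exact code path; the alias 
> name is an assumption):
> {code}
> import org.apache.hadoop.conf.Configuration;
>
> // Sketch of Configuration.getPassword(): it consults the providers listed in
> // hadoop.security.credential.provider.path first, and only then falls back to
> // any clear-text value present in the configuration.
> public class PasswordLookupSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // per-session provider path, as later passed on the beeline JDBC URL
>     conf.set("hadoop.security.credential.provider.path",
>         "jceks://hdfs/user/test/test.jceks");
>     char[] pwd = conf.getPassword("javax.jdo.option.ConnectionPassword");
>     System.out.println(pwd == null ? "no credential found" : "credential resolved");
>   }
> }
> {code}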
> Start HiveServer2.
> Connect to the server using beeline as any user:
> {code}
> create table a (i int, b string);
> insert into a values (0 , '0'), (1 , '1'), (2 , '2'), (3 , '3'), (4 , '4'), 
> (5 , '5'), (6 , '6'), (7 , '7'), (8 , '8'), (9 , '9'), (10 , '10'), (11 , 
> '11'), (12 , '12'), (13 , '13'), (14 , '14'), (15 , '15'), (16 , '16'), (17 , 
> '17'), (18 , '18'), (19 , '19'), (20 , '20'), (21 , '21'), (22 , '22'), (23 , 
> '23'), (24 , '24'), (25 , '25'), (26 , '26'), (27 , '27'), (28 , '28'), (29 , 
> '29'), (30 , '30'), (31 , '31'), (32 , '32'), (33 , '33'), (34 , '34'), (35 , 
> '35'), (36 , '36'), (37 , '37'), (38 , '38'), (39 , '39'), (40 , '40'), (41 , 
> '41'), (42 , '42'), (43 , '43'), (44 , '44'), (45 , '45'), (46 , '46'), (47 , 
> '47'), (48 , '48'), (49 , '49'), (50 , '50'), (51 , '51'), (52 , '52'), (53 , 
> '53'), (54 , '54'), (55 , '55'), (56 , '56'), (57 , '57'), (58 , '58'), (59 , 
> '59'), (60 , '60'), (61 , '61'), (62 , '62'), (63 , '63'), (64 , '64'), (65 , 
> '65'), (66 , '66'), (67 , '67'), (68 , '68'), (69 , '69'), (70 , '70'), (71 , 
> '71'), (72 , '72'), (73 , '73'), (74 , '74'), (75 , '75'), (76 , '76'), (77 , 
> '77'), (78 , '78'), (79 , '79'), (80 , '80'), (81 , '81'), (82 , '82'), (83 , 
> '83'), (84 , '84'), (85 , '85'), (86 , '86'), (87 , '87'), (88 , '88'), (89 , 
> '89'), (90 , '90'), (91 , '91'), (92 , '92'), (93 , '93'), (94 , '94'), (95 , 
> '95'), (96 , '96'), (97 , '97'), (98 , '98'), (99 , '99');
> {code}
> Exit beeline and connect as user another:
> {code}
> ./beeline -u 
> "jdbc:hive2://localhost:1/default?hive.strict.checks.cartesian.product=false;hive.txn.timeout=4s;hive.txn.heartbeat.threadpool.size=1;hadoop.security.credential.provider.path=jceks://hdfs/user/another/another.jceks;ssl.server.keystore.keypassword=another"
>  -n another
> create table another_a_acid (i int, b string) clustered by (i) into 8 buckets 
> stored as orc tblproperties('transactional'='true');
> insert overwrite table another_a_acid select a2.i, a3.b from a a1 join a a2 
> join a a3 on 1=1;
> {code}
> Open another beeline session as user test:
> {code}
> ./beeline -u 
> "jdbc:hive2://localhost:1/default?hive.strict.checks.cartesian.product=false;hive.txn.timeout=4s;hive.txn.heartbeat.threadpool.size=1;hadoop.security.credential.provider.path=jceks://hdfs/user/test/test.jceks;ssl.server.keystore.keypassword=test"
>  -n test
> create table a_acid (i int, b string) clustered by (i) into 8 buckets stored 
> as orc tblproperties('transactional'='true');
> insert overwrite table a_acid select a2.i, a3.b from a a1 join a a2 join a a3 
> on 1=1;
> {code}
> This fails with the following exception:
> {code}
> 2017-11-17T12:15:52,664 DEBUG [Heartbeater-1] retry.RetryInvocationHandler: 
> Exception while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over 
> null. Not retrying because try once and fail.
> org.apache.hadoop.ipc.RemoteException: Permission denied: user=test, 
> access=EXECUTE, inode="/user/another/another.jceks":another:another:drwx------
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>   at 

[jira] [Commented] (HIVE-18090) acid heartbeat fails when metastore is connected via hadoop credential

2017-11-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260895#comment-16260895
 ] 

Eugene Koifman commented on HIVE-18090:
---

HIVE-12366 is where the shared thread pool for heartbeats was introduced.


[jira] [Commented] (HIVE-18090) acid heartbeat fails when metastore is connected via hadoop credential

2017-11-17 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257735#comment-16257735
 ] 

Eugene Koifman commented on HIVE-18090:
---

you have 
{noformat}
tblproperties('transactional'='tur');
{noformat}
I assume this is just a typo?

Also, _above will only work if the insert overwrite query takes longer than 
hive.txn.timeout / 2 = 4 / 2 = 2 seconds_ - I assume "work" means "reproduce"?

The thread pool for heartbeating, DbTxnManager.heartbeatExecutorService, is 
static, so if you are able to create 2 sessions with different users, the pool 
should still have the thread alive with User1's context when User2 issues a 
request, I think.
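For illustration, a minimal sketch (not the actual DbTxnManager code) of that 
failure mode: a static, JVM-wide heartbeat pool is built once, so per-session 
state captured at that point is reused for heartbeats sent on behalf of later 
sessions.
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: mimics a static heartbeat pool shared by every session in
// the HS2 JVM, similar in spirit to DbTxnManager.heartbeatExecutorService.
public class SharedHeartbeatPoolSketch {
  private static ScheduledExecutorService pool;     // one pool per JVM
  private static String capturedProviderPath;       // state captured at init time

  static synchronized void init(String providerPath, int poolSize) {
    if (pool == null) {                              // only the first session initializes
      pool = Executors.newScheduledThreadPool(poolSize);
      capturedProviderPath = providerPath;
    }
  }

  static void scheduleHeartbeat(String user) {
    // heartbeats for later users still see the first session's provider path
    pool.scheduleAtFixedRate(
        () -> System.out.println("heartbeat for " + user
            + " via " + capturedProviderPath),
        0, 2, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws InterruptedException {
    init("jceks://hdfs/user/another/another.jceks", 1); // session of user 'another'
    init("jceks://hdfs/user/test/test.jceks", 1);       // no-op: pool already exists
    scheduleHeartbeat("test");                          // runs with the 'another' path
    TimeUnit.SECONDS.sleep(5);
    pool.shutdownNow();
  }
}
{code}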

otherwise the patch LGTM
+1


[jira] [Commented] (HIVE-18090) acid heartbeat fails when metastore is connected via hadoop credential

2017-11-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257364#comment-16257364
 ] 

Hive QA commented on HIVE-18090:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12898154/HIVE-18090.0.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11383 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=102)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema[2]
 (batchId=187)
org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteChar 
(batchId=187)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7888/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7888/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7888/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12898154 - PreCommit-HIVE-Build
