[jira] [Updated] (HIVE-11443) remove HiveServer1 C++ client library
[ https://issues.apache.org/jira/browse/HIVE-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-11443:
---
    Assignee: Abdelrahman Shettia

remove HiveServer1 C++ client library
-------------------------------------
Key: HIVE-11443
URL: https://issues.apache.org/jira/browse/HIVE-11443
Project: Hive
Issue Type: Bug
Components: ODBC
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
Labels: newbie, newdev

HiveServer1 was removed as part of HIVE-6977. There is still C++ Hive client code, used by the old ODBC driver, that works against HiveServer1. We should remove that unusable code from the code base; it is the whole odbc directory. The top-level Maven pom.xml entries that reference it are also candidates for removal.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560912#comment-14560912 ]

Abdelrahman Shettia commented on HIVE-10528:

The build has some failures:
{code}
2015-05-27 04:38:59,538 ERROR PTest.run:180 Test run exited with an unexpected error
org.apache.hive.ptest.execution.TestsFailedException: 55 tests failed
	at org.apache.hive.ptest.execution.PTest.run(PTest.java:177)
	at org.apache.hive.ptest.api.server.TestExecutor.run(TestExecutor.java:120)
{code}
I am not sure whether this is related to the code change in the patch. [~vgumashta], can you please confirm? I was able to get a successful local build and ran through the test cases without issues. I am attaching a file called 'REPRO-10528.txt' with the testing outcome. The patch fixed the issue, and auth_to_local rules are now applied.

Thanks,
Rahman

Hiveserver2 in HTTP mode is not applying auth_to_local rules
------------------------------------------------------------
Key: HIVE-10528
URL: https://issues.apache.org/jira/browse/HIVE-10528
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 1.0.0, 1.2.0, 1.1.0, 1.3.0
Environment: Centos 6
Reporter: Abdelrahman Shettia
Assignee: Abdelrahman Shettia
Attachments: HIVE-10528.1.patch, HIVE-10528.1.patch, HIVE-10528.2.patch, HIVE-10528.3.patch

PROBLEM: When authenticating to HS2 in HTTP mode with Kerberos, auth_to_local mappings are not applied. Because of this, the various permission checks that rely on the local name of a user will fail.

STEPS TO REPRODUCE:
1. Create a Kerberos cluster and run HS2 in HTTP mode.
2. Create a new user, test, along with a Kerberos principal for this user.
3. Create a separate principal, mapped-test.
4. Create an auth_to_local rule to make sure that mapped-test is mapped to test.
5. As the test user, connect to HS2 with beeline and create a simple table:
{code}
CREATE TABLE permtest (field1 int);
{code}
There is no need to load anything into this table.
6. Establish that it works as the test user:
{code}
show create table permtest;
{code}
7. Drop the test identity and become mapped-test.
8. Re-connect to HS2 with beeline and re-run the above command:
{code}
show create table permtest;
{code}
You will find that when this is done in HTTP mode, you get an HDFS error (because StorageBasedAuthorization does an HDFS permissions check), and the user is mapped-test, NOT test as it should be.

ANALYSIS: This appears to be HTTP-specific, and the problem seems to come from {{ThriftHttpServlet$HttpKerberosServerAction.getPrincipalWithoutRealmAndHost()}}:
{code}
try {
  fullKerberosName = ShimLoader.getHadoopShims().getKerberosNameShim(fullPrincipal);
} catch (IOException e) {
  throw new HttpAuthenticationException(e);
}
return fullKerberosName.getServiceName();
{code}
getServiceName() applies no auth_to_local rules. It seems this should be getShortName() instead.
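The difference between the two lookups can be sketched with a small, self-contained example. This is a simplified illustration using only stdlib regex, not Hadoop's actual KerberosName implementation; the class and method names below are hypothetical stand-ins showing why a getServiceName()-style lookup leaves the user as mapped-test while a rule-applying getShortName()-style lookup yields test.

```java
import java.util.regex.Pattern;

// Simplified sketch (hypothetical, not Hadoop's KerberosName) of why
// getServiceName() and getShortName() differ for a mapped principal.
public class AuthToLocalSketch {

    // getServiceName()-style result: just the first component of the
    // principal, with realm/host stripped and no rules consulted.
    static String serviceName(String principal) {
        return principal.split("[/@]")[0];
    }

    // getShortName()-style result: apply one auth_to_local-style rule
    // (match regex, then a sed-like substitution), falling back to the
    // bare first component when the rule does not match.
    static String shortName(String principal, String matchRegex,
                            String sedPattern, String replacement) {
        if (Pattern.matches(matchRegex, principal)) {
            return principal.replaceFirst(sedPattern, replacement);
        }
        return serviceName(principal);
    }

    public static void main(String[] args) {
        String principal = "mapped-test@EXAMPLE.COM";
        // Without rules the local user stays mapped-test...
        System.out.println(serviceName(principal));
        // ...with the rule applied it becomes test, as the HDFS
        // permission check expects.
        System.out.println(shortName(principal,
                "mapped-test@EXAMPLE\\.COM", ".*", "test"));
    }
}
```

Under this sketch, the reported bug corresponds to calling serviceName() where shortName() was intended.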
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: REPRO-10528.txt
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: HIVE-10528.3.patch
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: HIVE-10528.2.patch
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: HIVE-10528.1.patch
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: HIVE-10528.1.patch
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: (was: HIVE-10528.1.patch)
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Attachment: HIVE-10528.1.patch
[jira] [Commented] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14526004#comment-14526004 ]

Abdelrahman Shettia commented on HIVE-10528:

The patch is uploaded; I will wait for the test results.

Thanks,
Rahman
[jira] [Updated] (HIVE-10121) Implement a hive --service udflint command to check UDF jars for common shading mistakes
[ https://issues.apache.org/jira/browse/HIVE-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10121:
---
    Attachment: HIVE-10121.2.patch

Fixing minor formatting.

Implement a hive --service udflint command to check UDF jars for common shading mistakes
----------------------------------------------------------------------------------------
Key: HIVE-10121
URL: https://issues.apache.org/jira/browse/HIVE-10121
Project: Hive
Issue Type: New Feature
Components: UDF
Reporter: Gopal V
Assignee: Abdelrahman Shettia
Fix For: 1.2.0
Attachments: HIVE-10121.1.patch, HIVE-10121.2.patch, bad_udfs.out, bad_udfs_verbose.out, good_udfs.out, good_udfs_verbose.out

Several SerDe and UDF jars shade in various parts of their dependencies, including hadoop-common or guava, without relocation. Implement a simple udflint tool that automates part of the classpath and shaded-resources audit required when upgrading a Hive install from an old version to a new one.
[jira] [Updated] (HIVE-10528) Hiveserver2 in HTTP mode is not applying auth_to_local rules
[ https://issues.apache.org/jira/browse/HIVE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10528:
---
    Assignee: Abdelrahman Shettia
[jira] [Updated] (HIVE-10121) Implement a hive --service udflint command to check UDF jars for common shading mistakes
[ https://issues.apache.org/jira/browse/HIVE-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-10121:
---
    Attachment: HIVE-10121.1.patch
[jira] [Commented] (HIVE-10121) Implement a hive --service udflint command to check UDF jars for common shading mistakes
[ https://issues.apache.org/jira/browse/HIVE-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498076#comment-14498076 ]

Abdelrahman Shettia commented on HIVE-10121:

Hi Gopal, I am attaching the output files for the following use cases: bad_udfs.out, bad_udfs_verbose.out, good_udfs.out, good_udfs_verbose.out.

Usage:

Normal mode:
{code}
$ hive --service UDFLint -file /tmp/hive_udf-1.0.0.jar
{code}
Verbose mode:
{code}
$ hive --service UDFLint -jar yi/hive-json-serde-0.3.jar -v
{code}
Without any options:
{code}
[root@sandbox test]# hive --service UDFLint
usage: udflint
 -h,--help                     print help message
    --hiveconf property=value  Use value for given property
    --jar arg                  Comma separated list of jars to validate
 -v,--verbose                  Verbose mode (Run the tool in debug mode)
{code}
Please let me know if you have questions.

Thanks,
Rahman
[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
[ https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9182:
---
    Attachment: (was: HIVE-9182.1.patch)

avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
-----------------------------------------------------------------------------
Key: HIVE-9182
URL: https://issues.apache.org/jira/browse/HIVE-9182
Project: Hive
Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
Fix For: 1.2.0

File systems such as s3 and wasb (Azure) do not implement the Hadoop FileSystem ACL functionality. Hadoop23Shims has code that calls getAclStatus on file systems. Instead of calling getAclStatus and catching the exception, we can check FsPermission#getAclBit first. Additionally, instead of catching and ignoring all exceptions from getAclStatus calls, it is better to catch only UnsupportedOperationException.
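The suggested change can be sketched with a self-contained example. This is a hedged illustration, not the actual Hadoop23Shims code: FileStatus and FileSystem below are minimal hypothetical stand-ins for the Hadoop types, and hasAcl() plays the role of the FsPermission#getAclBit check.

```java
import java.util.Collections;
import java.util.List;

public class AclCheckSketch {

    // Minimal stand-in for a cached file status; hasAcl() plays the
    // role of the FsPermission#getAclBit check (no RPC involved).
    static class FileStatus {
        private final boolean aclBit;
        FileStatus(boolean aclBit) { this.aclBit = aclBit; }
        boolean hasAcl() { return aclBit; }
    }

    // Stand-in for the getAclStatus RPC, which file systems such as
    // s3 or wasb do not implement.
    interface FileSystem {
        List<String> getAclStatus(String path);
    }

    // Only issue the RPC when the ACL bit says there is something to
    // fetch, and swallow only UnsupportedOperationException rather
    // than every exception.
    static List<String> aclEntries(FileSystem fs, FileStatus stat, String path) {
        if (!stat.hasAcl()) {
            return Collections.emptyList(); // cheap local check, no RPC
        }
        try {
            return fs.getAclStatus(path);
        } catch (UnsupportedOperationException e) {
            return Collections.emptyList(); // fs has no ACL support
        }
    }

    public static void main(String[] args) {
        FileSystem noAclFs = path -> {
            throw new UnsupportedOperationException("ACLs not supported");
        };
        // ACL bit unset: no RPC is ever made.
        System.out.println(aclEntries(noAclFs, new FileStatus(false), "/warehouse/t"));
        // ACL bit set but fs lacks support: only the specific exception is ignored.
        System.out.println(aclEntries(noAclFs, new FileStatus(true), "/warehouse/t"));
    }
}
```

The point of the design is that an unrelated failure (say, a network error) would still propagate instead of being silently swallowed along with the "unsupported" case.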
[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
[ https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9182:
---
    Attachment: HIVE-9182.3.patch
[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
[ https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490568#comment-14490568 ]

Abdelrahman Shettia commented on HIVE-9182:
---
Thanks, Chris, for your comment. I appreciate the feedback.

Thanks,
Rahman
[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
[ https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9779:
---
    Attachment: (was: 9979.002.patch)

ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
-----------------------------------------------------------------------------
Key: HIVE-9779
URL: https://issues.apache.org/jira/browse/HIVE-9779
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 0.13.0, 0.14.0
Reporter: Abdelrahman Shettia
Assignee: Abdelrahman Shettia
Attachments: 9979.001.patch, HIVE-9779-testing.xlsx

When doAs=false, ATSHook should log the end user's name in ATS instead of logging the HiveServer2 user's name. As things stand, it is not possible for an admin to identify which query is being run by which user. The end-user information is already available in the HookContext.
[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
[ https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9182:
--
Attachment: HIVE-9182.1.patch
[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
[ https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344189#comment-14344189 ]

Abdelrahman Shettia commented on HIVE-9182:
---
Hi [~thejas], I have uploaded the patch file HIVE-9182.1.patch, which uses the recommended FsPermission#getAclBit check.
Thanks -Rahman
[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
[ https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: HIVE-9779-testing.xlsx
[jira] [Commented] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
[ https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344027#comment-14344027 ]

Abdelrahman Shettia commented on HIVE-9779:
---
I have uploaded an Excel sheet called HIVE-9779-testing.xlsx with the details of the test cases.
Thanks -Rahman
[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
[ https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: (was: 9979.001.patch)
[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
[ https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: 9979.002.patch