[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-09 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Hadoop Flags:   (was: Incompatible change)

> Drop external table at root of s3 bucket throws NPE
> ---
>
> Key: HIVE-25912
> URL: https://issues.apache.org/jira/browse/HIVE-25912
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.1.2, 4.0.0
> Environment: Hive version: 3.1.2
>Reporter: Fachuan Bai
>Assignee: Fachuan Bai
>Priority: Major
>  Labels: metastore, pull-request-available
> Attachments: hive bugs.png, hive-bug-01.png
>
>   Original Estimate: 96h
>  Time Spent: 15h 40m
>  Remaining Estimate: 80h 20m
>
> *new update:* 
> I tested the master branch and hit the same problem.
> --
> ENV:
> Hive 3.1.2
> HDFS:3.3.1
> OpenLDAP and Ranger enabled.
>  
> I created the external Hive table with this command:
>  
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 'hdfs://emr-master-1:8020/';
> {code}
>  
> The table was created successfully, but when I drop it the following NPE is thrown:
>  
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:java.lang.NullPointerException) 
> (state=08S01,code=1){code}
>  
> The same bug can be reproduced on other object storage file systems, such 
> as S3 or TOS:
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 's3a://bucketname/'; -- or 'tos://bucketname/'{code}
>  
> Looking at the source code, I found the following in 
> common/src/java/org/apache/hadoop/hive/common/FileUtils.java:
> {code:java}
> // check if sticky bit is set on the parent dir
> FileStatus parStatus = fs.getFileStatus(path.getParent());
> if (!shims.hasStickyBit(parStatus.getPermission())) {
>   // no sticky bit, so write permission on parent dir is sufficient
>   // no further checks needed
>   return;
> }{code}
>  
> Because I set the table location to the HDFS root path 
> (hdfs://emr-master-1:8020/), path.getParent() returns null, which causes the NPE.
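> For illustration, a minimal standalone snippet (assuming only hadoop-common on 
> the classpath) showing that Path.getParent() is null exactly when the location 
> is a filesystem or bucket root:
> {code:java}
> import org.apache.hadoop.fs.Path;
> 
> public class RootParentDemo {
>   public static void main(String[] args) {
>     // The parent of a root path is null, regardless of scheme.
>     System.out.println(new Path("hdfs://emr-master-1:8020/").getParent()); // null
>     System.out.println(new Path("s3a://bucketname/").getParent());         // null
>     // A non-root location has a usable parent.
>     System.out.println(new Path("s3a://bucketname/warehouse/fcbai").getParent());
>   }
> }{code}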
> I see four possible ways to fix the bug:
>  # Modify the create-table function so that it fails when the location is a 
> root directory.
>  # Modify the FileUtils.checkDeletePermission function to check 
> path.getParent(); if it is null, return so that the drop succeeds.
>  # Modify the RangerHiveAuthorizer.checkPrivileges function of the Hive Ranger 
> plugin (in the Ranger repo) so that creating a table at a root directory fails.
>  # Modify the HDFS Path object so that path.getParent() does not return null 
> when the URI is a root directory.
> I recommend the first or second option; any suggestions? Thanks.
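> For the second option, a rough sketch of the kind of null-parent guard I have 
> in mind (path, fs and shims are the same variables as in the 
> checkDeletePermission snippet above; the rest of the method is unchanged):
> {code:java}
> // Sketch only: skip the sticky-bit check when the location has no parent directory.
> Path parent = path.getParent();
> if (parent == null) {
>   // The table location is a filesystem/bucket root (e.g. s3a://bucketname/),
>   // so there is no parent directory whose sticky bit could block the delete.
>   return;
> }
> // check if sticky bit is set on the parent dir
> FileStatus parStatus = fs.getFileStatus(parent);
> if (!shims.hasStickyBit(parStatus.getPermission())) {
>   // no sticky bit, so write permission on parent dir is sufficient
>   // no further checks needed
>   return;
> }{code}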
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-09 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Target Version/s: 3.1.2, 4.0.0  (was: 3.1.2)



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-09 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Affects Version/s: 4.0.0



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-09 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Description: 
*new update:* 

I tested the master branch and hit the same problem.

--

ENV:

Hive 3.1.2

HDFS:3.3.1

OpenLDAP and Ranger enabled.

 

I created the external Hive table with this command:

 
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
`inv_item_sk` int,
`inv_warehouse_sk` int,
`inv_quantity_on_hand` int)
PARTITIONED BY (
`inv_date_sk` int) STORED AS ORC
LOCATION
'hdfs://emr-master-1:8020/';
{code}
 

The table was created successfully, but when I drop it the following NPE is thrown:

 
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. 
MetaException(message:java.lang.NullPointerException) (state=08S01,code=1){code}
 

The same bug can be reproduced on other object storage file systems, such as 
S3 or TOS:
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
`inv_item_sk` int,
`inv_warehouse_sk` int,
`inv_quantity_on_hand` int)
PARTITIONED BY (
`inv_date_sk` int) STORED AS ORC
LOCATION
's3a://bucketname/'; -- or 'tos://bucketname/'{code}
 

Looking at the source code, I found the following in 
common/src/java/org/apache/hadoop/hive/common/FileUtils.java:
{code:java}
// check if sticky bit is set on the parent dir
FileStatus parStatus = fs.getFileStatus(path.getParent());
if (!shims.hasStickyBit(parStatus.getPermission())) {
  // no sticky bit, so write permission on parent dir is sufficient
  // no further checks needed
  return;
}{code}
 

Because I set the table location to the HDFS root path (hdfs://emr-master-1:8020/), 
path.getParent() returns null, which causes the NPE.

I see four possible ways to fix the bug:
 # Modify the create-table function so that it fails when the location is a root 
directory.
 # Modify the FileUtils.checkDeletePermission function to check path.getParent(); 
if it is null, return so that the drop succeeds.
 # Modify the RangerHiveAuthorizer.checkPrivileges function of the Hive Ranger 
plugin (in the Ranger repo) so that creating a table at a root directory fails.
 # Modify the HDFS Path object so that path.getParent() does not return null when 
the URI is a root directory.

I recommend the first or second option; any suggestions? Thanks.
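For the first option, a hypothetical sketch of the kind of check the create-table 
path could apply (TableLocationValidator and checkNotRoot are illustrative names, 
not existing Hive code):
{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.metastore.api.MetaException;

// Hypothetical helper illustrating option 1: reject a LOCATION that is a
// filesystem or bucket root before the table is created.
final class TableLocationValidator {
  static void checkNotRoot(String location) throws MetaException {
    // getParent() is null only for root paths such as hdfs://host:8020/ or s3a://bucket/.
    if (new Path(location).getParent() == null) {
      throw new MetaException("Table location " + location
          + " is a filesystem root; please use a sub-directory instead");
    }
  }
}{code}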

 

 


[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-08 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Attachment: hive-bug-01.png



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-08 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Description: 
ENV:

Hive 3.1.2

HDFS:3.3.1

OpenLDAP and Ranger enabled.

 

I created the external Hive table with this command:

 
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
`inv_item_sk` int,
`inv_warehouse_sk` int,
`inv_quantity_on_hand` int)
PARTITIONED BY (
`inv_date_sk` int) STORED AS ORC
LOCATION
'hdfs://emr-master-1:8020/';
{code}
 

The table was created successfully, but when I drop it the following NPE is thrown:

 
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. 
MetaException(message:java.lang.NullPointerException) (state=08S01,code=1){code}
 

The same bug can be reproduced on other object storage file systems, such as 
S3 or TOS:
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
`inv_item_sk` int,
`inv_warehouse_sk` int,
`inv_quantity_on_hand` int)
PARTITIONED BY (
`inv_date_sk` int) STORED AS ORC
LOCATION
's3a://bucketname/'; -- or 'tos://bucketname/'{code}
 

Looking at the source code, I found the following in 
common/src/java/org/apache/hadoop/hive/common/FileUtils.java:
{code:java}
// check if sticky bit is set on the parent dir
FileStatus parStatus = fs.getFileStatus(path.getParent());
if (!shims.hasStickyBit(parStatus.getPermission())) {
  // no sticky bit, so write permission on parent dir is sufficient
  // no further checks needed
  return;
}{code}
 

Because I set the table location to the HDFS root path (hdfs://emr-master-1:8020/), 
path.getParent() returns null, which causes the NPE.

I see four possible ways to fix the bug:
 # Modify the create-table function so that it fails when the location is a root 
directory.
 # Modify the FileUtils.checkDeletePermission function to check path.getParent(); 
if it is null, return so that the drop succeeds.
 # Modify the RangerHiveAuthorizer.checkPrivileges function of the Hive Ranger 
plugin (in the Ranger repo) so that creating a table at a root directory fails.
 # Modify the HDFS Path object so that path.getParent() does not return null when 
the URI is a root directory.

I recommend the first or second option; any suggestions? Thanks.
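Until a fix is in place, a simple workaround (assuming write access to the bucket) 
is to point LOCATION at a sub-directory instead of the bucket root, so that 
path.getParent() is never null; the sub-directory name below is just an example:
{code:java}
CREATE EXTERNAL TABLE `fcbai`(
`inv_item_sk` int,
`inv_warehouse_sk` int,
`inv_quantity_on_hand` int)
PARTITIONED BY (
`inv_date_sk` int) STORED AS ORC
LOCATION
's3a://bucketname/warehouse/fcbai/';  -- a sub-directory, not the bucket root
{code}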

 

 


[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-07 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Hadoop Flags: Incompatible change



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-07 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Hadoop Flags:   (was: Incompatible change)



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-06 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Attachment: (was: hive-bugs002.png)



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-06 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Attachment: (was: hive-bugs-001.png)



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-06 Thread Fachuan Bai (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fachuan Bai updated HIVE-25912:
---
Attachment: hive-bugs-001.png
hive-bugs002.png



[jira] [Updated] (HIVE-25912) Drop external table at root of s3 bucket throws NPE

2022-02-02 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HIVE-25912:
--
Summary: Drop external table at root of s3 bucket throws NPE  (was: Drop 
external table throw NPE)
