[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-11-07 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15645313#comment-15645313
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

Hey Barna,

Yeah, feel free to modify the code as you feel necessary. Appreciate your work, 
thanks!

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, HIVE-12891.03.patch, 
> HIVE-12891.04.patch, HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[SubtaskRunner] ERROR o.a.h.hive.ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[SubtaskRunner] ERROR o.a.h.hive.ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql.parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}
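For illustration, a minimal standalone sketch of the failure and one way around it: a relative {{java.io.tmpdir}} yields a relative path inside a {{file:}} URI, which {{java.net.URI}} rejects, so the property should be resolved to an absolute path before any URI is built. This is an assumption-laden example, not the code from the attached patches.

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

public class TmpDirUri {
  public static void main(String[] args) {
    // e.g. the JVM was started with -Djava.io.tmpdir=./tmp (a relative location)
    String tmp = System.getProperty("java.io.tmpdir");

    // Building "file:./tmp/..." produces "Relative path in absolute URI";
    // resolving to an absolute path first avoids the URISyntaxException.
    Path absolute = Paths.get(tmp).toAbsolutePath().normalize();
    System.out.println(absolute.toUri()); // e.g. file:///home/user/tmp/
  }
}
{code}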



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-07-27 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.14.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch, HIVE-13696.14.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).
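As a rough illustration of the monitoring piece (not the {{FileSystemWatcher}} class from the patch), a JDK {{WatchService}} can watch the directory containing the allocation file and trigger a reload when it changes; the file path and the reload action below are assumptions.

{code}
import java.nio.file.*;

public class AllocationFileWatch {
  public static void main(String[] args) throws Exception {
    // Hypothetical location of the file named by yarn.scheduler.fair.allocation.file.
    Path file = Paths.get("/etc/hadoop/conf/fair-scheduler.xml");
    WatchService watcher = FileSystems.getDefault().newWatchService();
    file.getParent().register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

    while (true) {
      WatchKey key = watcher.take();                 // blocks until a change arrives
      for (WatchEvent<?> event : key.pollEvents()) {
        if (file.getFileName().equals(event.context())) {
          // Reload/revalidate queue placement rules here.
          System.out.println("fair-scheduler.xml changed");
        }
      }
      key.reset();
    }
  }
}
{code}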



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13966) DbNotificationListener: can lose DDL operation notifications

2016-06-23 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346431#comment-15346431
 ] 

Reuben Kuhnert commented on HIVE-13966:
---

Seems like it might make sense to use an interface like 
{{TransactionalMetastoreEventListener}} or something so that we can enforce 
compile-time safety. Then just tag the relevant classes with that interface so 
that we can have loops of the form (with a compile-time check):

{code}
  for (TransactionalMetaStoreEventListener listener : listeners) { ... }
{code}

This allows you to enforce that listeners dropped into the 
{{hive.metastore.synchronous.event.listeners}} bucket all expect 
to be called within a transaction.

Alternatively, it might make sense to invert control such that listeners that 
care about transactional updates can listen for them, while leaving others 
unaffected:

{code}
abstract class MetastoreEventListener {
  public abstract void onTransactionalEvent(SomeEvent event); // transactional listeners will listen for this
  public abstract void onEvent(SomeEvent event);              // everyone else will listen for this
}
{code}

Transactional listeners will implement the first while leaving the second as a 
{{noop}}. 
$0.02
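For concreteness, a self-contained sketch of the marker-interface idea; every name here is illustrative rather than an actual Hive type.

{code}
// Listeners implementing this interface agree to run inside the open metastore
// transaction, so the dispatch loop below is checked at compile time.
interface MetaStoreEvent { }

interface TransactionalMetaStoreEventListener {
  void onEvent(MetaStoreEvent event);
}

class NotificationLogListener implements TransactionalMetaStoreEventListener {
  @Override
  public void onEvent(MetaStoreEvent event) {
    // Write the notification entry; a failure here should fail the transaction.
  }
}

class Dispatcher {
  void fire(Iterable<TransactionalMetaStoreEventListener> listeners, MetaStoreEvent event) {
    for (TransactionalMetaStoreEventListener listener : listeners) {
      listener.onEvent(event); // only listeners that opted into the transaction can appear here
    }
  }
}
{code}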

> DbNotificationListener: can lose DDL operation notifications
> -
>
> Key: HIVE-13966
> URL: https://issues.apache.org/jira/browse/HIVE-13966
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: Nachiket Vaidya
>Assignee: Rahul Sharma
>Priority: Critical
>
> The code for each API in HiveMetaStore.java is like this:
> 1. openTransaction()
> 2. -- operation--
> 3. commit() or rollback() based on result of the operation.
> 4. add entry to notification log (unconditionally)
> If the operation fails (in step 2), we still add an entry to the notification 
> log. Found this issue in testing.
> It is still OK, as this is only a false positive.
> If the operation succeeds but adding to the notification log fails, the 
> user will get a MetaException. It will not roll back the operation, as it is 
> already committed. We need to handle this case so that we do not have false 
> negatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13966) DbNotificationListener: can lose DDL operation notifications

2016-06-14 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329382#comment-15329382
 ] 

Reuben Kuhnert commented on HIVE-13966:
---

Looking at this pattern in a number of metastore functions:

{code}
if (!success) {
  ms.rollbackTransaction();
  if (madeDir) {
wh.deleteDir(tblPath, true);
  }
}
for (MetaStoreEventListener listener : listeners) {
  CreateTableEvent createTableEvent =
  new CreateTableEvent(tbl, success, this);
  createTableEvent.setEnvironmentContext(envContext);
  listener.onCreateTable(createTableEvent);
}
{code}

I'm noticing that {{DbNotificationListener}} is a subclass of 
{{MetaStoreEventListener}}. When you say we should not require bringing all 
post-event listeners into the transaction (but we do want to bring in 
{{DbNotificationListener}}), would that mean having a separate hierarchy for 
those listeners that *should* be part of the transaction? Is that what is meant 
by 'synchronous' (part of the transaction), or do we mean 'synchronous' as in 
not queued for processing later, per:

{code}
 * Design overview:  This listener takes any event, builds a NotificationEventResponse,
 * and puts it on a queue.  There is a dedicated thread that reads entries from the queue and
 * places them in the database.  The reason for doing it in a separate thread is that we want to
 * avoid slowing down other metadata operations with the work of putting the notification into
 * the database.  Also, occasionally the thread needs to clean the database of old records.  We
 * definitely don't want to do that as part of another metadata operation.
 */
public class DbNotificationListener extends MetaStoreEventListener {
{code}
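One possible shape of "part of the transaction", as a self-contained sketch; the store and listener interfaces below are placeholders, not the real HiveMetaStore/RawStore APIs.

{code}
// Sketch: the notification write happens before commit, so a failed log entry
// rolls the DDL back instead of producing a false negative.
interface TxnStore {
  void openTransaction();
  boolean commitTransaction();
  void rollbackTransaction();
  void createTable(String tableName);
}

interface TransactionalListener {
  void onCreateTable(String tableName); // throws a RuntimeException on failure
}

class CreateTableHandler {
  private final TxnStore ms;
  private final TransactionalListener notificationLog;

  CreateTableHandler(TxnStore ms, TransactionalListener notificationLog) {
    this.ms = ms;
    this.notificationLog = notificationLog;
  }

  void createTable(String tableName) {
    boolean success = false;
    try {
      ms.openTransaction();
      ms.createTable(tableName);
      notificationLog.onCreateTable(tableName); // a failure here now aborts the transaction
      success = ms.commitTransaction();
    } finally {
      if (!success) {
        ms.rollbackTransaction();
      }
    }
  }
}
{code}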

> DbNotificationListener: can lose DDL operation notifications
> -
>
> Key: HIVE-13966
> URL: https://issues.apache.org/jira/browse/HIVE-13966
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: Nachiket Vaidya
>Priority: Critical
>
> The code for each API in HiveMetaStore.java is like this:
> 1. openTransaction()
> 2. -- operation--
> 3. commit() or rollback() based on result of the operation.
> 4. add entry to notification log (unconditionally)
> If the operation fails (in step 2), we still add an entry to the notification 
> log. Found this issue in testing.
> It is still OK, as this is only a false positive.
> If the operation succeeds but adding to the notification log fails, the 
> user will get a MetaException. It will not roll back the operation, as it is 
> already committed. We need to handle this case so that we do not have false 
> negatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13946) Decimal value needs to be single-quoted when selecting where clause with that decimal value in order to get row

2016-06-13 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327793#comment-15327793
 ] 

Reuben Kuhnert commented on HIVE-13946:
---

Also, I'm noticing that in your previous ticket 
([HIVE-13945|https://issues.apache.org/jira/browse/HIVE-13945]) the decimal 
expands with a bunch of additional zeros, but in your example above it doesn't?

> Decimal value needs to be single-quoted when selecting where clause with that 
> decimal value in order to get row
> --
>
> Key: HIVE-13946
> URL: https://issues.apache.org/jira/browse/HIVE-13946
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
> Fix For: 1.2.1
>
>
> Create a table with a column of type decimal(38,18) and insert 
> '4327269606205.029297'. Then a select on that value does not return anything.
> {noformat}
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> drop table if exists test;
> No rows affected (0.175 seconds)
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181>
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> create table test (dc 
> decimal(38,18));
> No rows affected (0.098 seconds)
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181>
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> insert into table test values 
> (4327269606205.029297);
> INFO  : Session is already open
> INFO  : Dag name: insert into table tes...327269606205.029297)(Stage-1)
> INFO  : Tez session was closed. Reopening...
> INFO  : Session re-established.
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id 
> application_1464727816747_0762)
> INFO  : Map 1: -/-
> INFO  : Map 1: 0/1
> INFO  : Map 1: 0(+1)/1
> INFO  : Map 1: 1/1
> INFO  : Loading data to table default.test from 
> hdfs://ts-0531-5.openstacklocal:8020/apps/hive/warehouse/test/.hive-staging_hive_2016-06-04_00-03-54_302_7708281807413586675-940/-ext-1
> INFO  : Table default.test stats: [numFiles=1, numRows=1, totalSize=21, 
> rawDataSize=20]
> No rows affected (13.821 seconds)
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181>
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> select * from test;
> +---+--+
> |test.dc|
> +---+--+
> | 4327269606205.029297  |
> +---+--+
> 1 row selected (0.078 seconds)
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> select * from test where dc = 
> 4327269606205.029297;
> +--+--+
> | test.dc  |
> +--+--+
> +--+--+
> No rows selected (0.224 seconds)
> {noformat}
> If you single quote that decimal value, a row is returned.
> {noformat}
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> select * from test where dc = 
> '4327269606205.029297';
> +---+--+
> |test.dc|
> +---+--+
> | 4327269606205.029297  |
> +---+--+
> 1 row selected (0.085 seconds)
> {noformat}
> explain shows:
> {noformat}
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> explain select * from test 
> where dc = 4327269606205.029297;
> +--+--+
> |   Explain|
> +--+--+
> | STAGE DEPENDENCIES:  |
> |   Stage-0 is a root stage|
> |  |
> | STAGE PLANS: |
> |   Stage: Stage-0 |
> | Fetch Operator   |
> |   limit: -1  |
> |   Processor Tree:|
> | TableScan|
> |   alias: test|
> |   filterExpr: (dc = 4.3272696062050293E12) (type: boolean)   |
> |   Filter Operator|
> | predicate: (dc = 4.3272696062050293E12) (type: boolean)  |
> | Select Operator  |
> |   expressions: dc (type: decimal(38,18)) |
> |   outputColumnNames: _col0   |
> |   ListSink   |
> |  |
> +--+--+
> 18 rows selected (0.512 seconds)
> {noformat}



--

[jira] [Commented] (HIVE-13946) Decimal value needs to be single-quoted when selecting where clause with that decimal value in order to get row

2016-06-13 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327783#comment-15327783
 ] 

Reuben Kuhnert commented on HIVE-13946:
---

I'm getting different results, am I doing something wrong?

{code}
0: jdbc:hive2://localhost:1> show tables;
No rows selected (2.659 seconds)
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
0: jdbc:hive2://localhost:1> create table test (dc decimal(38,18));
No rows affected (1.367 seconds)
0: jdbc:hive2://localhost:1> insert into table test values (4327269606205.029297);
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the
future versions. Consider using a different execution engine (i.e. tez, spark)
or using Hive 1.X releases.
No rows affected (20.19 seconds)
0: jdbc:hive2://localhost:1> select * from test;
1 row selected (0.564 seconds)
+------------------------+--+
|        test.dc         |
+------------------------+--+
| 4327269606205.029297   |
+------------------------+--+
0: jdbc:hive2://localhost:1> select * from test where dc = 4327269606205.029297;
1 row selected (6.726 seconds)
+------------------------+--+
|        test.dc         |
+------------------------+--+
| 4327269606205.029300   |
+------------------------+--+
0: jdbc:hive2://localhost:1> explain select * from test where dc = 4327269606205.029297;
+----------------------------------------------------------------------------------------------+--+
|                                           Explain                                             |
+----------------------------------------------------------------------------------------------+--+
| STAGE DEPENDENCIES:                                                                            |
|   Stage-0 is a root stage                                                                      |
|                                                                                                |
| STAGE PLANS:                                                                                   |
|   Stage: Stage-0                                                                               |
|     Fetch Operator                                                                             |
|       limit: -1                                                                                |
|       Processor Tree:                                                                          |
|         TableScan                                                                              |
|           alias: test                                                                          |
|           Statistics: Num rows: 1 Data size: 32 Basic stats: COMPLETE Column stats: NONE       |
|           Filter Operator                                                                      |
|             predicate: (UDFToDouble(dc) = 4.3272696062050293E12) (type: boolean)               |
|             Statistics: Num rows: 1 Data size: 32 Basic stats: COMPLETE Column stats: NONE     |
|             Select Operator                                                                    |
|               expressions: 4327269606205.0293 (type: decimal(38,18))                           |
|               outputColumnNames: _col0                                                         |
|               Statistics: Num rows: 1 Data size: 32 Basic stats: COMPLETE Column stats: NONE   |
|               ListSink                                                                         |
|                                                                                                |
+----------------------------------------------------------------------------------------------+--+
{code}
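The plan above explains the mismatch: the unquoted literal is coerced through {{UDFToDouble}}, and a double cannot represent this decimal exactly. A small standalone illustration (not Hive code):

{code}
import java.math.BigDecimal;

public class DecimalVsDouble {
  public static void main(String[] args) {
    BigDecimal exact = new BigDecimal("4327269606205.029297");
    double rounded = exact.doubleValue(); // what a double coercion of the literal produces

    System.out.println(exact);                   // 4327269606205.029297
    System.out.println(new BigDecimal(rounded)); // the nearest double, not the same value
    System.out.println(exact.compareTo(new BigDecimal(rounded)) == 0); // false
  }
}
{code}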

> Decimal value needs to be single-quoted when selecting where clause with that 
> decimal value in order to get row
> --
>
> Key: HIVE-13946
> URL: https://issues.apache.org/jira/browse/HIVE-13946
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Takahiko Saito
> Fix For: 1.2.1
>
>
> Create a table with a column of type decimal(38,18) and insert 
> '4327269606205.029297'. Then a select on that value does not return anything.
> {noformat}
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181> drop table if exists test;
> No rows affected (0.175 seconds)
> 0: jdbc:hive2://ts-0531-1.openstacklocal:2181>
> 0: 

[jira] [Updated] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13864:
--
Status: Patch Available  (was: Open)

> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch, HIVE-13864.02.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13864:
--
Status: Open  (was: Patch Available)

> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch, HIVE-13864.02.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13864:
--
Attachment: HIVE-13864.02.patch

> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch, HIVE-13864.02.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-13864 started by Reuben Kuhnert.
-
> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13864:
--
Attachment: HIVE-13864.01.patch

> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13864) Beeline ignores the command that follows a semicolon and comment

2016-06-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13864:
--
Status: Patch Available  (was: In Progress)

> Beeline ignores the command that follows a semicolon and comment
> 
>
> Key: HIVE-13864
> URL: https://issues.apache.org/jira/browse/HIVE-13864
> Project: Hive
>  Issue Type: Bug
>Reporter: Muthu Manickam
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13864.01.patch
>
>
> Beeline ignores the next line/command that follows a command ending with a 
> semicolon and a comment.
> Example 1:
> select *
> from table1; -- comments
> select * from table2;
> In this case, only the first command is executed; the second command "select * 
> from table2" is not executed.
> --
> Example 2:
> select *
> from table1; -- comments
> select * from table2;
> select * from table3;
> In this case, the first and third commands are executed; the second command 
> "select * from table2" is not executed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13884) Disallow queries fetching more than a configured number of partitions in PartitionPruner

2016-06-09 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15322686#comment-15322686
 ] 

Reuben Kuhnert commented on HIVE-13884:
---

This looks fine to me. A few minor nitpicks about code style and cleanup, but for 
the most part this is clear. It's good to move the limit check into the metastore 
rather than handling it downstream during semantic analysis.

> Disallow queries fetching more than a configured number of partitions in 
> PartitionPruner
> 
>
> Key: HIVE-13884
> URL: https://issues.apache.org/jira/browse/HIVE-13884
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mohit Sabharwal
>Assignee: Sergio Peña
> Attachments: HIVE-13884.1.patch
>
>
> Currently the PartitionPruner requests either all partitions or partitions 
> based on a filter expression. In either scenario, if the number of partitions 
> accessed is large, there can be significant memory pressure at the HMS server 
> end.
> We already have a config {{hive.limit.query.max.table.partition}} that 
> enforces a limit on the number of partitions that may be scanned per operator. 
> But this check happens after the PartitionPruner has already fetched all 
> partitions.
> We should add an option at the PartitionPruner level to disallow queries that 
> attempt to access a number of partitions beyond a configurable limit.
> Note that {{hive.mapred.mode=strict}} disallows queries without a partition 
> filter in the PartitionPruner, but this check accepts any query with a pruning 
> condition, even if the number of partitions fetched is large. In multi-tenant 
> environments, admins could use more control over the number of partitions 
> allowed, based on HMS memory capacity.
> One option is to have the PartitionPruner first fetch the partition names 
> (instead of partition specs) and throw an exception if the number of partitions 
> exceeds the configured value; otherwise, fetch the partition specs.
> It looks like the existing {{listPartitionNames}} call could be used if extended 
> to take partition filter expressions like the {{getPartitionsByExpr}} call does.
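A self-contained sketch of the "fetch names first, enforce the limit, then fetch specs" idea from the description; the interface and method names are hypothetical, not the HMS API.

{code}
import java.util.List;

interface PartitionSource {
  List<String> listPartitionNames(String db, String table, String filter);
  List<Object> getPartitionsByNames(String db, String table, List<String> names);
}

class PrunerLimitCheck {
  // Reject the query before any (comparatively heavy) partition specs are fetched.
  static List<Object> fetchWithLimit(PartitionSource src, String db, String table,
                                     String filter, int maxAllowed) {
    List<String> names = src.listPartitionNames(db, table, filter);
    if (names.size() > maxAllowed) {
      throw new IllegalStateException("Query would read " + names.size()
          + " partitions; the configured limit is " + maxAllowed);
    }
    return src.getPartitionsByNames(db, table, names);
  }
}
{code}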



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-24 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.13.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-24 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Patch Available  (was: Open)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-24 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Open  (was: Patch Available)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch, 
> HIVE-13696.13.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13783) No secondary prompt

2016-05-19 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291275#comment-15291275
 ] 

Reuben Kuhnert commented on HIVE-13783:
---

LGTM:
{code}
beeline> this
. . . .> seems
. . . .> to
. . . .> work
. . . .> ;
beeline> No current connection
{code}

> No secondary prompt
> ---
>
> Key: HIVE-13783
> URL: https://issues.apache.org/jira/browse/HIVE-13783
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-13783.01.patch
>
>
> {noformat}
> # beeline -u jdbc:hive2://localhost:1
> [...]
> Beeline version 1.1.0-cdh5.4.5 by Apache Hive
> 0: jdbc:hive2://localhost:1> "
> 0: jdbc:hive2://localhost:1> select * from foo;
> Error: Error while compiling statement: FAILED: ParseException line 2:17 
> character '' not supported here (state=42000,code=4)
> 0: jdbc:hive2://localhost:1> 
> {noformat}
> After (accidentally) entering a lone quote character on its own line and 
> pressing Enter, I get back the normal prompt. This easily makes me believe 
> I'm about to type a new command from scratch, e.g. a select query as in the 
> example, which ends up not working due to a parse error.
> Expected behavior: when a previous command is continued, or a quote is opened, 
> or anything like this, a different-looking secondary prompt should be 
> displayed rather than the normal prompt, as is done in e.g. hive, 
> impala, mysql, bash..., e.g.:
> {noformat}
> # beeline -u jdbc:hive2://localhost:1
> [...]
> Beeline version 1.1.0-cdh5.4.5 by Apache Hive
> 0: jdbc:hive2://localhost:1> "
>> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-14 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.11.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch, HIVE-13696.11.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Patch Available  (was: Open)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.08.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-13 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Open  (was: Patch Available)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch, HIVE-13696.08.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-12 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Patch Available  (was: Open)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-12 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.06.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-12 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Open  (was: Patch Available)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch, 
> HIVE-13696.06.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-09 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.02.patch

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch, HIVE-13696.02.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Monitor fair-scheduler.xml and automatically update/validate jobs submitted to fair-scheduler

2016-05-09 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Summary: Monitor fair-scheduler.xml and automatically update/validate jobs 
submitted to fair-scheduler  (was: Validate jobs submitted to fair-scheduler)

> Monitor fair-scheduler.xml and automatically update/validate jobs submitted 
> to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Validate jobs submitted to fair-scheduler

2016-05-07 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Description: 
Ensure that jobs are placed into the correct queue according to 
{{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and users 
should not be able to submit jobs to queues they do not have access to.

This patch builds on the existing functionality in {{FairSchedulerShim}} to 
route jobs to user-specific queue based on {{fair-scheduler.xml}} configuration 
(leveraging the Yarn {{QueuePlacementPolicy}} class). In addition to 
configuring job routing at session connect (current behavior), the routing is 
validated per submission to yarn (when impersonation is off). A 
{{FileSystemWatcher}} class is included to monitor changes in the 
{{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).

  was:Ensure that jobs are placed into the correct queue according to 
{{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and users 
should not be able to submit jobs to queues they do not have access to.


> Validate jobs submitted to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.
> This patch builds on the existing functionality in {{FairSchedulerShim}} to 
> route jobs to user-specific queue based on {{fair-scheduler.xml}} 
> configuration (leveraging the Yarn {{QueuePlacementPolicy}} class). In 
> addition to configuring job routing at session connect (current behavior), 
> the routing is validated per submission to yarn (when impersonation is off). 
> A {{FileSystemWatcher}} class is included to monitor changes in the 
> {{fair-scheduler.xml}} file (so updates are automatically reloaded when the 
> file pointed to by {{yarn.scheduler.fair.allocation.file}} is changed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Validate jobs submitted to fair-scheduler

2016-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Status: Patch Available  (was: Open)

> Validate jobs submitted to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13696) Validate jobs submitted to fair-scheduler

2016-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13696:
--
Attachment: HIVE-13696.01.patch

> Validate jobs submitted to fair-scheduler
> -
>
> Key: HIVE-13696
> URL: https://issues.apache.org/jira/browse/HIVE-13696
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13696.01.patch
>
>
> Ensure that jobs are placed into the correct queue according to 
> {{fair-scheduler.xml}}. Jobs should be placed into the correct queue, and 
> users should not be able to submit jobs to queues they do not have access to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13478) [Cleanup] Improve HookUtils performance

2016-04-15 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13478:
--
Attachment: HIVE-13478.03.patch

> [Cleanup] Improve HookUtils performance
> ---
>
> Key: HIVE-13478
> URL: https://issues.apache.org/jira/browse/HIVE-13478
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13478.01.patch, HIVE-13478.02.patch, 
> HIVE-13478.03.patch
>
>
> Minor cleanup. {{HookUtils.getHooks}} is called multiple times for every 
> statement executed performing nearly identical work. Cache the results of the 
> work to improve performance (LRU). 
> Also introduce the {{@CacheableHook}} annotation which can be appended to 
> hooks that don't need to be re-instantiated using expensive reflection (such 
> as Sentry hooks that load configuration on initialization).
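For reference, a minimal LRU cache in the spirit of the description, built on {{LinkedHashMap}} in access order; this is an illustration, not the cache the patch actually adds.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  LruCache(int maxEntries) {
    super(16, 0.75f, true);     // accessOrder = true gives least-recently-used iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries; // evict the least-recently-used entry once over capacity
  }
}
{code}

For example, a map built this way could key already-instantiated hook lists by the configuration value that names them.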



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13478) [Cleanup] Improve HookUtils performance

2016-04-14 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15241705#comment-15241705
 ] 

Reuben Kuhnert commented on HIVE-13478:
---

Oops, I broke the original test; that's no good. Let me fix that...

> [Cleanup] Improve HookUtils performance
> ---
>
> Key: HIVE-13478
> URL: https://issues.apache.org/jira/browse/HIVE-13478
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13478.01.patch, HIVE-13478.02.patch
>
>
> Minor cleanup. {{HookUtils.getHooks}} is called multiple times for every 
> statement executed performing nearly identical work. Cache the results of the 
> work to improve performance (LRU). 
> Also introduce the {{@CacheableHook}} annotation which can be appended to 
> hooks that don't need to be re-instantiated using expensive reflection (such 
> as Sentry hooks that load configuration on initialization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13478) [Cleanup] Improve HookUtils performance

2016-04-11 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13478:
--
Attachment: HIVE-13478.02.patch

> [Cleanup] Improve HookUtils performance
> ---
>
> Key: HIVE-13478
> URL: https://issues.apache.org/jira/browse/HIVE-13478
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13478.01.patch, HIVE-13478.02.patch
>
>
> Minor cleanup. {{HookUtils.getHooks}} is called multiple times for every 
> statement executed performing nearly identical work. Cache the results of the 
> work to improve performance (LRU). 
> Also introduce the {{@CacheableHook}} annotation which can be appended to 
> hooks that don't need to be re-instantiated using expensive reflection (such 
> as Sentry hooks that load configuration on initialization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13478) [Cleanup] Improve HookUtils performance

2016-04-11 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13478:
--
Description: 
Minor cleanup. {{HookUtils.getHooks}} is called multiple times for every 
statement executed performing nearly identical work. Cache the results of the 
work to improve performance (LRU). 

Also introduce the {{@CacheableHook}} annotation which can be appended to hooks 
that don't need to be re-instantiated using expensive reflection (such as 
Sentry hooks that load configuration on initialization).

  was:
Minor cleanup. {{HookUtils.getHooks}} multiple times for every statement 
executed performing nearly identical work. Cache the results of the work to 
improve performance. 

Also introduce the {{@CacheableHook}} annotation which can be appended to hooks 
that don't need to be re-instantiated using expensive reflection (such as 
Sentry hooks that load configuration on initialization).


> [Cleanup] Improve HookUtils performance
> ---
>
> Key: HIVE-13478
> URL: https://issues.apache.org/jira/browse/HIVE-13478
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13478.01.patch
>
>
> Minor cleanup. {{HookUtils.getHooks}} is called multiple times for every 
> statement executed performing nearly identical work. Cache the results of the 
> work to improve performance (LRU). 
> Also introduce the {{@CacheableHook}} annotation which can be appended to 
> hooks that don't need to be re-instantiated using expensive reflection (such 
> as Sentry hooks that load configuration on initialization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13478) [Cleanup] Improve HookUtils performance

2016-04-11 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13478:
--
Attachment: HIVE-13478.01.patch

> [Cleanup] Improve HookUtils performance
> ---
>
> Key: HIVE-13478
> URL: https://issues.apache.org/jira/browse/HIVE-13478
> Project: Hive
>  Issue Type: Improvement
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13478.01.patch
>
>
> Minor cleanup. {{HookUtils.getHooks}} multiple times for every statement 
> executed performing nearly identical work. Cache the results of the work to 
> improve performance. 
> Also introduce the {{@CacheableHook}} annotation which can be appended to 
> hooks that don't need to be re-instantiated using expensive reflection (such 
> as Sentry hooks that load configuration on initialization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13387) Beeline fails silently from missing dependency

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13387:
--
Status: Patch Available  (was: Open)

> Beeline fails silently from missing dependency
> --
>
> Key: HIVE-13387
> URL: https://issues.apache.org/jira/browse/HIVE-13387
> Project: Hive
>  Issue Type: Task
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13387.01.patch
>
>
> Beeline fails to connect because the {{HiveSQLException}} dependency is not on 
> the classpath:
> {code}
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
>   at 
> org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1077)
>   at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1116)
>   at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:762)
>   at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:841)
>   at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
>   at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hive/service/cli/HiveSQLException
>   at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:131)
>   at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
>   at java.sql.DriverManager.getConnection(DriverManager.java:571)
>   at java.sql.DriverManager.getConnection(DriverManager.java:187)
>   at 
> org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:141)
>   at 
> org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:205)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1393)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1314)
>   ... 11 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hive.service.cli.HiveSQLException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 19 more
> {code}
> This happens when trying to run beeline as a standalone java application:
> {code}
> sircodesalot@excalibur:~/Dev/Cloudera/hive/beeline$ mvn exec:java 
> -Dexec.args='-u jdbc:hive2://localhost:1 sircodesalot' 
> -Dexec.mainClass="org.apache.hive.beeline.BeeLine"
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Beeline 2.1.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- exec-maven-plugin:1.4.0:java (default-cli) @ hive-beeline ---
> Connecting to jdbc:hive2://localhost:1
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console.
> org/apache/hive/service/cli/HiveSQLException
> Beeline version ??? by Apache Hive
> // HERE: This will never connect because of ClassNotFoundException. 
> 0: jdbc:hive2://localhost:1 (closed)>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13387) Beeline fails silently from missing dependency

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13387:
--
Attachment: HIVE-13387.01.patch

> Beeline fails silently from missing dependency
> --
>
> Key: HIVE-13387
> URL: https://issues.apache.org/jira/browse/HIVE-13387
> Project: Hive
>  Issue Type: Task
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13387.01.patch
>
>
> Beeline fails to connect because the {{HiveSQLException}} dependency is not on 
> the classpath:
> {code}
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
>   at 
> org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1077)
>   at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1116)
>   at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:762)
>   at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:841)
>   at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
>   at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hive/service/cli/HiveSQLException
>   at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:131)
>   at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
>   at java.sql.DriverManager.getConnection(DriverManager.java:571)
>   at java.sql.DriverManager.getConnection(DriverManager.java:187)
>   at 
> org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:141)
>   at 
> org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:205)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1393)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1314)
>   ... 11 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hive.service.cli.HiveSQLException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 19 more
> {code}
> This happens when trying to run beeline as a standalone java application:
> {code}
> sircodesalot@excalibur:~/Dev/Cloudera/hive/beeline$ mvn exec:java 
> -Dexec.args='-u jdbc:hive2://localhost:1 sircodesalot' 
> -Dexec.mainClass="org.apache.hive.beeline.BeeLine"
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Beeline 2.1.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- exec-maven-plugin:1.4.0:java (default-cli) @ hive-beeline ---
> Connecting to jdbc:hive2://localhost:1
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console.
> org/apache/hive/service/cli/HiveSQLException
> Beeline version ??? by Apache Hive
> // HERE: This will never connect because of ClassNotFoundException. 
> 0: jdbc:hive2://localhost:1 (closed)>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13387) Beeline fails silently from missing dependency

2016-03-30 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218320#comment-15218320
 ] 

Reuben Kuhnert commented on HIVE-13387:
---

The {{HiveSQLException}} class comes from {{hive-service}}. That artifact is 
declared as a dependency, but only with 'test' scope. Removing the invalid 
scope fixes the issue:

{code}
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-service</artifactId>
  <version>${project.version}</version>
</dependency>
{code}

> Beeline fails silently from missing dependency
> --
>
> Key: HIVE-13387
> URL: https://issues.apache.org/jira/browse/HIVE-13387
> Project: Hive
>  Issue Type: Task
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
>
> Beeline fails to connect because the {{HiveSQLException}} dependency is not on 
> the classpath:
> {code}
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
>   at 
> org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1077)
>   at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1116)
>   at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:762)
>   at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:841)
>   at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
>   at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hive/service/cli/HiveSQLException
>   at org.apache.hive.jdbc.HiveConnection.(HiveConnection.java:131)
>   at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
>   at java.sql.DriverManager.getConnection(DriverManager.java:571)
>   at java.sql.DriverManager.getConnection(DriverManager.java:187)
>   at 
> org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:141)
>   at 
> org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:205)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1393)
>   at org.apache.hive.beeline.Commands.connect(Commands.java:1314)
>   ... 11 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hive.service.cli.HiveSQLException
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 19 more
> {code}
> This happens when trying to run beeline as a standalone java application:
> {code}
> sircodesalot@excalibur:~/Dev/Cloudera/hive/beeline$ mvn exec:java 
> -Dexec.args='-u jdbc:hive2://localhost:1 sircodesalot' 
> -Dexec.mainClass="org.apache.hive.beeline.BeeLine"
> [INFO] Scanning for projects...
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Beeline 2.1.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- exec-maven-plugin:1.4.0:java (default-cli) @ hive-beeline ---
> Connecting to jdbc:hive2://localhost:1
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console.
> org/apache/hive/service/cli/HiveSQLException
> Beeline version ??? by Apache Hive
> // HERE: This will never connect because of ClassNotFoundException. 
> 0: jdbc:hive2://localhost:1 (closed)>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13385) [Cleanup] Streamline Beeline instantiation

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13385:
--
Attachment: HIVE-13385.01.patch

> [Cleanup] Streamline Beeline instantiation
> --
>
> Key: HIVE-13385
> URL: https://issues.apache.org/jira/browse/HIVE-13385
> Project: Hive
>  Issue Type: Task
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-13385.01.patch
>
>
> Janitorial. Remove circular dependencies in {{BeelineCommandLineCompleter}} 
> and streamline the code for readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Status: Patch Available  (was: Open)

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch, 
> HIVE-12612.03.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Status: Open  (was: Patch Available)

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch, 
> HIVE-12612.03.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-30 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Attachment: HIVE-12612.03.patch

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch, 
> HIVE-12612.03.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-25 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Status: Patch Available  (was: Open)

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-25 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Attachment: HIVE-12612.02.patch

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-25 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Status: Open  (was: Patch Available)

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch, HIVE-12612.02.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-22 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Attachment: HIVE-12612.01.patch

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-22 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12612:
--
Status: Patch Available  (was: Open)

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-12612.01.patch
>
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-21 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15205219#comment-15205219
 ] 

Reuben Kuhnert commented on HIVE-12612:
---

I was able to reproduce this issue earlier, but I am wondering what the desired 
behavior should be. For example, if the user executes something like:

{code}
echo "should-fail; show tables;" | beeline
{code}

what should the exit status be? Should it be '0' because the last command ran 
successfully, or a nonzero error code because the first command failed? In 
addition, when running a standard (long-running) beeline session, should we 
ever return a failure status (if, say, one of the commands during the session 
fails)? The workaround here seems like the only realistic solution, but I 
would love input.
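For discussion, here is a tiny self-contained sketch of the "sticky failure" option (keep processing stdin, but exit nonzero if anything failed). This is hypothetical demo code, not BeeLine's actual dispatch loop:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class StickyExitStatusDemo {
  public static void main(String[] args) throws IOException {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    boolean anyFailed = false;
    String line;
    while ((line = in.readLine()) != null) {
      // Stand-in for dispatching one command; keep going even after a failure.
      boolean ok = execute(line.trim());
      anyFailed |= !ok;
    }
    // Exit nonzero if *any* command failed, even if the last one succeeded.
    System.exit(anyFailed ? 2 : 0);
  }

  private static boolean execute(String command) {
    // Toy rule: any line containing "fail" is treated as a failed command.
    return !command.contains("fail");
  }
}
{code}

Under these semantics, piping "should-fail; show tables;" into the tool would exit nonzero even though the last command succeeded.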

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13311) MetaDataFormatUtils throws NPE when HiveDecimal.create is null

2016-03-20 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13311:
--
Status: Patch Available  (was: Open)

> MetaDataFormatUtils throws NPE when HiveDecimal.create is null
> --
>
> Key: HIVE-13311
> URL: https://issues.apache.org/jira/browse/HIVE-13311
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13311.01.patch
>
>
> The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
> {{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
> an NPE when {{.toString()}} is called on the result.
> {code}
>   private static String convertToString(Decimal val) {
> if (val == null) {
>   return "";
> }
> // HERE: Will throw NPE when HiveDecimal.create returns null.
> return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
> val.getScale()).toString();
>   }
> {code}
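For illustration, one possible guard is to null-check the result of {{HiveDecimal.create}} before calling {{toString()}}. This is a sketch written against the method quoted above (relying on the same types and imports as the surrounding class), not necessarily what the attached patch does:

{code}
  private static String convertToString(Decimal val) {
    if (val == null) {
      return "";
    }
    // HiveDecimal.create can return null (e.g. when the value cannot be
    // represented), so guard the result instead of calling toString() on it.
    HiveDecimal decimal =
        HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
    return (decimal == null) ? "" : decimal.toString();
  }
{code}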



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13311) MetaDataFormatUtils throws NPE when HiveDecimal.create is null

2016-03-20 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13311:
--
Description: 
The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
{{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
an NPE when {{.toString()}} is called on the result.

{code}
  private static String convertToString(Decimal val) {
if (val == null) {
  return "";
}

return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
val.getScale()).toString();
  }
{code}

  was:
The {{MetadataFormatUtils.convertToString}} functions have guards to validate 
for when valid is null, however the 

{code}
  private static String convertToString(Decimal val) {
if (val == null) {
  return "";
}

return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
val.getScale()).toString();
  }
{code}


> MetaDataFormatUtils throws NPE when HiveDecimal.create is null
> --
>
> Key: HIVE-13311
> URL: https://issues.apache.org/jira/browse/HIVE-13311
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>
> The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
> {{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
> an NPE when {{.toString()}} is called on the result.
> {code}
>   private static String convertToString(Decimal val) {
> if (val == null) {
>   return "";
> }
> return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
> val.getScale()).toString();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-12612) beeline always exits with 0 status when reading query from standard input

2016-03-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert reassigned HIVE-12612:
-

Assignee: Reuben Kuhnert

> beeline always exits with 0 status when reading query from standard input
> -
>
> Key: HIVE-12612
> URL: https://issues.apache.org/jira/browse/HIVE-12612
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
> Environment: CDH5.5.0
>Reporter: Paulo Sequeira
>Assignee: Reuben Kuhnert
>Priority: Minor
>
> Similar to what was reported on HIVE-6978, but now it only happens when the 
> query is read from the standard input. For example, the following fails as 
> expected:
> {code}
> bash$ if beeline -u "jdbc:hive2://..." -e "boo;" ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Error: Error while compiling statement: FAILED: ParseException line 1:0 
> cannot recognize input near 'boo' '' '' (state=42000,code=4)
> Closing: 0: jdbc:hive2://...
> Failed!
> {code}
> But the following does not:
> {code}
> bash$ if echo "boo;"|beeline -u "jdbc:hive2://..." ; then echo "Ok?!" ; else 
> echo "Failed!" ; fi
> Connecting to jdbc:hive2://...
> Connected to: Apache Hive (version 1.1.0-cdh5.5.0)
> Driver: Hive JDBC (version 1.1.0-cdh5.5.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 1.1.0-cdh5.5.0 by Apache Hive
> 0: jdbc:hive2://...:8> Error: Error while compiling statement: FAILED: 
> ParseException line 1:0 cannot recognize input near 'boo' '' '' 
> (state=42000,code=4)
> 0: jdbc:hive2://...:8> Closing: 0: jdbc:hive2://...
> Ok?!
> {code}
> This was misleading our batch scripts to always believe that the execution of 
> the queries succeded, when sometimes that was not the case. 
> h2. Workaround
> We found we can work around the issue by always using the -e or the -f 
> parameters, and even reading the standard input through the /dev/stdin device 
> (this was useful because a lot of the scripts fed the queries from here 
> documents), like this:
> {code:title=some-script.sh}
> #!/bin/sh
> set -o nounset -o errexit -o pipefail
> # As beeline is failing to report an error status if reading the query
> # to be executed from STDIN, check whether no -f or -e option is used
> # and, in that case, pretend it has to read the query from a regular
> # file using -f to read from /dev/stdin
> function beeline_workaround_exit_status () {
> for arg in "$@"
> do if [ "$arg" = "-f" -o "$arg" = "-e" ]
>then beeline -u "..." "$@"
> return
>fi
> done
> beeline -u "..." "$@" -f /dev/stdin
> }
> beeline_workaround_exit_status < boo;
> EOF
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13311) MetaDataFormatUtils throws NPE when HiveDecimal.create is null

2016-03-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13311:
--
Description: 
The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
{{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
an NPE when {{.toString()}} is called on the result.

{code}
  private static String convertToString(Decimal val) {
if (val == null) {
  return "";
}

// HERE: Will throw NPE when HiveDecimal.create returns null.
return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
val.getScale()).toString();
  }
{code}

  was:
The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
{{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
an NPE when {{.toString()}} is called on the result.

{code}
  private static String convertToString(Decimal val) {
if (val == null) {
  return "";
}

return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
val.getScale()).toString();
  }
{code}


> MetaDataFormatUtils throws NPE when HiveDecimal.create is null
> --
>
> Key: HIVE-13311
> URL: https://issues.apache.org/jira/browse/HIVE-13311
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>
> The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
> {{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
> an NPE when {{.toString()}} is called on the result.
> {code}
>   private static String convertToString(Decimal val) {
> if (val == null) {
>   return "";
> }
> // HERE: Will throw NPE when HiveDecimal.create returns null.
> return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
> val.getScale()).toString();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13311) MetaDataFormatUtils throws NPE when HiveDecimal.create is null

2016-03-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13311:
--
Attachment: HIVE-13311.01.patch

> MetaDataFormatUtils throws NPE when HiveDecimal.create is null
> --
>
> Key: HIVE-13311
> URL: https://issues.apache.org/jira/browse/HIVE-13311
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13311.01.patch
>
>
> The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
> {{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
> an NPE when {{.toString()}} is called on the result.
> {code}
>   private static String convertToString(Decimal val) {
> if (val == null) {
>   return "";
> }
> // HERE: Will throw NPE when HiveDecimal.create returns null.
> return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
> val.getScale()).toString();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13311) MetaDataFormatUtils throws NPE when HiveDecimal.create is null

2016-03-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13311:
--
Priority: Minor  (was: Major)

> MetaDataFormatUtils throws NPE when HiveDecimal.create is null
> --
>
> Key: HIVE-13311
> URL: https://issues.apache.org/jira/browse/HIVE-13311
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
>
> The {{MetaDataFormatUtils.convertToString}} functions have guards for when 
> {{val}} is null; however, {{HiveDecimal.create}} can return null, which causes 
> an NPE when {{.toString()}} is called on the result.
> {code}
>   private static String convertToString(Decimal val) {
> if (val == null) {
>   return "";
> }
> // HERE: Will throw NPE when HiveDecimal.create returns null.
> return HiveDecimal.create(new BigInteger(val.getUnscaled()), 
> val.getScale()).toString();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12540) Create function failed, but show functions display it

2016-03-15 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15195436#comment-15195436
 ] 

Reuben Kuhnert commented on HIVE-12540:
---

I just tested this, but it worked for me. Is this still an issue?

Declaration:

{code}
public class FunctionTask extends Task {
  public static class MyUDF extends UDF {

  }
}
{code}

Test:
{code}
create function udfTest as 'org.apache.hadoop.hive.ql.exec.FunctionTask$MyUDF';
INFO  : Compiling 
command(queryId=sircodesalot_20160315095656_38c72e48-856e-4ece-94e8-eecc145cc045):
 create function udfTest as 'org.apache.hadoop.hive.ql.exec.FunctionTask$MyUDF'
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling 
command(queryId=sircodesalot_20160315095656_38c72e48-856e-4ece-94e8-eecc145cc045);
 Time taken: 0.108 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing 
command(queryId=sircodesalot_20160315095656_38c72e48-856e-4ece-94e8-eecc145cc045):
 create function udfTest as 'org.apache.hadoop.hive.ql.exec.FunctionTask$MyUDF'
INFO  : Starting task [Stage-0:FUNC] in serial mode
INFO  : Completed executing 
command(queryId=sircodesalot_20160315095656_38c72e48-856e-4ece-94e8-eecc145cc045);
 Time taken: 75.289 seconds
INFO  : OK
No rows affected (75.5 seconds)
{code}

{code}
0: jdbc:hive2://localhost:1> show functions;
show functions;
+-+--+
|tab_name |
+-+--+
| ... |
| default.udftest |
| ... |
+-+--+
{code}

> Create function failed, but show functions display it
> -
>
> Key: HIVE-12540
> URL: https://issues.apache.org/jira/browse/HIVE-12540
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.0, 1.2.1
>Reporter: Weizhong
>Priority: Minor
>
> {noformat}
> 0: jdbc:hive2://vm119:1> create function udfTest as 
> 'hive.udf.UDFArrayNotE';
> ERROR : Failed to register default.udftest using class hive.udf.UDFArrayNotE
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.FunctionTask (state=08S01,code=1)
> 0: jdbc:hive2://vm119:1> show functions;
> +-+--+
> |tab_name |
> +-+--+
> | ... |
> | default.udftest |
> | ... |
> +-+--+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create table in nested directory

2016-03-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Status: Patch Available  (was: Open)

> Show helpful error message on failure to create table in nested directory
> -
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch, HIVE-13231.02.patch
>
>
> Hive cannot store data in a directory whose parent doesn't exist, even though 
> the target dir does have an existing ancestor on HDFS. This occurs when trying 
> to perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
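For illustration, a fail-fast check along these lines (using the Hadoop {{FileSystem}} API) could replace the bare rename failure above with a clearer message. The {{MoveTargetCheck}} class and its message text are hypothetical, not the attached patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MoveTargetCheck {
  /**
   * Fails with a descriptive message when the parent of the target directory
   * does not exist, instead of letting the rename fail with a generic error.
   */
  public static void checkTargetParent(FileSystem fs, Path target) throws IOException {
    Path parent = target.getParent();
    if (parent != null && !fs.exists(parent)) {
      throw new IOException("Cannot move data to " + target
          + ": parent directory " + parent + " does not exist");
    }
  }
}
{code}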



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create table in nested directory

2016-03-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Status: Open  (was: Patch Available)

> Show helpful error message on failure to create table in nested directory
> -
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch, HIVE-13231.02.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create table in nested directory

2016-03-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Attachment: HIVE-13231.02.patch

> Show helpful error message on failure to create table in nested directory
> -
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch, HIVE-13231.02.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Summary: Show helpful error message on failure to create table in nested 
directory  (was: Show helpful error message on failure to create nested table 
in nested directory)

> Show helpful error message on failure to create table in nested directory
> -
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Attachment: HIVE-13231.01.patch

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Status: Patch Available  (was: Open)

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Status: Open  (was: Patch Available)

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Attachment: (was: CDH-37508.01.patch)

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: HIVE-13231.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15185351#comment-15185351
 ] 

Reuben Kuhnert commented on HIVE-13231:
---

So it looks like it *does* work if you have {{hive.insert.into.multilevel.dirs}} 
turned on:

{code}
class MoveTask {
  private void moveFile(Path sourcePath, Path targetPath, boolean isDfsDir) {
...
if (HiveConf.getBoolVar(conf, 
HiveConf.ConfVars.HIVE_INSERT_INTO_MULTILEVEL_DIRS)) {
  deletePath = createTargetPath(targetPath, fs);
}
...
  }
}
{code}

In action:

{code}
(Using gerrit/cdh5-1.1.0_dev)

0: jdbc:hive2://localhost:1> set hive.insert.into.multilevel.dirs;
+----------------------------------------+--+
|                  set                   |
+----------------------------------------+--+
| hive.insert.into.multilevel.dirs=true  |
+----------------------------------------+--+

0: jdbc:hive2://localhost:1> create table shouldwork location 
'/user/hive/warehouse/x/y/z/shouldwork' as select * from another;
INFO  : Compiling 
command(queryId=sircodesalot_20160105131616_2bbad703-76fe-4621-809e-11a16ee72182):
 create table shouldwork location '/user/hive/warehouse/x/y/z/shouldwork' as 
select * from another
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: 
Schema(fieldSchemas:[FieldSchema(name:another.id, type:int, comment:null), 
FieldSchema(name:another.name, type:string, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=sircodesalot_20160105131616_2bbad703-76fe-4621-809e-11a16ee72182);
 Time taken: 1.023 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing 
command(queryId=sircodesalot_20160105131616_2bbad703-76fe-4621-809e-11a16ee72182):
 create table shouldwork location '/user/hive/warehouse/x/y/z/shouldwork' as 
select * from another
INFO  : Query ID = 
sircodesalot_20160105131616_2bbad703-76fe-4621-809e-11a16ee72182
INFO  : Total jobs = 3
INFO  : Launching Job 1 out of 3
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : Job running in-process (local Hadoop)
INFO  : 2016-01-05 13:16:43,251 Stage-1 map = 100%,  reduce = 0%
INFO  : Ended Job = job_local1497036142_0001
INFO  : Starting task [Stage-7:CONDITIONAL] in serial mode
INFO  : Stage-4 is selected by condition resolver.
INFO  : Stage-3 is filtered out by condition resolver.
INFO  : Stage-5 is filtered out by condition resolver.
INFO  : Starting task [Stage-4:MOVE] in serial mode
INFO  : Moving data to: 
file:/user/hive/warehouse/.hive-staging_hive_2016-01-05_13-16-39_477_4080666378917536585-1/-ext-10001
 from 
file:/user/hive/warehouse/.hive-staging_hive_2016-01-05_13-16-39_477_4080666378917536585-1/-ext-10003
INFO  : Starting task [Stage-0:MOVE] in serial mode
INFO  : Moving data to: /user/hive/warehouse/x/y/z/shouldwork from 
file:/user/hive/warehouse/.hive-staging_hive_2016-01-05_13-16-39_477_4080666378917536585-1/-ext-10001
INFO  : Starting task [Stage-8:DDL] in serial mode
INFO  : Starting task [Stage-2:STATS] in serial mode
INFO  : Table default.shouldwork stats: [numFiles=1, numRows=2, totalSize=25, 
rawDataSize=23]
INFO  : MapReduce Jobs Launched: 
INFO  : Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 SUCCESS
INFO  : Total MapReduce CPU Time Spent: 0 msec
INFO  : Completed executing 
command(queryId=sircodesalot_20160105131616_2bbad703-76fe-4621-809e-11a16ee72182);
 Time taken: 20.601 seconds
INFO  : OK
No rows affected (21.644 seconds)


0: jdbc:hive2://localhost:1> select * from shouldwork;
INFO  : Compiling 
command(queryId=sircodesalot_20160105131717_4cd8dc3b-a732-4133-86c2-7261d567b6bf):
 select * from shouldwork
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: 
Schema(fieldSchemas:[FieldSchema(name:shouldwork.id, type:int, comment:null), 
FieldSchema(name:shouldwork.name, type:string, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=sircodesalot_20160105131717_4cd8dc3b-a732-4133-86c2-7261d567b6bf);
 Time taken: 0.215 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing 
command(queryId=sircodesalot_20160105131717_4cd8dc3b-a732-4133-86c2-7261d567b6bf):
 select * from shouldwork
INFO  : Completed executing 
command(queryId=sircodesalot_20160105131717_4cd8dc3b-a732-4133-86c2-7261d567b6bf);
 Time taken: 0.0 seconds
INFO  : OK
+----------------+------------------+--+
| shouldwork.id  | shouldwork.name  |
+----------------+------------------+--+
| 1              | something        |
| 2              | otherthing       |
+----------------+------------------+--+
{code}

Although it makes sense that this property is off by default, we should still 
notify the user that such a property exists to allow automatic nested directory 
creation.
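
For illustration, such a message could be raised roughly like this. This is only 
a minimal sketch, not the attached patch; the method name and exact call site 
are assumptions (the config check itself mirrors the existing 
{{HIVE_INSERT_INTO_MULTILEVEL_DIRS}} check in {{MoveTask}}):

{code}
// Sketch (fragment in the MoveTask context): before the rename in moveFile,
// check whether the target's parent exists and, if not, point the user at the
// relevant property instead of failing with a bare "Unable to rename".
private void checkTargetParent(FileSystem fs, Path targetPath, HiveConf conf)
    throws HiveException {
  try {
    Path parent = targetPath.getParent();
    boolean multiLevelEnabled = HiveConf.getBoolVar(conf,
        HiveConf.ConfVars.HIVE_INSERT_INTO_MULTILEVEL_DIRS);
    if (parent != null && !multiLevelEnabled && !fs.exists(parent)) {
      throw new HiveException("Cannot move results to " + targetPath
          + ": parent directory " + parent + " does not exist. "
          + "Set hive.insert.into.multilevel.dirs=true to create nested "
          + "directories automatically.");
    }
  } catch (IOException e) {
    throw new HiveException(e);
  }
}
{code}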

[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Status: Patch Available  (was: Open)

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: CDH-37508.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Description: 
cannot store data in a directory whose parent doesn't exist, even though the 
target dir does have an existing ancestor on HDFS. This occurs when trying to 
perform {{create table }}.

{code}
0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
'/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
Error message:
2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed with 
exception Unable to rename: 
hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
 to: /user/hive/data/yshi/nonexisting/test3
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
 to: /user/hive/data/yshi/nonexisting/test3
at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
at 
org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
at 
org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
cannot store data in a directory whose parent doesn't exist, even though the 
target dir does have an existing ancestor on HDFS.

{code}
0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
'/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
Error message:
2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed with 
exception Unable to rename: 
hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
 to: /user/hive/data/yshi/nonexisting/test3
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
 to: /user/hive/data/yshi/nonexisting/test3
at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
at 
org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
at 

[jira] [Updated] (HIVE-13231) Show helpful error message on failure to create nested table in nested directory

2016-03-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-13231:
--
Attachment: CDH-37508.01.patch

> Show helpful error message on failure to create nested table in nested 
> directory
> 
>
> Key: HIVE-13231
> URL: https://issues.apache.org/jira/browse/HIVE-13231
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Minor
> Attachments: CDH-37508.01.patch
>
>
> cannot store data in a directory whose parent doesn't exist, even though the 
> target dir does have an existing ancestor on HDFS. This occurs when trying to 
> perform {{create table }}.
> {code}
> 0: jdbc:hive2://10.17.81.192:1/default> create table test3 location 
> '/user/hive/data/yshi/nonexisting/test3' as select * from sample_07;
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask (state=08S01,code=1)
> Error message:
> 2015-10-29 19:04:46,323 ERROR org.apache.hadoop.hive.ql.exec.Task: Failed 
> with exception Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: 
> hdfs://host-10-17-81-192.coe.cloudera.com:8020/user/hive/warehouse/.hive-staging_hive_2015-10-29_19-04-08_375_5385987873542863570-3/-ext-10001
>  to: /user/hive/data/yshi/nonexisting/test3
> at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:101)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:209)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:144)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:68)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:199)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:212)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-25 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15115581#comment-15115581
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

The problem that this patch addresses is that the value of 'java.io.tmpdir' can 
be set externally to a relative path (we're seeing this problem occur in 
Oozie). To address this issue, the above patch uses 'Coercion' to 
validate/modify the value before passing it to the user. That is, rather than 
simply creating a one-off coercion, I thought it would be useful in general to 
have a way to hook into {{SystemVariables.substitute}} to validate or adjust 
the property before returning it to the user.

The template is:

{code}
public abstract class VariableCoercion {
  private final String name;

  public VariableCoercion(String name) {
this.name = name;
  }

  public String getName() { return this.name; }
  public abstract String getCoerced(Configuration configuration, String 
originalValue);
  public abstract String setCoerced(Configuration configuration, String 
originalValue);
}
{code}

where {{getCoerced}} is called on get and {{setCoerced}} is called on set (the 
configuration is passed in case the coerced value is context-sensitive). To add 
other coercions, simply subclass the above and register the instance here:

{code}
public class SystemVariables {
  ...
  // HERE: List of coercions:
  private static final VariableCoercionSet COERCIONS = new VariableCoercionSet()
.add(new JavaIOTmpdirVariableCoercion());
{code}

If a coercion hook exists for a particular name (see 
{{VariableCoercion.getName()}}), it is loaded and the raw value is passed 
through it before being returned to the user:

{code}
  public String getCoerced(Configuration configuration, String variableName, 
String originalValue) {
if (COERCIONS.contains(variableName)) {
  return COERCIONS.get(variableName).getCoerced(configuration, 
originalValue);
} else {
  return originalValue;
}
  }
{code}
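
For reference, a concrete coercion under this template could look roughly like 
the following. This is only a sketch: the registered variable name 
({{system:java.io.tmpdir}}) and the use of {{java.io.File}} for expansion are 
assumptions, not necessarily what the attached patch does:

{code}
import java.io.File;
import org.apache.hadoop.conf.Configuration;

public class JavaIOTmpdirVariableCoercion extends VariableCoercion {
  public JavaIOTmpdirVariableCoercion() {
    // Assumed variable name; the patch may register it differently.
    super("system:java.io.tmpdir");
  }

  @Override
  public String getCoerced(Configuration configuration, String originalValue) {
    // Expand relative values (e.g. "./tmp") to an absolute local path so that
    // downstream Path construction doesn't fail with
    // "Relative path in absolute URI".
    if (originalValue == null || originalValue.isEmpty()) {
      return originalValue;
    }
    File file = new File(originalValue);
    return file.isAbsolute() ? originalValue : file.getAbsolutePath();
  }

  @Override
  public String setCoerced(Configuration configuration, String originalValue) {
    // Apply the same expansion when the value is written.
    return getCoerced(configuration, originalValue);
  }
}
{code}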

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, HIVE-12891.03.patch, 
> HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-25 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12891:
--
Attachment: HIVE-12891.04.patch

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, HIVE-12891.03.patch, 
> HIVE-12891.04.patch, HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-25 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15115621#comment-15115621
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

Updated, thanks

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, HIVE-12891.03.patch, 
> HIVE-12891.04.patch, HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-22 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12891:
--
Attachment: HIVE-12891.03.patch

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, HIVE-12891.03.patch, 
> HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-22 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15113085#comment-15113085
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

Adding additional patch for validation. Will update with patch notes on 
successful test run.

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, 
> HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-22 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12891:
--
Attachment: HIVE-12981.01.22.2016.02.patch

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch, 
> HIVE-12981.01.22.2016.02.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-20 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15108607#comment-15108607
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

I wonder if it would make sense to catch the exception and wrap it in something 
that tells the user that relative paths for {{java.io.tmpdir}} are not allowed 
(if that is indeed the case). Let me know; it should be an easy change.
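
For example, the wrapping could look roughly like this (a sketch only; the 
surrounding variables, the mkdirs call, and the exception type are placeholders, 
not the actual change):

{code}
// Sketch: somewhere near the Path construction in SessionState.createSessionDirs.
try {
  Path sessionPath = new Path(rootPath, sessionId);  // rootPath/sessionId are placeholders
  fs.mkdirs(sessionPath);
} catch (IllegalArgumentException e) {
  // Path.initialize rethrows URISyntaxException as IllegalArgumentException.
  throw new RuntimeException("Could not create session directory under '" + rootPath
      + "': relative values for java.io.tmpdir are not supported; "
      + "please configure an absolute path.", e);
}
{code}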

As for the config documentation? Do you mean in 
[HiveConf|https://github.com/apache/hive/blob/b9d65f159d39614f510c64e58d7b09b4cf38f96f/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L253]
 or somewhere else?

Thanks

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12891:
--
Attachment: HIVE-12891.01.19.2016.01.patch

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12891) Hive fails when java.io.tmpdir is set to a relative location

2016-01-19 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15107807#comment-15107807
 ] 

Reuben Kuhnert commented on HIVE-12891:
---

Patch Fix: Ensure that paths are expanded to absolute locations.
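
In other words (illustrative snippet only, not the patch itself; the variable 
names are placeholders):

{code}
// A relative java.io.tmpdir such as "./tmp" breaks Hadoop Path construction:
//   new Path("file:" + tmpDir + "/...") -> IllegalArgumentException
//   (java.net.URISyntaxException: Relative path in absolute URI)
// Expanding it to an absolute location first avoids the failure:
String tmpDir = System.getProperty("java.io.tmpdir");              // may be "./tmp"
String absoluteTmpDir = new java.io.File(tmpDir).getAbsolutePath(); // e.g. "/home/user/tmp"
{code}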

> Hive fails when java.io.tmpdir is set to a relative location
> 
>
> Key: HIVE-12891
> URL: https://issues.apache.org/jira/browse/HIVE-12891
> Project: Hive
>  Issue Type: Bug
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
> Attachments: HIVE-12891.01.19.2016.01.patch
>
>
> The function {{SessionState.createSessionDirs}} fails when trying to create 
> directories where {{java.io.tmpdir}} is set to a relative location.
> {code}
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: 
> IllegalArgumentException java.net.URISyntaxException: Relative path in 
> absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1
> ...
> Minor variations:
> \[uber-SubtaskRunner] ERROR o.a.h.hive..ql.Driver - FAILED: SemanticException 
> Exception while processing Exception while writing out the local file 
> o.a.h.hive.ql/parse.SemanticException: Exception while processing exception 
> while writing out local file 
> ... 
> caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
> Relative path in absolute URI: 
> file:./tmp///hive_2015_12_11_09-12-25_352_4325234652356-1 
> at o.a.h.fs.Path.initialize (206) 
> at o.a.h.fs.Path.<init>(197)... 
> at o.a.h.hive.ql.context.getScratchDir(267) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-25 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15027105#comment-15027105
 ] 

Reuben Kuhnert commented on HIVE-12469:
---

LGTM (Non-committer) +1

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
> Attachments: HIVE-12469.2.patch, HIVE-12469.patch
>
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, 
> Shims Common, Shims Scheduler)
> {code}
> [INFO] 
> 
> [INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
> [INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
> {code}
> {code}
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-21 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15020695#comment-15020695
 ] 

Reuben Kuhnert commented on HIVE-12469:
---

I hear you. I guess the issue is not so much where the jar comes from, but 
rather the jar itself. If we are still using version {{3.2.1}}, even if it comes 
from the end user's machine, it will still contain the exploit. Is there a 
reason we can't bump the version to {{3.2.2}}? Everything else looks good to me.
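
To make the concern concrete: the reason 3.2.1 is considered dangerous is that {{InvokerTransformer}} reflectively invokes an arbitrary, attacker-chosen method on whatever object it is handed, which is the primitive the published deserialization gadget chains build on (as far as I understand, 3.2.2 blocks deserialization of these functors rather than removing the class). A deliberately harmless sketch of the primitive itself:

{code}
import org.apache.commons.collections.Transformer;
import org.apache.commons.collections.functors.InvokerTransformer;

public class InvokerTransformerSketch {
  public static void main(String[] args) {
    // Method name, parameter types and arguments are all plain data, so anyone
    // who controls a serialized object graph effectively controls the call.
    Transformer call = InvokerTransformer.getInstance(
        "toUpperCase", new Class[0], new Object[0]);

    // Here it only upper-cases a string; nothing in the class restricts the
    // target method, which is why the known gadgets chain it into Runtime.exec.
    System.out.println(call.transform("hive"));  // prints HIVE
  }
}
{code}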

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
> Attachments: HIVE-12469.patch
>
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, 
> Shims Common, Shims Scheduler)
> {code}
> [INFO] 
> 
> [INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
> [INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
> {code}
> {code}
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-20 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15018680#comment-15018680
 ] 

Reuben Kuhnert commented on HIVE-12469:
---

So that looks good to me for the most part; I guess my only question is this:

{code}
+    <commons-collections.version>3.2.1</commons-collections.version>
     1.9
     1.1
     3.0.1
@@ -303,7 +304,13 @@
       <artifactId>commons-codec</artifactId>
       <version>${commons-codec.version}</version>
     </dependency>
-
+    <dependency>
+      <groupId>commons-collections</groupId>
+      <artifactId>commons-collections</artifactId>
+      <version>${commons-collections.version}</version>
+      <scope>provided</scope>
+    </dependency>
{code}

I would assume that this would still put {{commons-collections-3.2.1}} on the 
runtime classpath (even if we do expect it to be provided by the end user), 
which might re-introduce the issue. Feel free to correct me if I'm wrong, though.
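
One quick way to sanity-check what actually ends up on the runtime classpath, regardless of what scope the pom declares, is to ask the JVM where it loaded a commons-collections class from. Rough sketch (assumes commons-collections is on the classpath of whatever process runs it):

{code}
public class WhichCommonsCollections {
  public static void main(String[] args) throws Exception {
    Class<?> clazz = Class.forName(
        "org.apache.commons.collections.functors.InvokerTransformer");
    // Prints the jar the class was resolved from, e.g. a path ending in
    // commons-collections-3.2.1.jar versus commons-collections-3.2.2.jar.
    System.out.println(
        clazz.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}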

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
> Attachments: HIVE-12469.patch
>
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, 
> Shims Common, Shims Scheduler)
> {code}
> [INFO] 
> 
> [INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
> [INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
> {code}
> {code}
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-19 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15014124#comment-15014124
 ] 

Reuben Kuhnert commented on HIVE-12469:
---

Looks like there is only one direct dependency, but numerous downstream 
references (a number of them coming in through {{hadoop-common}}). Any 
suggestions on how we want to fix this?

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, 
> Shims Common, Shims Scheduler)
> {code}
> [INFO] 
> 
> [INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
> [INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
> {code}
> {code}
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12469:
--
Description: Currently the commons-collections (3.2.1) library allows for 
invocation of arbitrary code through {{InvokerTransformer}}; we need to bump the 
version of commons-collections from 3.2.1 to 3.2.2 to resolve this issue.

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12469:
--
Description: 
Currently the commons-collections (3.2.1) library allows for invocation of 
arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
commons-collections from 3.2.1 to 3.2.2 to resolve this issue.

Results of {{mvn dependency:tree}}:

{code}
[INFO] 
[INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
[INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
[INFO] +- com.google.guava:guava:jar:14.0.1:compile
[INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
{code}

{code}
[INFO] 
[INFO] Building Hive Packaging 2.0.0-SNAPSHOT
[INFO] 
[INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
[INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
[INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
{code}

{code}
[INFO] 
[INFO] Building Hive Common 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
[INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
[INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
{code}

{{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, Shims 
Common, Shims Scheduler)

{code}
[INFO] 
[INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
[INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
{code}

{code}
[INFO] 
[INFO] 
[INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
[INFO] 
[INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
[INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
{code}

  was:Currently the commons-collections (3.2.1) library allows for invocation 
of arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
commons-collections from 3.2.1 to 3.2.2 to resolve this issue.


> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- 

[jira] [Updated] (HIVE-12469) Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address vulnerability

2015-11-19 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-12469:
--
External issue URL:   (was: 
https://issues.apache.org/jira/browse/COLLECTIONS-580)

> Bump Commons-Collections dependency from 3.2.1 to 3.2.2. to address 
> vulnerability
> -
>
> Key: HIVE-12469
> URL: https://issues.apache.org/jira/browse/HIVE-12469
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Reuben Kuhnert
>Assignee: Reuben Kuhnert
>Priority: Blocker
>
> Currently the commons-collections (3.2.1) library allows for invocation of 
> arbitrary code through {{InvokerTransformer}}; we need to bump the version of 
> commons-collections from 3.2.1 to 3.2.2 to resolve this issue.
> Results of {{mvn dependency:tree}}:
> {code}
> [INFO] 
> 
> [INFO] Building Hive HPL/SQL 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-hplsql ---
> [INFO] org.apache.hive:hive-hplsql:jar:2.0.0-SNAPSHOT
> [INFO] +- com.google.guava:guava:jar:14.0.1:compile
> [INFO] +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Packaging 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.hive:hive-hbase-handler:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.apache.hbase:hbase-server:jar:1.1.1:compile
> [INFO] |  |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {code}
> [INFO] 
> 
> [INFO] Building Hive Common 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-common ---
> [INFO] +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}
> {{Hadoop-Common}} dependency also found in: LLAP, Serde, Storage,  Shims, 
> Shims Common, Shims Scheduler)
> {code}
> [INFO] 
> 
> [INFO] Building Hive Ant Utilities 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ hive-ant ---
> [INFO] |  +- commons-collections:commons-collections:jar:3.1:compile
> {code}
> {code}
> [INFO]
>  
> [INFO] 
> 
> [INFO] Building Hive Accumulo Handler 2.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] +- org.apache.accumulo:accumulo-core:jar:1.6.0:compile
> [INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-18 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14548127#comment-14548127
 ] 

Reuben Kuhnert commented on HIVE-10190:
---

If everything looks good here, do you mind committing the final patch (#12)?

Let me know if anything needs fixing.
Thanks

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch, HIVE-10190.11.patch, HIVE-10190.12.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).
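
Presumably the point of the patch series above is to avoid rendering the whole AST to a String just to look for two token types. A sketch of the kind of tree walk that sidesteps the ~700kb string, written against the usual {{ASTNode}} API (illustrative only, not the actual patch):

{code}
import org.apache.hadoop.hive.ql.parse.ASTNode;

public final class AstTokenCheckSketch {
  // Returns true as soon as any node in the subtree has one of the token types,
  // so the traversal can stop early instead of materializing the full tree text.
  public static boolean containsToken(ASTNode node, int... tokenTypes) {
    if (node == null) {
      return false;
    }
    for (int type : tokenTypes) {
      if (node.getType() == type) {
        return true;
      }
    }
    for (int i = 0; i < node.getChildCount(); i++) {
      if (containsToken((ASTNode) node.getChild(i), tokenTypes)) {
        return true;
      }
    }
    return false;
  }
}
{code}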



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-10656) Beeline set var=value not carrying over to queries

2015-05-18 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert reassigned HIVE-10656:
-

Assignee: Reuben Kuhnert

 Beeline set var=value not carrying over to queries
 --

 Key: HIVE-10656
 URL: https://issues.apache.org/jira/browse/HIVE-10656
 Project: Hive
  Issue Type: Bug
Reporter: Reuben Kuhnert
Assignee: Reuben Kuhnert
Priority: Minor

 After performing a {{set name=value}} I would expect that the variable name 
 would carry over to all locations within the session. It appears to work when 
 querying the value via {{set;}}, but not when trying to do actual sql 
 statements.
 Example:
 {code}
 0: jdbc:hive2://localhost:10000> set foo;
 +----------+--+
 |   set    |
 +----------+--+
 | foo=bar  |
 +----------+--+
 1 row selected (0.932 seconds)
 0: jdbc:hive2://localhost:10000> select * from ${foo};
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10001]: Line 1:14 Table not found 'bar' (state=42S02,code=10001)
 0: jdbc:hive2://localhost:10000> show tables;
 +------------+--+
 |  tab_name  |
 +------------+--+
 | my         |
 | purchases  |
 +------------+--+
 2 rows selected (0.437 seconds)
 0: jdbc:hive2://localhost:10000> set foo=my;
 No rows affected (0.017 seconds)
 0: jdbc:hive2://localhost:10000> set foo;
 +---------+--+
 |   set   |
 +---------+--+
 | foo=my  |
 +---------+--+
 1 row selected (0.02 seconds)
 0: jdbc:hive2://localhost:10000> select * from ${foo};
 select * from ${foo};
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10001]: Line 1:14 Table not found 'bar' (state=42S02,code=10001)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.12.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch, HIVE-10190.11.patch, HIVE-10190.12.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-10 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: (was: HIVE-10190.12.patch)

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch, HIVE-10190.11.patch, HIVE-10190.12.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-08 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.12.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch, HIVE-10190.11.patch, HIVE-10190.12.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10656) Beeline set var=value not carrying over to queries

2015-05-08 Thread Reuben Kuhnert (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14535085#comment-14535085
 ] 

Reuben Kuhnert commented on HIVE-10656:
---

This appears to be a problem with variable ambiguity:

{code}
set key=value
{code} 

expands to: 
{code}
set hiveconf:key=value 
{code}

however,

{code}
select * from ${key}
{code}

expands to:

{code}
select * from ${hiveconf:key}
{code}

The question is basically: should we allow users to enter ambiguous properties, 
and if so, should {{key}} default to {{hiveconf:key}} or {{hivevar:key}}?
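
Whichever default we pick, the behaviour boils down to a lookup order. A purely hypothetical helper (not Hive's actual substitution code) showing what "try {{hivevar:}} first, then fall back to {{hiveconf:}}" would mean:

{code}
import java.util.Map;

public final class BareVariableLookupSketch {
  // Hypothetical resolution order for a bare ${key}: hivevar wins if present,
  // otherwise fall back to hiveconf; returns null if neither namespace has it.
  public static String resolve(String key,
                               Map<String, String> hivevar,
                               Map<String, String> hiveconf) {
    if (hivevar.containsKey(key)) {
      return hivevar.get(key);
    }
    return hiveconf.get(key);
  }
}
{code}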

 Beeline set var=value not carrying over to queries
 --

 Key: HIVE-10656
 URL: https://issues.apache.org/jira/browse/HIVE-10656
 Project: Hive
  Issue Type: Bug
Reporter: Reuben Kuhnert
Priority: Minor

 After performing a {{set name=value}} I would expect that the variable name 
 would carry over to all locations within the session. It appears to work when 
 querying the value via {{set;}}, but not when trying to do actual sql 
 statements.
 Example:
 {code}
 0: jdbc:hive2://localhost:10000> set foo;
 +----------+--+
 |   set    |
 +----------+--+
 | foo=bar  |
 +----------+--+
 1 row selected (0.932 seconds)
 0: jdbc:hive2://localhost:10000> select * from ${foo};
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10001]: Line 1:14 Table not found 'bar' (state=42S02,code=10001)
 0: jdbc:hive2://localhost:10000> show tables;
 +------------+--+
 |  tab_name  |
 +------------+--+
 | my         |
 | purchases  |
 +------------+--+
 2 rows selected (0.437 seconds)
 0: jdbc:hive2://localhost:10000> set foo=my;
 No rows affected (0.017 seconds)
 0: jdbc:hive2://localhost:10000> set foo;
 +---------+--+
 |   set   |
 +---------+--+
 | foo=my  |
 +---------+--+
 1 row selected (0.02 seconds)
 0: jdbc:hive2://localhost:10000> select * from ${foo};
 select * from ${foo};
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10001]: Line 1:14 Table not found 'bar' (state=42S02,code=10001)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-07 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.11.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch, HIVE-10190.11.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.09.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.10.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10597) Relative path doesn't work with CREATE TABLE LOCATION 'relative/path'

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10597:
--
Attachment: HIVE-10597.02.patch

 Relative path doesn't work with CREATE TABLE LOCATION 'relative/path'
 -

 Key: HIVE-10597
 URL: https://issues.apache.org/jira/browse/HIVE-10597
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Reuben Kuhnert
Assignee: Reuben Kuhnert
Priority: Minor
 Attachments: HIVE-10597.01.patch, HIVE-10597.02.patch


 {code}
 0: jdbc:hive2://a2110.halxg.cloudera.com:10000> CREATE EXTERNAL TABLE IF NOT 
 EXISTS mydb.employees3 like mydb.employees LOCATION 'data/stock';
 Error: Error while processing statement: FAILED: Execution Error, return code 
 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
 MetaException(message:java.lang.NullPointerException) (state=08S01,code=1)
 0: jdbc:hive2://a2110.halxg.cloudera.com:10000> CREATE EXTERNAL TABLE IF NOT 
 EXISTS mydb.employees3 like mydb.employees LOCATION '/user/hive/data/stock';
 No rows affected (0.369 seconds)
 {code}
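
The NullPointerException suggests the relative LOCATION is passed further down without being qualified against the default filesystem first. Roughly, the kind of normalization involved looks like the following, using the Hadoop {{Path}}/{{FileSystem}} API (illustrative only, not the committed patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QualifyLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // 'data/stock' is the relative LOCATION from the report above.
    Path location = new Path("data/stock");

    // Qualify against the default FS and working directory so downstream code
    // always sees a fully qualified path such as hdfs://.../user/hive/data/stock.
    Path qualified = fs.makeQualified(location);
    System.out.println(qualified);
  }
}
{code}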



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.10.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: (was: HIVE-10190.10.patch)

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.09.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: (was: HIVE-10190.10.patch)

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: (was: HIVE-10190.09.patch)

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-05 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.10.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch, HIVE-10190.08.patch, HIVE-10190.09.patch, 
 HIVE-10190.10.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-04 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.06.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-05-04 Thread Reuben Kuhnert (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reuben Kuhnert updated HIVE-10190:
--
Attachment: HIVE-10190.07.patch

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Reuben Kuhnert
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch, HIVE-10190.04.patch, 
 HIVE-10190.05.patch, HIVE-10190.05.patch, HIVE-10190.06.patch, 
 HIVE-10190.07.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
 String astTree = ast.toStringTree();
 // if any of following tokens are present in AST, bail out
 String[] tokens = { TOK_CHARSETLITERAL, TOK_TABLESPLITSAMPLE };
 for (String token : tokens) {
   if (astTree.contains(token)) {
 return false;
   }
 }
 return true;
   }
 {code}
 This is an issue for a SQL query which is bigger in AST form than in text 
 (~700kb).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

