[jira] [Updated] (HIVE-17984) getMaxLength is not returning the correct lengths for Char/Varchar types while reading the ORC file from WebHDFS file system

2017-11-06 Thread Syam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Syam updated HIVE-17984:

Summary: getMaxLength is not returning the correct lengths for Char/Varchar 
types while reading the ORC file from WebHDFS file system  (was: getMaxLength 
is not returning the previously set length in ORC file)

> getMaxLength is not returning the correct lengths for Char/Varchar types 
> while reading the ORC file from WebHDFS file system
> 
>
> Key: HIVE-17984
> URL: https://issues.apache.org/jira/browse/HIVE-17984
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
> Environment: tested it against hive-exec 2.1
>Reporter: Syam
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> getMaxLength is not returning the correct length for char/varchar datatypes.
> I see that getMaxLength is returning 255 for CHAR type and 65535 for VARCHAR 
> type.
> When I checked the same file using orcfiledump utility, I could see the 
> correct lengths.
> Here is a snippet of the code:
> {noformat}
> Reader _reader = OrcFile.createReader(new Path(_fileName),
>     OrcFile.readerOptions(conf).filesystem(fs));
> TypeDescription metarec = _reader.getSchema();
> List<TypeDescription> cols = metarec.getChildren();
> List<String> colNames = metarec.getFieldNames();
> for (int i = 0; i < cols.size(); i++) {
>   TypeDescription fieldSchema = cols.get(i);
>   switch (fieldSchema.getCategory()) {
>     case CHAR:
>       header += "char(" + fieldSchema.getMaxLength() + ")";
>       break;
>     // ... other cases elided ...
>   }
> }
> {noformat}
> Please share any pointers.
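As a sanity check (an editorial sketch, not from the original report), the expected behavior can be reproduced with a schema built locally via ORC's {{TypeDescription.fromString}}; there, {{getMaxLength}} returns the declared lengths rather than the 255/65535 defaults, which suggests the problem lies in how the schema is read back over WebHDFS:

```java
import java.util.List;
import org.apache.orc.TypeDescription;

public class MaxLengthCheck {
    public static void main(String[] args) {
        // Build the same shape of schema the reader is expected to
        // return from the file's footer.
        TypeDescription schema =
            TypeDescription.fromString("struct<c:char(10),v:varchar(20)>");
        List<TypeDescription> cols = schema.getChildren();
        // Declared lengths, not the type defaults (255 for CHAR,
        // 65535 for VARCHAR).
        System.out.println(cols.get(0).getMaxLength()); // 10
        System.out.println(cols.get(1).getMaxLength()); // 20
    }
}
```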



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17984) getMaxLength is not returning the previously set length in ORC file

2017-11-06 Thread Syam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240406#comment-16240406
 ] 

Syam commented on HIVE-17984:
-

The issue was not seen while reading the ORC file from the local file 
system.

It appears the issue is specific to the WebHDFS file system.

Is this a known issue?

> getMaxLength is not returning the previously set length in ORC file
> ---
>
> Key: HIVE-17984
> URL: https://issues.apache.org/jira/browse/HIVE-17984
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
> Environment: tested it against hive-exec 2.1
>Reporter: Syam
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> getMaxLength is not returning the correct length for char/varchar datatypes.
> I see that getMaxLength is returning 255 for CHAR type and 65535 for VARCHAR 
> type.
> When I checked the same file using orcfiledump utility, I could see the 
> correct lengths.
> Here is a snippet of the code:
> {noformat}
> Reader _reader = OrcFile.createReader(new Path(_fileName),
>     OrcFile.readerOptions(conf).filesystem(fs));
> TypeDescription metarec = _reader.getSchema();
> List<TypeDescription> cols = metarec.getChildren();
> List<String> colNames = metarec.getFieldNames();
> for (int i = 0; i < cols.size(); i++) {
>   TypeDescription fieldSchema = cols.get(i);
>   switch (fieldSchema.getCategory()) {
>     case CHAR:
>       header += "char(" + fieldSchema.getMaxLength() + ")";
>       break;
>     // ... other cases elided ...
>   }
> }
> {noformat}
> Please share any pointers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17976) HoS: don't set output collector if there's no data to process

2017-11-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240401#comment-16240401
 ] 

Hive QA commented on HIVE-17976:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12896163/HIVE-17976.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11357 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=62)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_in] 
(batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=120)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testAmPoolInteractions 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanUserMapping 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testAsyncSessionInitFailures
 (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testClusterFractions 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testDestroyAndReturn 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testQueueing 
(batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReopen (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuse (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuseWithDifferentPool
 (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuseWithQueueing 
(batchId=281)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7659/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7659/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7659/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12896163 - PreCommit-HIVE-Build

> HoS: don't set output collector if there's no data to process
> -
>
> Key: HIVE-17976
> URL: https://issues.apache.org/jira/browse/HIVE-17976
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HIVE-17976.1.patch
>
>
> MR doesn't set an output collector if no row is processed, i.e. 
> {{ExecMapper::map}} is never called. Let's investigate whether Spark should 
> do the same.
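For context, the pattern under discussion is lazy initialization of the output collector. A hypothetical mimic (editorial sketch, not Hive code — the class and method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mimic of the lazy-collector pattern discussed above:
// no output collector is created unless map() sees at least one row.
class LazyMapper {
    private List<String> collector; // stays null if no rows are processed

    void map(String row) {
        if (collector == null) {
            collector = new ArrayList<>(); // set up output only on first row
        }
        collector.add(row);
    }

    boolean hasOutput() {
        return collector != null;
    }
}
```

Under this sketch, a mapper that never receives input leaves no empty output behind — the MR behavior the patch investigates replicating in Spark.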



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17934) Merging Statistics are promoted to COMPLETE (most of the time)

2017-11-06 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-17934:

Attachment: HIVE-17934.03.patch

#3)

* update most of the q.out files
* be more realistic about the new state

> Merging Statistics are promoted to COMPLETE (most of the time)
> --
>
> Key: HIVE-17934
> URL: https://issues.apache.org/jira/browse/HIVE-17934
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-17934.01.patch, HIVE-17934.02.patch, 
> HIVE-17934.03.patch
>
>
> When multiple partition statistics are merged, the STATS state is computed 
> based on the datasize and rowcount;
> the merge may hide non-existent stats when there are other partitions 
> or operators which do contribute to the datasize and rowcount.
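A hypothetical illustration (editorial sketch, not Hive's actual merge code) of how deriving the state purely from summed counters promotes merged stats to COMPLETE — a partition with no stats at all contributes 0, which is indistinguishable from a known-empty partition:

```java
// Sketch: state derived only from merged totals, so a stats-less
// partition (contributing 0/0) is silently absorbed by any partition
// that does have stats, and the result claims COMPLETE.
class MergedStats {
    long rowCount;
    long dataSize;

    void merge(long rows, long size) {
        rowCount += rows;
        dataSize += size;
    }

    // The flaw discussed above: no per-partition "stats present" flag.
    String state() {
        return (rowCount > 0 && dataSize > 0) ? "COMPLETE" : "NONE";
    }
}
```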



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16827) Merge stats task and column stats task into a single task

2017-11-06 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-16827:

Attachment: HIVE-16827.05wip11.patch

> Merge stats task and column stats task into a single task
> -
>
> Key: HIVE-16827
> URL: https://issues.apache.org/jira/browse/HIVE-16827
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Zoltan Haindrich
> Attachments: HIVE-16827.01.patch, HIVE-16827.02.patch, 
> HIVE-16827.03.patch, HIVE-16827.04wip03.patch, HIVE-16827.04wip04.patch, 
> HIVE-16827.04wip05.patch, HIVE-16827.04wip06.patch, HIVE-16827.04wip09.patch, 
> HIVE-16827.04wip10.patch, HIVE-16827.05wip01.patch, HIVE-16827.05wip02.patch, 
> HIVE-16827.05wip03.patch, HIVE-16827.05wip04.patch, HIVE-16827.05wip05.patch, 
> HIVE-16827.05wip08.patch, HIVE-16827.05wip10.patch, HIVE-16827.05wip10.patch, 
> HIVE-16827.05wip11.patch, HIVE-16827.4.patch
>
>
> Within the task, we can specify whether to compute basic stats only or column 
> stats only or both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16827) Merge stats task and column stats task into a single task

2017-11-06 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-16827:

Attachment: HIVE-16827.05wip10.patch

> Merge stats task and column stats task into a single task
> -
>
> Key: HIVE-16827
> URL: https://issues.apache.org/jira/browse/HIVE-16827
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Zoltan Haindrich
> Attachments: HIVE-16827.01.patch, HIVE-16827.02.patch, 
> HIVE-16827.03.patch, HIVE-16827.04wip03.patch, HIVE-16827.04wip04.patch, 
> HIVE-16827.04wip05.patch, HIVE-16827.04wip06.patch, HIVE-16827.04wip09.patch, 
> HIVE-16827.04wip10.patch, HIVE-16827.05wip01.patch, HIVE-16827.05wip02.patch, 
> HIVE-16827.05wip03.patch, HIVE-16827.05wip04.patch, HIVE-16827.05wip05.patch, 
> HIVE-16827.05wip08.patch, HIVE-16827.05wip10.patch, HIVE-16827.05wip10.patch, 
> HIVE-16827.4.patch
>
>
> Within the task, we can specify whether to compute basic stats only or column 
> stats only or both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17954) Implement pool, user, group and trigger to pool management API's.

2017-11-06 Thread Harish Jaiprakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Jaiprakash updated HIVE-17954:
-
Attachment: HIVE-17954.01.patch

Draft changes; some APIs are not yet implemented in ObjectStore. Submitting 
this to try it out, since I'm getting an error in the shade plugin.

> Implement pool, user, group and trigger to pool management API's.
> -
>
> Key: HIVE-17954
> URL: https://issues.apache.org/jira/browse/HIVE-17954
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Attachments: HIVE-17954.01.patch
>
>
> Implement the following commands:
> -- Pool management.
> CREATE POOL `resource_plan`.`pool_path` WITH
>   ALLOC_FRACTION `fraction`
>   QUERY_PARALLELISM `parallelism`
>   SCHEDULING_POLICY `policy`;
> ALTER POOL `resource_plan`.`pool_path` SET
>   PATH = `new_path`,
>   ALLOC_FRACTION = `fraction`,
>   QUERY_PARALLELISM = `parallelism`,
>   SCHEDULING_POLICY = `policy`;
> DROP POOL `resource_plan`.`pool_path`;
> -- Trigger to pool mappings.
> ALTER RESOURCE PLAN `resource_plan`
>   ADD TRIGGER `trigger_name` TO `pool_path`;
> ALTER RESOURCE PLAN `resource_plan`
>   DROP TRIGGER `trigger_name` TO `pool_path`;
> -- User/Group to pool mappings.
> CREATE USER|GROUP MAPPING `resource_plan`.`group_or_user_name`
>   TO `pool_path` WITH ORDERING `order_no`;
> DROP USER|GROUP MAPPING `resource_plan`.`group_or_user_name`;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17954) Implement pool, user, group and trigger to pool management API's.

2017-11-06 Thread Harish Jaiprakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Jaiprakash updated HIVE-17954:
-
Description: 
Implement the following commands:

-- Pool management.
CREATE POOL `resource_plan`.`pool_path` WITH
  ALLOC_FRACTION `fraction`
  QUERY_PARALLELISM `parallelism`
  SCHEDULING_POLICY `policy`;

ALTER POOL `resource_plan`.`pool_path` SET
  PATH = `new_path`,
  ALLOC_FRACTION = `fraction`,
  QUERY_PARALLELISM = `parallelism`,
  SCHEDULING_POLICY = `policy`;

DROP POOL `resource_plan`.`pool_path`;

-- Trigger to pool mappings.
ALTER RESOURCE PLAN `resource_plan`
  ADD TRIGGER `trigger_name` TO `pool_path`;

ALTER RESOURCE PLAN `resource_plan`
  DROP TRIGGER `trigger_name` TO `pool_path`;

-- User/Group to pool mappings.
CREATE USER|GROUP MAPPING `resource_plan`.`group_or_user_name`
  TO `pool_path` WITH ORDERING `order_no`;

DROP USER|GROUP MAPPING `resource_plan`.`group_or_user_name`;


  was:
Implement pool management commands:

CREATE POOL `resource_plan`.`pool_path` WITH
  ALLOC_FRACTION `fraction`
  QUERY_PARALLELISM `parallelism`
  SCHEDULING_POLICY `policy`;

ALTER POOL `resource_plan`.`pool_path` SET
  PATH = `new_path`,
  ALLOC_FRACTION = `fraction`,
  QUERY_PARALLELISM = `parallelism`,
  SCHEDULING_POLICY = `policy`;

DROP POOL `resource_plan`.`pool_path`;


> Implement pool, user, group and trigger to pool management API's.
> -
>
> Key: HIVE-17954
> URL: https://issues.apache.org/jira/browse/HIVE-17954
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
>
> Implement the following commands:
> -- Pool management.
> CREATE POOL `resource_plan`.`pool_path` WITH
>   ALLOC_FRACTION `fraction`
>   QUERY_PARALLELISM `parallelism`
>   SCHEDULING_POLICY `policy`;
> ALTER POOL `resource_plan`.`pool_path` SET
>   PATH = `new_path`,
>   ALLOC_FRACTION = `fraction`,
>   QUERY_PARALLELISM = `parallelism`,
>   SCHEDULING_POLICY = `policy`;
> DROP POOL `resource_plan`.`pool_path`;
> -- Trigger to pool mappings.
> ALTER RESOURCE PLAN `resource_plan`
>   ADD TRIGGER `trigger_name` TO `pool_path`;
> ALTER RESOURCE PLAN `resource_plan`
>   DROP TRIGGER `trigger_name` TO `pool_path`;
> -- User/Group to pool mappings.
> CREATE USER|GROUP MAPPING `resource_plan`.`group_or_user_name`
>   TO `pool_path` WITH ORDERING `order_no`;
> DROP USER|GROUP MAPPING `resource_plan`.`group_or_user_name`;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17954) Implement pool, user, group and trigger to pool management API's.

2017-11-06 Thread Harish Jaiprakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Jaiprakash updated HIVE-17954:
-
Summary: Implement pool, user, group and trigger to pool management API's.  
(was: Implement create, alter and drop pool API's.)

> Implement pool, user, group and trigger to pool management API's.
> -
>
> Key: HIVE-17954
> URL: https://issues.apache.org/jira/browse/HIVE-17954
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
>
> Implement pool management commands:
> CREATE POOL `resource_plan`.`pool_path` WITH
>   ALLOC_FRACTION `fraction`
>   QUERY_PARALLELISM `parallelism`
>   SCHEDULING_POLICY `policy`;
> ALTER POOL `resource_plan`.`pool_path` SET
>   PATH = `new_path`,
>   ALLOC_FRACTION = `fraction`,
>   QUERY_PARALLELISM = `parallelism`,
>   SCHEDULING_POLICY = `policy`;
> DROP POOL `resource_plan`.`pool_path`;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17969) Metastore to alter table in batches of partitions when renaming table

2017-11-06 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240084#comment-16240084
 ] 

Peter Vary commented on HIVE-17969:
---

+1 pending tests

> Metastore to alter table in batches of partitions when renaming table
> -
>
> Key: HIVE-17969
> URL: https://issues.apache.org/jira/browse/HIVE-17969
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Adam Szita
>Assignee: Adam Szita
> Attachments: HIVE-17969.0.patch, batched.png, 
> hive9447OptimizationOnly.png, original.png
>
>
> I'm currently trying to speed up the {{alter table rename to}} feature of 
> HMS. The recently submitted change (HIVE-9447) already helps a lot, especially 
> on Oracle HMS DBs.
> This time I intend to gain throughput independently of the DB type by enabling 
> HMS to execute this alter table command on batches of partitions (rather than 
> one by one).
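The batching idea can be sketched as follows (editorial illustration only — the class name is invented and this is not the patch's code): instead of one alter-partition round trip per partition, group the partitions into fixed-size batches and issue one call per batch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching idea: group N partitions into
// ceil(N / batchSize) batches so the metastore issues one bulk call
// per batch instead of one call per partition.
class PartitionBatcher {
    static <T> List<List<T>> batches(List<T> items, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            out.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return out;
    }
}
```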



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17976) HoS: don't set output collector if there's no data to process

2017-11-06 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17976:
--
Attachment: HIVE-17976.1.patch

> HoS: don't set output collector if there's no data to process
> -
>
> Key: HIVE-17976
> URL: https://issues.apache.org/jira/browse/HIVE-17976
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HIVE-17976.1.patch
>
>
> MR doesn't set an output collector if no row is processed, i.e. 
> {{ExecMapper::map}} is never called. Let's investigate whether Spark should 
> do the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17976) HoS: don't set output collector if there's no data to process

2017-11-06 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17976:
--
Status: Patch Available  (was: Open)

> HoS: don't set output collector if there's no data to process
> -
>
> Key: HIVE-17976
> URL: https://issues.apache.org/jira/browse/HIVE-17976
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HIVE-17976.1.patch
>
>
> MR doesn't set an output collector if no row is processed, i.e. 
> {{ExecMapper::map}} is never called. Let's investigate whether Spark should 
> do the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17856) MM tables - IOW is not ACID compliant

2017-11-06 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240017#comment-16240017
 ] 

Steve Yeom commented on HIVE-17856:
---

patch01 implements the three items above for both regular and partitioned 
micro-managed (MM) tables.
Also tested with the ORC file format and the default text format as two 
representatives of all the formats.

> MM tables - IOW is not ACID compliant
> -
>
> Key: HIVE-17856
> URL: https://issues.apache.org/jira/browse/HIVE-17856
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>  Labels: mm-gap-1
> Attachments: HIVE-17856.1.patch
>
>
> The following tests were removed from mm_all during "integration"... I should 
> have never allowed such a manner of integration.
> MM logic should have been kept intact until ACID logic could catch up. Alas, 
> here we are.
> {noformat}
> drop table iow0_mm;
> create table iow0_mm(key int) tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow0_mm select key from intermediate;
> insert into table iow0_mm select key + 1 from intermediate;
> select * from iow0_mm order by key;
> insert overwrite table iow0_mm select key + 2 from intermediate;
> select * from iow0_mm order by key;
> drop table iow0_mm;
> drop table iow1_mm; 
> create table iow1_mm(key int) partitioned by (key2 int)  
> tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow1_mm partition (key2)
> select key as k1, key from intermediate union all select key as k1, key from 
> intermediate;
> insert into table iow1_mm partition (key2)
> select key + 1 as k1, key from intermediate union all select key as k1, key 
> from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key from intermediate union all select key + 4 as k1, 
> key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key + 3 from intermediate union all select key + 2 as 
> k1, key + 2 from intermediate;
> select * from iow1_mm order by key, key2;
> drop table iow1_mm;
> {noformat}
> {noformat}
> drop table simple_mm;
> create table simple_mm(key int) stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> insert into table simple_mm select key from intermediate;
> -insert overwrite table simple_mm select key from intermediate;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

