[jira] [Created] (HIVE-21285) hive 3.1.1 on druid create/insert/select bug summary

2019-02-18 Thread xiao123 (JIRA)
xiao123 created HIVE-21285:
--

 Summary: hive 3.1.1 on druid create/insert/select bug summary
 Key: HIVE-21285
 URL: https://issues.apache.org/jira/browse/HIVE-21285
 Project: Hive
  Issue Type: Bug
Reporter: xiao123


Hive on Druid SELECT query throws a NullPointerException.

env:

hive 3.1.1

hadoop 3.0.0

imply 2.8.12

 

Creating the table works fine:
{code:java}
CREATE TABLE asteria.hive_druid
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "MONTH",
  "druid.query.granularity" = "DAY")
AS
SELECT
  cast(day as timestamp with local time zone) as `__time`,
  cast(city as string) as city,
  cast(reply_num as float) as reply_num
FROM druid_demo;{code}
I can query the datasource in the Imply UI, but when I query from the Hive CLI I get a NullPointerException:
{code:java}
hive> select * from asteria.hive_druid limit 10;
2019-02-19T15:46:26,845 DEBUG [HttpClient-Netty-Worker-20] client.NettyHttpClient: [POST http://localhost:8083/druid/v2/] messageReceived: org.apache.hive.druid.org.jboss.netty.handler.codec.http.HttpChunk$1@72450d69
2019-02-19T15:46:26,845 DEBUG [HttpClient-Netty-Worker-20] client.NettyHttpClient: [POST http://localhost:8083/druid/v2/] Got chunk: 0B, last=true
Failed with exception java.io.IOException:java.lang.NullPointerException
2019-02-19T15:46:26,861 ERROR [306cc70b-f347-4343-a3aa-a6a69b99306e main] CliDriver: Failed with exception java.io.IOException:java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:602)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509)
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2691)
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.druid.serde.DruidSelectQueryRecordReader.nextKeyValue(DruidSelectQueryRecordReader.java:62)
    at org.apache.hadoop.hive.druid.serde.DruidSelectQueryRecordReader.next(DruidSelectQueryRecordReader.java:85)
    at org.apache.hadoop.hive.druid.serde.DruidSelectQueryRecordReader.next(DruidSelectQueryRecordReader.java:38)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:569)
    ... 16 more

2019-02-19T15:46:26,863 DEBUG [306cc70b-f347-4343-a3aa-a6a69b99306e main] exec.TableScanOperator: close called for operator TS[0]
2019-02-19T15:46:26,863 INFO [306cc70b-f347-4343-a3aa-a6a69b99306e main] exec.TableScanOperator: Closing operator TS[0]
{code}
INSERT INTO ... SELECT also fails:
{code:java}
hive> insert into asteria.hive_druid SELECT cast(day as timestamp with local time zone) as `__time`, cast(city as string) as city, cast(reply_num as float) as reply_num FROM druid_demo where day="2019-02-01";
FAILED: NullPointerException null
hive> 
{code}
Creating an external Druid table (mapping an existing datasource) also goes wrong:
{code:java}
hive> CREATE TABLE wikipedia 
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Druid data source not specified; use druid.datasource in table properties)
hive> CREATE TABLE asteria.hive_druid 
> STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
> TBLPROPERTIES (
> "druid.datasource" = "wikipedia"
> );
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Datasource name cannot be specified using [druid.datasource] for managed tables using Druid)
hive> 
{code}
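
Note: the second error suggests that mapping an existing Druid datasource requires an external table rather than a managed one. A minimal sketch of what that DDL would likely look like (the Hive table name is hypothetical; the datasource name is taken from the example above):
{code:java}
CREATE EXTERNAL TABLE asteria.wikipedia_druid
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "wikipedia");
{code}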



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Null pointer exception on running compaction against an MM table

2019-02-18 Thread Vaibhav Gumashta
The approach is similar, but it is not identical. Let me go over the query-based compaction code path to see if I spot this bug there.

Thanks,
--Vaibhav

From: Aditya Shah 
Date: Saturday, February 16, 2019 at 3:44 AM
To: Vaibhav Gumashta 
Cc: "dev@hive.apache.org" , Eugene Koifman 
, Gopal Vijayaraghavan 
Subject: Re: Null pointer exception on running compaction against an MM table

Hi,

Thanks for the reply. I have opened a JIRA (HIVE-21280) for this and will upload a patch soon. However, I also have doubts about the new query-based compactor for full CRUD tables that went into master with HIVE-20699. Does major compaction there work using a query-based compactor similar to the one for MM tables? I expect the same problem to exist there as well.

Aditya


On Sat, Feb 16, 2019 at 2:34 AM Vaibhav Gumashta <vgumas...@hortonworks.com> wrote:
Aditya,

Thanks for reporting this. Would you like to create a jira for this 
(https://issues.apache.org/jira/projects/HIVE)? Additionally, if you would like 
to work on a fix, I’m happy to help in reviewing.

--Vaibhav

From: Aditya Shah <adityashah3...@gmail.com>
Date: Friday, February 15, 2019 at 2:05 AM
To: "dev@hive.apache.org" <dev@hive.apache.org>
Cc: Eugene Koifman <ekoif...@hortonworks.com>, Vaibhav Gumashta <vgumas...@hortonworks.com>, Gopal Vijayaraghavan <go...@hortonworks.com>
Subject: Null pointer exception on running compaction against an MM table

Hi,

I was trying to run compaction on an MM table but got a null pointer exception while getting the HDFS session path. The error suggests that session state was not started for these queries. Am I missing something here? I do think session state needs to be started for each of the queries (insert into temp table, etc.) that run for compaction on HMS (I also have doubts about the StatsUpdater thread's queries). Some details are as follows:

Env./Versions: Using Hive-3.1.1 (rel/release-3.1.1)

Steps to reproduce:
1) Using beeline with HS2 and HMS
2) create an MM table
3) Insert a few values in the table
4) alter table mm_table compact 'major' and wait;
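
A minimal HiveQL sketch of the steps above (table and column names are taken from the temporary-table DDL in the stack trace below; the inserted values are made up):

CREATE TABLE acid_mm_orc (a int, b string) STORED AS ORC
  TBLPROPERTIES ('transactional'='true', 'transactional_properties'='insert_only');
INSERT INTO acid_mm_orc VALUES (1, 'one'), (2, 'two');
ALTER TABLE acid_mm_orc COMPACT 'major';
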
Stack trace on HMS:

compactor.Worker: Caught exception while trying to compact id:8,dbname:default,tableName:acid_mm_orc,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestWriteId:0.  Marking failed to avoid repeated failures, java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` string)  ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (  'serialization.format'='1')STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base' TBLPROPERTIES ('transactional'='false')
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:373)
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:241)
    at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run create temporary table default.tmp_compactor_acid_mm_orc_1550222367257(`a` int, `b` string)  ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'WITH SERDEPROPERTIES (  'serialization.format'='1')STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' LOCATION 'hdfs://localhost:9000/user/hive/warehouse/acid_mm_orc/_tmp_2d8a096c-2db5-4ed8-921c-b3f6d31e079e/_base' TBLPROPERTIES ('transactional'='false')
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:525)
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runMmCompaction(CompactorMR.java:365)
    ... 2 more
Caused by: java.lang.NullPointerException: Non-local session path expected to be non-null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:228)
    at org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:815)
    at org.apache.hadoop.hive.ql.Context.<init>(Context.java:309)
    at org.apache.hadoop.hive.ql.Context.<init>(Context.java:295)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:591)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1684)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1807)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1567)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1556)
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.runOnDriver(CompactorMR.java:522)

Serialization of HiveRelNodes

2019-02-18 Thread mark pasterkamp
Dear all,

First of all, I am sorry if I sent this message twice; I received no confirmation of my previous email actually reaching the mailing list.

For my project I want to be able to access the HiveRelNodes for some extended semantic analysis. By extending the HookContext and adding a new hook, I have been able to access them server-side.

To gain access to them from the client side, I was thinking about serializing them and then deserializing them on the client (perhaps by storing the serialized form in a table). Since the RelNode and HiveRelNode classes do not implement the Serializable interface, I thought I could maybe use RelJsonWriter and RelJsonReader instead. However, RelJsonWriter is not able to convert the HiveRelNodes into a JSON format.

Would anyone perhaps know of a different solution to serializing and 
deserializing these HiveRelNodes?


With kind regards,

Mark


[GitHub] sankarh closed pull request #539: HIVE-21281: Repl checkpointing doesn't work while retry bootstrap load with partitions of external tables.

2019-02-18 Thread GitBox
sankarh closed pull request #539: HIVE-21281: Repl checkpointing doesn't work 
while retry bootstrap load with partitions of external tables.
URL: https://github.com/apache/hive/pull/539
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Re: Review Request 69918: HIVE-21001 Update to Calcite 1.18

2019-02-18 Thread Zoltan Haindrich


> On Feb. 7, 2019, 10:16 p.m., Ashutosh Chauhan wrote:
> > ql/src/test/results/clientpositive/llap/subquery_multi.q.out
> > Lines 2312-2313 (patched)
> > 
> >
> > Worse plan than earlier.

It seems that the more accurate equals/hashCode implementations caused this change; before CALCITE-2632, RexCorrelVariables were not properly compared, and it seems that this has helped/interfered with HiveRelDecorrelator's operations.

https://github.com/apache/calcite/blob/ef9f926061de21ad713a76ec3ec8110e5cbd92bf/core/src/main/java/org/apache/calcite/rex/RexCorrelVariable.java#L59


> On Feb. 7, 2019, 10:16 p.m., Ashutosh Chauhan wrote:
> > ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query6.q.out
> > Lines 68-70 (original), 68-70 (patched)
> > 
> >
> > Is new join order better?

The join order is essentially the same:

* one of the higher-level inner joins has its arguments swapped in the output.
* the new plan has one projection happening earlier than in the old one.
* the new plan has one new projection.


- Zoltan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69918/#review212637
---


On Feb. 7, 2019, 8:08 p.m., Zoltan Haindrich wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/69918/
> ---
> 
> (Updated Feb. 7, 2019, 8:08 p.m.)
> 
> 
> Review request for hive, Ashutosh Chauhan and Jesús Camacho Rodríguez.
> 
> 
> Bugs: HIVE-21001
> https://issues.apache.org/jira/browse/HIVE-21001
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> patch#1 here is #23 on jira
> 
> 
> Diffs
> -
> 
>   
> accumulo-handler/src/test/results/positive/accumulo_predicate_pushdown.q.out 
> 8a1e0609f9f48434d8147c296984bbc0a6cbae35 
>   hbase-handler/src/test/results/positive/hbase_ppd_key_range.q.out 
> 5e051543133125a57dbf5b83b62f0a13cf7f415a 
>   hbase-handler/src/test/results/positive/hbase_pushdown.q.out 
> 57613c36f9b3376469b1b05e9a9df59bd5365450 
>   pom.xml 240472a30e033a83d1c799e636d8df29cb2c5770 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java 
> e85a99e84658a69c4fd93a6c352af4ead768ef67 
>   ql/src/test/queries/clientpositive/druidmini_expressions.q 
> 36aad7937d556e013773f29ecd89bf0629c1937d 
>   ql/src/test/results/clientpositive/alter_partition_coltype.q.out 
> d484f9e2237402fa475cb79a182340d7d83dadb9 
>   ql/src/test/results/clientpositive/annotate_stats_filter.q.out 
> 44f77b8f503e3b1a5c6d68caeb9727b8950fc93c 
>   ql/src/test/results/clientpositive/cbo_rp_simple_select.q.out 
> cb22b61f269db76f5397a4ce0981e92d236d1123 
>   ql/src/test/results/clientpositive/cbo_simple_select.q.out 
> 32e69204f699186c4e591770320802ebb40e2c42 
>   ql/src/test/results/clientpositive/complex_alias.q.out 
> f9315f80457651a1324397c2a129c2bcc6ac0bc4 
>   ql/src/test/results/clientpositive/constantPropWhen.q.out 
> 4e7af0cf181c47c5e19a658764bea3eda959d88f 
>   ql/src/test/results/clientpositive/constantPropagateForSubQuery.q.out 
> 221837b410f6df499c18cbf04bee54a4c7b241f4 
>   ql/src/test/results/clientpositive/constant_prop_3.q.out 
> 2b314d7ebdf1e015a28379cd1795353206268efb 
>   ql/src/test/results/clientpositive/constprog_when_case.q.out 
> d237f135acd1ee199084866e44436e7757cb12e4 
>   ql/src/test/results/clientpositive/decimal_udf.q.out 
> 3ef40023ebf683c224c45eca61af5221d210a8ff 
>   ql/src/test/results/clientpositive/druid/druidmini_expressions.q.out 
> 973cade307bef1a1559a4a27a78078659628ea5a 
>   ql/src/test/results/clientpositive/druid/druidmini_extractTime.q.out 
> 4ea95f69302cdc283047612ef5b0f9847365b820 
>   ql/src/test/results/clientpositive/druid/druidmini_floorTime.q.out 
> 8d9382443ef290dedfa880b7413bf2742fd199ce 
>   ql/src/test/results/clientpositive/druid/druidmini_test_ts.q.out 
> 9c412d97dd4d42e7e45990fa3be380f947103cfd 
>   ql/src/test/results/clientpositive/dynamic_partition_skip_default.q.out 
> f76b24e7d9c0cf947cf4fff06fa55af73670e68f 
>   ql/src/test/results/clientpositive/fold_case.q.out 
> 408275dff6b42b6339fde24ae9d948fcca66d90f 
>   ql/src/test/results/clientpositive/fold_eq_with_case_when.q.out 
> 25825b824db57cec60ee199aaccaab06056c3287 
>   ql/src/test/results/clientpositive/fold_when.q.out 
> 6f3a479ba6f5092bcd6ce1e431a88df8a32725fd 
>   ql/src/test/results/clientpositive/groupby_sort_1_23.q.out 
> 7826f2eb7ad94ae0bf77bd129c21caca8808e0a2 
>   ql/src/test/results/clientpositive/groupby_sort_skew_1_23.q.out 
> 674e8bfe328761bffbaedfb93e3942548ac9b691 
>   ql/src/test/results/clientpositive/in_typecheck_char.q.out 
> 6948719881a7da18def438a2a113a4c48201ad41 
>   

[jira] [Created] (HIVE-21284) StatsWork should use footer scan for Parquet

2019-02-18 Thread Antal Sinkovits (JIRA)
Antal Sinkovits created HIVE-21284:
--

 Summary: StatsWork should use footer scan for Parquet
 Key: HIVE-21284
 URL: https://issues.apache.org/jira/browse/HIVE-21284
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0
Reporter: Antal Sinkovits
Assignee: Antal Sinkovits






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] maheshk114 commented on a change in pull request #539: HIVE-21281: Repl checkpointing doesn't work while retry bootstrap load with partitions of external tables.

2019-02-18 Thread GitBox
maheshk114 commented on a change in pull request #539: HIVE-21281: Repl 
checkpointing doesn't work while retry bootstrap load with partitions of 
external tables.
URL: https://github.com/apache/hive/pull/539#discussion_r257690116
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/WarehouseInstance.java
 ##
 @@ -361,6 +362,32 @@ private void printOutput() throws IOException {
 }
   }
 
+  private void verifyIfCkptSet(Map<String, String> props, String dumpDir) {
+    assertTrue(props.containsKey(ReplUtils.REPL_CHECKPOINT_KEY));
 
 Review comment:
assertTrue(props.containsKey(ReplUtils.REPL_CHECKPOINT_KEY)) is redundant


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani commented on issue #534: HIVE-21270: A UDTF to show schema (column names and types) of given q…

2019-02-18 Thread GitBox
rmsmani commented on issue #534: HIVE-21270: A UDTF to show schema (column 
names and types) of given q…
URL: https://github.com/apache/hive/pull/534#issuecomment-464691727
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani commented on issue #388: HIVE-20057: Fix Hive table conversion DESCRIBE table bug

2019-02-18 Thread GitBox
rmsmani commented on issue #388: HIVE-20057: Fix Hive table conversion DESCRIBE 
table bug
URL: https://github.com/apache/hive/pull/388#issuecomment-464675581
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani commented on issue #415: desc table Command optimize

2019-02-18 Thread GitBox
rmsmani commented on issue #415: desc table Command optimize
URL: https://github.com/apache/hive/pull/415#issuecomment-464672723
 
 
   What's the JIRA number for this?
   If a JIRA ticket is not available for this, create one (under the HIVE project) at:
   https://issues.apache.org/jira/projects/HIVE
   Then create the patch as described in the documentation:
   https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-UnderstandingHiveBranches
   so that the GIT pre-commit testing will run automatically.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani commented on issue #405: There a negative number of splits need be avoided

2019-02-18 Thread GitBox
rmsmani commented on issue #405: There  a negative number of splits need be 
avoided
URL: https://github.com/apache/hive/pull/405#issuecomment-464672343
 
 
   What's the JIRA number for this?
   If a JIRA ticket is not available for this, create one (under the HIVE project) at:
   https://issues.apache.org/jira/projects/HIVE
   Then create the patch as described in the documentation:
   https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-UnderstandingHiveBranches
   so that the GIT pre-commit testing will run automatically.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani commented on issue #454: findBestMatch() tests the inclusion of default partition name

2019-02-18 Thread GitBox
rmsmani commented on issue #454: findBestMatch() tests the inclusion of default 
partition name
URL: https://github.com/apache/hive/pull/454#issuecomment-464672488
 
 
   What's the JIRA Ticket number for this...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmsmani opened a new pull request #540: HIVE-21283 Synonyms for the existing functions

2019-02-18 Thread GitBox
rmsmani opened a new pull request #540: HIVE-21283 Synonyms for the existing 
functions
URL: https://github.com/apache/hive/pull/540
 
 
   mid for substr
   position for Locate
   
   @pvary  Kindly review and merge to master
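
   Assuming the synonyms simply alias the existing functions, usage would look something like this (example values are made up):
   SELECT mid('apache hive', 8, 4);         -- same result as substr('apache hive', 8, 4)
   SELECT position('hive', 'apache hive');  -- same result as locate('hive', 'apache hive')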


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


How to use Replace Columns function

2019-02-18 Thread Yasunori Oto
Hi all,



We have been developing a schema-less file format, similar to JSONSerDe.

https://github.com/yahoojapan/yosegi



We implemented its SerDe to access data by column name.

We want to be able to define and change a table with arbitrary field names.



We found the REPLACE COLUMNS function in Hive.

However, this operation is only allowed for the SerDes predefined in the code:

https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java#L3960-L3968
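
For reference, a sketch of the kind of statement we would like to be able to run against a table backed by our SerDe (the table and column names are hypothetical):

ALTER TABLE yosegi_events REPLACE COLUMNS (user_id BIGINT, event_name STRING, ts TIMESTAMP);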



So, could you please tell us how to use this kind of function?

Or could we discuss opening this function up to other SerDes like ours?



Yasunori