[jira] [Created] (HIVE-13348) Add Event Nullification support for Replication

2016-03-23 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-13348:
---

 Summary: Add Event Nullification support for Replication
 Key: HIVE-13348
 URL: https://issues.apache.org/jira/browse/HIVE-13348
 Project: Hive
  Issue Type: Sub-task
Reporter: Sushanth Sowmyan


Replication, as implemented by HIVE-7973, works as follows:

a) For every single modification to the hive metastore, an event gets
triggered that logs a notification object.
b) Replication tools such as falcon can consume these notification objects as
an HCatReplicationTaskIterator from HCatClient.getReplicationTasks(lastEventId,
maxEvents, dbName, tableName).
c) For each event, we generate statements and distcp requirements for falcon
to export, distcp, and import to do the replication (along with requisite
changes to export and import that would allow state management).

The big thing missing from this picture is that while replication works, it is
not at all selective about how it works: it exhaustively processes every
single event generated and attempts the export-distcp-import cycle for every
modification, irrespective of whether the result will actually get used at
import time.

We need to build filtering logic that can process a batch of events, identify
those that would result in effective no-ops, and nullify them in the stream
before passing it on. The goal is to minimize the number of events that tools
like Falcon actually have to process.

Examples of cases where event nullification would take place:

a) CREATE-DROP cases: If an object is created in event#34 and eventually
dropped in event#47, there is no point in replicating it. We simply null out
both events, as well as any other event that references this object between
event#34 and event#47.

b) APPEND-APPEND : Some objects are replicated wholesale, which means every
APPEND that occurs causes a full export of the object in question. The prior
APPENDs are all supplanted by the last one, so we can nullify all the earlier
APPEND events.
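To make the intent concrete, here is a minimal, hypothetical sketch of such a
filter pass. The Event class, the event-type strings, and the nullified flag
are illustrative stand-ins, not the actual metastore notification API:

```java
import java.util.*;

// Hypothetical stand-ins for the metastore notification events; these names
// are illustrative, not the actual Hive API.
class Event {
    final long id;
    final String type;   // e.g. "CREATE_TABLE", "DROP_TABLE", "APPEND"
    final String object; // qualified object name, e.g. "db.tbl"
    boolean nullified;

    Event(long id, String type, String object) {
        this.id = id;
        this.type = type;
        this.object = object;
    }
}

class EventNullifier {
    // CREATE-DROP: if an object created in this batch is also dropped in it,
    // nullify the create, the drop, and every event in between that touches
    // the same object.
    static void nullifyCreateDrop(List<Event> batch) {
        Map<String, Integer> createIdx = new HashMap<>();
        for (int i = 0; i < batch.size(); i++) {
            Event e = batch.get(i);
            if (e.type.equals("CREATE_TABLE")) {
                createIdx.put(e.object, i);
            } else if (e.type.equals("DROP_TABLE") && createIdx.containsKey(e.object)) {
                for (int j = createIdx.remove(e.object); j <= i; j++) {
                    if (batch.get(j).object.equals(e.object)) {
                        batch.get(j).nullified = true;
                    }
                }
            }
        }
    }

    // APPEND-APPEND: for wholesale-replicated objects only the last APPEND
    // matters, so nullify every earlier APPEND on the same object.
    static void nullifyPriorAppends(List<Event> batch) {
        Map<String, Event> lastAppend = new HashMap<>();
        for (Event e : batch) {
            if (e.type.equals("APPEND")) {
                Event prev = lastAppend.put(e.object, e);
                if (prev != null) {
                    prev.nullified = true;
                }
            }
        }
    }
}
```

A consumer like Falcon would then skip any event whose nullified flag is set,
leaving the export-distcp-import cycle only for the survivors.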

Additional such cases can be inferred by analysis of the Export-Import relay 
protocol definition at 
https://issues.apache.org/jira/secure/attachment/12725999/EXIMReplicationReplayProtocol.pdf
 or by reasoning out various event processing orders possible.

Replication, as implemented by HIVE-7973, is merely a first step toward
functional support. This work is needed for replication to be efficient at
all, and thus usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-13347) teztask event problem when running repeated queries on LLAP

2016-03-23 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-13347:
---

 Summary:  teztask event problem when running repeated queries on 
LLAP
 Key: HIVE-13347
 URL: https://issues.apache.org/jira/browse/HIVE-13347
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Siddharth Seth


I am running multiple queries in a row against LLAP from CLI.
I was running them by copy-pasting multiple lines of "source this.sql" and 
"source that.sql" into CLI.
When I switched to running via hive -f all-queries.sql (could be a 
coincidence), one of the queries now fails towards the end with an error like 
this:
{noformat}
2016-03-23 21:57:35,531 [INFO] [TaskSchedulerEventHandlerThread] 
|tezplugins.LlapTaskSchedulerService|: Ignoring deallocate request for task 
attempt_1455662455106_3046_5_00_000526_0 which hasn't been assigned to a 
container
2016-03-23 21:57:35,531 [INFO] [TaskSchedulerEventHandlerThread] 
|rm.TaskSchedulerManager|: Task: attempt_1455662455106_3046_5_00_000526_0 has 
no container assignment in the scheduler
2016-03-23 21:57:35,533 [ERROR] [Dispatcher thread {Central}] 
|impl.TaskAttemptImpl|: Can't handle this event at current state for 
attempt_1455662455106_3046_5_00_06_1
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
TA_TEZ_EVENT_UPDATE at KILL_IN_PROGRESS
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.tez.dag.app.dag.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:795)
at 
org.apache.tez.dag.app.dag.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:120)
at 
org.apache.tez.dag.app.DAGAppMaster$TaskAttemptEventDispatcher.handle(DAGAppMaster.java:2202)
at 
org.apache.tez.dag.app.DAGAppMaster$TaskAttemptEventDispatcher.handle(DAGAppMaster.java:2187)
at 
org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
at java.lang.Thread.run(Thread.java:745)
2016-03-23 21:57:35,537 [INFO] [Dispatcher thread {Central}] 
|history.HistoryEventHandler|: 
[HISTORY][DAG:dag_1455662455106_3046_5][Event:TASK_FINISHED]: vertexName=Map 1, 
taskId=task_1455662455106_3046_5_00_000527, startTime=1458784644802, 
finishTime=1458784655537, timeTaken=10735, status=KILLED, 
successfulAttemptID=null, diagnostics=Killing tasks in vertex: 
vertex_1455662455106_3046_5_00 [Map 1] due to trigger: OWN_TASK_FAILURE, 
counters=Counters: 0
{noformat}

This is on master.





Re: Review Request 45238: HIVE-9660 store end offset of compressed data for RG in RowIndex in ORC

2016-03-23 Thread Lefty Leverenz

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45238/#review125176
---




common/src/java/org/apache/hadoop/hive/conf/HiveConf.java (lines 1205 - 1206)


Please spell out RG in the parameter description.



orc/src/java/org/apache/orc/OrcConf.java (lines 100 - 102)


Please spell out RG in the description.


- Lefty Leverenz


On March 23, 2016, 7:08 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45238/
> ---
> 
> (Updated March 23, 2016, 7:08 p.m.)
> 
> 
> Review request for hive and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see jira
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java c14df20 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapOptionsProcessor.java
>  c292b37 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
>  eb251a8 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcStripeMetadata.java
>  82187bd 
>   orc/src/java/org/apache/orc/OrcConf.java 6fcbb72 
>   orc/src/java/org/apache/orc/OrcFile.java 3945a5d 
>   orc/src/java/org/apache/orc/TypeDescription.java bd900ac 
>   orc/src/java/org/apache/orc/impl/BitFieldWriter.java aa5f886 
>   orc/src/java/org/apache/orc/impl/IntegerWriter.java 419054f 
>   orc/src/java/org/apache/orc/impl/OutStream.java 81662cc 
>   orc/src/java/org/apache/orc/impl/RunLengthByteWriter.java 09108b2 
>   orc/src/java/org/apache/orc/impl/RunLengthIntegerWriter.java 3e5f2e2 
>   orc/src/java/org/apache/orc/impl/RunLengthIntegerWriterV2.java fab2801 
>   orc/src/java/org/apache/orc/impl/SerializationUtils.java c1162e4 
>   orc/src/java/org/apache/orc/impl/WriterImpl.java 6497ecf 
>   orc/src/protobuf/orc_proto.proto f4935b4 
>   orc/src/test/org/apache/orc/impl/TestBitFieldReader.java e4c6f6b 
>   orc/src/test/org/apache/orc/impl/TestBitPack.java f2d3d64 
>   orc/src/test/org/apache/orc/impl/TestInStream.java 9e65345 
>   orc/src/test/org/apache/orc/impl/TestIntegerCompressionReader.java 399f35e 
>   orc/src/test/org/apache/orc/impl/TestOutStream.java e9614d5 
>   orc/src/test/org/apache/orc/impl/TestRunLengthByteReader.java a14bef1 
>   orc/src/test/org/apache/orc/impl/TestRunLengthIntegerReader.java 28239ba 
>   ql/src/java/org/apache/hadoop/hive/llap/DebugUtils.java ea626d7 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java 67f9da8 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java 
> d5d1370 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/FileDump.java 9c2f88f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/JsonFileDump.java 00de545 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderUtils.java 8a73948 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReader.java 
> 96af96a 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReaderImpl.java 
> 29b51ec 
>   ql/src/test/queries/clientpositive/orc_lengths.q PRE-CREATION 
>   ql/src/test/results/clientpositive/orc_lengths.q.out PRE-CREATION 
>   
> storage-api/src/java/org/apache/hadoop/hive/common/io/encoded/EncodedColumnBatch.java
>  ddba889 
> 
> Diff: https://reviews.apache.org/r/45238/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[jira] [Created] (HIVE-13345) LLAP: metadata cache takes too much space, esp. with bloom filters, due to Java/protobuf overhead

2016-03-23 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-13345:
---

 Summary: LLAP: metadata cache takes too much space, esp. with 
bloom filters, due to Java/protobuf overhead
 Key: HIVE-13345
 URL: https://issues.apache.org/jira/browse/HIVE-13345
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin


We currently cache Java objects; these have high overhead. Average stripe
metadata takes 200-500Kb on real files, and bloom filters blow that up more
than 5x, to as much as 5Mb per stripe, because they are stored as a list of
Long-s. That is undesirable.

We should either create better objects for ORC (might be good in general) or 
store serialized metadata and deserialize when needed.
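For illustration, here is a hedged sketch of the "better objects" direction:
packing a boxed List<Long> (the way protobuf-generated classes expose the
bloom filter bitset) into a primitive long[]. The per-element byte counts are
rough 64-bit-JVM estimates for illustration, not measurements from Hive:

```java
import java.util.List;

class BloomFilterBits {
    final long[] bits;

    // Pack a boxed protobuf-style List<Long> into a primitive array. On a
    // typical 64-bit JVM this cuts per-element cost from roughly 20 bytes
    // (Long object header + value, plus a reference in the backing array)
    // down to 8 bytes.
    BloomFilterBits(List<Long> boxed) {
        bits = new long[boxed.size()];
        for (int i = 0; i < bits.length; i++) {
            bits[i] = boxed.get(i);
        }
    }

    // Rough footprint estimates, 64-bit JVM with compressed oops; the
    // constants are illustrative assumptions.
    static long boxedFootprintBytes(int n) {
        return n * (16L /* Long object */ + 4L /* ref in backing array */);
    }

    static long primitiveFootprintBytes(int n) {
        return n * 8L;
    }
}
```

The alternative mentioned above, caching serialized metadata and deserializing
on demand, avoids the Java object graph entirely at the cost of CPU on each
access.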





[jira] [Created] (HIVE-13346) LLAP doesn't update metadata priority when reusing from cache

2016-03-23 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-13346:
---

 Summary: LLAP doesn't update metadata priority when reusing from 
cache
 Key: HIVE-13346
 URL: https://issues.apache.org/jira/browse/HIVE-13346
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin








[jira] [Created] (HIVE-13344) port HIVE-12902 to 1.x line

2016-03-23 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-13344:
-

 Summary: port HIVE-12902 to 1.x line
 Key: HIVE-13344
 URL: https://issues.apache.org/jira/browse/HIVE-13344
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 1.3.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman


Without this, it is difficult to make check-ins into both the 2.x and 1.x
lines.





[jira] [Created] (HIVE-13343) Need to disable hybrid grace hash join in llap mode except for dynamically partitioned hash join

2016-03-23 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-13343:
-

 Summary: Need to disable hybrid grace hash join in llap mode 
except for dynamically partitioned hash join
 Key: HIVE-13343
 URL: https://issues.apache.org/jira/browse/HIVE-13343
 Project: Hive
  Issue Type: Bug
  Components: llap
Affects Versions: 2.1.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K


For performance reasons, we should disable the use of hybrid grace hash join
in llap when dynamic partition hash join is not used. With dynamic partition
hash join, we still need hybrid grace hash join due to the possibility of
skews.
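If I read the proposal right, the settings involved are the existing
hive.mapjoin.hybridgrace.hashtable and hive.optimize.dynamic.partition.hashjoin
knobs; a sketch of the intended interaction (the actual enforcement would live
in the planner, not in user-set configs):

```
-- llap mode: hybrid grace off unless dynamically partitioned hash join is on
set hive.llap.execution.mode=all;
set hive.optimize.dynamic.partition.hashjoin=false;
set hive.mapjoin.hybridgrace.hashtable=false;  -- no skew risk, skip the overhead

set hive.optimize.dynamic.partition.hashjoin=true;
set hive.mapjoin.hybridgrace.hashtable=true;   -- keep it: skews are possible
```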





[jira] [Created] (HIVE-13342) Improve logging in llap decider for llap

2016-03-23 Thread Vikram Dixit K (JIRA)
Vikram Dixit K created HIVE-13342:
-

 Summary: Improve logging in llap decider for llap
 Key: HIVE-13342
 URL: https://issues.apache.org/jira/browse/HIVE-13342
 Project: Hive
  Issue Type: Bug
  Components: llap
Affects Versions: 2.1.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K


Currently we do not log our decisions with respect to llap: are we running
everything in llap mode, or only parts of the plan? We need more logging.
Also, if llap mode is "all" but for some reason we cannot run the work in
llap mode, we should fail and throw an exception advising the user to change
the mode to "auto".





[jira] [Created] (HIVE-13341) Stats state is not captured correctly: differentiate load table and create table

2016-03-23 Thread Pengcheng Xiong (JIRA)
Pengcheng Xiong created HIVE-13341:
--

 Summary: Stats state is not captured correctly: differentiate load 
table and create table
 Key: HIVE-13341
 URL: https://issues.apache.org/jira/browse/HIVE-13341
 Project: Hive
  Issue Type: Sub-task
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong








Review Request 45238: HIVE-9660 store end offset of compressed data for RG in RowIndex in ORC

2016-03-23 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45238/
---

Review request for hive and Prasanth_J.


Repository: hive-git


Description
---

see jira


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java c14df20 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapOptionsProcessor.java 
c292b37 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
 eb251a8 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcStripeMetadata.java
 82187bd 
  orc/src/java/org/apache/orc/OrcConf.java 6fcbb72 
  orc/src/java/org/apache/orc/OrcFile.java 3945a5d 
  orc/src/java/org/apache/orc/TypeDescription.java bd900ac 
  orc/src/java/org/apache/orc/impl/BitFieldWriter.java aa5f886 
  orc/src/java/org/apache/orc/impl/IntegerWriter.java 419054f 
  orc/src/java/org/apache/orc/impl/OutStream.java 81662cc 
  orc/src/java/org/apache/orc/impl/RunLengthByteWriter.java 09108b2 
  orc/src/java/org/apache/orc/impl/RunLengthIntegerWriter.java 3e5f2e2 
  orc/src/java/org/apache/orc/impl/RunLengthIntegerWriterV2.java fab2801 
  orc/src/java/org/apache/orc/impl/SerializationUtils.java c1162e4 
  orc/src/java/org/apache/orc/impl/WriterImpl.java 6497ecf 
  orc/src/protobuf/orc_proto.proto f4935b4 
  orc/src/test/org/apache/orc/impl/TestBitFieldReader.java e4c6f6b 
  orc/src/test/org/apache/orc/impl/TestBitPack.java f2d3d64 
  orc/src/test/org/apache/orc/impl/TestInStream.java 9e65345 
  orc/src/test/org/apache/orc/impl/TestIntegerCompressionReader.java 399f35e 
  orc/src/test/org/apache/orc/impl/TestOutStream.java e9614d5 
  orc/src/test/org/apache/orc/impl/TestRunLengthByteReader.java a14bef1 
  orc/src/test/org/apache/orc/impl/TestRunLengthIntegerReader.java 28239ba 
  ql/src/java/org/apache/hadoop/hive/llap/DebugUtils.java ea626d7 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java 67f9da8 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java d5d1370 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/FileDump.java 9c2f88f 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/JsonFileDump.java 00de545 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderUtils.java 8a73948 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReader.java 
96af96a 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReaderImpl.java 
29b51ec 
  ql/src/test/queries/clientpositive/orc_lengths.q PRE-CREATION 
  ql/src/test/results/clientpositive/orc_lengths.q.out PRE-CREATION 
  
storage-api/src/java/org/apache/hadoop/hive/common/io/encoded/EncodedColumnBatch.java
 ddba889 

Diff: https://reviews.apache.org/r/45238/diff/


Testing
---


Thanks,

Sergey Shelukhin



Re: Error in Hive on Spark

2016-03-23 Thread Xuefu Zhang
Yes, it seems more viable to integrate your application with HS2 via
JDBC or Thrift rather than at the code level.

--Xuefu

On Tue, Mar 22, 2016 at 12:01 AM, Stana  wrote:

> Hi, Xuefu
>
> You are right.
> Maybe I should launch spark-submit by HS2 or Hive CLI ?
>
> Thanks a lot,
> Stana
>
>
> 2016-03-22 1:16 GMT+08:00 Xuefu Zhang :
>
> > Stana,
> >
> > I'm not sure if I fully understand the problem. spark-submit is launched
> > on the same host as your application, which should be able to access
> > hive-exec.jar. The Yarn cluster needs the jar as well, but HS2 or Hive CLI
> > will take care of that. Since you are using neither, it's your
> > application's responsibility to make that happen.
> >
> > Did I miss anything else?
> >
> > Thanks,
> > Xuefu
> >
> > On Sun, Mar 20, 2016 at 11:18 PM, Stana  wrote:
> >
> > > Does anyone have suggestions on setting the property of the
> > > hive-exec-2.0.0.jar path in the application?
> > > Something like
> > >
> > > 'hiveConf.set("hive.remote.driver.jar","hdfs://storm0:9000/tmp/spark-assembly-1.4.1-hadoop2.6.0.jar")'.
> > >
> > >
> > >
> > > 2016-03-11 10:53 GMT+08:00 Stana :
> > >
> > > > Thanks for reply
> > > >
> > > > I have set the property spark.home in my application. Otherwise the
> > > > application threw 'SPARK_HOME not found exception'.
> > > >
> > > > I found hive source code in SparkClientImpl.java:
> > > >
> > > > private Thread startDriver(final RpcServer rpcServer, final String
> > > > clientId, final String secret)
> > > >   throws IOException {
> > > > ...
> > > >
> > > > List<String> argv = Lists.newArrayList();
> > > >
> > > > ...
> > > >
> > > > argv.add("--class");
> > > > argv.add(RemoteDriver.class.getName());
> > > >
> > > > String jar = "spark-internal";
> > > > if (SparkContext.jarOfClass(this.getClass()).isDefined()) {
> > > > jar = SparkContext.jarOfClass(this.getClass()).get();
> > > > }
> > > > argv.add(jar);
> > > >
> > > > ...
> > > >
> > > > }
> > > >
> > > > When hive executes spark-submit, it generates the shell command with
> > > > --class org.apache.hive.spark.client.RemoteDriver, and sets the jar
> > > > path with SparkContext.jarOfClass(this.getClass()).get(). This gets
> > > > the local path of hive-exec-2.0.0.jar.
> > > >
> > > > In my situation, the application and the yarn cluster are in different
> > > > clusters. When the application executed spark-submit with the local
> > > > path of hive-exec-2.0.0.jar against the yarn cluster, there was no
> > > > hive-exec-2.0.0.jar in the yarn cluster, and the application threw the
> > > > exception: "hive-exec-2.0.0.jar does not exist ...".
> > > >
> > > > Can the hive-exec-2.0.0.jar path be set as a property in the
> > > > application? Something like 'hiveConf.set("hive.remote.driver.jar",
> > > > "hdfs://storm0:9000/tmp/spark-assembly-1.4.1-hadoop2.6.0.jar")'.
> > > > If not, is it possible to achieve this in a future version?
> > > >
> > > >
> > > >
> > > >
> > > > 2016-03-10 23:51 GMT+08:00 Xuefu Zhang :
> > > >
> > > >> You can probably avoid the problem by set environment variable
> > > SPARK_HOME
> > > >> or JVM property spark.home that points to your spark installation.
> > > >>
> > > >> --Xuefu
> > > >>
> > > >> On Thu, Mar 10, 2016 at 3:11 AM, Stana 
> wrote:
> > > >>
> > > >> > I am trying out Hive on Spark with hive 2.0.0 and spark 1.4.1,
> > > >> > executing org.apache.hadoop.hive.ql.Driver from a java application.
> > > >> >
> > > >> > Following are my situations:
> > > >> > 1.Building spark 1.4.1 assembly jar without Hive .
> > > >> > 2.Uploading the spark assembly jar to the hadoop cluster.
> > > >> > 3.Executing the java application with eclipse IDE in my client
> > > computer.
> > > >> >
> > > >> > The application went well and submitted the mr job to the yarn
> > > >> > cluster successfully when using
> > > >> > hiveConf.set("hive.execution.engine", "mr"), but it threw
> > > >> > exceptions with the spark engine.
> > > >> >
> > > >> > Finally, I traced the Hive source code and came to this conclusion:
> > > >> >
> > > >> > In my situation, the SparkClientImpl class generates the
> > > >> > spark-submit shell command and executes it. The command sets
> > > >> > --class to RemoteDriver.class.getName() and the jar to
> > > >> > SparkContext.jarOfClass(this.getClass()).get(), which is why
> > > >> > my application threw the exception.
> > > >> >
> > > >> > Is that right? And what can I do to execute the application with
> > > >> > the spark engine successfully from my client computer? Thanks a lot!
> > > >> >
> > > >> >
> > > >> > Java application code:
> > > >> >
> > > >> > public class TestHiveDriver {
> > > >> >
> > > >> > private static HiveConf hiveConf;
> > > >> > private static Driver driver;
> > > >> > private static CliSessionState ss;
> > > >> > public static void main(String[] args){
> > > >> >
> > > >> > String sql = "select * from hadoop0263_0 as a join
> > > >> > hadoop0263_0 as b
> > > >> > on (a.key = b.key)";
> > > 

[jira] [Created] (HIVE-13340) Vectorization: from_unixtime UDF shim

2016-03-23 Thread Gopal V (JIRA)
Gopal V created HIVE-13340:
--

 Summary: Vectorization: from_unixtime UDF shim
 Key: HIVE-13340
 URL: https://issues.apache.org/jira/browse/HIVE-13340
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Gopal V
Assignee: Matt McCline








[jira] [Created] (HIVE-13339) Vectorization: GenericUDFBetween in Projection mode

2016-03-23 Thread Gopal V (JIRA)
Gopal V created HIVE-13339:
--

 Summary: Vectorization: GenericUDFBetween in Projection mode 
 Key: HIVE-13339
 URL: https://issues.apache.org/jira/browse/HIVE-13339
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V








[jira] [Created] (HIVE-13338) Differences in vectorized_casts.q output for vectorized and non-vectorized runs

2016-03-23 Thread Matt McCline (JIRA)
Matt McCline created HIVE-13338:
---

 Summary: Differences in vectorized_casts.q output for vectorized 
and non-vectorized runs
 Key: HIVE-13338
 URL: https://issues.apache.org/jira/browse/HIVE-13338
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Matt McCline
Assignee: Matt McCline
Priority: Critical


Turn off vectorization and you get different results.


