[ https://issues.apache.org/jira/browse/HIVE-17528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267276#comment-16267276 ]

Vihang Karajgaonkar commented on HIVE-17528:
--------------------------------------------

{noformat}
./itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java:212:    String defaultTestSrcTables = "src,src1,srcbucket,srcbucket2,src_json,src_thrift,src_sequencefile,srcpart," +: warning: Line is longer than 100 characters (found 113).
./itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java:213:      "alltypesorc,src_hbase,cbo_t1,cbo_t2,cbo_t3,src_cbo,part,lineitem,alltypesparquet";: warning: '"alltypesorc,src_hbase,cbo_t1,cbo_t2,cbo_t3,src_cbo,part,lineitem,alltypesparquet"' have incorrect indentation level 6, expected level should be 8.
{noformat}
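
For reference, wrapping the declaration along these lines should clear both warnings (each line under 100 characters, continuation indented to level 8); the exact split of the table list below is just a sketch, not the required formatting:

{noformat}
String defaultTestSrcTables = "src,src1,srcbucket,srcbucket2,src_json,src_thrift,"
        + "src_sequencefile,srcpart,alltypesorc,src_hbase,cbo_t1,cbo_t2,cbo_t3,src_cbo,"
        + "part,lineitem,alltypesparquet";
{noformat}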

Hi [~Ferd], can you please fix the Yetus errors above? Also, are the parquet_vectorization_div0.q and parquet_vectorization_0 failures related?

> Add more q-tests for Hive-on-Spark with Parquet vectorized reader
> -----------------------------------------------------------------
>
>                 Key: HIVE-17528
>                 URL: https://issues.apache.org/jira/browse/HIVE-17528
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Vihang Karajgaonkar
>            Assignee: Ferdinand Xu
>         Attachments: HIVE-17528.1.patch, HIVE-17528.2.patch, 
> HIVE-17528.3.patch, HIVE-17528.4.patch, HIVE-17528.5.patch, 
> HIVE-17528.7.patch, HIVE-17528.patch
>
>
> Most of the vectorization-related q-tests operate on ORC tables using Tez. It 
> would be good to add more coverage for a different combination of engine and 
> file format. We can model existing q-tests using Parquet tables and run them 
> using TestSparkCliDriver.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
