[jira] [Created] (DRILL-6961) Error Occurred: Cannot connect to the db. query INFORMATION_SCHEMA.VIEWS : Maybe you have incorrect connection params or db unavailable now (timeout)

2019-01-09 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-6961:
-

 Summary: Error Occurred: Cannot connect to the db. query 
INFORMATION_SCHEMA.VIEWS : Maybe you have incorrect connection params or db 
unavailable now (timeout)
 Key: DRILL-6961
 URL: https://issues.apache.org/jira/browse/DRILL-6961
 Project: Apache Drill
  Issue Type: Improvement
  Components: Storage - Information Schema
Affects Versions: 1.13.0
Reporter: Khurram Faraaz


Querying Drill's INFORMATION_SCHEMA.VIEWS table returns an error. Disabling the 
openTSDB plugin resolves the problem.
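
As a temporary workaround, the openTSDB storage plugin can be disabled from the 
Web UI's Storage page by updating its configuration. A sketch of such a 
configuration (the connection URL here is only a placeholder):

```json
{
  "type": "openTSDB",
  "connection": "http://localhost:4242",
  "enabled": false
}
```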

Drill 1.13.0

Failing query :
{noformat}
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION FROM 
INFORMATION_SCHEMA.`VIEWS` where VIEW_DEFINITION not like 'kraken';
{noformat}

Stack Trace from drillbit.log

{noformat}
2019-01-07 15:36:21,975 [23cc39aa-2618-e9f0-e77e-4fafa6edc314:foreman] INFO 
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
23cc39aa-2618-e9f0-e77e-4fafa6edc314: SELECT TABLE_CATALOG, TABLE_SCHEMA, 
TABLE_NAME, VIEW_DEFINITION FROM INFORMATION_SCHEMA.`VIEWS` where 
VIEW_DEFINITION not like 'kraken'
2019-01-07 15:36:35,221 [23cc39aa-2618-e9f0-e77e-4fafa6edc314:frag:0:0] INFO 
o.a.d.e.s.o.c.services.ServiceImpl - User Error Occurred: Cannot connect to the 
db. Maybe you have incorrect connection params or db unavailable now (timeout)
org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: Cannot 
connect to the db. Maybe you have incorrect connection params or db unavailable 
now


[Error Id: f8b4c074-ba62-4691-b142-a8ea6e4f6b2a ]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
 ~[drill-common-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.openTSDB.client.services.ServiceImpl.getTableNames(ServiceImpl.java:107)
 [drill-opentsdb-storage-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.openTSDB.client.services.ServiceImpl.getAllMetricNames(ServiceImpl.java:70)
 [drill-opentsdb-storage-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.openTSDB.schema.OpenTSDBSchemaFactory$OpenTSDBSchema.getTableNames(OpenTSDBSchemaFactory.java:78)
 [drill-opentsdb-storage-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.calcite.jdbc.SimpleCalciteSchema.addImplicitTableToBuilder(SimpleCalciteSchema.java:106)
 [calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at org.apache.calcite.jdbc.CalciteSchema.getTableNames(CalciteSchema.java:318) 
[calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at 
org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTableNames(CalciteSchema.java:587)
 [calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at 
org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTableNames(CalciteSchema.java:548)
 [calcite-core-1.15.0-drill-r0.jar:1.15.0-drill-r0]
at 
org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.visitTables(InfoSchemaRecordGenerator.java:227)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:216)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:209)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:196)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:146) 
[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:142) 
[drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_144]
at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_144]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1633)
 [hadoop-common-2.7.0-mapr-1710.jar:na]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:142)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137)
 [drill-java-exec-1.13.0-mapr.jar:1.13.0-mapr]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182)
{noformat}

[jira] [Commented] (DRILL-6914) Query with RuntimeFilter and SemiJoin fails with IllegalStateException: Memory was leaked by query

2019-01-09 Thread Boaz Ben-Zvi (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738957#comment-16738957
 ] 

Boaz Ben-Zvi commented on DRILL-6914:
-

This memory leak can be reproduced on SF1 (on the Mac) by forcing the Hash-Join 
to *spill*; e.g., by setting this _internal_ option ("spill if the number of 
batches in memory gets to 1000 "):
{code:java}
alter system set `exec.hashjoin.max_batches_in_memory` = 1000;
{code}
(Also removed the irrelevant 'distinct' and the 'cast' parts from the repro 
query).


 However, the leak could not be reproduced by spilling on SF0, or by spilling 
with a regular (not semi) Hash-Join.
 Also tried the fix from PR#1600 (DRILL-6947), but it did not cure this memory 
leak. 

Maybe [~weijie] has some ideas about the cause of this leak. Looking at the 
Semi-Join code changes (PR#1522), none of them seems to conflict with the 
runtime filter.

 

 

> Query with RuntimeFilter and SemiJoin fails with IllegalStateException: 
> Memory was leaked by query
> --
>
> Key: DRILL-6914
> URL: https://issues.apache.org/jira/browse/DRILL-6914
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.15.0
>Reporter: Abhishek Ravi
>Assignee: Boaz Ben-Zvi
>Priority: Major
> Fix For: 1.16.0
>
> Attachments: 23cc1af3-0e8e-b2c9-a889-a96504988d6c.sys.drill, 
> 23cc1b7c-5b5c-d123-5e72-6d7d2719df39.sys.drill
>
>
> Following query fails on TPC-H SF 100 dataset when 
> exec.hashjoin.enable.runtime_filter = true AND planner.enable_semijoin = true.
> Note that the query does not fail if any one of them or both are disabled.
> {code:sql}
> set `exec.hashjoin.enable.runtime_filter` = true;
> set `exec.hashjoin.runtime_filter.max.waiting.time` = 1;
> set `planner.enable_broadcast_join` = false;
> set `planner.enable_semijoin` = true;
> select
>  count(*) as row_count
> from
>  lineitem l1
> where
>  l1.l_shipdate IN (
>  select
>  distinct(cast(l2.l_shipdate as date))
>  from
>  lineitem l2);
> reset `exec.hashjoin.enable.runtime_filter`;
> reset `exec.hashjoin.runtime_filter.max.waiting.time`;
> reset `planner.enable_broadcast_join`;
> reset `planner.enable_semijoin`;
> {code}
>  
> {noformat}
> Error: SYSTEM ERROR: IllegalStateException: Memory was leaked by query. 
> Memory leaked: (134217728)
> Allocator(frag:1:0) 800/134217728/172453568/70126322567 
> (res/actual/peak/limit)
> Fragment 1:0
> Please, refer to logs for more information.
> [Error Id: ccee18b3-c3ff-4fdb-b314-23a6cfed0a0e on qa-node185.qa.lab:31010] 
> (state=,code=0)
> java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Memory was leaked 
> by query. Memory leaked: (134217728)
> Allocator(frag:1:0) 800/134217728/172453568/70126322567 
> (res/actual/peak/limit)
> Fragment 1:0
> Please, refer to logs for more information.
> [Error Id: ccee18b3-c3ff-4fdb-b314-23a6cfed0a0e on qa-node185.qa.lab:31010]
> at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:536)
> at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:640)
> at org.apache.calcite.avatica.AvaticaResultSet.next(AvaticaResultSet.java:217)
> at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:151)
> at sqlline.BufferedRows.(BufferedRows.java:37)
> at sqlline.SqlLine.print(SqlLine.java:1716)
> at sqlline.Commands.execute(Commands.java:949)
> at sqlline.Commands.sql(Commands.java:882)
> at sqlline.SqlLine.dispatch(SqlLine.java:725)
> at sqlline.SqlLine.runCommands(SqlLine.java:1779)
> at sqlline.Commands.run(Commands.java:1485)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:722)
> at sqlline.SqlLine.initArgs(SqlLine.java:458)
> at sqlline.SqlLine.begin(SqlLine.java:514)
> at sqlline.SqlLine.start(SqlLine.java:264)
> at sqlline.SqlLine.main(SqlLine.java:195)
> Caused by: org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
> ERROR: IllegalStateException: Memory was leaked by query. Memory leaked: 
> (134217728)
> Allocator(frag:1:0) 800/134217728/172453568/70126322567 
> (res/actual/peak/limit)
> Fragment 1:0
> Please, refer to logs for more information.
> [Error Id: ccee18b3-c3ff-4fdb-b314-23a6cfed0a0e on qa-node185.qa.lab:31010]
> at 
> org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
> {noformat}

[jira] [Created] (DRILL-6960) Auto wrapping with LIMIT query should not apply to non-select queries

2019-01-09 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-6960:
---

 Summary: Auto wrapping with LIMIT query should not apply to 
non-select queries
 Key: DRILL-6960
 URL: https://issues.apache.org/jira/browse/DRILL-6960
 Project: Apache Drill
  Issue Type: Bug
  Components: Web Server
Affects Versions: 1.16.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.16.0


[~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries with 
incorrect syntax. 
For example, when the user enters {{SHOW DATABASES}}, the limit wrapping turns it 
into {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} before submission. 
This results in parsing errors such as:
{{Query Failed: An Error Occurred 
org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
Encountered "( show" at line 2, column 15. Was expecting one of:  
... }}

The fix should add a JavaScript check for non-select queries and skip the LIMIT 
wrap for those queries.
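
A minimal sketch of such a check (in Python for illustration; the actual fix 
would live in the Web UI's JavaScript, and the function name here is 
hypothetical):

```python
import re

def should_wrap_with_limit(query: str) -> bool:
    """Return True only for plain SELECT queries, which are safe to wrap
    as SELECT * FROM (...) LIMIT n. Statements such as SHOW, DESCRIBE,
    ALTER, or SET must be submitted unchanged."""
    # Drop leading whitespace and SQL line comments before testing.
    stripped = re.sub(r'^\s*(?:--[^\n]*\n\s*)*', '', query)
    return stripped.lower().startswith('select')
```

Only queries passing this check would get the LIMIT wrapping applied.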



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6960) Auto wrapping with LIMIT query should not apply to non-select queries

2019-01-09 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6960:

Labels: user-experience  (was: )

> Auto wrapping with LIMIT query should not apply to non-select queries
> -
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: user-experience
> Fix For: 1.16.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when the user enters {{SHOW DATABASES}}, the limit wrapping 
> turns it into {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} before submission. 
> This results in parsing errors such as:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}
> The fix should add a JavaScript check for non-select queries and skip the 
> LIMIT wrap for those queries.





[jira] [Created] (DRILL-6959) Query with filter with cast to timestamp literal does not return any results

2019-01-09 Thread Volodymyr Vysotskyi (JIRA)
Volodymyr Vysotskyi created DRILL-6959:
--

 Summary: Query with filter with cast to timestamp literal does not 
return any results
 Key: DRILL-6959
 URL: https://issues.apache.org/jira/browse/DRILL-6959
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Volodymyr Vysotskyi
Assignee: Volodymyr Vysotskyi
 Fix For: 1.16.0


When the filter in a query contains a cast of a timestamp literal, the query 
does not return any results.

Steps to reproduce:
1. Create a table with timestamp values with milliseconds
{code:sql}
create table dfs.tmp.test_timestamp_filter as (select timestamp '2018-01-01 
12:12:12.123' as c1, timestamp '-12-31 23:59:59.999' as c2);
{code}
2. Run query with filter and cast to timestamp:
{code:sql}
select * from dfs.tmp.test_timestamp_filter where c1 = cast('2018-01-01 
12:12:12.123' as timestamp(3));
{code}
This query should return a single row, but it does not return any results.

The following query returns the correct result:
{code:sql}
select * from dfs.tmp.test_timestamp_filter where c1 = timestamp '2018-01-01 
12:12:12.123';
{code}
{noformat}
+--+--+
|c1|c2|
+--+--+
| 2018-01-01 12:12:12.123  | -12-31 23:59:59.999  |
+--+--+
1 row selected (0.139 seconds)
{noformat}
The problem is in {{DrillConstExecutor}}: when it is used to simplify the cast, 
the timestamp precision is lost and the value is trimmed to 0 precision.
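
The effect can be illustrated outside Drill (a Python sketch of the symptom, 
not Drill's actual code path): if constant folding trims the literal to second 
precision, the equality filter can no longer match a millisecond-precision 
stored value.

```python
from datetime import datetime

def trim_to_seconds(ts: datetime) -> datetime:
    """Mimic losing sub-second precision during constant folding."""
    return ts.replace(microsecond=0)

stored = datetime(2018, 1, 1, 12, 12, 12, 123000)   # value stored in the table
literal = datetime(2018, 1, 1, 12, 12, 12, 123000)  # the cast timestamp literal

# With full precision the filter matches the row ...
assert stored == literal
# ... but once the literal is trimmed to 0 precision, equality fails,
# which is why the query returns no rows.
assert stored != trim_to_seconds(literal)
```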





[jira] [Updated] (DRILL-6918) Querying empty topics fails with "NumberFormatException"

2019-01-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6918:
-
Reviewer: Sorabh Hamirwasia

> Querying empty topics fails with "NumberFormatException"
> 
>
> Key: DRILL-6918
> URL: https://issues.apache.org/jira/browse/DRILL-6918
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Kafka
>Affects Versions: 1.14.0
>Reporter: Abhishek Ravi
>Assignee: Abhishek Ravi
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Queries with filter conditions fail with {{NumberFormatException}} when 
> querying empty topics.
> {noformat}
> 0: jdbc:drill:drillbit=10.10.100.189> select * from `topic2` where Field1 = 
> 'abc';
> Error: SYSTEM ERROR: NumberFormatException: abc
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: a0718456-c053-4820-9bd8-69c683598344 on qa-node189.qa.lab:31010] 
> (state=,code=0)
> {noformat}
>  
> *Logs:*
> {noformat}
> 2018-12-20 22:36:34,576 [23e3760d-7d23-5489-e2fb-6daf383053ee:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 23e3760d-7d23-5489-e2fb-6daf383053ee issued by root: select * from `topic2` 
> where Field1 = 'abc'
> 2018-12-20 22:36:35,134 [23e3760d-7d23-5489-e2fb-6daf383053ee:foreman] INFO 
> o.a.d.e.s.k.KafkaPushDownFilterIntoScan - Partitions ScanSpec before 
> pushdown: [KafkaPartitionScanSpec [topicName=topic2, partitionId=2, 
> startOffset=0, endOffset=0], KafkaPartitionScanSpec [topicName=topic2, 
> partitionId=1, startOffset=0, endOffset=0], KafkaPartitionScanSpec 
> [topicName=topic2, partitionId=0, startOffset=0, endOffset=0]]
> 2018-12-20 22:36:35,170 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.s.k.KafkaScanBatchCreator - Number of record readers initialized : 3
> 2018-12-20 22:36:35,171 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 23e3760d-7d23-5489-e2fb-6daf383053ee:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2018-12-20 22:36:35,172 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 23e3760d-7d23-5489-e2fb-6daf383053ee:0:0: State to report: RUNNING
> 2018-12-20 22:36:35,173 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.s.k.d.MessageReaderFactory - Initialized Message Reader : 
> JsonMessageReader[jsonReader=null]
> 2018-12-20 22:36:35,177 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.store.kafka.MessageIterator - Start offset of topic2:2 is - 0
> 2018-12-20 22:36:35,177 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.s.kafka.KafkaRecordReader - Last offset processed for topic2:2 is - 0
> 2018-12-20 22:36:35,177 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.s.kafka.KafkaRecordReader - Total time to fetch messages from 
> topic2:2 is - 0 milliseconds
> 2018-12-20 22:36:35,178 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] WARN 
> o.a.d.e.e.ExpressionTreeMaterializer - Unable to find value vector of path 
> `Field1`, returning null instance.
> 2018-12-20 22:36:35,191 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 23e3760d-7d23-5489-e2fb-6daf383053ee:0:0: State change requested RUNNING --> 
> FAILED
> 2018-12-20 22:36:35,191 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] ERROR 
> o.a.d.e.physical.impl.BaseRootExec - Batch dump started: dumping last 2 
> failed batches
> 2018-12-20 22:36:35,191 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] ERROR 
> o.a.d.e.p.i.s.RemovingRecordBatch - 
> RemovingRecordBatch[container=org.apache.drill.exec.record.VectorContainer@3ce6a91e[recordCount
>  = 0, schemaChanged = true, schema = null, wrappers = [], ...], state=FIRST, 
> copier=null]
> 2018-12-20 22:36:35,191 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] ERROR 
> o.a.d.e.p.i.filter.FilterRecordBatch - 
> FilterRecordBatch[container=org.apache.drill.exec.record.VectorContainer@2057ff66[recordCount
>  = 0, schemaChanged = true, schema = null, wrappers = 
> [org.apache.drill.exec.vector.NullableIntVector@32edcdf2[field = [`T4¦¦**` 
> (INT:OPTIONAL)], ...], 
> org.apache.drill.exec.vector.NullableIntVector@3a5bf582[field = [`Field1` 
> (INT:OPTIONAL)], ...]], ...], selectionVector2=[SV2: recs=0 - ], filter=null, 
> popConfig=org.apache.drill.exec.physical.config.Filter@1d69df75]
> 2018-12-20 22:36:35,191 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] ERROR 
> o.a.d.e.physical.impl.BaseRootExec - Batch dump completed.
> 2018-12-20 22:36:35,192 [23e3760d-7d23-5489-e2fb-6daf383053ee:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 23e3760d-7d23-5489-e2fb-6daf383053ee:0:0: State change requested FAILED --> 
> {noformat}

[jira] [Created] (DRILL-6958) CTAS csv with option

2019-01-09 Thread benj (JIRA)
benj created DRILL-6958:
---

 Summary: CTAS csv with option
 Key: DRILL-6958
 URL: https://issues.apache.org/jira/browse/DRILL-6958
 Project: Apache Drill
  Issue Type: Improvement
  Components: Storage - Text & CSV
Affects Versions: 1.15.0
Reporter: benj


Add some options for writing CSV files with CTAS:
 * the possibility to write or omit the header,
 * the possibility to force writing a single file instead of many parts,
 * the possibility to force quoting
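
For comparison (a Python sketch, not Drill code), these are the same knobs that 
Python's csv module exposes: an optional header row, a single output file, and 
forced quoting:

```python
import csv
import io

def write_csv(rows, fieldnames, with_header=True, force_quotes=True):
    """Write rows into a single CSV string, optionally emitting a header
    row and optionally quoting every field."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=fieldnames,
        # QUOTE_ALL forces quotes around every field, including the header.
        quoting=csv.QUOTE_ALL if force_quotes else csv.QUOTE_MINIMAL,
        lineterminator="\n",
    )
    if with_header:
        writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

For example, `write_csv([{"a": "1", "b": "x"}], ["a", "b"], with_header=False, 
force_quotes=False)` produces just the unquoted data row.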





[jira] [Updated] (DRILL-6955) storage-jdbc unit tests improvements

2019-01-09 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6955:

Reviewer: Arina Ielchiieva

> storage-jdbc unit tests improvements
> 
>
> Key: DRILL-6955
> URL: https://issues.apache.org/jira/browse/DRILL-6955
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Currently, unit tests in the storage-jdbc module use inmemdb-maven-plugin, 
> jcabi-mysql-maven-plugin, sql-maven-plugin, and other maven plugins, so the 
> databases are started and the tables are populated only after the concrete 
> maven goal is executed. This makes debugging more complex, since the tests 
> cannot be run directly from an IDE. Another problem is that most of these 
> plugins haven't been released for a long time.
> The goal of this Jira is to remove the usage of these plugins and rework the 
> tests so that they can be run from an IDE.





[jira] [Updated] (DRILL-6955) storage-jdbc unit tests improvements

2019-01-09 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-6955:

Labels: ready-to-commit  (was: )

> storage-jdbc unit tests improvements
> 
>
> Key: DRILL-6955
> URL: https://issues.apache.org/jira/browse/DRILL-6955
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Currently, unit tests in the storage-jdbc module use inmemdb-maven-plugin, 
> jcabi-mysql-maven-plugin, sql-maven-plugin, and other maven plugins, so the 
> databases are started and the tables are populated only after the concrete 
> maven goal is executed. This makes debugging more complex, since the tests 
> cannot be run directly from an IDE. Another problem is that most of these 
> plugins haven't been released for a long time.
> The goal of this Jira is to remove the usage of these plugins and rework the 
> tests so that they can be run from an IDE.


