[jira] [Commented] (DRILL-8085) EVF V2 support in the "Easy" format plugin

2022-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17496753#comment-17496753
 ] 

ASF GitHub Bot commented on DRILL-8085:
---

jnturton commented on a change in pull request #2419:
URL: https://github.com/apache/drill/pull/2419#discussion_r789682495



##
File path: exec/java-exec/src/main/codegen/templates/ParquetOutputRecordWriter.java
##
@@ -142,94 +142,110 @@ public void writeField() throws IOException {
 minor.class == "Decimal9" ||
 minor.class == "UInt4">
 <#if mode.prefix == "Repeated" >
-reader.read(i, holder);
-consumer.addInteger(holder.value);
+reader.read(i, holder);
+consumer.addInteger(holder.value);
 <#else>
-consumer.startField(fieldName, fieldId);
-reader.read(holder);
-consumer.addInteger(holder.value);
-consumer.endField(fieldName, fieldId);
+consumer.startField(fieldName, fieldId);
+reader.read(holder);
+consumer.addInteger(holder.value);
+consumer.endField(fieldName, fieldId);
 
   <#elseif
 minor.class == "Float4">
   <#if mode.prefix == "Repeated" >
-  reader.read(i, holder);
-  consumer.addFloat(holder.value);
+reader.read(i, holder);
+consumer.addFloat(holder.value);
   <#else>
-consumer.startField(fieldName, fieldId);
-reader.read(holder);
-consumer.addFloat(holder.value);
-consumer.endField(fieldName, fieldId);
+consumer.startField(fieldName, fieldId);
+reader.read(holder);
+consumer.addFloat(holder.value);
+consumer.endField(fieldName, fieldId);
   
   <#elseif
 minor.class == "BigInt" ||
 minor.class == "Decimal18" ||
-minor.class == "TimeStamp" ||
 minor.class == "UInt8">
   <#if mode.prefix == "Repeated" >
-  reader.read(i, holder);
-  consumer.addLong(holder.value);
+reader.read(i, holder);
+consumer.addLong(holder.value);
   <#else>
-consumer.startField(fieldName, fieldId);
-reader.read(holder);
-consumer.addLong(holder.value);
-consumer.endField(fieldName, fieldId);
+consumer.startField(fieldName, fieldId);
+reader.read(holder);
+consumer.addLong(holder.value);
+consumer.endField(fieldName, fieldId);
   
+  <#elseif minor.class == "TimeStamp" >
+<#if mode.prefix == "Repeated" >
+reader.read(i, holder);
+// Write Drill timestamp directly: writes local time to Parquet's UTC type.
+// This is a bug: see DRILL-8099.
+// The commented-out line is the correct way to do this; however, doing it
+// the correct way breaks other Drill code, as explained in DRILL-8099.

Review comment:
   Okay, so this case is yet to be fixed (in another PR).
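
   For context, a minimal sketch of the conversion that DRILL-8099 calls for,
assuming the Drill TimeStamp holder carries epoch millis that encode the local
wall-clock reading as if it were UTC (class and method names below are
illustrative only, not Drill's actual writer code):

{code:java|title=hypothetical local-to-UTC shift before writing to a Parquet UTC-annotated column}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class TimestampShiftSketch {

  // Per DRILL-8099, the stored millis represent the local wall-clock time
  // encoded as if it were UTC. Decode them at UTC to recover the wall-clock
  // fields, then attach the real server zone to obtain a true UTC instant
  // that a Parquet column with isAdjustedToUTC=true can store correctly.
  static long localEncodedMillisToUtcMillis(long localEncodedMillis) {
    LocalDateTime wallClock = Instant.ofEpochMilli(localEncodedMillis)
        .atZone(ZoneOffset.UTC)
        .toLocalDateTime();
    return wallClock.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();
  }
}
{code}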

##
File path: contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java
##
@@ -100,6 +101,9 @@ public void testCrossSourceMultiFragmentJoin() throws Exception {
   }
 
   @Test
+  // See DRILL-8101: The timestamp type is broken: this test only works
+  // in UTC.
+  @Ignore("Only works in the UTC timezone")

Review comment:
   While this shrinks the unit test coverage of our CI build, TIMESTAMP is a
well-documented priority, so I'm confident we will return to these tests.
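
   One way to make such a test timezone-independent, sketched here purely as an
illustration (the class name is hypothetical and this is not necessarily how
the Drill test suite will address DRILL-8101), is to pin the JVM default
timezone for the duration of the test:

{code:java|title=hypothetical JUnit 4 sketch pinning the default timezone to UTC}
import static org.junit.Assert.assertEquals;

import java.util.TimeZone;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class UtcPinnedTimestampTest {
  private static TimeZone savedZone;

  @BeforeClass
  public static void pinUtc() {
    savedZone = TimeZone.getDefault();
    TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
  }

  @AfterClass
  public static void restoreZone() {
    TimeZone.setDefault(savedZone);
  }

  @Test
  public void defaultZoneIsUtc() {
    // Placeholder assertion; a real test would run the cross-source join from
    // TestJdbcPluginWithH2IT and compare the TIMESTAMP results.
    assertEquals("UTC", TimeZone.getDefault().getID());
  }
}
{code}

   Libraries that cache the default zone at startup (Joda-Time, for example)
may not see the change, so running the test JVM with -Duser.timezone=UTC is an
alternative; either way this only sidesteps, rather than fixes, the behaviour
tracked in DRILL-8101.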

##
File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/RowBatchReader.java
##
@@ -81,31 +81,28 @@
  * from any method. A {@link UserException} is preferred to provide
  * detailed information about the source of the problem.
  */
-
 public interface RowBatchReader {
 
   /**
* Name used when reporting errors. Can simply be the class name.
*
* @return display name for errors
*/
-
   String name();
 
   /**
* Setup the record reader. Called just before the first call
-   * to next(). Allocate resources here, not in the constructor.
+   * to {@code next()}. Allocate resources here, not in the constructor.

Review comment:
   @paul-rogers is this commentary still correct?  In the HTTP log format 
batch reader above, the new pattern is resource allocation in the constructor 
and not in `open()`...
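
   For illustration only, a minimal reader sketch of the lifecycle the Javadoc
describes, where the constructor stays cheap, `open()` allocates, and
`close()` releases (the class and method names are hypothetical and much
simpler than the real RowBatchReader contract):

{code:java|title=hypothetical reader: allocate in open(), not in the constructor}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ExampleBatchReader implements AutoCloseable {
  private static final int BATCH_SIZE = 1000;

  private final Path file;       // cheap configuration, fine to keep in the constructor
  private BufferedReader input;  // the expensive resource, allocated in open()

  public ExampleBatchReader(Path file) {
    this.file = file;            // no I/O in the constructor
  }

  public void open() throws IOException {
    input = Files.newBufferedReader(file);   // allocate just before the first next()
  }

  /** Fills one batch of rows; returns false once no rows remain. */
  public boolean next(List<String> batch) throws IOException {
    for (int i = 0; i < BATCH_SIZE; i++) {
      String line = input.readLine();
      if (line == null) {
        return !batch.isEmpty();             // deliver a final partial batch, if any
      }
      batch.add(line);
    }
    return true;
  }

  @Override
  public void close() throws IOException {
    if (input != null) {
      input.close();
    }
  }
}
{code}

   Whether allocation should instead move into the constructor, as the newer
EVF readers apparently do, is exactly the question raised above; the sketch
only shows the contract the existing Javadoc states.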

##
File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/v3/lifecycle/OutputBatchBuilder.java
##
@@ -277,11 +279,18 @@ protected void defineSourceBatchMapping(TupleMetadata schema, int source) {
   @SuppressWarnings("unchecked")
   private void physicalProjection() {
 outputContainer.removeAll();
+mapVectors.clear();
 for (int i = 0; i < outputSchema.size(); i++) {
-  ValueVector outputVector;
   ColumnMetadata outputCol = outputSchema.metadata(i);
+  ValueVector outputVector;
   if (outputCol.isMap()) {
 outputVector = buildTopMap(outputCol, 

[jira] [Commented] (DRILL-7722) CREATE VIEW with LATERAL UNNEST creates an invalid view

2022-02-23 Thread James Turton (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17496732#comment-17496732
 ] 

James Turton commented on DRILL-7722:
-

[~bozzo] you can delete any CRC files created by Drill and it will carry on.
I've just checked that this bug is still present in Drill today; hopefully we
can get it fixed soon.

> CREATE VIEW with LATERAL UNNEST creates an invalid view
> ---
>
> Key: DRILL-7722
> URL: https://issues.apache.org/jira/browse/DRILL-7722
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.17.0
>Reporter: Matevž Bradač
>Priority: Blocker
>
> Creating a view from a query containing LATERAL UNNEST results in a view that
> cannot be parsed by the engine. The generated view contains superfluous
> parentheses, which cause the parse failure.
> {code:bash|title=a simple JSON database}
> $ cat /tmp/t.json
> [{"name": "item_1", "related": ["id1"]}, {"name": "item_2", "related": 
> ["id1", "id2"]}, {"name": "item_3", "related": ["id2"]}]
> {code}
> {code:SQL|title=drill query, working}
> SELECT
>   item.name,
>   relations.*
> FROM dfs.tmp.`t.json` item
> JOIN LATERAL(
>   SELECT * FROM UNNEST(item.related) i(rels)
> ) relations
> ON TRUE
>  name rels
> 0  item_1  id1
> 1  item_2  id1
> 2  item_2  id2
> 3  item_3  id2
> {code}
> {code:SQL|title=create a drill view from the above query}
> CREATE VIEW dfs.tmp.unnested_view AS
> SELECT
>   item.name,
>   relations.*
> FROM dfs.tmp.`t.json` item
> JOIN LATERAL(
>   SELECT * FROM UNNEST(item.related) i(rels)
> ) relations
> ON TRUE
> {code}
> {code:bash|title=contents of view file}
> # note the extra parentheses near LATERAL and FROM
> $ cat /tmp/unnested_view.view.drill
> {
>   "name" : "unnested_view",
>   "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS 
> `item`\nINNER JOIN LATERAL((SELECT *\nFROM (UNNEST(`item`.`related`)) AS `i` 
> (`rels`))) AS `relations` ON TRUE",
>   "fields" : [ {
> "name" : "name",
> "type" : "ANY",
> "isNullable" : true
>   }, {
> "name" : "rels",
> "type" : "ANY",
> "isNullable" : true
>   } ],
>   "workspaceSchemaPath" : [ ]
> }
> {code}
> {code:SQL|title=query the view}
> SELECT * FROM dfs.tmp.unnested_view
> PARSE ERROR: Failure parsing a view your query is dependent upon.
> SQL Query: SELECT `item`.`name`, `relations`.*
> FROM `dfs`.`tmp`.`t.json` AS `item`
> INNER JOIN LATERAL((SELECT *
> FROM (UNNEST(`item`.`related`)) AS `i` (`rels`))) AS `relations` ON TRUE
>  ^
> [Error Id: fd816a27-c2c5-4c2a-b6bf-173ab37eb693 ]
> {code}
> If the view is "fixed" by editing the generated JSON and removing the extra 
> parentheses, e.g.
> {code:bash|title=fixed view}
> $ cat /tmp/fixed_unnested_view.view.drill
> {
>   "name" : "fixed_unnested_view",
>   "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS 
> `item`\nINNER JOIN LATERAL(SELECT *\nFROM UNNEST(`item`.`related`) AS `i` 
> (`rels`)) AS `relations` ON TRUE",
>   "fields" : [ {
> "name" : "name",
> "type" : "ANY",
> "isNullable" : true
>   }, {
> "name" : "rels",
> "type" : "ANY",
> "isNullable" : true
>   } ],
>   "workspaceSchemaPath" : [ ]
> }
> {code}
> then querying works as expected:
> {code:sql|title=fixed view query}
> SELECT * FROM dfs.tmp.fixed_unnested_view
>  name rels
> 0  item_1  id1
> 1  item_2  id1
> 2  item_2  id2
> 3  item_3  id2
> {code}





[jira] [Commented] (DRILL-7871) StoragePluginStore instances for different users

2022-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17496722#comment-17496722
 ] 

ASF GitHub Bot commented on DRILL-7871:
---

jnturton commented on pull request #2251:
URL: https://github.com/apache/drill/pull/2251#issuecomment-1048739204


   Converting to draft while there is still discussion about how we approach 
the design.




> StoragePluginStore instances for different users
> 
>
> Key: DRILL-7871
> URL: https://issues.apache.org/jira/browse/DRILL-7871
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Security
>Affects Versions: 1.18.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>Priority: Major
>
> Different users should have their own storage plugin configs so that each
> user has access only to their own storage. The feature can be based on the
> Drill user impersonation model.
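
Purely as an illustration of the idea described above (nothing here reflects
Drill's actual StoragePluginStore API; all names are hypothetical), per-user
configs could amount to keying the stored plugin configurations by the
impersonated user name:

{code:java|title=hypothetical per-user keyed plugin config store}
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class PerUserPluginConfigStore {
  // user name -> (plugin name -> serialized plugin config)
  private final Map<String, Map<String, String>> configsByUser = new ConcurrentHashMap<>();

  public void put(String userName, String pluginName, String configJson) {
    configsByUser
        .computeIfAbsent(userName, u -> new ConcurrentHashMap<>())
        .put(pluginName, configJson);
  }

  public Optional<String> get(String userName, String pluginName) {
    Map<String, String> userConfigs = configsByUser.get(userName);
    return userConfigs == null
        ? Optional.empty()
        : Optional.ofNullable(userConfigs.get(pluginName));
  }
}
{code}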


