[jira] [Comment Edited] (DRILL-7980) Pushdowns subquery results as parameters
[ https://issues.apache.org/jira/browse/DRILL-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394315#comment-17394315 ] Vitalii Diravka edited comment on DRILL-7980 at 2/14/23 1:02 PM: - Looks like the query works fine (thanks [~volodymyr] for pointing this out):
{code:java}
apache drill> SELECT first_name FROM cp.`employee.json` where first_name in (SELECT first_name FROM cp.`employee.json` where first_name in ('Sheri', 'Derrick'));
+------------+
| first_name |
+------------+
| Sheri      |
| Derrick    |
+------------+
{code}
The query with a star can't be submitted, because the number of columns is unknown:
{code:java}
apache drill> SELECT * FROM cp.`employee.json` where first_name in (SELECT * FROM cp.`employee.json` where first_name in ('Sheri', 'Derrick'));
Error: SYSTEM ERROR: NullPointerException: Cannot invoke "org.apache.drill.exec.record.RecordBatchSizer$ColumnSize.getStdNetOrNetSizePerEntry()" because "columnSize" is null
{code}
For the HTTP plugin the planner works in the same manner. Marking the ticket as invalid.
> Pushdowns subquery results as parameters
>
> Key: DRILL-7980
> URL: https://issues.apache.org/jira/browse/DRILL-7980
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning Optimization
> Affects Versions: 1.19.0
> Reporter: Vitalii Diravka
> Assignee: Vitalii Diravka
> Priority: Major
>
> If a subquery is used to generate parameters for an API call you get 400 errors and the parameters don't show up in the URL in the error.
>
> For example: if you have an API config that requires a "q=" parameter and want to populate that with a subquery like this...
>
> {code:java}
> SELECT *
> FROM api.example
> WHERE q IN (SELECT col FROM source){code}
> It doesn't work. The expected behavior would be multiple API calls (one for each value in "col") with the results aggregated back together for the projection.
> Query example:
> {code:java}
> vitalii@vitalii-UX331UN:~/IdeaProjects/drill$ distribution/target/apache-drill-1.20.0-SNAPSHOT/apache-drill-1.20.0-SNAPSHOT/bin/drill-embedded
> Apache Drill 1.20.0-SNAPSHOT
> "A Drill is a terrible thing to waste."
> apache drill> SELECT * FROM cp.`employee.json` where first_name in ('Sheri', 'Derrick');
> | employee_id | full_name | first_name | last_name | position_id | position_title | store_id | department_id | birth_date | hire_date | salary | supervisor_id | education_level | marital_status | gender | management_role |
> | 1 | Sheri Nowmer | Sheri | Nowmer | 1 | President | 0 | 1 | 1961-08-26 | 1994-12-01 00:00:00.0 | 8.0 | 0 | Graduate Degree | S | F | Senior Management |
> | 2 | Derrick Whelply | Derrick | Whelply | 2 | VP Country Manager | 0 | 1 | 1915-07-03 | 1994-12-01 00:00:00.0 | 4.0 | 1 | Graduate Degree | M | M | Senior Management |
> 2 rows selected (0.156 seconds)
> apache drill> select * from (VALUES('Sheri', 'Derrick'));
> +--------+---------+
> | EXPR$0 | EXPR$1  |
> +--------+---------+
> | Sheri  | Derrick |
> +--------+---------+
> 1 row selected (0.143 seconds)
> apache drill> SELECT * FROM cp.`employee.json` where first_name in
[jira] [Updated] (DRILL-8248) Fix http_request for several rows
[ https://issues.apache.org/jira/browse/DRILL-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8248: --- Description: For example the following query should return 2 rows, but `http_request` always returns one row.
{code:java}
SELECT `task_id`, http_request('devapp_cu./task/:task_id', `task_id`) as `Total`
FROM(
  SELECT `Id` as `task_id`
  FROM `devapp_cu`.`/list/:list_id/task(custom_fields)`
  WHERE list_id = '180680339')
LIMIT 2
{code}
{code:java}
Found one or more vector errors from ProjectRecordBatch
col_1 - NullableFloat8Vector: Row count = 2, but value count = 1
{code}
> Fix http_request for several rows
>
> Key: DRILL-8248
> URL: https://issues.apache.org/jira/browse/DRILL-8248
> Project: Apache Drill
> Issue Type: Sub-task
> Components: Functions - Drill
> Reporter: Vitalii Diravka
> Assignee: Vitalii Diravka
> Priority: Major
>
> For example the following query should return 2 rows, but `http_request` always returns one row.
> {code:java}
> SELECT `task_id`, http_request('devapp_cu./task/:task_id', `task_id`) as `Total`
> FROM(
>   SELECT `Id` as `task_id`
>   FROM `devapp_cu`.`/list/:list_id/task(custom_fields)`
>   WHERE list_id = '180680339')
> LIMIT 2
> {code}
> {code:java}
> Found one or more vector errors from ProjectRecordBatch
> col_1 - NullableFloat8Vector: Row count = 2, but value count = 1
> {code}
-- This message was sent by Atlassian Jira (v8.20.7#820007)
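The vector error above points at a simple invariant: a scalar function such as `http_request` must emit exactly one output value per input row, so a value count of 1 against a row count of 2 is a violation. A minimal JDK-only sketch of that invariant (hypothetical names, not Drill's implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy model of the invariant behind the vector error: a per-row function
// applied over a batch must produce one output value per input row, so
// the output value count always equals the incoming row count.
public class PerRowInvariant {
    // Hypothetical stand-in for http_request: one call per input row.
    static List<String> applyPerRow(List<String> taskIds, Function<String, String> fn) {
        List<String> out = new ArrayList<>();
        for (String id : taskIds) {
            out.add(fn.apply(id)); // exactly one output value per row
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("101", "102");
        List<String> totals = applyPerRow(rows, id -> "total-for-" + id);
        // The bug described in DRILL-8248 corresponds to totals.size() == 1 here.
        if (totals.size() != rows.size()) {
            throw new IllegalStateException(
                "Row count = " + rows.size() + ", but value count = " + totals.size());
        }
        System.out.println(totals);
    }
}
```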
[jira] [Created] (DRILL-8248) Fix http_request for several rows
Vitalii Diravka created DRILL-8248: -- Summary: Fix http_request for several rows Key: DRILL-8248 URL: https://issues.apache.org/jira/browse/DRILL-8248 Project: Apache Drill Issue Type: Sub-task Components: Functions - Drill Reporter: Vitalii Diravka Assignee: Vitalii Diravka For example the following query should return 2 rows, but `http_request` always returns one row. {code:java} SELECT `task_id`, http_request('devapp_cu./task/:task_id', `task_id`) as `Total` FROM( SELECT `Id` as `task_id` FROM `devapp_cu`.`/list/:list_id/task(custom_fields)` WHERE list_id = '180680339') LIMIT 2 {code} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (DRILL-8242) Fix output for HttpHelperFunctions
[ https://issues.apache.org/jira/browse/DRILL-8242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8242. Resolution: Fixed > Fix output for HttpHelperFunctions > -- > > Key: DRILL-8242 > URL: https://issues.apache.org/jira/browse/DRILL-8242 > Project: Apache Drill > Issue Type: Sub-task > Components: Functions - Drill >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Critical > Fix For: 2.0.0 > > > DRILL-8236 changed HttpHelperFunctions to use the EVF based JSON v2 reader, but > the function output was left unchanged. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8242) Fix output for HttpHelperFunctions
Vitalii Diravka created DRILL-8242: -- Summary: Fix output for HttpHelperFunctions Key: DRILL-8242 URL: https://issues.apache.org/jira/browse/DRILL-8242 Project: Apache Drill Issue Type: Sub-task Components: Functions - Drill Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 DRILL-8236 changed HttpHelperFunctions to use the EVF based JSON v2 reader, but the function output was left unchanged. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8236) Move HttpHelperFunctions to use JSON2 reader
Vitalii Diravka created DRILL-8236: -- Summary: Move HttpHelperFunctions to use JSON2 reader Key: DRILL-8236 URL: https://issues.apache.org/jira/browse/DRILL-8236 Project: Apache Drill Issue Type: Improvement Components: Client - HTTP Reporter: Vitalii Diravka Assignee: Vitalii Diravka HttpHelperFunctions still uses the old JSON reader. Need to switch it to the new EVF based reader. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (DRILL-8169) Add UDFs to HTTP Plugin to Facilitate Joins
[ https://issues.apache.org/jira/browse/DRILL-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8169. Resolution: Fixed > Add UDFs to HTTP Plugin to Facilitate Joins > --- > > Key: DRILL-8169 > URL: https://issues.apache.org/jira/browse/DRILL-8169 > Project: Apache Drill > Issue Type: Improvement > Components: Storage - Other >Affects Versions: 1.20.0 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Major > Fix For: 2.0.0 > > > There are some situations where a user might want to join data with an API > result and the pushdowns prevent that from happening. The main situation > where this happens is when > an API has parameters which are part of the URL AND these parameters are > dynamically populated via a join. > In this case, there are two functions `http_get_url` and `http_get` which you > can use to facilitate these joins. > * `http_get('', )`: This function accepts a > storage plugin as input and an optional list of parameters to include in a > URL. > * `http_get_url(, )`: This function works in the same way except > that it does not pull any configuration information from existing storage > plugins. > ### Example Queries > Let's say that you have a storage plugin called `github` with an endpoint > called `repos` which points to the url: https://github.com/orgs/\{org}/repos. > It is easy enough to > write a query like this: > ```sql > SELECT * > FROM github.repos > WHERE org='apache' > ``` > However, if you had a file with organizations and wanted to join this with > the API, the query would fail. Using the functions listed above you could get > this data as follows: > ```sql > SELECT http_get('github.repos', `org`) > FROM dfs.`some_data.csvh` > ``` > or > ```sql > SELECT http_get('https://github.com/orgs/\{org}/repos', `org`) > FROM dfs.`some_data.csvh` > ``` > ** WARNING: This functionality will execute an HTTP Request FOR EVERY ROW IN > YOUR DATA. Use with caution. 
** > -- This message was sent by Atlassian Jira (v8.20.7#820007)
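Conceptually, the per-row behavior of `http_get_url` boils down to substituting each joined column value into the URL template, producing one request URL per input row, which is exactly why the warning above about one HTTP request per row matters. A JDK-only sketch with hypothetical helper names, not Drill's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the URL templating the UDFs rely on:
// each joined value fills the {placeholder} in the template, yielding
// one request URL (and therefore one HTTP call) per input row.
public class UrlTemplateDemo {
    static List<String> expand(String template, String placeholder, List<String> values) {
        List<String> urls = new ArrayList<>();
        for (String v : values) {
            urls.add(template.replace("{" + placeholder + "}", v));
        }
        return urls;
    }

    public static void main(String[] args) {
        List<String> urls = expand("https://github.com/orgs/{org}/repos", "org",
                                   List.of("apache", "square"));
        // Two rows in, two request URLs out.
        System.out.println(urls);
    }
}
```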
[jira] [Resolved] (DRILL-8229) Add Parameter to Skip Malformed Records to HTTP UDF
[ https://issues.apache.org/jira/browse/DRILL-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8229. Resolution: Fixed > Add Parameter to Skip Malformed Records to HTTP UDF > > > Key: DRILL-8229 > URL: https://issues.apache.org/jira/browse/DRILL-8229 > Project: Apache Drill > Issue Type: Improvement > Components: Functions - Drill >Affects Versions: 1.20.1 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Minor > Fix For: 1.20.2 > > > The http_get and http_request UDFs were not using the JSON parameter to skip > malformed records. This PR fixes that. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8228) Drill2288GetColumnsMetadataWhenNoRowsTest regression
Vitalii Diravka created DRILL-8228: -- Summary: Drill2288GetColumnsMetadataWhenNoRowsTest regression Key: DRILL-8228 URL: https://issues.apache.org/jira/browse/DRILL-8228 Project: Apache Drill Issue Type: Sub-task Reporter: Vitalii Diravka Assignee: Vitalii Diravka Drill2288GetColumnsMetadataWhenNoRowsTest starts to fail after DRILL-8225 {code:java} Error: Failures: Error:Drill2288GetColumnsMetadataWhenNoRowsTest Multiple Failures (2 failures) java.lang.NullPointerException: Cannot invoke "java.io.File.getAbsolutePath()" because the return value of "org.apache.drill.test.BaseDirTestWatcher.getTmpDir()" is null java.lang.NullPointerException: Cannot invoke "java.sql.Connection.close()" because "org.apache.drill.jdbc.test.Drill2288GetColumnsMetadataWhenNoRowsTest.connection" is null {code} https://github.com/apache/drill/runs/6499401042?check_suite_focus=true -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (DRILL-8225) Update LogParser and Yauaa dependencies
[ https://issues.apache.org/jira/browse/DRILL-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8225. Resolution: Fixed > Update LogParser and Yauaa dependencies > --- > > Key: DRILL-8225 > URL: https://issues.apache.org/jira/browse/DRILL-8225 > Project: Apache Drill > Issue Type: Improvement >Reporter: Niels Basjes >Assignee: Niels Basjes >Priority: Minor > > This also includes making the new support for Client Hints available (and > related tests). -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8224) Fix TestHttpPlugin#testSlowResponse
[ https://issues.apache.org/jira/browse/DRILL-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8224: --- Reviewer: Cong Luo > Fix TestHttpPlugin#testSlowResponse > --- > > Key: DRILL-8224 > URL: https://issues.apache.org/jira/browse/DRILL-8224 > Project: Apache Drill > Issue Type: Sub-task >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > In > [DRILL-7973|https://github.com/apache/drill/commit/304230a289505526e1ff1bb1aae932517b7b6965#diff-c6b42ccd4a5372e7c299b3b8052455c0fe618403a957e0b1b908edf46eee4809R880] > the timeout for the http request in the mock http plugin was changed. > But it looks like we need to use another method, not {{_throttleBody:_}} > * Throttles the request reader and response writer to sleep for the given > period after each > * series of [bytesPerPeriod] bytes are transferred. Use this to simulate > network behavior. > That means we can sleep longer than 6 seconds, while only 10s is configured in the plugin. > Possibly {{_new MockResponse().setBodyDelay(20, > TimeUnit.DAYS).setHeadersDelay(20, TimeUnit.DAYS);_}} should work: > https://github.com/square/okhttp/issues/6976#issuecomment-1006028317 > Also the current MockWebServer 4.9.2 version can be updated to 4.9.3: > https://mvnrepository.com/artifact/com.squareup.okhttp3/mockwebserver/4.9.3 -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8209) Introduce rule for converting join with distinct input to semi-join
[ https://issues.apache.org/jira/browse/DRILL-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8209: --- Reviewer: Vitalii Diravka > Introduce rule for converting join with distinct input to semi-join > --- > > Key: DRILL-8209 > URL: https://issues.apache.org/jira/browse/DRILL-8209 > Project: Apache Drill > Issue Type: Sub-task >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > Newer Calcite changed the order of applying rules. AggregateRemoveRule is > applied before SemiJoinRule, so SemiJoinRule cannot be applied later, since > aggregate is pruned from planning. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8224) Fix TestHttpPlugin#testSlowResponse
[ https://issues.apache.org/jira/browse/DRILL-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8224: --- Description: In [DRILL-7973|https://github.com/apache/drill/commit/304230a289505526e1ff1bb1aae932517b7b6965#diff-c6b42ccd4a5372e7c299b3b8052455c0fe618403a957e0b1b908edf46eee4809R880] the timeout for the http request in the mock http plugin was changed. But it looks like we need to use another method, not {{_throttleBody:_}} * Throttles the request reader and response writer to sleep for the given period after each * series of [bytesPerPeriod] bytes are transferred. Use this to simulate network behavior. That means we can sleep longer than 6 seconds, while only 10s is configured in the plugin. Possibly {{_new MockResponse().setBodyDelay(20, TimeUnit.DAYS).setHeadersDelay(20, TimeUnit.DAYS);_}} should work: https://github.com/square/okhttp/issues/6976#issuecomment-1006028317 Also the current MockWebServer 4.9.2 version can be updated to 4.9.3: https://mvnrepository.com/artifact/com.squareup.okhttp3/mockwebserver/4.9.3
> Fix TestHttpPlugin#testSlowResponse
> ---
>
> Key: DRILL-8224
> URL: https://issues.apache.org/jira/browse/DRILL-8224
> Project: Apache Drill
> Issue Type: Sub-task
> Reporter: Vitalii Diravka
> Assignee: Vitalii Diravka
> Priority: Major
>
> In [DRILL-7973|https://github.com/apache/drill/commit/304230a289505526e1ff1bb1aae932517b7b6965#diff-c6b42ccd4a5372e7c299b3b8052455c0fe618403a957e0b1b908edf46eee4809R880] the timeout for the http request in the mock http plugin was changed.
> But it looks like we need to use another method, not {{_throttleBody:_}}
> * Throttles the request reader and response writer to sleep for the given period after each
> * series of [bytesPerPeriod] bytes are transferred. Use this to simulate network behavior.
> That means we can sleep longer than 6 seconds, while only 10s is configured in the plugin.
> Possibly {{_new MockResponse().setBodyDelay(20, TimeUnit.DAYS).setHeadersDelay(20, TimeUnit.DAYS);_}} should work: https://github.com/square/okhttp/issues/6976#issuecomment-1006028317
> Also the current MockWebServer 4.9.2 version can be updated to 4.9.3: https://mvnrepository.com/artifact/com.squareup.okhttp3/mockwebserver/4.9.3
-- This message was sent by Atlassian Jira (v8.20.7#820007)
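MockWebServer is an external dependency, but the behavior `setBodyDelay` is wanted for can be illustrated with the JDK alone: delay the entire response past the client's timeout so the client reliably times out, rather than merely slowing bytes that are already flowing as `throttleBody` does. A sketch under those assumptions (not the Drill test code):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

// JDK-only analogue of a "slow response" mock: the handler sleeps before
// sending any headers or body, so a client with a shorter timeout always
// times out -- the condition TestHttpPlugin#testSlowResponse wants to provoke.
public class SlowResponseDemo {
    // Returns true when the client times out before the server responds.
    static boolean requestTimesOut(int serverDelayMs, int clientTimeoutMs) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/slow", exchange -> {
            try {
                Thread.sleep(serverDelayMs); // delay headers and body together
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "too late".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + server.getAddress().getPort() + "/slow"))
                .timeout(Duration.ofMillis(clientTimeoutMs)) // client gives up first
                .build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
            return false; // response arrived in time
        } catch (HttpTimeoutException e) {
            return true;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("timedOut = " + requestTimesOut(2000, 300)); // prints "timedOut = true"
    }
}
```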
[jira] [Created] (DRILL-8224) Fix TestHttpPlugin#testSlowResponse
Vitalii Diravka created DRILL-8224: -- Summary: Fix TestHttpPlugin#testSlowResponse Key: DRILL-8224 URL: https://issues.apache.org/jira/browse/DRILL-8224 Project: Apache Drill Issue Type: Sub-task Reporter: Vitalii Diravka Assignee: Vitalii Diravka In [DRILL-7973|https://github.com/apache/drill/commit/304230a289505526e1ff1bb1aae932517b7b6965#diff-c6b42ccd4a5372e7c299b3b8052455c0fe618403a957e0b1b908edf46eee4809R880] the timeout for the http request in the mock http plugin was changed. But it looks like we need to use another method, not {{_throttleBody:_}} * Throttles the request reader and response writer to sleep for the given period after each * series of [bytesPerPeriod] bytes are transferred. Use this to simulate network behavior. That means we can sleep longer than 6 seconds, while only 10s is configured in the plugin. Possibly {{_new MockResponse().setBodyDelay(20, TimeUnit.DAYS).setHeadersDelay(20, TimeUnit.DAYS);_}} should work: https://github.com/square/okhttp/issues/6976#issuecomment-1006028317 Also I would recommend updating the current MockWebServer 4.9.2 version to 4.9.3 -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (DRILL-8204) Allow Provided Schema for HTTP Plugin in JSON Mode
[ https://issues.apache.org/jira/browse/DRILL-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8204. Resolution: Fixed > Allow Provided Schema for HTTP Plugin in JSON Mode > -- > > Key: DRILL-8204 > URL: https://issues.apache.org/jira/browse/DRILL-8204 > Project: Apache Drill > Issue Type: Improvement > Components: Storage - Other >Affects Versions: 1.20.0 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Major > Fix For: 2.0.0 > > > One of the challenges of querying APIs is inconsistent data. Drill allows you > to provide a schema for individual endpoints. You can do this in one of two > ways: either by listing the fields under `providedSchema` in the plugin's `jsonOptions`, or by > providing a serialized TupleMetadata of the desired schema. This is an > advanced functionality and should only be used by advanced Drill users. > The schema provisioning currently supports complex types of Arrays and Maps > at any nesting level. > ### Example Schema Provisioning: > ```json > "jsonOptions": { > "providedSchema": [ > { > "fieldName": "int_field", > "fieldType": "bigint" > }, { > "fieldName": "jsonField", > "fieldType": "varchar", > "properties": { > "drill.json-mode":"json" > } > },{ > // Array field > "fieldName": "stringField", > "fieldType": "varchar", > "isArray": true > }, { > // Map field > "fieldName": "mapField", > "fieldType": "map", > "fields": [ > { > "fieldName": "nestedField", > "fieldType": "int" > },{ > "fieldName": "nestedField2", > "fieldType": "varchar" > } > ] > } > ] > } > ``` > ### Example Provisioning the Schema with a JSON String > ```json > "jsonOptions": { > "jsonSchema": > "\{\"type\":\"tuple_schema\",\"columns\":[{\"name\":\"outer_map\",\"type\":\"STRUCT<`int_field` > BIGINT, `int_array` ARRAY>\",\"mode\":\"REQUIRED\"}]}" > } > ``` > You can print out a JSON string of a schema with the Java code below. 
> ```java > TupleMetadata schema = new SchemaBuilder() > .addNullable("a", MinorType.BIGINT) > .addNullable("m", MinorType.VARCHAR) > .build(); > ColumnMetadata m = schema.metadata("m"); > m.setProperty(JsonLoader.JSON_MODE, JsonLoader.JSON_LITERAL_MODE); > System.out.println(schema.jsonString()); > ``` > This will generate something like the JSON string below: > ```json > { > "type":"tuple_schema", > "columns":[ > {"name":"a","type":"BIGINT","mode":"OPTIONAL"}, > {"name":"m","type":"VARCHAR","mode":"OPTIONAL","properties":\{"drill.json-mode":"json"} > } > ] > } > ``` > ## Dealing With Inconsistent Schemas > One of the major challenges of interacting with JSON data is when the schema > is inconsistent. Drill has a `UNION` data type which is marked as > experimental. At the time of > writing, the HTTP plugin does not support the `UNION`, however supplying a > schema can solve a lot of those issues. > ### Json Mode > Drill offers the option of reading all JSON values as a string. While this > can complicate downstream analytics, it can also be a more memory-efficient > way of reading data with > inconsistent schema. Unfortunately, at the time of writing, JSON-mode is only > available with a provided schema. However, future work will allow this mode > to be enabled for > any JSON data. > Enabling JSON Mode: > You can enable JSON mode simply by adding the `drill.json-mode` property with > a value of `json` to a field, as shown below: > ```json > { > "fieldName": "jsonField", > "fieldType": "varchar", > "properties": { > "drill.json-mode": "json" > } > } > ``` -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (DRILL-8178) Bump S3 SDK to Latest Version
[ https://issues.apache.org/jira/browse/DRILL-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8178. Resolution: Fixed > Bump S3 SDK to Latest Version > -- > > Key: DRILL-8178 > URL: https://issues.apache.org/jira/browse/DRILL-8178 > Project: Apache Drill > Issue Type: Task > Components: Storage - Other >Affects Versions: 1.20.0 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Minor > Fix For: 2.0.0 > > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8203) Enable Union Type for JSON2 reader
[ https://issues.apache.org/jira/browse/DRILL-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8203: --- Summary: Enable Union Type for JSON2 reader (was: Enable Union) > Enable Union Type for JSON2 reader > -- > > Key: DRILL-8203 > URL: https://issues.apache.org/jira/browse/DRILL-8203 > Project: Apache Drill > Issue Type: Improvement > Components: Storage - JSON >Affects Versions: 1.20.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > Fix For: 2.0.0 > > > Enable UNION TYPE Mode support for the EVF JSON reader, which is controlled by the > _exec.enable_union_type_ system/session option. > Need to leverage the > _SingleMapWriter#unionEnabled_ functionality and bind it to the > _JsonLoaderOptions#unionEnabled_ config -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8203) Enable Union
Vitalii Diravka created DRILL-8203: -- Summary: Enable Union Key: DRILL-8203 URL: https://issues.apache.org/jira/browse/DRILL-8203 Project: Apache Drill Issue Type: Improvement Components: Storage - JSON Affects Versions: 1.20.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 Enable UNION TYPE Mode support for the EVF JSON reader, which is controlled by the _exec.enable_union_type_ system/session option. Need to leverage the _SingleMapWriter#unionEnabled_ functionality and bind it to the _JsonLoaderOptions#unionEnabled_ config -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8201) SchemaChange in HashAgg operator
[ https://issues.apache.org/jira/browse/DRILL-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8201: --- Component/s: Execution - Codegen Storage - JSON > SchemaChange in HashAgg operator > > > Key: DRILL-8201 > URL: https://issues.apache.org/jira/browse/DRILL-8201 > Project: Apache Drill > Issue Type: Improvement > Components: Execution - Codegen, Storage - JSON >Affects Versions: 1.20.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Hash aggregate does not support schema change due to the HashAggBatch > implementation: > {code:java} > case UPDATE_AGGREGATOR: > throw UserException.unsupportedError() > .message(SchemaChangeException.schemaChanged( > "Hash aggregate does not support schema change", > incomingSchema, > incoming.getSchema()).getMessage()) > .build(logger); > default: {code} > After the JSON update to leverage EVF there is a schema change for this operator in > the > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithGroupByInParent_ test > case -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8201) SchemaChange in HashAgg operator
[ https://issues.apache.org/jira/browse/DRILL-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8201: --- Fix Version/s: 2.0.0 > SchemaChange in HashAgg operator > > > Key: DRILL-8201 > URL: https://issues.apache.org/jira/browse/DRILL-8201 > Project: Apache Drill > Issue Type: Improvement > Components: Execution - Codegen, Storage - JSON >Affects Versions: 1.20.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > Fix For: 2.0.0 > > > Hash aggregate does not support schema change due to the HashAggBatch > implementation: > {code:java} > case UPDATE_AGGREGATOR: > throw UserException.unsupportedError() > .message(SchemaChangeException.schemaChanged( > "Hash aggregate does not support schema change", > incomingSchema, > incoming.getSchema()).getMessage()) > .build(logger); > default: {code} > After the JSON update to leverage EVF there is a schema change for this operator in > the > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithGroupByInParent_ test > case -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8201) SchemaChange in HashAgg operator
[ https://issues.apache.org/jira/browse/DRILL-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8201: --- Affects Version/s: 1.20.0 > SchemaChange in HashAgg operator > > > Key: DRILL-8201 > URL: https://issues.apache.org/jira/browse/DRILL-8201 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.20.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Hash aggregate does not support schema change due to the HashAggBatch > implementation: > {code:java} > case UPDATE_AGGREGATOR: > throw UserException.unsupportedError() > .message(SchemaChangeException.schemaChanged( > "Hash aggregate does not support schema change", > incomingSchema, > incoming.getSchema()).getMessage()) > .build(logger); > default: {code} > After the JSON update to leverage EVF there is a schema change for this operator in > the > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithGroupByInParent_ test > case -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8201) SchemaChange in HashAgg operator
Vitalii Diravka created DRILL-8201: -- Summary: SchemaChange in HashAgg operator Key: DRILL-8201 URL: https://issues.apache.org/jira/browse/DRILL-8201 Project: Apache Drill Issue Type: Improvement Reporter: Vitalii Diravka Assignee: Vitalii Diravka Hash aggregate does not support schema change due to the HashAggBatch implementation: {code:java} case UPDATE_AGGREGATOR: throw UserException.unsupportedError() .message(SchemaChangeException.schemaChanged( "Hash aggregate does not support schema change", incomingSchema, incoming.getSchema()).getMessage()) .build(logger); default: {code} After the JSON update to leverage EVF there is a schema change for this operator in the _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithGroupByInParent_ test case -- This message was sent by Atlassian Jira (v8.20.7#820007)
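The check that leads to the `UPDATE_AGGREGATOR` branch above reduces to a schema-equality test between the schema captured at operator setup and each incoming batch. A toy illustration with hypothetical types (not Drill's BatchSchema):

```java
import java.util.List;

// Minimal model of the schema-change detection that makes HashAggBatch
// throw: remember the schema of the first batch, and treat any later
// batch with a different schema as an unsupported schema change.
public class SchemaChangeDemo {
    record Column(String name, String type) {}

    static boolean schemaChanged(List<Column> setupSchema, List<Column> incoming) {
        return !setupSchema.equals(incoming);
    }

    public static void main(String[] args) {
        List<Column> first = List.of(new Column("first_name", "VARCHAR"));
        List<Column> second = List.of(new Column("first_name", "VARCHAR"),
                                      new Column("salary", "FLOAT8"));
        if (schemaChanged(first, second)) {
            // This is the point where HashAggBatch raises unsupportedError().
            System.out.println("Hash aggregate does not support schema change");
        }
    }
}
```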
[jira] [Created] (DRILL-8199) Convert Excel EVF1 to EVF2
Vitalii Diravka created DRILL-8199: -- Summary: Convert Excel EVF1 to EVF2 Key: DRILL-8199 URL: https://issues.apache.org/jira/browse/DRILL-8199 Project: Apache Drill Issue Type: Sub-task Components: Storage - Excel Reporter: Vitalii Diravka Assignee: Vitalii Diravka EVF1 is currently implemented. Need to update to EVF2 _ScanFrameworkVersion#EVF_V2_ (DRILL-8085) -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8198) XML EVF2 reader provideSchema usage
Vitalii Diravka created DRILL-8198: -- Summary: XML EVF2 reader provideSchema usage Key: DRILL-8198 URL: https://issues.apache.org/jira/browse/DRILL-8198 Project: Apache Drill Issue Type: Sub-task Components: Storage - XML Affects Versions: 1.20.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 XMLBatchReader is converted to an EVF2 reader, but does not use provideSchema for the Schema Provisioning feature -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8197) Enable All Text Mode support for EVF JSON reader
Vitalii Diravka created DRILL-8197: -- Summary: Enable All Text Mode support for EVF JSON reader Key: DRILL-8197 URL: https://issues.apache.org/jira/browse/DRILL-8197 Project: Apache Drill Issue Type: Improvement Components: Storage - JSON Affects Versions: 1.20.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 Enable All Text Mode support for EVF JSON reader. _JsonLoaderOptions#allTextMode_ -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8196) JSON EVF2
Vitalii Diravka created DRILL-8196: -- Summary: JSON EVF2 Key: DRILL-8196 URL: https://issues.apache.org/jira/browse/DRILL-8196 Project: Apache Drill Issue Type: Improvement Components: Storage - JSON Affects Versions: 1.20.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 DRILL-8085 introduces the EVF2 format. Switch EVF1 JSON to EVF2 -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8195) Enable Mongo ExtendedTypeName for JSON EVF
Vitalii Diravka created DRILL-8195: -- Summary: Enable Mongo ExtendedTypeName for JSON EVF Key: DRILL-8195 URL: https://issues.apache.org/jira/browse/DRILL-8195 Project: Apache Drill Issue Type: Improvement Components: Storage - JSON Affects Versions: 1.20.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 2.0.0 Add the new format for mongo-styled dates introduced in DRILL-8143 to the EVF based JSON reader. The following test cases failed: _TestExtendedTypes, TestFrameworkTest, TestCsvWithoutHeadersWithSchema, TestNestedDateTimeTimestamp, TestJsonRecordReader_
{code:java}
java.lang.Exception: org.apache.drill.common.exceptions.UserRemoteException: DATA_READ ERROR: Type of JSON token is not compatible with its column
JSON token type: date
JSON token: 2019-09-30T20:47:43.10+05
Column: date
Column type: TIMESTAMP
Format plugin: json
Format plugin: JSONFormatPlugin
Plugin config name: json
File name: classpath:/jsoninput/input2.json
Line: 32
Position: 42
Near token: 2019-09-30T20:47:43.10+05
Fragment: 0:0
{code}
-- This message was sent by Atlassian Jira (v8.20.7#820007)
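This is not Drill code, but a JDK sketch showing that the offending token from the log, `2019-09-30T20:47:43.10+05`, parses fine once the formatter accepts a two-digit second fraction and the short `+05` style offset; the pattern below is an assumption for illustration only:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical formatter for the mongo-styled timestamp in the error log:
// "SS" accepts exactly two fractional digits and "x" accepts an hour-only
// offset such as "+05", which stricter ISO-style parsing tends to reject.
public class MongoStyleDateDemo {
    static OffsetDateTime parseMongoStyle(String token) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("uuuu-MM-dd'T'HH:mm:ss.SSx");
        return OffsetDateTime.parse(token, f);
    }

    public static void main(String[] args) {
        OffsetDateTime ts = parseMongoStyle("2019-09-30T20:47:43.10+05");
        System.out.println(ts);
    }
}
```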
[jira] [Updated] (DRILL-8145) Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case
[ https://issues.apache.org/jira/browse/DRILL-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8145: --- Description: The issue can be observed from [GitHub CI run|https://github.com/apache/drill/runs/5105482077?check_suite_focus=true] The flaky test can fail with 2 different errors: 1. {code:java} 931196 DEBUG [main] [org.apache.drill.exec.server.TestDrillbitResilience] - Sleep thread interrupted. Ignore it java.lang.InterruptedException: sleep interrupted at java.base/java.lang.Thread.sleep(Native Method) at org.apache.drill.exec.server.TestDrillbitResilience.countAllocatedMemory(TestDrillbitResilience.java:916) at org.apache.drill.exec.server.TestDrillbitResilience.memoryLeaksWhenCancelled(TestDrillbitResilience.java:619) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:46) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204) at
[jira] [Updated] (DRILL-8145) Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case
[ https://issues.apache.org/jira/browse/DRILL-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8145: --- Description: The issue can be observed from [GitHub CI run|https://github.com/apache/drill/runs/5105482077?check_suite_focus=true] 1. {code:java} 931196 DEBUG [main] [org.apache.drill.exec.server.TestDrillbitResilience] - Sleep thread interrupted. Ignore it java.lang.InterruptedException: sleep interrupted at java.base/java.lang.Thread.sleep(Native Method) at org.apache.drill.exec.server.TestDrillbitResilience.countAllocatedMemory(TestDrillbitResilience.java:916) at org.apache.drill.exec.server.TestDrillbitResilience.memoryLeaksWhenCancelled(TestDrillbitResilience.java:619) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:46) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226) at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204) at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:139) at
[jira] [Created] (DRILL-8145) Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case
Vitalii Diravka created DRILL-8145: -- Summary: Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case Key: DRILL-8145 URL: https://issues.apache.org/jira/browse/DRILL-8145 Project: Apache Drill Issue Type: Sub-task Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: Future -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8138) OperatorStats race condition produceable by some Parquet unit tests
[ https://issues.apache.org/jira/browse/DRILL-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8138: --- Description: Modifying a test like TestParquetWriter#testTPCHReadWriteSnappy so that it runs under the async column reader will expose a race condition that will cause the query to fail. No way to reproduce the bug in Drill itself is currently known. It fixes the TODO in _DrillCompressionCodecFactory#testTPCHReadWriteLz4_ was:Modifying a test like TestParquetWriter#testTPCHReadWriteSnappy so that it runs under the async column reader will expose a race condition that will cause the query to fail. No way to reproduce the bug in Drill itself is currently known. > OperatorStats race condition produceable by some Parquet unit tests > --- > > Key: DRILL-8138 > URL: https://issues.apache.org/jira/browse/DRILL-8138 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: James Turton >Priority: Major > > Modifying a test like TestParquetWriter#testTPCHReadWriteSnappy so that it > runs under the async column reader will expose a race condition that will > cause the query to fail. No way to reproduce the bug in Drill itself is > currently known. > It fixes the TODO in > _DrillCompressionCodecFactory#testTPCHReadWriteLz4_ -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-7009) Lots of tests are locale-specific. Enforce en_US locale for now.
[ https://issues.apache.org/jira/browse/DRILL-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-7009: --- Labels: locale (was: ) > Lots of tests are locale-specific. Enforce en_US locale for now. > > > Key: DRILL-7009 > URL: https://issues.apache.org/jira/browse/DRILL-7009 > Project: Apache Drill > Issue Type: Bug > Components: Server, Tools, Build Test >Affects Versions: 1.15.0 >Reporter: Vladimir Sitnikov >Priority: Major > Labels: locale > > Lots of tests fail for locale reasons when running {{mvn test}} in ru_RU > locale. > I suggest to enforce en_US for now so {{mvn test}} works at least > For instance: > {noformat} > [ERROR] Failures: > [ERROR] > TestDurationFormat.testCompactSecMillis:58->validateDurationFormat:43 > expected:<4[,]545s> but was:<4[.]545s> > [ERROR] > TestDurationFormat.testCompactTwoDigitMilliSec:48->validateDurationFormat:43 > expected:<0[,]045s> but was:<0[.]045s> > [ERROR] > TestValueVectorElementFormatter.testFormatValueVectorElementAllDateTimeFormats:142 > expected:<[Mon, Nov] 5, 2012> but was:<[Пн, ноя] 5, 2012> > [ERROR] > TestValueVectorElementFormatter.testFormatValueVectorElementDateValidPattern:83 > expected:<[Mon, Nov] 5, 2012> but was:<[Пн, ноя] 5, 2012> > [ERROR] Errors: > [ERROR] TestFunctionsQuery.testToCharFunction:567 » After matching 0 > records, did not... > [ERROR] TestSelectivity.testFilterSelectivityOptions » UserRemote PARSE > ERROR: Encount... > [ERROR] > TestVarDecimalFunctions.testCastDecimalDouble:724->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestVarDecimalFunctions.testCastDoubleDecimal:694->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] TestVarDecimalFunctions.testDecimalToChar:775 » at position 0 > column '`s1`' m... > [ERROR] TestTopNSchemaChanges.testMissingColumn:192 » > org.apache.drill.common.excepti... > [ERROR] TestTopNSchemaChanges.testNumericTypes:82 » > org.apache.drill.common.exception... 
> [ERROR] TestTopNSchemaChanges.testUnionTypes:162 » > org.apache.drill.common.exceptions... > [ERROR] > TestMergeJoinWithSchemaChanges.testNumericStringTypes:192->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestMergeJoinWithSchemaChanges.testNumericTypes:114->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestMergeJoinWithSchemaChanges.testOneSideSchemaChanges:348->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] TestExternalSort.testNumericTypesLegacy:49->testNumericTypes:113 » > org.apache... > [ERROR] TestExternalSort.testNumericTypesManaged:44->testNumericTypes:113 » > org.apach... > [ERROR] TestImageRecordReader.testAviImage:101->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testBmpImage:56->createAndQuery:50 » at > position 0 colu... > [ERROR] TestImageRecordReader.testEpsImage:121->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testJpegImage:71->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testMovImage:111->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testPngImage:81->createAndQuery:50 » at > position 0 colu... > [ERROR] TestImageRecordReader.testPsdImage:86->createAndQuery:50 » at > position 0 colu... > [INFO] > [ERROR] Tests run: 3723, Failures: 4, Errors: 20, Skipped: 157 > [INFO] > [INFO] > > [INFO] Reactor Summary: > [INFO] > [INFO] Apache Drill Root POM 1.16.0-SNAPSHOT .. SUCCESS [ 6.470 > s] > [INFO] tools/Parent Pom ... SUCCESS [ 0.928 > s] > [INFO] tools/freemarker codegen tooling ... SUCCESS [ 6.004 > s] > [INFO] Drill Protocol . SUCCESS [ 5.090 > s] > [INFO] Common (Logical Plan, Base expressions) SUCCESS [ 5.898 > s] > [INFO] Logical Plan, Base expressions . SUCCESS [ 5.662 > s] > [INFO] exec/Parent Pom SUCCESS [ 0.696 > s] > [INFO] exec/memory/Parent Pom . SUCCESS [ 0.569 > s] > [INFO] exec/memory/base ... SUCCESS [ 3.380 > s] > [INFO] exec/rpc ... SUCCESS [ 1.782 > s] > [INFO] exec/Vectors ... 
SUCCESS [ 6.364 > s] > [INFO] contrib/Parent Pom . SUCCESS [ 0.487 > s] > [INFO] contrib/data/Parent Pom SUCCESS [ 0.604 > s] > [INFO] contrib/data/tpch-sample-data .. SUCCESS [ 2.891 > s] > [INFO] exec/Java Execution Engine .
[jira] [Updated] (DRILL-7005) TestDurationFormat and TestValueVectorElementFormatter are locale-dependent
[ https://issues.apache.org/jira/browse/DRILL-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-7005: --- Labels: locale (was: ) > TestDurationFormat and TestValueVectorElementFormatter are locale-dependent > --- > > Key: DRILL-7005 > URL: https://issues.apache.org/jira/browse/DRILL-7005 > Project: Apache Drill > Issue Type: Bug > Components: Tools, Build Test >Affects Versions: 1.16.0 >Reporter: Vladimir Sitnikov >Priority: Major > Labels: locale > > {{mvn test}} seems to fail in Russian locale: > {noformat} > [ERROR] Failures: > [ERROR] > TestDurationFormat.testCompactSecMillis:58->validateDurationFormat:43 > expected:<4[,]545s> but was:<4[.]545s> > [ERROR] > TestDurationFormat.testCompactTwoDigitMilliSec:48->validateDurationFormat:43 > expected:<0[,]045s> but was:<0[.]045s> > [ERROR] > TestValueVectorElementFormatter.testFormatValueVectorElementAllDateTimeFormats:142 > expected:<[Mon, Nov] 5, 2012> but was:<[Пн, ноя] 5, 2012> > [ERROR] > TestValueVectorElementFormatter.testFormatValueVectorElementDateValidPattern:83 > expected:<[Mon, Nov] 5, 2012> but was:<[Пн, ноя] 5, 2012> > [ERROR] Errors: > [ERROR] TestFunctionsQuery.testToCharFunction:567 » After matching 0 > records, did not... > [ERROR] TestSelectivity.testFilterSelectivityOptions » UserRemote PARSE > ERROR: Encount... > [ERROR] > TestVarDecimalFunctions.testCastDecimalDouble:724->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestVarDecimalFunctions.testCastDoubleDecimal:694->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] TestVarDecimalFunctions.testDecimalToChar:775 » at position 0 > column '`s1`' m... > [ERROR] TestTopNSchemaChanges.testMissingColumn:192 » > org.apache.drill.common.excepti... > [ERROR] TestTopNSchemaChanges.testNumericTypes:82 » > org.apache.drill.common.exception... > [ERROR] TestTopNSchemaChanges.testUnionTypes:162 » > org.apache.drill.common.exceptions... 
> [ERROR] > TestMergeJoinWithSchemaChanges.testNumericStringTypes:192->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestMergeJoinWithSchemaChanges.testNumericTypes:114->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] > TestMergeJoinWithSchemaChanges.testOneSideSchemaChanges:348->BaseTestQuery.testRunAndReturn:341 > » Rpc > [ERROR] TestExternalSort.testNumericTypesLegacy:49->testNumericTypes:113 » > org.apache... > [ERROR] TestExternalSort.testNumericTypesManaged:44->testNumericTypes:113 » > org.apach... > [ERROR] TestImageRecordReader.testAviImage:101->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testBmpImage:56->createAndQuery:50 » at > position 0 colu... > [ERROR] TestImageRecordReader.testEpsImage:121->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testJpegImage:71->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testMovImage:111->createAndQuery:50 » at > position 0 col... > [ERROR] TestImageRecordReader.testPngImage:81->createAndQuery:50 » at > position 0 colu... > [ERROR] TestImageRecordReader.testPsdImage:86->createAndQuery:50 » at > position 0 colu... > [INFO] > [ERROR] Tests run: 3723, Failures: 4, Errors: 20, Skipped: 157{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
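The `4[,]545s` vs `4[.]545s` mismatches above come straight from locale-sensitive number formatting: `String.format` without an explicit locale uses the JVM default, so the decimal separator flips between '.' and ',' depending on the environment. A minimal illustration (the class and method names are ours, not Drill's DurationFormat):

```java
import java.util.Locale;

public class LocaleSeparatorDemo {
  // The same value renders differently per locale:
  // en_US uses '.' as the decimal separator, ru_RU uses ','.
  public static String formatSeconds(double seconds, Locale locale) {
    return String.format(locale, "%.3fs", seconds);
  }
}
```

Pinning `Locale.US` (or setting the default locale in test setup, as DRILL-7009 proposes) makes such assertions deterministic regardless of the machine's locale.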
[jira] [Updated] (DRILL-8107) Hadoop2 backport Maven profile
[ https://issues.apache.org/jira/browse/DRILL-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8107: --- Fix Version/s: 1.20.0 > Hadoop2 backport Maven profile > -- > > Key: DRILL-8107 > URL: https://issues.apache.org/jira/browse/DRILL-8107 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - HDF5, Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > Some Drill clients are stuck on the old Hadoop2 cluster version. To run the > latest version of Drill, a Maven profile needs to be added to build Drill for > that environment. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8107) Hadoop2 backport Maven profile
[ https://issues.apache.org/jira/browse/DRILL-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8107: --- Summary: Hadoop2 backport Maven profile (was: Haddop2 backport Maven profile) > Hadoop2 backport Maven profile > -- > > Key: DRILL-8107 > URL: https://issues.apache.org/jira/browse/DRILL-8107 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - HDF5, Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Some Drill clients are stuck on the old Hadoop2 cluster version. To run the > latest version of Drill, a Maven profile needs to be added to build Drill for > that environment. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-63) Use Phoenix for HBase query execution and storage
[ https://issues.apache.org/jira/browse/DRILL-63?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-63. -- Resolution: Duplicate > Use Phoenix for HBase query execution and storage > - > > Key: DRILL-63 > URL: https://issues.apache.org/jira/browse/DRILL-63 > Project: Apache Drill > Issue Type: New Feature >Reporter: James Taylor >Priority: Major > Fix For: Future > > > Phoenix (https://github.com/forcedotcom/phoenix) already speaks SQL and can > execute a distributed query plan. I'd be happy to volunteer plugging it for > storage/query execution. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-7863) Add Storage Plugin for Apache Phoenix
[ https://issues.apache.org/jira/browse/DRILL-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-7863. Resolution: Fixed > Add Storage Plugin for Apache Phoenix > - > > Key: DRILL-7863 > URL: https://issues.apache.org/jira/browse/DRILL-7863 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - Other >Reporter: Cong Luo >Assignee: Cong Luo >Priority: Major > > There is a to-do list : > # MVP on EVF. > # Security Authentication. > # Support both the thin(PQS) and fat(ZK) driver. > # Compatibility with phoenix 4.x and 5.x. > # Shaded dependencies. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8107) Haddop2 backport Maven profile
[ https://issues.apache.org/jira/browse/DRILL-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8107: --- Summary: Haddop2 backport Maven profile (was: Haddop2.0 backport Maven profile) > Haddop2 backport Maven profile > -- > > Key: DRILL-8107 > URL: https://issues.apache.org/jira/browse/DRILL-8107 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - HDF5, Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Some Drill clients are stuck on the old Hadoop2 cluster version. To run the > latest version of Drill, a Maven profile needs to be added to build Drill for > that environment. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8107) Haddop2.0 backport Maven profile
Vitalii Diravka created DRILL-8107: -- Summary: Haddop2.0 backport Maven profile Key: DRILL-8107 URL: https://issues.apache.org/jira/browse/DRILL-8107 Project: Apache Drill Issue Type: New Feature Components: Storage - HDF5, Tools, Build Test Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Some Drill clients are stuck on the old Hadoop2 cluster version. To run the latest version of Drill, a Maven profile needs to be added to build Drill for that environment. -- This message was sent by Atlassian Jira (v8.20.1#820001)
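A Maven profile for such a backport build typically pins the conflicting dependency versions behind an activation flag in the root pom.xml. A hypothetical sketch of what the profile could look like — the profile id, property name, and version here are illustrative, not the ones ultimately committed:

```xml
<!-- Activate with: mvn clean install -Phadoop-2 (hypothetical profile id) -->
<profile>
  <id>hadoop-2</id>
  <properties>
    <!-- Override the default Hadoop 3 line with a 2.x release -->
    <hadoop.version>2.10.2</hadoop.version>
  </properties>
</profile>
```

Modules that depend on Hadoop 3-only APIs (e.g. parts of the HDF5 storage work noted in the Components field) would additionally need exclusions or shims under this profile.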
[jira] [Updated] (DRILL-8061) Add Impersonation Support for Phoenix
[ https://issues.apache.org/jira/browse/DRILL-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8061: --- Description: *Discussion:* * [https://github.com/apache/drill/issues/2296] *Documentation:* * [https://phoenix.apache.org/server.html#Impersonation] * [https://drill.apache.org/docs/configuring-user-impersonation] was: * https://phoenix.apache.org/server.html#Impersonation * https://drill.apache.org/docs/configuring-user-impersonation > Add Impersonation Support for Phoenix > - > > Key: DRILL-8061 > URL: https://issues.apache.org/jira/browse/DRILL-8061 > Project: Apache Drill > Issue Type: Sub-task > Components: Storage - Other >Reporter: Cong Luo >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > *Discussion:* > * [https://github.com/apache/drill/issues/2296] > *Documentation:* > * [https://phoenix.apache.org/server.html#Impersonation] > * [https://drill.apache.org/docs/configuring-user-impersonation] -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8075) PhoenixTestSuite failed with JDK17
Vitalii Diravka created DRILL-8075: -- Summary: PhoenixTestSuite failed with JDK17 Key: DRILL-8075 URL: https://issues.apache.org/jira/browse/DRILL-8075 Project: Apache Drill Issue Type: Sub-task Components: Storage - Phoenix Reporter: Vitalii Diravka Fix For: Future *_PhoenixTestSuite_* can only run on JDK 8. To enable it by default in the future, it needs to be fixed to support running the Phoenix Query Server on all JDK versions above 8. {code:java} Thread 298 (RS-EventLoopGroup-1-6): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:148) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:141) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) app//org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.base@17.0.1/java.lang.Thread.run(Thread.java:833) Thread 301 (RS-EventLoopGroup-1-7): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:148) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:141) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) 
app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) app//org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.base@17.0.1/java.lang.Thread.run(Thread.java:833) Thread 305 (RS-EventLoopGroup-1-8): State: RUNNABLE Blocked count: 0 Waited count: 0 Stack: app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:148) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:141) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) app//org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) app//org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) app//org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.base@17.0.1/java.lang.Thread.run(Thread.java:833)java.lang.RuntimeException: java.io.IOException: Shutting down at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:549) at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:449) at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:435) at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:517) at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:512) at 
org.apache.drill.exec.store.phoenix.QueryServerBasicsIT.doSetup(QueryServerBasicsIT.java:47) at org.apache.drill.exec.store.phoenix.PhoenixTestSuite.initPhoenixQueryServer(PhoenixTestSuite.java:54) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at
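The JDK gate in the description above could be made explicit with a small version check called from the suite's setup. This is an illustrative sketch (the class and method names are hypothetical, not Drill's actual code); the real suite would feed the result to JUnit's `Assume.assumeTrue(...)` so the tests are skipped, rather than failed, on newer JDKs:

```java
public class JdkGuard {
    // JDK 8 reports specification version "1.8"; JDK 9+ report "9", "11", "17", ...
    // A @BeforeClass method in PhoenixTestSuite could call
    // Assume.assumeTrue(isJdk8(System.getProperty("java.specification.version")))
    // to skip the suite on JDKs where the Phoenix Query Server cannot start.
    static boolean isJdk8(String specVersion) {
        return "1.8".equals(specVersion);
    }

    public static void main(String[] args) {
        System.out.println(isJdk8(System.getProperty("java.specification.version")));
    }
}
```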
[jira] [Updated] (DRILL-8061) Add Impersonation Support for Phoenix
[ https://issues.apache.org/jira/browse/DRILL-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8061: --- Parent: DRILL-7863 Issue Type: Sub-task (was: New Feature) > Add Impersonation Support for Phoenix > - > > Key: DRILL-8061 > URL: https://issues.apache.org/jira/browse/DRILL-8061 > Project: Apache Drill > Issue Type: Sub-task > Components: Storage - Other >Reporter: Cong Luo >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > * https://phoenix.apache.org/server.html#Impersonation > * https://drill.apache.org/docs/configuring-user-impersonation -- This message was sent by Atlassian Jira (v8.20.1#820001)
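For context, the Drill side of the linked docs comes down to enabling impersonation in drill-override.conf. The fragment below is a sketch based on the referenced documentation, not this ticket's actual change (the hop-count value is illustrative):

```
drill.exec.impersonation: {
  enabled: true,
  max_chained_user_hops: 3
}
```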
[jira] [Commented] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455933#comment-17455933 ] Vitalii Diravka commented on DRILL-8058: Looks the same as DRILL-8060 > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
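The stack trace suggests `DrillRelOptUtil.getDrillTable` dereferences a `TableScan` that was never found under the rel subtree. A defensive shape for a fix might look like the sketch below; these are simplified stand-in classes, not the actual Drill or Calcite sources:

```java
public class NullScanGuard {
    // Minimal stand-in for org.apache.calcite.rel.core.TableScan.
    static class TableScan {
        private final String table;
        TableScan(String table) { this.table = table; }
        String getTable() { return table; }
    }

    // Hypothetical defensive variant of DrillRelOptUtil.getDrillTable:
    // return null instead of throwing NPE when no scan was located.
    static String getDrillTable(TableScan scan) {
        return scan == null ? null : scan.getTable();
    }

    // IcebergPluginImplementor.canImplement could then treat "no table found"
    // as "cannot implement" instead of crashing the planner.
    static boolean canImplement(TableScan scan) {
        return getDrillTable(scan) != null;
    }
}
```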
[jira] [Comment Edited] (DRILL-8061) Add Impersonation Support for Phoenix
[ https://issues.apache.org/jira/browse/DRILL-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452974#comment-17452974 ] Vitalii Diravka edited comment on DRILL-8061 at 12/3/21, 7:32 PM: -- Hi [~luoc], have you made any progress on it, or may I take it over? Upd: I found your [PR|https://github.com/apache/drill/pull/2332] was (Author: vitalii): Hi [~luoc], have you made any progress on it, or may I take it over? > Add Impersonation Support for Phoenix > - > > Key: DRILL-8061 > URL: https://issues.apache.org/jira/browse/DRILL-8061 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - Other >Reporter: Cong Luo >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > * https://phoenix.apache.org/server.html#Impersonation > * https://drill.apache.org/docs/configuring-user-impersonation
[jira] [Commented] (DRILL-8061) Add Impersonation Support for Phoenix
[ https://issues.apache.org/jira/browse/DRILL-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452974#comment-17452974 ] Vitalii Diravka commented on DRILL-8061: Hi [~luoc], have you made any progress on it, or may I take it over? > Add Impersonation Support for Phoenix > - > > Key: DRILL-8061 > URL: https://issues.apache.org/jira/browse/DRILL-8061 > Project: Apache Drill > Issue Type: New Feature > Components: Storage - Other >Reporter: Cong Luo >Assignee: Cong Luo >Priority: Major > Fix For: 1.20.0 > > > * https://phoenix.apache.org/server.html#Impersonation > * https://drill.apache.org/docs/configuring-user-impersonation
[jira] [Created] (DRILL-8065) OOM Heap Space for TestTpchDistributedConcurrent
Vitalii Diravka created DRILL-8065: -- Summary: OOM Heap Space for TestTpchDistributedConcurrent Key: DRILL-8065 URL: https://issues.apache.org/jira/browse/DRILL-8065 Project: Apache Drill Issue Type: Sub-task Components: Tools, Build Test Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: Future There are random failures due to lack of heap space for {_}TestTpchDistributedConcurrent{_}. The heap space for test cases needs to be increased. {code:java} 1980442 ERROR [1e568e62-a57c-3aad-be1f-a3737530804d:frag:4:0] [org.apache.drill.common.CatastrophicFailure] - Catastrophic Failure Occurred, exiting. Information message: Unable to handle out of memory condition in FragmentExecutor. 42015java.lang.OutOfMemoryError: Java heap space 42016 at org.apache.drill.shaded.guava.com.google.common.io.ByteStreams.skipUpTo(ByteStreams.java:835) 42017 at org.apache.drill.shaded.guava.com.google.common.io.ByteSource.countBySkipping(ByteSource.java:222) 42018 at org.apache.drill.shaded.guava.com.google.common.io.ByteSource.size(ByteSource.java:200) 42019 at org.apache.drill.exec.store.ClassPathFileSystem.getFileStatus(ClassPathFileSystem.java:85) 42020 at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39) {code} -> {code:java} Error: org.apache.drill.TestTpchDistributedConcurrent 42270Error: org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called? {code} https://github.com/apache/drill/runs/4402736406?check_suite_focus=true
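Raising the forked test JVM's heap is usually a surefire `argLine` change in the relevant pom.xml. The fragment below is only a sketch of that approach; the property names and sizes in Drill's actual build may differ:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- illustrative sizes; tune to what the concurrent TPC-H tests need -->
    <argLine>-Xmx2g -XX:MaxDirectMemorySize=4g</argLine>
  </configuration>
</plugin>
```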
[jira] [Updated] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8058: --- Description: Checked in Drill embedded the query form _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test case: {code:java} SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord) WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; {code} But it gives the following error: {code:java} Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) {code} was: Checked in Drill embedded the query form the test case: SELECT customer.c_name, 
avg(orders.o_totalprice) AS avgPrice FROM dfs.`/\{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord) WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; But it gives the following error: Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form > 
_TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke >
[jira] [Created] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
Vitalii Diravka created DRILL-8058: -- Summary: NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null Key: DRILL-8058 URL: https://issues.apache.org/jira/browse/DRILL-8058 Project: Apache Drill Issue Type: Bug Components: Storage - Iceberg Affects Versions: 1.19.0 Reporter: Vitalii Diravka Fix For: Future Checked in Drill embedded mode, the query from the test case: SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM dfs.`/\{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM UNNEST(customer.c_orders) t(ord) WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; But it gives the following error: Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null at org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) at org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) at org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329)
[jira] [Updated] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8058: --- Labels: iceberg storage (was: ) > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form the test case: > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/\{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > But it gives the following error: > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-7844) Some GitHub Actions builds intermittently fail
[ https://issues.apache.org/jira/browse/DRILL-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-7844. Resolution: Done The problem is no longer observed after DRILL-7973 > Some GitHub Actions builds intermittently fail > -- > > Key: DRILL-7844 > URL: https://issues.apache.org/jira/browse/DRILL-7844 > Project: Apache Drill > Issue Type: Test > Components: Tools, Build Test >Affects Versions: 1.18.0 >Reporter: Vitalii Diravka >Priority: Minor > Labels: github, github-actions > Fix For: Future > > > > *[Very minor, but can be good for newcomers]* > GitHub Actions builds intermittently fail due to different issues: > * on JVM 1.8 with: > {code:java} > [INFO] > 2955[INFO] Results: > 2956[INFO] > 2957Error: Errors: > 2958Error:StatementTest.testClientTriggeredQueryTimeout:152 » SqlTimeout > Query timed out... > 2959[INFO] > 2960Error: Tests run: 1812, Failures: 0, Errors: 1, Skipped: 370 > 2961[INFO] > 2962[INFO] > > 2963[INFO] Reactor Summary for Apache Drill Root POM 1.19.0-SNAPSHOT: > 2964[INFO] > 2965[INFO] Apache Drill Root POM .. SUCCESS [ > 12.859 s] > 2966[INFO] tools/Parent Pom ... SUCCESS [ > 0.410 s] > 2967[INFO] tools/freemarker codegen tooling ... SUCCESS [ > 6.684 s] > 2968[INFO] Drill Protocol . SUCCESS [ > 7.467 s] > 2969[INFO] Common (Logical Plan, Base expressions) SUCCESS [ > 9.096 s] > 2970[INFO] Logical Plan, Base expressions . SUCCESS [ > 9.730 s] > 2971[INFO] exec/Parent Pom SUCCESS [ > 0.307 s] > 2972[INFO] exec/memory/Parent Pom . SUCCESS [ > 0.296 s] > 2973[INFO] exec/memory/base ... SUCCESS [ > 6.691 s] > 2974[INFO] exec/rpc ... SUCCESS [ > 3.329 s] > 2975[INFO] exec/Vectors ... SUCCESS > [01:30 min] > 2976[INFO] contrib/Parent Pom . SUCCESS [ > 0.260 s] > 2977[INFO] contrib/data/Parent Pom SUCCESS [ > 0.283 s] > 2978[INFO] contrib/data/tpch-sample-data .. SUCCESS [ > 2.177 s] > 2979[INFO] metastore/Parent Pom ... SUCCESS [ > 0.275 s] > 2980[INFO] metastore/Drill Metastore API .. 
SUCCESS [ > 9.111 s] > 2981[INFO] metastore/Drill Iceberg Metastore .. SUCCESS [ > 21.273 s] > 2982[INFO] exec/Java Execution Engine . SUCCESS > [44:14 min] > 2983[INFO] exec/JDBC Driver using dependencies FAILURE > [02:11 min] > 2984[INFO] JDBC JAR with all dependencies . SKIPPED > 2985[INFO] Drill-on-YARN .. SKIPPED > 2986[INFO] metastore/Drill RDBMS Metastore SKIPPED > 2987[INFO] contrib/kudu-storage-plugin SKIPPED > 2988[INFO] contrib/format-xml . SKIPPED > 2989[INFO] contrib/http-storage-plugin SKIPPED > 2990[INFO] contrib/opentsdb-storage-plugin SKIPPED > 2991[INFO] contrib/mongo-storage-plugin ... SKIPPED > 2992[INFO] contrib/hbase-storage-plugin ... SKIPPED > 2993[INFO] contrib/jdbc-storage-plugin SKIPPED > 2994[INFO] contrib/hive-storage-plugin/Parent Pom . SKIPPED > 2995[INFO] contrib/hive-storage-plugin/hive-exec-shaded ... SKIPPED > 2996[INFO] contrib/hive-storage-plugin/core ... SKIPPED > 2997[INFO] contrib/kafka-storage-plugin ... SKIPPED > 2998[INFO] contrib/drill-udfs . SKIPPED > 2999[INFO] contrib/format-syslog .. SKIPPED > 3000[INFO] contrib/httpd-format-plugin SKIPPED > 3001[INFO] contrib/format-hdf5 SKIPPED > 3002[INFO] contrib/format-spss SKIPPED > 3003[INFO] contrib/ltsv-format-plugin . SKIPPED > 3004[INFO] contrib/format-esri SKIPPED > 3005[INFO] contrib/format-excel ... SKIPPED > 3006[INFO] contrib/druid-storage-plugin ... SKIPPED > 3007[INFO] Packaging and Distribution Assembly SKIPPED > 3008[INFO] contrib/mapr-format-plugin . SKIPPED > 3009[INFO] > >
[jira] [Resolved] (DRILL-3192) TestDrillbitResilience#cancelWhenQueryIdArrives hangs
[ https://issues.apache.org/jira/browse/DRILL-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-3192. Resolution: Done Not an issue after DRILL-7973 and DRILL-8030 > TestDrillbitResilience#cancelWhenQueryIdArrives hangs > - > > Key: DRILL-3192 > URL: https://issues.apache.org/jira/browse/DRILL-3192 > Project: Apache Drill > Issue Type: Bug > Components: Execution - RPC >Reporter: Sudheesh Katkam >Assignee: Sudheesh Katkam >Priority: Critical > Fix For: Future > > > TestDrillbitResilience#cancelWhenQueryIdArrives (previously named > cancelBeforeAnyResultsArrive) hangs when the test is run multiple times. > -(Will add more information)- > *Configuration: BIT_SERVER_RPC_THREADS = 1* > The remote RPC thread with a cancel signal is waiting for the fragment to > start accepting external events. The fragment is waiting for a > ControlConnection to the Foreman node (through ReconnectingConnection). The > Foreman node is waiting for the remote node to accept the connection, which > happens through an RPC thread. Distributed deadlock. > DRILL-3242 should solve this problem. -- This message was sent by Atlassian Jira (v8.20.1#820001)
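The distributed deadlock described above has the same shape as a task on a single-threaded executor waiting on a second task that needs the same thread. The standalone sketch below (not Drill's RPC code) reproduces that pattern, with a timeout standing in for the indefinite hang:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SingleThreadDeadlockDemo {
    // Returns true when the inner task cannot finish: the only "RPC" thread
    // is blocked inside the outer task, so the queued inner task never runs --
    // the same shape as the Foreman waiting on a connection that only the
    // busy RPC thread could accept.
    static boolean deadlocks() throws Exception {
        ExecutorService rpc = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> outer = rpc.submit(() -> {
                Future<String> inner = rpc.submit(() -> "connected"); // queued behind us
                try {
                    inner.get(200, TimeUnit.MILLISECONDS); // we occupy the only thread
                    return false;
                } catch (TimeoutException e) {
                    return true; // inner never ran: self-deadlock detected
                }
            });
            return outer.get();
        } finally {
            rpc.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(deadlocks()); // prints true
    }
}
```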
[jira] [Resolved] (DRILL-2171) Test framework throws IOOB for tests changing schema
[ https://issues.apache.org/jira/browse/DRILL-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-2171. Resolution: Done > Test framework throws IOOB for tests changing schema > > > Key: DRILL-2171 > URL: https://issues.apache.org/jira/browse/DRILL-2171 > Project: Apache Drill > Issue Type: Bug > Components: Tools, Build Test >Reporter: Hanifi Gunes >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > > I added a unit test as part of DRILL-1605 that resolves a problem with schema > change. Unfortunately test framework suffers from a similar problem throwing > IOOB while trying to verify the results. > TestSchemaChange#testMultiFilesWithDifferentSchema is currently ignored until > a patch is available for this issue. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-1896) Unit tests failing due to string based comparison at JsonStringHashMap & JsonStringArrayList #equals methods
[ https://issues.apache.org/jira/browse/DRILL-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-1896. Resolution: Fixed _jsonBaselineFile_ issues a new Drill query. Its result differs from the original query's result because of a _SchemaChange_ in the original query, so the query result needs to be compared with baselineValues instead. Resolved in DRILL-8046 > Unit tests failing due to string based comparison at JsonStringHashMap & > JsonStringArrayList #equals methods > > > Key: DRILL-1896 > URL: https://issues.apache.org/jira/browse/DRILL-1896 > Project: Apache Drill > Issue Type: Bug >Reporter: Hanifi Gunes >Assignee: Vitalii Diravka >Priority: Major > Fix For: 0.8.0 > > Attachments: DRILL-1896-v3.patch, DRILL-1896.patch, RILL-1896-v2.patch > > > Unit test framework relies on JsonString*#equals methods to compare actual > and expected results. We should properly implement these to prevent unit > tests from failing.
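The failure mode can be demonstrated with two maps that are equal entry-by-entry but print differently. The sketch below is a simplified illustration, not the actual JsonStringHashMap code; the point is that `#equals` should delegate to structural `Map.equals` rather than compare string renderings:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntryEquals {
    // Fragile: depends on key order and on how values render as text.
    static boolean stringBasedEquals(Map<String, Object> a, Map<String, Object> b) {
        return a.toString().equals(b.toString());
    }

    // Robust: Map.equals compares keys and values regardless of order.
    static boolean entryBasedEquals(Map<String, Object> a, Map<String, Object> b) {
        return a.equals(b);
    }

    // Same entries inserted in different orders: equal as maps, not as strings.
    static boolean[] demo() {
        Map<String, Object> a = new LinkedHashMap<>();
        a.put("x", 1);
        a.put("y", 2);
        Map<String, Object> b = new LinkedHashMap<>();
        b.put("y", 2);
        b.put("x", 1);
        return new boolean[] { stringBasedEquals(a, b), entryBasedEquals(a, b) };
    }
}
```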
[jira] [Resolved] (DRILL-5612) Random failure in TestMergeJoinWithSchemaChanges
[ https://issues.apache.org/jira/browse/DRILL-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-5612. Resolution: Fixed > Random failure in TestMergeJoinWithSchemaChanges > > > Key: DRILL-5612 > URL: https://issues.apache.org/jira/browse/DRILL-5612 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Vitalii Diravka >Priority: Major > Attachments: image-2021-11-16-02-35-25-690.png > > > The unit test > {{org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns}} > is subject to random failures, perhaps due to changes in file order in > readers. > The test builds a number of input files, then executes queries against them. > On most runs, the output is fine: > {code} > Running > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns > /home/.../target/1498606483211-0/mergejoin-schemachanges-left > /home/.../target/1498606483211-1/mergejoin-schemachanges-right > {code} > But, on occasion, the query fails: > {code} > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges > testMissingAndNewColumns(org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges) > Time elapsed: 0.569 sec <<< ERROR! > ...: UNSUPPORTED_OPERATION ERROR: Sort doesn't currently support sorts with > changing schemas > Fragment 0:0 > (org.apache.drill.exec.exception.SchemaChangeException) Sort currently only > supports a single schema. > > org.apache.drill.exec.physical.impl.sort.SortRecordBatchBuilder.build():152 > > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext():476 > ... 
> {code} > The line in the exception above: > {code} > public void build(VectorContainer outputContainer) throws > SchemaChangeException { > outputContainer.clear(); > if (batches.keySet().size() > 1) { > throw new SchemaChangeException("Sort currently only supports a single > schema."); > } > {code} > The above code has not changed in quite some time. The failure is in the > "legacy" external sort. > Although the external sort does support schema changes, it only does so in > the form of a union vector, which must be enabled. (Other tests validate that > schema changes work.) > What is likely happening here is that the sort sometimes sees two files with > differing schemas, sometimes multiple threads run so that a single sort sees > only one file. This speculation can be verified by looking at a log file (not > available in the test run that failed) to see if the scan under the sort read > more than one file. > Or, perhaps the order of the JSON files matters. Perhaps file order varies > across machines (since the Linux command to list directories does not > guarantee order.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-5612) Random failure in TestMergeJoinWithSchemaChanges
[ https://issues.apache.org/jira/browse/DRILL-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-5612: --- Fix Version/s: 1.20.0 > Random failure in TestMergeJoinWithSchemaChanges > > > Key: DRILL-5612 > URL: https://issues.apache.org/jira/browse/DRILL-5612 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > Attachments: image-2021-11-16-02-35-25-690.png > > > The unit test > {{org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns}} > is subject to random failures, perhaps due to changes in file order in > readers. > The test builds a number of input files, then executes queries against them. > On most runs, the output is fine: > {code} > Running > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns > /home/.../target/1498606483211-0/mergejoin-schemachanges-left > /home/.../target/1498606483211-1/mergejoin-schemachanges-right > {code} > But, on occasion, the query fails: > {code} > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges > testMissingAndNewColumns(org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges) > Time elapsed: 0.569 sec <<< ERROR! > ...: UNSUPPORTED_OPERATION ERROR: Sort doesn't currently support sorts with > changing schemas > Fragment 0:0 > (org.apache.drill.exec.exception.SchemaChangeException) Sort currently only > supports a single schema. > > org.apache.drill.exec.physical.impl.sort.SortRecordBatchBuilder.build():152 > > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext():476 > ... 
> {code} > The line in the exception above: > {code} > public void build(VectorContainer outputContainer) throws > SchemaChangeException { > outputContainer.clear(); > if (batches.keySet().size() > 1) { > throw new SchemaChangeException("Sort currently only supports a single > schema."); > } > {code} > The above code has not changed in quite some time. The failure is in the > "legacy" external sort. > Although the external sort does support schema changes, it only does so in > the form of a union vector, which must be enabled. (Other tests validate that > schema changes work.) > What is likely happening here is that the sort sometimes sees two files with > differing schemas, sometimes multiple threads run so that a single sort sees > only one file. This speculation can be verified by looking at a log file (not > available in the test run that failed) to see if the scan under the sort read > more than one file. > Or, perhaps the order of the JSON files matters. Perhaps file order varies > across machines (since the Linux command to list directories does not > guarantee order.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-2933) RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) never actually throws
[ https://issues.apache.org/jira/browse/DRILL-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-2933: -- Assignee: Vitalii Diravka > RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) > never actually throws > -- > > Key: DRILL-2933 > URL: https://issues.apache.org/jira/browse/DRILL-2933 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Reporter: Daniel Barclay >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > > There are about 9 calls to RecordBatchLoader.load(...) that catch > SchemaChangeException because it is declared to be thrown by > RecordBatchLoader.load(...). > However, RecordBatchLoader.load(...) never actually throws > SchemaChangeException. > (To find those calls, comment out the "throws SchemaChangeException" on > RecordBatchLoader.load(...) and follow the compilation errors.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
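The cleanup suggested in the description is mechanical: once the `throws` clause is removed, `javac` rejects every now-dead catch block, which is exactly how the call sites are found. A toy illustration of the pattern (not the real `RecordBatchLoader` signature):

```java
public class DeadCheckedExceptionDemo {
    static class SchemaChangeException extends Exception {}

    // Before: declared to throw SchemaChangeException although the body
    // never can, so every caller carried a dead try/catch block.
    static int loadBefore() throws SchemaChangeException {
        return 42;
    }

    // After: drop the declaration. A caller that still writes
    // "catch (SchemaChangeException e)" now fails to compile, because
    // javac rejects catching a checked exception that cannot be thrown.
    static int loadAfter() {
        return 42;
    }
}
```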
[jira] [Assigned] (DRILL-5612) Random failure in TestMergeJoinWithSchemaChanges
[ https://issues.apache.org/jira/browse/DRILL-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-5612: -- Assignee: Vitalii Diravka > Random failure in TestMergeJoinWithSchemaChanges > > > Key: DRILL-5612 > URL: https://issues.apache.org/jira/browse/DRILL-5612 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Vitalii Diravka >Priority: Major > Attachments: image-2021-11-16-02-35-25-690.png > > > The unit test > {{org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns}} > is subject to random failures, perhaps due to changes in file order in > readers. > The test builds a number of input files, then executes queries against them. > On most runs, the output is fine: > {code} > Running > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns > /home/.../target/1498606483211-0/mergejoin-schemachanges-left > /home/.../target/1498606483211-1/mergejoin-schemachanges-right > {code} > But, on occasion, the query fails: > {code} > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges > testMissingAndNewColumns(org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges) > Time elapsed: 0.569 sec <<< ERROR! > ...: UNSUPPORTED_OPERATION ERROR: Sort doesn't currently support sorts with > changing schemas > Fragment 0:0 > (org.apache.drill.exec.exception.SchemaChangeException) Sort currently only > supports a single schema. > > org.apache.drill.exec.physical.impl.sort.SortRecordBatchBuilder.build():152 > > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext():476 > ... 
> {code} > The line in the exception above: > {code} > public void build(VectorContainer outputContainer) throws > SchemaChangeException { > outputContainer.clear(); > if (batches.keySet().size() > 1) { > throw new SchemaChangeException("Sort currently only supports a single > schema."); > } > {code} > The above code has not changed in quite some time. The failure is in the > "legacy" external sort. > Although the external sort does support schema changes, it only does so in > the form of a union vector, which must be enabled. (Other tests validate that > schema changes work.) > What is likely happening here is that the sort sometimes sees two files with > differing schemas, sometimes multiple threads run so that a single sort sees > only one file. This speculation can be verified by looking at a log file (not > available in the test run that failed) to see if the scan under the sort read > more than one file. > Or, perhaps the order of the JSON files matters. Perhaps file order varies > across machines (since the Linux command to list directories does not > guarantee order.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-2933) RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) never actually throws
[ https://issues.apache.org/jira/browse/DRILL-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-2933. Resolution: Fixed > RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) > never actually throws > -- > > Key: DRILL-2933 > URL: https://issues.apache.org/jira/browse/DRILL-2933 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Reporter: Daniel Barclay >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > > There are about 9 calls to RecordBatchLoader.load(...) that catch > SchemaChangeException because it is declared to be thrown by > RecordBatchLoader.load(...). > However, RecordBatchLoader.load(...) never actually throws > SchemaChangeException. > (To find those calls, comment out the "throws SchemaChangeException" on > RecordBatchLoader.load(...) and follow the compilation errors.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-8022) Add Provided Schema Support for Excel Reader
[ https://issues.apache.org/jira/browse/DRILL-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8022. Resolution: Fixed Merged to master with commit id 942d1fd455e56a19acb18ce592244d0d6b125966 > Add Provided Schema Support for Excel Reader > > > Key: DRILL-8022 > URL: https://issues.apache.org/jira/browse/DRILL-8022 > Project: Apache Drill > Issue Type: Improvement > Components: Storage - Text CSV >Affects Versions: 1.19.0 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Major > Fix For: 1.20.0 > > > Add support for provided schema for Excel files. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-2933) RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) never actually throws
[ https://issues.apache.org/jira/browse/DRILL-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444211#comment-17444211 ] Vitalii Diravka commented on DRILL-2933: removing _throws SchemaChangeException_ according to the _new RecordBatchLoader.load(...)_ in DRILL-8046 > RecordBatchLoader.load(...) calls catch SchemaChangeException that load(...) > never actually throws > -- > > Key: DRILL-2933 > URL: https://issues.apache.org/jira/browse/DRILL-2933 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Reporter: Daniel Barclay >Priority: Major > Fix For: Future > > > There are about 9 calls to RecordBatchLoader.load(...) that catch > SchemaChangeException because it is declared to be thrown by > RecordBatchLoader.load(...). > However, RecordBatchLoader.load(...) never actually throws > SchemaChangeException. > (To find those calls, comment out the "throws SchemaChangeException" on > RecordBatchLoader.load(...) and follow the compilation errors.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
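The cleanup described in the comment above — deleting a `throws` clause for a checked exception the method never actually throws — can be sketched as below. The classes here are hypothetical stand-ins, not Drill's real API: before the change, `load(...)` declared `throws SchemaChangeException` without ever throwing it, so every caller carried a dead `catch` block; once the declaration is removed, those `catch` blocks become compile errors ("exception is never thrown in the corresponding try block"), which is exactly how the ~9 stale call sites are located.

```java
// Hypothetical sketch of the post-cleanup state; not Drill's actual classes.
public class LoaderDemo {

  static class RecordBatchLoader {
    // After the cleanup: no "throws SchemaChangeException" here, because the
    // body never threw it in the first place.
    boolean load(int recordCount) {
      return recordCount > 0; // pretend: true when the batch carried data
    }
  }

  public static boolean caller(int recordCount) {
    // Before the cleanup this call sat inside
    // try { ... } catch (SchemaChangeException e) { ... };
    // now it is a plain call, and the old catch would not even compile.
    return new RecordBatchLoader().load(recordCount);
  }

  public static void main(String[] args) {
    System.out.println(caller(5)); // true
  }
}
```

This works only because `SchemaChangeException` is a checked exception; for unchecked exceptions the compiler would not flag the dead catch blocks.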
[jira] [Commented] (DRILL-2171) Test framework throws IOOB for tests changing schema
[ https://issues.apache.org/jira/browse/DRILL-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444208#comment-17444208 ] Vitalii Diravka commented on DRILL-2171: Not an issue anymore. Enabling test case in DRILL-8046 > Test framework throws IOOB for tests changing schema > > > Key: DRILL-2171 > URL: https://issues.apache.org/jira/browse/DRILL-2171 > Project: Apache Drill > Issue Type: Bug > Components: Tools, Build Test >Reporter: Hanifi Gunes >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > > I added a unit test as part of DRILL-1605 that resolves a problem with schema > change. Unfortunately test framework suffers from a similar problem throwing > IOOB while trying to verify the results. > TestSchemaChange#testMultiFilesWithDifferentSchema is currently ignored until > a patch is available for this issue. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-5612) Random failure in TestMergeJoinWithSchemaChanges
[ https://issues.apache.org/jira/browse/DRILL-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17444206#comment-17444206 ] Vitalii Diravka commented on DRILL-5612: Looks like not an issue anymore. I ran test with repeat = 1000 and it passed. So enabling it in DRILL-8046, https://github.com/apache/drill/pull/2378 !image-2021-11-16-02-35-25-690.png! > Random failure in TestMergeJoinWithSchemaChanges > > > Key: DRILL-5612 > URL: https://issues.apache.org/jira/browse/DRILL-5612 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Priority: Major > Attachments: image-2021-11-16-02-35-25-690.png > > > The unit test > {{org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns}} > is subject to random failures, perhaps due to changes in file order in > readers. > The test builds a number of input files, then executes queries against them. > On most runs, the output is fine: > {code} > Running > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns > /home/.../target/1498606483211-0/mergejoin-schemachanges-left > /home/.../target/1498606483211-1/mergejoin-schemachanges-right > {code} > But, on occasion, the query fails: > {code} > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges > testMissingAndNewColumns(org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges) > Time elapsed: 0.569 sec <<< ERROR! > ...: UNSUPPORTED_OPERATION ERROR: Sort doesn't currently support sorts with > changing schemas > Fragment 0:0 > (org.apache.drill.exec.exception.SchemaChangeException) Sort currently only > supports a single schema. > > org.apache.drill.exec.physical.impl.sort.SortRecordBatchBuilder.build():152 > > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext():476 > ... 
> {code} > The line in the exception above: > {code} > public void build(VectorContainer outputContainer) throws > SchemaChangeException { > outputContainer.clear(); > if (batches.keySet().size() > 1) { > throw new SchemaChangeException("Sort currently only supports a single > schema."); > } > {code} > The above code has not changed in quite some time. The failure is in the > "legacy" external sort. > Although the external sort does support schema changes, it only does so in > the form of a union vector, which must be enabled. (Other tests validate that > schema changes work.) > What is likely happening here is that the sort sometimes sees two files with > differing schemas, sometimes multiple threads run so that a single sort sees > only one file. This speculation can be verified by looking at a log file (not > available in the test run that failed) to see if the scan under the sort read > more than one file. > Or, perhaps the order of the JSON files matters. Perhaps file order varies > across machines (since the Linux command to list directories does not > guarantee order.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
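The file-order speculation at the end of the issue above has a standard fix, sketched below (illustrative names, not Drill's scan code): `File.listFiles()` — like the underlying OS directory listing — guarantees no particular order, so a test whose outcome depends on which JSON file is read first can flip between runs and machines. Sorting the listing before building readers pins the scan order.

```java
import java.util.Arrays;

// Sketch: make a directory listing deterministic before wiring up readers.
public class FileOrder {

  // Returns a copy of the listed names in a stable, platform-independent order.
  public static String[] deterministicOrder(String[] listedNames) {
    String[] copy = listedNames.clone();
    Arrays.sort(copy); // lexicographic; any total order works, it just must be fixed
    return copy;
  }

  public static void main(String[] args) {
    // Two runs may list the same directory differently...
    String[] run1 = {"left.json", "right.json"};
    String[] run2 = {"right.json", "left.json"};
    // ...but after sorting, both scans see the files in the same order.
    System.out.println(Arrays.equals(
        deterministicOrder(run1), deterministicOrder(run2))); // true
  }
}
```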
[jira] [Updated] (DRILL-8046) Enabling ignored test cases for SchemaChange
[ https://issues.apache.org/jira/browse/DRILL-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8046: --- Summary: Enabling ignored test cases for SchemaChange (was: Enabling ignored test cases) > Enabling ignored test cases for SchemaChange > > > Key: DRILL-8046 > URL: https://issues.apache.org/jira/browse/DRILL-8046 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Minor > Fix For: 1.20.0 > > > It enables ignored test cases for > {_}TestJsonReader#schemaChangeValidate ({_}DRILL-1896), > _TestSchemaChange#testMultiFilesWithDifferentSchema_ (DRILL-2171), > _TestMergeJoinWithSchemaChanges#testMissingAndNewColumns_ (DRILL-5612) and > removes redundant throwing _SchemaChangeException_ (DRILL-2933). -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8046) Enabling ignored test cases
Vitalii Diravka created DRILL-8046: -- Summary: Enabling ignored test cases Key: DRILL-8046 URL: https://issues.apache.org/jira/browse/DRILL-8046 Project: Apache Drill Issue Type: Bug Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 1.20.0 It enables ignored test cases for {_}TestJsonReader#schemaChangeValidate ({_}DRILL-1896), _TestSchemaChange#testMultiFilesWithDifferentSchema_ (DRILL-2171), _TestMergeJoinWithSchemaChanges#testMissingAndNewColumns_ (DRILL-5612) and removes redundant throwing _SchemaChangeException_ (DRILL-2933). -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-5612) Random failure in TestMergeJoinWithSchemaChanges
[ https://issues.apache.org/jira/browse/DRILL-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-5612: --- Attachment: image-2021-11-16-02-35-25-690.png > Random failure in TestMergeJoinWithSchemaChanges > > > Key: DRILL-5612 > URL: https://issues.apache.org/jira/browse/DRILL-5612 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Priority: Major > Attachments: image-2021-11-16-02-35-25-690.png > > > The unit test > {{org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns}} > is subject to random failures, perhaps due to changes in file order in > readers. > The test builds a number of input files, then executes queries against them. > On most runs, the output is fine: > {code} > Running > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges#testMissingAndNewColumns > /home/.../target/1498606483211-0/mergejoin-schemachanges-left > /home/.../target/1498606483211-1/mergejoin-schemachanges-right > {code} > But, on occasion, the query fails: > {code} > org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges > testMissingAndNewColumns(org.apache.drill.exec.physical.impl.join.TestMergeJoinWithSchemaChanges) > Time elapsed: 0.569 sec <<< ERROR! > ...: UNSUPPORTED_OPERATION ERROR: Sort doesn't currently support sorts with > changing schemas > Fragment 0:0 > (org.apache.drill.exec.exception.SchemaChangeException) Sort currently only > supports a single schema. > > org.apache.drill.exec.physical.impl.sort.SortRecordBatchBuilder.build():152 > > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext():476 > ... 
> {code} > The line in the exception above: > {code} > public void build(VectorContainer outputContainer) throws > SchemaChangeException { > outputContainer.clear(); > if (batches.keySet().size() > 1) { > throw new SchemaChangeException("Sort currently only supports a single > schema."); > } > {code} > The above code has not changed in quite some time. The failure is in the > "legacy" external sort. > Although the external sort does support schema changes, it only does so in > the form of a union vector, which must be enabled. (Other tests validate that > schema changes work.) > What is likely happening here is that the sort sometimes sees two files with > differing schemas, sometimes multiple threads run so that a single sort sees > only one file. This speculation can be verified by looking at a log file (not > available in the test run that failed) to see if the scan under the sort read > more than one file. > Or, perhaps the order of the JSON files matters. Perhaps file order varies > across machines (since the Linux command to list directories does not > guarantee order.) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-8034) Support Java17
[ https://issues.apache.org/jira/browse/DRILL-8034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-8034: -- Assignee: Vova Vysotskyi (was: Vitalii Diravka) > Support Java17 > -- > > Key: DRILL-8034 > URL: https://issues.apache.org/jira/browse/DRILL-8034 > Project: Apache Drill > Issue Type: Wish > Components: Execution - Codegen >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Fix For: Future > > > Drill officially supports Java14, and it can be updated to Java15 with a > minimal changes. But latest LTS Java version is 17. Need to add support of > building Drill with JVM17 and running on that JVM. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-1896) Unit tests failing due to string based comparison at JsonStringHashMap & JsonStringArrayList #equals methods
[ https://issues.apache.org/jira/browse/DRILL-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443484#comment-17443484 ] Vitalii Diravka commented on DRILL-1896: [~jnadeau] is right DRILL-1824 is mixed with this Jira. DRILL-1824 is resolved: {code:java} apache drill> select * from dfs.home.`/vector/complex/writer/schemaChange`; +--+---+ | a | b | +--+---+ | foo | null | | bar | null | | foo2 | {} | | bar2 | {"x":1,"y":2} | +--+---+ 4 rows selected (0.195 seconds) {code} But this one is not resolved and causes _TestJsonReader#schemaChangeValidate_ to fail > Unit tests failing due to string based comparison at JsonStringHashMap & > JsonStringArrayList #equals methods > > > Key: DRILL-1896 > URL: https://issues.apache.org/jira/browse/DRILL-1896 > Project: Apache Drill > Issue Type: Bug >Reporter: Hanifi Gunes >Assignee: Vitalii Diravka >Priority: Major > Fix For: 0.8.0 > > Attachments: DRILL-1896-v3.patch, DRILL-1896.patch, RILL-1896-v2.patch > > > Unit test framework relies on JsonString*#equals methods to compare actual > and expected results. We should properly implement these to prevent unit > tests from failing. -- This message was sent by Atlassian Jira (v8.20.1#820001)
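The bug class named in the DRILL-1896 summary — string-based comparison in `#equals` — can be illustrated as follows (hypothetical types, not Drill's actual `JsonStringHashMap`): comparing maps through their `toString()` form is sensitive to iteration order and value formatting, whereas `Map#equals` compares entry sets regardless of insertion order, which is what a test framework comparing actual vs. expected records needs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch contrasting fragile string-based equality with proper entry-based equality.
public class MapEquals {

  public static boolean stringBasedEquals(Map<String, Object> a, Map<String, Object> b) {
    return a.toString().equals(b.toString()); // fragile: depends on iteration order
  }

  public static boolean entryBasedEquals(Map<String, Object> a, Map<String, Object> b) {
    return a.equals(b); // Map#equals: same size and same key->value pairs
  }

  public static void main(String[] args) {
    Map<String, Object> left = new LinkedHashMap<>();
    left.put("x", 1);
    left.put("y", 2);
    Map<String, Object> right = new LinkedHashMap<>();
    right.put("y", 2);
    right.put("x", 1);
    // Same logical record, different insertion order:
    System.out.println(stringBasedEquals(left, right)); // false: "{x=1, y=2}" vs "{y=2, x=1}"
    System.out.println(entryBasedEquals(left, right));  // true
  }
}
```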
[jira] [Reopened] (DRILL-1896) Unit tests failing due to string based comparison at JsonStringHashMap & JsonStringArrayList #equals methods
[ https://issues.apache.org/jira/browse/DRILL-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reopened DRILL-1896: Assignee: Vitalii Diravka (was: Jason Altekruse) > Unit tests failing due to string based comparison at JsonStringHashMap & > JsonStringArrayList #equals methods > > > Key: DRILL-1896 > URL: https://issues.apache.org/jira/browse/DRILL-1896 > Project: Apache Drill > Issue Type: Bug >Reporter: Hanifi Gunes >Assignee: Vitalii Diravka >Priority: Major > Fix For: 0.8.0 > > Attachments: DRILL-1896-v3.patch, DRILL-1896.patch, RILL-1896-v2.patch > > > Unit test framework relies on JsonString*#equals methods to compare actual > and expected results. We should properly implement these to prevent unit > tests from failing. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-1824) Certain JSON data test patterns cause false negatives in new test framework
[ https://issues.apache.org/jira/browse/DRILL-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-1824. Fix Version/s: 1.19.0 (was: Future) Resolution: Done > Certain JSON data test patterns cause false negatives in new test framework > --- > > Key: DRILL-1824 > URL: https://issues.apache.org/jira/browse/DRILL-1824 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build Test >Affects Versions: 0.7.0 >Reporter: Jacques Nadeau >Assignee: Vitalii Diravka >Priority: Minor > Fix For: 1.19.0 > > > Testing the json reader. > {code} > testBuilder() // > .sqlQuery("select b from files") // > .unOrdered() // > .jsonBaselineFile("expected.json") // > .build() > .run(); > {code} > Files composed of two files: > File 1 > {code} > {"a": "foo","b": null} > {"a": "bar","b": null} > {code} > File 2 > {code} > {"a": "foo2","b": null} > {"a": "bar2","b": {"x":1, "y":2}} > {code} > Expected Output: > {code} > b > null > null > {} > {"x":1,"y":2} > {code} > Receives failure of > {code} > java.lang.Exception: Did not find expected record in result set: `b` : null, > at > org.apache.drill.DrillTestWrapper.compareResults(DrillTestWrapper.java:528) > at > org.apache.drill.DrillTestWrapper.compareUnorderedResults(DrillTestWrapper.java:290) > at org.apache.drill.DrillTestWrapper.run(DrillTestWrapper.java:118) > at > org.apache.drill.exec.vector.complex.writer.TestJsonReader.schemaChange(TestJsonReader.java:60) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-1824) Certain JSON data test patterns cause false negatives in new test framework
[ https://issues.apache.org/jira/browse/DRILL-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443483#comment-17443483 ] Vitalii Diravka commented on DRILL-1824: [~jnadeau] is right DRILL-1896 is mixed with this Jira. This one is resolved: {code:java} apache drill> select * from dfs.home.`/vector/complex/writer/schemaChange`; +--+---+ | a | b | +--+---+ | foo | null | | bar | null | | foo2 | {} | | bar2 | {"x":1,"y":2} | +--+---+ 4 rows selected (0.195 seconds) {code} But DRILL-1896 is not resolved and causes _TestJsonReader#schemaChangeValidate_ to fail. > Certain JSON data test patterns cause false negatives in new test framework > --- > > Key: DRILL-1824 > URL: https://issues.apache.org/jira/browse/DRILL-1824 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build Test >Affects Versions: 0.7.0 >Reporter: Jacques Nadeau >Priority: Minor > Fix For: Future > > > Testing the json reader. > {code} > testBuilder() // > .sqlQuery("select b from files") // > .unOrdered() // > .jsonBaselineFile("expected.json") // > .build() > .run(); > {code} > Files composed of two files: > File 1 > {code} > {"a": "foo","b": null} > {"a": "bar","b": null} > {code} > File 2 > {code} > {"a": "foo2","b": null} > {"a": "bar2","b": {"x":1, "y":2}} > {code} > Expected Output: > {code} > b > null > null > {} > {"x":1,"y":2} > {code} > Receives failure of > {code} > java.lang.Exception: Did not find expected record in result set: `b` : null, > at > org.apache.drill.DrillTestWrapper.compareResults(DrillTestWrapper.java:528) > at > org.apache.drill.DrillTestWrapper.compareUnorderedResults(DrillTestWrapper.java:290) > at org.apache.drill.DrillTestWrapper.run(DrillTestWrapper.java:118) > at > org.apache.drill.exec.vector.complex.writer.TestJsonReader.schemaChange(TestJsonReader.java:60) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-1824) Certain JSON data test patterns cause false negatives in new test framework
[ https://issues.apache.org/jira/browse/DRILL-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-1824: -- Assignee: Vitalii Diravka > Certain JSON data test patterns cause false negatives in new test framework > --- > > Key: DRILL-1824 > URL: https://issues.apache.org/jira/browse/DRILL-1824 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build Test >Affects Versions: 0.7.0 >Reporter: Jacques Nadeau >Assignee: Vitalii Diravka >Priority: Minor > Fix For: Future > > > Testing the json reader. > {code} > testBuilder() // > .sqlQuery("select b from files") // > .unOrdered() // > .jsonBaselineFile("expected.json") // > .build() > .run(); > {code} > Files composed of two files: > File 1 > {code} > {"a": "foo","b": null} > {"a": "bar","b": null} > {code} > File 2 > {code} > {"a": "foo2","b": null} > {"a": "bar2","b": {"x":1, "y":2}} > {code} > Expected Output: > {code} > b > null > null > {} > {"x":1,"y":2} > {code} > Receives failure of > {code} > java.lang.Exception: Did not find expected record in result set: `b` : null, > at > org.apache.drill.DrillTestWrapper.compareResults(DrillTestWrapper.java:528) > at > org.apache.drill.DrillTestWrapper.compareUnorderedResults(DrillTestWrapper.java:290) > at org.apache.drill.DrillTestWrapper.run(DrillTestWrapper.java:118) > at > org.apache.drill.exec.vector.complex.writer.TestJsonReader.schemaChange(TestJsonReader.java:60) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.lang.reflect.Method.invoke(Method.java:601) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8037) Add V2 JSON Format Plugin based on EVF
Vitalii Diravka created DRILL-8037: -- Summary: Add V2 JSON Format Plugin based on EVF Key: DRILL-8037 URL: https://issues.apache.org/jira/browse/DRILL-8037 Project: Apache Drill Issue Type: Sub-task Reporter: Vitalii Diravka Assignee: Vitalii Diravka This adds a new V2 beta JSON Format Plugin based on the "Extended Vector Framework". It is a follow-up of DRILL-6953 (which was closed with the decision to merge it in small pieces), so it is based on the [https://github.com/apache/drill/pull/1913] and [https://github.com/paul-rogers/drill/tree/DRILL-6953-rev2] work. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-8029) Upgrade project parent POM to 24 version
[ https://issues.apache.org/jira/browse/DRILL-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8029: --- Fix Version/s: (was: Future) 1.20.0 > Upgrade project parent POM to 24 version > > > Key: DRILL-8029 > URL: https://issues.apache.org/jira/browse/DRILL-8029 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > Current project Apache Software Foundation Parent POM is too old. > It was updated in DRILL-5862 and DRILL-6751. But there is newer version: > [https://maven.apache.org/pom/asf/] > https://github.com/apache/maven-apache-parent/blob/apache-24/pom.xml -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-8030) Intermittent TestDrillbitResilience cancelInMiddleOfFetchingResults and foreman_runTryEnd failures
[ https://issues.apache.org/jira/browse/DRILL-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8030: --- Fix Version/s: (was: Future) 1.20.0 > Intermittent TestDrillbitResilience cancelInMiddleOfFetchingResults and > foreman_runTryEnd failures > -- > > Key: DRILL-8030 > URL: https://issues.apache.org/jira/browse/DRILL-8030 > Project: Apache Drill > Issue Type: Sub-task > Components: Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Minor > Fix For: 1.20.0 > > > DRILL-7908 fixes distributed deadlocks in _TestDrillbitResilience_ and adds > better timing for simulating the different Drill states. But sometimes > several tests still failed. > 1. Sometimes tests indicate a memory leak: > {code:java} > Error: Failures: > 3419Error: > org.apache.drill.exec.server.TestDrillbitResilience.cancelInMiddleOfFetchingResults > 3420Error:Run 1: > TestDrillbitResilience.cancelInMiddleOfFetchingResults:375 We are leaking > 300 bytes ==> expected: <0> but was: <300> > {code} > But actually there is no memory leak. It looks like Drill just checks the actual > memory too early, when not all fragments are closed yet, so adding a timeout before > the final _countAllocatedMemory_ fixes the issue. > The other reason for test failures: the queries were not in the expected state > before cancelling (for instance in the STARTING state instead of RUNNING), so > adding a timeout before starting the cancellation thread allows waiting for the proper > Drill query state that the test case expects before > cancellation. > I don't see any more test failures with NUM_RUNS = 1000 (@RepeatedTest) for > the problematic test cases. > 2. The other failing test case is: > {code:java} > Error: Failures: > 3540Error: > TestDrillbitResilience.foreman_runTryEnd:289->testForeman:973->assertFailsWithException:960->assertFailsWithException:954 > Query state should be FAILED (and not COMPLETED). 
==> expected: > but was: {code} > It relates to DRILL-3167. The root cause here is the following: in some cases > we complete the query faster than the run-try-end exception is injected and > thrown in Foreman. The Completed state is acceptable for such cases -- This message was sent by Atlassian Jira (v8.3.4#803005)
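The timing fix described in DRILL-8030 above — don't assert allocated memory (or query state) at a single instant, wait for the condition with a deadline — can be sketched like this. The names are illustrative, not Drill's test utilities:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BooleanSupplier;

// Sketch of a poll-until-true helper for racy test assertions.
public class Await {

  // Polls the condition until it holds or the timeout elapses.
  public static boolean until(BooleanSupplier condition, long timeoutMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      try {
        Thread.sleep(10); // back off between checks
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve interrupt status, stop waiting
        return condition.getAsBoolean();
      }
    }
    return condition.getAsBoolean(); // one final check at the deadline
  }

  public static void main(String[] args) {
    AtomicLong allocated = new AtomicLong(300); // pretend 300 bytes still held
    new Thread(() -> {
      try {
        Thread.sleep(50); // fragments finish closing shortly after the query ends
      } catch (InterruptedException ignored) {
      }
      allocated.set(0);
    }).start();
    // The instant-check would report a 300-byte "leak"; the awaited check passes.
    System.out.println(until(() -> allocated.get() == 0, 2_000));
  }
}
```

The same helper shape covers both failure modes in the ticket: waiting for `countAllocatedMemory` to reach zero, and waiting for a query to leave STARTING before the cancellation thread fires.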
[jira] [Assigned] (DRILL-8033) uptake POI 5.1.0
[ https://issues.apache.org/jira/browse/DRILL-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-8033: -- Assignee: (was: Vitalii Diravka) > uptake POI 5.1.0 > > > Key: DRILL-8033 > URL: https://issues.apache.org/jira/browse/DRILL-8033 > Project: Apache Drill > Issue Type: Task > Components: Server >Reporter: PJ Fanning >Priority: Major > > POI 5.1.0 is released. excel-streaming-reader 3.2.0 is an additional upgrade > that you'll need for POI 5.1.0 compatibility. I would expect that you won't > need to make code changes as part of the upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (DRILL-8033) uptake POI 5.1.0
[ https://issues.apache.org/jira/browse/DRILL-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-8033. Resolution: Fixed > uptake POI 5.1.0 > > > Key: DRILL-8033 > URL: https://issues.apache.org/jira/browse/DRILL-8033 > Project: Apache Drill > Issue Type: Task > Components: Server >Reporter: PJ Fanning >Priority: Major > > POI 5.1.0 is released. excel-streaming-reader 3.2.0 is an additional upgrade > that you'll need for POI 5.1.0 compatibility. I would expect that you won't > need to make code changes as part of the upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (DRILL-8033) uptake POI 5.1.0
[ https://issues.apache.org/jira/browse/DRILL-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-8033: -- Assignee: Vitalii Diravka > uptake POI 5.1.0 > > > Key: DRILL-8033 > URL: https://issues.apache.org/jira/browse/DRILL-8033 > Project: Apache Drill > Issue Type: Task > Components: Server >Reporter: PJ Fanning >Assignee: Vitalii Diravka >Priority: Major > > POI 5.1.0 is released. excel-streaming-reader 3.2.0 is an additional upgrade > that you'll need for POI 5.1.0 compatibility. I would expect that you won't > need to make code changes as part of the upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (DRILL-8036) Enable Spnego and Kerberos Tests
Vitalii Diravka created DRILL-8036: -- Summary: Enable Spnego and Kerberos Tests Key: DRILL-8036 URL: https://issues.apache.org/jira/browse/DRILL-8036 Project: Apache Drill Issue Type: Improvement Components: Security Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 1.20.0 DRILL-5387 disabled several test cases. Since then the Hadoop lib has been updated, so enabling these test cases should be reconsidered. Besides that, disabled test cases create uncertainty about that specific functionality -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-8035) Update Janino to 3.1.6 version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8035: --- Summary: Update Janino to 3.1.6 version (was: Update Janino version) > Update Janino to 3.1.6 version > -- > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Sub-task > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Drill uses 3.0.11 Janino version. The latest one is > [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-8035) Update Janino version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka updated DRILL-8035: --- Parent: DRILL-8034 Issue Type: Sub-task (was: Wish) > Update Janino version > - > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Sub-task > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Drill uses 3.0.11 Janino version. The latest one is > [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (DRILL-8035) Update Janino version
Vitalii Diravka created DRILL-8035: -- Summary: Update Janino version Key: DRILL-8035 URL: https://issues.apache.org/jira/browse/DRILL-8035 Project: Apache Drill Issue Type: Wish Components: 1.19 Affects Versions: Future Reporter: Vitalii Diravka Assignee: Vitalii Diravka Drill uses 3.0.11 Janino version. The latest one is [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (DRILL-8034) Support Java17
Vitalii Diravka created DRILL-8034: -- Summary: Support Java17 Key: DRILL-8034 URL: https://issues.apache.org/jira/browse/DRILL-8034 Project: Apache Drill Issue Type: Wish Components: Execution - Codegen Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: Future Drill officially supports Java 14, and it can be updated to Java 15 with minimal changes. But the latest LTS Java version is 17. We need to add support for building Drill with JVM 17 and running on that JVM. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (DRILL-3194) TestDrillbitResilience#memoryLeaksWhenFailed hangs
[ https://issues.apache.org/jira/browse/DRILL-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17418460#comment-17418460 ] Vitalii Diravka edited comment on DRILL-3194 at 11/3/21, 1:16 PM: -- After DRILL-7973 and DRILL-8030 this test passes successfully. So closed the ticket was (Author: vitalii): After DRILL-7973 and DRILL-8030 this test passes successfully. So close the ticket > TestDrillbitResilience#memoryLeaksWhenFailed hangs > -- > > Key: DRILL-3194 > URL: https://issues.apache.org/jira/browse/DRILL-3194 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow >Reporter: Sudheesh Katkam >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > TestDrillbitResilience#memoryLeaksWhenFailed hangs and fails when run > multiple times. This might be related to DRILL-3163. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (DRILL-8032) Remove obsolete Python scripts from the project
Vitalii Diravka created DRILL-8032: -- Summary: Remove obsolete Python scripts from the project Key: DRILL-8032 URL: https://issues.apache.org/jira/browse/DRILL-8032 Project: Apache Drill Issue Type: Improvement Affects Versions: 1.19.0 Reporter: Vitalii Diravka Assignee: Vitalii Diravka Fix For: 1.20.0 There is one Python script that triggers LGTM checks for PRs: _drill-patch-review.py_. It is obsolete and no longer used; it was created in 2013 for the Jira review tool, boards, and submitting patches.
[jira] [Closed] (DRILL-8012) Flaky tests TestExtendedTypes and TestNestedDateTimeTimestamp
[ https://issues.apache.org/jira/browse/DRILL-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka closed DRILL-8012. -- Resolution: Won't Fix > Flaky tests TestExtendedTypes and TestNestedDateTimeTimestamp > - > > Key: DRILL-8012 > URL: https://issues.apache.org/jira/browse/DRILL-8012 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Priority: Major > > > {code:java} > Sometimes these tests fail: > TestNestedDateTimeTimestamp.testNestedDateTimeCTASExtendedJson:155 > TestExtendedTypes.checkReadWriteExtended:58 {code} > {code:java} > [ERROR] TestExtendedTypes.checkReadWriteExtended:58 expected:<...date" : > "2009-02-23T[08:00:00.000Z" > }, > "time" : { > "$time" : "19:20:30.450Z" > }, > "interval" : { > "$interval" : "PT26.400S" > }, > "integer" : { > "$numberLong" : 4 > }, > "inner" : { > "bin" : { > "$binary" : "ZHJpbGw=" > }, > "drill_date" : { > "$dateDay" : "1997-07-16" > }, > "drill_timestamp" : { > "$date" : "2009-02-23T08]:00:00.000Z" > }, > ...> but was:<...date" : "2009-02-23T[10:00:00.000Z" > }, > "time" : { > "$time" : "19:20:30.450Z" > }, > "interval" : { > "$interval" : "PT26.400S" > }, > "integer" : { > "$numberLong" : 4 > }, > "inner" : { > "bin" : { > "$binary" : "ZHJpbGw=" > }, > "drill_date" : { > "$dateDay" : "1997-07-16" > }, > "drill_timestamp" : { > "$date" : "2009-02-23T10]:00:00.000Z" > }, > ...> > [ERROR] Errors: > [ERROR] org.apache.drill.common.exceptions.UserRemoteException: > INTERNAL_ERROR ERROR: nullFragment: 0:0Please, refer to logs for more > information.[Error Id: 7ea92921-6d93-4d57-8e0f-d35650e16b42 on drill:31028] > (java.lang.NullPointerException) null > > org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():234 > org.apache.drill.exec.physical.impl.ScanBatch.internalNext():234 > org.apache.drill.exec.physical.impl.ScanBatch.next():298 > > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():237 
> org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():111 > org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59 > > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():85 > org.apache.drill.exec.record.AbstractRecordBatch.next():170 > > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():237 > org.apache.drill.exec.physical.impl.BaseRootExec.next():103 > > org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 > org.apache.drill.exec.physical.impl.BaseRootExec.next():93 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():323 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():310 > java.security.AccessController.doPrivileged():-2 > javax.security.auth.Subject.doAs():422 > org.apache.hadoop.security.UserGroupInformation.doAs():1762 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():310 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1149 > java.util.concurrent.ThreadPoolExecutor$Worker.run():624 > java.lang.Thread.run():748For query: select c_varchar, c_integer, > c_bigint, c_float, c_double, c_date, c_time, c_timestamp, c_boolean from > cp.`parquet/all_nulls.parquet` > [ERROR] org.apache.drill.common.exceptions.UserRemoteException: > INTERNAL_ERROR ERROR: nullFragment: 0:0Please, refer to logs for more > information.[Error Id: 5fb379d5-ebdc-4f10-a173-79427ec8b215 on drill:31028] > (java.lang.NullPointerException) null > > org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():234 > org.apache.drill.exec.physical.impl.ScanBatch.internalNext():234 > org.apache.drill.exec.physical.impl.ScanBatch.next():298 > > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():237 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > 
org.apache.drill.exec.record.AbstractRecordBatch.next():111 > org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59 > > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():85 > org.apache.drill.exec.record.AbstractRecordBatch.next():170 > > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():237 > org.apache.drill.exec.physical.impl.BaseRootExec.next():103 > > org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 >
[jira] [Commented] (DRILL-8012) Flaky tests TestExtendedTypes and TestNestedDateTimeTimestamp
[ https://issues.apache.org/jira/browse/DRILL-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437985#comment-17437985 ] Vitalii Diravka commented on DRILL-8012: Quoting [~dzamo]: {code:java} These tests fail when the build is run on a system that is not set to the UTC time zone. My laptop is set to UTC+2 but I can make the tests pass by specifying -Duser.timezone=UTC. {code} So this ticket can be closed > Flaky tests TestExtendedTypes and TestNestedDateTimeTimestamp > - > > Key: DRILL-8012 > URL: https://issues.apache.org/jira/browse/DRILL-8012 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Priority: Major
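The time-zone dependence [~dzamo] describes can be reproduced outside Drill with a minimal java.time snippet (illustrative only, not Drill code): the same instant renders with hour 08 under UTC but hour 10 under UTC+2, which is exactly the diff the failing assertion reports.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimezoneDrift {
    public static void main(String[] args) {
        // The instant the test expects, taken from the assertion diff above.
        Instant ts = Instant.parse("2009-02-23T08:00:00.000Z");
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS");
        // Rendered with the JVM pinned to UTC (-Duser.timezone=UTC): hour 08.
        System.out.println(fmt.withZone(ZoneId.of("UTC")).format(ts));
        // Rendered on a UTC+2 machine: the same instant becomes hour 10.
        System.out.println(fmt.withZone(ZoneId.of("+02:00")).format(ts));
    }
}
```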
[jira] [Created] (DRILL-8031) Intermittent TestPStoreProviders failures
Vitalii Diravka created DRILL-8031: -- Summary: Intermittent TestPStoreProviders failures Key: DRILL-8031 URL: https://issues.apache.org/jira/browse/DRILL-8031 Project: Apache Drill Issue Type: Sub-task Components: Tools, Build Test Affects Versions: 1.19.0 Reporter: Vitalii Diravka Fix For: Future {code:java} Error: Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.083 s <<< FAILURE! - in org.apache.drill.exec.store.sys.TestPStoreProviders Error: org.apache.drill.exec.store.sys.TestPStoreProviders.verifyZkStore Time elapsed: 0.836 s <<< FAILURE! java.lang.AssertionError at org.apache.drill.exec.store.sys.TestPStoreProviders.verifyZkStore(TestPStoreProviders.java:67) {code}
[jira] [Commented] (DRILL-8030) Intermittent TestDrillbitResilience cancelInMiddleOfFetchingResults and foreman_runTryEnd failures
[ https://issues.apache.org/jira/browse/DRILL-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437961#comment-17437961 ] Vitalii Diravka commented on DRILL-8030: It also resolves DRILL-3052, DRILL-3167, DRILL-3193, DRILL-3194, DRILL-3967, DRILL-6228. Therefore they can be closed as well. > Intermittent TestDrillbitResilience cancelInMiddleOfFetchingResults and > foreman_runTryEnd failures > -- > > Key: DRILL-8030 > URL: https://issues.apache.org/jira/browse/DRILL-8030 > Project: Apache Drill > Issue Type: Sub-task > Components: Tools, Build Test >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Minor > Fix For: Future > > > DRILL-7908 fixes distributed deadlocks in _TestDrillbitResilience_ and adds > better timing for simulating the different Drill states. But sometimes > several tests still failed. > 1. Sometimes the tests indicate a memory leak: > {code:java} > Error: Failures: > Error: > org.apache.drill.exec.server.TestDrillbitResilience.cancelInMiddleOfFetchingResults > Error: Run 1: > TestDrillbitResilience.cancelInMiddleOfFetchingResults:375 We are leaking > 300 bytes ==> expected: <0> but was: <300> > {code} > But actually there is no memory leak. It looks like Drill just checks the allocated > memory too early, before all fragments are closed, so adding a timeout before the > final _countAllocatedMemory_ fixes the issue. > The other reason for the test failures is that the queries were not in the expected state > before cancelling (for instance in the STARTING state instead of RUNNING), so adding > a timeout before starting the cancellation thread allows waiting for the proper > Drill query state the test case expects before cancellation. > I don't see any more test failures with NUM_RUNS = 1000 (@RepeatedTest) for > the problematic test cases. > 2.
The other failing test case is: > {code:java} > Error: Failures: > Error: > TestDrillbitResilience.foreman_runTryEnd:289->testForeman:973->assertFailsWithException:960->assertFailsWithException:954 > Query state should be FAILED (and not COMPLETED). ==> expected: > but was: {code} > It relates to DRILL-3167. The root cause is the following: in some cases > we complete the query faster than the run-try-end exception is injected and > thrown in Foreman. The COMPLETED state is acceptable in such cases.
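The "wait for the expected query state before cancelling" fix described above can be sketched as a simple polling helper (names and signatures are illustrative, not Drill's actual test API):

```java
import java.util.function.Supplier;

public class StateAwait {
    // Poll until the observed state matches the expected one or the timeout
    // expires. Mirrors the idea of delaying the cancellation thread until the
    // query has actually left STARTING and reached RUNNING.
    public static boolean awaitState(Supplier<String> current, String expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (expected.equals(current.get())) {
                return true;
            }
            Thread.sleep(50); // back off briefly between state checks
        }
        return false; // the query never reached the expected state in time
    }

    public static void main(String[] args) throws InterruptedException {
        // A query already in the expected state is detected immediately.
        System.out.println(awaitState(() -> "RUNNING", "RUNNING", 1000));
        // A query stuck in STARTING times out instead of being cancelled too early.
        System.out.println(awaitState(() -> "STARTING", "RUNNING", 200));
    }
}
```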
[jira] [Resolved] (DRILL-3194) TestDrillbitResilience#memoryLeaksWhenFailed hangs
[ https://issues.apache.org/jira/browse/DRILL-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-3194. Fix Version/s: (was: Future) 1.20.0 Resolution: Fixed > TestDrillbitResilience#memoryLeaksWhenFailed hangs > -- > > Key: DRILL-3194 > URL: https://issues.apache.org/jira/browse/DRILL-3194 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow >Reporter: Sudheesh Katkam >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > > TestDrillbitResilience#memoryLeaksWhenFailed hangs and fails when run > multiple times. This might be related to DRILL-3163. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (DRILL-3194) TestDrillbitResilience#memoryLeaksWhenFailed hangs
[ https://issues.apache.org/jira/browse/DRILL-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17418460#comment-17418460 ] Vitalii Diravka edited comment on DRILL-3194 at 11/3/21, 11:18 AM: --- After DRILL-7973 and DRILL-8030 this test passes successfully. So close the ticket was (Author: vitalii): After DRILL-7973 and DRILL-8030 this test passes successfully. > TestDrillbitResilience#memoryLeaksWhenFailed hangs > -- > > Key: DRILL-3194 > URL: https://issues.apache.org/jira/browse/DRILL-3194 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow >Reporter: Sudheesh Katkam >Assignee: Abdel Hakim Deneche >Priority: Major > Fix For: Future > > > TestDrillbitResilience#memoryLeaksWhenFailed hangs and fails when run > multiple times. This might be related to DRILL-3163. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (DRILL-3194) TestDrillbitResilience#memoryLeaksWhenFailed hangs
[ https://issues.apache.org/jira/browse/DRILL-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17418460#comment-17418460 ] Vitalii Diravka edited comment on DRILL-3194 at 11/3/21, 11:18 AM: --- After DRILL-7973 and DRILL-8030 this test passes successfully. was (Author: vitalii): This test doesn't hang for any number of repeats. So it can be enabled. But the issue from DRILL-3167 still persist. So this test can be enabled in scope of that task > TestDrillbitResilience#memoryLeaksWhenFailed hangs > -- > > Key: DRILL-3194 > URL: https://issues.apache.org/jira/browse/DRILL-3194 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow >Reporter: Sudheesh Katkam >Assignee: Abdel Hakim Deneche >Priority: Major > Fix For: Future > > > TestDrillbitResilience#memoryLeaksWhenFailed hangs and fails when run > multiple times. This might be related to DRILL-3163. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (DRILL-3194) TestDrillbitResilience#memoryLeaksWhenFailed hangs
[ https://issues.apache.org/jira/browse/DRILL-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-3194: -- Assignee: Vitalii Diravka (was: Abdel Hakim Deneche) > TestDrillbitResilience#memoryLeaksWhenFailed hangs > -- > > Key: DRILL-3194 > URL: https://issues.apache.org/jira/browse/DRILL-3194 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow >Reporter: Sudheesh Katkam >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > > TestDrillbitResilience#memoryLeaksWhenFailed hangs and fails when run > multiple times. This might be related to DRILL-3163. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (DRILL-3193) TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket hangs and fails
[ https://issues.apache.org/jira/browse/DRILL-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-3193. Fix Version/s: (was: Future) 1.20.0 Resolution: Fixed > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs and fails > - > > Key: DRILL-3193 > URL: https://issues.apache.org/jira/browse/DRILL-3193 > Project: Apache Drill > Issue Type: Bug >Reporter: Sudheesh Katkam >Assignee: Vitalii Diravka >Priority: Major > Fix For: 1.20.0 > > Attachments: 3193_thread_dump.txt > > > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs when it is run multiple times. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-3193) TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket hangs and fails
[ https://issues.apache.org/jira/browse/DRILL-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437943#comment-17437943 ] Vitalii Diravka commented on DRILL-3193: Fixed in DRILL-7973 by _wrapUpCancellation_ (_cancelExecutingFragments_) and setting _QueryState_ to _CANCELED_ after CANCELLATION_REQUESTED is received. > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs and fails > - > > Key: DRILL-3193 > URL: https://issues.apache.org/jira/browse/DRILL-3193 > Project: Apache Drill > Issue Type: Bug >Reporter: Sudheesh Katkam >Assignee: Sudheesh Katkam >Priority: Major > Fix For: Future > > Attachments: 3193_thread_dump.txt > > > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs when it is run multiple times.
[jira] [Assigned] (DRILL-3193) TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket hangs and fails
[ https://issues.apache.org/jira/browse/DRILL-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-3193: -- Assignee: Vitalii Diravka (was: Sudheesh Katkam) > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs and fails > - > > Key: DRILL-3193 > URL: https://issues.apache.org/jira/browse/DRILL-3193 > Project: Apache Drill > Issue Type: Bug >Reporter: Sudheesh Katkam >Assignee: Vitalii Diravka >Priority: Major > Fix For: Future > > Attachments: 3193_thread_dump.txt > > > TestDrillbitResilience#interruptingWhileFragmentIsBlockedInAcquiringSendingTicket > hangs when it is run multiple times. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (DRILL-3967) Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted()
[ https://issues.apache.org/jira/browse/DRILL-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka closed DRILL-3967. -- Fix Version/s: 1.20.0 Resolution: Fixed > Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted() > -- > > Key: DRILL-3967 > URL: https://issues.apache.org/jira/browse/DRILL-3967 > Project: Apache Drill > Issue Type: Test > Components: Execution - Flow, Execution - RPC >Affects Versions: 1.2.0 >Reporter: Andrew >Assignee: Vitalii Diravka >Priority: Minor > Fix For: 1.20.0 > > > TestDrillbitResilience.cancelAfterEverythingIsCompleted() can sometimes fail. > I've noticed that running this test on an m2.xlarge on AWS causes a > reproducible failure when running against the patch for > https://issues.apache.org/jira/browse/DRILL-3749 (Upgraded Hadoop and Curator > libraries). > When running this test with the same patch on my laptop, this test passes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-3967) Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted()
[ https://issues.apache.org/jira/browse/DRILL-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17437933#comment-17437933 ] Vitalii Diravka commented on DRILL-3967: It is not an issue anymore. This tests *_cancelling after everything is completed._* Now the Drill query is in the COMPLETED terminal state for this test, and terminal states can't be changed. Also, to make sure "_everything is completed_", the cancelling thread runs after Thread.sleep(1000), a 1-second delay. Resolved in DRILL-7973 > Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted() > -- > > Key: DRILL-3967 > URL: https://issues.apache.org/jira/browse/DRILL-3967 > Project: Apache Drill > Issue Type: Test > Components: Execution - Flow, Execution - RPC >Affects Versions: 1.2.0 >Reporter: Andrew >Assignee: Sudheesh Katkam >Priority: Minor > > TestDrillbitResilience.cancelAfterEverythingIsCompleted() can sometimes fail. > I've noticed that running this test on an m2.xlarge on AWS causes a > reproducible failure when running against the patch for > https://issues.apache.org/jira/browse/DRILL-3749 (Upgraded Hadoop and Curator > libraries). > When running this test with the same patch on my laptop, this test passes.
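The "terminal states can't be changed" behaviour mentioned in the comment above can be illustrated with a minimal state holder (an illustrative sketch, not Drill's actual QueryStateProcessor):

```java
public class QueryStateHolder {
    enum QueryState { STARTING, RUNNING, COMPLETED, CANCELED, FAILED }

    private QueryState state = QueryState.STARTING;

    // Once the query reaches a terminal state, later transitions (e.g. a
    // cancellation arriving after completion) are silently ignored.
    public boolean moveTo(QueryState next) {
        if (state == QueryState.COMPLETED || state == QueryState.CANCELED
                || state == QueryState.FAILED) {
            return false; // terminal: the late transition does not apply
        }
        state = next;
        return true;
    }

    public QueryState state() {
        return state;
    }

    public static void main(String[] args) {
        QueryStateHolder q = new QueryStateHolder();
        q.moveTo(QueryState.RUNNING);
        q.moveTo(QueryState.COMPLETED);
        // A cancel that arrives after completion is a no-op.
        System.out.println(q.moveTo(QueryState.CANCELED)); // false
        System.out.println(q.state());                     // COMPLETED
    }
}
```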
[jira] [Assigned] (DRILL-3967) Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted()
[ https://issues.apache.org/jira/browse/DRILL-3967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka reassigned DRILL-3967: -- Assignee: Vitalii Diravka (was: Sudheesh Katkam) > Broken Test: TestDrillbitResilience.cancelAfterEverythingIsCompleted() > -- > > Key: DRILL-3967 > URL: https://issues.apache.org/jira/browse/DRILL-3967 > Project: Apache Drill > Issue Type: Test > Components: Execution - Flow, Execution - RPC >Affects Versions: 1.2.0 >Reporter: Andrew >Assignee: Vitalii Diravka >Priority: Minor > > TestDrillbitResilience.cancelAfterEverythingIsCompleted() can sometimes fail. > I've noticed that running this test on an m2.xlarge on AWS causes a > reproducible failure when running against the patch for > https://issues.apache.org/jira/browse/DRILL-3749 (Upgraded Hadoop and Curator > libraries). > When running this test with the same patch on my laptop, this test passes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (DRILL-6228) Random failures of TestDrillbitResilience tests
[ https://issues.apache.org/jira/browse/DRILL-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitalii Diravka resolved DRILL-6228. Fix Version/s: Future Resolution: Fixed > Random failures of TestDrillbitResilience tests > --- > > Key: DRILL-6228 > URL: https://issues.apache.org/jira/browse/DRILL-6228 > Project: Apache Drill > Issue Type: Bug >Reporter: Volodymyr Tkach >Priority: Major > Fix For: Future > > > When running in a jdk8 environment, two unit tests randomly fail. To reproduce: > change @Repeat(count = 10) to 10 or more on both of them and run the tests from the > command line: > mvn -Dtest=TestDrillbitResilience -pl exec/java-exec test > TestDrillbitResilience.cancelAfterAllResultsProduced > TestDrillbitResilience.cancelInMiddleOfFetchingResults > {noformat} > cancelInMiddleOfFetchingResults(org.apache.drill.exec.server.TestDrillbitResilience) > Time elapsed: 10.498 sec <<< FAILURE! > java.lang.AssertionError: Query state is incorrect (expected: CANCELED, > actual: COMPLETED) AND/OR > Exception thrown: none. > at > org.apache.drill.exec.server.TestDrillbitResilience.assertStateCompleted(TestDrillbitResilience.java:543) > at > org.apache.drill.exec.server.TestDrillbitResilience.assertCancelledWithoutException(TestDrillbitResilience.java:557) > at > org.apache.drill.exec.server.TestDrillbitResilience.assertCancelledWithoutException(TestDrillbitResilience.java:564) > at > org.apache.drill.exec.server.TestDrillbitResilience.cancelInMiddleOfFetchingResults(TestDrillbitResilience.java:644) > {noformat} > {noformat} > Failed tests: > TestDrillbitResilience.cancelAfterAllResultsProduced:672->assertCancelledWithoutException:564->assertCancelledWithoutException:557->assertStateCompleted:543 > Query state is incorrect (expected: CANCELED, actual: COMPLETED) AND/OR > Exception thrown: none > {noformat}