[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2021-04-27 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17333822#comment-17333822
 ] 

Flink Jira Bot commented on FLINK-15563:


This issue was marked "stale-assigned" and has not received an update in 7 
days. It is now automatically unassigned. If you are still working on it, you 
can assign it to yourself again. Please also give an update about the status of 
the work.

>  When using ParquetTableSource, The program hangs until OOM
> ---
>
> Key: FLINK-15563
> URL: https://issues.apache.org/jira/browse/FLINK-15563
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.8.1, 1.9.1
>Reporter: sujun
>Assignee: godfrey he
>Priority: Critical
>  Labels: Parquet, pull-request-available, stale-assigned
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is my code:
> {code:java}
> def main(args: Array[String]): Unit = {
>   val env = StreamExecutionEnvironment.getExecutionEnvironment
>   env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
>   val tableEnv = StreamTableEnvironment.create(env)
>
>   val schema =
>     "{\"type\":\"record\",\"name\":\"root\",\"fields\":[{\"name\":\"log_id\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"city\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"log_from\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"ip\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"type\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"data_source\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"is_scan\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"result\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"timelong\",\"type\":[\"null\",\"long\"],\"default\":null},{\"name\":\"is_sec\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"event_name\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"id\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"time_string\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"device\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"timestamp_string\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"occur_time\",\"type\":[\"null\",{\"type\":\"long\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"row_time\",\"type\":[\"null\",{\"type\":\"long\",\"logicalType\":\"timestamp-millis\"}],\"default\":null}]}"
>
>   val parquetTableSource: ParquetTableSource = ParquetTableSource
>     .builder
>     .forParquetSchema(
>       new org.apache.parquet.avro.AvroSchemaConverter().convert(
>         org.apache.avro.Schema.parse(schema, true)))
>     .path("/Users/sujun/Documents/tmp/login_data")
>     .build
>
>   tableEnv.registerTableSource("source", parquetTableSource)
>
>   val t1 = tableEnv.sqlQuery("select log_id,city from source where city = '274'")
>   tableEnv.registerTable("t1", t1)
>
>   val t2 = tableEnv.sqlQuery("select * from t1 where log_id='5927070661978133'")
>   t2.toAppendStream[Row].print()
>
>   env.execute()
> }
> {code}
>  
> When both SQL queries have a WHERE condition, the main program hangs until it runs out of memory (OOM). When the filter push-down code of ParquetTableSource is removed, the program runs normally.
>
> Through debugging, I found that the program hangs in the org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp method.
>
> This may be a bug in the Calcite optimizer triggered by the filter push-down code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2021-04-16 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17323124#comment-17323124
 ] 

Flink Jira Bot commented on FLINK-15563:


This issue is assigned but has not received an update in 7 days so it has been 
labeled "stale-assigned". If you are still working on the issue, please give an 
update and remove the label. If you are no longer working on the issue, please 
unassign so someone else may work on it. In 7 days the issue will be 
automatically unassigned.



[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2020-01-13 Thread sujun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014790#comment-17014790
 ] 

sujun commented on FLINK-15563:
---

[~ykt836] [~godfreyhe] Thanks for your help



[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2020-01-13 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014325#comment-17014325
 ] 

Kurt Young commented on FLINK-15563:


[~sujun1020] Until this bug is fixed, you could also try out the blink planner, which doesn't have this issue.
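
For reference, a minimal sketch of how the original program could switch to the blink planner in Flink 1.9 (assumes flink-table-planner-blink is on the classpath; only the environment setup changes):

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala.StreamTableEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Select the blink planner instead of the legacy planner.
val settings = EnvironmentSettings.newInstance()
  .useBlinkPlanner()
  .inStreamingMode()
  .build()

// The rest of the program (registerTableSource, sqlQuery, ...) is unchanged.
val tableEnv = StreamTableEnvironment.create(env, settings)
```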



[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2020-01-13 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014324#comment-17014324
 ] 

Kurt Young commented on FLINK-15563:


Thanks [~godfreyhe]



[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2020-01-13 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014312#comment-17014312
 ] 

godfrey he commented on FLINK-15563:


I would like to fix this.

The cause is a bug in CalcMergeRule: in some corner cases, the result of merging two conditions explodes in size. The blink planner has already fixed this ([FLINK-12487|https://github.com/apache/flink/pull/8411]).

The solution is to copy {{FlinkCalcMergeRule}} from the blink planner to the old planner.
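
To illustrate the failure mode with a toy sketch (not Flink or Calcite code; all names here are hypothetical): if merging two Calc nodes inlines the lower node's expression into every reference in the upper condition without sharing, expression size roughly doubles on each merge, which is the kind of explosion the planner then has to search through.

```scala
// Toy expression tree standing in for RexNode conditions.
sealed trait Expr
case class Ref(i: Int) extends Expr           // reference to an input column/expr
case class And(l: Expr, r: Expr) extends Expr // stands in for any binary operator

def size(e: Expr): Int = e match {
  case Ref(_)    => 1
  case And(l, r) => 1 + size(l) + size(r)
}

// Naive merge: inline `lower` in place of every reference of `upper`,
// duplicating it once per reference (no common-subexpression sharing).
def inlineRefs(upper: Expr, lower: Expr): Expr = upper match {
  case Ref(_)    => lower
  case And(l, r) => And(inlineRefs(l, lower), inlineRefs(r, lower))
}

// A condition with two references doubles the accumulated expression on
// every merge: after 10 merges the tree has 2^11 - 1 = 2047 nodes.
val merged = (1 to 10).foldLeft(Ref(0): Expr) { (acc, _) =>
  inlineRefs(And(Ref(0), Ref(1)), acc)
}
println(size(merged)) // prints 2047
```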



[jira] [Commented] (FLINK-15563) When using ParquetTableSource, The program hangs until OOM

2020-01-12 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013990#comment-17013990
 ] 

Kurt Young commented on FLINK-15563:


cc [~godfreyhe]
