[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2021-07-27 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388395#comment-17388395
 ] 

Jingsong Lee commented on FLINK-19739:
--

[~TsReaper] Can you cherry-pick to 1.13?

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2, 1.12.0
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Assignee: Caizhi Weng
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(aggMapKey$3);
> /* 21 */private 

[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2021-07-23 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386106#comment-17386106
 ] 

Caizhi Weng commented on FLINK-19739:
-

Hi!

I've looked into this issue and found that {{HashAggCodeGenHelper}} forgets to 
consider window group keys when choosing the type of the output records. I'm 
taking this issue.
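
For illustration, here is a minimal, hypothetical Java sketch of the type 
mismatch behind the reported error. It is not the actual generated operator 
code: the class and helper method names below are invented for the example, 
and the {{JoinedRowData}} package path may differ between Flink releases. The 
point is only that {{JoinedRowData}} declares {{replace(...)}} while 
{{GenericRowData}} does not, so generating a {{replace}} call against an 
output record typed as {{GenericRowData}} produces exactly this kind of 
CompileException.

{code:java}
// Minimal, hypothetical sketch (not the generated operator code).
// The Flink class names are real; the helper methods are invented.
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
// Assumption: in some releases this class lives in org.apache.flink.table.data instead.
import org.apache.flink.table.data.utils.JoinedRowData;

public class ReplaceSketch {

    // When the output combines a (window) key row with an aggregate buffer row,
    // JoinedRowData is the row type that declares replace(RowData, RowData).
    static RowData emitWithJoinedRow(RowData windowKey, RowData aggBuffer) {
        JoinedRowData output = new JoinedRowData();
        return output.replace(windowKey, aggBuffer);
    }

    // GenericRowData only offers setField(int, Object). If the code generator
    // picks this type for the output record but still emits a call such as
    // output.replace(windowKey, aggBuffer), Janino fails with
    // "A method named 'replace' is not declared in any enclosing class nor any supertype".
    static RowData emitWithGenericRow(long windowStart, Object sum) {
        GenericRowData output = new GenericRowData(2);
        output.setField(0, windowStart); // window field
        output.setField(1, sum);         // aggregated value
        return output;
    }
}
{code}

This only shows why the chosen output record type matters for the generated 
{{replace}} call; the actual fix is in how {{HashAggCodeGenHelper}} picks that 
type when window group keys are involved.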

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2, 1.12.0
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> 

[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2021-04-27 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17333678#comment-17333678
 ] 

Flink Jira Bot commented on FLINK-19739:


This issue was marked "stale-assigned" and has not received an update in 7 
days. It is now automatically unassigned. If you are still working on it, you 
can assign it to yourself again. Please also give an update about the status of 
the work.

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2, 1.12.0
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: stale-assigned
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */

[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2021-04-16 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17324112#comment-17324112
 ] 

Flink Jira Bot commented on FLINK-19739:


This issue is assigned but has not received an update in 7 days so it has been 
labeled "stale-assigned". If you are still working on the issue, please give an 
update and remove the label. If you are no longer working on the issue, please 
unassign so someone else may work on it. In 7 days the issue will be 
automatically unassigned.

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2, 1.12.0
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: stale-assigned
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = 

[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2020-11-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17224625#comment-17224625
 ] 

Jark Wu commented on FLINK-19739:
-

I think this is a bug in the batch code generator for window aggregates. cc 
[~lzljs3620320]

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.12.0, 1.11.2
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Priority: Major
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(aggMapKey$3);
> /* 21 */private boolean hasInput = false;
> /* 22 */

[jira] [Commented] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2020-11-02 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17224622#comment-17224622
 ] 

Dian Fu commented on FLINK-19739:
-

cc [~jark]

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.12.0, 1.11.2
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Priority: Major
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(aggMapKey$3);
> /* 21 */private boolean hasInput = false;
> /* 22 */org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
> element = new 
>