[jira] [Created] (FLINK-34016) Janino compile failed when watermark with column by udf

2024-01-07 Thread Jude Zhu (Jira)
Jude Zhu created FLINK-34016:


 Summary: Janino compile failed when watermark with column by udf
 Key: FLINK-34016
 URL: https://issues.apache.org/jira/browse/FLINK-34016
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.18.0, 1.15.0
Reporter: Jude Zhu


Submitting a Flink SQL statement via sql-client.sh, with a watermark defined on a column computed by a UDF, throws the following exception:
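For reference, the failing pattern is a watermark declared on a computed column whose expression calls a user-defined function. A hypothetical DDL of this shape could trigger it; the table, column, and function names are illustrative, not taken from the report:

{code:sql}
-- Hypothetical sketch only: names are invented for illustration.
-- The computed column is produced by a UDF (assumed to return
-- TIMESTAMP(3)), and the watermark is declared on that column,
-- which is the combination this report says fails to compile.
CREATE FUNCTION parse_ts AS 'com.example.ParseTimestampUdf';

CREATE TABLE events (
    id     BIGINT,
    ts_raw STRING,
    ts AS parse_ts(ts_raw),  -- computed column via the UDF
    WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
    'connector' = 'datagen'
);
{code}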
{code:java}
Caused by: java.lang.RuntimeException: Could not instantiate generated class 'WatermarkGenerator$0'
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
    at org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:69)
    at org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:109)
    at org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:462)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:438)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:414)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
    at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:101)
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
    ... 16 more
Caused by: org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:92)
    ... 18 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:107)
    at org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$0(CompileUtils.java:92)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
    ... 21 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 123: Line 29, Column 123: Cannot determine simple type name "org"
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:7007)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6886)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at ...
{code}

[jira] [Commented] (FLINK-25268) Support task manager node-label in Yarn deployment

2021-12-14 Thread Jude Zhu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17459618#comment-17459618 ]

Jude Zhu commented on FLINK-25268:
--

good job!

> Support task manager node-label in Yarn deployment
> --
>
> Key: FLINK-25268
> URL: https://issues.apache.org/jira/browse/FLINK-25268
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Junfan Zhang
>Assignee: Junfan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Now Flink only supports application-level node labels; it is necessary to
> introduce a task-manager-level node label for YARN deployments.
> h2. Why do we need it?
> Sometimes we deploy Flink for deep-learning workloads that use GPUs; with
> this feature, the job manager and the task managers could be scheduled on
> different nodes.
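For illustration, the existing application-level option and the task-manager-level option this issue proposes could sit side by side in flink-conf.yaml. The task-manager key is part of the proposal, so treat its name as hypothetical:

{code:yaml}
# Existing option: pins the whole YARN application (JM and TMs) to
# nodes carrying this label.
yarn.application.node-label: gpu

# Proposed by this issue (key name hypothetical until the change lands):
# lets task managers target a different label than the job manager.
yarn.taskmanager.node-label: gpu
{code}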



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-20951) IllegalArgumentException when reading Hive parquet table if condition not contain all partitioned fields

2021-01-27 Thread Jude Zhu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-20951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272893#comment-17272893 ]

Jude Zhu commented on FLINK-20951:
--

I have already set

*tableEnv.getConfig().getConfiguration().setString("table.exec.hive.fallback-mapred-reader", "true");*

The error occurs when I append only one partition condition rather than conditions on both of the two partitioned fields.

The exception is:

{code:java}
21/01/27 20:53:26 ERROR base.source.reader.fetcher.SplitFetcherManager: Received uncaught exception.
java.lang.RuntimeException: SplitFetcher thread 79 received unexpected exception while polling the records
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101) [TalosEngine-Prod-1.12-2.1.11.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_121]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:858) ~[hadoop-hdfs-2.6.0-cdh5.11.0.jar:?]
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:707) ~[hadoop-hdfs-2.6.0-cdh5.11.0.jar:?]
    at java.io.FilterInputStream.close(FilterInputStream.java:181) ~[?:1.8.0_121]
    at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:371) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.getSplit(ParquetRecordReaderWrapper.java:252) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:99) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:85) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveMapredSplitReader.<init>(HiveMapredSplitReader.java:113) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter$HiveReader.<init>(HiveBulkFormatAdapter.java:309) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter$HiveReader.<init>(HiveBulkFormatAdapter.java:288) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter$HiveMapRedBulkFormat.createReader(HiveBulkFormatAdapter.java:265) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter$HiveMapRedBulkFormat.createReader(HiveBulkFormatAdapter.java:258) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter.createReader(HiveBulkFormatAdapter.java:108) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter.createReader(HiveBulkFormatAdapter.java:63) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.checkSplitOrStartNext(FileSourceSplitReader.java:112) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:65) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138) ~[TalosEngine-Prod-1.12-2.1.11.jar:?]
    ... 6 more
{code}

and

{code:java}
21/01/27 20:53:26 ERROR runtime.io.network.partition.ResultPartition: Error during release of result subpartition: /data1/yarn/local/usercache/venus/appcache/application_1605868815011_5695885/flink-netty-shuffle-a2999438-b901-472e-a19a-99d025c13c20/0ae8a74acd295d92c30242edc432e358.channel
21/01/27 20:53:26 ERROR runtime.io.network.partition.ResultPartition: Error during release of result subpartition:
{code}
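To make the trigger concrete, the scenario in this comment (a filter covering only one of the two partitioned fields) corresponds to a query of roughly this shape; the table and column names are hypothetical:

{code:sql}
-- Hypothetical sketch: assume a Hive table partitioned by (dt, hr).
-- The filter constrains only dt, so the condition does not contain all
-- partitioned fields, which is the case the issue title describes.
SELECT id, amount
FROM hive_catalog.db.orders
WHERE dt = '2021-01-27';  -- hr deliberately left unconstrained
{code}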