[jira] [Closed] (FLINK-31952) Support 'EXPLAIN' statement for CompiledPlan

2023-06-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-31952.
-
Resolution: Won't Do

> Support 'EXPLAIN' statement for CompiledPlan
> 
>
> Key: FLINK-31952
> URL: https://issues.apache.org/jira/browse/FLINK-31952
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jane Chan
>Assignee: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
>  Support the EXPLAIN SQL syntax for a serialized CompiledPlan
> {code:sql}
> EXPLAIN [  | PLAN FOR]  
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31952) Support 'EXPLAIN' statement for CompiledPlan

2023-06-17 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733855#comment-17733855
 ] 

Jane Chan commented on FLINK-31952:
---

After revisiting `StreamPlanner#explainPlan`, I think there is little need to 
implement the "explain" syntax, because the serialized 
execGraph cannot be rolled back to a relNode. As a result, the supported 
ExplainDetail types would be limited to JSON_EXECUTION_PLAN only. Since that is also in 
JSON format, its readability is not great, and users may be confused 
about the relationship between JSON_EXECUTION_PLAN and COMPILED_PLAN.

> Support 'EXPLAIN' statement for CompiledPlan
> 
>
> Key: FLINK-31952
> URL: https://issues.apache.org/jira/browse/FLINK-31952
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jane Chan
>Assignee: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
>  Support the EXPLAIN SQL syntax for a serialized CompiledPlan
> {code:sql}
> EXPLAIN [  | PLAN FOR]  
> {code}





[GitHub] [flink] flinkbot commented on pull request #22818: [FLINK-31956][table] Extend COMPILE AND EXECUTE PLAN statement to read/write from/to Flink FileSystem

2023-06-17 Thread via GitHub


flinkbot commented on PR #22818:
URL: https://github.com/apache/flink/pull/22818#issuecomment-1595961377

   
   ## CI report:
   
   * 8f744a4e0f44e0cb3c3e8f573f736ca3d7aeacfd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] LadyForest opened a new pull request, #22818: [FLINK-31956][table] Extend COMPILE AND EXECUTE PLAN statement to read/write from/to Flink FileSystem

2023-06-17 Thread via GitHub


LadyForest opened a new pull request, #22818:
URL: https://github.com/apache/flink/pull/22818

   ## What is the purpose of the change
   
   This PR extends the `COMPILE PLAN` statement to read/write from/to Flink 
`FileSystem`. 
   
   
   ## Brief change log
   
   This PR contains two commits:
   - f397fe6c Fix the issue that the current `COMPILE PLAN` statement cannot 
write the plan to a local path if its parent dir does not exist.
   - 8f744a4e Extend the `COMPILE AND EXECUTE PLAN` statement to read a 
remote plan from a URI such as `hdfs://` or `s3://`. The implementation leverages 
`ResourceManager` to stage a local file copy, which is cleaned up when the 
SQL gateway or client closes.
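Concretely, the extended statements would look like the following. The paths and the namenode address are invented for illustration:

```sql
-- Write the compiled plan to a remote filesystem
-- (previously only local paths worked):
COMPILE PLAN 'hdfs://namenode:9000/flink/plans/job.json' FOR
INSERT INTO sink_table SELECT * FROM source_table;

-- Execute a plan read from a remote URI; per the change log above,
-- a local copy is staged via ResourceManager.
EXECUTE PLAN 'hdfs://namenode:9000/flink/plans/job.json';
```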
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - The first commit can be verified by rolling back the source changes to 
`ExecNodeGraphInternalPlan` and running `CompiledPlanITCase`.
   - The second commit adds tests as follows:
 - `FlinkSqlParserImplTest` verifies the syntax works as expected.
 - `ResourceManagerTest` verifies the newly introduced methods work as 
expected.
 - `insert.q` verifies the changes work for an absolute path without a 
scheme, a relative path without a scheme, and an absolute path with a scheme for 
the local file system.
 - `CompileAndExecuteRemotePlanITCase` is an E2E case verifying the changes 
work as expected for an HDFS remote path.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? FLINK-31957
   





[GitHub] [flink] hanyuzheng7 commented on pull request #22717: [FLINK-31665] [table] Add ARRAY_CONCAT function

2023-06-17 Thread via GitHub


hanyuzheng7 commented on PR #22717:
URL: https://github.com/apache/flink/pull/22717#issuecomment-1595911983

   (screenshot: https://github.com/apache/flink/assets/135176127/4ee9ca6f-9a95-4a1a-9f29-d876dcdafba4)
   @snuyanzin , all done, please merge it when you have time. 
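For context, the function under review concatenates arrays. An illustrative call, with semantics as proposed in FLINK-31665 (not yet merged at the time of this comment):

```sql
-- Concatenate the elements of the given arrays in order.
SELECT ARRAY_CONCAT(ARRAY[1, 2], ARRAY[3], ARRAY[4]);
```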





[GitHub] [flink] hanyuzheng7 commented on pull request #22717: [FLINK-31665] [table] Add ARRAY_CONCAT function

2023-06-17 Thread via GitHub


hanyuzheng7 commented on PR #22717:
URL: https://github.com/apache/flink/pull/22717#issuecomment-1595910185

   @snuyanzin It seems the expression.py formatting was not correct; I have already 
fixed expression.py.





[GitHub] [flink] flinkbot commented on pull request #22817: [FLINK-31731] Add backwards compatible constructor for DebeziumAvroSerializationSchema

2023-06-17 Thread via GitHub


flinkbot commented on PR #22817:
URL: https://github.com/apache/flink/pull/22817#issuecomment-1595826739

   
   ## CI report:
   
   * cf332a0cfee80f2df17bbaec9227416e92380f2e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] tzulitai commented on pull request #22817: [FLINK-31731] Add backwards compatible constructor for DebeziumAvroSerializationSchema

2023-06-17 Thread via GitHub


tzulitai commented on PR #22817:
URL: https://github.com/apache/flink/pull/22817#issuecomment-1595825739

   cc @MartijnVisser @PatrickRen @knaufk @mas-chen related to the Flink 1.18.x 
release blocker





[jira] [Updated] (FLINK-31731) No suitable constructor found for DebeziumAvroSerializationSchema

2023-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31731:
---
Labels: pull-request-available test-stability  (was: test-stability)

> No suitable constructor found for DebeziumAvroSerializationSchema
> -
>
> Key: FLINK-31731
> URL: https://issues.apache.org/jira/browse/FLINK-31731
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.18.0, kafka-4.0.0
>Reporter: Martijn Visser
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-connector-kafka: Compilation failure
> Error:  
> /home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaDynamicTableFactoryTest.java:[939,16]
>  no suitable constructor found for 
> DebeziumAvroSerializationSchema(org.apache.flink.table.types.logical.RowType,java.lang.String,java.lang.String,)
> Error:  constructor 
> org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema.DebeziumAvroSerializationSchema(org.apache.flink.table.types.logical.RowType,java.lang.String,java.lang.String,java.lang.String,java.util.Map)
>  is not applicable
> Error:(actual and formal argument lists differ in length)
> Error:  constructor 
> org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema.DebeziumAvroSerializationSchema(org.apache.flink.formats.avro.AvroRowDataSerializationSchema)
>  is not applicable
> Error:(actual and formal argument lists differ in length)
> Error:  -> [Help 1]
> Error:  
> Error:  To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> Error:  Re-run Maven using the -X switch to enable full debug logging.
> Error:  
> Error:  For more information about the errors and possible solutions, please 
> read the following articles:
> Error:  [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> Error:  
> Error:  After correcting the problems, you can resume the build with the 
> command
> Error:mvn  -rf :flink-connector-kafka
> Error: Process completed with exit code 1.
> {code}
> https://github.com/apache/flink-connector-kafka/actions/runs/4610715024/jobs/8149513647#step:13:153





[GitHub] [flink] tzulitai opened a new pull request, #22817: [FLINK-31731] Add backwards compatible constructor for DebeziumAvroSerializationSchema

2023-06-17 Thread via GitHub


tzulitai opened a new pull request, #22817:
URL: https://github.com/apache/flink/pull/22817

   ## What is the purpose of the change
   
   Adds a backwards-compatible constructor for 
`DebeziumAvroSerializationSchema` that doesn't require passing an argument for the 
schema string. This allows the code in `apache/flink-connector-kafka` to build 
against Flink 1.18.x.
   
   The constructor arguments for `DebeziumAvroSerializationSchema` were modified in 
e818c11d to allow passing in a nullable schema string. There was test code in 
`apache/flink-connector-kafka` that used the old constructor; it therefore broke 
with that change and could no longer build against both Flink 1.18.x 
and 1.17.x simultaneously.
   
   There's a bigger question of _why_ the `flink-connector-kafka` tests are 
using a constructor which is meant to be `@Internal`, but for now this should 
get the nightly tests passing again and unblock Flink 1.18.x.
   
   ## Brief change log
   
   - Add back a constructor for `DebeziumAvroSerializationSchema` that doesn't 
require an arg for the schema string.
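The pattern is plain constructor telescoping; a self-contained sketch with invented class and field names (not the actual Flink classes):

```java
import java.util.Map;

// Sketch of a backwards-compatible constructor: the old, shorter signature
// delegates to the new one, passing null for the added schema-string parameter.
public class SerializationSchemaSketch {
    private final String rowType;       // stands in for RowType
    private final String subject;
    private final String schemaString;  // nullable since the newer constructor
    private final Map<String, String> registryConfigs;

    // Newer constructor: accepts a nullable schema string.
    public SerializationSchemaSketch(String rowType, String subject,
            String schemaString, Map<String, String> registryConfigs) {
        this.rowType = rowType;
        this.subject = subject;
        this.schemaString = schemaString;
        this.registryConfigs = registryConfigs;
    }

    // Backwards-compatible constructor: keeps the older, shorter signature
    // compiling by delegating with a null schema string.
    public SerializationSchemaSketch(String rowType, String subject,
            Map<String, String> registryConfigs) {
        this(rowType, subject, null, registryConfigs);
    }

    public String schemaString() {
        return schemaString;
    }
}
```

Existing callers of the short signature keep compiling unchanged, which is exactly what unblocks the connector tests.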
   
   ## Verifying this change
   
   The nightly tests in `flink-connector-kafka` should be passing again: 
https://github.com/apache/flink-connector-kafka/actions/workflows/weekly.yml
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   





[GitHub] [flink] flinkbot commented on pull request #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-06-17 Thread via GitHub


flinkbot commented on PR #22816:
URL: https://github.com/apache/flink/pull/22816#issuecomment-1595809052

   
   ## CI report:
   
   * 309e1b9b5078a03ea233d96bb7a6e342f0ab22ec UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-32373) Support passing headers with SQL Client gateway requests

2023-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32373:
---
Labels: pull-request-available  (was: )

> Support passing headers with SQL Client gateway requests
> 
>
> Key: FLINK-32373
> URL: https://issues.apache.org/jira/browse/FLINK-32373
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client, Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-32030 and FLINK-32035 enable communication from the SQL Client to the 
> SQL Gateway placed behind a proxy, such as a K8S ingress. Given that 
> authentication is typically needed in these cases, it can be achieved by 
> adding the ability to supply custom headers to the underlying RestClient.  





[GitHub] [flink] afedulov opened a new pull request, #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-06-17 Thread via GitHub


afedulov opened a new pull request, #22816:
URL: https://github.com/apache/flink/pull/22816

   ## What is the purpose of the change
   
   [FLINK-32030](https://issues.apache.org/jira/browse/FLINK-32030) and 
[FLINK-32035](https://issues.apache.org/jira/browse/FLINK-32035) enable 
communication from the SQL Client to the SQL Gateway placed behind a proxy, 
such as a K8S ingress. Such use cases typically need authentication. This PR 
enables authentication by adding the ability to supply custom headers to the 
underlying RestClient.
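As an illustration of the idea only (not the PR's actual API; the endpoint and header values are made up), attaching custom headers to an outgoing request looks like this with the JDK HTTP client:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HeaderSketch {
    // Build a gateway request carrying a custom header, analogous to what
    // the SQL Client's underlying RestClient would attach after this change,
    // e.g. an auth token required by a K8S ingress in front of the gateway.
    public static HttpRequest withAuthHeader(String endpoint, String authToken) {
        return HttpRequest.newBuilder(URI.create(endpoint))
                .header("Authorization", "Bearer " + authToken)
                .GET()
                .build();
    }
}
```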
   
   ## Verifying this change
   
 - Added a test that verifies that custom headers are read and parsed 
correctly
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - A follow-up docs PR will be created to document URL support in SQL 
Client gateway mode, including the custom headers feature
   





[jira] [Closed] (FLINK-32008) Protobuf format cannot work with FileSystem Connector

2023-06-17 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-32008.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

Fixed via 
https://github.com/apache/flink/commit/7d4ee28e85aad4abc8ad126c4d953d0e921ea07e 
(master)
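The failing combination was a filesystem table with a MAP column read via the protobuf format; a minimal DDL sketch of that setup (table name, path, and message class are illustrative, not taken from the attached example project):

```sql
CREATE TABLE proto_source (
  attrs MAP<STRING, STRING>
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/proto-data',
  'format' = 'protobuf',
  'protobuf.message-class-name' = 'com.example.MyMessage'
);
```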

> Protobuf format cannot work with FileSystem Connector
> -
>
> Key: FLINK-32008
> URL: https://issues.apache.org/jira/browse/FLINK-32008
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: Xuannan Su
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: flink-protobuf-example.zip
>
>
> The protobuf format throws an exception when working with the Map data type. I 
> uploaded an example project to reproduce the problem.
>  
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:131)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:417)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788)
>     at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
>     at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 more
> Caused by: java.io.IOException: Failed to deserialize PB object.
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:75)
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:42)
>     at 
> org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.readRecord(DeserializationSchemaAdapter.java:197)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.nextRecord(DeserializationSchemaAdapter.java:210)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$Reader.readBatch(DeserializationSchemaAdapter.java:124)
>     at 
> org.apache.flink.connector.file.src.util.RecordMapperWrapperRecordIterator$1.readBatch(RecordMapperWrapperRecordIterator.java:82)
>     at 
> org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:67)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:162)
>     ... 6 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> 

[jira] [Assigned] (FLINK-32008) Protobuf format cannot work with FileSystem Connector

2023-06-17 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-32008:
--

Assignee: Ryan Skraba

> Protobuf format cannot work with FileSystem Connector
> -
>
> Key: FLINK-32008
> URL: https://issues.apache.org/jira/browse/FLINK-32008
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: Xuannan Su
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Attachments: flink-protobuf-example.zip
>
>
> The protobuf format throws an exception when working with the Map data type. I 
> uploaded an example project to reproduce the problem.
>  

[jira] [Commented] (FLINK-32008) Protobuf format cannot work with FileSystem Connector

2023-06-17 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733777#comment-17733777
 ] 

Benchao Li commented on FLINK-32008:


[~rskraba] Thanks, I agree with your proposal; I'll review it.

> Protobuf format cannot work with FileSystem Connector
> -
>
> Key: FLINK-32008
> URL: https://issues.apache.org/jira/browse/FLINK-32008
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: Xuannan Su
>Assignee: Ryan Skraba
>Priority: Major
>  Labels: pull-request-available
> Attachments: flink-protobuf-example.zip
>
>
> The protobuf format throws an exception when working with the Map data type. I 
> uploaded an example project to reproduce the problem.
>  

[jira] [Assigned] (FLINK-32008) Protobuf format cannot work with FileSystem Connector

2023-06-17 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-32008:
--

Assignee: (was: Ryan McKinley)

> Protobuf format cannot work with FileSystem Connector
> -
>
> Key: FLINK-32008
> URL: https://issues.apache.org/jira/browse/FLINK-32008
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: Xuannan Su
>Priority: Major
>  Labels: pull-request-available
> Attachments: flink-protobuf-example.zip
>
>
> The protobuf format throws an exception when working with the Map data 
> type. I uploaded an example project to reproduce the problem.
>  
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:131)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:417)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788)
>     at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
>     at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 more
> Caused by: java.io.IOException: Failed to deserialize PB object.
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:75)
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:42)
>     at 
> org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.readRecord(DeserializationSchemaAdapter.java:197)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.nextRecord(DeserializationSchemaAdapter.java:210)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$Reader.readBatch(DeserializationSchemaAdapter.java:124)
>     at 
> org.apache.flink.connector.file.src.util.RecordMapperWrapperRecordIterator$1.readBatch(RecordMapperWrapperRecordIterator.java:82)
>     at 
> org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:67)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:162)
>     ... 6 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> 

[jira] [Assigned] (FLINK-32008) Protobuf format cannot work with FileSystem Connector

2023-06-17 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-32008:
--

Assignee: Ryan McKinley

> Protobuf format cannot work with FileSystem Connector
> -
>
> Key: FLINK-32008
> URL: https://issues.apache.org/jira/browse/FLINK-32008
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: Xuannan Su
>Assignee: Ryan McKinley
>Priority: Major
>  Labels: pull-request-available
> Attachments: flink-protobuf-example.zip
>
>
> The protobuf format throws an exception when working with the Map data 
> type. I uploaded an example project to reproduce the problem.
>  
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:131)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:417)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788)
>     at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
>     at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 more
> Caused by: java.io.IOException: Failed to deserialize PB object.
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:75)
>     at 
> org.apache.flink.formats.protobuf.deserialize.PbRowDataDeserializationSchema.deserialize(PbRowDataDeserializationSchema.java:42)
>     at 
> org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.readRecord(DeserializationSchemaAdapter.java:197)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$LineBytesInputFormat.nextRecord(DeserializationSchemaAdapter.java:210)
>     at 
> org.apache.flink.connector.file.table.DeserializationSchemaAdapter$Reader.readBatch(DeserializationSchemaAdapter.java:124)
>     at 
> org.apache.flink.connector.file.src.util.RecordMapperWrapperRecordIterator$1.readBatch(RecordMapperWrapperRecordIterator.java:82)
>     at 
> org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:67)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:162)
>     ... 6 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> 

[GitHub] [flink] wuchong commented on pull request #22734: [FLINK-32277][table-runtime] Introduce the basic operator fusion codegen framework

2023-06-17 Thread via GitHub


wuchong commented on PR #22734:
URL: https://github.com/apache/flink/pull/22734#issuecomment-1595773570

   The table module build is failing, could you take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-32373) Support passing headers with SQL Client gateway requests

2023-06-17 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-32373:
-

Assignee: Alexander Fedulov

> Support passing headers with SQL Client gateway requests
> 
>
> Key: FLINK-32373
> URL: https://issues.apache.org/jira/browse/FLINK-32373
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client, Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>
> FLINK-32030 and FLINK-32035 enable communication from the SQL Client to the 
> SQL Gateway placed behind a proxy, such as a K8S ingress. Given that 
> authentication is typically needed in these cases, it can be achieved by 
> adding the ability to supply custom headers to the underlying RestClient.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32373) Support passing headers with SQL Client gateway requests

2023-06-17 Thread Alexander Fedulov (Jira)
Alexander Fedulov created FLINK-32373:
-

 Summary: Support passing headers with SQL Client gateway requests
 Key: FLINK-32373
 URL: https://issues.apache.org/jira/browse/FLINK-32373
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client, Table SQL / Gateway
Affects Versions: 1.18.0
Reporter: Alexander Fedulov


FLINK-32030 and FLINK-32035 enable communication from the SQL Client to the SQL 
Gateway placed behind a proxy, such as a K8S ingress. Given that authentication 
is typically needed in these cases, it can be achieved by adding the ability to 
supply custom headers to the underlying RestClient.  
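
The ticket concerns Flink's Java RestClient, but the idea of threading 
user-supplied headers through every gateway request can be sketched in 
Python. This is illustrative only: `gateway_request` and the URL are 
hypothetical and not part of any Flink API:

```python
import urllib.request

def gateway_request(url, headers=None):
    # Hypothetical helper: attach user-supplied headers (e.g. an auth token
    # required by a K8s ingress) to a request bound for the SQL Gateway.
    req = urllib.request.Request(url)
    for name, value in (headers or {}).items():
        req.add_header(name, value)
    return req

req = gateway_request(
    "http://sql-gateway.example.com/v1/info",
    headers={"Authorization": "Bearer <token>"},
)
print(req.get_header("Authorization"))  # Bearer <token>
```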



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-aws] z3d1k commented on pull request #81: [FLINK-31923] Run nightly builds against multiple branches and Flink versions

2023-06-17 Thread via GitHub


z3d1k commented on PR #81:
URL: 
https://github.com/apache/flink-connector-aws/pull/81#issuecomment-1595710163

   As @dannycranmer mentioned, this is due to the `flink-connector-kinesis` 
module not working with shading in Maven 3.8.6 and above.
   
   The build failure against 1.18-SNAPSHOT is due to a backward-incompatible 
change in the architecture tests from 
[FLINK-31804](https://issues.apache.org/jira/browse/FLINK-31804). This issue 
impacts multiple connector packages.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32327) Python Kafka connector runs into strange NullPointerException

2023-06-17 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733729#comment-17733729
 ] 

Dian Fu commented on FLINK-32327:
-

PS: I can help dig into this problem if we know which tests are failing.

> Python Kafka connector runs into strange NullPointerException
> -
>
> Key: FLINK-32327
> URL: https://issues.apache.org/jira/browse/FLINK-32327
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Chesnay Schepler
>Priority: Major
>
> The following error occurs when running the python kafka tests:
> (this uses a slightly modified version of the code, but the error also 
> happens without it)
> {code:python}
>  def set_record_serializer(self, record_serializer: 
> 'KafkaRecordSerializationSchema') \
>  -> 'KafkaSinkBuilder':
>  """
>  Sets the :class:`KafkaRecordSerializationSchema` that transforms 
> incoming records to kafka
>  producer records.
>  
>  :param record_serializer: The 
> :class:`KafkaRecordSerializationSchema`.
>  """
>  # NOTE: If topic selector is a generated first-column selector, do 
> extra preprocessing
>  j_topic_selector = 
> get_field_value(record_serializer._j_serialization_schema,
> 'topicSelector')
>  
>  caching_name_suffix = 
> 'KafkaRecordSerializationSchemaBuilder.CachingTopicSelector'
>  if 
> j_topic_selector.getClass().getCanonicalName().endswith(caching_name_suffix):
>  class_name = get_field_value(j_topic_selector, 'topicSelector')\
>  .getClass().getCanonicalName()
>  >   if class_name.startswith('com.sun.proxy') or 
> class_name.startswith('jdk.proxy'):
>  E   AttributeError: 'NoneType' object has no attribute 'startswith'
> {code}
> My assumption is that {{getCanonicalName}} returns {{null}} for some objects, 
> and this set of objects may have increased in Java 17. I tried adding a null 
> check, but that caused other tests to fail.
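
Not the actual fix (the reporter notes that a plain null check broke other 
tests), but the defensive pattern for a Java call that may return null (None 
on the Python side) can be sketched as follows. `is_proxy_class` is a 
hypothetical helper for illustration, not part of the PyFlink API:

```python
def is_proxy_class(class_name):
    # getCanonicalName() on the Java side may return null (None in Python),
    # e.g. for anonymous or local classes; guard before calling startswith.
    if class_name is None:
        return False
    return (class_name.startswith("com.sun.proxy")
            or class_name.startswith("jdk.proxy"))

print(is_proxy_class("jdk.proxy2.$Proxy42"))  # True
print(is_proxy_class(None))                   # False
```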



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32327) Python Kafka connector runs into strange NullPointerException

2023-06-17 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733728#comment-17733728
 ] 

Dian Fu commented on FLINK-32327:
-

[~chesnay] It seems that you have disabled all Python tests for Java 17. 
Have you seen other Python tests failing on Java 17?

> Python Kafka connector runs into strange NullPointerException
> -
>
> Key: FLINK-32327
> URL: https://issues.apache.org/jira/browse/FLINK-32327
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Chesnay Schepler
>Priority: Major
>
> The following error occurs when running the python kafka tests:
> (this uses a slightly modified version of the code, but the error also 
> happens without it)
> {code:python}
>  def set_record_serializer(self, record_serializer: 
> 'KafkaRecordSerializationSchema') \
>  -> 'KafkaSinkBuilder':
>  """
>  Sets the :class:`KafkaRecordSerializationSchema` that transforms 
> incoming records to kafka
>  producer records.
>  
>  :param record_serializer: The 
> :class:`KafkaRecordSerializationSchema`.
>  """
>  # NOTE: If topic selector is a generated first-column selector, do 
> extra preprocessing
>  j_topic_selector = 
> get_field_value(record_serializer._j_serialization_schema,
> 'topicSelector')
>  
>  caching_name_suffix = 
> 'KafkaRecordSerializationSchemaBuilder.CachingTopicSelector'
>  if 
> j_topic_selector.getClass().getCanonicalName().endswith(caching_name_suffix):
>  class_name = get_field_value(j_topic_selector, 'topicSelector')\
>  .getClass().getCanonicalName()
>  >   if class_name.startswith('com.sun.proxy') or 
> class_name.startswith('jdk.proxy'):
>  E   AttributeError: 'NoneType' object has no attribute 'startswith'
> {code}
> My assumption is that {{getCanonicalName}} returns {{null}} for some objects, 
> and this set of objects may have increased in Java 17. I tried adding a null 
> check, but that caused other tests to fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32136) Pyflink gateway server launch fails when purelib != platlib

2023-06-17 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-32136.
---
Fix Version/s: 1.18.0
   1.16.3
   1.17.2
 Assignee: Dian Fu
   Resolution: Fixed

Fixed in:
- master via f64563bc1a7ff698acd708b61e9e80ae9c3e848f
- release-1.17 via 1b3f25432a29005370b0f51aaa7d4ee79a5edd58
- release-1.16 via f9394025fb756c844ec5f2615971f227d40b9244

> Pyflink gateway server launch fails when purelib != platlib
> ---
>
> Key: FLINK-32136
> URL: https://issues.apache.org/jira/browse/FLINK-32136
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.3
>Reporter: William Ashley
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> On distros where python's {{purelib}} is different than {{platlib}} (e.g. 
> Amazon Linux 2, but from my research it's all of the Redhat-based ones), you 
> wind up with components of packages being installed across two different 
> locations (e.g. {{/usr/local/lib/python3.7/site-packages/pyflink}} and 
> {{/usr/local/lib64/python3.7/site-packages/pyflink}}).
> {{_find_flink_home}} 
> [handles|https://github.com/apache/flink/blob/06688f345f6793a8964ec2175f44cda13c33/flink-python/pyflink/find_flink_home.py#L58C63-L60]
>  this, and in flink releases <= 1.13.2 its setting of the {{FLINK_LIB_DIR}} 
> environment variable was the one being used. However, from 1.13.3, a 
> refactoring of {{launch_gateway_server_process}} 
> ([1.13.2,|https://github.com/apache/flink/blob/release-1.13.2/flink-python/pyflink/pyflink_gateway_server.py#L200]
>  
> [1.13.3|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L280])
>  re-ordered some method calls. {{prepare_environment_variable}}'s 
> [non-awareness|https://github.com/apache/flink/blob/release-1.13.3/flink-python/pyflink/pyflink_gateway_server.py#L94C67-L95]
>  of multiple homes and setting of {{FLINK_LIB_DIR}} now is the one that 
> matters, and it is the incorrect location.
> I've confirmed this problem on Amazon Linux 2 and 2023. The problem does not 
> exist on, for example, Ubuntu 20 and 22 (for which {{platlib}} == 
> {{purelib}}).
> Repro steps on Amazon Linux 2
> {quote}{{yum -y install python3 java-11}}
> {{pip3 install apache-flink==1.13.3}}
> {{python3 -c 'from pyflink.table import EnvironmentSettings ; 
> EnvironmentSettings.new_instance()'}}
> {quote}
> The resulting error is
> {quote}{{The flink-python jar is not found in the opt folder of the 
> FLINK_HOME: /usr/local/lib64/python3.7/site-packages/pyflink}}
> {{Error: Could not find or load main class 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.client.python.PythonGatewayServer}}
> {{Traceback (most recent call last):}}
> {{  File "", line 1, in }}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 214, in new_instance}}
> {{    return EnvironmentSettings.Builder()}}
> {{  File 
> "/usr/local/lib64/python3.7/site-packages/pyflink/table/environment_settings.py",
>  line 48, in __init__}}
> {{    gateway = get_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 62, in get_gateway}}
> {{    _gateway = launch_gateway()}}
> {{  File "/usr/local/lib64/python3.7/site-packages/pyflink/java_gateway.py", 
> line 112, in launch_gateway}}
> {{    raise Exception("Java gateway process exited before sending its port 
> number")}}
> {{Exception: Java gateway process exited before sending its port number}}
> {quote}
> The flink home under /lib64/ does not contain the jar, but it is in the /lib/ 
> location
> {quote}{{bash-4.2# find /usr/local/lib64/python3.7/site-packages/pyflink 
> -name "flink-python*.jar"}}
> {{bash-4.2# find /usr/local/lib/python3.7/site-packages/pyflink -name 
> "flink-python*.jar"}}
> {{/usr/local/lib/python3.7/site-packages/pyflink/opt/flink-python_2.11-1.13.3.jar}}
> {quote}
>  
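
The purelib/platlib split the ticket describes can be observed directly with 
the standard library. A hedged sketch (not the actual `_find_flink_home` 
implementation; `candidate_homes` is an illustrative name) of resolving a 
package home across both site-packages roots:

```python
import os
import sysconfig

def candidate_homes(package="pyflink"):
    # On Red Hat-based distros purelib and platlib differ
    # (.../lib/site-packages vs .../lib64/site-packages), so a package's
    # files can be split across both roots; a robust lookup must check each.
    paths = {sysconfig.get_paths()[key] for key in ("purelib", "platlib")}
    return [os.path.join(p, package) for p in sorted(paths)]

# On Ubuntu this prints one path; on Amazon Linux it prints two.
print(candidate_homes())
```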



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dianfu commented on pull request #22802: [FLINK-32136][python] Fix the issue that Pyflink gateway server launch fails when purelib != platlib

2023-06-17 Thread via GitHub


dianfu commented on PR #22802:
URL: https://github.com/apache/flink/pull/22802#issuecomment-1595697874

   @wash-amzn Thanks for the confirmation and thanks @HuangXingBo for the 
review. I have merged the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dianfu closed pull request #22802: [FLINK-32136][python] Fix the issue that Pyflink gateway server launch fails when purelib != platlib

2023-06-17 Thread via GitHub


dianfu closed pull request #22802: [FLINK-32136][python] Fix the issue that 
Pyflink gateway server launch fails when purelib != platlib
URL: https://github.com/apache/flink/pull/22802


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32372) flink-connector-aws: build on pull request / compile_and_test doesn't support for Flink 1.16.2 and 1.17.1

2023-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32372:
---
Labels: pull-request-available  (was: )

> flink-connector-aws: build on pull request / compile_and_test doesn't support 
> for Flink 1.16.2 and 1.17.1
> -
>
> Key: FLINK-32372
> URL: https://issues.apache.org/jira/browse/FLINK-32372
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / AWS
>Reporter: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>
> Flink versions 1.16.2 and 1.17.1 were recently released.
> The flink-connector-aws "build on pull request / compile_and_test" workflow 
> currently doesn't cover Flink 1.16.2 and 1.17.1.
> Add support for building and testing against Flink 1.16.2 and 1.17.1 in the 
> CI pipeline for flink-connector-aws.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-aws] Samrat002 opened a new pull request, #83: [FLINK-32372] Push PR support build and test with recently release fl…

2023-06-17 Thread via GitHub


Samrat002 opened a new pull request, #83:
URL: https://github.com/apache/flink-connector-aws/pull/83

   …ink version 1.16.2 and 1.17.1
   
   
   ## Purpose of the change
   
   Support build and test for flink 1.16.2 and 1.17.1 flink release.
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? (not applicable / docs / JavaDocs / not 
documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-32372) flink-connector-aws: build on pull request / compile_and_test doesn't support for Flink 1.16.2 and 1.17.1

2023-06-17 Thread Samrat Deb (Jira)
Samrat Deb created FLINK-32372:
--

 Summary: flink-connector-aws: build on pull request / 
compile_and_test doesn't support for Flink 1.16.2 and 1.17.1
 Key: FLINK-32372
 URL: https://issues.apache.org/jira/browse/FLINK-32372
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / AWS
Reporter: Samrat Deb


Flink versions 1.16.2 and 1.17.1 were recently released.

The flink-connector-aws "build on pull request / compile_and_test" workflow 
currently doesn't cover Flink 1.16.2 and 1.17.1.

Add support for building and testing against Flink 1.16.2 and 1.17.1 in the 
CI pipeline for flink-connector-aws.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-06-17 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1232978092


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/factory/GlueCatalogFactory.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.factory;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.glue.GlueCatalog;
+import org.apache.flink.table.catalog.glue.GlueCatalogOptions;
+import org.apache.flink.table.catalog.glue.util.GlueCatalogOptionsUtils;
+import org.apache.flink.table.factories.CatalogFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Properties;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/** Catalog factory for {@link GlueCatalog}. */
+public class GlueCatalogFactory implements CatalogFactory {

Review Comment:
   Checked `GenericInMemoryCatalog` in the flink repo 
[here](https://github.com/apache/flink/blob/4624cc47bec135f369bcdf159b68bdd4566ce5af/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/GenericInMemoryCatalogFactory.java#L33);
   
   there are no annotations there. Not sure if adding `@PublicEvolving` or 
`@Public` to this class would diverge from other catalogs.





-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-06-17 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1232971715


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/factory/GlueCatalogFactoryOptions.java:
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.factory;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.table.catalog.glue.GlueCatalog;
+import org.apache.flink.table.catalog.glue.GlueCatalogOptions;
+
+/** {@link ConfigOption}s for {@link GlueCatalog}. */
+@Internal
+public class GlueCatalogFactoryOptions {

Review Comment:
   Both are related to configs, so I merged them into one 
`GlueCatalogOptions`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-06-17 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1232978092


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/factory/GlueCatalogFactory.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.factory;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.glue.GlueCatalog;
+import org.apache.flink.table.catalog.glue.GlueCatalogOptions;
+import org.apache.flink.table.catalog.glue.util.GlueCatalogOptionsUtils;
+import org.apache.flink.table.factories.CatalogFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Properties;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/** Catalog factory for {@link GlueCatalog}. */
+public class GlueCatalogFactory implements CatalogFactory {

Review Comment:
   Checked the Flink repo's `GenericInMemoryCatalog` factory
[here](https://github.com/apache/flink/blob/4624cc47bec135f369bcdf159b68bdd4566ce5af/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/GenericInMemoryCatalogFactory.java#L33);
there are no annotations there.







[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-06-17 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1232971715


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/factory/GlueCatalogFactoryOptions.java:
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.factory;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.table.catalog.glue.GlueCatalog;
+import org.apache.flink.table.catalog.glue.GlueCatalogOptions;
+
+/** {@link ConfigOption}s for {@link GlueCatalog}. */
+@Internal
+public class GlueCatalogFactoryOptions {

Review Comment:
   Both are related to configs, so I changed it to merge them into one
`GlueCatalogOptions`.


