[jira] [Updated] (FLINK-34537) Autoscaler JDBC Support HikariPool

2024-02-28 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-34537:
---
Labels: pull-request-available  (was: )

> Autoscaler JDBC Support HikariPool
> --
>
> Key: FLINK-34537
> URL: https://issues.apache.org/jira/browse/FLINK-34537
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.8.0
>Reporter: ConradJam
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> Use HikariCP (HikariPool) in the autoscaler instead of native JDBC connections. This helps reduce database pressure.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34537] Autoscaler JDBC Support HikariPool [flink-kubernetes-operator]

2024-02-28 Thread via GitHub


czy006 opened a new pull request, #785:
URL: https://github.com/apache/flink-kubernetes-operator/pull/785

   
   
   ## What is the purpose of the change
   
   *Autoscaler JDBC: use HikariCP (HikariPool) instead of the JDBC DriverManager.*
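   
   As a rough illustration only (hypothetical helper and table names, not the actual operator code): instead of opening a connection per call through `DriverManager.getConnection(...)`, a `HikariDataSource` is created once and connections are borrowed from the pool.
   
   ```java
   import com.zaxxer.hikari.HikariConfig;
   import com.zaxxer.hikari.HikariDataSource;
   
   import java.sql.Connection;
   import java.sql.PreparedStatement;
   import java.sql.ResultSet;
   import java.sql.SQLException;
   
   /** Sketch of the idea: a shared HikariCP pool instead of per-call DriverManager connections. */
   public final class PooledJdbcSketch implements AutoCloseable {
   
       private final HikariDataSource dataSource;
   
       public PooledJdbcSketch(String jdbcUrl, String user, String password) {
           HikariConfig config = new HikariConfig();
           config.setJdbcUrl(jdbcUrl);       // e.g. jdbc:mysql://...
           config.setUsername(user);
           config.setPassword(password);
           config.setMaximumPoolSize(2);     // a small pool is enough for autoscaler state access
           this.dataSource = new HikariDataSource(config);
       }
   
       public void touchState(String jobKey) throws SQLException {
           // Connections are borrowed from (and returned to) the pool rather than
           // created via DriverManager.getConnection(...) on every call.
           try (Connection conn = dataSource.getConnection();
                   PreparedStatement ps = conn.prepareStatement(
                           "SELECT 1 FROM t_flink_autoscaler_state_store WHERE job_key = ?")) {
               ps.setString(1, jobKey);
               try (ResultSet rs = ps.executeQuery()) {
                   // a no-op read, just to exercise the pooled connection
               }
           }
       }
   
       @Override
       public void close() {
           dataSource.close();
       }
   }
   ```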
   
   
   ## Brief change log
   
 - *Add an autoscaler HikariCP pool test*
 - *Replace the JDBC DriverManager with a HikariCP DataSource*
   
   ## Verifying this change
   
   
 - *Added a HikariCP pool connection test*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., are there any changes to the `CustomResourceDescriptors`: (no)
 - Core observer or reconciler logic that is regularly executed: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34537) Autoscaler JDBC Support HikariPool

2024-02-28 Thread ConradJam (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ConradJam updated FLINK-34537:
--
Description: Use HikariCP (HikariPool) in the autoscaler instead of native JDBC connections. This helps reduce database pressure.

> Autoscaler JDBC Support HikariPool
> --
>
> Key: FLINK-34537
> URL: https://issues.apache.org/jira/browse/FLINK-34537
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.8.0
>Reporter: ConradJam
>Priority: Major
> Fix For: 1.8.0
>
>
> Use HikariCP (HikariPool) in the autoscaler instead of native JDBC connections. This helps reduce database pressure.





[jira] [Updated] (FLINK-34537) Autoscaler JDBC Support HikariPool

2024-02-28 Thread ConradJam (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ConradJam updated FLINK-34537:
--
Summary: Autoscaler JDBC Support HikariPool  (was: Autoscaler JDBC)

> Autoscaler JDBC Support HikariPool
> --
>
> Key: FLINK-34537
> URL: https://issues.apache.org/jira/browse/FLINK-34537
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.8.0
>Reporter: ConradJam
>Priority: Major
> Fix For: 1.8.0
>
>






[jira] [Created] (FLINK-34537) Autoscaler JDBC

2024-02-28 Thread ConradJam (Jira)
ConradJam created FLINK-34537:
-

 Summary: Autoscaler JDBC
 Key: FLINK-34537
 URL: https://issues.apache.org/jira/browse/FLINK-34537
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: 1.8.0
Reporter: ConradJam
 Fix For: 1.8.0








Re: [PR] [FLINK-34509] [docs] add missing "url" option for Debezium Avro [flink]

2024-02-28 Thread via GitHub


affo commented on PR #24395:
URL: https://github.com/apache/flink/pull/24395#issuecomment-1968635738

   > Thanks @affo for driving it! Just left some comments, PTAL.
   
   Thank you @JingGe for reviewing.
   
   I just added `Confluent` where needed, and also made the use of `JSON` 
instead of `Json` more consistent @morazow .
   
   Thank you for the reviews.





Re: [PR] [FLINK-34274][runtime] Implicitly disable resource wait timeout for A… [flink]

2024-02-28 Thread via GitHub


XComp commented on PR #24238:
URL: https://github.com/apache/flink/pull/24238#issuecomment-1968607712

   > I added the changes I proposed. That should be good enough from my end, if 
you're ok with those changes (just to bring the PR closer to being merged and 
resolving the test instability in master).
   
   I reverted those changes. See my responses 
[above](https://github.com/apache/flink/pull/24238#pullrequestreview-1853374964).
 The only thing I kept/added was [removing the 
comment](https://github.com/apache/flink/pull/24238/commits/621db257363626aa4bbf8a0ee0d8472888d0e662).
 I'm gonna rebase the branch to get a green CI (the e2e tests are failing due 
to FLINK-34420).





Re: [PR] [FLINK-34274][runtime] Implicitly disable resource wait timeout for A… [flink]

2024-02-28 Thread via GitHub


XComp commented on code in PR #24238:
URL: https://github.com/apache/flink/pull/24238#discussion_r1505659537


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerBuilder.java:
##


Review Comment:
   I removed the comment because it's misleading (it mentions a wrong, non-existent test method) and it's obsolete because we're disabling the waitForResource timeout entirely.






Re: [PR] [FLINK-34274][runtime] Implicitly disable resource wait timeout for A… [flink]

2024-02-28 Thread via GitHub


XComp commented on code in PR #24238:
URL: https://github.com/apache/flink/pull/24238#discussion_r1505657635


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveSchedulerBuilder.java:
##
@@ -117,6 +118,12 @@ public AdaptiveSchedulerBuilder setJobMasterConfiguration(
 return this;
 }
 
+public AdaptiveSchedulerBuilder withConfigurationOverride(

Review Comment:
   Ok, I don't know what I thought here claiming that 
`AdaptiveSchedulerBuilder` is production code. I reverted the proposal. 
:facepalm: 






Re: [PR] [FLINK-34517][table]fix environment configs ignored when calling procedure operation [flink]

2024-02-28 Thread via GitHub


luoyuxia commented on code in PR #24397:
URL: https://github.com/apache/flink/pull/24397#discussion_r1505636104


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/PlannerCallProcedureOperation.java:
##
@@ -138,6 +137,14 @@ private Object[] getConvertedArgumentValues(
 return argumentVal;
 }
 
+private ProcedureContext getProcedureContext(TableConfig tableConfig) {
+Configuration configuration = tableConfig.getConfiguration();

Review Comment:
   The tableConfig overrides the configuration coming from the outer root. So wouldn't it be more reasonable for the options in tableConfig to keep the higher priority after we reconstruct the Configuration?
   I mean we should first add the root config and then add the config from the table config.
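   A rough sketch of what I mean (illustrative only; `rootConfig` stands in for the environment/root configuration):
   
   ```java
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.table.api.TableConfig;
   
   /** Illustrative only: table options should override the root/environment options. */
   final class ProcedureConfigMerge {
       // Configuration#addAll lets later entries overwrite earlier ones, so add the
       // root config first and the table config second.
       static Configuration mergeForProcedure(Configuration rootConfig, TableConfig tableConfig) {
           Configuration merged = new Configuration();
           merged.addAll(rootConfig);                      // 1. environment / root options
           merged.addAll(tableConfig.getConfiguration());  // 2. table options, overriding the root ones
           return merged;
       }
   }
   ```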
   



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/runtime/stream/sql/ProcedureITCase.java:
##
@@ -210,6 +214,24 @@ void testNamedArgumentsWithOptionalArguments() {
 ResolvedSchema.of(Column.physical("result", DataTypes.STRING(;
 }
 
+@Test
+void testEnvironmentConf() throws DatabaseAlreadyExistException {
+    Configuration configuration = new Configuration();
+    configuration.setString("key1", "value1");
+    StreamExecutionEnvironment env =
+            StreamExecutionEnvironment.getExecutionEnvironment(configuration);
+    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
+    TestProcedureCatalogFactory.CatalogWithBuiltInProcedure procedureCatalog =
+            new TestProcedureCatalogFactory.CatalogWithBuiltInProcedure("procedure_catalog");
+    procedureCatalog.createDatabase(
+            "system", new CatalogDatabaseImpl(Collections.emptyMap(), null), true);
+    tableEnv.registerCatalog("test_p", procedureCatalog);
+    tableEnv.useCatalog("test_p");

Review Comment:
   Also set a property in the table config to make sure we can read the table config as well, and set the property ("key1", "value2") in the table config to make sure the table config overwrites the root config.
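   Something like this (a sketch of the suggested additions only):
   
   ```java
   // Exercise the table config as well: "key2" is table-only, and "key1" overrides the root value.
   tableEnv.getConfig().getConfiguration().setString("key2", "value2");
   tableEnv.getConfig().getConfiguration().setString("key1", "value2");
   ```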
   






Re: [PR] [FLINK-32075][FLIP-306][Checkpoint] Delete merged files on checkpoint abort or subsumption [flink]

2024-02-28 Thread via GitHub


masteryhx commented on code in PR #24181:
URL: https://github.com/apache/flink/pull/24181#discussion_r1505595039


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java:
##
@@ -321,6 +321,8 @@ public abstract class StreamTask>
 
 private long initializeStateEndTs;
 
+@Nullable private FileMergingSnapshotManager fileMergingSnapshotManager;

Review Comment:
   Why is this field not initialized?



##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/filemerging/FileMergingSnapshotManager.java:
##
@@ -118,6 +119,34 @@ FileMergingCheckpointStateOutputStream createCheckpointStateOutputStream(
  */
 Path getManagedDir(SubtaskKey subtaskKey, CheckpointedStateScope scope);
 
+/**

Review Comment:
   We should remove the TODO (L39), right?






Re: [PR] [FLINK-34352][doc] Improve the documentation of allowNonRestoredState [flink]

2024-02-28 Thread via GitHub


masteryhx commented on PR #24396:
URL: https://github.com/apache/flink/pull/24396#issuecomment-1968514402

   @Zakelly Thanks for the review. I have addressed your comments. PTAL again.





Re: [PR] [hotfix] Fix configuration through TernaryBoolean in EmbeddedRocksDBStateBackend. [flink]

2024-02-28 Thread via GitHub


StefanRRichter merged PR #24392:
URL: https://github.com/apache/flink/pull/24392





[jira] [Commented] (FLINK-31810) RocksDBException: Bad table magic number on checkpoint rescale

2024-02-28 Thread junzhong qin (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821560#comment-17821560 ]

junzhong qin commented on FLINK-31810:
--

Hi [~yunta] ,

> First of all, we can certainly rescale from a checkpoint. Would this problem 
> still exist if not rescaled? I just want to confirm whether the file 
> (000232.sst) is corrupted already.
Can you share how to verify if an SST file is corrupt? Thank you.
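
For reference, one way to sanity-check a single .sst file is RocksJava's SstFileReader. A minimal sketch (it assumes the RocksJava version on the classpath exposes SstFileReader and verifyChecksum):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.SstFileReader;
import org.rocksdb.SstFileReaderIterator;

// Opens one SST file and verifies its checksums; a corrupted file (e.g. a bad table
// magic number) should fail in open() or verifyChecksum().
public final class SstFileCheck {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options();
                SstFileReader reader = new SstFileReader(options)) {
            reader.open(args[0]); // e.g. /path/to/000232.sst
            reader.verifyChecksum();
            // Iterating the whole file is another way to surface corruption.
            try (ReadOptions readOptions = new ReadOptions();
                    SstFileReaderIterator it = reader.newIterator(readOptions)) {
                for (it.seekToFirst(); it.isValid(); it.next()) {}
            }
            System.out.println("SST file looks readable: " + args[0]);
        }
    }
}
{code}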

> RocksDBException: Bad table magic number on checkpoint rescale
> --
>
> Key: FLINK-31810
> URL: https://issues.apache.org/jira/browse/FLINK-31810
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.2
>Reporter: Robert Metzger
>Priority: Major
>
> While rescaling a job from checkpoint, I ran into this exception:
> {code:java}
> SinkMaterializer[7] -> rob-result[7]: Writer -> rob-result[7]: Committer 
> (4/4)#3 (c1b348f7eed6e1ce0e41ef75338ae754) switched from INITIALIZING to 
> FAILED with failure cause: java.lang.Exception: Exception while creating 
> StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:255)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:265)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:703)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:679)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:646)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.base/java.lang.Thread.run(Unknown Source)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for 
> SinkUpsertMaterializer_7d9b7588bc2ff89baed50d7a4558caa4_(4/4) from any of the 
> 1 provided restore options.
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:346)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:164)
>   ... 11 more
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Caught 
> unexpected exception.
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder.build(RocksDBKeyedStateBackendBuilder.java:395)
>   at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:483)
>   at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:97)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:329)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
>   ... 13 more
> Caused by: java.io.IOException: Error while opening RocksDB instance.
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.openDB(RocksDBOperationUtils.java:92)
>   at 
> org.apache.flink.contrib.streaming.state.restore.RocksDBIncrementalRestoreOperation.restoreDBInstanceFromStateHandle(RocksDBIncrementalRestoreOperation.java:465)
>   at 
> org.apache.flink.contrib.streaming.state.restore.RocksDBIncrementalRestoreOperation.restoreWithRescaling(RocksDBIncrementalRestoreOperation.java:321)
>   at 
> org.apache.flink.contrib.streaming.state.restore.RocksDBIncrementalRestoreOperation.restore(RocksDBIncrementalRestoreOperation.java:164)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder.build(RocksDBKeyedStateBackendBuilder.java:315)
>   ... 18 more
> Caused 

[jira] [Created] (FLINK-34536) Support reading long value as Timestamp column in JSON format

2024-02-28 Thread yisha zhou (Jira)
yisha zhou created FLINK-34536:
--

 Summary: Support reading long value as Timestamp column in JSON 
format
 Key: FLINK-34536
 URL: https://issues.apache.org/jira/browse/FLINK-34536
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.19.0
Reporter: yisha zhou


In many scenarios, timestamp data is stored as a long value but needs to be operated on as a timestamp value. Having to apply a UDF to convert the data before operating on it is not user-friendly.

Meanwhile, the Avro format appears to accept several input types and convert them into TimestampData. It would be good to introduce the same ability into the JSON format.
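
As an illustration only (not the existing JSON format code), the desired conversion would roughly be:

{code:java}
import org.apache.flink.table.data.TimestampData;

// Illustration: interpret a JSON long field as epoch milliseconds and expose it as
// TimestampData (epoch seconds/micros would need an additional format option).
public final class LongToTimestampExample {
    public static TimestampData fromLong(long epochMillis) {
        return TimestampData.fromEpochMillis(epochMillis);
    }

    public static void main(String[] args) {
        System.out.println(fromLong(1709078400000L)); // epoch millis for 2024-02-28T00:00:00Z
    }
}
{code}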




