[GitHub] [flink] flinkbot edited a comment on pull request #18991: [FLINK-26279][runtime] Add mailbox delay and throughput metrics

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18991:
URL: https://github.com/apache/flink/pull/18991#issuecomment-1060268498


   
   ## CI report:
   
   * de1583bca2dc4062ce5773f7dfaaeb532e1c1e27 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26545) Ingress rules should be created in the same namespace with FlinkDeployment CR

2022-03-09 Thread Matyas Orhidi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504067#comment-17504067
 ] 

Matyas Orhidi commented on FLINK-26545:
---

[~wangyang0918] you can enable it from Helm.

> Ingress rules should be created in the same namespace with FlinkDeployment CR
> -
>
> Key: FLINK-26545
> URL: https://issues.apache.org/jira/browse/FLINK-26545
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Matyas Orhidi
>Priority: Major
>
> Currently, the ingress rules are always created in the operator 
> namespace (e.g. default). This does not work when the FlinkDeployment CR is 
> submitted in a different namespace (e.g. flink-test).
> See [1] for why it does not work.
>  
> [1]. 
> https://stackoverflow.com/questions/59844622/ingress-configuration-for-k8s-in-different-namespaces
>  
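The fix the issue asks for amounts to creating the Ingress in the CR's namespace, since an Ingress backend can only reference Services in its own namespace. A hypothetical sketch of the result (resource names invented, not the operator's actual template):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-example
  namespace: flink-test        # must match the FlinkDeployment CR's namespace
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: basic-example-rest   # REST service lives in the same namespace
                port:
                  number: 8081
```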



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] Kwafoor commented on pull request #18498: [FLINK-25801][ metrics]add cpu processor metric of taskmanager

2022-03-09 Thread GitBox


Kwafoor commented on pull request #18498:
URL: https://github.com/apache/flink/pull/18498#issuecomment-1063763070


   > This to me sounds more like something that should be logged; we generally 
avoid exposing inherently constant values as metrics.
   
   These are not inherently constant values. This metric reflects the 
TaskManager's currently used cores, which can guide adjusting the TaskManager 
instance parameters.






[GitHub] [flink] flinkbot edited a comment on pull request #18991: [FLINK-26279][runtime] Add mailbox delay and throughput metrics

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18991:
URL: https://github.com/apache/flink/pull/18991#issuecomment-1060268498


   
   ## CI report:
   
   * de1583bca2dc4062ce5773f7dfaaeb532e1c1e27 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


JingsongLi commented on a change in pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40#discussion_r823433452



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/TestDataReadWrite.java
##
@@ -75,20 +77,19 @@ public FileStoreRead createRead() {
 }
 
 public List writeFiles(
-BinaryRowData partition, int bucket, List> kvs)
-throws Exception {
+BinaryRowData partition, int bucket, List> kvs) 
throws Exception {
 Preconditions.checkNotNull(
 service, "ExecutorService must be provided if writeFiles is 
needed");
 RecordWriter writer = createMergeTreeWriter(partition, bucket);
-for (Tuple2 tuple2 : kvs) {
+for (Tuple2 tuple2 : kvs) {
 writer.write(ValueKind.ADD, GenericRowData.of(tuple2.f0), 
GenericRowData.of(tuple2.f1));
 }
 List files = writer.prepareCommit().newFiles();
 writer.close();
 return new ArrayList<>(files);
 }
 
-private RecordWriter createMergeTreeWriter(BinaryRowData partition, int 
bucket) {
+public RecordWriter createMergeTreeWriter(BinaryRowData partition, int 
bucket) {

Review comment:
   This is not meant to measure the underlying `merge`, so no duplicate keys 
will appear, and this class would be unnecessary.








[GitHub] [flink] flinkbot edited a comment on pull request #18989: [FLINK-26306][state/changelog] Randomly offset materialization

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18989:
URL: https://github.com/apache/flink/pull/18989#issuecomment-1060121028


   
   ## CI report:
   
   * 1da5d40f751ebb3d9bc2382fb7ef5ab52c69bf4f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32656)
 
   * 54b33720f0d51945cf58fabf4c3c6945e178c1ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32811)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


JingsongLi commented on a change in pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40#discussion_r823432457



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitReaderTest.java
##
@@ -64,32 +67,92 @@ public static void after() {
 }
 
 @Test
-public void testKeyAsRecord() throws Exception {
-innerTestOnce(true);
+public void testPrimaryKey() throws Exception {
+innerTestOnce(false, 0);
+}
+
+@Test
+public void testValueCount() throws Exception {
+innerTestOnce(true, 0);

Review comment:
   value is a bigint.
   - has pk, value is the record
   - value count, value is the record count








[jira] [Updated] (FLINK-26574) Allow defining Operator configuration in Helm chart Values

2022-03-09 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-26574:
---
Description: 
Currently the Helm chart hardcodes the flink-operator-config and 
flink-default-config configmaps that contain the configurations (logging and 
yaml).

We should allow users to define the contents of these config files directly 
in the Helm Values yaml.

We should also probably rename the flink-conf.yaml config key to 
operator-conf.yaml in flink-operator-config so it is not accidentally 
confused with Flink's own configuration.

  was:
Currently the Helm chart hardcodes the flink-operator-config configmap that 
contains the operator configurations (logging and yaml).

We should allow users to define the contents of these config files directly 
in the Helm Values yaml.

We should also probably rename the flink-conf.yaml config key to 
operator-conf.yaml so it is not accidentally confused with Flink's own 
configuration.


> Allow defining Operator configuration in Helm chart Values
> 
>
> Key: FLINK-26574
> URL: https://issues.apache.org/jira/browse/FLINK-26574
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Currently the Helm chart hardcodes the flink-operator-config and 
> flink-default-config configmaps that contain the configurations (logging 
> and yaml).
> We should allow users to define the contents of these config files 
> directly in the Helm Values yaml.
> We should also probably rename the flink-conf.yaml config key to 
> operator-conf.yaml in flink-operator-config so it is not accidentally 
> confused with Flink's own configuration.
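Purely as a sketch, the Values-driven configuration could look roughly like this (the key names are hypothetical, not the chart's current schema):

```yaml
# values.yaml -- illustrative structure only
operatorConfiguration:
  operator-conf.yaml: |
    # operator settings (renamed from flink-conf.yaml to avoid confusion)
    some.operator.option: value
  log4j2.properties: |
    rootLogger.level = INFO
defaultConfiguration:
  flink-conf.yaml: |
    # defaults applied to FlinkDeployments
    taskmanager.numberOfTaskSlots: 2
```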





[jira] [Updated] (FLINK-26537) Allow disabling StatefulFunctionsConfigValidator validation for classloader.parent-first-patterns.additional

2022-03-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26537:
---
Labels: pull-request-available  (was: )

> Allow disabling StatefulFunctionsConfigValidator validation for 
> classloader.parent-first-patterns.additional
> 
>
> Key: FLINK-26537
> URL: https://issues.apache.org/jira/browse/FLINK-26537
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Fil Karnicki
>Assignee: Fil Karnicki
>Priority: Major
>  Labels: pull-request-available
>
> For some deployments of stateful functions onto existing, shared clusters, it 
> is impossible to tailor which classes exist on the classpath. An example 
> would be a Cloudera Flink cluster, which adds protobuf-java classes that 
> clash with statefun ones.
> Stateful functions require the following flink-conf.yaml setting:
> {{classloader.parent-first-patterns.additional: 
> org.apache.flink.statefun;org.apache.kafka;{+}*com.google.protobuf*{+}}} 
> In the case of a Cloudera Flink cluster, this causes old 2.5.0 
> protobuf classes to be loaded by statefun, which leads to MethodNotFound 
> exceptions. 
> The idea is to allow disabling the validation below when some config 
> setting is present in the global Flink configuration, for example: 
> statefun.validation.parent-first-classloader.disable=true
>  
> {code:java}
> private static void validateParentFirstClassloaderPatterns(Configuration 
> configuration) {
>   final Set<String> parentFirstClassloaderPatterns =
>   parentFirstClassloaderPatterns(configuration);
>   if 
> (!parentFirstClassloaderPatterns.containsAll(PARENT_FIRST_CLASSLOADER_PATTERNS))
>  {
> throw new StatefulFunctionsInvalidConfigException(
> CoreOptions.ALWAYS_PARENT_FIRST_LOADER_PATTERNS_ADDITIONAL,
> "Must contain all of " + String.join(", ", 
> PARENT_FIRST_CLASSLOADER_PATTERNS));
>   }
> } {code}
>  
> Then, we wouldn't need to include com.google.protobuf in 
> {{classloader.parent-first-patterns.additional}} and it would be up to the 
> statefun user to use protobuf classes that are compatible with their version 
> of statefun.
>  
>  
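The proposed gate can be sketched as follows, assuming a boolean flag short-circuits the check. The flag name is the one proposed in the ticket; the class, method, and pattern handling here are illustrative, not statefun's actual code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ValidationGateSketch {

    // Patterns the validator normally insists on (shown as plain strings).
    static final Set<String> REQUIRED = new HashSet<>(Arrays.asList(
            "org.apache.flink.statefun", "org.apache.kafka", "com.google.protobuf"));

    // Returns true if the configuration fails validation. The disable flag
    // name comes from the ticket; it is not an existing statefun option.
    static boolean violates(Map<String, String> conf) {
        boolean disabled = Boolean.parseBoolean(
                conf.getOrDefault("statefun.validation.parent-first-classloader.disable", "false"));
        if (disabled) {
            return false; // user opted out, e.g. on a shared cluster with its own protobuf
        }
        List<String> patterns = Arrays.asList(
                conf.getOrDefault("classloader.parent-first-patterns.additional", "").split(";"));
        return !patterns.containsAll(REQUIRED);
    }

    public static void main(String[] args) {
        // protobuf missing from the additional patterns -> validation fails
        System.out.println(violates(Map.of(
                "classloader.parent-first-patterns.additional",
                "org.apache.flink.statefun;org.apache.kafka"))); // prints "true"
    }
}
```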





[GitHub] [flink-statefun] FilKarnicki opened a new pull request #307: [FLINK-26537][statefun] Allow disabling StatefulFunctionsConfigValida…

2022-03-09 Thread GitBox


FilKarnicki opened a new pull request #307:
URL: https://github.com/apache/flink-statefun/pull/307


   ### What is the purpose of the change
   The goal of this PR is to add the ability to declare that a statefun job is 
run from an uber jar, which then disables the validation of the 
`classloader.parent-first-patterns.additional` flink-conf.yaml setting.
   
   It is then up to the creator of the uber jar to make sure that statefun, 
kafka and protobuf versions don't clash.
   
   ### Main changes are:
   - a new EMBEDDED config option was added (statefun.embedded)
   - if the configuration does _not_ have this EMBEDDED option, classloader 
validation happens
   - a test util for catching throwables from a runnable so that they can be 
inspected
   
   ### Verifying this change
   Added a test for when the statefun.embedded option is and isn't present
   
   Dependencies (does it add or upgrade a dependency): no
   The public API, i.e., is any changed class annotated with @public(Evolving): 
N/A
   The serializers: no
   The runtime per-record code paths (performance sensitive): no
   Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
   The S3 file system connector: no






[jira] [Commented] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504053#comment-17504053
 ] 

Roman Khachatryan commented on FLINK-26573:
---

[~gaoyunhaii], yes, I think FLINK-26318 is similar.
[~masteryhx], could you please take a look?

> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
> ---
>
> Key: FLINK-26573
> URL: https://issues.apache.org/jira/browse/FLINK-26573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-03-09T18:27:58.5345435Z Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 12.81 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> 2022-03-09T18:27:58.5348017Z Mar 09 18:27:58 [ERROR] 
> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
> 5.292 s  <<< ERROR!
> 2022-03-09T18:27:58.5349888Z Mar 09 18:27:58 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
> backoffTimeMS=0)
> 2022-03-09T18:27:58.5351920Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> 2022-03-09T18:27:58.5354136Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> 2022-03-09T18:27:58.5355966Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
> 2022-03-09T18:27:58.5357860Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
> 2022-03-09T18:27:58.5359532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
> 2022-03-09T18:27:58.5361149Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
> 2022-03-09T18:27:58.5362868Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> 2022-03-09T18:27:58.5365410Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> 2022-03-09T18:27:58.5366584Z Mar 09 18:27:58  at 
> sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> 2022-03-09T18:27:58.5367849Z Mar 09 18:27:58  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-03-09T18:27:58.5371509Z Mar 09 18:27:58  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-03-09T18:27:58.5372555Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> 2022-03-09T18:27:58.5373584Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> 2022-03-09T18:27:58.5374532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> 2022-03-09T18:27:58.5375556Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> 2022-03-09T18:27:58.5376329Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> 2022-03-09T18:27:58.5377045Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> 2022-03-09T18:27:58.5377845Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> 2022-03-09T18:27:58.5378453Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5379053Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> 2022-03-09T18:27:58.5379654Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> 2022-03-09T18:27:58.5380478Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5381228Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> 2022-03-09T18:27:58.5381918Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-03-09T18:27:58.5382540Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-03-09T18:27:58.5383115Z Mar 09 18:27:58  at 

[jira] [Created] (FLINK-26574) Allow defining Operator configuration in Helm chart Values

2022-03-09 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-26574:
--

 Summary: Allow defining Operator configuration in Helm chart 
Values
 Key: FLINK-26574
 URL: https://issues.apache.org/jira/browse/FLINK-26574
 Project: Flink
  Issue Type: Sub-task
  Components: Kubernetes Operator
Reporter: Gyula Fora


Currently the Helm chart hardcodes the flink-operator-config configmap that 
contains the operator configurations (logging and yaml).

We should allow users to define the contents of these config files directly 
in the Helm Values yaml.

We should also probably rename the flink-conf.yaml config key to 
operator-conf.yaml so it is not accidentally confused with Flink's own 
configuration.





[GitHub] [flink] flinkbot edited a comment on pull request #19017: [FLINK-26542][hive] fix "schema of both sides of union should match" exception with Hive dialect

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19017:
URL: https://github.com/apache/flink/pull/19017#issuecomment-1062742669


   
   ## CI report:
   
   * 8f35bb7e2717b5776683522b074eebabe2824734 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32796)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18989: [FLINK-26306][state/changelog] Randomly offset materialization

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18989:
URL: https://github.com/apache/flink/pull/18989#issuecomment-1060121028


   
   ## CI report:
   
   * 1da5d40f751ebb3d9bc2382fb7ef5ab52c69bf4f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32656)
 
   * 54b33720f0d51945cf58fabf4c3c6945e178c1ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] smattheis commented on pull request #18991: [FLINK-26279][runtime] Add mailbox delay and throughput metrics

2022-03-09 Thread GitBox


smattheis commented on pull request #18991:
URL: https://github.com/apache/flink/pull/18991#issuecomment-1063757856


   @flinkbot run azure






[GitHub] [flink-table-store] LadyForest commented on a change in pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


LadyForest commented on a change in pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40#discussion_r823428099



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitReaderTest.java
##
@@ -64,32 +67,92 @@ public static void after() {
 }
 
 @Test
-public void testKeyAsRecord() throws Exception {
-innerTestOnce(true);
+public void testPrimaryKey() throws Exception {
+innerTestOnce(false, 0);
+}
+
+@Test
+public void testValueCount() throws Exception {
+innerTestOnce(true, 0);

Review comment:
   Correct me if I'm wrong. I'm a little confused that `innerTestOnce` 
relies on `TestDataReadWrite`, but the latter hardcodes the key type, value 
type, and accumulator type to the pk situation. How does it work to test the 
value count (no pk) condition?








[GitHub] [flink-table-store] LadyForest commented on a change in pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


LadyForest commented on a change in pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40#discussion_r823428099



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitReaderTest.java
##
@@ -64,32 +67,92 @@ public static void after() {
 }
 
 @Test
-public void testKeyAsRecord() throws Exception {
-innerTestOnce(true);
+public void testPrimaryKey() throws Exception {
+innerTestOnce(false, 0);
+}
+
+@Test
+public void testValueCount() throws Exception {
+innerTestOnce(true, 0);

Review comment:
   Correct me if I'm wrong. `innerTestOnce` relies on `TestDataReadWrite`, 
but the latter hardcodes the key type, value type, and accumulator type to the 
pk situation.








[GitHub] [flink] rkhachatryan commented on a change in pull request #18989: [FLINK-26306][state/changelog] Randomly offset materialization

2022-03-09 Thread GitBox


rkhachatryan commented on a change in pull request #18989:
URL: https://github.com/apache/flink/pull/18989#discussion_r823426495



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/PeriodicMaterializationManager.java
##
@@ -96,6 +102,11 @@
 Executors.newSingleThreadScheduledExecutor(
 new ExecutorThreadFactory(
 "periodic-materialization-scheduler-" + 
subtaskName));
+
+this.initialDelay =
+// randomize initial delay to avoid thundering herd problem
+MathUtils.murmurHash(Objects.hash(operatorId, subtaskIndex))

Review comment:
   Good point, I'll remove `subtaskIndex`.
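The randomized initial delay being discussed in the diff can be sketched as follows; `mix` is a generic murmur-style finalizer standing in for Flink's `MathUtils.murmurHash`, and all names here are illustrative:

```java
import java.util.Objects;

public class InitialDelaySketch {

    // Stand-in for Flink's MathUtils.murmurHash: any stable int -> int
    // mixing function works for this sketch (this one is murmur3's fmix32).
    static int mix(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Spread the first materialization of each operator across [0, periodMs)
    // so that tasks restored at the same instant do not all materialize
    // together (the thundering herd the comment mentions).
    static long initialDelayMs(String operatorId, int subtaskIndex, long periodMs) {
        int hash = mix(Objects.hash(operatorId, subtaskIndex));
        return Math.floorMod(hash, periodMs); // non-negative even for negative hashes
    }

    public static void main(String[] args) {
        long d = initialDelayMs("op-1", 3, 600_000L);
        System.out.println(d >= 0 && d < 600_000L); // prints "true": always within one period
    }
}
```

The delay is deterministic per (operator, subtask) pair, so restarts of the same task keep the same offset rather than reshuffling the schedule.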








[jira] [Commented] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504049#comment-17504049
 ] 

Yun Gao commented on FLINK-26573:
-

Perhaps cc [~roman] [~ym]

> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
> ---
>
> Key: FLINK-26573
> URL: https://issues.apache.org/jira/browse/FLINK-26573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-03-09T18:27:58.5345435Z Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 12.81 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> 2022-03-09T18:27:58.5348017Z Mar 09 18:27:58 [ERROR] 
> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
> 5.292 s  <<< ERROR!
> 2022-03-09T18:27:58.5349888Z Mar 09 18:27:58 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
> backoffTimeMS=0)
> 2022-03-09T18:27:58.5351920Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> 2022-03-09T18:27:58.5354136Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> 2022-03-09T18:27:58.5355966Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
> 2022-03-09T18:27:58.5357860Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
> 2022-03-09T18:27:58.5359532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
> 2022-03-09T18:27:58.5361149Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
> 2022-03-09T18:27:58.5362868Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> 2022-03-09T18:27:58.5365410Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> 2022-03-09T18:27:58.5366584Z Mar 09 18:27:58  at 
> sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> 2022-03-09T18:27:58.5367849Z Mar 09 18:27:58  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-03-09T18:27:58.5371509Z Mar 09 18:27:58  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-03-09T18:27:58.5372555Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> 2022-03-09T18:27:58.5373584Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> 2022-03-09T18:27:58.5374532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> 2022-03-09T18:27:58.5375556Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> 2022-03-09T18:27:58.5376329Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> 2022-03-09T18:27:58.5377045Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> 2022-03-09T18:27:58.5377845Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> 2022-03-09T18:27:58.5378453Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5379053Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> 2022-03-09T18:27:58.5379654Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> 2022-03-09T18:27:58.5380478Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5381228Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> 2022-03-09T18:27:58.5381918Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-03-09T18:27:58.5382540Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-03-09T18:27:58.5383115Z Mar 09 18:27:58  at 
> akka.actor.Actor.aroundReceive(Actor.scala:537)
> 2022-03-09T18:27:58.5383652Z Mar 09 

[jira] [Comment Edited] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504049#comment-17504049
 ] 

Yun Gao edited comment on FLINK-26573 at 3/10/22, 7:43 AM:
---

Perhaps cc [~roman] [~ym] 

Is it the same issue as https://issues.apache.org/jira/browse/FLINK-26318 ? 


was (Author: gaoyunhaii):
Perhaps cc [~roman] [~ym]

> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
> ---
>
> Key: FLINK-26573
> URL: https://issues.apache.org/jira/browse/FLINK-26573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-03-09T18:27:58.5345435Z Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 12.81 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> 2022-03-09T18:27:58.5348017Z Mar 09 18:27:58 [ERROR] 
> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
> 5.292 s  <<< ERROR!
> 2022-03-09T18:27:58.5349888Z Mar 09 18:27:58 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
> backoffTimeMS=0)
> 2022-03-09T18:27:58.5351920Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> 2022-03-09T18:27:58.5354136Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> 2022-03-09T18:27:58.5355966Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
> 2022-03-09T18:27:58.5357860Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
> 2022-03-09T18:27:58.5359532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
> 2022-03-09T18:27:58.5361149Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
> 2022-03-09T18:27:58.5362868Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> 2022-03-09T18:27:58.5365410Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> 2022-03-09T18:27:58.5366584Z Mar 09 18:27:58  at 
> sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> 2022-03-09T18:27:58.5367849Z Mar 09 18:27:58  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-03-09T18:27:58.5371509Z Mar 09 18:27:58  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-03-09T18:27:58.5372555Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> 2022-03-09T18:27:58.5373584Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> 2022-03-09T18:27:58.5374532Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> 2022-03-09T18:27:58.5375556Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> 2022-03-09T18:27:58.5376329Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> 2022-03-09T18:27:58.5377045Z Mar 09 18:27:58  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> 2022-03-09T18:27:58.5377845Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> 2022-03-09T18:27:58.5378453Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5379053Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> 2022-03-09T18:27:58.5379654Z Mar 09 18:27:58  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> 2022-03-09T18:27:58.5380478Z Mar 09 18:27:58  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> 2022-03-09T18:27:58.5381228Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> 2022-03-09T18:27:58.5381918Z Mar 09 18:27:58  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-03-09T18:27:58.5382540Z Mar 09 18:27:58  at 
> 

[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823425316



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScalerParams.java
##
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.ml.common.param.HasFeaturesCol;
+import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+/**
+ * Params for {@link MinMaxScaler}.
+ *
+ * @param  The class type of this instance.
+ */
+public interface MinMaxScalerParams<T> extends HasFeaturesCol<T>, HasOutputCol<T> {
+Param<Double> MAX =
+new DoubleParam(
+"max", "Upper bound after transformation.", 1.0, 
ParamValidators.notNull());
+
+default Double getMax() {
+return get(MAX);
+}
+
+default T setMax(Double value) {
+return set(MAX, value);
+}
+
+Param<Double> MIN =
+new DoubleParam(
+"min", "Lower bound after transformation.", 0.0, 
ParamValidators.notNull());
+
+default Double getMIN() {
+return get(MIN);
+}
+
+default T setMIN(Double value) {

Review comment:
   ditto.
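For readers skimming the thread: the `max`/`min` params above define the target range of the transformation. The following is a standalone sketch of the rescaling formula implied by the parameter descriptions ("Upper/Lower bound after transformation"); the class and method names here are illustrative, not taken from the PR:

```java
public class MinMaxScale {
    // Rescales x from the observed per-feature data range [dataMin, dataMax]
    // into the configured target range [targetMin, targetMax].
    static double scale(double x, double dataMin, double dataMax,
                        double targetMin, double targetMax) {
        return (x - dataMin) / (dataMax - dataMin) * (targetMax - targetMin) + targetMin;
    }

    public static void main(String[] args) {
        // With the PR's defaults min=0.0, max=1.0, a feature observed in [2, 6]:
        System.out.println(scale(2.0, 2.0, 6.0, 0.0, 1.0)); // 0.0
        System.out.println(scale(4.0, 2.0, 6.0, 0.0, 1.0)); // 0.5
        System.out.println(scale(6.0, 2.0, 6.0, 0.0, 1.0)); // 1.0
    }
}
```

Note this sketch assumes `dataMax > dataMin`; a real implementation would need to handle constant features separately.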




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26573:

Description: 
{code:java}
2022-03-09T18:27:58.5345435Z Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 0, 
Errors: 1, Skipped: 0, Time elapsed: 12.81 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
2022-03-09T18:27:58.5348017Z Mar 09 18:27:58 [ERROR] 
ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
5.292 s  <<< ERROR!
2022-03-09T18:27:58.5349888Z Mar 09 18:27:58 
org.apache.flink.runtime.JobException: Recovery is suppressed by 
FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
backoffTimeMS=0)
2022-03-09T18:27:58.5351920Z Mar 09 18:27:58at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
2022-03-09T18:27:58.5354136Z Mar 09 18:27:58at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
2022-03-09T18:27:58.5355966Z Mar 09 18:27:58at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
2022-03-09T18:27:58.5357860Z Mar 09 18:27:58at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
2022-03-09T18:27:58.5359532Z Mar 09 18:27:58at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
2022-03-09T18:27:58.5361149Z Mar 09 18:27:58at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
2022-03-09T18:27:58.5362868Z Mar 09 18:27:58at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
2022-03-09T18:27:58.5365410Z Mar 09 18:27:58at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
2022-03-09T18:27:58.5366584Z Mar 09 18:27:58at 
sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
2022-03-09T18:27:58.5367849Z Mar 09 18:27:58at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-03-09T18:27:58.5371509Z Mar 09 18:27:58at 
java.lang.reflect.Method.invoke(Method.java:498)
2022-03-09T18:27:58.5372555Z Mar 09 18:27:58at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
2022-03-09T18:27:58.5373584Z Mar 09 18:27:58at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
2022-03-09T18:27:58.5374532Z Mar 09 18:27:58at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
2022-03-09T18:27:58.5375556Z Mar 09 18:27:58at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
2022-03-09T18:27:58.5376329Z Mar 09 18:27:58at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
2022-03-09T18:27:58.5377045Z Mar 09 18:27:58at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
2022-03-09T18:27:58.5377845Z Mar 09 18:27:58at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
2022-03-09T18:27:58.5378453Z Mar 09 18:27:58at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
2022-03-09T18:27:58.5379053Z Mar 09 18:27:58at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
2022-03-09T18:27:58.5379654Z Mar 09 18:27:58at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
2022-03-09T18:27:58.5380478Z Mar 09 18:27:58at 
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
2022-03-09T18:27:58.5381228Z Mar 09 18:27:58at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
2022-03-09T18:27:58.5381918Z Mar 09 18:27:58at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
2022-03-09T18:27:58.5382540Z Mar 09 18:27:58at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
2022-03-09T18:27:58.5383115Z Mar 09 18:27:58at 
akka.actor.Actor.aroundReceive(Actor.scala:537)
2022-03-09T18:27:58.5383652Z Mar 09 18:27:58at 
akka.actor.Actor.aroundReceive$(Actor.scala:535)
2022-03-09T18:27:58.5384420Z Mar 09 18:27:58at 
akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
2022-03-09T18:27:58.5385037Z Mar 09 18:27:58at 
akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
2022-03-09T18:27:58.5385594Z Mar 09 18:27:58at 
akka.actor.ActorCell.invoke(ActorCell.scala:548)
2022-03-09T18:27:58.5386182Z Mar 09 18:27:58at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
2022-03-09T18:27:58.5386751Z Mar 09 18:27:58at 
akka.dispatch.Mailbox.run(Mailbox.scala:231)
2022-03-09T18:27:58.5387291Z Mar 09 

[jira] [Updated] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26573:

Priority: Critical  (was: Major)

> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
> ---
>
> Key: FLINK-26573
> URL: https://issues.apache.org/jira/browse/FLINK-26573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> Mar 09 18:27:45 [INFO] Running 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 12.81 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> Mar 09 18:27:58 [ERROR] 
> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
> 5.292 s  <<< ERROR!
> Mar 09 18:27:58 org.apache.flink.runtime.JobException: Recovery is suppressed 
> by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
> backoffTimeMS=0)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> Mar 09 18:27:58   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown 
> Source)
> Mar 09 18:27:58   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 09 18:27:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> Mar 09 18:27:58   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> Mar 09 18:27:58   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> Mar 09 18:27:58   at akka.actor.Actor.aroundReceive(Actor.scala:537)
> Mar 09 18:27:58   at akka.actor.Actor.aroundReceive$(Actor.scala:535)
> Mar 09 18:27:58   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
> Mar 09 18:27:58   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
> Mar 09 18:27:58   at akka.actor.ActorCell.invoke(ActorCell.scala:548)
> Mar 09 18:27:58   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
> Mar 09 18:27:58   at akka.dispatch.Mailbox.run(Mailbox.scala:231)
> Mar 09 18:27:58   at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
> Mar 09 18:27:58   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 09 

[jira] [Commented] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504046#comment-17504046
 ] 

Yun Gao commented on FLINK-26573:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32793=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=5674

> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
> ---
>
> Key: FLINK-26573
> URL: https://issues.apache.org/jira/browse/FLINK-26573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Mar 09 18:27:45 [INFO] Running 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 12.81 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
> Mar 09 18:27:58 [ERROR] 
> ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
> 5.292 s  <<< ERROR!
> Mar 09 18:27:58 org.apache.flink.runtime.JobException: Recovery is suppressed 
> by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
> backoffTimeMS=0)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> Mar 09 18:27:58   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown 
> Source)
> Mar 09 18:27:58   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 09 18:27:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> Mar 09 18:27:58   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> Mar 09 18:27:58   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> Mar 09 18:27:58   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> Mar 09 18:27:58   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> Mar 09 18:27:58   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> Mar 09 18:27:58   at akka.actor.Actor.aroundReceive(Actor.scala:537)
> Mar 09 18:27:58   at akka.actor.Actor.aroundReceive$(Actor.scala:535)
> Mar 09 18:27:58   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
> Mar 09 18:27:58   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
> Mar 09 18:27:58   at akka.actor.ActorCell.invoke(ActorCell.scala:548)
> Mar 09 18:27:58   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
> Mar 09 18:27:58   at akka.dispatch.Mailbox.run(Mailbox.scala:231)
> Mar 09 18:27:58   

[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823423537



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** An Estimator which implements the MinMaxScaler algorithm. */
+public class MinMaxScaler
+implements Estimator<MinMaxScaler, MinMaxScalerModel>, MinMaxScalerParams<MinMaxScaler> {
+private final Map<Param<?>, Object> paramMap = new HashMap<>();
+
+public MinMaxScaler() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel fit(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors =
+computeMinMaxVectors(tEnv.toDataStream(inputs[0]), 
getFeaturesCol());
+DataStream<MinMaxScalerModelData> modelData = genModelData(minMaxVectors);
+MinMaxScalerModel model =
+new 
MinMaxScalerModel().setModelData(tEnv.fromDataStream(modelData));
+ReadWriteUtils.updateExistingParams(model, getParamMap());
+return model;
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static MinMaxScaler load(StreamExecutionEnvironment env, String 
path)
+throws IOException {
+return ReadWriteUtils.loadStageParam(path);
+}
+
+/**
+ * Generates minMax scaler model data.
+ *
+ * @param minMaxVectors Input distributed minMaxVectors.
+ * @return MinMax scaler model data.
+ */
+private static DataStream<MinMaxScalerModelData> genModelData(
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors) {
+DataStream<MinMaxScalerModelData> modelData =
+DataStreamUtils.mapPartition(

Review comment:
   This mapPartition only processes a small number of records; each worker holds 
just one record, so I don't think there is an efficiency problem here.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26573) ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure

2022-03-09 Thread Yun Gao (Jira)
Yun Gao created FLINK-26573:
---

 Summary: 
ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn failed on azure
 Key: FLINK-26573
 URL: https://issues.apache.org/jira/browse/FLINK-26573
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.15.0
Reporter: Yun Gao



{code:java}
Mar 09 18:27:45 [INFO] Running 
org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
Mar 09 18:27:58 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 12.81 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.ChangelogPeriodicMaterializationRescaleITCase
Mar 09 18:27:58 [ERROR] 
ChangelogPeriodicMaterializationRescaleITCase.testRescaleIn  Time elapsed: 
5.292 s  <<< ERROR!
Mar 09 18:27:58 org.apache.flink.runtime.JobException: Recovery is suppressed 
by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=0, 
backoffTimeMS=0)
Mar 09 18:27:58 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
Mar 09 18:27:58 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
Mar 09 18:27:58 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
Mar 09 18:27:58 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
Mar 09 18:27:58 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
Mar 09 18:27:58 at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
Mar 09 18:27:58 at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
Mar 09 18:27:58 at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
Mar 09 18:27:58 at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown 
Source)
Mar 09 18:27:58 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 09 18:27:58 at java.lang.reflect.Method.invoke(Method.java:498)
Mar 09 18:27:58 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
Mar 09 18:27:58 at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
Mar 09 18:27:58 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
Mar 09 18:27:58 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
Mar 09 18:27:58 at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
Mar 09 18:27:58 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
Mar 09 18:27:58 at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
Mar 09 18:27:58 at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
Mar 09 18:27:58 at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
Mar 09 18:27:58 at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
Mar 09 18:27:58 at 
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
Mar 09 18:27:58 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
Mar 09 18:27:58 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
Mar 09 18:27:58 at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
Mar 09 18:27:58 at akka.actor.Actor.aroundReceive(Actor.scala:537)
Mar 09 18:27:58 at akka.actor.Actor.aroundReceive$(Actor.scala:535)
Mar 09 18:27:58 at 
akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
Mar 09 18:27:58 at 
akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
Mar 09 18:27:58 at akka.actor.ActorCell.invoke(ActorCell.scala:548)
Mar 09 18:27:58 at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
Mar 09 18:27:58 at akka.dispatch.Mailbox.run(Mailbox.scala:231)
Mar 09 18:27:58 at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
Mar 09 18:27:58 at 
java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
Mar 09 18:27:58 at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)

{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32774=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=5593



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] Myasuka commented on pull request #19029: [FLINK-26418][runtime][test] Use java.io.tmpdir for tmpWorkingDir

2022-03-09 Thread GitBox


Myasuka commented on pull request #19029:
URL: https://github.com/apache/flink/pull/19029#issuecomment-1063749872


   I think this patch could resolve some of the problems, but I am not sure 
whether this fix prevents all local test directories from ending up in the 
working directory.
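As a generic illustration of the approach the PR title describes (creating transient test directories under `java.io.tmpdir` instead of the process working directory, so they do not pollute the checkout); this sketch is not the PR's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TmpWorkingDirDemo {
    // Creates a transient test directory under the JVM's default temp root
    // (java.io.tmpdir) rather than the current working directory.
    static Path newTmpWorkingDir() throws IOException {
        return Files.createTempDirectory("flink-test-");
    }

    public static void main(String[] args) throws IOException {
        Path dir = newTmpWorkingDir();
        System.out.println(Files.isDirectory(dir)); // prints true
        Files.delete(dir); // clean up after the test
    }
}
```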


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] LadyForest commented on a change in pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


LadyForest commented on a change in pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40#discussion_r823420125



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/TestDataReadWrite.java
##
@@ -75,20 +77,19 @@ public FileStoreRead createRead() {
 }
 
 public List writeFiles(
-BinaryRowData partition, int bucket, List> kvs)
-throws Exception {
+BinaryRowData partition, int bucket, List> kvs) 
throws Exception {
 Preconditions.checkNotNull(
 service, "ExecutorService must be provided if writeFiles is 
needed");
 RecordWriter writer = createMergeTreeWriter(partition, bucket);
-for (Tuple2 tuple2 : kvs) {
+for (Tuple2 tuple2 : kvs) {
 writer.write(ValueKind.ADD, GenericRowData.of(tuple2.f0), 
GenericRowData.of(tuple2.f1));
 }
 List files = writer.prepareCommit().newFiles();
 writer.close();
 return new ArrayList<>(files);
 }
 
-private RecordWriter createMergeTreeWriter(BinaryRowData partition, int 
bucket) {
+public RecordWriter createMergeTreeWriter(BinaryRowData partition, int 
bucket) {

Review comment:
   I have a question here: the accumulator is always initialized as a 
`DeduplicateAccumulator` (L98), so can it be used in a value-count test?








[jira] [Created] (FLINK-26572) Re-schedule reconcile more often until job is in ready state

2022-03-09 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-26572:
--

 Summary: Re-schedule reconcile more often until job is in ready 
state
 Key: FLINK-26572
 URL: https://issues.apache.org/jira/browse/FLINK-26572
 Project: Flink
  Issue Type: Sub-task
  Components: Kubernetes Operator
Reporter: Gyula Fora


Currently we only use two reconcile reschedule configs: one specific to when 
the port is ready but we need to reschedule one more time (10 seconds by 
default), and a general reconcile reschedule delay (60 seconds).

We should introduce another setting to use when the job is in a deploying or 
savepointing state to allow for reaching the READY status faster.

We could call it: 
operator.observer.progress-check.interval.sec
or
operator.observer.operation-check.interval.sec

I suggest using 10 seconds by default here as well.
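
The proposal above can be sketched as a delay-selection rule. This is a minimal, illustrative sketch in plain Java; the class, enum, and state names are hypothetical and not the actual operator code:

```java
import java.time.Duration;

public class ReconcileDelayDemo {
    // Illustrative job states; the real operator's state model may differ.
    public enum JobState { DEPLOYING, SAVEPOINTING, READY }

    // Use a shorter re-check interval while an operation is in progress,
    // falling back to the general reconcile delay once the job is READY.
    public static Duration nextReconcileDelay(JobState state) {
        switch (state) {
            case DEPLOYING:
            case SAVEPOINTING:
                return Duration.ofSeconds(10); // proposed progress-check interval
            default:
                return Duration.ofSeconds(60); // general reconcile reschedule delay
        }
    }

    public static void main(String[] args) {
        System.out.println(nextReconcileDelay(JobState.DEPLOYING).getSeconds()); // 10
        System.out.println(nextReconcileDelay(JobState.READY).getSeconds());     // 60
    }
}
```

With this rule, a job that is deploying or savepointing is re-observed every 10 seconds, so it reaches the READY status faster than waiting out the full 60-second cycle.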



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823414741



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScalerParams.java
##
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.ml.common.param.HasFeaturesCol;
+import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+/**
+ * Params for {@link MinMaxScaler}.
+ *
+ * @param  The class type of this instance.
+ */
+public interface MinMaxScalerParams extends HasFeaturesCol, 
HasOutputCol {
+Param MAX =
+new DoubleParam(
+"max", "Upper bound after transformation.", 1.0, 
ParamValidators.notNull());
+
+default Double getMax() {
+return get(MAX);
+}
+
+default T setMax(Double value) {
+return set(MAX, value);
+}
+
+Param MIN =
+new DoubleParam(
+"min", "Lower bound after transformation.", 0.0, 
ParamValidators.notNull());
+
+default Double getMIN() {

Review comment:
   I defined the API following Alink and Spark.








[jira] [Created] (FLINK-26571) Savepoint trigger/tracking improvements

2022-03-09 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-26571:
--

 Summary: Savepoint trigger/tracking improvements
 Key: FLINK-26571
 URL: https://issues.apache.org/jira/browse/FLINK-26571
 Project: Flink
  Issue Type: Sub-task
  Components: Kubernetes Operator
Reporter: Gyula Fora


This Jira covers a few small fixes/improvements we should make to the manual 
savepoint trigger/tracking logic:

 - JobSpec.savepointTriggerNonce should be Long instead of long, with a null 
default value.

 - SavepointInfo.triggerTimestamp should be of Long type and nulled out together 
with triggerId when the savepoint is complete.
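
The reason for the boxed `Long` is that a primitive `long` cannot represent "no savepoint requested": its default of 0 would itself be a valid nonce value. A minimal sketch of the idea (field and method names here are illustrative, not the actual CRD classes):

```java
public class SavepointNonceDemo {
    // Boxed Long lets "not set" be represented as null, so any
    // user-provided value (including 0) can act as a trigger.
    private Long savepointTriggerNonce; // null => no manual savepoint requested

    public void setSavepointTriggerNonce(Long nonce) {
        this.savepointTriggerNonce = nonce;
    }

    // A savepoint should be triggered when a nonce is set and it differs
    // from the last nonce that was already reconciled.
    public boolean shouldTriggerSavepoint(Long lastReconciledNonce) {
        return savepointTriggerNonce != null
                && !savepointTriggerNonce.equals(lastReconciledNonce);
    }

    public static void main(String[] args) {
        SavepointNonceDemo spec = new SavepointNonceDemo();
        System.out.println(spec.shouldTriggerSavepoint(null)); // false: nothing requested
        spec.setSavepointTriggerNonce(1L);
        System.out.println(spec.shouldTriggerSavepoint(null)); // true: new nonce observed
        System.out.println(spec.shouldTriggerSavepoint(1L));   // false: already reconciled
    }
}
```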





[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823414091



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** An Estimator which implements the MinMaxScaler algorithm. */

Review comment:
   OK








[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823413985



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** An Estimator which implements the MinMaxScaler algorithm. */
+public class MinMaxScaler
+implements Estimator, 
MinMaxScalerParams {
+private final Map, Object> paramMap = new HashMap<>();
+
+public MinMaxScaler() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel fit(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream> minMaxVectors =
+computeMinMaxVectors(tEnv.toDataStream(inputs[0]), 
getFeaturesCol());
+DataStream modelData = 
genModelData(minMaxVectors);
+MinMaxScalerModel model =
+new 
MinMaxScalerModel().setModelData(tEnv.fromDataStream(modelData));
+ReadWriteUtils.updateExistingParams(model, getParamMap());
+return model;
+}
+
+@Override
+public Map, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static MinMaxScaler load(StreamExecutionEnvironment env, String 
path)
+throws IOException {
+return ReadWriteUtils.loadStageParam(path);
+}
+
+/**
+ * Generates minMax scaler model data.
+ *
+ * @param minMaxVectors Input distributed minMaxVectors.
+ * @return MinMax scaler model data.
+ */
+private static DataStream genModelData(
+DataStream> minMaxVectors) {
+DataStream modelData =
+DataStreamUtils.mapPartition(
+minMaxVectors,
+new RichMapPartitionFunction<
+Tuple2, 
MinMaxScalerModelData>() {
+@Override
+public void mapPartition(
+Iterable> 
dataPoints,
+Collector out) {
+DenseVector minVector = null;
+DenseVector maxVector = null;
+int vecSize = 0;
+for (Tuple2 
dataPoint : dataPoints) {
+if (maxVector == null) {
+vecSize = dataPoint.f0.size();
+maxVector = dataPoint.f1;
+minVector = dataPoint.f0;
+}
+for (int i = 0; i < vecSize; ++i) {

Review comment:
   ok
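
The `mapPartition` body above accumulates elementwise minimum and maximum vectors over its partition. A self-contained sketch of that accumulation, using plain arrays instead of `DenseVector` (an editorial illustration, not the Flink ML code):

```java
import java.util.Arrays;
import java.util.List;

public class MinMaxAccumulator {
    // Elementwise min/max over a collection of equally sized vectors,
    // mirroring what the mapPartition function does per partition.
    public static double[][] minMax(List<double[]> vectors) {
        double[] min = null;
        double[] max = null;
        for (double[] v : vectors) {
            if (min == null) {
                // First vector seen: initialize both accumulators from it.
                min = v.clone();
                max = v.clone();
                continue;
            }
            for (int i = 0; i < v.length; i++) {
                min[i] = Math.min(min[i], v[i]);
                max[i] = Math.max(max[i], v[i]);
            }
        }
        return new double[][] {min, max};
    }

    public static void main(String[] args) {
        double[][] mm = minMax(Arrays.asList(
                new double[] {0.0, 3.0},
                new double[] {2.1, 0.0},
                new double[] {200.0, 300.0}));
        System.out.println(Arrays.toString(mm[0])); // [0.0, 0.0]
        System.out.println(Arrays.toString(mm[1])); // [200.0, 300.0]
    }
}
```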





[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823413341



##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/feature/MinMaxScalerTest.java
##
@@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScaler;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScalerModel;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScalerModelData;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.linalg.Vectors;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.ml.util.StageTestUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.collections.IteratorUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/** Tests {@link MinMaxScaler} and {@link MinMaxScalerModel}. */
+public class MinMaxScalerTest {
+@Rule public final TemporaryFolder tempFolder = new TemporaryFolder();
+private StreamExecutionEnvironment env;
+private StreamTableEnvironment tEnv;
+private Table trainData;
+private Table predictData;
+private static final List trainRows =
+new ArrayList<>(
+Arrays.asList(
+Row.of(Vectors.dense(0.0, 3.0)),
+Row.of(Vectors.dense(2.1, 0.0)),
+Row.of(Vectors.dense(4.1, 5.1)),
+Row.of(Vectors.dense(6.1, 8.1)),
+Row.of(Vectors.dense(200, 300))));
+private static final List predictRows =
+new ArrayList<>(Collections.singletonList(Row.of(Vectors.dense(150.0, 90.0))));
+
+@Before
+public void before() {
+Configuration config = new Configuration();
+
config.set(ExecutionCheckpointingOptions.ENABLE_CHECKPOINTS_AFTER_TASKS_FINISH, 
true);
+env = StreamExecutionEnvironment.getExecutionEnvironment(config);
+env.setParallelism(4);
+env.enableCheckpointing(100);
+env.setRestartStrategy(RestartStrategies.noRestart());
+tEnv = StreamTableEnvironment.create(env);
+Schema schema = Schema.newBuilder().column("f0", 
DataTypes.of(DenseVector.class)).build();
+DataStream dataStream = env.fromCollection(trainRows);
+trainData = tEnv.fromDataStream(dataStream, schema).as("features");
+DataStream predDataStream = env.fromCollection(predictRows);
+predictData = tEnv.fromDataStream(predDataStream, 
schema).as("features");
+}
+
+private static void verifyPredictionResult(Table output, String outputCol) 
throws Exception {
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
output).getTableEnvironment();
+DataStream stream =
+tEnv.toDataStream(output)
+.map(
+(MapFunction)
+row -> (DenseVector) 
row.getField(outputCol));
+List result = 
IteratorUtils.toList(stream.executeAndCollect());
+for (DenseVector t2 : result) {
+assertEquals(Vectors.dense(0.75, 0.3), t2);
+}
+}
+
+@Test
+
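
The expected vector `(0.75, 0.3)` in `verifyPredictionResult` can be checked by hand from the train data above: the per-dimension minimum is (0.0, 0.0) and the maximum is (200, 300), so scaling the predict row (150.0, 90.0) into [0, 1] gives (150/200, 90/300). A hand-check sketch, independent of the Flink ML classes:

```java
public class MinMaxCheck {
    // Standard min-max scaling: x' = (x - min) / (max - min) * (tMax - tMin) + tMin
    public static double scale(double x, double min, double max,
                               double targetMin, double targetMax) {
        return (x - min) / (max - min) * (targetMax - targetMin) + targetMin;
    }

    public static void main(String[] args) {
        // train min = (0.0, 0.0), max = (200.0, 300.0); predict row = (150.0, 90.0)
        System.out.println(scale(150.0, 0.0, 200.0, 0.0, 1.0)); // 0.75
        System.out.println(scale(90.0, 0.0, 300.0, 0.0, 1.0));  // 0.3
    }
}
```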

[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823412958



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasOutputCol.java
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.StringParam;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared outputCol param. */
+public interface HasOutputCol extends WithParams {

Review comment:
   `predictionCol` may be better for ML algorithms; for feature engineering and 
data processing, `outputCol` may be better.








[GitHub] [flink] dawidwys commented on pull request #19029: [FLINK-26418][runtime][test] Use java.io.tmpdir for tmpWorkingDir

2022-03-09 Thread GitBox


dawidwys commented on pull request #19029:
URL: https://github.com/apache/flink/pull/19029#issuecomment-1063739269


   cc @Myasuka 






[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823410761



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScalerModel.java
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.api.Model;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.TableUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.table.runtime.typeutils.ExternalTypeInfo;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * A Model which does min-max scaling using the model data computed by 
{@link MinMaxScaler}.
+ */
+public class MinMaxScalerModel
+implements Model, 
MinMaxScalerParams {
+private final Map, Object> paramMap = new HashMap<>();
+private Table modelDataTable;
+
+public MinMaxScalerModel() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel setModelData(Table... inputs) {
+modelDataTable = inputs[0];
+return this;
+}
+
+@Override
+public Table[] getModelData() {
+return new Table[] {modelDataTable};
+}
+
+@Override
+@SuppressWarnings("unchecked")
+public Table[] transform(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream data = tEnv.toDataStream(inputs[0]);
+DataStream minMaxScalerModel =
+MinMaxScalerModelData.getModelDataStream(modelDataTable);
+final String broadcastModelKey = "broadcastModelKey";
+RowTypeInfo inputTypeInfo = 
TableUtils.getRowTypeInfo(inputs[0].getResolvedSchema());
+RowTypeInfo outputTypeInfo =
+new RowTypeInfo(
+ArrayUtils.addAll(
+inputTypeInfo.getFieldTypes(),
+ExternalTypeInfo.of(DenseVector.class)),
+ArrayUtils.addAll(inputTypeInfo.getFieldNames(), 
getOutputCol()));
+DataStream output =
+BroadcastUtils.withBroadcastStream(
+Collections.singletonList(data),
+Collections.singletonMap(broadcastModelKey, 
minMaxScalerModel),
+inputList -> {
+DataStream input = inputList.get(0);
+return input.map(
+new PredictLabelFunction(
+broadcastModelKey,
+getMax(),
+getMIN(),
+getFeaturesCol()),
+outputTypeInfo);
+});
+return new Table[] {tEnv.fromDataStream(output)};
+}
+
+@Override
+public Map, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+ReadWriteUtils.saveModelData(
+

[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823407289



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScalerModel.java
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.api.Model;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.TableUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.table.runtime.typeutils.ExternalTypeInfo;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * A Model which does min-max scaling using the model data computed by 
{@link MinMaxScaler}.
+ */
+public class MinMaxScalerModel
+implements Model, 
MinMaxScalerParams {
+private final Map, Object> paramMap = new HashMap<>();
+private Table modelDataTable;
+
+public MinMaxScalerModel() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel setModelData(Table... inputs) {
+modelDataTable = inputs[0];
+return this;
+}
+
+@Override
+public Table[] getModelData() {
+return new Table[] {modelDataTable};
+}
+
+@Override
+@SuppressWarnings("unchecked")
+public Table[] transform(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream data = tEnv.toDataStream(inputs[0]);
+DataStream minMaxScalerModel =
+MinMaxScalerModelData.getModelDataStream(modelDataTable);
+final String broadcastModelKey = "broadcastModelKey";
+RowTypeInfo inputTypeInfo = 
TableUtils.getRowTypeInfo(inputs[0].getResolvedSchema());
+RowTypeInfo outputTypeInfo =
+new RowTypeInfo(
+ArrayUtils.addAll(
+inputTypeInfo.getFieldTypes(),
+ExternalTypeInfo.of(DenseVector.class)),
+ArrayUtils.addAll(inputTypeInfo.getFieldNames(), 
getOutputCol()));
+DataStream output =
+BroadcastUtils.withBroadcastStream(
+Collections.singletonList(data),
+Collections.singletonMap(broadcastModelKey, 
minMaxScalerModel),
+inputList -> {
+DataStream input = inputList.get(0);
+return input.map(
+new PredictLabelFunction(
+broadcastModelKey,
+getMax(),
+getMIN(),
+getFeaturesCol()),
+outputTypeInfo);
+});
+return new Table[] {tEnv.fromDataStream(output)};
+}
+
+@Override
+public Map, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+ReadWriteUtils.saveModelData(
+

[GitHub] [flink] Myasuka commented on a change in pull request #18989: [FLINK-26306][state/changelog] Randomly offset materialization

2022-03-09 Thread GitBox


Myasuka commented on a change in pull request #18989:
URL: https://github.com/apache/flink/pull/18989#discussion_r823406204



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/PeriodicMaterializationManager.java
##
@@ -105,7 +116,7 @@ public void start() {
 
 LOG.info("Task {} starts periodic materialization", subtaskName);
 
-scheduleNextMaterialization();
+scheduleNextMaterialization(initialDelay);

Review comment:
   Shall we introduce another method 
`scheduleInitialMaterialization(initialDelay)` and change all calls of 
`scheduleNextMaterialization(0)` to `scheduleNextMaterialization()`? I think 
that would look better.

##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/PeriodicMaterializationManager.java
##
@@ -96,6 +102,11 @@
 Executors.newSingleThreadScheduledExecutor(
 new ExecutorThreadFactory(
 "periodic-materialization-scheduler-" + 
subtaskName));
+
+this.initialDelay =
+// randomize initial delay to avoid thundering herd problem
+MathUtils.murmurHash(Objects.hash(operatorId, subtaskIndex))

Review comment:
   The `operatorId`, which comes from `OperatorSubtaskDescriptionText`, 
already contains the `subtaskIndex` information, so why do we still need 
another `subtaskIndex` here?








[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32810)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823402845



##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/feature/MinMaxScalerTest.java
##
@@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScaler;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScalerModel;
+import org.apache.flink.ml.feature.minmaxscaler.MinMaxScalerModelData;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.linalg.Vectors;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.ml.util.StageTestUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.collections.IteratorUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+/** Tests {@link MinMaxScaler} and {@link MinMaxScalerModel}. */
+public class MinMaxScalerTest {
+@Rule public final TemporaryFolder tempFolder = new TemporaryFolder();
+private StreamExecutionEnvironment env;
+private StreamTableEnvironment tEnv;
+private Table trainData;
+private Table predictData;
+private static final List<Row> trainRows =
+new ArrayList<>(
+Arrays.asList(
+Row.of(Vectors.dense(0.0, 3.0)),
+Row.of(Vectors.dense(2.1, 0.0)),
+Row.of(Vectors.dense(4.1, 5.1)),
+Row.of(Vectors.dense(6.1, 8.1)),
+Row.of(Vectors.dense(200.0, 300.0))));
+private static final List<Row> predictRows =
+new ArrayList<>(Collections.singletonList(Row.of(Vectors.dense(150.0, 90.0))));
+
+@Before
+public void before() {
+Configuration config = new Configuration();
+
config.set(ExecutionCheckpointingOptions.ENABLE_CHECKPOINTS_AFTER_TASKS_FINISH, 
true);
+env = StreamExecutionEnvironment.getExecutionEnvironment(config);
+env.setParallelism(4);
+env.enableCheckpointing(100);
+env.setRestartStrategy(RestartStrategies.noRestart());
+tEnv = StreamTableEnvironment.create(env);
+Schema schema = Schema.newBuilder().column("f0", 
DataTypes.of(DenseVector.class)).build();
+DataStream<Row> dataStream = env.fromCollection(trainRows);
+trainData = tEnv.fromDataStream(dataStream, schema).as("features");
+DataStream<Row> predDataStream = env.fromCollection(predictRows);
+predictData = tEnv.fromDataStream(predDataStream, schema).as("features");
+}
+
+private static void verifyPredictionResult(Table output, String outputCol) 
throws Exception {
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
output).getTableEnvironment();
+DataStream<DenseVector> stream =
+tEnv.toDataStream(output)
+.map(
+(MapFunction<Row, DenseVector>)
+row -> (DenseVector) row.getField(outputCol));
+List<DenseVector> result = IteratorUtils.toList(stream.executeAndCollect());
+for (DenseVector t2 : result) {
+assertEquals(Vectors.dense(0.75, 0.3), t2);
+}
+}
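The expected vector (0.75, 0.3) asserted above follows from the standard min-max formula x' = (x - min) / (max - min): the training columns span [0, 200] and [0, 300], so the prediction row (150, 90) scales to (0.75, 0.3). A minimal standalone sketch of that arithmetic (plain Java, independent of the Flink ML API):

```java
public class MinMaxCheck {
    // Scales x into [0, 1] given the per-feature min and max observed in training.
    static double scale(double x, double min, double max) {
        return (x - min) / (max - min);
    }

    public static void main(String[] args) {
        // Training data spans [0, 200] for feature 0 and [0, 300] for feature 1.
        double f0 = scale(150.0, 0.0, 200.0); // 0.75
        double f1 = scale(90.0, 0.0, 300.0);  // 0.3
        System.out.println(f0 + " " + f1);
    }
}
```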
+
+@Test
+

[GitHub] [flink] matriv commented on pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


matriv commented on pull request #18980:
URL: https://github.com/apache/flink/pull/18980#issuecomment-1063728285


   @twalthr Please check, I hope I've addressed all the comments.






[GitHub] [flink-ml] weibozhao commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


weibozhao commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823401050



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** An Estimator which implements the MinMaxScaler algorithm. */
+public class MinMaxScaler
+implements Estimator<MinMaxScaler, MinMaxScalerModel>,
MinMaxScalerParams<MinMaxScaler> {
+private final Map<Param<?>, Object> paramMap = new HashMap<>();
+
+public MinMaxScaler() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel fit(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors =
+computeMinMaxVectors(tEnv.toDataStream(inputs[0]),
getFeaturesCol());
+DataStream<MinMaxScalerModelData> modelData =
genModelData(minMaxVectors);
+MinMaxScalerModel model =
+new 
MinMaxScalerModel().setModelData(tEnv.fromDataStream(modelData));
+ReadWriteUtils.updateExistingParams(model, getParamMap());
+return model;
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static MinMaxScaler load(StreamExecutionEnvironment env, String 
path)
+throws IOException {
+return ReadWriteUtils.loadStageParam(path);
+}
+
+/**
+ * Generates minMax scaler model data.
+ *
+ * @param minMaxVectors Input distributed minMaxVectors.
+ * @return MinMax scaler model data.
+ */
+private static DataStream<MinMaxScalerModelData> genModelData(
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors) {
+DataStream<MinMaxScalerModelData> modelData =
+DataStreamUtils.mapPartition(
+minMaxVectors,
+new RichMapPartitionFunction<
+Tuple2<DenseVector, DenseVector>,
MinMaxScalerModelData>() {
+@Override
+public void mapPartition(
+Iterable<Tuple2<DenseVector, DenseVector>>
dataPoints,
+Collector out) {
+DenseVector minVector = null;
+DenseVector maxVector = null;
+int vecSize = 0;
+for (Tuple2<DenseVector, DenseVector>
dataPoint : dataPoints) {
+if (maxVector == null) {
+vecSize = dataPoint.f0.size();
+maxVector = dataPoint.f1;
+minVector = dataPoint.f0;
+}
+for (int i = 0; i < vecSize; ++i) {
+minVector.values[i] =
+Math.min(
+dataPoint.f0.values[i],
+ 
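The quoted diff is cut off by the digest above; the loop it shows maintains running element-wise minima and maxima over one partition of the data. A self-contained sketch of that merge step (plain Java arrays with hypothetical names, outside any Flink API):

```java
public class MinMaxMerge {
    // Folds one data point into running element-wise min/max accumulators.
    static void merge(double[] min, double[] max, double[] point) {
        for (int i = 0; i < point.length; i++) {
            min[i] = Math.min(min[i], point[i]);
            max[i] = Math.max(max[i], point[i]);
        }
    }

    public static void main(String[] args) {
        // Accumulators are initialized from the first data point.
        double[] min = {0.0, 3.0};
        double[] max = {0.0, 3.0};
        merge(min, max, new double[] {2.1, 0.0});
        merge(min, max, new double[] {200.0, 300.0});
        System.out.println(min[1] + " " + max[0]); // element-wise results
    }
}
```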

[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #19014: [FLINK-24586][table-planner] JSON_VALUE should return STRING instead of VARCHAR(2000)

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19014:
URL: https://github.com/apache/flink/pull/19014#issuecomment-1062677742


   
   ## CI report:
   
   * 1019aa38945c228bd2b619ec83b75fe4d495df49 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32716)
 
   * 6b2ac8783cf5d0cff97b8eb1377d8edaf9fac620 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32809)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] LadyForest edited a comment on pull request #19007: [FLINK-26495][table-planner] Prohibit hints(dynamic table options) on view

2022-03-09 Thread GitBox


LadyForest edited a comment on pull request #19007:
URL: https://github.com/apache/flink/pull/19007#issuecomment-1063581072


   Hi, @JingsongLi, do you have time to take a look?






[jira] [Comment Edited] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-03-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504032#comment-17504032
 ] 

Matthias Pohl edited comment on FLINK-25771 at 3/10/22, 6:50 AM:
-

I'm going to reopen this issue because we experienced the same build failure in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32791=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=2e426bf0-b717-56bb-ab62-d63086457354=14591]
 once more in {{release-1.14}}:
{code}
Mar 10 02:44:32 [ERROR] Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 234.114 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Mar 10 02:44:32 [ERROR] testRetrialAndDropTables  Time elapsed: 25.5 s  <<< 
FAILURE!
Mar 10 02:44:32 org.opentest4j.MultipleFailuresError: 
Mar 10 02:44:32 Multiple Failures (2 failures)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:50767 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
[...]
{code}

I verified that the fix was included in this build:
{code}
$ git log --oneline 32c7067f | grep -m1 af019cc5dad
af019cc5dad [FLINK-25771][cassandra][tests] Raise client timeouts
{code}

May you have another look [~echauchot]?


was (Author: mapohl):
I'm going to reopen this issue because we experienced the same build failure in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32791=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=2e426bf0-b717-56bb-ab62-d63086457354=14591]
 once more in {{release-1.14}}:
{code}
Mar 10 02:44:32 [ERROR] Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 234.114 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Mar 10 02:44:32 [ERROR] testRetrialAndDropTables  Time elapsed: 25.5 s  <<< 
FAILURE!
Mar 10 02:44:32 org.opentest4j.MultipleFailuresError: 
Mar 10 02:44:32 Multiple Failures (2 failures)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:50767 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
[...]
{code}

I verified that the fix was included in this build:
{code}
$ git log --oneline 32c7067f | grep -m1 af019cc5dad
af019cc5dad [FLINK-25771][cassandra][tests] Raise client timeouts
{code}

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.7, 1.14.5
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> 

[jira] [Comment Edited] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-03-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504032#comment-17504032
 ] 

Matthias Pohl edited comment on FLINK-25771 at 3/10/22, 6:50 AM:
-

I'm going to reopen this issue because we experienced the same build failure in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32791=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=2e426bf0-b717-56bb-ab62-d63086457354=14591]
 once more in {{release-1.14}}:
{code}
Mar 10 02:44:32 [ERROR] Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 234.114 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Mar 10 02:44:32 [ERROR] testRetrialAndDropTables  Time elapsed: 25.5 s  <<< 
FAILURE!
Mar 10 02:44:32 org.opentest4j.MultipleFailuresError: 
Mar 10 02:44:32 Multiple Failures (2 failures)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:50767 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
[...]
{code}

I verified that the fix was included in this build:
{code}
$ git log --oneline 32c7067f | grep -m1 af019cc5dad
af019cc5dad [FLINK-25771][cassandra][tests] Raise client timeouts
{code}


was (Author: mapohl):
I'm going to reopen this issue because we experienced the same build failure in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32791=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=2e426bf0-b717-56bb-ab62-d63086457354=14591]
 once more in {{release-1.14}}:
{code}
Mar 10 02:44:32 [ERROR] Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 234.114 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Mar 10 02:44:32 [ERROR] testRetrialAndDropTables  Time elapsed: 25.5 s  <<< 
FAILURE!
Mar 10 02:44:32 org.opentest4j.MultipleFailuresError: 
Mar 10 02:44:32 Multiple Failures (2 failures)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:50767 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
[...]
{code}

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.7, 1.14.5
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #19014: [FLINK-24586][table-planner] JSON_VALUE should return STRING instead of VARCHAR(2000)

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19014:
URL: https://github.com/apache/flink/pull/19014#issuecomment-1062677742


   
   ## CI report:
   
   * 1019aa38945c228bd2b619ec83b75fe4d495df49 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32716)
 
   * 6b2ac8783cf5d0cff97b8eb1377d8edaf9fac620 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Reopened] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reopened FLINK-25771:
---

I'm going to reopen this issue because we experienced the same build failure in 
[this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32791=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=2e426bf0-b717-56bb-ab62-d63086457354=14591]
 once more in {{release-1.14}}:
{code}
Mar 10 02:44:32 [ERROR] Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 234.114 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Mar 10 02:44:32 [ERROR] testRetrialAndDropTables  Time elapsed: 25.5 s  <<< 
FAILURE!
Mar 10 02:44:32 org.opentest4j.MultipleFailuresError: 
Mar 10 02:44:32 Multiple Failures (2 failures)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
Mar 10 02:44:32 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (tried: /172.17.0.1:50767 
(com.datastax.driver.core.exceptions.OperationTimedOutException: [/172.17.0.1] 
Timed out waiting for server response))
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
Mar 10 02:44:32 at 
org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
[...]
{code}

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.7, 1.14.5
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Jan 23 01:02:52   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554)
> Jan 23 01:02:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jan 23 01:02:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jan 23 01:02:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jan 23 01:02:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:196)
> Jan 23 01:02:52   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 23 01:02:52   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-26530) Introduce TableStore API and refactor ITCases

2022-03-09 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-26530:
-
Summary: Introduce TableStore API and refactor ITCases  (was: Introduce 
TableStore API)

> Introduce TableStore API and refactor ITCases
> -
>
> Key: FLINK-26530
> URL: https://issues.apache.org/jira/browse/FLINK-26530
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> We need to refactor the FileStoreITCase, and even the Sink interface itself:
> it is a DataStream-layer class that is more complex to build than what a
> simple SQL statement can accomplish.
> We also need to think through what kind of API StoreSink should expose; the
> current handling of keyed data is rather confusing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26570) Remote module configuration interpolation

2022-03-09 Thread Fil Karnicki (Jira)
Fil Karnicki created FLINK-26570:


 Summary: Remote module configuration interpolation
 Key: FLINK-26570
 URL: https://issues.apache.org/jira/browse/FLINK-26570
 Project: Flink
  Issue Type: Improvement
  Components: Stateful Functions
Reporter: Fil Karnicki


Add the ability for users to provide placeholders in module.yaml, e.g.
{code:java}
kind: com.foo.bar/test
spec:
  something: ${REPLACE_ME}
  transport:
password: ${REPLACE_ME_WITH_A_SECRET}
array:
  - ${REPLACE_ME}
  - sthElse {code}
These placeholders would be resolved in 

org.apache.flink.statefun.flink.core.jsonmodule.RemoteModule#bindComponent

using 
{code:java}
ParameterTool.fromSystemProperties().mergeWith(ParameterTool.fromMap(globalConfiguration)
 {code}
by traversing the ComponentJsonObject.specJsonNode() and replacing values that 
contain placeholders with values from the combined system+globalConfig map.
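The interpolation described above amounts to substituting each ${NAME} token with the matching entry from the merged configuration map. A minimal sketch of that substitution (a hypothetical helper, not the actual StateFun implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Interpolator {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces each ${NAME} in the value with config.get(NAME);
    // unknown placeholder names are left intact.
    static String interpolate(String value, Map<String, String> config) {
        Matcher m = PLACEHOLDER.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String replacement = config.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> cfg = Map.of("REPLACE_ME", "value");
        System.out.println(interpolate("something: ${REPLACE_ME}", cfg));
    }
}
```

In the actual feature the map would be the merged system-properties + global-configuration ParameterTool, and the traversal would apply this substitution to every string node of the component spec.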





[GitHub] [flink-table-store] JingsongLi removed a comment on pull request #36: [FLINK-26530] Introduce TableStore API

2022-03-09 Thread GitBox


JingsongLi removed a comment on pull request #36:
URL: https://github.com/apache/flink-table-store/pull/36#issuecomment-1062600574


   TODO: refactor ITCases based on the new API.






[jira] [Commented] (FLINK-26569) testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest) failing

2022-03-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504031#comment-17504031
 ] 

Matthias Pohl commented on FLINK-26569:
---

I had to update the issue: the build failure actually happened on 
{{release-1.13}} instead of {{master}}. I'm sorry for the confusion.

> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>  failing
> ---
>
> Key: FLINK-26569
> URL: https://issues.apache.org/jira/browse/FLINK-26569
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hadoop 
> Compatibility
>Affects Versions: 1.13.6
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> There's a test failure in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32792=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=10107]
>  due to {{HadoopRecoverableWriterTest.testResumeWithWrongOffset}}:
> {code}
> Mar 10 00:46:04 [ERROR] 
> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>   Time elapsed: 100.39 s  <<< FAILURE!
> Mar 10 00:46:04 java.lang.AssertionError
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:86)
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:95)
> Mar 10 00:46:04   at 
> org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
> Mar 10 00:46:04   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 10 00:46:04   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 10 00:46:04   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 10 00:46:04   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 10 00:46:04   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Mar 10 00:46:04   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26569) testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest) failing

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-26569:
--
Summary: 
testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
 failing  (was: 
testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
 failing on master)

> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>  failing
> ---
>
> Key: FLINK-26569
> URL: https://issues.apache.org/jira/browse/FLINK-26569
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hadoop 
> Compatibility
>Affects Versions: 1.13.6
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> There's a test failure in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32792=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=10107]
>  due to {{HadoopRecoverableWriterTest.testResumeWithWrongOffset}}:
> {code}
> Mar 10 00:46:04 [ERROR] 
> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>   Time elapsed: 100.39 s  <<< FAILURE!
> Mar 10 00:46:04 java.lang.AssertionError
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:86)
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:95)
> Mar 10 00:46:04   at 
> org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
> Mar 10 00:46:04   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 10 00:46:04   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 10 00:46:04   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 10 00:46:04   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 10 00:46:04   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Mar 10 00:46:04   at 

[jira] [Updated] (FLINK-26569) testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest) failing on master

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-26569:
--
Fix Version/s: (was: 1.15.0)

> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>  failing on master
> -
>
> Key: FLINK-26569
> URL: https://issues.apache.org/jira/browse/FLINK-26569
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hadoop 
> Compatibility
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> There's a test failure in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32792=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=10107]
>  due to {{HadoopRecoverableWriterTest.testResumeWithWrongOffset}}:
> {code}
> Mar 10 00:46:04 [ERROR] 
> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>   Time elapsed: 100.39 s  <<< FAILURE!
> Mar 10 00:46:04 java.lang.AssertionError
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:86)
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:95)
> Mar 10 00:46:04   at 
> org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
> Mar 10 00:46:04   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 10 00:46:04   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 10 00:46:04   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 10 00:46:04   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 10 00:46:04   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Mar 10 00:46:04   at 
> 

[jira] [Updated] (FLINK-26569) testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest) failing on master

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-26569:
--
Affects Version/s: 1.13.6
   (was: 1.15.0)

> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>  failing on master
> -
>
> Key: FLINK-26569
> URL: https://issues.apache.org/jira/browse/FLINK-26569
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hadoop 
> Compatibility
>Affects Versions: 1.13.6
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> There's a test failure in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32792=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=10107]
>  due to {{HadoopRecoverableWriterTest.testResumeWithWrongOffset}}:
> {code}
> Mar 10 00:46:04 [ERROR] 
> testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
>   Time elapsed: 100.39 s  <<< FAILURE!
> Mar 10 00:46:04 java.lang.AssertionError
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:86)
> Mar 10 00:46:04   at org.junit.Assert.fail(Assert.java:95)
> Mar 10 00:46:04   at 
> org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
> Mar 10 00:46:04   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 10 00:46:04   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 10 00:46:04   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 10 00:46:04   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 10 00:46:04   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 10 00:46:04   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Mar 10 00:46:04   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 10 00:46:04   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 10 00:46:04   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Mar 10 00:46:04   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 10 00:46:04   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Mar 10 00:46:04   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Mar 10 00:46:04   at 
> 

[jira] [Created] (FLINK-26569) testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest) failing on master

2022-03-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-26569:
-

 Summary: 
testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
 failing on master
 Key: FLINK-26569
 URL: https://issues.apache.org/jira/browse/FLINK-26569
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem, Connectors / Hadoop Compatibility
Affects Versions: 1.15.0
Reporter: Matthias Pohl
 Fix For: 1.15.0


There's a test failure in [this 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32792=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=10107]
 due to {{HadoopRecoverableWriterTest.testResumeWithWrongOffset}}:
{code}
Mar 10 00:46:04 [ERROR] 
testResumeWithWrongOffset(org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterTest)
  Time elapsed: 100.39 s  <<< FAILURE!
Mar 10 00:46:04 java.lang.AssertionError
Mar 10 00:46:04 at org.junit.Assert.fail(Assert.java:86)
Mar 10 00:46:04 at org.junit.Assert.fail(Assert.java:95)
Mar 10 00:46:04 at 
org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
Mar 10 00:46:04 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Mar 10 00:46:04 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Mar 10 00:46:04 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 10 00:46:04 at java.lang.reflect.Method.invoke(Method.java:498)
Mar 10 00:46:04 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Mar 10 00:46:04 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Mar 10 00:46:04 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Mar 10 00:46:04 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Mar 10 00:46:04 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Mar 10 00:46:04 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Mar 10 00:46:04 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Mar 10 00:46:04 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Mar 10 00:46:04 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Mar 10 00:46:04 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Mar 10 00:46:04 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Mar 10 00:46:04 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Mar 10 00:46:04 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Mar 10 00:46:04 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
Mar 10 00:46:04 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Mar 10 00:46:04 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Mar 10 00:46:04 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Mar 10 00:46:04 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Mar 10 00:46:04 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Mar 10 00:46:04 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Mar 10 00:46:04 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Mar 10 00:46:04 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Mar 10 00:46:04 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Mar 10 00:46:04 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18980:
URL: https://github.com/apache/flink/pull/18980#issuecomment-1059275933


   
   ## CI report:
   
   * 900b4763d3add6404a15588dacce66be5dbf67bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32770)
 
   * 04c3161485ca865bded44adbf5bb361ead018c97 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32808)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18980:
URL: https://github.com/apache/flink/pull/18980#issuecomment-1059275933


   
   ## CI report:
   
   * 900b4763d3add6404a15588dacce66be5dbf67bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32770)
 
   * 04c3161485ca865bded44adbf5bb361ead018c97 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26568) BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing out on Azure

2022-03-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504026#comment-17504026
 ] 

Matthias Pohl commented on FLINK-26568:
---

It was identified on a branch that only adds logs to an internal component and 
a Precondition that checks whether a directory was deleted by a 3rd party 
(which shouldn't happen in a test). Therefore, my assumption is that it's 
caused by some other issue.

> BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing 
> out on Azure
> -
>
> Key: FLINK-26568
> URL: https://issues.apache.org/jira/browse/FLINK-26568
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Critical
> Fix For: 1.15.0
>
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=845=logs=0a15d512-44ac-5ba5-97ab-13a5d066c22c=9a028d19-6c4b-5a4e-d378-03fca149d0b1=12865]
>  timed out due to the test 
> {{BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle}} not 
> finishing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26568) BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing out on Azure

2022-03-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-26568:
-

 Summary: 
BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle timing 
out on Azure
 Key: FLINK-26568
 URL: https://issues.apache.org/jira/browse/FLINK-26568
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task, Tests
Affects Versions: 1.15.0
Reporter: Matthias Pohl
 Fix For: 1.15.0


[This 
build|https://dev.azure.com/mapohl/flink/_build/results?buildId=845=logs=0a15d512-44ac-5ba5-97ab-13a5d066c22c=9a028d19-6c4b-5a4e-d378-03fca149d0b1=12865]
 timed out due to the test 
{{BlockingShuffleITCase.testDeletePartitionFileOfBoundedBlockingShuffle}} not 
finishing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] ChengkaiYang2022 commented on a change in pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


ChengkaiYang2022 commented on a change in pull request #19023:
URL: https://github.com/apache/flink/pull/19023#discussion_r823384807



##
File path: docs/content.zh/docs/deployment/metric_reporters.md
##
@@ -24,31 +24,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Metric Reporters
+# 指标发送器

Review comment:
   Ok, I will add tags later :)








[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   




[jira] [Assigned] (FLINK-123) Strange (and faulty) Iterations Behaviour

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-123:
---

Assignee: Ufuk Celebi  (was: Matthias Pohl)

> Strange (and faulty) Iterations Behaviour
> -
>
> Key: FLINK-123
> URL: https://issues.apache.org/jira/browse/FLINK-123
> Project: Flink
>  Issue Type: Bug
>Reporter: GitHub Import
>Assignee: Ufuk Celebi
>Priority: Major
>  Labels: github-import
> Fix For: pre-apache
>
>
> In KMeansIterative, when you add an identity mapper after the 
> reduceClusterCenter reducer you get:
> java.lang.RuntimeException: : BUG: The task must have at least one output
>   at 
> eu.stratosphere.pact.runtime.task.RegularPactTask.registerInputOutput(RegularPactTask.java:248)
>   at 
> eu.stratosphere.nephele.execution.RuntimeEnvironment.<init>(RuntimeEnvironment.java:194)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGroupVertex.<init>(ExecutionGroupVertex.java:237)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.createVertex(ExecutionGraph.java:497)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.constructExecutionGraph(ExecutionGraph.java:275)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.<init>(ExecutionGraph.java:176)
>   at 
> eu.stratosphere.nephele.jobmanager.JobManager.submitJob(JobManager.java:532)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at eu.stratosphere.nephele.ipc.RPC$Server.call(RPC.java:417)
>   at eu.stratosphere.nephele.ipc.Server$Handler.run(Server.java:946)
> I discovered this one from a plan in the scala frontend where I also have a 
> final mapper that does some real work after a reducer in an iteration 
> (BulkIteration).
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/123
> Created by: [aljoscha|https://github.com/aljoscha]
> Labels: bug, 
> Milestone: Release 0.4
> Assignee: [StephanEwen|https://github.com/StephanEwen]
> Created at: Thu Oct 03 09:16:36 CEST 2013
> State: closed





[jira] [Assigned] (FLINK-123) Strange (and faulty) Iterations Behaviour

2022-03-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-123:
---

Assignee: Matthias Pohl

> Strange (and faulty) Iterations Behaviour
> -
>
> Key: FLINK-123
> URL: https://issues.apache.org/jira/browse/FLINK-123
> Project: Flink
>  Issue Type: Bug
>Reporter: GitHub Import
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: github-import
> Fix For: pre-apache
>
>
> In KMeansIterative, when you add an identity mapper after the 
> reduceClusterCenter reducer you get:
> java.lang.RuntimeException: : BUG: The task must have at least one output
>   at 
> eu.stratosphere.pact.runtime.task.RegularPactTask.registerInputOutput(RegularPactTask.java:248)
>   at 
> eu.stratosphere.nephele.execution.RuntimeEnvironment.<init>(RuntimeEnvironment.java:194)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGroupVertex.<init>(ExecutionGroupVertex.java:237)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.createVertex(ExecutionGraph.java:497)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.constructExecutionGraph(ExecutionGraph.java:275)
>   at 
> eu.stratosphere.nephele.executiongraph.ExecutionGraph.<init>(ExecutionGraph.java:176)
>   at 
> eu.stratosphere.nephele.jobmanager.JobManager.submitJob(JobManager.java:532)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at eu.stratosphere.nephele.ipc.RPC$Server.call(RPC.java:417)
>   at eu.stratosphere.nephele.ipc.Server$Handler.run(Server.java:946)
> I discovered this one from a plan in the scala frontend where I also have a 
> final mapper that does some real work after a reducer in an iteration 
> (BulkIteration).
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/123
> Created by: [aljoscha|https://github.com/aljoscha]
> Labels: bug, 
> Milestone: Release 0.4
> Assignee: [StephanEwen|https://github.com/StephanEwen]
> Created at: Thu Oct 03 09:16:36 CEST 2013
> State: closed





[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #19033: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19033:
URL: https://github.com/apache/flink/pull/19033#issuecomment-1063699541


   
   ## CI report:
   
   * 7d1dbec66de54a97fb4a1a9a9304806bbe4da9af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32807)
 
   
   





[GitHub] [flink] flinkbot commented on pull request #19033: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator

2022-03-09 Thread GitBox


flinkbot commented on pull request #19033:
URL: https://github.com/apache/flink/pull/19033#issuecomment-1063699541


   
   ## CI report:
   
   * 7d1dbec66de54a97fb4a1a9a9304806bbe4da9af UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * a36182858952a61ef43ccbdc184440354bb86110 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32721)
 
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   * 0ee22778504c1bf11c25b5e9df0a8b5fc88a83cc UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 3bf4e3c4e401ee8283f964089700c2b265643c23 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32794)
 
   
   




[jira] [Updated] (FLINK-26567) FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26567:
---
Labels: pull-request-available  (was: )

> FileStoreSourceSplitReader should deal with value count
> ---
>
> Key: FLINK-26567
> URL: https://issues.apache.org/jira/browse/FLINK-26567
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> There is a keyAsRecord in FileStoreSourceSplitReader, but this should only be 
> keyAsRecord.
> When keyAsRecord, it should emit the same number of records as value count.





[GitHub] [flink-table-store] JingsongLi opened a new pull request #40: [FLINK-26567] FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread GitBox


JingsongLi opened a new pull request #40:
URL: https://github.com/apache/flink-table-store/pull/40


   There is a keyAsRecord in FileStoreSourceSplitReader, but this should only 
be keyAsRecord.
   
   When keyAsRecord, it should emit the same number of records as value count.
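   As an illustration only (not part of this pull request; the `emit` helper and the map-based model of (key, value count) pairs below are made up for the example), the required expansion looks like:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative sketch: expand (key -> value count) pairs into repeated records. */
public class ValueCountEmitSketch {

    // Hypothetical stand-in for the reader's emit step: in key-as-record mode,
    // each key must be emitted once per counted value, not just once overall.
    static List<String> emit(Map<String, Long> keyToValueCount) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Long> e : keyToValueCount.entrySet()) {
            for (long i = 0; i < e.getValue(); i++) {
                out.add(e.getKey()); // one record per counted value
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("a", 2L);
        counts.put("b", 1L);
        System.out.println(emit(counts)); // prints [a, a, b]
    }
}
```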






[jira] [Created] (FLINK-26567) FileStoreSourceSplitReader should deal with value count

2022-03-09 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-26567:


 Summary: FileStoreSourceSplitReader should deal with value count
 Key: FLINK-26567
 URL: https://issues.apache.org/jira/browse/FLINK-26567
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: table-store-0.1.0


There is a keyAsRecord in FileStoreSourceSplitReader, but this should only be 
keyAsRecord.

When keyAsRecord, it should emit the same number of records as value count.





[GitHub] [flink] fredia opened a new pull request #19033: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator

2022-03-09 Thread GitBox


fredia opened a new pull request #19033:
URL: https://github.com/apache/flink/pull/19033


   
   ## What is the purpose of the change
   
   Since the FRocksDB version was upgraded to 6.20.3 in 1.14,  `deleteRange()` 
is now usable in production.
   
   This change was previously worked on by [@lgo](https://github.com/lgo) in 
https://github.com/apache/flink/pull/14893 and highlighted by 
[@sihuazhou](https://github.com/sihuazhou) in 
[FLINK-8790](https://issues.apache.org/jira/browse/FLINK-8790). Because 
`deleteRange()` was marked as an experimental feature at the time, those 
earlier efforts were suspended.
   
   This PR continues the above work to improve the performance of RocksDB 
state backend incremental rescaling, by replacing **delete-key-by-key** with 
`deleteRange()` when clipping the base RocksDB instance.
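   For intuition only (not part of this PR's diff), here is a minimal sketch of the difference. It simulates the sorted key space with a `TreeMap` instead of RocksDB; the real change invokes RocksDB's native `deleteRange()` on the clipped key-group ranges, and the helper name below is made up.

```java
import java.util.TreeMap;

/** Sketch: clip a sorted key space with one range delete instead of key-by-key deletes. */
public class RangeDeleteSketch {

    // Mirrors the semantics of a range delete over [begin, end): on a TreeMap
    // this is a single view-backed subMap().clear() rather than iterating the
    // range and deleting every key individually.
    static void deleteRange(TreeMap<Integer, String> store, int begin, int end) {
        store.subMap(begin, true, end, false).clear();
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> store = new TreeMap<>();
        for (int k = 0; k < 10; k++) {
            store.put(k, "value-" + k);
        }
        // Keep only the new key-group range [3, 7): clip both sides.
        deleteRange(store, 0, 3);
        deleteRange(store, 7, 10);
        System.out.println(store.keySet()); // prints [3, 4, 5, 6]
    }
}
```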
   
   ## Brief change log
   
 - Replace **delete-key-by-key** with `deleteRange()` when clipping the base 
RocksDB instance during incremental rescaling.
 -  Add ITCases for rescaling from checkpoints.
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 -  Add ITCases for scaling in/out from checkpoints.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
**don't know**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (**yes** / no / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   






[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * 40b70a9b1e8479f25814ecf01e978ea011eb1021 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32797)
 
   * 91a5a156671f5b97479e9b284678912ace3ac51d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32806)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * ecd80ec7a11dae690b258377aab2428872a0abdb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32664)
 
   * 40b70a9b1e8479f25814ecf01e978ea011eb1021 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32797)
 
   * 91a5a156671f5b97479e9b284678912ace3ac51d UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * a36182858952a61ef43ccbdc184440354bb86110 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32721)
 
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 UNKNOWN
   
   




[GitHub] [flink-ml] zhipeng93 commented on a change in pull request #54: [FLINK-25552] Add Estimator and Transformer for MinMaxScaler in FlinkML

2022-03-09 Thread GitBox


zhipeng93 commented on a change in pull request #54:
URL: https://github.com/apache/flink-ml/pull/54#discussion_r823332320



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.minmaxscaler;
+
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** An Estimator which implements the MinMaxScaler algorithm. */
+public class MinMaxScaler
+implements Estimator<MinMaxScaler, MinMaxScalerModel>, 
MinMaxScalerParams<MinMaxScaler> {
+private final Map<Param<?>, Object> paramMap = new HashMap<>();
+
+public MinMaxScaler() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public MinMaxScalerModel fit(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors =
+computeMinMaxVectors(tEnv.toDataStream(inputs[0]), 
getFeaturesCol());
+DataStream<MinMaxScalerModelData> modelData = 
genModelData(minMaxVectors);
+MinMaxScalerModel model =
+new 
MinMaxScalerModel().setModelData(tEnv.fromDataStream(modelData));
+ReadWriteUtils.updateExistingParams(model, getParamMap());
+return model;
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static MinMaxScaler load(StreamExecutionEnvironment env, String 
path)
+throws IOException {
+return ReadWriteUtils.loadStageParam(path);
+}
+
+/**
+ * Generates minMax scaler model data.
+ *
+ * @param minMaxVectors Input distributed minMaxVectors.
+ * @return MinMax scaler model data.
+ */
+private static DataStream<MinMaxScalerModelData> genModelData(
+DataStream<Tuple2<DenseVector, DenseVector>> minMaxVectors) {
+DataStream<MinMaxScalerModelData> modelData =
+DataStreamUtils.mapPartition(
+minMaxVectors,
+new RichMapPartitionFunction<
+Tuple2<DenseVector, DenseVector>, 
MinMaxScalerModelData>() {
+@Override
+public void mapPartition(
+Iterable<Tuple2<DenseVector, DenseVector>> 
dataPoints,
+Collector<MinMaxScalerModelData> out) {
+DenseVector minVector = null;
+DenseVector maxVector = null;
+int vecSize = 0;
+for (Tuple2<DenseVector, DenseVector> 
dataPoint : dataPoints) {
+if (maxVector == null) {
+vecSize = dataPoint.f0.size();
+maxVector = dataPoint.f1;
+minVector = dataPoint.f0;
+}
+for (int i = 0; i < vecSize; ++i) {

Review comment:
   nits: vecSize could be replaced with `maxVector.size()`

##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/minmaxscaler/MinMaxScaler.java
##
@@ -0,0 

[jira] [Commented] (FLINK-26529) PyFlink 'tuple' object has no attribute '_values'

2022-03-09 Thread James Schulte (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17503995#comment-17503995
 ] 

James Schulte commented on FLINK-26529:
---

I believe this is related to this pull request - restructuring the coders: 
[https://github.com/apache/flink/pull/15877]

 

[~hxbks2ks] - are you able to provide any insight? 

> PyFlink 'tuple' object has no attribute '_values'
> -
>
> Key: FLINK-26529
> URL: https://issues.apache.org/jira/browse/FLINK-26529
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.3
> Environment: JAVA_VERSION=8
> SCALA_VERSION=2.12
> FLINK_VERSION=1.14.3
> PYTHON_VERSION=3.7.9
>  
> Running in Kubernetes using spotify/flink-on-kubernetes-operator
>Reporter: James Schulte
>Priority: Major
> Attachments: flink_operators.py, main.py
>
>
>  
> {code:java}
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: Error received from SDK harness for instruction 
> 4: Traceback (most recent call last):  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 289, in _executeresponse = task()  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 362, in lambda: 
> self.create_worker().do_instruction(request), request)  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 607, in do_instructiongetattr(request, request_type), 
> request.instruction_id)  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 644, in process_bundle
> bundle_processor.process_bundle(instruction_id))  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 1000, in process_bundleelement.data)  File 
> "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 228, in process_encodedself.output(decoded_value)  File 
> "apache_beam/runners/worker/operations.py", line 357, in 
> apache_beam.runners.worker.operations.Operation.output  File 
> "apache_beam/runners/worker/operations.py", line 359, in 
> apache_beam.runners.worker.operations.Operation.output  File 
> "apache_beam/runners/worker/operations.py", line 221, in 
> apache_beam.runners.worker.operations.SingletonConsumerSet.receive  File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 158, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  
> File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 174, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  
> File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 104, in 
> pyflink.fn_execution.beam.beam_operations_fast.IntermediateOutputProcessor.process_outputs
>   File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 158, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  
> File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 174, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  
> File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 92, in 
> pyflink.fn_execution.beam.beam_operations_fast.NetworkOutputProcessor.process_outputs
>   File "pyflink/fn_execution/beam/beam_coder_impl_fast.pyx", line 101, in 
> pyflink.fn_execution.beam.beam_coder_impl_fast.FlinkLengthPrefixCoderBeamWrapper.encode_to_stream
>   File "pyflink/fn_execution/coder_impl_fast.pyx", line 271, in 
> pyflink.fn_execution.coder_impl_fast.IterableCoderImpl.encode_to_stream  File 
> "pyflink/fn_execution/coder_impl_fast.pyx", line 399, in 
> pyflink.fn_execution.coder_impl_fast.RowCoderImpl.encode_to_stream  File 
> "pyflink/fn_execution/coder_impl_fast.pyx", line 389, in 
> pyflink.fn_execution.coder_impl_fast.RowCoderImpl.encode_to_streamAttributeError:
>  'tuple' object has no attribute '_values'
>  {code}
> Received this error after upgrading from Flink 1.13.1 -> 1.14.3 - no other 
> changes
>  
> I've reviewed the release notes - can't see anything highlighting why this 
> might be the case.
>  
>  
>  





[GitHub] [flink] RocMarshal commented on a change in pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax

2022-03-09 Thread GitBox


RocMarshal commented on a change in pull request #18386:
URL: https://github.com/apache/flink/pull/18386#discussion_r823349610



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
##
@@ -505,10 +511,10 @@ void createTemporarySystemFunction(
  * (implicitly or explicitly) identified by a catalog and database.
  *
  * @param path The path under which the function will be registered. See 
also the {@link
- * TableEnvironment} class description for the format of the path.
+ * TableEnvironment} class description for the format of the path.
  * @param functionClass The function class containing the implementation.
  * @param ignoreIfExists If a function exists under the given path and 
this flag is set, no
- * operation is executed. An exception is thrown otherwise.
+ * operation is executed. An exception is thrown otherwise.

Review comment:
   A little confusion: was this change caused by the `mvn spotless:apply` 
command?

##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala
##
@@ -656,13 +656,48 @@ class TableEnvironmentTest {
   def testExecuteSqlWithShowDatabases(): Unit = {
 val tableResult1 = tableEnv.executeSql("CREATE DATABASE db1 COMMENT 
'db1_comment'")
 assertEquals(ResultKind.SUCCESS, tableResult1.getResultKind)
-val tableResult2 = tableEnv.executeSql("SHOW DATABASES")
+val tableResult2 = tableEnv.executeSql("CREATE DATABASE db2 COMMENT 
'db2_comment'")
+assertEquals(ResultKind.SUCCESS, tableResult2.getResultKind)

Review comment:
   It's recommended to use `assertThat` from the AssertJ package. If you 
agree, please keep this consistent across the rest of the related changed lines.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-26330) Test Adaptive Batch Scheduler manually

2022-03-09 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang resolved FLINK-26330.

Resolution: Done

> Test Adaptive Batch Scheduler manually
> --
>
> Key: FLINK-26330
> URL: https://issues.apache.org/jira/browse/FLINK-26330
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Lijie Wang
>Assignee: Niklas Semmler
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation: [https://github.com/apache/flink/pull/18757]
> Run DataStream / SQL batch jobs with the Adaptive Batch Scheduler and verify:
> 1. Whether the automatically decided parallelism is correct
> 2. Whether the job result is correct
>  
> *For example:*
> {code:java}
> final Configuration configuration = new Configuration();
> configuration.set(
> JobManagerOptions.SCHEDULER, 
> JobManagerOptions.SchedulerType.AdaptiveBatch);
> configuration.setInteger(
> JobManagerOptions.ADAPTIVE_BATCH_SCHEDULER_MAX_PARALLELISM, 4);
> configuration.set(
> JobManagerOptions.ADAPTIVE_BATCH_SCHEDULER_DATA_VOLUME_PER_TASK,
> MemorySize.parse("8kb"));
> configuration.setInteger("parallelism.default", -1);
> final StreamExecutionEnvironment env =
> StreamExecutionEnvironment.createLocalEnvironment(configuration);
> env.setRuntimeMode(RuntimeExecutionMode.BATCH);
> env.fromSequence(0, 1000).setParallelism(1)
> .keyBy(num -> num % 10)
> .sum(0)
> .addSink(new PrintSinkFunction<>());
> env.execute(); {code}
> You can run above job and check:
>  
> 1. The parallelism of "Keyed Aggregation -> Sink: Unnamed" should be 3. 
> The JobManager logs should show the following:
> {code:java}
> Parallelism of JobVertex: Keyed Aggregation -> Sink: Unnamed 
> (20ba6b65f97481d5570070de90e4e791) is decided to be 3. {code}
> 2. The job result should be:
> {code:java}
> 50500
> 49600
> 49700
> 49800
> 49900
> 50000
> 50100
> 50200
> 50300
> 50400 {code}
>  
> You can change the amount of data produced by the source and the config options 
> of the adaptive batch scheduler as you wish.
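
The expected per-key sums quoted above can be double-checked outside of Flink 
with a few lines of plain Java that mirror the `keyBy(num -> num % 10).sum(0)` 
pipeline (an illustrative sketch, not part of the issue itself):

```java
import java.util.Map;
import java.util.TreeMap;

public class VerifyExpectedSums {
    public static void main(String[] args) {
        // Group 0..1000 by key (num % 10) and sum each group, mirroring
        // keyBy(num -> num % 10).sum(0) from the example job above.
        Map<Long, Long> sums = new TreeMap<>();
        for (long num = 0; num <= 1000; num++) {
            sums.merge(num % 10, num, Long::sum);
        }
        // Printing in key order 0..9 yields:
        // 50500, 49600, 49700, 49800, 49900, 50000, 50100, 50200, 50300, 50400
        sums.values().forEach(System.out::println);
    }
}
```

In particular, the sum for key 5 works out to 50000, the value between 49900 
and 50100 in the expected result.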



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26562) Introduce table.path option for FileStoreOptions

2022-03-09 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-26562.

Fix Version/s: table-store-0.1.0
   Resolution: Fixed

master: 36c365d48a6d5a7f009b1f15d9f27565f29bf843

> Introduce table.path option for FileStoreOptions
> 
>
> Key: FLINK-26562
> URL: https://issues.apache.org/jira/browse/FLINK-26562
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Currently, {{FileStoreOptions}} only has the {{FILE_PATH}} option as the 
> table store root dir; we should have another {{TABLE_PATH}} for the generated 
> {{sst/manifest/snapshot}} directories per table.
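
A hypothetical sketch of how such an option could be declared with Flink's 
`ConfigOptions` builder; only the `table.path` key comes from the issue title, 
while the class placement and description text are assumptions:

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class FileStoreOptionsSketch {
    // Assumed shape of the new option; the real FileStoreOptions class
    // may declare it differently.
    public static final ConfigOption<String> TABLE_PATH =
            ConfigOptions.key("table.path")
                    .stringType()
                    .noDefaultValue()
                    .withDescription(
                            "Per-table root directory for the generated "
                                    + "sst/manifest/snapshot files.");
}
```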



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26330) Test Adaptive Batch Scheduler manually

2022-03-09 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang closed FLINK-26330.
--

> Test Adaptive Batch Scheduler manually
> --
>
> Key: FLINK-26330
> URL: https://issues.apache.org/jira/browse/FLINK-26330
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Lijie Wang
>Assignee: Niklas Semmler
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation: [https://github.com/apache/flink/pull/18757]
> Run DataStream / SQL batch jobs with the Adaptive Batch Scheduler and verify:
> 1. Whether the automatically decided parallelism is correct
> 2. Whether the job result is correct
>  
> *For example:*
> {code:java}
> final Configuration configuration = new Configuration();
> configuration.set(
> JobManagerOptions.SCHEDULER, 
> JobManagerOptions.SchedulerType.AdaptiveBatch);
> configuration.setInteger(
> JobManagerOptions.ADAPTIVE_BATCH_SCHEDULER_MAX_PARALLELISM, 4);
> configuration.set(
> JobManagerOptions.ADAPTIVE_BATCH_SCHEDULER_DATA_VOLUME_PER_TASK,
> MemorySize.parse("8kb"));
> configuration.setInteger("parallelism.default", -1);
> final StreamExecutionEnvironment env =
> StreamExecutionEnvironment.createLocalEnvironment(configuration);
> env.setRuntimeMode(RuntimeExecutionMode.BATCH);
> env.fromSequence(0, 1000).setParallelism(1)
> .keyBy(num -> num % 10)
> .sum(0)
> .addSink(new PrintSinkFunction<>());
> env.execute(); {code}
> You can run above job and check:
>  
> 1. The parallelism of "Keyed Aggregation -> Sink: Unnamed" should be 3. 
> The JobManager logs should show the following:
> {code:java}
> Parallelism of JobVertex: Keyed Aggregation -> Sink: Unnamed 
> (20ba6b65f97481d5570070de90e4e791) is decided to be 3. {code}
> 2. The job result should be:
> {code:java}
> 50500
> 49600
> 49700
> 49800
> 49900
> 50000
> 50100
> 50200
> 50300
> 50400 {code}
>  
> You can change the amount of data produced by the source and the config options 
> of the adaptive batch scheduler as you wish.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] lindong28 commented on pull request #68: [FLINK-26404] support non-local file systems

2022-03-09 Thread GitBox


lindong28 commented on pull request #68:
URL: https://github.com/apache/flink-ml/pull/68#issuecomment-1063668484


   Thanks for the update! LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19031: Update joining.md

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19031:
URL: https://github.com/apache/flink/pull/19031#issuecomment-1063550945


   
   ## CI report:
   
   * 1eda3a65fbb575bfbd493fb507102f46d6467ad6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32795)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


RocMarshal commented on a change in pull request #19023:
URL: https://github.com/apache/flink/pull/19023#discussion_r823354258



##
File path: docs/content.zh/docs/deployment/metric_reporters.md
##
@@ -24,31 +24,30 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Metric Reporters
+# 指标发送器

Review comment:
   ```suggestion
   
   
   # 指标发送器
   ```
   
   Please add the related tags to the rest of the section titles on this page.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * a36182858952a61ef43ccbdc184440354bb86110 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32721)
 
   * d9ffed1defa7514c27d3ba9a9e3008de8b84bc68 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi merged pull request #39: [FLINK-26562] Introduce table.path option for FileStoreOptions

2022-03-09 Thread GitBox


JingsongLi merged pull request #39:
URL: https://github.com/apache/flink-table-store/pull/39


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * a36182858952a61ef43ccbdc184440354bb86110 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32721)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 merged pull request #68: [FLINK-26404] support non-local file systems

2022-03-09 Thread GitBox


lindong28 merged pull request #68:
URL: https://github.com/apache/flink-ml/pull/68


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



