[jira] [Commented] (FLINK-22584) Use protobuf-shaded in StateFun core.
[ https://issues.apache.org/jira/browse/FLINK-22584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17528183#comment-17528183 ] Mans Singh commented on FLINK-22584: [~igal] - I am getting the error mentioned [here|https://lists.apache.org/thread/tcbx58oqn7kw32kl2q4cskoojwn7yrfx] (java.lang.NoSuchMethodError: org.apache.flink.statefun.sdk.reqreply.generated.TypedValue$Builder.setValue(Lcom/google/protobuf/ByteString;)Lorg/apache/flink/statefun/sdk/reqreply/generated/TypedValue$Builder) and need to use the StateFun Java SDK (v3.2.0). Can you please advise how to use the workaround code segment using ByteBuddy that you mentioned above? Thanks > Use protobuf-shaded in StateFun core. > - > > Key: FLINK-22584 > URL: https://issues.apache.org/jira/browse/FLINK-22584 > Project: Flink > Issue Type: Improvement > Components: Stateful Functions > Reporter: Igal Shilman > Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, developer-experience > > We have the *statefun-protobuf-shaded* module, which was introduced for the remote Java SDK. > We can use it to shade protobuf internally, to reduce the dependency surface. > The major hurdle we need to overcome is that, in embedded functions, we have to be able to accept instances of protobuf-generated messages from the user. > For example: > {code:java} > UserProfile userProfile = UserProfile.newBuilder().build(); > context.send(..., userProfile) {code} > If we simply use the shaded Protobuf version, we will immediately get a class cast exception. > One way to overcome this is to use reflection to find the well-known methods on the generated classes and call toBytes() / parseFrom() reflectively. > This, however, will cause a significant slowdown, even when using MethodHandles. 
> A small experiment that I've previously done with ByteBuddy mitigates this, by generating accessors in pre-flight:
> {code:java}
> package org.apache.flink.statefun.flink.common.protobuf.serde;
>
> import static net.bytebuddy.matcher.ElementMatchers.named;
>
> import java.io.InputStream;
> import java.io.OutputStream;
> import java.lang.reflect.InvocationTargetException;
> import java.lang.reflect.Method;
> import net.bytebuddy.ByteBuddy;
> import net.bytebuddy.dynamic.DynamicType;
> import net.bytebuddy.implementation.FixedValue;
> import net.bytebuddy.implementation.MethodCall;
> import net.bytebuddy.implementation.bytecode.assign.Assigner;
>
> final class ReflectiveProtobufSerde {
>
>   @SuppressWarnings({"unchecked", "rawtypes"})
>   static ProtobufSerde ofProtobufGeneratedType(Class type) {
>     try {
>       DynamicType.Unloaded unloaded = configureByteBuddy(type);
>       Class writer = unloaded.load(type.getClassLoader()).getLoaded();
>       return (ProtobufSerde) writer.getDeclaredConstructor().newInstance();
>     } catch (Throwable e) {
>       // keep the original failure as the cause instead of swallowing it
>       throw new IllegalArgumentException(e);
>     }
>   }
>
>   @SuppressWarnings("rawtypes")
>   private static DynamicType.Unloaded configureByteBuddy(Class type)
>       throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
>     Method writeToMethod = type.getMethod("writeTo", OutputStream.class);
>     Method parseFromMethod = type.getMethod("parseFrom", InputStream.class);
>     Method getSerializedSizeMethod = type.getMethod("getSerializedSize");
>     // get the message full name
>     Method getDescriptorMethod = type.getMethod("getDescriptor");
>     Object descriptor = getDescriptorMethod.invoke(null);
>     Method getFullNameMethod = descriptor.getClass().getMethod("getFullName");
>     String messageFullName = (String) getFullNameMethod.invoke(descriptor);
>     return new ByteBuddy()
>         .subclass(ProtobufSerde.class)
>         .typeVariable("M", type)
>         .method(named("writeTo"))
>         .intercept(
>             MethodCall.invoke(writeToMethod)
>                 .onArgument(0)
>                 .withArgument(1)
>                 .withAssigner(Assigner.DEFAULT, Assigner.Typing.DYNAMIC))
>         .method(named("parseFrom"))
>         .intercept(MethodCall.invoke(parseFromMethod).withArgument(0))
>         .method(named("getSerializedSize"))
>         .intercept(
>             MethodCall.invoke(getSerializedSizeMethod)
>                 .onArgument(0)
>                 .withAssigner(Assigner.DEFAULT, Assigner.Typing.DYNAMIC))
>         .method(named("getMessageFullName"))
>         .intercept(FixedValue.value(messageFullName))
>         .make();
>   }
> }
> {code}
> -- This message was sent by Atlassian Jira (v8.20.7#820007)
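Editorial aside: the plain-reflection alternative that the description mentions (resolve the well-known `parseFrom` / `writeTo` methods once, then invoke them via MethodHandles) can be sketched in self-contained Java. `FakeMessage` below is a hypothetical stand-in for a protobuf-generated class, since only the method shape matters for the sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Stand-in for a protobuf-generated message: it exposes the same well-known
// static parseFrom(InputStream) / instance writeTo(OutputStream) pair that
// the reflective approach relies on.
class FakeMessage {
    final byte[] payload;
    FakeMessage(byte[] payload) { this.payload = payload; }
    public static FakeMessage parseFrom(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) buf.write(b);
        return new FakeMessage(buf.toByteArray());
    }
    public void writeTo(OutputStream out) throws IOException { out.write(payload); }
}

public class ReflectiveSerdeSketch {
    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // Resolve the well-known methods once, up front ("pre-flight").
        MethodHandle parseFrom = lookup.findStatic(FakeMessage.class, "parseFrom",
            MethodType.methodType(FakeMessage.class, InputStream.class));
        MethodHandle writeTo = lookup.findVirtual(FakeMessage.class, "writeTo",
            MethodType.methodType(void.class, OutputStream.class));

        // Round-trip a message through the handles, without a compile-time
        // dependency on the concrete (shaded or unshaded) message type.
        FakeMessage original = new FakeMessage(new byte[] {1, 2, 3});
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeTo.invoke(original, out);
        FakeMessage copy = (FakeMessage) parseFrom.invoke(
            new ByteArrayInputStream(out.toByteArray()));
        System.out.println(copy.payload.length);
    }
}
```

Even with MethodHandles, each call still pays an invocation overhead compared to a direct call, which is the slowdown the description cites and the ByteBuddy-generated accessors avoid.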
[jira] [Updated] (FLINK-27210) StateFun example link on datastream integration page is broken
[ https://issues.apache.org/jira/browse/FLINK-27210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-27210: --- Attachment: Screen Shot 2022-04-12 at 2.44.51 PM.png > StateFun example link on datastream integration page is broken > -- > > Key: FLINK-27210 > URL: https://issues.apache.org/jira/browse/FLINK-27210 > Project: Flink > Issue Type: Improvement > Components: Documentation, Stateful Functions > Affects Versions: statefun-3.2.0 > Reporter: Mans Singh > Priority: Minor > Labels: doc, stateful > Attachments: Screen Shot 2022-04-12 at 2.44.51 PM.png > > > The [example link|https://github.com/apache/flink-statefun/blob/master/statefun-examples/statefun-flink-datastream-example/src/main/java/org/apache/flink/statefun/examples/datastream/Example.java] on the page [statefun-datastream|https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/sdk/flink-datastream/#sdk-overview] is broken. > Also, please let me know if the example code is available anywhere. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-27210) StateFun example link on datastream integration page is broken
Mans Singh created FLINK-27210: -- Summary: StateFun example link on datastream integration page is broken Key: FLINK-27210 URL: https://issues.apache.org/jira/browse/FLINK-27210 Project: Flink Issue Type: Improvement Components: Documentation, Stateful Functions Affects Versions: statefun-3.2.0 Reporter: Mans Singh The [example link|https://github.com/apache/flink-statefun/blob/master/statefun-examples/statefun-flink-datastream-example/src/main/java/org/apache/flink/statefun/examples/datastream/Example.java] on the page [statefun-datastream|https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/sdk/flink-datastream/#sdk-overview] is broken. Also, please let me know if the example code is available anywhere. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26099) Table connector proctime attributes has syntax error
[ https://issues.apache.org/jira/browse/FLINK-26099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491721#comment-17491721 ] Mans Singh commented on FLINK-26099: Please assign this issue to me. Thanks > Table connector proctime attributes has syntax error > > > Key: FLINK-26099 > URL: https://issues.apache.org/jira/browse/FLINK-26099 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API > Affects Versions: 1.14.3 > Environment: All > Reporter: Mans Singh > Priority: Minor > Labels: docuentation, table > Fix For: 1.15.0 > > > The example for proctime attributes has a syntax error (missing comma after the 3rd column) [table proctime|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/overview/#proctime-attributes]: > > {noformat} > CREATE TABLE MyTable ( > MyField1 INT, > MyField2 STRING, > MyField3 BOOLEAN > MyField4 AS PROCTIME() -- declares a proctime attribute > ) WITH (...){noformat} > > > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26099) Table connector proctime attributes has syntax error
Mans Singh created FLINK-26099: -- Summary: Table connector proctime attributes has syntax error Key: FLINK-26099 URL: https://issues.apache.org/jira/browse/FLINK-26099 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.14.3 Environment: All Reporter: Mans Singh Fix For: 1.15.0 The example for proctime attributes has a syntax error (missing comma after the 3rd column) [table proctime|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/overview/#proctime-attributes]: {noformat} CREATE TABLE MyTable ( MyField1 INT, MyField2 STRING, MyField3 BOOLEAN MyField4 AS PROCTIME() -- declares a proctime attribute ) WITH (...){noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
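Editorial aside: the corrected DDL would presumably read as follows. The only change is the comma after the third column; the `WITH` clause is elided in the docs and stays elided here:

```sql
CREATE TABLE MyTable (
  MyField1 INT,
  MyField2 STRING,
  MyField3 BOOLEAN,        -- the previously missing comma
  MyField4 AS PROCTIME()   -- declares a proctime attribute
) WITH (...)
```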
[jira] [Commented] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5
[ https://issues.apache.org/jira/browse/FLINK-25900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17485004#comment-17485004 ] Mans Singh commented on FLINK-25900: Hi: Please assign this issue to me. Thanks > Create view example does not assign alias to functions resulting in generated > names like EXPR$5 > --- > > Key: FLINK-25900 > URL: https://issues.apache.org/jira/browse/FLINK-25900 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.14.3 >Reporter: Mans Singh >Priority: Minor > Fix For: 1.15.0 > > > The create view example query: > {noformat} > Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, > CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), > NOW(), PROCTIME(); > {noformat} > produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() > (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown > below: > {code:java} > Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, > CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), > NOW(), PROCTIME(); > > > Flink SQL> describe MyView1; > +---+-+---+-++---+ > | name | type | null | key | extras | > watermark | > +---+-+---+-++---+ > | LOCALTIME | TIME(0) | FALSE | | | > | > | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | > | > | CURRENT_DATE | DATE | FALSE | | | > | > | CURRENT_TIME | TIME(0) | FALSE | | | > | > | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | > | > | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | > | > | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | > | > | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | > | > +---+-+---+-++---+ > 8 rows in set > > {code} > > The documentation shows aliased names > [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value] > {code:java} > ++-+---+-++---+ > | name |type | null | key | extras > | watermark | > 
++-+---+-++---+ > | LOCALTIME | TIME(0) | false | | > | | > | LOCALTIMESTAMP |TIMESTAMP(3) | false | | > | | > | CURRENT_DATE |DATE | false | | > | | > | CURRENT_TIME | TIME(0) | false | | > | | > | CURRENT_TIMESTAMP |TIMESTAMP_LTZ(3) | false | | > | | > |CURRENT_ROW_TIMESTAMP() |TIMESTAMP_LTZ(3) | false | | > | | > | NOW() |TIMESTAMP_LTZ(3) | false | | > | | > | PROCTIME() | TIMESTAMP_LTZ(3) *PROCTIME* | false | | > | | > ++-+---+-++---+ > {code} > > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5
[ https://issues.apache.org/jira/browse/FLINK-25900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-25900: --- Description: The create view example query: {noformat} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); {noformat} produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown below: {code:java} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); > Flink SQL> describe MyView1; +---+-+---+-++---+ | name | type | null | key | extras | watermark | +---+-+---+-++---+ | LOCALTIME | TIME(0) | FALSE | | | | | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | | | CURRENT_DATE | DATE | FALSE | | | | | CURRENT_TIME | TIME(0) | FALSE | | | | | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | | +---+-+---+-++---+ 8 rows in set {code} The documentation shows aliased names [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value] {code:java} ++-+---+-++---+ | name |type | null | key | extras | watermark | ++-+---+-++---+ | LOCALTIME | TIME(0) | false | || | | LOCALTIMESTAMP |TIMESTAMP(3) | false | || | | CURRENT_DATE |DATE | false | || | | CURRENT_TIME | TIME(0) | false | || | | CURRENT_TIMESTAMP |TIMESTAMP_LTZ(3) | false | || | |CURRENT_ROW_TIMESTAMP() |TIMESTAMP_LTZ(3) | false | || | | NOW() |TIMESTAMP_LTZ(3) | false | || | | PROCTIME() | TIMESTAMP_LTZ(3) *PROCTIME* | false | || | ++-+---+-++---+ {code} was: The create view example query: {noformat} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, 
CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); {noformat} produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown below: {code:java} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); > Flink SQL> describe MyView1; +---+-+---+-++---+ | name | type | null | key | extras | watermark | +---+-+---+-++---+ | LOCALTIME | TIME(0) | FALSE | | | | | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | | | CURRENT_DATE | DATE | FALSE | | | | | CURRENT_TIME | TIME(0) | FALSE | | | | | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | | +---+-+---+-++---+ 8 rows in set {code} The documentation shows aliased names
[jira] [Updated] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5
[ https://issues.apache.org/jira/browse/FLINK-25900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-25900: --- Description: The create view example query: {noformat} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); {noformat} produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown below: {code:java} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); > Flink SQL> describe MyView1; +---+-+---+-++---+ | name | type | null | key | extras | watermark | +---+-+---+-++---+ | LOCALTIME | TIME(0) | FALSE | | | | | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | | | CURRENT_DATE | DATE | FALSE | | | | | CURRENT_TIME | TIME(0) | FALSE | | | | | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | | +---+-+---+-++---+ 8 rows in set {code} The documentation shows aliased names [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value] {code:java} ++-+---+-++---+ | name |type | null | key | extras | watermark | ++-+---+-++---+ | LOCALTIME | TIME(0) | false | || | | LOCALTIMESTAMP |TIMESTAMP(3) | false | || | | CURRENT_DATE |DATE | false | || | | CURRENT_TIME | TIME(0) | false | || | | CURRENT_TIMESTAMP |TIMESTAMP_LTZ(3) | false | || | |CURRENT_ROW_TIMESTAMP() |TIMESTAMP_LTZ(3) | false | || | | NOW() |TIMESTAMP_LTZ(3) | false | || | | PROCTIME() | TIMESTAMP_LTZ(3) *PROCTIME* | false | || | ++-+---+-++---+ {code} was: The create view example query: {noformat} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, 
CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); {noformat} produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown below: {code:java} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); > Flink SQL> describe MyView1; +---+-+---+-++---+ | name | type | null | key | extras | watermark | +---+-+---+-++---+ | LOCALTIME | TIME(0) | FALSE | | | | | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | | | CURRENT_DATE | DATE | FALSE | | | | | CURRENT_TIME | TIME(0) | FALSE | | | | | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | | +---+-+---+-++---+ 8 rows in set {code} The documentation shows aliased names
[jira] [Created] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5
Mans Singh created FLINK-25900: -- Summary: Create view example does not assign alias to functions resulting in generated names like EXPR$5 Key: FLINK-25900 URL: https://issues.apache.org/jira/browse/FLINK-25900 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.14.3 Reporter: Mans Singh Fix For: 1.15.0 The create view example query: {noformat} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); {noformat} produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown below: {code:java} Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME(); > Flink SQL> describe MyView1; +---+-+---+-++---+ | name | type | null | key | extras | watermark | +---+-+---+-++---+ | LOCALTIME | TIME(0) | FALSE | | | | | LOCALTIMESTAMP | TIMESTAMP(3) | FALSE | | | | | CURRENT_DATE | DATE | FALSE | | | | | CURRENT_TIME | TIME(0) | FALSE | | | | | CURRENT_TIMESTAMP | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$5 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$6 | TIMESTAMP_LTZ(3) | FALSE | | | | | EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE | | | | +---+-+---+-++---+ 8 rows in set {code} The documentation shows aliased names [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value] {code:java} ++-+---+-++---+ | name |type | null | key | extras | watermark | ++-+---+-++---+ | LOCALTIME | TIME(0) | false | || | | LOCALTIMESTAMP |TIMESTAMP(3) | false | || | | CURRENT_DATE |DATE | false | || | | CURRENT_TIME | TIME(0) | false | || | | CURRENT_TIMESTAMP |TIMESTAMP_LTZ(3) | false | || | |CURRENT_ROW_TIMESTAMP() |TIMESTAMP_LTZ(3) | false | || | | NOW() |TIMESTAMP_LTZ(3) | false | || | | PROCTIME() 
| TIMESTAMP_LTZ(3) *PROCTIME* | false | || | ++-+---+-++---+ {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
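Editorial aside: a hedged sketch of the aliased form the report is pointing at. The alias names below are illustrative only (the Flink docs render the function expressions themselves as column names); any explicit alias avoids the generated EXPR$N names:

```sql
CREATE VIEW MyView1 AS
SELECT
  LOCALTIME,
  LOCALTIMESTAMP,
  CURRENT_DATE,
  CURRENT_TIME,
  CURRENT_TIMESTAMP,
  CURRENT_ROW_TIMESTAMP() AS current_row_ts,  -- aliased, so no EXPR$5
  NOW() AS now_ts,                            -- aliased, so no EXPR$6
  PROCTIME() AS proc_time;                    -- aliased, so no EXPR$7
```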
[jira] [Commented] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks
[ https://issues.apache.org/jira/browse/FLINK-25763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17480524#comment-17480524 ] Mans Singh commented on FLINK-25763: Please assign this issue to me. Thanks > Match Recognize Logical Offsets function table shows backticks > -- > > Key: FLINK-25763 > URL: https://issues.apache.org/jira/browse/FLINK-25763 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.14.3 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: doc, table-api > Fix For: 1.15.0 > > Attachments: MatchRecognizeLogicalOffsets.png > > > The match recognize logical offsets functions in the table are formatted with > back ticks as shown below: > > !MatchRecognizeLogicalOffsets.png|width=736,height=230! -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks
Mans Singh created FLINK-25763: -- Summary: Match Recognize Logical Offsets function table shows backticks Key: FLINK-25763 URL: https://issues.apache.org/jira/browse/FLINK-25763 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.14.3 Environment: All Reporter: Mans Singh Fix For: 1.15.0 Attachments: MatchRecognizeLogicalOffsets.png The match recognize logical offsets functions in the table are formatted with back ticks as shown below: !MatchRecognizeLogicalOffsets.png|width=736,height=230! -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-24442) Flink Queries Docs markup does not show escape ticks
[ https://issues.apache.org/jira/browse/FLINK-24442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17423741#comment-17423741 ] Mans Singh commented on FLINK-24442: Please assign this issue to me. Thanks > Flink Queries Docs markup does not show escape ticks > > > Key: FLINK-24442 > URL: https://issues.apache.org/jira/browse/FLINK-24442 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API > Affects Versions: 1.14.0 > Reporter: Mans Singh > Priority: Minor > Fix For: 1.14.0 > > Attachments: Screen Shot 2021-10-03 at 7.01.41 PM.png > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The [table query overview|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/overview/#syntax] mentions: > {quote} * Unlike Java, back-ticks allow identifiers to contain non-alphanumeric characters (e.g. {{“SELECT a AS }}{{my field}}{{ FROM t”}}).{quote} > The "my field" identifier appears without escape backticks, as shown in the screenshot below: > > !Screen Shot 2021-10-03 at 7.01.41 PM.png! > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24442) Flink Queries Docs markup does not show escape ticks
Mans Singh created FLINK-24442: -- Summary: Flink Queries Docs markup does not show escape ticks Key: FLINK-24442 URL: https://issues.apache.org/jira/browse/FLINK-24442 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.14.0 Reporter: Mans Singh Fix For: 1.14.0 Attachments: Screen Shot 2021-10-03 at 7.01.41 PM.png The [table query overview|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/overview/#syntax] mentions: {quote} * Unlike Java, back-ticks allow identifiers to contain non-alphanumeric characters (e.g. {{“SELECT a AS }}{{my field}}{{ FROM t”}}).{quote} The "my field" identifier appears without escape backticks, as shown in the screenshot below: !Screen Shot 2021-10-03 at 7.01.41 PM.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-24259) Exception message when Kafka topic is null in FlinkKafkaConsumer
[ https://issues.apache.org/jira/browse/FLINK-24259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-24259: --- Summary: Exception message when Kafka topic is null in FlinkKafkaConsumer (was: Exception thrown when Kafka topic is null in FlinkKafkaConsumer) > Exception message when Kafka topic is null in FlinkKafkaConsumer > > > Key: FLINK-24259 > URL: https://issues.apache.org/jira/browse/FLINK-24259 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, kafka > Affects Versions: 1.14.0 > Environment: All > Reporter: Mans Singh > Priority: Minor > Labels: connector, consumer, exception, kafka, pull-request-available > Fix For: 1.15.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: > {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} > {quote} > The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-24259) Exception thrown when Kafka topic is null in FlinkKafkaConsumer
[ https://issues.apache.org/jira/browse/FLINK-24259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-24259: --- Summary: Exception thrown when Kafka topic is null in FlinkKafkaConsumer (was: Exception thrown when Kafka topic is null for FlinkKafkaConsumer) > Exception thrown when Kafka topic is null in FlinkKafkaConsumer > --- > > Key: FLINK-24259 > URL: https://issues.apache.org/jira/browse/FLINK-24259 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, kafka > Affects Versions: 1.14.0 > Environment: All > Reporter: Mans Singh > Priority: Minor > Labels: connector, consumer, exception, kafka, pull-request-available > Fix For: 1.15.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: > {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} > {quote} > The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-24259) Exception thrown when Kafka topic is null for FlinkKafkaConsumer
[ https://issues.apache.org/jira/browse/FLINK-24259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-24259: --- Description: If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} {quote} The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. was: If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} {quote} The KafkaTopicDescriptor can checks that the topic name empty, blank or null and provide more informative message. > Exception thrown when Kafka topic is null for FlinkKafkaConsumer > > > Key: FLINK-24259 > URL: https://issues.apache.org/jira/browse/FLINK-24259 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, kafka > Affects Versions: 1.14.0 > Environment: All > Reporter: Mans Singh > Priority: Minor > Labels: connector, consumer, exception, kafka, pull-request-available > Fix For: 1.15.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: > {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} > {quote} > The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24259) Exception thrown when Kafka topic is null for FlinkKafkaConsumer
Mans Singh created FLINK-24259: -- Summary: Exception thrown when Kafka topic is null for FlinkKafkaConsumer Key: FLINK-24259 URL: https://issues.apache.org/jira/browse/FLINK-24259 Project: Flink Issue Type: Improvement Components: Connectors / Kafka, kafka Affects Versions: 1.14.0 Environment: All Reporter: Mans Singh Fix For: 1.15.0 If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} {quote} The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-24259) Exception thrown when Kafka topic is null for FlinkKafkaConsumer
[ https://issues.apache.org/jira/browse/FLINK-24259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413815#comment-17413815 ] Mans Singh commented on FLINK-24259: Please assign this issue to me. Thanks > Exception thrown when Kafka topic is null for FlinkKafkaConsumer > > > Key: FLINK-24259 > URL: https://issues.apache.org/jira/browse/FLINK-24259 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, kafka > Affects Versions: 1.14.0 > Environment: All > Reporter: Mans Singh > Priority: Minor > Labels: connector, consumer, exception, kafka > Fix For: 1.15.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > If the topic name is null for the FlinkKafkaConsumer we get a Kafka exception with the message: > {quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}} > {quote} > The KafkaTopicDescriptor can check whether the topic name is empty, blank, or null and provide a more informative message. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
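Editorial aside: the kind of eager validation FLINK-24259 suggests can be sketched in plain Java. The class and method names below are assumptions for illustration, not Flink's actual KafkaTopicDescriptor API; the point is failing fast with a clear message instead of Kafka's opaque "Error computing size for field 'name'" error:

```java
// Hypothetical helper sketching the suggested check: reject null, empty,
// or blank topic names up front with a descriptive message.
public class TopicValidation {
    static String requireValidTopic(String topic) {
        if (topic == null || topic.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "Kafka topic name must not be null, empty, or blank, but was: [" + topic + "]");
        }
        return topic;
    }

    public static void main(String[] args) {
        System.out.println(requireValidTopic("my-topic"));
        try {
            requireValidTopic("   ");
        } catch (IllegalArgumentException e) {
            // the caller now sees which input was bad, not a size-computation error
            System.out.println(e.getMessage());
        }
    }
}
```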
[jira] [Commented] (FLINK-24172) Table API join documentation for Java has missing end quote after table name
[ https://issues.apache.org/jira/browse/FLINK-24172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17410840#comment-17410840 ] Mans Singh commented on FLINK-24172: Please assign this issue to me. Thanks > Table API join documentation for Java has missing end quote after table name > > > Key: FLINK-24172 > URL: https://issues.apache.org/jira/browse/FLINK-24172 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API > Affects Versions: 1.14.0 > Reporter: Mans Singh > Priority: Minor > Labels: docuentation, join, table-api > Fix For: 1.15.0 > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The Table API join documentation is missing the ending quote after the table name: > > {quote}{{Table left = tableEnv.from("MyTable).select($("a"), $("b"), $("c"));}} > {quote} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24172) Table API join documentation for Java has missing end quote after table name
Mans Singh created FLINK-24172: -- Summary: Table API join documentation for Java has missing end quote after table name Key: FLINK-24172 URL: https://issues.apache.org/jira/browse/FLINK-24172 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.14.0 Reporter: Mans Singh Fix For: 1.15.0 The Table API join documentation is missing the ending quote after the table name: {quote}{{Table left = tableEnv.from("MyTable).select($("a"), $("b"), $("c"));}} {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
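Editorial aside: the corrected statement simply closes the string literal after MyTable. The `Api*` stub classes below are hypothetical stand-ins for Flink's Table API types (the real ones live in org.apache.flink.table.api), used only so the corrected line can be shown compiling in isolation:

```java
// Hypothetical stand-ins for Flink's Table, TableEnvironment, and the $()
// expression helper -- illustration only, not the real API.
class ApiExpr { final String name; ApiExpr(String name) { this.name = name; } }
class ApiTable { ApiTable select(ApiExpr... fields) { return this; } }
class ApiTableEnv { ApiTable from(String path) { return new ApiTable(); } }

public class JoinDocFix {
    static ApiExpr $(String name) { return new ApiExpr(name); }

    public static void main(String[] args) {
        ApiTableEnv tableEnv = new ApiTableEnv();
        // Corrected line from the report: note the closing quote after MyTable.
        ApiTable left = tableEnv.from("MyTable").select($("a"), $("b"), $("c"));
        System.out.println(left != null);
    }
}
```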
[jira] [Commented] (FLINK-24042) DataStream printToErr doc indicates that it writes to standard output
[ https://issues.apache.org/jira/browse/FLINK-24042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17406473#comment-17406473 ] Mans Singh commented on FLINK-24042: Please assign this issue to me. Thanks > DataStream printToErr doc indicates that it writes to standard output > - > > Key: FLINK-24042 > URL: https://issues.apache.org/jira/browse/FLINK-24042 > Project: Flink > Issue Type: Improvement > Components: API / DataStream, Documentation > Affects Versions: 1.13.2 > Reporter: Mans Singh > Priority: Minor > Labels: javadoc, streaming-api > Fix For: 1.14.0 > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The DataStream printToErr method's javadoc indicates that it writes to the standard output stream: > > {quote}Writes a DataStream to the standard output stream (stderr). > {quote} > > It should say the standard error stream. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24042) DataStream printToErr doc indicates that it writes to standard output
Mans Singh created FLINK-24042: -- Summary: DataStream printToErr doc indicates that it writes to standard output Key: FLINK-24042 URL: https://issues.apache.org/jira/browse/FLINK-24042 Project: Flink Issue Type: Improvement Components: API / DataStream, Documentation Affects Versions: 1.13.2 Reporter: Mans Singh Fix For: 1.14.0 The data stream printToErr method javadoc indicates that it writes to the standard output stream. {quote}Writes a DataStream to the standard output stream (stderr). {quote} It should be the standard error stream. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type
[ https://issues.apache.org/jira/browse/FLINK-23910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-23910: --- Description: RichAsyncFunction does not support double counters but the message indicates "Long counters..." @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } was: RichAsyncFunction does not support double counters but the message indicates "Long counters..." @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } > RichAsyncFunction counter message inconsistent with counter type > - > > Key: FLINK-23910 > URL: https://issues.apache.org/jira/browse/FLINK-23910 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.13.2 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: api, async, pull-request-available > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > RichAsyncFunction does not support double counters but the message indicates > "Long counters..." > > @Override > public DoubleCounter getDoubleCounter(String name) { > throw new UnsupportedOperationException( "Long counters are not supported in > rich async functions."); > } > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type
[ https://issues.apache.org/jira/browse/FLINK-23910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-23910: --- Description: RichAsyncFunction does not support double counters but the message indicates "Long counters..." @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } was: RichAsyncFunction does not support double counters but the message indicates "Long counters..." {quote} @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } {quote} > RichAsyncFunction counter message inconsistent with counter type > - > > Key: FLINK-23910 > URL: https://issues.apache.org/jira/browse/FLINK-23910 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.13.2 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: api, async > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > RichAsyncFunction does not support double counters but the message indicates > "Long counters..." > > @Override > public DoubleCounter getDoubleCounter(String name) > { throw new UnsupportedOperationException( "Long counters are not supported > in rich async functions."); } > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type
[ https://issues.apache.org/jira/browse/FLINK-23910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402895#comment-17402895 ] Mans Singh commented on FLINK-23910: Please assign this issue to me. Thanks > RichAsyncFunction counter message inconsistent with counter type > - > > Key: FLINK-23910 > URL: https://issues.apache.org/jira/browse/FLINK-23910 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.13.2 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: api, async > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > RichAsyncFunction does not support double counters but the message indicates > "Long counters..." > > {quote} @Override > public DoubleCounter getDoubleCounter(String name) > { throw new UnsupportedOperationException( "Long counters are not supported > in rich async functions."); } > > {quote} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type
[ https://issues.apache.org/jira/browse/FLINK-23910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-23910: --- Description: RichAsyncFunction does not support double counters but the message indicates "Long counters..." {quote} @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } {quote} was: RichAsyncFunction does not support double counters but the message indicates "Long counters..." {quote} @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } {quote} > RichAsyncFunction counter message inconsistent with counter type > - > > Key: FLINK-23910 > URL: https://issues.apache.org/jira/browse/FLINK-23910 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.13.2 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: api, async > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > RichAsyncFunction does not support double counters but the message indicates > "Long counters..." > > {quote} @Override > public DoubleCounter getDoubleCounter(String name) > { throw new UnsupportedOperationException( "Long counters are not supported > in rich async functions."); } > > {quote} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type
Mans Singh created FLINK-23910: -- Summary: RichAsyncFunction counter message inconsistent with counter type Key: FLINK-23910 URL: https://issues.apache.org/jira/browse/FLINK-23910 Project: Flink Issue Type: Improvement Components: API / DataStream Affects Versions: 1.13.2 Environment: All Reporter: Mans Singh Fix For: 1.14.0 RichAsyncFunction does not support double counters but the message indicates "Long counters..." {quote} @Override public DoubleCounter getDoubleCounter(String name) { throw new UnsupportedOperationException( "Long counters are not supported in rich async functions."); } {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
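A minimal, self-contained sketch of the suggested fix — aligning the message with the counter type actually being rejected. The class and method names below are hypothetical stand-ins, not Flink's; only the exception message is the point:

```java
// Hypothetical stand-in for RichAsyncFunction's accumulator guard.
public class DoubleCounterGuard {

    // Mirrors the shape of getDoubleCounter: unconditionally unsupported,
    // but with a message that names the correct counter type.
    public static UnsupportedOperationException doubleCountersUnsupported() {
        return new UnsupportedOperationException(
                "Double counters are not supported in rich async functions.");
    }

    public static void main(String[] args) {
        System.out.println(doubleCountersUnsupported().getMessage());
    }
}
```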
[jira] [Commented] (FLINK-23699) The code comment's reference is wrong for the function isUsingFixedMemoryPerSlot
[ https://issues.apache.org/jira/browse/FLINK-23699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17397291#comment-17397291 ] Mans Singh commented on FLINK-23699: Please assign this issue to me. Thanks. > The code comment's reference is wrong for the function > isUsingFixedMemoryPerSlot > > > Key: FLINK-23699 > URL: https://issues.apache.org/jira/browse/FLINK-23699 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Affects Versions: 1.14.0 >Reporter: Liu >Priority: Minor > Labels: rocksdb > > The code is as follow. USE_MANAGED_MEMORY should be FIX_PER_SLOT_MEMORY_SIZE. > /** > * Gets whether the state backend is configured to use a fixed amount of > memory shared between > * all RocksDB instances (in all tasks and operators) of a slot. See {@link > * RocksDBOptions#USE_MANAGED_MEMORY} for details. > */ > public boolean isUsingFixedMemoryPerSlot() { > return fixedMemoryPerSlot != null; > } -- This message was sent by Atlassian Jira (v8.3.4#803005)
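Applying the substitution the issue describes (USE_MANAGED_MEMORY → FIX_PER_SLOT_MEMORY_SIZE), the corrected comment would read (a fragment of the backend class, not standalone code):

```java
/**
 * Gets whether the state backend is configured to use a fixed amount of memory shared between
 * all RocksDB instances (in all tasks and operators) of a slot. See {@link
 * RocksDBOptions#FIX_PER_SLOT_MEMORY_SIZE} for details.
 */
public boolean isUsingFixedMemoryPerSlot() {
    return fixedMemoryPerSlot != null;
}
```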
[jira] [Commented] (FLINK-23549) Kafka table connector create table example in docs has syntax error
[ https://issues.apache.org/jira/browse/FLINK-23549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17390197#comment-17390197 ] Mans Singh commented on FLINK-23549: Please assign the issue to me. Thanks > Kafka table connector create table example in docs has syntax error > --- > > Key: FLINK-23549 > URL: https://issues.apache.org/jira/browse/FLINK-23549 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka, Documentation >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: connector, doc, example, kafka, syntax, table > Fix For: 1.14.0 > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The create table example in the docs has an syntax error (extra comma after > the opening bracket): > {quote}CREATE TABLE KafkaTable (, > `ts` TIMESTAMP(3) METADATA FROM 'timestamp', > `user_id` BIGINT, > `item_id` BIGINT, > `behavior` STRING > ) WITH ( > 'connector' = 'kafka', > ... > ) > {quote} > On executing in the FlinkSQL it produces an error: > {quote}[ERROR] Could not execute SQL statement. Reason: > org.apache.flink.sql.parser.impl.ParseException: Encountered "," at line 1, > column 26. > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23549) Kafka table connector create table example in docs has syntax error
Mans Singh created FLINK-23549: -- Summary: Kafka table connector create table example in docs has syntax error Key: FLINK-23549 URL: https://issues.apache.org/jira/browse/FLINK-23549 Project: Flink Issue Type: Improvement Components: Connectors / Kafka, Documentation Affects Versions: 1.13.1 Reporter: Mans Singh Fix For: 1.14.0 The create table example in the docs has a syntax error (extra comma after the opening bracket): {quote}CREATE TABLE KafkaTable (, `ts` TIMESTAMP(3) METADATA FROM 'timestamp', `user_id` BIGINT, `item_id` BIGINT, `behavior` STRING ) WITH ( 'connector' = 'kafka', ... ) {quote} When executed in the Flink SQL client it produces an error: {quote}[ERROR] Could not execute SQL statement. Reason: org.apache.flink.sql.parser.impl.ParseException: Encountered "," at line 1, column 26. {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
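For reference, the documented statement parses once the stray comma after the opening bracket is removed (connector options abbreviated exactly as in the original):

```sql
CREATE TABLE KafkaTable (
  `ts` TIMESTAMP(3) METADATA FROM 'timestamp',
  `user_id` BIGINT,
  `item_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  ...
)
```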
[jira] [Commented] (FLINK-23490) Flink Table Example - StreamWindowSQLExample shows output in older format
[ https://issues.apache.org/jira/browse/FLINK-23490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386783#comment-17386783 ] Mans Singh commented on FLINK-23490: Please assign this issue to me. Thanks > Flink Table Example - StreamWindowSQLExample shows output in older format > - > > Key: FLINK-23490 > URL: https://issues.apache.org/jira/browse/FLINK-23490 > Project: Flink > Issue Type: Improvement > Components: Examples, Table SQL / API >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: examples, sql, table > Fix For: 1.14.0 > > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > The example print output shows older format: > {quote}{{// 2019-12-12 00:00:00.000,3,10,3}} > {{// 2019-12-12 00:00:05.000,3,6,2}} > {quote} > > Execution of the application print the following: > {quote}+I[2019-12-12 00:00:00.000, 3, 10, 3] > +I[2019-12-12 00:00:05.000, 3, 6, 2] > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23490) Flink Table Example - StreamWindowSQLExample shows output in older format
Mans Singh created FLINK-23490: -- Summary: Flink Table Example - StreamWindowSQLExample shows output in older format Key: FLINK-23490 URL: https://issues.apache.org/jira/browse/FLINK-23490 Project: Flink Issue Type: Improvement Components: Examples, Table SQL / API Affects Versions: 1.13.1 Reporter: Mans Singh Fix For: 1.14.0 The example's printed output shows the older format: {quote}{{// 2019-12-12 00:00:00.000,3,10,3}} {{// 2019-12-12 00:00:05.000,3,6,2}} {quote} Executing the application prints the following: {quote}+I[2019-12-12 00:00:00.000, 3, 10, 3] +I[2019-12-12 00:00:05.000, 3, 6, 2] {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23412) Improve sourceSink description
Mans Singh created FLINK-23412: -- Summary: Improve sourceSink description Key: FLINK-23412 URL: https://issues.apache.org/jira/browse/FLINK-23412 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.13.1 Reporter: Mans Singh Fix For: 1.14.0 The table/sourcesink documentation indicates: {quote} the sink can solely accept insert-only rows and write out bounded streams.{quote} Perhaps it could be: {quote} the sink can only accept insert-only rows and write out bounded streams.{quote} Also, improve the full stack example bullet points. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23412) Improve sourceSink description
[ https://issues.apache.org/jira/browse/FLINK-23412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17382375#comment-17382375 ] Mans Singh commented on FLINK-23412: Please assign this issue to me. Thanks > Improve sourceSink description > -- > > Key: FLINK-23412 > URL: https://issues.apache.org/jira/browse/FLINK-23412 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: docuentation, sink, source, table > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > The table/sourcesink documentation indicates: > {quote} the sink can solely accept insert-only rows and write out bounded > streams.{quote} > Perhaps can be: > {quote} the sink can only accept insert-only rows and write out bounded > streams.{quote} > Also, improving full stack example bullet points. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23347) Datastream operators overview refers to DataStreamStream
[ https://issues.apache.org/jira/browse/FLINK-23347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-23347: --- Description: The [operators overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall] document refers to DataStreamStream: {quote} h4. DataStreamStream → AllWindowedStream {quote} Perhaps should be: {quote} h4. DataStream → AllWindowedStream {quote} was: The [operators overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall] document refers to: {quote} h4. DataStreamStream → AllWindowedStream {quote} Perhaps should be: {quote} h4. DataStream → AllWindowedStream {quote} > Datastream operators overview refers to DataStreamStream > > > Key: FLINK-23347 > URL: https://issues.apache.org/jira/browse/FLINK-23347 > Project: Flink > Issue Type: Improvement > Components: API / DataStream, Documentation >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: doc, operators, overview > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > The [operators > overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall] > document refers to DataStreamStream: > {quote} > h4. DataStreamStream → AllWindowedStream > {quote} > Perhaps should be: > {quote} > h4. DataStream → AllWindowedStream > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23347) Datastream operators overview refers to DataStreamStream
[ https://issues.apache.org/jira/browse/FLINK-23347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17378807#comment-17378807 ] Mans Singh commented on FLINK-23347: Please assign this issue to me. Thanks > Datastream operators overview refers to DataStreamStream > > > Key: FLINK-23347 > URL: https://issues.apache.org/jira/browse/FLINK-23347 > Project: Flink > Issue Type: Improvement > Components: API / DataStream, Documentation >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: doc, operators, overview > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > The [operators > overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall] > document refers to: > {quote} > h4. DataStreamStream → AllWindowedStream > {quote} > Perhaps should be: > {quote} > h4. DataStream → AllWindowedStream > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23347) Datastream operators overview refers to DataStreamStream
Mans Singh created FLINK-23347: -- Summary: Datastream operators overview refers to DataStreamStream Key: FLINK-23347 URL: https://issues.apache.org/jira/browse/FLINK-23347 Project: Flink Issue Type: Improvement Components: API / DataStream, Documentation Affects Versions: 1.13.1 Reporter: Mans Singh The [operators overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall] document refers to: {quote} h4. DataStreamStream → AllWindowedStream {quote} Perhaps should be: {quote} h4. DataStream → AllWindowedStream {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23280) Python ExplainDetails does not have JSON_EXECUTION_PLAN option
Mans Singh created FLINK-23280: -- Summary: Python ExplainDetails does not have JSON_EXECUTION_PLAN option Key: FLINK-23280 URL: https://issues.apache.org/jira/browse/FLINK-23280 Project: Flink Issue Type: Bug Components: API / Python, Table SQL / API Affects Versions: 1.13.0 Reporter: Mans Singh Fix For: 1.14.0 Add missing JSON_EXECUTION_PLAN option to python ExplainDetails class (https://github.com/apache/flink/blob/master/flink-python/pyflink/table/explain_detail.py) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23256) Explain string shows output of legacy planner
[ https://issues.apache.org/jira/browse/FLINK-23256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17374917#comment-17374917 ] Mans Singh commented on FLINK-23256: Please assign this issue to me. Thanks > Explain string shows output of legacy planner > - > > Key: FLINK-23256 > URL: https://issues.apache.org/jira/browse/FLINK-23256 > Project: Flink > Issue Type: Improvement > Components: Documentation, Table SQL / API >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Fix For: 1.14.0 > > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > The output on [Concepts & Common API > page|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/common/#explaining-a-table/] > documentation page of: > {quote}table.explain() > {quote} > is showing the result of the legacy planner. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23256) Explain string shows output of legacy planner
Mans Singh created FLINK-23256: -- Summary: Explain string shows output of legacy planner Key: FLINK-23256 URL: https://issues.apache.org/jira/browse/FLINK-23256 Project: Flink Issue Type: Improvement Components: Documentation, Table SQL / API Affects Versions: 1.13.1 Reporter: Mans Singh Fix For: 1.14.0 The output on [Concepts & Common API page|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/common/#explaining-a-table/] documentation page of: {quote}table.explain() {quote} is showing the result of the legacy planner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23162) Create table uses time_ltz in the column name and it's expression which results in exception
[ https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-23162: --- Description: The create table example in [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] uses the `time_ltz` in it's declaration and expression {quote}CREATE TABLE user_actions ( user_name STRING, data STRING, ts BIGINT, time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3), – declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND ) WITH ( ... ); {quote} When it is executed in the flink sql client it throws an exception: {quote}[ERROR] Could not execute SQL statement. Reason: org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz' {quote} The create table works if the expression uses ts as the argument while declaring time_ltz. was: The create table example in [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] uses the `time_ltz` in it's declaration {quote}CREATE TABLE user_actions ( user_name STRING, data STRING, ts BIGINT, time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3), – declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND ) WITH ( ... ); {quote} When it is executed in the flink sql client it throws an exception: {quote}[ERROR] Could not execute SQL statement. Reason: org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz' {quote} The create table works if the expression uses ts as the argument while declaring time_ltz. 
> Create table uses time_ltz in the column name and it's expression which > results in exception > - > > Key: FLINK-23162 > URL: https://issues.apache.org/jira/browse/FLINK-23162 > Project: Flink > Issue Type: Improvement > Components: Documentation, Examples, Table SQL / Client >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: doc, example, pull-request-available, sql > Fix For: 1.14.0 > > Original Estimate: 10m > Remaining Estimate: 10m > > The create table example in > [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] > uses the `time_ltz` in it's declaration and expression > {quote}CREATE TABLE user_actions ( > user_name STRING, > data STRING, > ts BIGINT, > time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3), > – declare time_ltz as event time attribute and use 5 seconds delayed > watermark strategy > WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND > ) WITH ( > ... > ); > {quote} > When it is executed in the flink sql client it throws an exception: > {quote}[ERROR] Could not execute SQL statement. Reason: > org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier > 'time_ltz' > {quote} > The create table works if the expression uses ts as the argument while > declaring time_ltz. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23162) Create table uses time_ltz in the column name and it's expression which results in exception
[ https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370075#comment-17370075 ] Mans Singh commented on FLINK-23162: Please assign this ticket to me. Thank > Create table uses time_ltz in the column name and it's expression which > results in exception > - > > Key: FLINK-23162 > URL: https://issues.apache.org/jira/browse/FLINK-23162 > Project: Flink > Issue Type: Improvement > Components: Documentation, Examples, Table SQL / Client >Affects Versions: 1.13.1 >Reporter: Mans Singh >Priority: Minor > Labels: doc, example, pull-request-available, sql > Fix For: 1.14.0 > > Original Estimate: 10m > Remaining Estimate: 10m > > The create table example in > [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] > uses the `time_ltz` in it's declaration > {quote}CREATE TABLE user_actions ( > user_name STRING, > data STRING, > ts BIGINT, > time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3), > – declare time_ltz as event time attribute and use 5 seconds delayed > watermark strategy > WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND > ) WITH ( > ... > ); > {quote} > When it is executed in the flink sql client it throws an exception: > {quote}[ERROR] Could not execute SQL statement. Reason: > org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier > 'time_ltz' > {quote} > The create table works if the expression uses ts as the argument while > declaring time_ltz. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23162) Create table uses time_ltz in the column name and it's expression which results in exception
Mans Singh created FLINK-23162: -- Summary: Create table uses time_ltz in the column name and it's expression which results in exception Key: FLINK-23162 URL: https://issues.apache.org/jira/browse/FLINK-23162 Project: Flink Issue Type: Improvement Components: Documentation, Examples, Table SQL / Client Affects Versions: 1.13.1 Reporter: Mans Singh Fix For: 1.14.0 The create table example in [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] uses `time_ltz` in its declaration {quote}CREATE TABLE user_actions ( user_name STRING, data STRING, ts BIGINT, time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3), -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND ) WITH ( ... ); {quote} When it is executed in the Flink SQL client it throws an exception: {quote}[ERROR] Could not execute SQL statement. Reason: org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz' {quote} The create table works if the expression uses ts as the argument while declaring time_ltz. -- This message was sent by Atlassian Jira (v8.3.4#803005)
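As the issue notes, the statement validates once the computed column's expression references the source column `ts` rather than the column being declared (connector options elided as in the original):

```sql
CREATE TABLE user_actions (
  user_name STRING,
  data STRING,
  ts BIGINT,
  time_ltz AS TO_TIMESTAMP_LTZ(ts, 3),
  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
) WITH (
  ...
);
```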
[jira] [Commented] (FLINK-18484) RowSerializer arity error does not provide specific information about the mismatch
[ https://issues.apache.org/jira/browse/FLINK-18484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151099#comment-17151099 ] Mans Singh commented on FLINK-18484: Please assign the issue to me. Thanks > RowSerializer arity error does not provide specific information about the > mismatch > -- > > Key: FLINK-18484 > URL: https://issues.apache.org/jira/browse/FLINK-18484 > Project: Flink > Issue Type: Improvement > Components: API / Core >Affects Versions: 1.10.1 > Environment: All >Reporter: Mans Singh >Priority: Minor > Labels: core, exception, message > Fix For: 1.11.1 > > Original Estimate: 2h > Remaining Estimate: 2h > > The RowSerializer throws a RuntimeException when there is mismatch between > the serializers field length and the input row. But the exception message > does not contain information about the difference in the lengths. Eg: > {{java.lang.RuntimeException: Row arity of from does not match serializers.}} > Adding information about the mismatched length will be more helpful. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-18484) RowSerializer arity error does not provide specific information about the mismatch
Mans Singh created FLINK-18484: -- Summary: RowSerializer arity error does not provide specific information about the mismatch Key: FLINK-18484 URL: https://issues.apache.org/jira/browse/FLINK-18484 Project: Flink Issue Type: Improvement Components: API / Core Affects Versions: 1.10.1 Environment: All Reporter: Mans Singh Fix For: 1.11.1 The RowSerializer throws a RuntimeException when there is a mismatch between the serializer's field length and the input row. But the exception message does not contain information about the difference in the lengths. E.g.: {{java.lang.RuntimeException: Row arity of from does not match serializers.}} Adding information about the mismatched lengths would be more helpful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
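One way to make the message specific, as the issue suggests — a hypothetical, self-contained formatter (the names are ours, not RowSerializer's) that states both arities instead of only the fact of a mismatch:

```java
// Hypothetical formatter for the suggested, more informative arity error.
public class ArityMismatch {

    // Builds a message that includes both lengths so the user can see
    // how far apart the row and the serializer actually are.
    public static String message(int rowArity, int serializerArity) {
        return "Row arity (" + rowArity
                + ") does not match serializer arity (" + serializerArity + ").";
    }

    public static void main(String[] args) {
        System.out.println(message(5, 3));
    }
}
```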
[jira] [Updated] (FLINK-17602) Documentation for broadcast state correction
[ https://issues.apache.org/jira/browse/FLINK-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-17602: --- Summary: Documentation for broadcast state correction (was: Document for broadcast state correction) > Documentation for broadcast state correction > > > Key: FLINK-17602 > URL: https://issues.apache.org/jira/browse/FLINK-17602 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.10.0 >Reporter: Mans Singh >Priority: Trivial > Labels: document > Fix For: 1.11.0 > > Original Estimate: 1h > Remaining Estimate: 1h > > Broadcast state documentation mentions `processBroadcast()` which should be > `processBroadcastElement()` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17602) Document for broadcast state correction
[ https://issues.apache.org/jira/browse/FLINK-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-17602: --- Summary: Document for broadcast state correction (was: Document broadcast state correction) > Document for broadcast state correction > --- > > Key: FLINK-17602 > URL: https://issues.apache.org/jira/browse/FLINK-17602 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.10.0 >Reporter: Mans Singh >Priority: Trivial > Labels: document > Fix For: 1.11.0 > > Original Estimate: 1h > Remaining Estimate: 1h > > Broadcast state documentation mentions `processBroadcast()` which should be > `processBroadcastElement()` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17602) Document broadcast state correction
[ https://issues.apache.org/jira/browse/FLINK-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17103958#comment-17103958 ] Mans Singh commented on FLINK-17602: Please assign this issue to me. > Document broadcast state correction > --- > > Key: FLINK-17602 > URL: https://issues.apache.org/jira/browse/FLINK-17602 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.10.0 >Reporter: Mans Singh >Priority: Trivial > Labels: document > Fix For: 1.11.0 > > Original Estimate: 1h > Remaining Estimate: 1h > > Broadcast state documentation mentions `processBroadcast()` which should be > `processBroadcastElement()` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17602) Document broadcast state correction
Mans Singh created FLINK-17602: -- Summary: Document broadcast state correction Key: FLINK-17602 URL: https://issues.apache.org/jira/browse/FLINK-17602 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.10.0 Reporter: Mans Singh Fix For: 1.11.0 Broadcast state documentation mentions `processBroadcast()` which should be `processBroadcastElement()` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-15181) Minor doc correction
Mans Singh created FLINK-15181: -- Summary: Minor doc correction Key: FLINK-15181 URL: https://issues.apache.org/jira/browse/FLINK-15181 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.9.1 Reporter: Mans Singh Fix For: 1.10.0 Minor documentation corrections. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-13542) Flink Datadog metrics reporter sends empty series if there are no metrics
Mans Singh created FLINK-13542: -- Summary: Flink Datadog metrics reporter sends empty series if there are no metrics Key: FLINK-13542 URL: https://issues.apache.org/jira/browse/FLINK-13542 Project: Flink Issue Type: Improvement Components: Runtime / Metrics Affects Versions: 1.8.1 Reporter: Mans Singh Fix For: 1.9.0 If there are no metrics, the Datadog reporter still sends an empty series array to Datadog. The reporter can check the size of the series and send only when metrics have been collected. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
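The proposed check can be sketched as follows — hypothetical names, assuming the reporter holds its collected metrics in a list and posts them over HTTP:

```java
import java.util.List;

// Hypothetical sketch of the proposed guard: skip the HTTP post entirely
// when the collected series is empty, instead of sending "series": [].
public class EmptySeriesGuard {

    // Returns true only when there is at least one metric to report.
    public static boolean shouldReport(List<?> series) {
        return series != null && !series.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(shouldReport(List.of()));        // empty: skip the post
        System.out.println(shouldReport(List.of("gauge"))); // non-empty: send it
    }
}
```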
[jira] [Created] (FLINK-13104) Flink Datadog metrics client callback does not check for errors on posting and fails silently
Mans Singh created FLINK-13104: -- Summary: Flink Datadog metrics client callback does not check for errors on posting and fails silently Key: FLINK-13104 URL: https://issues.apache.org/jira/browse/FLINK-13104 Project: Flink Issue Type: Improvement Components: Runtime / Metrics Affects Versions: 1.8.1 Reporter: Mans Singh Assignee: Mans Singh Fix For: 1.9.0 Flink DatadogHttpClient's callback does not check whether the request was successful. In case of a non-successful post request it should log a warning so that the error can be resolved. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
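A hedged sketch of the missing check: the real `DatadogHttpClient` posts asynchronously through an HTTP client callback; here the response is modeled by its status code alone so the logic is self-contained, and the method names are illustrative rather than the reporter's actual API.

```java
public class CallbackCheck {
    // HTTP 2xx means Datadog accepted the series; anything else should surface
    // in the logs instead of being silently dropped.
    public static boolean isSuccessful(int statusCode) {
        return statusCode >= 200 && statusCode < 300;
    }

    public static String onResponse(int statusCode) {
        if (isSuccessful(statusCode)) {
            return "ok";
        }
        // In the reporter this would be LOG.warn(...), so operators can spot
        // problems such as a bad API key or throttling.
        return "WARN: Datadog returned HTTP " + statusCode;
    }
}
```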
[jira] [Updated] (FLINK-13065) Document example snippet correction using KeySelector
[ https://issues.apache.org/jira/browse/FLINK-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-13065: --- Description: The broadcast state [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis] states:
{noformat}
Starting from the stream of Items, we just need to key it by Color, as we want pairs of the same color. This will make sure that elements of the same color end up on the same physical machine.

// key the shapes by color
KeyedStream<Item, Color> colorPartitionedStream = shapeStream
                        .keyBy(new KeySelector<Shape, Color>(){...});
{noformat}
However, it uses the shape stream and KeySelector<Shape, Color> but should use KeySelector<Item, Color> to create the KeyedStream<Item, Color>.

was: The broadcast state [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis] states:
{noformat}
Starting from the stream of Items, we just need to key it by Color, as we want pairs of the same color. This will make sure that elements of the same color end up on the same physical machine.

// key the shapes by color
KeyedStream<Item, Color> colorPartitionedStream = shapeStream
                        .keyBy(new KeySelector<Shape, Color>(){...});
{noformat}
How it uses shape stream and use KeySelector<Shape, Color> but should use KeySelector<Item, Color> to create KeyedStream<Item, Color>.

> Document example snippet correction using KeySelector
> -
>
> Key: FLINK-13065
> URL: https://issues.apache.org/jira/browse/FLINK-13065
> Project: Flink
> Issue Type: Improvement
> Components: Documentation
> Reporter: Mans Singh
> Assignee: Mans Singh
> Priority: Minor
> Labels: correction, doc, example
>
> The broadcast state [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis] states:
> {noformat}
> Starting from the stream of Items, we just need to key it by Color, as we want pairs of the same color. This will make sure that elements of the same color end up on the same physical machine.
>
> // key the shapes by color
> KeyedStream<Item, Color> colorPartitionedStream = shapeStream
>                         .keyBy(new KeySelector<Shape, Color>(){...});
> {noformat}
> However, it uses the shape stream and KeySelector<Shape, Color> but should use KeySelector<Item, Color> to create the KeyedStream<Item, Color>.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-13065) Document example snippet correction using KeySelector
Mans Singh created FLINK-13065: -- Summary: Document example snippet correction using KeySelector Key: FLINK-13065 URL: https://issues.apache.org/jira/browse/FLINK-13065 Project: Flink Issue Type: Improvement Components: Documentation Reporter: Mans Singh Assignee: Mans Singh The broadcast state [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis] states:
{noformat}
Starting from the stream of Items, we just need to key it by Color, as we want pairs of the same color. This will make sure that elements of the same color end up on the same physical machine.

// key the shapes by color
KeyedStream<Item, Color> colorPartitionedStream = shapeStream
                        .keyBy(new KeySelector<Shape, Color>(){...});
{noformat}
How it uses shape stream and use KeySelector<Shape, Color> but should use KeySelector<Item, Color> to create KeyedStream<Item, Color>. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
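To show what the corrected snippet is getting at, here is a hedged, Flink-free sketch: `KeySelector` is re-declared locally (Flink's is `org.apache.flink.api.java.functions.KeySelector`), the `Item` type is hypothetical, and `keyBy` is approximated with `Collectors.groupingBy`, which captures the same "elements with the same key go together" semantics.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeyByColor {
    // Local stand-in for Flink's KeySelector<IN, KEY>.
    public interface KeySelector<IN, KEY> { KEY getKey(IN value); }

    public static class Item {
        public final String name;
        public final String color;
        public Item(String name, String color) { this.name = name; this.color = color; }
    }

    // As the ticket notes: keying a stream of Items needs a KeySelector<Item, Color>,
    // not a KeySelector<Shape, Color>. Here the "key" is modeled as a String.
    public static Map<String, List<Item>> keyBy(List<Item> items, KeySelector<Item, String> selector) {
        return items.stream().collect(Collectors.groupingBy(selector::getKey));
    }
}
```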
[jira] [Created] (FLINK-12952) Minor doc correction regarding incremental window functions
Mans Singh created FLINK-12952: -- Summary: Minor doc correction regarding incremental window functions Key: FLINK-12952 URL: https://issues.apache.org/jira/browse/FLINK-12952 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.8.0 Reporter: Mans Singh Assignee: Mans Singh The Flink documentation [Window Function|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/windows.html#window-functions], mentions that bq. The window function can be one of ReduceFunction, AggregateFunction, FoldFunction or ProcessWindowFunction. The first two can be executed more efficiently It should perhaps state (since FoldFunction, though deprecated, is also incremental): bq. The first *three* can be executed more efficiently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
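The "incremental" distinction the ticket is about can be illustrated with a plain-Java sketch (local `ReduceFunction` interface standing in for Flink's): each arriving element is folded into a single running value, so the window holds constant state instead of buffering every element the way a `ProcessWindowFunction` must.

```java
import java.util.List;

public class IncrementalWindow {
    // Local stand-in for org.apache.flink.api.common.functions.ReduceFunction.
    public interface ReduceFunction<T> { T reduce(T a, T b); }

    // Simulates incremental window aggregation: only the accumulator survives
    // between elements, never the full window contents.
    public static Integer applyIncrementally(List<Integer> windowElements, ReduceFunction<Integer> f) {
        Integer acc = null;
        for (Integer e : windowElements) {
            acc = (acc == null) ? e : f.reduce(acc, e);
        }
        return acc;
    }
}
```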
[jira] [Created] (FLINK-12784) Support retention policy for InfluxDB metrics reporter
Mans Singh created FLINK-12784: -- Summary: Support retention policy for InfluxDB metrics reporter Key: FLINK-12784 URL: https://issues.apache.org/jira/browse/FLINK-12784 Project: Flink Issue Type: Improvement Components: Runtime / Metrics Affects Versions: 1.8.0 Reporter: Mans Singh Assignee: Mans Singh The InfluxDB metrics reporter uses the default retention policy when saving metrics to InfluxDB. This enhancement will allow the user to specify a retention policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
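A hedged sketch of the flink-conf.yaml fragment this improvement enables; the `retentionPolicy` key name and the `one_week` policy are illustrative, so check the metrics reporter documentation for your Flink version before relying on them.

```yaml
# InfluxDB metrics reporter (names per the Flink metrics docs; verify per version)
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
# The new option: write points under a named retention policy instead of the default.
metrics.reporter.influxdb.retentionPolicy: one_week
```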
[jira] [Created] (FLINK-3967) Provide RethinkDB Sink for Flink
Mans Singh created FLINK-3967: - Summary: Provide RethinkDB Sink for Flink Key: FLINK-3967 URL: https://issues.apache.org/jira/browse/FLINK-3967 Project: Flink Issue Type: New Feature Components: Streaming, Streaming Connectors Affects Versions: 1.0.3 Environment: All Reporter: Mans Singh Assignee: Mans Singh Priority: Minor Fix For: 1.1.0 Provide a sink to stream data from Flink to RethinkDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
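The shape of such a connector might look like this. Everything here is a hypothetical sketch: `SinkFunction` is re-declared locally (Flink's is `org.apache.flink.streaming.api.functions.sink.SinkFunction`), and the RethinkDB connection is replaced by an in-memory stub; a real sink would issue inserts through the RethinkDB Java driver instead.

```java
import java.util.ArrayList;
import java.util.List;

public class RethinkSinkSketch {
    // Local stand-in for Flink's SinkFunction interface.
    public interface SinkFunction<IN> { void invoke(IN value); }

    // Stand-in for a RethinkDB table connection (hypothetical).
    public static class TableStub {
        public final List<String> rows = new ArrayList<>();
        public void insert(String json) { rows.add(json); }
    }

    // Each element arriving at the sink is written out as one insert.
    public static class RethinkDbSink implements SinkFunction<String> {
        private final TableStub table;
        public RethinkDbSink(TableStub table) { this.table = table; }
        @Override public void invoke(String value) { table.insert(value); }
    }

    public static int demo() {
        TableStub table = new TableStub();
        RethinkDbSink sink = new RethinkDbSink(table);
        sink.invoke("{\"id\": 1}");
        sink.invoke("{\"id\": 2}");
        return table.rows.size();
    }
}
```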
[jira] [Updated] (FLINK-3881) Error in Java 8 Documentation Sample
[ https://issues.apache.org/jira/browse/FLINK-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-3881: -- Labels: documentation java8 sample (was: docuentation java8 sample)

> Error in Java 8 Documentation Sample
>
> Key: FLINK-3881
> URL: https://issues.apache.org/jira/browse/FLINK-3881
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.0.3
> Environment: All
> Reporter: Mans Singh
> Assignee: Mans Singh
> Priority: Minor
> Labels: documentation, java8, sample
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples, one of them included below:
> {code:java}
> DataSet<String> input = env.fromElements(1, 2, 3);
>
> // collector type must be declared
> input.flatMap((Integer number, Collector<String> out) -> {
>     for (int i = 0; i < number; i++) {
>         out.collect("a");
>     }
> })
> // returns "a", "a", "aa", "a", "aa", "aaa"
> .print();
> {code}
> I tried the sample and I think there are two issues with it (unless I have missed anything):
> 1. The DataSet should be DataSet<Integer> and not DataSet<String>
> 2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (FLINK-3881) Error in Java 8 Documentation Sample
[ https://issues.apache.org/jira/browse/FLINK-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-3881: -- Comment: was deleted (was: Hey Folks: I've corrected this java8 documentation sample error. Please let me know if there is any comment/suggestion. Thanks Mans)

> Error in Java 8 Documentation Sample
>
> Key: FLINK-3881
> URL: https://issues.apache.org/jira/browse/FLINK-3881
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.0.3
> Environment: All
> Reporter: Mans Singh
> Assignee: Mans Singh
> Priority: Minor
> Labels: docuentation, java8, sample
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples, one of them included below:
> {code:java}
> DataSet<String> input = env.fromElements(1, 2, 3);
>
> // collector type must be declared
> input.flatMap((Integer number, Collector<String> out) -> {
>     for (int i = 0; i < number; i++) {
>         out.collect("a");
>     }
> })
> // returns "a", "a", "aa", "a", "aa", "aaa"
> .print();
> {code}
> I tried the sample and I think there are two issues with it (unless I have missed anything):
> 1. The DataSet should be DataSet<Integer> and not DataSet<String>
> 2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (FLINK-3881) Error in Java 8 Documentation Sample
[ https://issues.apache.org/jira/browse/FLINK-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275350#comment-15275350 ] Mans Singh commented on FLINK-3881: --- Hey Folks: I've corrected this java8 documentation sample error. Please let me know if there is any comment/suggestion. Thanks Mans

> Error in Java 8 Documentation Sample
>
> Key: FLINK-3881
> URL: https://issues.apache.org/jira/browse/FLINK-3881
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.0.3
> Environment: All
> Reporter: Mans Singh
> Assignee: Mans Singh
> Priority: Minor
> Labels: docuentation, java8, sample
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples, one of them included below:
> {code:java}
> DataSet<String> input = env.fromElements(1, 2, 3);
>
> // collector type must be declared
> input.flatMap((Integer number, Collector<String> out) -> {
>     for (int i = 0; i < number; i++) {
>         out.collect("a");
>     }
> })
> // returns "a", "a", "aa", "a", "aa", "aaa"
> .print();
> {code}
> I tried the sample and I think there are two issues with it (unless I have missed anything):
> 1. The DataSet should be DataSet<Integer> and not DataSet<String>
> 2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (FLINK-3881) Error in Java 8 Documentation Sample
[ https://issues.apache.org/jira/browse/FLINK-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-3881: -- Description: The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples, one of them included below:
{code:java}
DataSet<String> input = env.fromElements(1, 2, 3);

// collector type must be declared
input.flatMap((Integer number, Collector<String> out) -> {
    for (int i = 0; i < number; i++) {
        out.collect("a");
    }
})
// returns "a", "a", "aa", "a", "aa", "aaa"
.print();
{code}
I tried the sample and I think there are two issues with it (unless I have missed anything):
1. The DataSet should be DataSet<Integer> and not DataSet<String>
2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.

was: The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples (one of them) included below:
{code:java}
DataSet<String> input = env.fromElements(1, 2, 3);

// collector type must be declared
input.flatMap((Integer number, Collector<String> out) -> {
    for (int i = 0; i < number; i++) {
        out.collect("a");
    }
})
// returns "a", "a", "aa", "a", "aa", "aaa"
.print();
{code}
I tried the sample and I think there are two issues with it (unless I have missed anything):
1. The DataSet should be DataSet<Integer> and not DataSet<String>
2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.

> Error in Java 8 Documentation Sample
>
> Key: FLINK-3881
> URL: https://issues.apache.org/jira/browse/FLINK-3881
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.0.3
> Environment: All
> Reporter: Mans Singh
> Priority: Minor
> Labels: docuentation, java8, sample
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples, one of them included below:
> {code:java}
> DataSet<String> input = env.fromElements(1, 2, 3);
>
> // collector type must be declared
> input.flatMap((Integer number, Collector<String> out) -> {
>     for (int i = 0; i < number; i++) {
>         out.collect("a");
>     }
> })
> // returns "a", "a", "aa", "a", "aa", "aaa"
> .print();
> {code}
> I tried the sample and I think there are two issues with it (unless I have missed anything):
> 1. The DataSet should be DataSet<Integer> and not DataSet<String>
> 2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (FLINK-3881) Error in Java 8 Documentation Sample
[ https://issues.apache.org/jira/browse/FLINK-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mans Singh updated FLINK-3881: -- Description: The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples (one of them) included below:
{code:java}
DataSet<String> input = env.fromElements(1, 2, 3);

// collector type must be declared
input.flatMap((Integer number, Collector<String> out) -> {
    for (int i = 0; i < number; i++) {
        out.collect("a");
    }
})
// returns "a", "a", "aa", "a", "aa", "aaa"
.print();
{code}
I tried the sample and I think there are two issues with it (unless I have missed anything):
1. The DataSet should be DataSet<Integer> and not DataSet<String>
2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.

was: The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples (one of them) included below:

DataSet<String> input = env.fromElements(1, 2, 3);

// collector type must be declared
input.flatMap((Integer number, Collector<String> out) -> {
    for (int i = 0; i < number; i++) {
        out.collect("a");
    }
})
// returns "a", "a", "aa", "a", "aa", "aaa"
.print();

I tried the sample and I think there are two issues with it (unless I have missed anything):
1. The DataSet should be DataSet<Integer> and not DataSet<String>
2. It should have a StringBuffer that appends "a" in the for loop and output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.

> Error in Java 8 Documentation Sample
>
> Key: FLINK-3881
> URL: https://issues.apache.org/jira/browse/FLINK-3881
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.0.3
> Environment: All
> Reporter: Mans Singh
> Priority: Minor
> Labels: docuentation, java8, sample
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples (one of them) included below:
> {code:java}
> DataSet<String> input = env.fromElements(1, 2, 3);
>
> // collector type must be declared
> input.flatMap((Integer number, Collector<String> out) -> {
>     for (int i = 0; i < number; i++) {
>         out.collect("a");
>     }
> })
> // returns "a", "a", "aa", "a", "aa", "aaa"
> .print();
> {code}
> I tried the sample and I think there are two issues with it (unless I have missed anything):
> 1. The DataSet should be DataSet<Integer> and not DataSet<String>
> 2. There should probably be a StringBuffer in the flatMap function that is used to append "a" in the for loop, and the sample should output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (FLINK-3881) Error in Java 8 Documentation Sample
Mans Singh created FLINK-3881: - Summary: Error in Java 8 Documentation Sample Key: FLINK-3881 URL: https://issues.apache.org/jira/browse/FLINK-3881 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.0.3 Environment: All Reporter: Mans Singh Priority: Minor The java8 documentation (https://ci.apache.org/projects/flink/flink-docs-master/apis/java8.html) has samples (one of them) included below:

DataSet<String> input = env.fromElements(1, 2, 3);

// collector type must be declared
input.flatMap((Integer number, Collector<String> out) -> {
    for (int i = 0; i < number; i++) {
        out.collect("a");
    }
})
// returns "a", "a", "aa", "a", "aa", "aaa"
.print();

I tried the sample and I think there are two issues with it (unless I have missed anything):
1. The DataSet should be DataSet<Integer> and not DataSet<String>
2. It should have a StringBuffer that appends "a" in the for loop and output it (out.collect(buffer.toString())) rather than just out.collect("a");. Currently, this produces only "a" each time rather than "a", "a", "aa", "a", "aa", "aaa" as shown in the comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
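The fix the ticket proposes can be rendered as a plain-Java, Flink-free sketch (`StringBuilder` used here as the modern equivalent of `StringBuffer`, and `List.add` standing in for `Collector.collect`): accumulate into the buffer inside the loop and collect the growing string, so an input of 1, 2, 3 yields "a", "a", "aa", "a", "aa", "aaa" exactly as the doc comment promises.

```java
import java.util.ArrayList;
import java.util.List;

public class FlatMapFix {
    // Corrected flatMap body: collect the intermediate buffer contents instead
    // of collecting the constant "a" every iteration.
    public static List<String> flatMap(int number) {
        List<String> out = new ArrayList<>();
        StringBuilder buffer = new StringBuilder();
        for (int i = 0; i < number; i++) {
            buffer.append("a");
            out.add(buffer.toString());
        }
        return out;
    }

    // Apply the function to each input element, as flatMap over a DataSet would.
    public static List<String> run(int... inputs) {
        List<String> results = new ArrayList<>();
        for (int n : inputs) {
            results.addAll(flatMap(n));
        }
        return results;
    }
}
```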