Configuration not propagating unless explicitly initializing FileSystem for Azure File System

2023-12-18 Thread Yuval Itzchakov
nf) But adding the following line makes everything work: FileSystem.initialize(conf, null) Has anyone encountered this issue? I am wondering why explicit initialization of the file system is required for this to work. -- Best Regards, Yuval Itzchakov.

Re: [ANNOUNCE] Apache Flink has won the 2023 SIGMOD Systems Award

2023-07-03 Thread Yuval Itzchakov
Congrats team! On Mon, Jul 3, 2023, 17:28 Jing Ge via user wrote: > Congratulations! > > Best regards, > Jing > > > On Mon, Jul 3, 2023 at 3:21 PM yuxia wrote: > >> Congratulations! >> >> Best regards, >> Yuxia >> >> -- >> *From: *"Pushpa Ramakrishnan" >> *To:

SupportsReadingMetadata flow between table transformations

2023-05-29 Thread Yuval Itzchakov
s tag flow between the different projections and joins the table goes through? If not, what could be alternatives to achieve flowing some column A between all table transformations such that I can provide some "metadata" context that is coming in from the source itself? -- Best Regards, Yuval Itzchakov.

Re: Waiting for a signal on one stream to start processing on another

2023-03-08 Thread Yuval Itzchakov
tional questions. > > [1] > https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/hybridsource/ > > Best, > Mason > > On Wed, Mar 8, 2023 at 11:12 AM Yuval Itzchakov wrote: > >> Hi, >> >> I have a use-case where I have two streams, c

Waiting for a signal on one stream to start processing on another

2023-03-08 Thread Yuval Itzchakov
Hi, I have a use-case where I have two streams, call them A and B. I need to consume stream A up to a certain point, and only then start processing on stream B. What could be a way to go about this?

Re: How to use Postgres UUID types in Flink SQL?

2023-02-22 Thread Yuval Itzchakov
ype and >> created a java.util.UUID instance >> I see a merged-and-reverted ticket about this here: >> https://issues.apache.org/jira/browse/FLINK-19869 >> >> It mentions using BinaryRawValueData for now, but can someone help me >> figure out what that would look like in Flink SQL? >> >> regards Frank >> > -- Best Regards, Yuval Itzchakov.

Re: Flink SQL Dates Between and Parallelism

2022-05-21 Thread Yuval Itzchakov
t able to filter by anything else, > is there anything else we can could potentially look into to help? > > > Thank you > > > Jason Politis > Solutions Architect, Carrera Group > carrera.io > | jpoli...@carrera.io > <http://us.linkedin.com/in/jasonpolitis> > >

Re: Flink SQL Dates Between and Parallelism

2022-05-20 Thread Yuval Itzchakov
Hi Jason, When using interval joins, Flink can't parallelize the execution as the join key (semantically) is event time, thus all events must fall into the same partition for Flink to be able to look up events from the two streams. See the IntervalJoinOperator (
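The interval-join semantics described above can be sketched in Flink SQL (table and column names are illustrative, not taken from the thread; both tables are assumed to declare event-time attributes with watermarks in their DDL):

```sql
-- Join orders with shipments that occur within 4 hours of the order.
-- order_time and ship_time are the event-time attributes of each table.
SELECT o.id, o.order_time, s.ship_time
FROM Orders o
JOIN Shipments s
  ON o.id = s.order_id
 AND s.ship_time BETWEEN o.order_time AND o.order_time + INTERVAL '4' HOUR;
```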

Re: [DISCUSS] Update Scala 2.12.7 to 2.12.15

2022-05-20 Thread Yuval Itzchakov
As a Scala API user I'd prefer a breaking change to get all the benefits of the latest Scala minor versions. On Fri, May 20, 2022, 11:37 Martijn Visser wrote: > Hi everyone, > > I would like to get some opinions from our Scala users, therefore I'm also > looping in the user mailing list. > >

Re: Using Amazon EC2 Spot instances with Flink

2022-03-25 Thread Yuval Itzchakov
My company is running Flink in Kubernetes with spot instances for both JM and TM. Feel free to reach out. On Thu, Mar 24, 2022, 20:33 Ber, Jeremy wrote: > > https://aws.amazon.com/blogs/compute/optimizing-apache-flink-on-amazon-eks-using-amazon-ec2-spot-instances/ > > > > Sharing this link

Re: streaming mode with both finite and infinite input sources

2022-02-25 Thread Yuval Itzchakov
One possible option is to look into the hybrid source released in Flink 1.14 to support your use-case: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/hybridsource/ On Fri, Feb 25, 2022, 09:19 Jin Yi wrote: > so we have a streaming job where the main work

flink-table-api-scala-bridge missing source files

2021-12-25 Thread Yuval Itzchakov
/ScalaDataStreamQueryOperation.java Would love some help with this. -- Best Regards, Yuval Itzchakov.

Re: Waiting on streaming job with StatementSet.execute() when using Web Submission

2021-12-22 Thread Yuval Itzchakov
d successfully with value: 0" seems quite suspicious. If you > search through the code base no log is printing such information. Could you > please check which component is printing this log and determine which > process this exit code belongs to? > > Yuval Itzchakov 于2

Re: Waiting on streaming job with StatementSet.execute() when using Web Submission

2021-12-21 Thread Yuval Itzchakov
g down rest endpoint. 2021-12-22 09:25:28,943 INFO o.a.f.r.b.BlobServer Stopped BLOB server at 0.0.0.0:64213 Process finished with exit code 239 On Wed, Dec 22, 2021 at 8:47 AM Yuval Itzchakov wrote: > I mean it finishes successful and exists with status code 0. Both when > running loca

Re: Waiting on streaming job with StatementSet.execute() when using Web Submission

2021-12-21 Thread Yuval Itzchakov
ate? Which kind of job are you running? Is it a > select job or an insert job? Insert jobs should run continuously once > they're submitted. Could you share your user code if possible? > > Yuval Itzchakov wrote on Wed, Dec 22, 2021, 14:11: > >> Hi Caizhi, >> >> If I don't bloc

Re: Waiting on streaming job with StatementSet.execute() when using Web Submission

2021-12-21 Thread Yuval Itzchakov
ST API [1]. You can tell that > the job successfully finishes by the FINISHED state and that the job fails > by the FAILED state. > > [1] > https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jobs-jobid > > Yuval Itzchakov wrote on Wed, Dec 22, 2021, 02:36: > >&g

Waiting on streaming job with StatementSet.execute() when using Web Submission

2021-12-21 Thread Yuval Itzchakov
) ... 7 more This is because the Web Submission via the REST API is using the WebSubmissionJobClient. How can I wait on my Flink SQL streaming job when submitting through the REST API? -- Best Regards, Yuval Itzchakov

Flinks DispatcherRestEndpoint Thread stuck even though TaskManager failed to execute job

2021-12-05 Thread Yuval Itzchakov
bmit the job as it did not receive any response, but a timeout. This will happen continuously until it exhausts the available threads defined by rest.server.numThreads. How can I make sure the exception thrown from the JM causes the REST API thread to be terminated? -- Best Regards, Yuval Itzchakov.

SqlSnapshot throws NullPointerException when used in conjunction with CTE

2021-11-18 Thread Yuval Itzchakov
ala:159) I have opened an issue on this here: https://issues.apache.org/jira/browse/FLINK-24956 Was wondering if anyone has any workaround in mind. -- Best Regards, Yuval Itzchakov.

Re: Flink SQL build-in function questions.

2021-11-13 Thread Yuval Itzchakov
I recall looking for these once in the SQL standard spec, AFAIR they are not part of it. On Fri, Nov 12, 2021, 11:48 Francesco Guardiani wrote: > Yep I agree with waiting for calcite to support it. As a temporary > workaround you can define your own udfs with that functionality. > > I also

Re: Custom partitioning of keys with keyBy

2021-11-04 Thread Yuval Itzchakov
s > generated by some script to map bucket id evenly to operators under the max > parallelism. > > Sent with a Spark <https://sparkmailapp.com/source?from=signature> > On Nov 3, 2021, 9:47 PM +0800, Yuval Itzchakov , wrote: > > Hi, > I have a use-case where I'd like to partitio

Re: Upgrading to the new KafkaSink + Flink 1.14 from Flink 1.13.2 + FlinkKafkaProducer

2021-11-02 Thread Yuval Itzchakov
ks indeed strange. I do not think when switching to the > unified KafkaSink the old serializer should be invoked at all. > Can you maybe share more information about the job you are using or maybe > share the program so that we can reproduce it? > > Best, > Fabian -- Best Regards, Yuval Itzchakov.

Re: Converting a Table to a DataStream[RowData] instead of DataStream[Row] with `toDataStream`

2021-11-02 Thread Yuval Itzchakov
gt; On Thu, Oct 28, 2021 at 10:15 PM Yuval Itzchakov > wrote: > >> Flink 1.14 >> Scala 2.12.5 >> >> Hi, >> I want to be able to convert a Table into a DataStream[RowData]. I need >> to do this since I already have specific infrastructure in place that

Upgrading to the new KafkaSink + Flink 1.14 from Flink 1.13.2 + FlinkKafkaProducer

2021-11-02 Thread Yuval Itzchakov
) at org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:80) ... 17 more The StateSerializer being created is the old one for the FlinkKafkaProducer. Is there any other way to work around this issue? -- Best Regards, Yuval Itzchakov.

Re: S3 Source support in Flink

2021-10-28 Thread Yuval Itzchakov
2. If we have to implement a custom S3 continuous source, how would one > implement the SplitEnumerator since ListObjects S3 API can become expensive > as the bucket grows? > > Thanks in advance > > Best, > Abhishek > -- Best Regards, Yuval Itzchakov.

Converting a Table to a DataStream[RowData] instead of DataStream[Row] with `toDataStream`

2021-10-28 Thread Yuval Itzchakov
, now with the new toDataStream API, I don't see a direct way to do this. Using toDataStream[RowData](tableSchema.toPhysicalRowDataType) still creates an internal OutputConversionOperator and yields a Row to the Kafka sink I'm trying to use. Would love some help. -- Best Regards, Yuval Itzchakov.

Re: JM / TM Starting With Modified Classpath Causes log4j2 Exception

2021-10-14 Thread Yuval Itzchakov
Yes I am On Thu, Oct 14, 2021, 13:32 Chesnay Schepler wrote: > Are you by chance explicitly setting the -Dlog4j.configurationFile option > (or however it is called?) > > On 14/10/2021 11:59, Yuval Itzchakov wrote: > > Just tried adding the jars to lib/, I still receive the

Re: JM / TM Starting With Modified Classpath Causes log4j2 Exception

2021-10-14 Thread Yuval Itzchakov
Just tried adding the jars to lib/, I still receive the same error message. On Thu, Oct 14, 2021 at 12:53 PM Yuval Itzchakov wrote: > I have an UBER jar that contains all the dependencies I require for the > execution, I find it weird that I need to add an external library to lib to >

Re: JM / TM Starting With Modified Classpath Causes log4j2 Exception

2021-10-14 Thread Yuval Itzchakov
endent of how Flink works; the JSONLayout needs jackson, so > you need to make sure it is on the classpath. > > On 14/10/2021 11:34, Yuval Itzchakov wrote: > > Scala 2.12 > Java 11 > Flink 1.13.2 > Running in Kubernetes > > Hi, > > While trying to use a configuration w

JM / TM Starting With Modified Classpath Causes log4j2 Exception

2021-10-14 Thread Yuval Itzchakov
de jackson-databind. While I could add the jars explicitly to the `lib/` directory in the Dockerfile, this is pretty annoying to maintain as versions evolve. Is there any other workaround for this? or an open issue? -- Best Regards, Yuval Itzchakov.

Flink fault tolerance guarantees

2021-10-13 Thread Yuval Itzchakov
t reached the sink and a crash happens for various reasons. Does Flink snapshot internal buffers periodically with checkpoints? Would messages in flight in internal buffers be restored? -- Best Regards, Yuval Itzchakov.

Re: Built-in functions to manipulate MULTISET type

2021-09-19 Thread Yuval Itzchakov
ble/examples/java/functions/LastDatedValueFunction.java > > On Sat, Sep 18, 2021 at 7:42 AM Yuval Itzchakov wrote: > >> Hi Jing, >> >> I recall there is already an open ticket for built-in aggregate functions >> >> On Sat, Sep 18, 2021, 15:08 JING ZHANG wrote: >

Re: Built-in functions to manipulate MULTISET type

2021-09-18 Thread Yuval Itzchakov
Hi Jing, I recall there is already an open ticket for built-in aggregate functions On Sat, Sep 18, 2021, 15:08 JING ZHANG wrote: > Hi Yuval, > You could open a JIRA to track this if you think some functions should be > added as built-in functions in Flink. > > Best, > JI

Re: Built-in functions to manipulate MULTISET type

2021-09-18 Thread Yuval Itzchakov
The problem with defining a UDF is that you have to create one overload per key type in the MULTISET. It would be very convenient to have functions like Snowflake's ARRAY_AGG. On Sat, Sep 18, 2021, 05:43 JING ZHANG wrote: > Hi Kai, > AFAIK, there is no built-in function to extract the keys in
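For illustration, the Snowflake-style aggregation being asked for would look like this (hypothetical usage: at the time of this thread ARRAY_AGG was not a Flink built-in, which is the point being made; table and column names are invented):

```sql
-- Collect all pages visited per user into a single array column.
SELECT user_id, ARRAY_AGG(page) AS visited_pages
FROM clicks
GROUP BY user_id;
```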

Re: Building a flink connector

2021-09-17 Thread Yuval Itzchakov
Hi Lars, We've built a custom connector for Snowflake (both source and sink). Feel free to reach out in private if you have any questions. On Fri, Sep 17, 2021, 14:33 Lars Skjærven wrote: > We're in need of a Google Bigtable flink connector. Do you have any tips > on how this could be done,

Re: Job crashing with RowSerializer EOF exception

2021-09-10 Thread Yuval Itzchakov
> serializers in the serializer chain is somehow broken. > What data type are you serializing? Does it include some type serializer > by a custom serializer, or Kryo, ... ? > > On Thu, Sep 9, 2021 at 4:35 PM Yuval Itzchakov wrote: > >> Hi, >> >> Flink 1.13.2 >>

Job crashing with RowSerializer EOF exception

2021-09-09 Thread Yuval Itzchakov
to get more info would be great. Thanks. -- Best Regards, Yuval Itzchakov.

Re: Creating a generic ARRAY_AGG aggregate function for Flink SQL

2021-08-24 Thread Yuval Itzchakov
en a JIRA ticket to require a built-in support for >> array_agg, as this function exists in many data warehouses. >> >> Yuval Itzchakov wrote on Mon, Aug 23, 2021, 7:38 PM: >> >>> Hi, >>> >>> I'm trying to implement a generic ARRAY_AGG UDF function (identical to

Creating a generic ARRAY_AGG aggregate function for Flink SQL

2021-08-23 Thread Yuval Itzchakov
ow to tackle this. -- Best Regards, Yuval Itzchakov.

Re: Validating Flink SQL without registering with StreamTableEnvironment

2021-08-18 Thread Yuval Itzchakov
ou? > > > Best > Ingo > > On Wed, Aug 18, 2021 at 12:51 PM Yuval Itzchakov > wrote: > >> Hi, >> >> I have a use-case where I need to validate hundreds of Flink SQL queries. >> Ideally, I'd like to run these validations in parallel. But, given that >&g

Validating Flink SQL without registering with StreamTableEnvironment

2021-08-18 Thread Yuval Itzchakov
don't really care about the overall registration process of sources, transformations and sinks, I just want to make sure the syntax is correct from Flink's perspective. Is there any straightforward way of doing this? -- Best Regards, Yuval Itzchakov.

UNION ALL on two LookupTableSources

2021-08-15 Thread Yuval Itzchakov
io, 0.0 network, 0.0 memory}, id = 170 I do understand that the semantics of unioning two lookups may be a bit complicated, but was wondering if this is planned to be supported in the future? -- Best Regards, Yuval Itzchakov.

Re: Production Grade GitOps Based Kubernetes Setup

2021-08-06 Thread Yuval Itzchakov
Hi Niklas, We are currently using the Lyft operator for Flink in production ( https://github.com/lyft/flinkk8soperator), which is an additional alternative. The project itself is pretty much in a zombie state, but commits happen every now and then. 1. Native Kubernetes could definitely work with

Understanding the semantics of SourceContext.collect

2021-08-04 Thread Yuval Itzchakov
on that fact that it has synchronously been pushed end to end. -- Best Regards, Yuval Itzchakov.

Re: [ANNOUNCE] RocksDB Version Upgrade and Performance

2021-08-04 Thread Yuval Itzchakov
ksdb/issues/5774 > [11] http://david-grs.github.io/tls_performance_overhead_cost_linux/ > [12] https://github.com/ververica/frocksdb/pull/19 > [13] https://github.com/facebook/rocksdb/pull/5441/ > [14] https://github.com/facebook/rocksdb/pull/2283 > > > Best, > Yun Tang

Re: Custom Source with the new Data Source API

2021-08-04 Thread Yuval Itzchakov
nbound source (i.e. Kafka). > > Thanks! > Bin > -- Best Regards, Yuval Itzchakov.

Re: [ANNOUNCE] RocksDB Version Upgrade and Performance

2021-08-04 Thread Yuval Itzchakov
need to trace it back to RocksDB, then one of the > committers can find the relevant patch from RocksDB master and backport it. > That isn't the greatest user experience. > > Because of those disadvantages, we would prefer to do the upgrade to the > newer RocksDB version despite the unfortunate performance regression. > > Best, > Stephan > > > -- Best Regards, Yuval Itzchakov.

Re: Output[StreamRecord[T]] thread safety

2021-08-03 Thread Yuval Itzchakov
to > writing the results out) in a separate thread. Flink's runtime expects the > whole operator chain to run in a single thread. > > Yuval Itzchakov wrote on Tue, Aug 3, 2021, 1:47 PM: > >> Hi, >> >> Flink 1.13.1 >> Scala 2.12.4 >> >> I have an implementation of an A

Output[StreamRecord[T]] thread safety

2021-08-02 Thread Yuval Itzchakov
) { outWriter.setNullAt(0); } else { outWriter.writeString(0, field$57); }outWriter.complete();return out; } } -- Best Regards, Yuval Itzchakov.

Re: Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-30 Thread Yuval Itzchakov
Utils. > > On 30/07/2021 08:48, Yuval Itzchakov wrote: > > It is finding the file though, the problem is that the lib/ might not be > on the classpath when the file is being parsed, thus the YAML file is not > recognized as being parsable. > > Is there a way to diffe

Re: Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-30 Thread Yuval Itzchakov
to the utils classpath? On Thu, Jul 29, 2021, 17:55 Chesnay Schepler wrote: > That could be...could you try configuring "env.java.opts: > -Dlog4j.configurationFile=..."? > > On 29/07/2021 15:18, Yuval Itzchakov wrote: > > Perhaps because I'm passing

Re: Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-29 Thread Yuval Itzchakov
ectory on the > classpath. > What I don't understand is why it would pick up your log4j file. It should > only use the file that is embedded within BashJavaUtils.jar. > > On 29/07/2021 13:11, Yuval Itzchakov wrote: > > Hi Chesnay, > > So I looked into it, and jackson-data

Re: Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-29 Thread Yuval Itzchakov
is that I'm using the "classloader.resolve-order" = "parent-first" flag due to my lib having some deps that otherwise cause runtime collision errors in Kafka. On Wed, Jul 7, 2021 at 12:25 PM Yuval Itzchakov wrote: > Interesting, I don't have bind explicitly on the classpat

Re: [External] naming table stages

2021-07-28 Thread Yuval Itzchakov
Hi Jing, An additional challenge with the current Table API / SQL approach for operator naming is that it makes it very hard to export metrics, i.e. to track watermarks with Prometheus, when operator names are not assignable by the user. On Wed, Jul 28, 2021, 13:11 JING ZHANG wrote: > Hi

Re: Issue with Flink SQL using RocksDB backend

2021-07-26 Thread Yuval Itzchakov
wrote: > Hi Yuval, > I run a similar SQL (without `FIRST` aggregate function), there is nothing > wrong. > `FIRST` is a custom aggregate function? Would you please check if there is > a drawback in `FIRST`? Whether the query could run without `FIRST`? > > Best, > JING ZHAN

Issue with Flink SQL using RocksDB backend

2021-07-26 Thread Yuval Itzchakov
620) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) at java.base/java.lang.Thread.run(Thread.java:829) Would appreciate help in the direction of how to debug this issue, or if anyone has encountered this before. -- Best Regards, Yuval Itzchakov.

Re: Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-07 Thread Yuval Itzchakov
improved classloader separation, you actually need > to put your dependency into `lib/` instead of putting it into your user > jar. But I'm pulling in @Chesnay Schepler who has > much more insights. > > On Sun, Jul 4, 2021 at 9:45 PM Yuval Itzchakov wrote: > >> Hi,

Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-05 Thread Yuval Itzchakov
Hi, I am attempting to upgrade Flink from 1.9 to 1.13.1 I am using a YAML based log4j file. In 1.9, it worked perfectly fine by adding the following dependency to the classpath (I deploy with an uber JAR): "com.fasterxml.jackson.dataformat" % "jackson-dataformat-yaml" % "2.12.3" However, with
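For context, a minimal log4j2 YAML configuration of the kind being loaded might look like the sketch below (appender names and the pattern are illustrative, not from the thread); parsing this format is exactly what requires jackson-dataformat-yaml to be visible to the classloader that initializes log4j2:

```yaml
Configuration:
  status: warn
  appenders:
    Console:
      name: ConsoleAppender
      target: SYSTEM_OUT
      PatternLayout:
        pattern: "%d{HH:mm:ss.SSS} %-5level %logger - %msg%n"
  Loggers:
    Root:
      level: info
      AppenderRef:
        ref: ConsoleAppender
```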

Flink 1.13 fails to load log4j2 yaml configuration file via jackson-dataformat-yaml

2021-07-04 Thread Yuval Itzchakov
ERROR StatusLogger No logging configuration This indicates that for some reason, the jackson dataformat YAML library is not getting properly loaded from my uber JAR at runtime. Has anyone run into this? Any possible workarounds? -- Best Regards, Yuval Itzchakov.

Re: RocksDB MapState debugging key serialization

2021-06-30 Thread Yuval Itzchakov
find out the serialized values that are being used for key > comparison? Can you recommend any possible solutions or debugging > strategies that would help? > > > > Thank you, > > Tom > -- Best Regards, Yuval Itzchakov.

Re: State migration for sql job

2021-06-09 Thread Yuval Itzchakov
As my company is also a heavy user of Flink SQL, the state migration story is very important to us. I too believe that adding new fields should start to accumulate state from the point in time of the change forward. Is anyone actively working on this? Is there any way to get involved? On

Re: Allow setting job name when using StatementSet

2021-06-08 Thread Yuval Itzchakov
want >> to have? >> >> Best wishes, >> Nico >> >> On Mon, Jun 7, 2021 at 10:09 AM Yuval Itzchakov >> wrote: >> >>> Hi, >>> >>> Currently when using StatementSet, the name of the job is autogenerated >>> by the runt

Allow setting job name when using StatementSet

2021-06-07 Thread Yuval Itzchakov
Hi, Currently when using StatementSet, the name of the job is autogenerated by the runtime: [image: image.png] Is there any reason why there shouldn't be an overload that allows the user to explicitly specify the name of the running job? -- Best Regards, Yuval Itzchakov.

Re: Alternatives to TypeConversions fromDataTypeToLegacyInfo / fromLegacyInfoToDataType

2021-06-04 Thread Yuval Itzchakov
t to TypeInformation. > > Regards, > Timo > > > On 04.06.21 16:05, Yuval Itzchakov wrote: > > Hi Timo, > > Thank you for the response. > > > > The tables being created in reality are based on arbitrary SQL code such > > that I don't know

Re: Alternatives to TypeConversions fromDataTypeToLegacyInfo / fromLegacyInfoToDataType

2021-06-04 Thread Yuval Itzchakov
ear future. > But for now we went with the most generic solution that supports > everything that can come out of Table API. > > Regards, > Timo > > On 04.06.21 15:12, Yuval Itzchakov wrote: > > When upgrading to Flink 1.13, I ran into deprecation warnings on > >

Alternatives to TypeConversions fromDataTypeToLegacyInfo / fromLegacyInfoToDataType

2021-06-04 Thread Yuval Itzchakov
that needs to be converted into a DataStream[Row] and in turn I need to apply some stateful transformations on it. In order to do that I need the TypeInformation[Row] produced in order to pass into the various state functions. @Timo Walther I would love your help on this. -- Best Regards, Yuval Itzchakov.

Re: Flink in k8s operators list

2021-05-28 Thread Yuval Itzchakov
https://github.com/lyft/flinkk8soperator On Fri, May 28, 2021, 10:09 Ilya Karpov wrote: > Hi there, > > I’m making a little research about the easiest way to deploy link job to > k8s cluster and manage its lifecycle by *k8s operator*. The list of > solutions is below: > -

Re: Multiple select queries in single job on flink table API

2021-04-16 Thread Yuval Itzchakov
Yes. Instead of calling execute on each table, create a StatementSet using your StreamTableEnvironment (tableEnv.createStatementSet), call addInsert for each query, and finally execute when you want to run the job. On Sat, Apr 17, 2021, 03:20 tbud wrote: > If I want to run two different select queries on
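The same multi-insert pattern is also expressible as a SQL statement set, which bundles both inserts into one job (a sketch; source and sink table names are illustrative):

```sql
EXECUTE STATEMENT SET
BEGIN
  INSERT INTO sink_a SELECT * FROM source_table WHERE kind = 'a';
  INSERT INTO sink_b SELECT * FROM source_table WHERE kind = 'b';
END;
```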

Re: Flink 1.11.4?

2021-04-13 Thread Yuval Itzchakov
Roman, is there an ETA on 1.13? On Mon, Apr 12, 2021, 16:17 Roman Khachatryan wrote: > Hi Maciek, > > There are no specific plans for 1.11.4 yet as far as I know. > The official policy is to support the current and previous minor > release [1]. So 1.12 and 1.13 will be officially supported once

Re: [DISCUSS] Feature freeze date for 1.13

2021-04-06 Thread Yuval Itzchakov
Best, > Guowei > > > On Thu, Apr 1, 2021 at 10:33 PM Yuval Itzchakov wrote: > >> Hi All, >> >> I would really love to merge https://github.com/apache/flink/pull/15307 >> prior to 1.13 release cutoff, it just needs some more tests which I can >> hopefu

Re: [DISCUSS] Feature freeze date for 1.13

2021-04-01 Thread Yuval Itzchakov
Hi All, I would really love to merge https://github.com/apache/flink/pull/15307 prior to 1.13 release cutoff, it just needs some more tests which I can hopefully get to today / tomorrow morning. This is a critical fix as now predicate pushdown won't work for any stream which generates a

Re: LocalWatermarkAssigner causes predicate pushdown to be skipped

2021-03-08 Thread Yuval Itzchakov
ark > assigner or extend the filter push down rule to capture the structure that > the watermark assigner is the parent of the table scan. > > Best, > Shengkai > > Yuval Itzchakov wrote on Mon, Mar 8, 2021, 12:13 AM: > >> Hi Jark, >> >> Even after implementing

Re: LocalWatermarkAssigner causes predicate pushdown to be skipped

2021-03-07 Thread Yuval Itzchakov
er if you > assigned a watermark. > If you implement SupportsWatermarkPushdown, the LogicalWatermarkAssigner > will be pushed > into TableSource, and then you can push Filter into source if source > implement SupportsFilterPushdown. > > Best, > Jark > > On Sat, 6 Mar 2021 at

Re: LocalWatermarkAssigner causes predicate pushdown to be skipped

2021-03-05 Thread Yuval Itzchakov
se rowtime without > implementing `SupportsWatermarkPushDown` in your custom source. > > I will lookp in Shengkai who worked on this topic recently. > > Regards, > Timo > > > On 04.03.21 18:52, Yuval Itzchakov wrote: > > Bumping this up again, would appreciate any hel

Re: Scaling Higher than 10k Nodes

2021-03-04 Thread Yuval Itzchakov
>>> Piotrek >>> >>> On Mon, Mar 1, 2021 at 14:43 Joey Tran >>> wrote: >>> >>>> Hi, I was looking at Apache Beam/Flink for some of our data processing >>>> needs, but when reading about the resource managers >>>> (YARN/mesos/Kubernetes), it seems like they all top out at around 10k >>>> nodes. What are recommended solutions for scaling higher than this? >>>> >>>> Thanks in advance, >>>> Joey >>>> >>> -- Best Regards, Yuval Itzchakov.

Re: Community chat?

2021-02-24 Thread Yuval Itzchakov
rst pick? Or would something async but > easier to interact with also work, like a Discourse forum? > > Thanks for bringing this up! > > Marta > > > > On Mon, Feb 22, 2021 at 10:03 PM Yuval Itzchakov > wrote: > >> A dedicated Slack would be awesome.

Re: Community chat?

2021-02-22 Thread Yuval Itzchakov
A dedicated Slack would be awesome. On Mon, Feb 22, 2021, 22:57 Sebastián Magrí wrote: > Is there any chat from the community? > > I saw the freenode channel but it's pretty dead. > > A lot of the time a more chat alike venue where to discuss stuff > synchronously or just share ideas turns out

Re: Role of Rowtime Field Task?

2021-02-20 Thread Yuval Itzchakov
E(ROWTIME)) > > that has an upstream Kafka source task. What exactly does the rowtime task > do? > > -- > Thank you, > Aeden > -- Best Regards, Yuval Itzchakov.

Re: Generated SinkConversion code ignores incoming StreamRecord timestamp

2021-02-14 Thread Yuval Itzchakov
s.apache.org/jira/browse/FLINK-21013 > On 15/02/2021 08:20, Yuval Itzchakov wrote: > > Hi, > > I have a source that generates events with timestamps. These flow nicely, > until encountering a conversion from Table -> DataStream[Row]: > > def toRowRetractSt

Generated SinkConversion code ignores incoming StreamRecord timestamp

2021-02-14 Thread Yuval Itzchakov
Am I missing anything in the Table -> DataStream[Row] conversion that should make the timestamp follow through? or is this a bug? -- Best Regards, Yuval Itzchakov.

Re: DynamicTableSink's equivalent of UpsertStreamTableSink.setKeyFields

2021-02-03 Thread Yuval Itzchakov
e with just a primary key. > > Regards, > Timo > > > On 03.02.21 14:09, Yuval Itzchakov wrote: > > Hi, > > I'm reworking an existing UpsertStreamTableSink into the new > > DynamicTableSink API. In the previous API, one would get the unique keys > > for upse

DynamicTableSink's equivalent of UpsertStreamTableSink.setKeyFields

2021-02-03 Thread Yuval Itzchakov
to receive the unique keys for upsert queries with the new DynamicTableSink API? -- Best Regards, Yuval Itzchakov.

Re: Max with retract aggregate function does not support type: ''CHAR''.

2021-02-03 Thread Yuval Itzchakov
nks for reaching out to the Flink community Yuval. I am pulling in Timo > and Jark who might be able to answer this question. From what I can tell, > it looks a bit like an oversight because VARCHAR is also supported. > > Cheers, > Till > > On Tue, Feb 2, 2021 at 6:12 PM Yuval Itzchakov

Max with retract aggregate function does not support type: ''CHAR''.

2021-02-02 Thread Yuval Itzchakov
: image.png] It does seem like all primitives are supported. Is there a particular reason why a CHAR would not be supported? Is this an oversight? -- Best Regards, Yuval Itzchakov.

Re: HybridMemorySegment index out of bounds exception

2021-01-28 Thread Yuval Itzchakov
LINK-20986 ? > > Regards, > Timo > > > On 28.01.21 16:30, Yuval Itzchakov wrote: > > Hi Timo, > > > > I tried replacing it with an ordinary ARRAY DataType, which > > doesn't reproduce the issue. > > If I use a RawType(Array[String]), the prob

Re: HybridMemorySegment index out of bounds exception

2021-01-28 Thread Yuval Itzchakov
the problem. > > Have you tried to replace the JSON object with a regular String? If the > exception is gone after this change. I believe it must be the > serialization and not the network stack. > > Regards, > Timo > > > On 28.01.21 10:29, Yuval Itzchakov wrote: > >

Annotating AggregateFunction accumulator type with @DataTypeHint

2021-01-26 Thread Yuval Itzchakov
type? All the examples in the docs refer to Scalar functions. -- Best Regards, Yuval Itzchakov.

ProjectWatermarkAssignerTransposeRule field pruning causes wrong watermark index to be accessed

2021-01-25 Thread Yuval Itzchakov
t_time]) Is this the desired behavior? I was quite surprised by this and the fact that the pruning happens regardless of if the underlying table is a TableSource or if it actually filters out these unused fields from the result query. -- Best Regards, Yuval Itzchakov.

Flink 1.12 Kryo Serialization Error

2021-01-11 Thread Yuval Itzchakov
throw err case Right(value) => value } } Would appreciate any help on how to debug this further. -- Best Regards, Yuval Itzchakov.

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-28 Thread Yuval Itzchakov
aType. DataType is a super type of TypeInformation. > > Registering types (e.g. giving a RAW type backed by io.circe.Json a name > such that it can be used in a DDL statement) is also in the backlog but > requires another FLIP. > > Regards, > Timo > > > On 28.12.20 13:

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-28 Thread Yuval Itzchakov
resentation of a `CREATE TABLE` statement. In this case you would > have the full control over the entire type stack end-to-end. > > Regards, > Timo > > On 28.12.20 10:36, Yuval Itzchakov wrote: > > Timo, an additional question. > > > > I am currently using TypeConversion

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-28 Thread Yuval Itzchakov
). Is there any way I can preserve the underlying type and still register the column somehow with CREATE TABLE? On Mon, Dec 28, 2020 at 11:22 AM Yuval Itzchakov wrote: > Hi Timo, > > Thanks for the explanation. Passing around a byte array is not possible > since I need to know the concret

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-28 Thread Yuval Itzchakov
ataType`. > > Another work around could be that you simply use `BYTES` as the return > type and pass around a byte array instead. > > Maybe you can show us a little end-to-end example, what you are trying > to achieve? > > Regards, > Timo > > > On 28.12.20 07:47, Y

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-27 Thread Yuval Itzchakov
ot convert a plain RAW type > back to TypeInformation. > > Did you try to construct type information by a new > fresh TypeInformationRawType ? > > Yuval Itzchakov wrote on Thu, Dec 24, 2020, 7:24 PM: > >> An expansion to my question: >> >> What I really want is for the

Re: Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-24 Thread Yuval Itzchakov
PM Yuval Itzchakov wrote: > Hi, > > I have a UDF which returns a type of MAP 'ANY')>. When I try to register this type with Flink via the > CREATE TABLE DDL, I encounter an exception: > > - SQL parse failed. Encountered "(" at line 2, column 256. > Was expecting o

Registering RawType with LegacyTypeInfo via Flink CREATE TABLE DDL

2020-12-24 Thread Yuval Itzchakov
t;>" ... "MULTISET" ... "ARRAY" ... "." ... Which looks like the planner doesn't like the round brackets on the LEGACY type. What is the correct way to register the table with this type with Flink? -- Best Regards, Yuval Itzchakov.

NullPointerException while calling TableEnvironment.sqlQuery in Flink 1.12

2020-12-21 Thread Yuval Itzchakov
st Regards, Yuval Itzchakov.

Re: DataStream.connect semantics for Flink SQL / Table API

2020-11-12 Thread Yuval Itzchakov
should work because then there won't > be any buffering by event time which could delay your output. > > Have you tried this and have you seen an actual delay in your output? > > Best, > Aljoscha > > On 12.11.20 12:57, Yuval Itzchakov wrote: > > Hi, > > > >

DataStream.connect semantics for Flink SQL / Table API

2020-11-12 Thread Yuval Itzchakov
the Table into a DataStream, since I want to leverage predicate pushdown for the definition of the result table. Does anything like this currently exist? -- Best Regards, Yuval Itzchakov.

Re: Error parsing annotations in flink-table-planner_blink_2.12-1.11.2

2020-11-05 Thread Yuval Itzchakov
ards, > Timo > > > On 02.11.20 16:00, Aljoscha Krettek wrote: > > But you're not using apiguardian yourself or have it as a dependency > > before this, right? > > > > Best, > > Aljoscha > > > > On 02.11.20 14:59, Yuval Itzchakov wrote: > >
