nt(conf)
But adding the following line makes everything work:
FileSystem.initialize(conf, null)
Has anyone encountered this issue? I am wondering why explicit
initialization of the file system is required for this to work.
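For reference, a minimal sketch of the workaround in context (a hedged illustration only: it assumes `conf` is a Flink `Configuration`; the second argument to `initialize` is the optional `PluginManager`, here omitted with `null`):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;

public class FsInitSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // hypothetical file-system settings (e.g. S3 credentials) would go here

        // Explicitly initialize the FileSystem factories with this
        // configuration; without the call, file systems may be created
        // from an empty/default configuration instead.
        FileSystem.initialize(conf, null);
    }
}
```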
--
Best Regards,
Yuval Itzchakov.
Congrats team!
On Mon, Jul 3, 2023, 17:28 Jing Ge via user wrote:
> Congratulations!
>
> Best regards,
> Jing
>
>
> On Mon, Jul 3, 2023 at 3:21 PM yuxia wrote:
>
>> Congratulations!
>>
>> Best regards,
>> Yuxia
>>
>> --
>> *From: *"Pushpa Ramakrishnan"
>> *To: *"Xin
s tag flow between the
different projections and joins the table goes through? If not, what could
be alternatives to achieve flowing some column A between all table
transformations such that I can provide some "metadata" context that is
coming in from the source itself?
--
Best Regards,
Yuval Itzchakov.
additional questions.
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/hybridsource/
>
> Best,
> Mason
>
> On Wed, Mar 8, 2023 at 11:12 AM Yuval Itzchakov wrote:
>
>> Hi,
>>
>> I have a use-case where I have two strea
Hi,
I have a use-case where I have two streams, call them A and B. I need to
consume stream A up to a certain point, and only then start processing on
stream B.
What could be a way to go about this?
seem to have recognized the UUID type and
>> created a java.util.UUID instance
>> I see a merged-and-reverted ticket about this here:
>> https://issues.apache.org/jira/browse/FLINK-19869
>>
>> It mentions using BinaryRawValueData for now, but can someone help me
>> figure out what that would look like in Flink SQL?
>>
>> regards Frank
>>
>
--
Best Regards,
Yuval Itzchakov.
n't able to filter by anything else,
> is there anything else we could potentially look into to help?
>
>
> Thank you
>
>
> Jason Politis
> Solutions Architect, Carrera Group
> carrera.io
> | jpoli...@carrera.io
> <http://us.linkedin.com/in/jasonpolitis>
Hi Jason,
When using interval joins, Flink can't parallelize the execution, as the
join key (semantically) is event time; thus all events must fall into the
same partition for Flink to be able to look up events from the two streams.
See the IntervalJoinOperator (
https://github.com/apache/flink/blob/
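The constraint can be sketched outside of Flink: with event time as the semantic join key, every candidate pair for a key must be evaluated in one place, as in this hypothetical plain-Java interval join (illustrative only, not Flink's IntervalJoinOperator):

```java
import java.util.ArrayList;
import java.util.List;

// For every left event, emit a match with each right event of the same key
// whose timestamp lies in [leftTs - lower, leftTs + upper]. All candidates
// for a key must live in the same partition, which is why Flink cannot
// parallelize the join beyond the key itself.
public class IntervalJoinSketch {
    public record Event(String key, long ts) {}

    public static List<String> join(List<Event> left, List<Event> right,
                                    long lower, long upper) {
        List<String> out = new ArrayList<>();
        for (Event l : left) {
            for (Event r : right) {
                if (l.key().equals(r.key())
                        && r.ts() >= l.ts() - lower
                        && r.ts() <= l.ts() + upper) {
                    out.add(l.key() + ":" + l.ts() + "/" + r.ts());
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(join(
                List.of(new Event("a", 10)),
                List.of(new Event("a", 12), new Event("a", 40)),
                5, 5)); // only the right event at ts=12 falls in [5, 15]
    }
}
```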
As a Scala API user I'd prefer a breaking change to get all the benefits of
the latest Scala minor versions.
On Fri, May 20, 2022, 11:37 Martijn Visser wrote:
> Hi everyone,
>
> I would like to get some opinions from our Scala users, therefore I'm also
> looping in the user mailing list.
>
> Fli
My company is running Flink in Kubernetes with spot instances for both JM
and TM.
Feel free to reach out.
On Thu, Mar 24, 2022, 20:33 Ber, Jeremy wrote:
>
> https://aws.amazon.com/blogs/compute/optimizing-apache-flink-on-amazon-eks-using-amazon-ec2-spot-instances/
>
>
>
> Sharing this link FWIW
One possible option is to look into the hybrid source released in Flink
1.14 to support your use-case:
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/connectors/datastream/hybridsource/
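A rough sketch of the hybrid-source wiring (hedged: `boundedSourceA`, `unboundedSourceB`, and `env` are hypothetical placeholders; the builder reads the bounded source to completion before switching):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.base.source.hybrid.HybridSource;

// Stream A (bounded, e.g. a FileSource) is consumed to its end,
// then the job switches to stream B (e.g. a KafkaSource).
HybridSource<String> source =
    HybridSource.<String>builder(boundedSourceA)
        .addSource(unboundedSourceB)
        .build();
env.fromSource(source, WatermarkStrategy.noWatermarks(), "hybrid-source");
```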
On Fri, Feb 25, 2022, 09:19 Jin Yi wrote:
> so we have a streaming job where the main work t
tions/ScalaDataStreamQueryOperation.java
Would love some help with this.
--
Best Regards,
Yuval Itzchakov.
nished successfully with value: 0" seems quite suspicious. If you
> search through the code base no log is printing such information. Could you
> please check which component is printing this log and determine which
> process this exit code belongs to?
>
> Yuval Itzchako
d externally..
2021-12-22 09:25:28,923 INFO o.a.f.r.r.RestServerEndpoint Shutting down
rest endpoint.
2021-12-22 09:25:28,943 INFO o.a.f.r.b.BlobServer Stopped BLOB server at
0.0.0.0:64213
Process finished with exit code 239
On Wed, Dec 22, 2021 at 8:47 AM Yuval Itzchakov wrote:
> I mean it fi
ate? Which kind of job are you running? Is it a
> select job or an insert job? Insert jobs should run continuously once
> they're submitted. Could you share your user code if possible?
>
> Yuval Itzchakov wrote on Wed, Dec 22, 2021 at 14:11:
>
>> Hi Caizhi,
>>
>> If I don
ob with REST API [1]. You can tell that
> the job successfully finishes by the FINISHED state and that the job fails
> by the FAILED state.
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jobs-jobid
>
> Yuval Itzchakov wrote on Wed, Dec 22, 2021 at 02:36:
>
va:54)
... 7 more
This is because the Web Submission via the REST API is using
the WebSubmissionJobClient.
How can I wait on my Flink SQL streaming job when submitting through the
REST API?
--
Best Regards,
Yuval Itzchakov
the job as it did not receive any response, but a timeout.
This will happen continuously until it exhausts
the available threads defined by rest.server.numThreads.
How can I make sure the exception thrown from the JM causes the REST
API thread to be terminated?
--
Best Regards,
Yuval Itzchakov.
ala:159)
I have opened an issue on this here:
https://issues.apache.org/jira/browse/FLINK-24956
Was wondering if anyone has any workaround in mind.
--
Best Regards,
Yuval Itzchakov.
I recall looking for these once in the SQL standard spec, AFAIR they are
not part of it.
On Fri, Nov 12, 2021, 11:48 Francesco Guardiani
wrote:
> Yep I agree with waiting for calcite to support it. As a temporary
> workaround you can define your own udfs with that functionality.
>
> I also wonde
, and the KEY_RAND_SALT was
> generated by some script to map bucket id evenly to operators under the max
> parallelism.
>
> Sent with a Spark <https://sparkmailapp.com/source?from=signature>
> On Nov 3, 2021, 9:47 PM +0800, Yuval Itzchakov wrote:
>
> Hi,
> I have a use-c
error looks indeed strange. I do not think when switching to the
> unified KafkaSink the old serializer should be invoked at all.
> Can you maybe share more information about the job you are using or maybe
> share the program so that we can reproduce it?
>
> Best,
> Fabian
--
Best Regards,
Yuval Itzchakov.
> On Thu, Oct 28, 2021 at 10:15 PM Yuval Itzchakov
> wrote:
>
>> Flink 1.14
>> Scala 2.12.5
>>
>> Hi,
>> I want to be able to convert a Table into a DataStream[RowData]. I need
>> to do this since I already have specific infrastructure in place that
8)
at
org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:80)
... 17 more
The StateSerializer being created is the old one for the
FlinkKafkaProducer. Is there any other way to work around this issue?
--
Best Regards,
Yuval Itzchakov.
2. If we have to implement a custom S3 continuous source, how would one
> implement the SplitEnumerator since ListObjects S3 API can become expensive
> as the bucket grows?
>
> Thanks in advance
>
> Best,
> Abhishek
>
--
Best Regards,
Yuval Itzchakov.
, now with the new toDataStream API, I don't see a direct way to do
this. Using toDataStream[RowData](tableSchema.toPhysicalRowDataType) still
creates an internal OutputConversionOperator and yields a Row to the Kafka
sink I'm trying to use.
Would love some help.
--
Best Regards,
Yuval Itzchakov.
Yes I am
On Thu, Oct 14, 2021, 13:32 Chesnay Schepler wrote:
> Are you by chance explicitly setting the -Dlog4j.configurationFile option
> (or however it is called?)
>
> On 14/10/2021 11:59, Yuval Itzchakov wrote:
>
> Just tried adding the jars to lib/, I still receive the
Just tried adding the jars to lib/, I still receive the same error message.
On Thu, Oct 14, 2021 at 12:53 PM Yuval Itzchakov wrote:
> I have an UBER jar that contains all the dependencies I require for the
> execution, I find it weird that I need to add an external library to lib to
>
ent of how Flink works; the JSONLayout needs jackson, so
> you need to make sure it is on the classpath.
>
> On 14/10/2021 11:34, Yuval Itzchakov wrote:
>
> Scala 2.12
> Java 11
> Flink 1.13.2
> Running in Kubernetes
>
> Hi,
>
> While trying to use a configuration w
include jackson-databind.
While I could add the jars explicitly to the `lib/` directory in the
Dockerfile, this is pretty annoying to maintain as versions evolve. Is
there any other workaround for this? or an open issue?
--
Best Regards,
Yuval Itzchakov.
ached the sink and a crash happens for
various reasons. Does Flink snapshot internal buffers periodically with
checkpoints? Would messages in flight in internal buffers be restored?
--
Best Regards,
Yuval Itzchakov.
e/flink/table/examples/java/functions/LastDatedValueFunction.java
>
> On Sat, Sep 18, 2021 at 7:42 AM Yuval Itzchakov wrote:
>
>> Hi Jing,
>>
>> I recall there is already an open ticket for built-in aggregate functions
>>
>> On Sat, Sep 18, 2021, 15:08 JING ZHANG
Hi Jing,
I recall there is already an open ticket for built-in aggregate functions
On Sat, Sep 18, 2021, 15:08 JING ZHANG wrote:
> Hi Yuval,
> You could open a JIRA to track this if you think some functions should be
> added as built-in functions in Flink.
>
> Best,
> JI
The problem with defining a UDF is that you have to create one overload per
key type in the MULTISET. It would be very convenient to have functions
like Snowflake's ARRAY_AGG.
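To illustrate the duplication: a hypothetical ARRAY_AGG-style aggregate written against the Table API's AggregateFunction has to be repeated, near-identically, for each element type, roughly like:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.table.functions.AggregateFunction;

// Hypothetical STRING-only variant; a BIGINT or DOUBLE column would need
// another near-identical class, which is the overhead described above.
public class StringArrayAgg extends AggregateFunction<String[], List<String>> {
    @Override
    public List<String> createAccumulator() {
        return new ArrayList<>();
    }

    // Called once per input row; collects the value into the accumulator.
    public void accumulate(List<String> acc, String value) {
        acc.add(value);
    }

    @Override
    public String[] getValue(List<String> acc) {
        return acc.toArray(new String[0]);
    }
}
```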
On Sat, Sep 18, 2021, 05:43 JING ZHANG wrote:
> Hi Kai,
> AFAIK, there is no built-in function to extract the keys in MUL
Hi Lars,
We've built a custom connector for Snowflake (both source and sink).
Feel free to reach out in private if you have any questions.
On Fri, Sep 17, 2021, 14:33 Lars Skjærven wrote:
> We're in need of a Google Bigtable flink connector. Do you have any tips
> on how this could be done, e.
e
> serializers in the serializer chain is somehow broken.
> What data type are you serializing? Does it include some type serialized
> by a custom serializer, or Kryo, ...?
>
> On Thu, Sep 9, 2021 at 4:35 PM Yuval Itzchakov wrote:
>
>> Hi,
>>
>> Flink 1.13
his or try to get more info
would be great.
Thanks.
--
Best Regards,
Yuval Itzchakov.
>>
>> You can also open a JIRA ticket to require a built-in support for
>> array_agg, as this function exists in many data warehouses.
>>
>> Yuval Itzchakov wrote on Mon, Aug 23, 2021 at 7:38 PM:
>>
>>> Hi,
>>>
>>> I'm trying to implement a generic AR
on how to tackle this.
--
Best Regards,
Yuval Itzchakov.
> this works for you?
>
>
> Best
> Ingo
>
> On Wed, Aug 18, 2021 at 12:51 PM Yuval Itzchakov
> wrote:
>
>> Hi,
>>
>> I have a use-case where I need to validate hundreds of Flink SQL queries.
>> Ideally, I'd like to run these validations in paralle
Ideally, I don't really care about the overall registration process of
sources, transformations, and sinks; I just want to make sure the syntax is
correct from Flink's perspective.
Is there any straightforward way of doing this?
--
Best Regards,
Yuval Itzchakov.
32E10 io, 0.0 network, 0.0 memory}, id = 170
I do understand that the semantics of unioning two lookups may be a bit
complicated, but I was wondering whether this is planned to be supported in
the future.
--
Best Regards,
Yuval Itzchakov.
Hi Niklas,
We are currently using the Lyft operator for Flink in production (
https://github.com/lyft/flinkk8soperator), which is an additional alternative.
The project itself is pretty much in a zombie state, but commits happen every
now and then.
1. Native Kubernetes could definitely work with GitOp
tate and can rely on that fact that it has synchronously been pushed
end to end.
--
Best Regards,
Yuval Itzchakov.
//github.com/facebook/rocksdb/issues/5774
> [11] http://david-grs.github.io/tls_performance_overhead_cost_linux/
> [12] https://github.com/ververica/frocksdb/pull/19
> [13] https://github.com/facebook/rocksdb/pull/5441/
> [14] https://github.com/facebook/rocksdb/pull/2283
>
>
>
sed unbound source (i.e. Kafka).
>
> Thanks!
> Bin
>
--
Best Regards,
Yuval Itzchakov.
d need to trace it back to RocksDB, then one of the
> committers can find the relevant patch from RocksDB master and backport it.
> That isn't the greatest user experience.
>
> Because of those disadvantages, we would prefer to do the upgrade to the
> newer RocksDB version despite the unfortunate performance regression.
>
> Best,
> Stephan
>
>
>
--
Best Regards,
Yuval Itzchakov.
to
> writing the results out) in a separate thread. Flink's runtime expects the
> whole operator chain to run in a single thread.
>
> Yuval Itzchakov wrote on Tue, Aug 3, 2021 at 1:47 PM:
>
>> Hi,
>>
>> Flink 1.13.1
>> Scala 2.12.4
>>
>> I have an implementati
Data)
in1.getString(0));
}
if (isNull$57) {
  outWriter.setNullAt(0);
} else {
  outWriter.writeString(0, field$57);
}
outWriter.complete();
return out;
}
}
--
Best Regards,
Yuval Itzchakov.
e BashJavaUtils.
>
> On 30/07/2021 08:48, Yuval Itzchakov wrote:
>
> It is finding the file though, the problem is that the lib/ might not be
> on the classpath when the file is being parsed, thus the YAML file is not
> recognized as being parsable.
>
> Is there a wa
the utils classpath?
On Thu, Jul 29, 2021, 17:55 Chesnay Schepler wrote:
> That could be...could you try configuring "env.java.opts:
> -Dlog4j.configurationFile=..."?
>
> On 29/07/2021 15:18, Yuval Itzchakov wrote:
>
> Perhaps because I'm passing
tory on the
> classpath.
> What I don't understand is why it would pick up your log4j file. It should
> only use the file that is embedded within BashJavaUtils.jar.
>
> On 29/07/2021 13:11, Yuval Itzchakov wrote:
>
> Hi Chesnay,
>
> So I looked into it, and jackson-d
I'm using the "classloader.resolve-order" =
"parent-first" flag due to my lib having some deps that otherwise cause
runtime collision errors in Kafka.
On Wed, Jul 7, 2021 at 12:25 PM Yuval Itzchakov wrote:
> Interesting, I don't have bind explicitly on th
Hi Jing,
An additional challenge with the current Table API / SQL approach to
operator naming is that it makes it very hard to export metrics, e.g. to
track watermarks with Prometheus, when operator names are not assignable by
the user.
On Wed, Jul 28, 2021, 13:11 JING ZHANG wrote:
> Hi Clemen
JING ZHANG wrote:
> Hi Yuval,
> I run a similar SQL (without `FIRST` aggregate function), there is nothing
> wrong.
> `FIRST` is a custom aggregate function? Would you please check if there is
> a drawback in `FIRST`? Whether the query could run without `FIRST`?
>
> Best,
>
sk.invoke(StreamTask.java:620)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
at java.base/java.lang.Thread.run(Thread.java:829)
Would appreciate help in the direction of how to debug this issue, or if
anyone has encountered this before.
--
Best Regards,
Yuval Itzchakov.
that with improved classloader separation, you actually need
> to put your dependency into `lib/` instead of putting it into your user
> jar. But I'm pulling in @Chesnay Schepler who has
> much more insights.
>
> On Sun, Jul 4, 2021 at 9:45 PM Yuval Itzchakov wrote:
>
&
Hi,
I am attempting to upgrade Flink from 1.9 to 1.13.1
I am using a YAML based log4j file. In 1.9, it worked perfectly fine by
adding the following dependency to the classpath (I deploy with an uber
JAR):
"com.fasterxml.jackson.dataformat" % "jackson-dataformat-yaml" % "2.12.3"
However, with Fl
ERROR StatusLogger No logging configuration
This indicates that for some reason, the jackson dataformat YAML library is
not getting properly loaded from my uber JAR at runtime.
Has anyone run into this? Any possible workarounds?
--
Best Regards,
Yuval Itzchakov.
an I find out the serialized values that are being used for key
> comparison? Can you recommend any possible solutions or debugging
> strategies that would help?
>
>
>
> Thank you,
>
> Tom
>
--
Best Regards,
Yuval Itzchakov.
As my company is also a heavy user of Flink SQL, the state migration story
is very important to us.
I as well believe that adding new fields should start to accumulate state
from the point in time of the change forward.
Is anyone actively working on this? Is there any way to get involved?
On Tue,
e you want
>> to have?
>>
>> Best wishes,
>> Nico
>>
>> On Mon, Jun 7, 2021 at 10:09 AM Yuval Itzchakov
>> wrote:
>>
>>> Hi,
>>>
>>> Currently when using StatementSet, the name of the job is autogenerated
>>> by the
Hi,
Currently when using StatementSet, the name of the job is autogenerated by
the runtime:
[image: image.png]
Is there any reason why there shouldn't be an overload that allows the user
to explicitly specify the name of the running job?
--
Best Regards,
Yuval Itzchakov.
t to TypeInformation.
>
> Regards,
> Timo
>
>
> On 04.06.21 16:05, Yuval Itzchakov wrote:
> > Hi Timo,
> > Thank you for the response.
> >
> > The tables being created in reality are based on arbitrary SQL code such
> > that I don't know what th
the near future.
> But for now we went with the most generic solution that supports
> everything that can come out of Table API.
>
> Regards,
> Timo
>
> On 04.06.21 15:12, Yuval Itzchakov wrote:
> > When upgrading to Flink 1.13, I ran into deprecation warnings on
> >
needs to be converted into a
DataStream[Row] and in turn I need to apply some stateful transformations
on it. In order to do that I need the TypeInformation[Row] produced in
order to pass into the various state functions.
@Timo Walther I would love your help on this.
--
Best Regards,
Yuval Itzchakov.
https://github.com/lyft/flinkk8soperator
On Fri, May 28, 2021, 10:09 Ilya Karpov wrote:
> Hi there,
>
> I'm doing a little research on the easiest way to deploy a Flink job to a
> k8s cluster and manage its lifecycle via a *k8s operator*. The list of
> solutions is below:
> - https://github.com/fint
Yes. Instead of calling execute on each table, create a StatementSet using
your StreamTableEnvironment (tableEnv.createStatementSet), add statements via
addInsert, and finally call .execute when you want to run the job.
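Roughly, assuming hypothetical table and sink names:

```java
import org.apache.flink.table.api.StatementSet;

// Both INSERTs are collected and submitted together as a single job.
StatementSet set = tableEnv.createStatementSet();
set.addInsertSql("INSERT INTO sink_a SELECT * FROM source_t WHERE kind = 'a'");
set.addInsertSql("INSERT INTO sink_b SELECT * FROM source_t WHERE kind = 'b'");
set.execute();
```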
On Sat, Apr 17, 2021, 03:20 tbud wrote:
> If I want to run two different select queries on
Roman, is there an ETA on 1.13?
On Mon, Apr 12, 2021, 16:17 Roman Khachatryan wrote:
> Hi Maciek,
>
> There are no specific plans for 1.11.4 yet as far as I know.
> The official policy is to support the current and previous minor
> release [1]. So 1.12 and 1.13 will be officially supported once
> Best,
> Guowei
>
>
> On Thu, Apr 1, 2021 at 10:33 PM Yuval Itzchakov wrote:
>
>> Hi All,
>>
>> I would really love to merge https://github.com/apache/flink/pull/15307
>> prior to 1.13 release cutoff, it just needs some more tests which I can
>> h
Hi All,
I would really love to merge https://github.com/apache/flink/pull/15307
prior to 1.13 release cutoff, it just needs some more tests which I can
hopefully get to today / tomorrow morning.
This is a critical fix as now predicate pushdown won't work for any stream
which generates a watermark
mark
> assigner, or extend the filter push-down rule to capture the structure in
> which the watermark assigner is the parent of the table scan.
>
> Best,
> Shengkai
>
> Yuval Itzchakov wrote on Mon, Mar 8, 2021 at 12:13 AM:
>
>> Hi Jark,
>>
>> Even after implementing bo
markAssigner if you
> assigned a watermark.
> If you implement SupportsWatermarkPushdown, the LogicalWatermarkAssigner
> will be pushed
> into TableSource, and then you can push Filter into source if source
> implement SupportsFilterPushdown.
>
> Best,
> Jark
>
> On Sat, 6 M
till use rowtime without
> implementing `SupportsWatermarkPushDown` in your custom source.
>
> I will loop in Shengkai, who worked on this topic recently.
>
> Regards,
> Timo
>
>
> On 04.03.21 18:52, Yuval Itzchakov wrote:
> > Bumping this up again, would apprecia
>>> Best,
>>> Piotrek
>>>
>>> pon., 1 mar 2021 o 14:43 Joey Tran
>>> napisał(a):
>>>
>>>> Hi, I was looking at Apache Beam/Flink for some of our data processing
>>>> needs, but when reading about the resource managers
>>>> (YARN/mesos/Kubernetes), it seems like they all top out at around 10k
>>>> nodes. What are recommended solutions for scaling higher than this?
>>>>
>>>> Thanks in advance,
>>>> Joey
>>>>
>>>
--
Best Regards,
Yuval Itzchakov.
uld Slack be your first pick? Or would something async but
> easier to interact with also work, like a Discourse forum?
>
> Thanks for bringing this up!
>
> Marta
>
>
>
> On Mon, Feb 22, 2021 at 10:03 PM Yuval Itzchakov
> wrote:
>
>> A dedicated Slack would be aweso
A dedicated Slack would be awesome.
On Mon, Feb 22, 2021, 22:57 Sebastián Magrí wrote:
> Is there any chat from the community?
>
> I saw the freenode channel but it's pretty dead.
>
> A lot of the time a more chat alike venue where to discuss stuff
> synchronously or just share ideas turns out v
>
> that has an upstream Kafka source task. What exactly does the rowtime task
> do?
>
> --
> Thank you,
> Aeden
>
--
Best Regards,
Yuval Itzchakov.
s.apache.org/jira/browse/FLINK-21013
> On 15/02/2021 08:20, Yuval Itzchakov wrote:
>
> Hi,
>
> I have a source that generates events with timestamps. These flow nicely,
> until encountering a conversion from Table -> DataStream[Row]:
>
> def toRowRetractSt
Am I missing anything in the Table -> DataStream[Row] conversion that
should make the timestamp follow through? or is this a bug?
--
Best Regards,
Yuval Itzchakov.
e with just a primary key.
>
> Regards,
> Timo
>
>
> On 03.02.21 14:09, Yuval Itzchakov wrote:
> > Hi,
> > I'm reworking an existing UpsertStreamTableSink into the new
> > DynamicTableSink API. In the previous API, one would get the unique keys
> > for
ay to receive the unique keys for upsert queries
with the new DynamicTableSink API?
--
Best Regards,
Yuval Itzchakov.
:
> Thanks for reaching out to the Flink community Yuval. I am pulling in Timo
> and Jark who might be able to answer this question. From what I can tell,
> it looks a bit like an oversight because VARCHAR is also supported.
>
> Cheers,
> Till
>
> On Tue, Feb 2, 2021 at 6:12 PM
ting the max function:
[image: image.png]
It does seem like all primitives are supported. Is there a particular
reason why a CHAR would not be supported? Is this an oversight?
--
Best Regards,
Yuval Itzchakov.
he.org/jira/browse/FLINK-20986 ?
>
> Regards,
> Timo
>
>
> On 28.01.21 16:30, Yuval Itzchakov wrote:
> > Hi Timo,
> >
> > I tried replacing it with an ordinary ARRAY DataType, which
> > doesn't reproduce the issue.
> > If I use a RawType
nail down the problem.
>
> Have you tried to replace the JSON object with a regular String? If the
> exception is gone after this change. I believe it must be the
> serialization and not the network stack.
>
> Regards,
> Timo
>
>
> On 28.01.21 10:29, Yuval Itzchakov wrote:
type? All
the examples in the docs refer to Scalar functions.
--
Best Regards,
Yuval Itzchakov.
])
Is this the desired behavior? I was quite surprised by this and the fact
that the pruning happens regardless of whether the underlying table is a
TableSource or whether it actually filters out these unused fields from the
result query.
--
Best Regards,
Yuval Itzchakov.
(err)=> throw err
case Right(value) => value
}
}
Would appreciate any help on how to debug this further.
--
Best Regards,
Yuval Itzchakov.
. DataType is a super type of TypeInformation.
>
> Registering types (e.g. giving a RAW type backed by io.circe.Json a name
> such that it can be used in a DDL statement) is also in the backlog but
> requires another FLIP.
>
> Regards,
> Timo
>
>
> On 28.12.20 13:04, Y
d
> representation of a `CREATE TABLE` statement. In this case you would
> have the full control over the entire type stack end-to-end.
>
> Regards,
> Timo
>
> On 28.12.20 10:36, Yuval Itzchakov wrote:
> > Timo, an additional question.
> >
> > I am currently using Type
.., LEGACY). Is there any way I can preserve
the underlying type and still register the column somehow with CREATE TABLE?
On Mon, Dec 28, 2020 at 11:22 AM Yuval Itzchakov wrote:
> Hi Timo,
>
> Thanks for the explanation. Passing around a byte array is not possible
> since I need to know t
ersions.fromLegacyInfoToDataType`.
>
> Another work around could be that you simply use `BYTES` as the return
> type and pass around a byte array instead.
>
> Maybe you can show us a little end-to-end example, what you are trying
> to achieve?
>
> Regards,
> Timo
>
>
LegacyInfo cannot convert a plain RAW type
> back to TypeInformation.
>
> Did you try to construct type information by a new
> fresh TypeInformationRawType ?
>
> Yuval Itzchakov wrote on Thu, Dec 24, 2020 at 7:24 PM:
>
>> An expansion to my question:
>>
>> What I really want
PM Yuval Itzchakov wrote:
> Hi,
>
> I have a UDF which returns a type of MAP 'ANY')>. When I try to register this type with Flink via the
> CREATE TABLE DDL, I encounter an exception:
>
> - SQL parse failed. Encountered "(" at line 2, column 256.
> Was e
">" ...
"MULTISET" ...
"ARRAY" ...
"." ...
It looks like the parser doesn't accept the round brackets on the LEGACY
type. What is the correct way to register a table with this type in Flink?
--
Best Regards,
Yuval Itzchakov.
ml)
--
Best Regards,
Yuval Itzchakov.
/UNION ALL should work because then there won't
> be any buffering by event time which could delay your output.
>
> Have you tried this and have you seen an actual delay in your output?
>
> Best,
> Aljoscha
>
> On 12.11.20 12:57, Yuval Itzchakov wrote:
> > Hi,
>
I don't want to turn the Table into a DataStream, since I want to leverage
predicate pushdown for the definition of the result table. Does anything
like this currently exist?
--
Best Regards,
Yuval Itzchakov.
.
>
> Regards,
> Timo
>
>
> On 02.11.20 16:00, Aljoscha Krettek wrote:
> > But you're not using apiguardian yourself or have it as a dependency
> > before this, right?
> >
> > Best,
> > Aljoscha
> >
> > On 02.11.20 14:59, Yuval Itzch