k forwards to your feedback. I'd love to hear the feedback from
> > the community to take the next steps.
> >
> > [1]:
> >
> https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParser.java#L348
> > [2]: https://issues.apache.org/jira/browse/FLINK-26681
> > [3]: https://issues.apache.org/jira/browse/FLINK-31413
> > [4]: https://issues.apache.org/jira/browse/FLINK-30064
> >
> >
> >
> > Best regards,
> > Yuxia
> >
>
>
> --
>
> Best,
> Benchao Li
>
--
Best regards!
Rui Li
> <https://blog.csdn.net/weibokong789/article/details/106427481> and got
> the same result (no real idea what I'm doing here, just trying some things.)
>
> Is there something more I have to do to use HiveCatalog with a kerberized
> Hive Metastore? Should Flink support this out of the box?
>
> Thanks!
> - Andrew Otto
> SRE, Wikimedia Foundation
>
>
>
--
Best regards!
Rui Li
>> >> distributed, high-performing, always-available, and accurate data
>> streaming
>> >> applications.
>> >>
>> >> The release is available for download at:
>> >> https://flink.apache.org/downloads.html
>> >>
>> >> Please check out the release blog post for an overview of the
>> >> improvements for this bugfix release:
>> >> https://flink.apache.org/news/2021/05/03/release-1.13.0.html
>> >>
>> >> The full release notes are available in Jira:
>> >>
>> >>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349287
>> >>
>> >> We would like to thank all contributors of the Apache Flink community
>> who
>> >> made this release possible!
>> >>
>> >> Regards,
>> >> Guowei & Dawid
>> >>
>> >>
>> >
>> >
>>
>
--
Best regards!
Rui Li
>> flink_gaia.ST_GeomFromText already exists in Catalog flink-hive.
>>
>> at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalogFunction(TableEnvironmentImpl.java:1459)
>>
>> at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironme
812)
>
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
>
> at
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
>
> at
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.
>>
>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>>
>> at
>> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>
>> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
>>
>> Caused by: org.apache.flink.table.api.ValidationException: Function
>> flink_gaia.ST_GeomFromText already exists in Catalog flink-hive.
>>
>> at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalogFunction(TableEnvironmentImpl.java:1459)
>>
>> at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:1009)
>>
>> at
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:666)
>>
>> at com.skt.chiron.FlinkApp.main(FlinkApp.java:58)
>>
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:498)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:349)
>>
>> ... 11 more
>>
>> (snip)
>>
>>
>>
>> I hope to find out why the functions are missing. The Flink (v1.12.2) job
>> cluster is running on a Kubernetes cluster via the Flink operator, and the
>> standalone metastore is running only for the Flink cluster, without Hive
>> deployments.
>>
>>
>> Thanks,
>>
>> Youngwoo
>>
>> 1. https://github.com/Esri/spatial-framework-for-hadoop
>>
>
--
Best regards!
Rui Li
> I am able to find the jar from Maven central. See updates in the
> StackOverflow post.
>
> Thank you!
>
> Best,
> Yik San
>
> On Tue, Apr 6, 2021 at 4:05 PM Tzu-Li (Gordon) Tai
> wrote:
>
>> Hi,
>>
>> I'm pulling in Rui Li (cc'ed)
id submit udf's resources when we submit a
> > job. Is it possible?
> >
> >
> >
> > --
> > Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
> >
>
>
--
Cheers,
Rui Li
>>>
>>> Please join me in congratulating Dian Fu for becoming a Flink PMC Member!
>>>
>>> Best,
>>> Jincheng(on behalf of the Flink PMC)
>>>
>>
--
Best regards!
Rui Li
> > > >>
>> > > > >> Apache Flink® is an open-source stream processing framework for
>> > > distributed, high-performing, always-available, and accurate data
>> > streaming
>> > > applications.
>> > > > >>
>> > > > >> The release is available for download at:
>> > > > >> https://flink.apache.org/downloads.html
>> > > > >>
>> > > > >> Please check out the release blog post for an overview of the
>> > > improvements for this bugfix release:
>> > > > >> https://flink.apache.org/news/2020/07/21/release-1.11.1.html
>> > > > >>
>> > > > >> The full release notes are available in Jira:
>> > > > >>
>> > >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348323
>> > > > >>
>> > > > >> We would like to thank all contributors of the Apache Flink
>> > community
>> > > who made this release possible!
>> > > > >>
>> > > > >> Regards,
>> > > > >> Dian
>> > > > >
>> > > >
>> > >
>> >
>>
>>
>> --
>>
>> Konstantin Knauf
>>
>> https://twitter.com/snntrable
>>
>> https://github.com/knaufk
>>
>>
>>
>
> --
> Best, Jingsong Lee
>
--
Best regards!
Rui Li
orc table has no complex (list, map, row) types, you can try setting
> `table.exec.hive.fallback-mapred-writer` to false in TableConfig. Then the
> Hive sink will use the native ORC writer, which is a workaround.
>
> About this error, I think this is a bug in Hive 1.1 ORC. I will try to
> re-produce
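A minimal sketch of the suggested setting (assuming an existing `TableEnvironment` named `tEnv`; the option key is the one named in the message above):

```java
// Sketch: disable the fall-back to Hive's mapred writer so the Hive sink
// uses Flink's native ORC writer instead. Assumes a TableEnvironment tEnv.
tEnv.getConfig().getConfiguration()
        .setBoolean("table.exec.hive.fallback-mapred-writer", false);
```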
reaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
> at
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
> at
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
> at
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:201)
>
>
> Best,
> Paul Lam
>
>
--
Best regards!
Rui Li
We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
--
Best regards!
Rui Li
AFAIK, yes, you can write streams.
>
> I'm pulling in Jingsong Li and Rui Li as they might know better.
>
> Regards,
> Roman
>
>
> On Mon, May 11, 2020 at 10:21 PM Jaswin Shah
> wrote:
>
>> If I go with table apis, can I write the streams to hive, or is it only
lanner also supports this in both streaming mode
> and batch mode.
> Rlike => Blink planner has a REGEXP [1] built-in function which I think is
> similar to Hive's Rlike.
> LATERAL VIEW => This is called a UDTF in Flink; see how to use a UDTF in docs
> [3], "Join with Table Function".
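The mapping above can be sketched as follows (table and function names `logs` and `SplitWords` are invented for illustration; a `TableEnvironment` named `tEnv` is assumed):

```java
// Hive RLIKE -> Blink planner's REGEXP built-in function:
Table matches = tEnv.sqlQuery(
        "SELECT id FROM logs WHERE REGEXP(msg, 'error|timeout')");

// Hive LATERAL VIEW -> join with a table function (UDTF):
Table exploded = tEnv.sqlQuery(
        "SELECT l.id, t.word "
      + "FROM logs AS l, LATERAL TABLE(SplitWords(l.msg)) AS t(word)");
```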
very active in both the dev and user mailing lists, helped discuss designs and
> answer users' questions, and also helped to verify various releases.
>
> Congratulations Jingsong!
>
> Best, Kurt
> (on behalf of the Flink PMC)
>
>
>
--
Best regards!
Rui Li
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/types.html#timestamp
> ,
> the default Java bridging type for timestamp is java.time.LocalDateTime. Is
> there a setting that can change it to use
> java.sql.Timestamp instead?
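A sketch of how a timestamp type can be bridged to `java.sql.Timestamp` through the Table API's `DataTypes` (the variable name is illustrative; Flink must be on the classpath):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// Sketch: declare TIMESTAMP(3) with java.sql.Timestamp as the conversion
// class, instead of the default java.time.LocalDateTime.
DataType ts = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
```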
>
> Thanks,
> Fanbin
>
--
Best regards!
Rui Li
the release.
>
> Please join me in congratulating Dian for becoming a Flink committer!
>
> Best,
> Jincheng(on behalf of the Flink PMC)
>
--
Best regards!
Rui Li
>>>>>>>> >> to the Blink planner manually every time they start the SQL CLI. And
>>>>>>>> it's
>>>>>>>> >> surprising to see an unsupported exception if they try out the new
>>>>>>>> >> features without switching the
>>>>>>>> planner.
>>>>>>>> >>
>>>>>>>> >> The SQL CLI is a very important entrypoint for trying out new
>>>>>>>> features and
>>>>>>>> >> prototyping for users.
>>>>>>>> >> In order to give the new planner more exposure, I would like to
>>>>>>>> suggest setting
>>>>>>>> >> the default planner
>>>>>>>> >> for the SQL Client to the Blink planner before the 1.10 release.
>>>>>>>> >>
>>>>>>>> >> The approach is just changing the default SQL CLI yaml
>>>>>>>> configuration[5]. In
>>>>>>>> >> this way, the existing
>>>>>>>> >> environment is still compatible and unaffected.
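As a sketch, the proposed change in `conf/sql-client-defaults.yaml` [5] might look like this (key names assumed from that file):

```yaml
execution:
  # proposed default: use the Blink planner instead of the old planner
  planner: blink
```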
>>>>>>>> >>
>>>>>>>> >> Changing the default planner for the whole Table API & SQL is
>>>>>>>> another topic
>>>>>>>> >> and is out of scope of this discussion.
>>>>>>>> >>
>>>>>>>> >> What do you think?
>>>>>>>> >>
>>>>>>>> >> Best,
>>>>>>>> >> Jark
>>>>>>>> >>
>>>>>>>> >> [1]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
>>>>>>>> >> [2]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
>>>>>>>> >> [3]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
>>>>>>>> >> [4]:
>>>>>>>> >>
>>>>>>>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
>>>>>>>> >> [5]:
>>>>>>>> >>
>>>>>>>> https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100
>>>>>>>> >
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best, Jingsong Lee
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards
>>>>>>
>>>>>> Jeff Zhang
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Benoît Paris
>>>>> Ingénieur Machine Learning Explicable
>>>>> Tél : +33 6 60 74 23 00
>>>>> http://benoit.paris
>>>>> http://explicable.ml
>>>>>
>>>>
>>
>> --
>>
>> Benchao Li
>> School of Electronics Engineering and Computer Science, Peking University
>> Tel:+86-15650713730
>> Email: libenc...@gmail.com; libenc...@pku.edu.cn
>>
>>
--
Best regards!
Rui Li
ask jar will be tens of KB.
>
> --
> zjfpla...@hotmail.com
>
--
Best regards!
Rui Li
Failed to load native Mesos library from
> /usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
>
> Thanks,
> Felipe
>
> *--*
> *-- Felipe Gutierrez*
>
> *-- skype: felipe.o.gutierrez*
> *--* *https://felipeogutierrez.blogspot.com
> <https://felipeogutierrez.blogspot.com>*
>
--
Best regards!
Rui Li
me adding a new
> SELECT-syntax? I already came across the parser extension test but it
> didn't give me the answers I was looking for.
>
>
>
> Thanks for your help!
>
>
>
> Regards,
>
> Dominik Gröninger
>
>
>
>
--
Best regards!
Rui Li