I think upgrading to 2.12.15 would be the correct choice, as 2.12.16 has a
known regression for projects that compile Java and Scala together, as Flink does:
https://github.com/scala/scala/releases/tag/v2.12.16
This regression will be addressed in a few months via 2.12.17, so for now
2.12.15 should be
Hi,
It's mentioned in the following documentation,
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/formats/canal.html,
that "...currently Flink can’t combine UPDATE_BEFORE and UPDATE_AFTER into
a single UPDATE message."
Can anyone elaborate on this? Was decomposing a
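For illustration only (this is not the canal-json code, and the row shapes here are invented): the limitation means a single upstream UPDATE arrives as two changelog rows, UPDATE_BEFORE (-U) followed by UPDATE_AFTER (+U), and combining them back into one UPDATE message requires buffering each -U until its matching +U arrives:

```python
# Sketch of combining UPDATE_BEFORE/UPDATE_AFTER pairs into one UPDATE
# message. The row kinds (-U/+U/+I/-D) mirror Flink's changelog
# semantics; the pairing logic and message shapes are hypothetical.
def combine_updates(changelog):
    """Pair each -U row with the +U row that follows it."""
    out, pending_before = [], None
    for kind, row in changelog:
        if kind == "-U":
            pending_before = row              # hold until the matching +U
        elif kind == "+U":
            out.append(("U", pending_before, row))
            pending_before = None
        else:                                 # +I inserts and -D deletes pass through
            out.append((kind, row))
    return out
```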
Nick Burkard created FLINK-20845:
Summary: Drop support for Scala 2.11
Key: FLINK-20845
URL: https://issues.apache.org/jira/browse/FLINK-20845
Project: Flink
Issue Type: Sub-task
Hi
First of all, good job and thank you!
I can't find the new 1.12 version on Docker Hub.
When will it be there?
Nick
On Thu, Dec 10, 2020 at 14:17, Robert Metzger <rmetz...@apache.org> wrote:
> The Apache Flink community is very happy to announce the release of Apache
>
Nick Chadwick created FLINK-4222:
Summary: Allow Kinesis configuration to get credentials from AWS
Metadata
Key: FLINK-4222
URL: https://issues.apache.org/jira/browse/FLINK-4222
Project: Flink
end would be interesting, especially if Flink could benefit
> from Cassandra data locality. The Cassandra/Spark integration uses this
> information to schedule Spark tasks.
>
> On 9 June 2016 at 19:55, Nick Dimiduk <ndimi...@gmail.com> wrote:
>
> > You might also consider suppo
You might also consider support for a Bigtable
backend: HBase/Accumulo/Cassandra. The data model should be similar
(identical?) to RocksDB and you get HA, recoverability, and support for
really large state "for free".
On Thursday, June 9, 2016, Chen Qin wrote:
> Hi there,
>
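A rough sketch of why the data models line up (illustrative only; this is not a Flink interface): keyed state is essentially a map from (key, state name) to a value, which any ordered key-value store, RocksDB included, can represent directly:

```python
# Hypothetical state-backend sketch: keyed state kept in a plain
# key-value map, the same shape RocksDB (or an HBase/Accumulo/Cassandra
# table) would hold it in. An external Bigtable-style store would add
# replication and recoverability on top of the same interface.
class KVStateBackend:
    def __init__(self):
        self._store = {}                      # stand-in for the KV store

    def put(self, key, state_name, value):
        self._store[(key, state_name)] = value

    def get(self, key, state_name, default=None):
        return self._store.get((key, state_name), default)
```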
I'm also curious for a solution here. My test code executes the flow from a
separate thread. Once I've joined on all my producer threads and I've
verified the output, I simply interrupt the flow thread. This spews
exceptions, but it all appears to be harmless.
Maybe there's a better way? I think
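A sketch of the pattern described above (hypothetical harness code; Python's cooperative stop flag stands in for the Java thread interrupt being discussed):

```python
import queue
import threading

def run_flow(source, sink, stop):
    """Consume records until told to stop, like a flow on its own thread."""
    while not stop.is_set():
        try:
            item = source.get(timeout=0.05)
        except queue.Empty:
            continue
        sink.append(item * 2)                 # the "flow" under test
        source.task_done()

source, sink, stop = queue.Queue(), [], threading.Event()
flow = threading.Thread(target=run_flow, args=(source, sink, stop))
flow.start()
for x in (1, 2, 3):
    source.put(x)
source.join()      # all records consumed: output can be verified now
stop.set()         # stop the flow thread once output is verified
flow.join()
```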
Hi Chenguang,
I've been using the class StreamingMultipleProgramsTestBase, found in
flink-streaming-java test jar as the basis for my integration tests. These
tests spin up a Flink cluster (and Kafka, and HBase, etc.) in a single JVM.
It's not a perfect integration environment, but it's as close as
Nick Dimiduk created FLINK-3709:
---
Summary: [streaming] Graph event rates over time
Key: FLINK-3709
URL: https://issues.apache.org/jira/browse/FLINK-3709
Project: Flink
Issue Type: Improvement
.
>
> So unless we find a blocker for the current RC, I prefer to continue
> evaluating and VOTE on the current RC.
>
> - Henry
>
> On Tuesday, February 9, 2016, Ufuk Celebi <u...@apache.org> wrote:
>
> > Hey Nick,
> >
> > I agree th
Nick Dimiduk created FLINK-3372:
---
Summary: Setting custom YARN application name is ignored
Key: FLINK-3372
URL: https://issues.apache.org/jira/browse/FLINK-3372
Project: Flink
Issue Type: Bug
https://ci.apache.org/projects/flink/flink-docs-release-0.10/api/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.html#disableOperatorChaining()
On Mon, Feb 8, 2016 at 10:34 AM, Greg Hogan wrote:
> Is it possible to force operator chaining to be
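As a rough illustration of what chaining does (not Flink code): chained one-to-one operators are fused so a record passes through them in a single call, with no hand-off between tasks; disabling chaining keeps each operator as its own task:

```python
# Hypothetical sketch: "chaining" composes consecutive map-style
# operators into one callable, avoiding a queue/thread hand-off
# between them for every record.
def chain(*operators):
    def chained(record):
        for op in operators:
            record = op(record)
        return record
    return chained

parse = int
double = lambda x: x * 2
fused = chain(parse, double)                  # one task instead of two
```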
Perhaps too late for the RC, but I've backported FLINK-3293 to this branch
via FLINK-3372. It would be nice for those wanting to monitor YARN
application submissions.
On Mon, Feb 8, 2016 at 9:37 AM, Ufuk Celebi wrote:
> Dear Flink community,
>
> Please vote on releasing the
+1 for a 0.10.2 maintenance release.
On Monday, February 1, 2016, Ufuk Celebi wrote:
> Hey all,
>
> Our release-0.10 branch contains some important fixes (for example a
> critical fix in the network stack). I would like to hear your opinions
> about doing a 0.10.2 bug fix
Thanks Max. I'm accustomed to projects advertising a release with a fixed
ref such as a sha or tag, not a branch. Much obliged.
-n
On Friday, January 15, 2016, Maximilian Michels <m...@apache.org> wrote:
> Hi Nick,
>
> That was an oversight when the release was created. As Step
Hi folks,
I noticed today that the parent pom for the flink-shaded-hadoop pom (and
thus also its children) are not using ${ROOT}/pom.xml as their parent.
However, ${ROOT}/pom.xml lists the hierarchy as a module. I'm curious to
know why this is. It seems one artifact of this disconnect is that
Nick Dimiduk created FLINK-3228:
---
Summary: Cannot submit multiple streaming jobs involving JDBC drivers
Key: FLINK-3228
URL: https://issues.apache.org/jira/browse/FLINK-3228
Project: Flink
Issue
Nick Dimiduk created FLINK-3224:
---
Summary: The Streaming API does not call setInputType if a format
implements InputTypeConfigurable
Key: FLINK-3224
URL: https://issues.apache.org/jira/browse/FLINK-3224
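The contract at issue, sketched with stand-in classes (InputTypeConfigurable and setInputType are Flink names; everything else here is hypothetical): a format implementing the interface should be told its input type before the job runs, and the bug is that the streaming path skipped this call:

```python
class InputTypeConfigurable:
    """Stand-in for the Flink interface of the same name."""
    def set_input_type(self, type_info):
        raise NotImplementedError

class RecordingFormat(InputTypeConfigurable):  # hypothetical output format
    def __init__(self):
        self.input_type = None

    def set_input_type(self, type_info):
        self.input_type = type_info

def configure_sink(output_format, type_info):
    # The step FLINK-3224 says the Streaming API was missing.
    if isinstance(output_format, InputTypeConfigurable):
        output_format.set_input_type(type_info)

fmt = RecordingFormat()
configure_sink(fmt, "Tuple2<String, Integer>")
```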
What's the relationship between the streaming SQL proposed here and the CEP
syntax proposed earlier in the week?
On Sunday, January 10, 2016, Henry Saputra wrote:
> Awesome! Thanks for the reply, Fabian.
>
> - Henry
>
> On Sunday, January 10, 2016, Fabian Hueske
Hi Devs,
It seems no release tag was pushed to 0.10.1. I presume this was an
oversight. Is there some place I can look to see from which sha the 0.10.1
release was built? Are the RC vote threads the only canon in this matter?
Thanks,
Nick
it would help other users.
On Friday, January 8, 2016, Stephan Ewen <se...@apache.org> wrote:
> Hi Nick!
>
> We have not pushed a release tag, but have a frozen release-0.10.1-RC1
> branch (https://github.com/apache/flink/tree/release-0.10.1-rc1)
> A tag would be great, agree!
use the maven-enforcer-plugin to require Maven
> 3.3.
> I guess many Linux distributions are still at Maven 3.2, so users might get
> unhappy.
>
>
> On Thu, Dec 10, 2015 at 6:33 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > Lol. Okay, thanks a bunch. Mind link
hout restarts. Important for low-latency,
> shells, etc
>
> Flink itself respects these classloaders whenever dynamically looking up a
> class. It may be that Clojure is written such that it can only dynamically
> instantiate what is on the original classpath.
>
>
>
> O
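The lookup order being described can be sketched like this (Python dictionaries standing in for Java classloaders; purely illustrative): dynamic class lookup should consult the user-code loader for the submitted jar before falling back to the system classpath, and a library that only instantiates from the original classpath skips the first step:

```python
# Hypothetical resolver: user-code "classloader" first, parent second,
# mirroring how a child-first classloader hierarchy is consulted.
def lookup(name, user_classes, system_classes):
    if name in user_classes:
        return user_classes[name]
    if name in system_classes:
        return system_classes[name]
    raise KeyError(name)
```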
etz...@apache.org> wrote:
> I had the same thought as Nick. Maybe Leiningen allows building a
> fat-jar containing the Clojure standard library.
>
> On Thu, Dec 10, 2015 at 5:51 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > What happens when you follow the packagi
this idea.
>
> I extended my pom to include clojure-1.5.1.jar in my program jar.
> However, the problem is still there... I did some research on the
> Internet, and it seems I need to mess around with Clojure's class
> loading strategy...
>
> -Matthias
>
> On 12/10/2015
What happens when you follow the packaging examples provided in the flink
quick start archetypes? These have the maven-foo required to package an
uberjar suitable for flink submission. Can you try adding that step to your
pom.xml?
On Thursday, December 10, 2015, Stephan Ewen
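For reference, the relevant piece of the quickstart poms is a shade-plugin execution along these lines (a minimal sketch; the current archetype pom is the authoritative version and carries additional filters and transformers):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```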
r you as a workaround:
>
> wget
>
> http://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
> and then use that maven for now ;)
>
>
> On Thu, Dec 10, 2015 at 12:35 AM, Nick Dimiduk <ndimi...@apache.org> wrote:
s come from is to do
> inside the "flink-dist" project a "mvn dependency:tree" run. That shows how
> the unshaded Guava was pulled in.
>
> Greetings,
> Stephan
>
>
> On Wed, Dec 9, 2015 at 6:22 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>
Thanks, I appreciate it.
On Wed, Dec 9, 2015 at 12:50 PM, Robert Metzger <rmetz...@apache.org> wrote:
> I can confirm that guava is part of the fat jar for the 2.7.0, scala 2.11
> distribution.
>
> I'll look into the issue tomorrow
>
> On Wed, Dec 9, 2015 at 7:58
r add
> another dependency that might transitively pull Guava?
>
> Stephan
>
>
> On Tue, Dec 8, 2015 at 9:25 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > Hi there,
> >
> > I'm attempting to build locally a flink based on release-0.10.0 +
> > FLINK-
Nick Dimiduk created FLINK-3147:
---
Summary: HadoopOutputFormatBase should expose CLOSE_MUTEX for
subclasses
Key: FLINK-3147
URL: https://issues.apache.org/jira/browse/FLINK-3147
Project: Flink
Nick Dimiduk created FLINK-3148:
---
Summary: Support configured serializers for shipping UDFs
Key: FLINK-3148
URL: https://issues.apache.org/jira/browse/FLINK-3148
Project: Flink
Issue Type
Nick Dimiduk created FLINK-3119:
---
Summary: Remove dependency on Tuple from HadoopOutputFormat
Key: FLINK-3119
URL: https://issues.apache.org/jira/browse/FLINK-3119
Project: Flink
Issue Type
>
> Do you know if Hadoop/HBase is also using a maven plugin to fail a build on
> breaking API changes? I would really like to have such a functionality in
> Flink, because we can spot breaking changes very early.
I don't think we have maven integration for this as of yet. We release
managers
In HBase we keep an hbase-examples module with working code. Snippets from
that module are pasted into docs and referenced. Yes, we do see divergence,
especially when refactor tools are involved. I once looked into a doc tool
for automatically extracting snippets from source code, but that turned
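The kind of extraction tool mentioned can be sketched in a few lines (hypothetical; the tag::/end:: markers borrow Asciidoctor's convention, not whatever tool was actually evaluated): docs pull snippets out of real, compiled example code instead of carrying pasted copies that drift:

```python
import re

def extract_snippet(source, tag):
    """Return the code between '// tag::<tag>' and '// end::<tag>' markers."""
    pattern = re.compile(
        r"// tag::%s\n(.*?)// end::%s" % (re.escape(tag), re.escape(tag)),
        re.DOTALL,
    )
    match = pattern.search(source)
    return match.group(1) if match else None

example = "// tag::open\nconn = connect();\n// end::open\n"
```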
Nick Dimiduk created FLINK-3004:
---
Summary: ForkableMiniCluster does not call RichFunction#open
Key: FLINK-3004
URL: https://issues.apache.org/jira/browse/FLINK-3004
Project: Flink
Issue Type