Hi!
You could try to have Hadoop in your application JAR file, but I expect
trouble with s3a because of the specific way it does connection pooling.
Making the Bucketing Sink work with Flink's file systems (and thus with
Hadoop-free Flink) is super high up on the list as soon as the release is
Another nice thing is that readers can potentially also read from different
sources (historic/latest). To arrive at a general connector pattern, it
will also be necessary to consider the ordering relationship between
restrictions/splits/blocks/segments when it is important for the processing
logic.
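To make the ordering idea concrete, here is a minimal plain-Java sketch (all names are hypothetical, not actual Flink API): each split carries a sequence number, and the reader orders splits before handing them to processing logic that depends on ordering.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch -- none of these names are actual Flink API. Each
// split carries a sequence number describing its position in the global
// order, so a reader can process historic and latest splits in sequence.
public class SplitOrdering {

    static final class Split {
        final String id;
        final long sequence; // ordering key across historic/latest sources
        Split(String id, long sequence) { this.id = id; this.sequence = sequence; }
    }

    // Sort splits by their sequence number before handing them to
    // processing logic that depends on ordering.
    static List<Split> ordered(List<Split> splits) {
        List<Split> copy = new ArrayList<>(splits);
        copy.sort(Comparator.comparingLong((Split s) -> s.sequence));
        return copy;
    }

    public static void main(String[] args) {
        List<Split> splits = Arrays.asList(
                new Split("latest-0", 2),
                new Split("historic-0", 0),
                new Split("historic-1", 1));
        StringBuilder order = new StringBuilder();
        for (Split s : ordered(splits)) {
            order.append(s.id).append(' ');
        }
        System.out.println(order.toString().trim());
    }
}
```

A real connector would of course derive the ordering key from the source (e.g. offsets or timestamps) rather than assigning it by hand.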
Ted Yu created FLINK-8771:
------------------------------
Summary: Upgrade scalastyle to 1.0.0
Key: FLINK-8771
URL: https://issues.apache.org/jira/browse/FLINK-8771
Project: Flink
Issue Type: Improvement
Components:
Xinyang Gao created FLINK-8770:
------------------------------
Summary: CompletedCheckPoints stored on ZooKeeper is not
up-to-date, when JobManager is restarted it fails to recover the job due to
"checkpoint FileNotFound exception"
Key: FLINK-8770
Hi,
I was using the RowSerializer (package
org.apache.flink.api.java.typeutils.runtime) recently to serialize Rows to a
file (for reading them back in the future).
I observed a strange behavior that I would like to double check with you in
case this is a serious problem to be addressed:
When
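For context, the round trip being described can be sketched in plain Java (a stand-in for illustration only; the real RowSerializer writes through Flink's DataOutputView, and this field layout is an assumption): write a fixed-arity row field by field, then read it back.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Plain-Java stand-in for the described round trip (the real RowSerializer
// writes through Flink's DataOutputView; this layout is an assumption for
// illustration): write a fixed-arity row field by field, then read it back.
public class RowRoundTrip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeInt(2);        // arity of the row
            out.writeUTF("alice");  // field 0
            out.writeLong(42L);     // field 1
        }
        try (DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            int arity = in.readInt();
            String name = in.readUTF();
            long value = in.readLong();
            if (arity != 2 || !"alice".equals(name) || value != 42L) {
                throw new AssertionError("round trip mismatch");
            }
            System.out.println(name + ":" + value);
        }
    }
}
```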
Yeah, I meant the latter... but it sounds like it could be just asking for
trouble. I just like the idea of keeping the set of un-shaded JARs in the
flink/lib directory to a minimum.
Thanks.
On Fri, Feb 23, 2018 at 5:29 AM, Aljoscha Krettek
wrote:
> You mean putting the
You mean putting the Flink-native S3 filesystem in the user jar, or Hadoop in
the user jar? The former wouldn't work, I think, because the FileSystems are
initialised before the user jar is loaded. The latter might work, but only
if you don't have Hadoop on the classpath, i.e. not on YARN.
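The initialisation-order point can be illustrated with a toy stand-in (not Flink code; all names here are made up): a registry populated at process start will not see a filesystem that a user jar registers later.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in (not Flink code; names are made up) for the initialisation
// order described above: the filesystem registry is filled when the class
// loads, long before any "user jar" code runs, so a scheme registered
// later is invisible to lookups that already happened.
public class InitOrder {

    static final Map<String, String> FILESYSTEMS = new HashMap<>();

    static {
        // runs at class load time, i.e. during process startup
        FILESYSTEMS.put("file", "LocalFileSystem");
    }

    static String lookup(String scheme) {
        return FILESYSTEMS.getOrDefault(scheme, "unsupported");
    }

    public static void main(String[] args) {
        // startup-time lookup: the user jar's filesystem is not there yet
        String early = lookup("s3");
        // the "user jar" registers its filesystem only afterwards
        FILESYSTEMS.put("s3", "FlinkS3FileSystem");
        System.out.println(early + "/" + lookup("s3"));
    }
}
```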
Chesnay Schepler created FLINK-8769:
------------------------------
Summary: Quickstart job submission logs contain several exceptions
Key: FLINK-8769
URL: https://issues.apache.org/jira/browse/FLINK-8769
Project: Flink
Nico Kruber created FLINK-8768:
------------------------------
Summary: Change {{NettyMessageDecoder}} to inherit from
{{LengthFieldBasedFrameDecoder}}
Key: FLINK-8768
URL: https://issues.apache.org/jira/browse/FLINK-8768
Project:
Thanks, Aljoscha :)
So is it possible to continue to use the new "native" filesystems along
with the BucketingSink by including the Hadoop dependencies only in the
user's uber jar? Or is that asking for trouble? Has anyone tried that
successfully?
-Jamie
On Fri, Feb 23, 2018 at 12:39 AM,
How about releasing 1.4.2 now, meaning immediately? This can be very
lightweight.
FLINK-8451 looks like it should have more thorough testing and should go
into 1.4.3.
I think there is no harm in more frequent bugfix releases.
On Fri, Feb 23, 2018 at 9:56 AM, Timo Walther
Stephan Ewen created FLINK-8767:
------------------------------
Summary: Set the maven.compiler.source and .target properties for
Java Quickstart
Key: FLINK-8767
URL: https://issues.apache.org/jira/browse/FLINK-8767
Project: Flink
Stephan Ewen created FLINK-8765:
------------------------------
Summary: Simplify quickstart properties
Key: FLINK-8765
URL: https://issues.apache.org/jira/browse/FLINK-8765
Project: Flink
Issue Type: Sub-task
Stephan Ewen created FLINK-8766:
------------------------------
Summary: Pin scala runtime version for Java Quickstart
Key: FLINK-8766
URL: https://issues.apache.org/jira/browse/FLINK-8766
Project: Flink
Issue Type:
Stephan Ewen created FLINK-8764:
------------------------------
Summary: Make quickstarts work out of the box for IDE and JAR
packaging
Key: FLINK-8764
URL: https://issues.apache.org/jira/browse/FLINK-8764
Project: Flink
Stephan Ewen created FLINK-8762:
------------------------------
Summary: Remove unnecessary examples and make "StreamingJob" the
default
Key: FLINK-8762
URL: https://issues.apache.org/jira/browse/FLINK-8762
Project: Flink
Stephan Ewen created FLINK-8763:
------------------------------
Summary: Remove obsolete Dummy.java classes from quickstart
projects.
Key: FLINK-8763
URL: https://issues.apache.org/jira/browse/FLINK-8763
Project: Flink
Stephan Ewen created FLINK-8761:
------------------------------
Summary: Various improvements to the Quickstarts
Key: FLINK-8761
URL: https://issues.apache.org/jira/browse/FLINK-8761
Project: Flink
Issue Type: Improvement
Piotr Nowojski created FLINK-8760:
------------------------------
Summary: Correctly set `moreAvailable` flag in SingleInputGate and
UnionInputGate
Key: FLINK-8760
URL: https://issues.apache.org/jira/browse/FLINK-8760
Project:
Nico Kruber created FLINK-8759:
------------------------------
Summary: Bump Netty to 4.0.56
Key: FLINK-8759
URL: https://issues.apache.org/jira/browse/FLINK-8759
Project: Flink
Issue Type: Improvement
Components:
Aljoscha Krettek created FLINK-8758:
------------------------------
Summary: Expose method for non-blocking job submission on
ClusterClient
Key: FLINK-8758
URL: https://issues.apache.org/jira/browse/FLINK-8758
Project: Flink
Aljoscha Krettek created FLINK-8757:
------------------------------
Summary: Add MiniClusterResource.getClusterClient()
Key: FLINK-8757
URL: https://issues.apache.org/jira/browse/FLINK-8757
Project: Flink
Issue Type:
Aljoscha Krettek created FLINK-8756:
------------------------------
Summary: Support ClusterClient.getAccumulators() in
RestClusterClient
Key: FLINK-8756
URL: https://issues.apache.org/jira/browse/FLINK-8756
Project: Flink
Hi,
you wrote to the Apache Flink development mailing list.
I think your question should go to the Apache Beam user mailing list:
u...@beam.apache.org
Best, Fabian
2018-02-22 14:35 GMT+01:00 shankara :
> I am new to apache beam and spring cloud dataflow. I am trying to
Nico Kruber created FLINK-8755:
------------------------------
Summary: SpilledSubpartitionView wrongly relies on the backlog for
determining whether more data is available
Key: FLINK-8755
URL: https://issues.apache.org/jira/browse/FLINK-8755
I am new to Apache Beam and Spring Cloud Data Flow. I am trying to integrate
Apache Beam in Spring Cloud Data Flow. How do I get a spring-kafka message as a
source in a Beam pipeline? How do I add spring-kafka as a sink in a Beam
pipeline? I want to run the pipeline forever until finished. Please suggest how
can
I also almost have a fix ready for FLINK-8451. I think it should also go
into 1.4.2.
Regards,
Timo
Am 2/22/18 um 11:29 AM schrieb Aljoscha Krettek:
The reason they didn't catch this is that the bug only occurs if users use a
custom timestamp/watermark assigner. But yes, we should be able
Hi,
I'm afraid not, since the BucketingSink uses the Hadoop FileSystem directly and
not the Flink FileSystem abstraction. The flink-s3-fs-* modules only provide
Flink FileSystems.
One of the goals for 1.6 is to provide a BucketingSink that uses the Flink
FileSystem and also works well with