shuai.xu created FLINK-9632:
---
Summary: SlotPool should notify the caller when allocateSlot meets an
exception
Key: FLINK-9632
URL: https://issues.apache.org/jira/browse/FLINK-9632
Project: Flink
makeyang created FLINK-9631:
---
Summary: use Files.createDirectories instead of directory.mkdirs
Key: FLINK-9631
URL: https://issues.apache.org/jira/browse/FLINK-9631
Project: Flink
Issue Type:
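A minimal illustration of the proposed switch (plain `java.nio`; the class and paths are made up for the example): `File.mkdirs` only reports failure as a boolean, while `Files.createDirectories` is idempotent and surfaces real failures as a descriptive `IOException`.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateDirs {
    // Old style: success/failure collapses into a boolean,
    // and the actual cause of a failure is lost. Note that
    // mkdirs() also returns false if the directory already exists.
    static boolean withMkdirs(Path dir) {
        return new File(dir.toString()).mkdirs();
    }

    // Suggested style: a no-op if the directory already exists,
    // and failures throw a descriptive IOException.
    static void withCreateDirectories(Path dir) throws IOException {
        Files.createDirectories(dir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Path.of("demo-dir/a/b");
        System.out.println("mkdirs: " + withMkdirs(dir));
        withCreateDirectories(dir); // safe to call again
        System.out.println("exists: " + Files.isDirectory(dir));
    }
}
```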
Youjun Yuan created FLINK-9630:
--
Summary: Kafka09PartitionDiscoverer cause connection leak on
TopicAuthorizationException
Key: FLINK-9630
URL: https://issues.apache.org/jira/browse/FLINK-9630
Project:
Georgii Gobozov created FLINK-9629:
--
Summary: Datadog metrics reporter does not have shaded dependencies
Key: FLINK-9629
URL: https://issues.apache.org/jira/browse/FLINK-9629
Project: Flink
I just want to make sure I am not missing anything to get my pull request
accepted. All tests are passing, so I figured it would be merged shortly
after that. Is there anything else I need to do, or is there a reason, such as
a freeze on accepting pull requests at the moment? Here is the
Hi Piotrek,
I agree that the shuttle approach makes the code easier and cleaner, but it
loses the capabilities of rules.
As for Volcano, it does stop searching the plan space when it reaches the
maximum number of iterations, but only on the condition that the resulting
plan is a valid plan; otherwise it will
Truong Duc Kien created FLINK-9628:
--
Summary: Options to tolerate truncate failure in BucketingSink
Key: FLINK-9628
URL: https://issues.apache.org/jira/browse/FLINK-9628
Project: Flink
Hi Amol,
> In above code also it will sort the records in specific time window only.
All windows are emitted as the watermark passes the end of the window, and the
watermark only increases. So the non-overlapping windows should also be sorted
by time and, as a consequence, the records across windows
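The per-window sorting described above can be sketched in plain Java (the `Event` record is hypothetical and Flink's windowing machinery is omitted): inside each fired window, buffer the elements and emit them ordered by event time.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class WindowSort {
    // Hypothetical event carrying an event-time timestamp.
    record Event(String key, long timestamp) {}

    // Stand-in for the body of a window function: sort the
    // buffered window contents by timestamp before emitting.
    static List<Event> emitSorted(List<Event> windowContents) {
        List<Event> sorted = new ArrayList<>(windowContents);
        sorted.sort(Comparator.comparingLong(Event::timestamp));
        return sorted;
    }

    public static void main(String[] args) {
        List<Event> out = emitSorted(List.of(
                new Event("a", 30), new Event("b", 10), new Event("c", 20)));
        System.out.println(out);
    }
}
```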
Dominik WosiĆski created FLINK-9627:
---
Summary: Extending
Key: FLINK-9627
URL: https://issues.apache.org/jira/browse/FLINK-9627
Project: Flink
Issue Type: Bug
Reporter: Dominik
Piotr Nowojski created FLINK-9626:
-
Summary: Possible resource leak in FileSystem
Key: FLINK-9626
URL: https://issues.apache.org/jira/browse/FLINK-9626
Project: Flink
Issue Type: Bug
mingleizhang created FLINK-9625:
---
Summary: Enrich the error information to user
Key: FLINK-9625
URL: https://issues.apache.org/jira/browse/FLINK-9625
Project: Flink
Issue Type: Improvement
Hello Andrey,
In the above code as well, it will sort the records within a specific time
window only. Anyway, we agreed to create N partitions with N consumers
based on some key, since order is maintained per Kafka partition.
I have some questions about this.
1. How should I create N
Chesnay Schepler created FLINK-9624:
---
Summary: Move jar/artifact upload logic out of JobGraph
Key: FLINK-9624
URL: https://issues.apache.org/jira/browse/FLINK-9624
Project: Flink
Issue
Chesnay Schepler created FLINK-9623:
---
Summary: Move zipping logic out of blobservice
Key: FLINK-9623
URL: https://issues.apache.org/jira/browse/FLINK-9623
Project: Flink
Issue Type:
Hi,
Good point, sorry for confusion, BoundedOutOfOrdernessTimestampExtractor of
course does not buffer records, you need to apply windowing (e.g.
TumblingEventTimeWindows) for that and then sort the window output by time and
emit records in sorted order.
You can also use windowAll which
Hi,
I think a global ordering is a bit impractical in production, but in theory
you can still do it. You need to
- First, fix the operator's parallelism to 1 (except for the source node).
- If you want to sort the records within a bounded time, you can keyBy() a
constant and window it,
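What the parallelism-1 step buys can be sketched in plain Java (names are made up; the Kafka and Flink APIs are omitted): a single downstream consumer can merge several per-partition, individually ordered sequences into one globally ordered sequence.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class GlobalOrderMerge {
    // Merge k individually sorted partitions into one globally
    // sorted sequence -- the job a parallelism-1 operator would do.
    static List<Long> merge(List<List<Long>> partitions) {
        record Head(long value, Iterator<Long> rest) {}
        PriorityQueue<Head> heap =
                new PriorityQueue<>((a, b) -> Long.compare(a.value(), b.value()));
        for (List<Long> p : partitions) {
            Iterator<Long> it = p.iterator();
            if (it.hasNext()) heap.add(new Head(it.next(), it));
        }
        List<Long> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            Head h = heap.poll();
            out.add(h.value());
            if (h.rest().hasNext()) heap.add(new Head(h.rest().next(), h.rest()));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(
                List.of(1L, 4L, 7L), List.of(2L, 5L), List.of(3L, 6L))));
    }
}
```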
It will, but it defaults to jobmanager.rpc.address if no rest.address has
been specified.
On Wed, Jun 20, 2018 at 9:49 AM Chesnay Schepler wrote:
> Shouldn't the non-HA case be covered by rest.address?
>
> On 20.06.2018 09:40, Till Rohrmann wrote:
>
> Hi Sampath,
>
> it is no longer possible to
Shouldn't the non-HA case be covered by rest.address?
On 20.06.2018 09:40, Till Rohrmann wrote:
Hi Sampath,
it is no longer possible to not start the rest server endpoint by
setting rest.port to -1. If you do this, then the cluster won't start.
The comment in the flink-conf.yaml holds only
Hi Sampath,
it is no longer possible to not start the rest server endpoint by setting
rest.port to -1. If you do this, then the cluster won't start. The comment
in the flink-conf.yaml holds only true for the legacy mode.
In non-HA setups we need the jobmanager.rpc.address to derive the hostname
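The configuration keys under discussion, as a flink-conf.yaml sketch (values are illustrative):

```yaml
# Used for RPC, and to derive the REST hostname in non-HA setups.
jobmanager.rpc.address: localhost
# Explicit REST endpoint address, if set.
rest.address: localhost
# In the legacy mode, -1 disabled the web server; in the new mode
# the cluster won't start without the REST endpoint.
rest.port: 8081
```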
Hello Andrey,
Thanks for your quick response. I have tried your code above, but it
didn't suit my requirement. I need global ordering of my records across
multiple Kafka partitions. Please suggest any workaround for this, as
mentioned in this
I was worried this might be the case.
The rest.port handling was simply copied from the legacy web-server,
which explicitly allowed shutting it down.
It may (I'm not entirely sure) also not be necessary for all deployment
modes; for example if the job is baked into the job/taskmanager images.
Sihua Zhou created FLINK-9622:
-
Summary: DistributedCacheDfsTest failed on travis
Key: FLINK-9622
URL: https://issues.apache.org/jira/browse/FLINK-9622
Project: Flink
Issue Type: Bug
Olivier Zembri created FLINK-9621:
-
Summary: Warning when enabling partition discovery
Key: FLINK-9621
URL: https://issues.apache.org/jira/browse/FLINK-9621
Project: Flink
Issue Type:
aitozi created FLINK-9620:
-
Summary: Add an alternative option to choose when deal with
eventtime cep
Key: FLINK-9620
URL: https://issues.apache.org/jira/browse/FLINK-9620
Project: Flink
Issue