Hello Fabian,
Thanks for drafting the proposal. I like the overall organization and left
a few comments. I think this will be a very good kick-off for reorganizing
the Table API & SQL docs.
-shaoxuan
On Fri, May 19, 2017 at 7:06 AM, Fabian Hueske wrote:
> Hi everybody,
>
> I came up with a p
sunjincheng created FLINK-6632:
--
Summary: Fix parameter case sensitive error for test
passing/rejecting filter API
Key: FLINK-6632
URL: https://issues.apache.org/jira/browse/FLINK-6632
Project: Flink
Eron Wright created FLINK-6631:
---
Summary: Implement FLIP-6 MesosTaskExecutorRunner
Key: FLINK-6631
URL: https://issues.apache.org/jira/browse/FLINK-6631
Project: Flink
Issue Type: Sub-task
Eron Wright created FLINK-6630:
---
Summary: Implement FLIP-6 MesosAppMasterRunner
Key: FLINK-6630
URL: https://issues.apache.org/jira/browse/FLINK-6630
Project: Flink
Issue Type: Sub-task
Hi everybody,
I came up with a proposal for the structure of the Table API / SQL
documentation:
https://docs.google.com/document/d/1ENY8tcPadZjoZ4AQ_lRRwWiVpScDkm_4rgxIGWGT5E0/edit?usp=sharing
Feedback and comments are very welcome.
Once we agree on a structure, we can create skeletons and distr
I might have found another blocker:
https://issues.apache.org/jira/browse/FLINK-6629.
The issue is that the ClusterClient only allows submitting jobs to an HA
cluster if you have specified the JobManager's address in the
flink-conf.yaml or via the command line options. If no address is set, then
it
Till Rohrmann created FLINK-6629:
Summary: ClusterClient cannot submit jobs to HA cluster if address
not set in configuration
Key: FLINK-6629
URL: https://issues.apache.org/jira/browse/FLINK-6629
Proj
Chesnay Schepler created FLINK-6628:
---
Summary: Cannot start taskmanager with cygwin in directory
containing spaces
Key: FLINK-6628
URL: https://issues.apache.org/jira/browse/FLINK-6628
Project: Flin
The test document says that the default flink-conf.yaml "should define
more than one task slot", but it currently configures exactly 1 task
slot. Not sure if it is a typo in the doc, though.
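For reference, the slot count is controlled by the taskmanager.numberOfTaskSlots key in flink-conf.yaml; a configuration matching the doc's wording ("more than one task slot") could look like this (the value 2 is just an example):

```yaml
# flink-conf.yaml -- the released default configures a single slot;
# the test document expects more than one, e.g.:
taskmanager.numberOfTaskSlots: 2
```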
On 18.05.2017 22:10, Chesnay Schepler wrote:
The start-cluster.sh script failed for me on Windows when exe
The start-cluster.sh script failed for me on Windows when executed in a
directory containing spaces.
On 18.05.2017 20:47, Chesnay Schepler wrote:
FLINK-6610 should also be fixed; it is currently not possible to
disable web-submissions.
On 18.05.2017 18:13, jincheng sun wrote:
Hi Robert,
I ha
+1
The Table / SQL component has made significant progress in the last few
months (kudos to all contributors).
It is a good time to have documentation that reflects all the changes on
the Table / SQL side.
On Thu, May 18, 2017 at 8:12 AM Robert Metzger wrote:
> Thank you Fabian for working on
FLINK-6610 should also be fixed; it is currently not possible to disable
web-submissions.
On 18.05.2017 18:13, jincheng sun wrote:
Hi Robert,
I have some checks to do, and some test-improvement PRs (
https://issues.apache.org/jira/browse/FLINK-6619) that need to be done soon.
Best,
SunJincheng
2017-05-18
Hi Robert,
I have some checks to do, and some test-improvement PRs (
https://issues.apache.org/jira/browse/FLINK-6619) that need to be done soon.
Best,
SunJincheng
2017-05-18 22:17 GMT+08:00 Greg Hogan :
> The following tickets for 1.3.0 have a PR in need of review:
>
> [FLINK-6582] [docs] Project from maven
Andrey created FLINK-6627:
-
Summary: Expose tmp directories via API
Key: FLINK-6627
URL: https://issues.apache.org/jira/browse/FLINK-6627
Project: Flink
Issue Type: Improvement
Affects Versions:
Thank you Fabian for working on the proposal.
On Thu, May 18, 2017 at 3:51 PM, Fabian Hueske wrote:
> Thanks for starting this discussion Robert.
>
> I think with the next release the Table API / SQL should be moved up in the
> Application Development menu.
> I also thought about restructuring th
The following tickets for 1.3.0 have a PR in need of review:
[FLINK-6582] [docs] Project from maven archetype is not buildable by default
[FLINK-6616] [docs] Clarify provenance of official Docker images
> On May 18, 2017, at 5:40 AM, Fabian Hueske wrote:
>
> I have a couple of PRs ready with b
Thanks for starting this discussion Robert.
I think with the next release the Table API / SQL should be moved up in the
Application Development menu.
I also thought about restructuring the docs, but it won't be trivial to
do, IMO, because there are many orthogonal aspects:
- Stream/Batch
- Tabl
I think the ListState interface is pretty well suited for this job.
It allows adding elements with low effort and can serve all elements of a
list through an iterator. Depending on the implementation, the elements
could be deserialized as needed.
If the user code needs a List with all elements, it w
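To make the comparison concrete, here is a minimal sketch of the add/iterate shape of the ListState interface. Note this is a simplified stand-in, not Flink's real org.apache.flink.api.common.state.ListState, and the heap-backed implementation is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in mirroring the add/iterate shape of Flink's
// ListState interface; NOT the real API.
interface SimpleListState<T> {
    void add(T value);   // append a single element with low effort
    Iterable<T> get();   // serve all elements through an iterator
}

// Heap-backed toy implementation; a real backend (e.g. RocksDB) could
// deserialize elements lazily as the iterator advances.
class HeapListState<T> implements SimpleListState<T> {
    private final List<T> elements = new ArrayList<>();
    @Override public void add(T value) { elements.add(value); }
    @Override public Iterable<T> get() { return elements; }
}

public class ListStateSketch {
    public static void main(String[] args) {
        SimpleListState<Long> state = new HeapListState<>();
        state.add(3L);
        state.add(1L);
        // Only if user code needs a materialized List must all elements
        // be collected (and thus deserialized) at once.
        List<Long> materialized = new ArrayList<>();
        for (Long v : state.get()) {
            materialized.add(v);
        }
        System.out.println(materialized); // [3, 1]
    }
}
```

The point of the iterator-based contract is that consumers that only scan the elements never pay for materializing the full list.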
Actually that was one option that I was considering. I am still a bit fuzzy
about the advantages and disadvantages of using one type of state over another.
I know that using ValueState would mean that when getting the object value
(i.e. a List in this case) the whole object would be deserialized at onc
Hi,
I'm not aware of a performance report for this feature. I don't think it is
well known or used a lot.
The classes to check out for prepartitioned / presorted data are
SplitDataProperties [1], DataSource [2], and as an example
PropertyDataSourceTest [3].
[1]
https://github.com/apache/flink/blo
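As a toy illustration of why declaring split properties helps (this sketch is NOT the real SplitDataProperties/DataSource API linked above, and the class and method names are invented): if the source promises that no key spans two splits, a per-split aggregation is already globally correct and the shuffle can be skipped.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of the idea behind declaring pre-partitioned input splits:
// if the source guarantees each key occurs in exactly one split, a
// grouping/aggregation can run per split without a network shuffle.
// Class and method names here are invented for illustration.
public class PrePartitionedSketch {

    static class Split {
        final List<String> keys;
        Split(List<String> keys) { this.keys = keys; }
    }

    // Local counts per split; these equal the global counts only
    // because of the declared partitioning guarantee.
    static Map<String, Integer> countPerSplit(Split split) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String k : split.keys) {
            counts.merge(k, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Two splits, pre-partitioned by key: "a"/"b" never cross splits.
        Split s1 = new Split(List.of("a", "a", "b"));
        Split s2 = new Split(List.of("c", "c"));
        System.out.println(countPerSplit(s1)); // {a=2, b=1}
        System.out.println(countPerSplit(s2)); // {c=2}    }
    }
}
```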
A big +1 as well.
> On May 18, 2017, at 1:55 PM, Ufuk Celebi wrote:
>
> On Thu, May 18, 2017 at 1:52 PM, Till Rohrmann wrote:
>> I think we have a history of creating too long monolithic documentation
>> pages which are hard to digest. So a big +1 for splitting the Table API/SQL
>> documentatio
On Thu, May 18, 2017 at 1:52 PM, Till Rohrmann wrote:
> I think we have a history of creating too long monolithic documentation
> pages which are hard to digest. So a big +1 for splitting the Table API/SQL
> documentation up into more easily digestible pieces.
+1
Thanks for bringing it up
Thanks for the tip, @Stephan.
Regarding [1], there's a description of "I’ve got sooo much data to join, do
I really need to ship it?". How do I configure Flink to achieve that?
Is there a performance report?
[1] :
https://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
I think we have a history of creating too long monolithic documentation
pages which are hard to digest. So a big +1 for splitting the Table API/SQL
documentation up into more easily digestible pieces.
Cheers,
Till
On Thu, May 18, 2017 at 12:01 PM, Shaoxuan Wang wrote:
> Hi Robert,
> This sounds
Hi Robert,
This sounds great to me. While I am in the middle of writing up the UDAGG
doc (FLINK-5905), I also feel it's not good to have the entire Table API &
SQL introduction on one page.
We can move the Table API & SQL under "application development" and split
it into small sub-topics, such as basic/UDF/UDTF/UDA
Hi Radu,
Why not use a ValueState that stores the whole list?
Whenever you call state#get() you get the whole list, and you can sort it.
Kostas
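Kostas' suggestion can be sketched with a simplified stand-in for the ValueState shape (again, NOT Flink's real org.apache.flink.api.common.state.ValueState; with a real backend, value() would deserialize the entire stored list on every call):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-in mirroring the value()/update() shape of Flink's
// ValueState; NOT the real API. With a real state backend, value()
// would deserialize the whole stored object at once.
class SimpleValueState<T> {
    private T value;
    T value() { return value; }
    void update(T newValue) { value = newValue; }
}

public class ValueStateListSketch {
    public static void main(String[] args) {
        SimpleValueState<List<Integer>> state = new SimpleValueState<>();
        state.update(new ArrayList<>(List.of(4, 1, 3)));

        // Read the whole list, sort it, write it back.
        List<Integer> list = state.value();
        Collections.sort(list);
        state.update(list);

        System.out.println(state.value()); // [1, 3, 4]
    }
}
```

This keeps the sorting logic trivial, at the cost of deserializing and reserializing the full list on every access.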
> On May 18, 2017, at 3:31 AM, Radu Tudoran wrote:
>
> Hi Aljoscha,
>
> Thanks for the clarification. I understand that there might be advantage
I have a couple of PRs ready with bugfixes that I'll try to get in as well.
Should be done soon.
2017-05-18 11:24 GMT+02:00 Till Rohrmann :
> I'd like to get a fix in for
> https://issues.apache.org/jira/browse/FLINK-6612. This can basically
> thwart
> Flink's recovery capabilities.
>
> On Thu, M
Till Rohrmann created FLINK-6626:
Summary: Unifying lifecycle management of SubmittedJobGraph- and
CompletedCheckpointStore
Key: FLINK-6626
URL: https://issues.apache.org/jira/browse/FLINK-6626
Projec
Till Rohrmann created FLINK-6625:
Summary: Flink removes HA job data when reaching JobStatus.FAILED
Key: FLINK-6625
URL: https://issues.apache.org/jira/browse/FLINK-6625
Project: Flink
Issue
I'd like to get a fix in for
https://issues.apache.org/jira/browse/FLINK-6612. This can basically thwart
Flink's recovery capabilities.
On Thu, May 18, 2017 at 11:13 AM, Chesnay Schepler
wrote:
> This PR reduces logging noise a bit: (got +1 to merge)
> https://github.com/apache/flink/pull/3917
>
This PR reduces logging noise a bit: (got +1 to merge)
https://github.com/apache/flink/pull/3917
This PR fixes the compilation on Windows: (reviewed once, most recent
changes not reviewed)
https://github.com/apache/flink/pull/3854
This PR enables a test for savepoint compatibility: (nice to h
Ted Yu created FLINK-6624:
-
Summary: SharedBuffer#hashCode() uses multiplier in wrong way
Key: FLINK-6624
URL: https://issues.apache.org/jira/browse/FLINK-6624
Project: Flink
Issue Type: Bug
Hi Robert,
There is one last pending fix for the serializer upgrades feature:
https://issues.apache.org/jira/browse/FLINK-6482.
Pending PR: https://github.com/apache/flink/pull/3937.
I can't say it's a complete blocker, but since it will affect the
serialization format of checkpoints, it would be be
Hi,
I'm wondering whether we should make the Table API a bit more prominent in
our documentation by upgrading it from below "Libraries" to the same level
as "DataSet" and "DataStream".
This would also allow us to split it from one large page into smaller
sub-pages.
I think it would be nice to do
constantine stanley created FLINK-6623:
--
Summary: unable to build flink master
Key: FLINK-6623
URL: https://issues.apache.org/jira/browse/FLINK-6623
Project: Flink
Issue Type: Bug
I will.
Actually I had it already on my radar because it's one of the three
remaining blockers.
Your JIRA already has a PR, so I guess it's on a good track. For the other
blockers, I think it's fine to release without having them fixed.
Is there anything else we need to get into the 1.3.0 release?
Oth
Hi,
The problem is not directly in Flink, but if you use Flink with Beam, then
on the classpath you have the original Netty 4.0.27 from Flink and Netty
4.1.x from Beam (gRPC uses Netty 4.1.x for communication).
Another point of interest (specific to me right now): Netty has a custom
wrapper for the OpenSSL library, which has more prod
constantine stanley created FLINK-6622:
--
Summary: flink master code not compiling
Key: FLINK-6622
URL: https://issues.apache.org/jira/browse/FLINK-6622
Project: Flink
Issue Type: Bug
Hi Alexey,
thanks for looking into it. Are we currently facing any problems with Netty
4.0.27 (bugs or performance)? I agree that in general we should try to use
the latest bug fix release. However, in the past we have seen that they
might entail some slight behaviour changes which break things o