Re: Release Flink 1.0.0

2016-01-25 Thread Aljoscha Krettek
Hi,
I think the refactoring of partitioned state and the WindowOperator-on-state
work is almost ready. I also have the RocksDB state backend working. I’m
running some tests on the cluster now and should be able to open a PR tomorrow.


> On 25 Jan 2016, at 15:36, Stephan Ewen  wrote:
> 
> I agree with Gyula: one out-of-core state backend should be in. We are
> pretty close to that. Aljoscha has done good work on extending test
> coverage for state backends, so we should be pretty comfortable that it
> works as well, once we integrate the new state backends with the tests.
> 
> There is a bit of work to do around extending the interface of the
> key/value state. I would like to start a separate thread on that today or
> tomorrow...
> 
> 
> 
> On Mon, Jan 25, 2016 at 12:16 PM, Gyula Fóra  wrote:
> 
>> Hi,
>> 
>> I agree that getting Flink 1.0.0 out soon would be great as Flink is in a
>> pretty solid state right now.
>> 
>> I wonder whether it would make sense to include an out-of-core state
>> backend in streaming core that can be used with partitioned/window states.
>> I think if we are releasing 1.0.0, we should have a solid feature set for
>> our strong streaming use-cases (in this case, stateful and windowed
>> computations), and this should be a part of that.
>> 
>> I know that Aljoscha is working on a solution for this which will probably
>> involve a heavy refactoring of the state backend interfaces, and I am also
>> working on a similar solution. Maybe it would be good to get at least one
>> robust solution for this in, and definitely Aljoscha's refactoring of the
>> interfaces.
>> 
>> If we decide to do this, I think this needs 1-2 extra weeks of proper
>> testing so this might delay the schedule a little bit.
>> 
>> What do you think?
>> 
>> Gyula
>> 
>> 
>> 
>> Robert Metzger  wrote (on Mon, 25 Jan 2016, at 11:54):
>> 
>>> Hi,
>>> 
>>> I would like to release 1.0.0 in the next few weeks.
>>> Looking at the JIRAs, I think we are going to close a lot of blocking
>>> issues soon. How about we do a first release candidate on Wednesday,
>>> 3 February?
>>> 
>>> The first release candidate is most likely not going to pass the vote;
>>> the primary goal will be collecting a list of issues we need to address.
>>> 
>>> There is also a Wiki page for the 1.0 release:
>>> https://cwiki.apache.org/confluence/display/FLINK/1.0+Release
>>> 
>>> Please reply with -1 to this message if 3 February is too soon for the
>>> first RC (it also means that we'll do a feature freeze around that time).
>>> 
>> 



[jira] [Created] (FLINK-3288) KafkaConsumer (0.8) fails with UnknownException

2016-01-25 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3288:
-

 Summary: KafkaConsumer (0.8) fails with UnknownException
 Key: FLINK-3288
 URL: https://issues.apache.org/jira/browse/FLINK-3288
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Reporter: Robert Metzger
Assignee: Robert Metzger


{code}
Exception for partition 19: kafka.common.UnknownException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
at 
org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:406)
at 
org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:397)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:55)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:218)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
{code}

One of the Kafka brokers reports:

{code}
[2016-01-25 12:45:30,195] ERROR [Replica Manager on Broker 2]: Error when 
processing fetch request for partition [WordCount,4] offset 335517 from 
consumer with correlation id 0. Possible cause: Attempt to read with a maximum 
offset (335515) less than the start offset (335517). 
(kafka.server.ReplicaManager)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Release Flink 1.0.0

2016-01-25 Thread Stephan Ewen
I agree with Gyula: one out-of-core state backend should be in. We are
pretty close to that. Aljoscha has done good work on extending test
coverage for state backends, so we should be pretty comfortable that it
works as well, once we integrate the new state backends with the tests.

There is a bit of work to do around extending the interface of the
key/value state. I would like to start a separate thread on that today or
tomorrow...



On Mon, Jan 25, 2016 at 12:16 PM, Gyula Fóra  wrote:

> Hi,
>
> I agree that getting Flink 1.0.0 out soon would be great as Flink is in a
> pretty solid state right now.
>
> I wonder whether it would make sense to include an out-of-core state
> backend in streaming core that can be used with partitioned/window states.
> I think if we are releasing 1.0.0, we should have a solid feature set for
> our strong streaming use-cases (in this case, stateful and windowed
> computations), and this should be a part of that.
>
> I know that Aljoscha is working on a solution for this which will probably
> involve a heavy refactoring of the state backend interfaces, and I am also
> working on a similar solution. Maybe it would be good to get at least one
> robust solution for this in, and definitely Aljoscha's refactoring of the
> interfaces.
>
> If we decide to do this, I think this needs 1-2 extra weeks of proper
> testing so this might delay the schedule a little bit.
>
> What do you think?
>
> Gyula
>
>
>
> Robert Metzger  wrote (on Mon, 25 Jan 2016, at 11:54):
>
> > Hi,
> >
> > I would like to release 1.0.0 in the next few weeks.
> > Looking at the JIRAs, I think we are going to close a lot of blocking
> > issues soon. How about we do a first release candidate on Wednesday,
> > 3 February?
> >
> > The first release candidate is most likely not going to pass the vote;
> > the primary goal will be collecting a list of issues we need to address.
> >
> > There is also a Wiki page for the 1.0 release:
> > https://cwiki.apache.org/confluence/display/FLINK/1.0+Release
> >
> > Please reply with -1 to this message if 3 February is too soon for the
> > first RC (it also means that we'll do a feature freeze around that time).
> >
>


[jira] [Created] (FLINK-3285) Skip Maven deployment of flink-java8

2016-01-25 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-3285:
-

 Summary: Skip Maven deployment of flink-java8
 Key: FLINK-3285
 URL: https://issues.apache.org/jira/browse/FLINK-3285
 Project: Flink
  Issue Type: Sub-task
  Components: Java API
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.0.0


{{flink-java8}} has a Scala dependency due to its dependency on 
{{flink-streaming-java}}. It deploys some examples and a test jar. However, the 
deployed jars are not necessary for developing or testing Java 8 lambda code.

I propose to disable deployment of the flink-java8 module and its test jar.
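
How this could be done (a minimal sketch, assuming the standard 
maven-deploy-plugin skip switch; the actual change may look different) in the 
module's pom.xml:

{code}
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-deploy-plugin</artifactId>
      <configuration>
        <!-- skip deployment of this module's artifacts -->
        <skip>true</skip>
      </configuration>
    </plugin>
  </plugins>
</build>
{code}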



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Features only compatible with hadoop 2.x

2016-01-25 Thread Stephan Ewen
At some point, the state backends should probably have their own parent-
and subprojects, similar to the batch-connectors and streaming-connectors.

Then you have the flexibility to include and exclude them as needed.

Stephan


On Sat, Jan 23, 2016 at 6:42 PM, Gyula Fóra  wrote:

> Alright, thanks.
>
> My dilemma is that I currently put all the out-of-core state stuff under
> streaming-contrib. So I either make the whole contrib package Hadoop-2-only,
> which is probably kind of awkward, or I move the main interfaces from
> contrib to somewhere else, and then I can put the HDFS backend stuff into a
> module that is Hadoop-2-only.
>
> Gyula
>
> Robert Metzger  wrote (on Fri, 22 Jan 2016, at 22:31):
>
> > Hi Gyula,
> >
> > will the out-of-core backend be in a separate Maven module? If so, you
> > can include the module only in the "hadoop-2" profile.
> >
> > As you can see in the main pom.xml, "flink-yarn" and "flink-fs-tests"
> > are also hadoop2-only modules:
> >
> > <profile>
> >    <id>hadoop-2</id>
> >    <activation>
> >       <property>
> >          <name>!hadoop.profile</name>
> >       </property>
> >    </activation>
> >    <properties>
> >       <hadoop.version>${hadoop-two.version}</hadoop.version>
> >       <shading-artifact.name>flink-shaded-hadoop2</shading-artifact.name>
> >       <shading-artifact-module.name>flink-shaded-hadoop2</shading-artifact-module.name>
> >    </properties>
> >    <modules>
> >       <module>flink-yarn</module>
> >       <module>flink-fs-tests</module>
> >    </modules>
> > </profile>
> >
> >
> > If the backend is not in a separate Maven module, you can use reflection.
> > Check out the RollingSink#reflectHflushOrSync() method. It calls "hflush"
> > only if the method is available ;)
> >
> >
> >
> > On Fri, Jan 22, 2016 at 10:23 PM, Gyula Fóra  wrote:
> >
> > > Hi,
> > >
> > > While developing the out-of-core state backend that will store state
> > > directly to HDFS (either TFiles or BloomMapFiles), I realised that some
> > > file formats and features I use are Hadoop 2.x only.
> > >
> > > What is the suggested way to handle features that use the Hadoop 2.x API?
> > > Can these be excluded from the Travis build for the Hadoop 1 profile
> > > somehow?
> > >
> > > Thanks,
> > > Gyula
> > >
> >
>
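
A minimal sketch of the reflection pattern Robert describes (the names below
are illustrative; the actual RollingSink#reflectHflushOrSync() implementation
may differ): look up hflush() once, call it when present (Hadoop 2.x), and
fall back to the older sync() otherwise.

{code}
import java.lang.reflect.Method;

import org.apache.hadoop.fs.FSDataOutputStream;

public final class HflushOrSync {

    // hflush() exists on Hadoop 2.x but not on Hadoop 1.x, so look it up
    // reflectively instead of referencing it at compile time.
    private static final Method HFLUSH = lookupHflush();

    private static Method lookupHflush() {
        try {
            return FSDataOutputStream.class.getMethod("hflush");
        } catch (NoSuchMethodException e) {
            return null; // running against Hadoop 1.x
        }
    }

    public static void hflushOrSync(FSDataOutputStream out) throws Exception {
        if (HFLUSH != null) {
            HFLUSH.invoke(out); // Hadoop 2.x
        } else {
            out.sync();         // Hadoop 1.x fallback
        }
    }
}
{code}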


[jira] [Created] (FLINK-3287) Flink Kafka Consumer fails due to Curator version conflict

2016-01-25 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-3287:
-

 Summary: Flink Kafka Consumer fails due to Curator version conflict
 Key: FLINK-3287
 URL: https://issues.apache.org/jira/browse/FLINK-3287
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.0.0
Reporter: Robert Metzger
Assignee: Robert Metzger


{code}
14:32:38,542 INFO  org.apache.flink.yarn.YarnJobManager 
 - Status of job 8eb92c1e3a1c050ecaccd50c6298ac7a (Flink Streaming Job) changed 
to FAILING.
java.lang.NoSuchMethodError: 
org.apache.curator.utils.ZKPaths.fixForNamespace(Ljava/lang/String;Ljava/lang/String;Z)Ljava/lang/String;
at 
org.apache.curator.framework.imps.NamespaceImpl.fixForNamespace(NamespaceImpl.java:82)
at 
org.apache.curator.framework.imps.NamespaceImpl.newNamespaceAwareEnsurePath(NamespaceImpl.java:87)
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.newNamespaceAwareEnsurePath(CuratorFrameworkImpl.java:457)
at 
org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler.getOffsetFromZooKeeper(ZookeeperOffsetHandler.java:122)
at 
org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler.seekFetcherToInitialOffsets(ZookeeperOffsetHandler.java:90)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.open(FlinkKafkaConsumer.java:401)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:89)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:305)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:227)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:567)
at java.lang.Thread.run(Thread.java:745)
{code}

This Flink snapshot version was built from master commit 
c7ada8d785087e0209071a8219ff841006b96639.
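
A common remedy for this class of NoSuchMethodError (a sketch only, not
necessarily the fix applied for this issue) is to relocate the conflicting
dependency into a private namespace with the maven-shade-plugin, so the
connector's Curator cannot clash with another Curator version on the classpath:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- rewrite Curator class references into a shaded namespace -->
          <relocation>
            <pattern>org.apache.curator</pattern>
            <shadedPattern>org.apache.flink.shaded.curator</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}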



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Flink Yarn Stack Size

2016-01-25 Thread Hilmi Yildirim

Hi,
I have solved the problem. I used an object which contained a huge
generic MutableList. When Kryo serializes the MutableList, it makes many
recursive calls, which leads to a very deep stack and to the
StackOverflowError. I changed the MutableList to a Set, and now Kryo does
not recurse as deeply, so the stack stays small :)
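
To illustrate the failure mode (a self-contained Java sketch with a
hypothetical Node class, not Hilmi's actual Scala code): Kryo's default
field-by-field serialization follows the next reference of each element, so
the recursion depth, and with it the stack depth, grows linearly with the
length of a linked structure.

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Output;

import java.io.ByteArrayOutputStream;

public class KryoRecursionSketch {

    // A linked structure: serializing the head recurses into 'next',
    // which recurses into its 'next', and so on, once per element.
    static class Node {
        int value;
        Node next;
    }

    public static void main(String[] args) {
        // Build a long chain; chain length == recursion depth.
        Node head = null;
        for (int i = 0; i < 1_000_000; i++) {
            Node n = new Node();
            n.value = i;
            n.next = head;
            head = n;
        }

        Kryo kryo = new Kryo();
        kryo.register(Node.class);
        Output output = new Output(new ByteArrayOutputStream());
        kryo.writeObject(output, head); // StackOverflowError for small -Xss
        output.close();
    }
}
{code}

A collection whose serializer iterates over the elements instead of recursing
through them avoids the deep stack, which matches the effect of switching to
a Set.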


Best Regards,
Hilmi

On 25.01.2016 at 16:34, Stephan Ewen wrote:

@Hilmi:

A stack size of 1GB is too much; maybe the JVM ignores it? Can you try
"-Xss2m" or so?

Is this to prevent stack overflow errors in some code?

Stephan


On Fri, Jan 22, 2016 at 2:21 PM, Robert Metzger  wrote:


Hi,

yes, looks like the argument is passed correctly.

By the way: why do you always send your answers twice to the mailing
list?

On Fri, Jan 22, 2016 at 2:17 PM, Hilmi Yildirim <
hilmi.yildirim.d...@gmail.com> wrote:


Hi Robert,
in the logs I found this

13:54:41,291 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  -  JVM Options:
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -Xms3072m
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -Xmx3072m
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -XX:MaxDirectMemorySize=3072m
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -Xss1024m
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  -


-Dlog.file=/var/log/hadoop-yarn/container/application_1453384265041_0030/container_1453384265041_0030_01_02/taskmanager.log

13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -Dlogback.configurationFile=file:logback.xml
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
  - -Dlog4j.configuration=file:log4j.properties


It seems that the argument is passed to the TaskManager. Or am I wrong?

Best Regards,
Hilmi



On 22.01.2016 at 12:25, Robert Metzger wrote:


Hi Hilmi,

it means that each JVM the YARN session starts (JobManager and
TaskManagers) is initialized with that parameter.

Can you check the logs of the TaskManager to see if the option has been
passed properly?

On Fri, Jan 22, 2016 at 12:22 PM, Hilmi Yildirim <

hilmi.yildi...@dfki.de>

wrote:

Hi,

I added the following line to my flink-conf.yaml:

env.java.opts: "-Xss1024m"

But it does not work. Does this line mean that every single slot of each
task manager uses a stack size of 1 GB?

Best Regards,
Hilmi

On 22.01.2016 at 11:46, Robert Metzger wrote:

Hi Hilmi,

Flink on YARN respects the env.java.opts configuration parameter
from the flink-conf.yaml.

2016-01-22 11:44 GMT+01:00 Hilmi Yildirim :

Hi,


does anybody know how I can increase the stack size of a YARN session?

Best Regards,
Hilmi

--
==
Hilmi Yildirim, M.Sc.
Researcher

DFKI GmbH
Intelligente Analytik für Massendaten
DFKI Projektbüro Berlin
Alt-Moabit 91c
D-10559 Berlin
Phone: +49 30 23895 1814

E-Mail: hilmi.yildi...@dfki.de

-
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Firmensitz: Trippstadter Strasse 122, D-67663 Kaiserslautern

Geschaeftsfuehrung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff

Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Amtsgericht Kaiserslautern, HRB 2313
-



Re: Flink Yarn Stack Size

2016-01-25 Thread Stephan Ewen
@Hilmi:

A stack size of 1GB is too much; maybe the JVM ignores it? Can you try
"-Xss2m" or so?

Is this to prevent stack overflow errors in some code?

Stephan


On Fri, Jan 22, 2016 at 2:21 PM, Robert Metzger  wrote:

> Hi,
>
> yes, looks like the argument is passed correctly.
>
> By the way: why do you always send your answers twice to the mailing
> list?
>
> On Fri, Jan 22, 2016 at 2:17 PM, Hilmi Yildirim <
> hilmi.yildirim.d...@gmail.com> wrote:
>
> > Hi Robert,
> > in the logs I found this
> >
> > 13:54:41,291 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  -  JVM Options:
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -Xms3072m
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -Xmx3072m
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -XX:MaxDirectMemorySize=3072m
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -Xss1024m
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  -
> >
> -Dlog.file=/var/log/hadoop-yarn/container/application_1453384265041_0030/container_1453384265041_0030_01_02/taskmanager.log
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -Dlogback.configurationFile=file:logback.xml
> > 13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
> >  - -Dlog4j.configuration=file:log4j.properties
> >
> >
> > It seems that the argument is passed to the TaskManager. Or am I wrong?
> >
> > Best Regards,
> > Hilmi
> >
> >
> >
> >> On 22.01.2016 at 12:25, Robert Metzger wrote:
> >
> >> Hi Hilmi,
> >>
> >> it means that each JVM the YARN session starts (JobManager and
> >> TaskManagers) is initialized with that parameter.
> >>
> >> Can you check the logs of the TaskManager to see if the option has been
> >> passed properly?
> >>
> >> On Fri, Jan 22, 2016 at 12:22 PM, Hilmi Yildirim <
> hilmi.yildi...@dfki.de>
> >> wrote:
> >>
> >>> Hi,
> >>> I added the following line to my flink-conf.yaml:
> >>>
> >>> env.java.opts: "-Xss1024m"
> >>>
> >>> But it does not work. Does this line mean that every single slot of each
> >>> task manager uses a stack size of 1 GB?
> >>>
> >>> Best Regards,
> >>> Hilmi
> >>>
> >>> On 22.01.2016 at 11:46, Robert Metzger wrote:
> >>>
> >>>> Hi Hilmi,
> >>>>
> >>>> Flink on YARN respects the env.java.opts configuration parameter
> >>>> from the flink-conf.yaml.
> >>>>
> >>>> 2016-01-22 11:44 GMT+01:00 Hilmi Yildirim :
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> does anybody know how I can increase the stack size of a YARN session?
> >>>>>
> >>>>> Best Regards,
> >>>>> Hilmi
> >>>>>
> >
>


Re: Release Flink 1.0.0

2016-01-25 Thread Matthias J. Sax
Hi,

I would also like to get the STOP signal in, but I do not have time to
work on it this week... According to Till's comments, only one last round
of reviewing is required, so I should be able to finish it by 3 February,
but I am not sure.

What do you think about it?

-Matthias

On 01/25/2016 04:29 PM, Aljoscha Krettek wrote:
> Hi,
> I think the refactoring of partitioned state and the WindowOperator-on-state
> work is almost ready. I also have the RocksDB state backend working. I’m
> running some tests on the cluster now and should be able to open a PR
> tomorrow.
> 
> 
>> On 25 Jan 2016, at 15:36, Stephan Ewen  wrote:
>>
>> I agree with Gyula: one out-of-core state backend should be in. We are
>> pretty close to that. Aljoscha has done good work on extending test
>> coverage for state backends, so we should be pretty comfortable that it
>> works as well, once we integrate the new state backends with the tests.
>>
>> There is a bit of work to do around extending the interface of the
>> key/value state. I would like to start a separate thread on that today or
>> tomorrow...
>>
>>
>>
>> On Mon, Jan 25, 2016 at 12:16 PM, Gyula Fóra  wrote:
>>
>>> Hi,
>>>
>>> I agree that getting Flink 1.0.0 out soon would be great as Flink is in a
>>> pretty solid state right now.
>>>
>>> I wonder whether it would make sense to include an out-of-core state
>>> backend in streaming core that can be used with partitioned/window states.
>>> I think if we are releasing 1.0.0, we should have a solid feature set for
>>> our strong streaming use-cases (in this case, stateful and windowed
>>> computations), and this should be a part of that.
>>>
>>> I know that Aljoscha is working on a solution for this which will probably
>>> involve a heavy refactoring of the state backend interfaces, and I am also
>>> working on a similar solution. Maybe it would be good to get at least one
>>> robust solution for this in, and definitely Aljoscha's refactoring of the
>>> interfaces.
>>>
>>> If we decide to do this, I think this needs 1-2 extra weeks of proper
>>> testing so this might delay the schedule a little bit.
>>>
>>> What do you think?
>>>
>>> Gyula
>>>
>>>
>>>
>>> Robert Metzger  wrote (on Mon, 25 Jan 2016, at 11:54):
>>>
>>>> Hi,
>>>>
>>>> I would like to release 1.0.0 in the next few weeks.
>>>> Looking at the JIRAs, I think we are going to close a lot of blocking
>>>> issues soon. How about we do a first release candidate on Wednesday,
>>>> 3 February?
>>>>
>>>> The first release candidate is most likely not going to pass the vote;
>>>> the primary goal will be collecting a list of issues we need to address.
>>>>
>>>> There is also a Wiki page for the 1.0 release:
>>>> https://cwiki.apache.org/confluence/display/FLINK/1.0+Release
>>>>
>>>> Please reply with -1 to this message if 3 February is too soon for the
>>>> first RC (it also means that we'll do a feature freeze around that time).

>>>
> 





[jira] [Created] (FLINK-3289) Double reference to flink-contrib

2016-01-25 Thread Stefano Baghino (JIRA)
Stefano Baghino created FLINK-3289:
--

 Summary: Double reference to flink-contrib
 Key: FLINK-3289
 URL: https://issues.apache.org/jira/browse/FLINK-3289
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.0.0
Reporter: Stefano Baghino
Priority: Trivial
 Fix For: 1.0.0


A [commit|https://github.com/apache/flink/commit/f94112fbbaaf2ecc6a9ecb314a5565203ce779a7#diff-b57e0c6ee4a76b887f0f6a00398aa33dL78] 
solving FLINK-1452 introduced the {{flink-contrib}} sub-project in the 
documentation. Another 
[commit|https://github.com/apache/flink/commit/e9bf13d8626099a1d6ddb6ebe98c50be848fe79e#diff-6ac553dce1ab24b343bc66cc6b5d80bfR100] 
solving FLINK-1712 duplicated the {{flink-contrib}} line, specifying it as a 
container for early-stage projects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3282) Add FlinkRelNode interface.

2016-01-25 Thread Chengxiang Li (JIRA)
Chengxiang Li created FLINK-3282:


 Summary: Add FlinkRelNode interface.
 Key: FLINK-3282
 URL: https://issues.apache.org/jira/browse/FLINK-3282
 Project: Flink
  Issue Type: New Feature
  Components: Table API
Reporter: Chengxiang Li
Assignee: Chengxiang Li


Add a FlinkRelNode interface for the physical plan translator and the Flink 
program generator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Scala 2.10/2.11 Maven dependencies

2016-01-25 Thread Ufuk Celebi

> On 25 Jan 2016, at 11:39, Maximilian Michels  wrote:
> 
> I won't have the time to finish the refactoring. Also, it will be
> pretty painful with all the large streaming pull requests being merged
> at the moment. If there are no objections, I would like to merge the
> Scala suffix changes with "flink-streaming-java" being Scala
> dependent. It will improve the experience for users in the long run.
> 
> After merging, I'll announce the new Scala suffixed modules. In
> addition, we could deploy an empty Maven module to overwrite the
> current snapshot version of flink-streaming-java. That would prevent
> conflicts with different snapshot versions, e.g. combining
> flink-streaming-java (instead of flink-streaming-java_2.11) with
> flink-runtime_2.11. Once 1.0.0 is out, the old artifacts won't be a
> problem anymore.
> 
> Any objections?

Sounds good to me!

– Ufuk
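
For users, the practical effect of the suffix change is that the streaming
dependency carries the Scala version in its artifactId, roughly like this
(illustrative coordinates, assuming a 1.0 snapshot build):

{code}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
{code}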



Re: Release Flink 1.0.0

2016-01-25 Thread Gyula Fóra
Hi,

I agree that getting Flink 1.0.0 out soon would be great as Flink is in a
pretty solid state right now.

I wonder whether it would make sense to include an out-of-core state
backend in streaming core that can be used with partitioned/window states.
I think if we are releasing 1.0.0, we should have a solid feature set for
our strong streaming use-cases (in this case, stateful and windowed
computations), and this should be a part of that.

I know that Aljoscha is working on a solution for this which will probably
involve a heavy refactoring of the state backend interfaces, and I am also
working on a similar solution. Maybe it would be good to get at least one
robust solution for this in, and definitely Aljoscha's refactoring of the
interfaces.

If we decide to do this, I think this needs 1-2 extra weeks of proper
testing so this might delay the schedule a little bit.

What do you think?

Gyula



Robert Metzger  wrote (on Mon, 25 Jan 2016, at 11:54):

> Hi,
>
> I would like to release 1.0.0 in the next few weeks.
> Looking at the JIRAs, I think we are going to close a lot of blocking
> issues soon. How about we do a first release candidate on Wednesday,
> 3 February?
>
> The first release candidate is most likely not going to pass the vote; the
> primary goal will be collecting a list of issues we need to address.
>
> There is also a Wiki page for the 1.0 release:
> https://cwiki.apache.org/confluence/display/FLINK/1.0+Release
>
> Please reply with -1 to this message if 3 February is too soon for the
> first RC (it also means that we'll do a feature freeze around that time).
>


[jira] [Created] (FLINK-3283) Failed Kafka 0.9 test on duplicate message

2016-01-25 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-3283:
--

 Summary: Failed Kafka 0.9 test on duplicate message
 Key: FLINK-3283
 URL: https://issues.apache.org/jira/browse/FLINK-3283
 Project: Flink
  Issue Type: Test
Reporter: Ufuk Celebi


On a branch with unrelated changes, 
{{Kafka09ITCase.testMultipleSourcesOnePartition:82->KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest}}
 failed.

https://s3.amazonaws.com/archive.travis-ci.org/jobs/104582020/log.txt

{code}
Caused by: java.lang.Exception: Received a duplicate: 1712
at 
org.apache.flink.streaming.connectors.kafka.testutils.ValidatingExactlyOnceSink.invoke(ValidatingExactlyOnceSink.java:53)
at 
org.apache.flink.streaming.connectors.kafka.testutils.ValidatingExactlyOnceSink.invoke(ValidatingExactlyOnceSink.java:30)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:37)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:166)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:63)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:232)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
{code}
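
Conceptually, the validating sink fails on the first repeated element. A
minimal sketch of that check (hypothetical class; the actual
ValidatingExactlyOnceSink may differ):

{code}
import java.util.BitSet;

public class DuplicateChecker {

    // one bit per expected element value
    private final BitSet seen = new BitSet();

    public void add(int value) throws Exception {
        if (seen.get(value)) {
            // exactly-once is violated as soon as a value arrives twice
            throw new Exception("Received a duplicate: " + value);
        }
        seen.set(value);
    }
}
{code}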



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Release Flink 1.0.0

2016-01-25 Thread Robert Metzger
Hi,

I would like to release 1.0.0 in the next few weeks.
Looking at the JIRAs, I think we are going to close a lot of blocking
issues soon. How about we do a first release candidate on Wednesday,
3 February?

The first release candidate is most likely not going to pass the vote; the
primary goal will be collecting a list of issues we need to address.

There is also a Wiki page for the 1.0 release:
https://cwiki.apache.org/confluence/display/FLINK/1.0+Release

Please reply with -1 to this message if 3 February is too soon for the
first RC (it also means that we'll do a feature freeze around that time).


[jira] [Created] (FLINK-3284) Broken web link: Hadoop compatibility in Flink

2016-01-25 Thread Slim Baltagi (JIRA)
Slim Baltagi created FLINK-3284:
---

 Summary: Broken web link: Hadoop compatibility in Flink 
 Key: FLINK-3284
 URL: https://issues.apache.org/jira/browse/FLINK-3284
 Project: Flink
  Issue Type: Task
  Components: Documentation
Affects Versions: 0.10.1
Reporter: Slim Baltagi
Priority: Trivial


On the 'File Systems' page 
(https://ci.apache.org/projects/flink/flink-docs-master/apis/filesystems.html), 
the link 
https://ci.apache.org/projects/flink/flink-docs-master/apis/hadoop_compatibility.html
 under 'Read more about Hadoop compatibility in Flink.' is broken.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)