@Matthias: +1.

On Fri, Oct 2, 2015 at 11:27 AM, Stephan Ewen <se...@apache.org> wrote:

> @matthias +1 for that approach
>
> On Fri, Oct 2, 2015 at 11:21 AM, Matthias J. Sax <mj...@apache.org> wrote:
>
> > I think renaming "flink-storm-compatibility-core" to just "flink-storm"
> > would be the cleanest solution.
> >
> > So in flink-contrib there would be two modules:
> >   - flink-storm
> >   - flink-storm-examples
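> >
> > For users, the rename only changes the Maven artifactId they depend on;
> > roughly (the version below is just a placeholder, the groupId stays
> > org.apache.flink):
> >
> >   <dependency>
> >     <groupId>org.apache.flink</groupId>
> >     <artifactId>flink-storm</artifactId>
> >     <version>${flink.version}</version>
> >   </dependency>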
> >
> > Please let me know if you have any objections to it.
> >
> > -Matthias
> >
> > On 10/02/2015 10:45 AM, Matthias J. Sax wrote:
> > > Sure. Will do that.
> > >
> > > -Matthias
> > >
> > > On 10/02/2015 10:35 AM, Stephan Ewen wrote:
> > >> @Matthias: How about getting rid of the storm-compatibility-parent and
> > >> making the core and examples projects directly projects in "contrib"
> > >>
> > >>> On Fri, Oct 2, 2015 at 10:34 AM, Till Rohrmann <trohrm...@apache.org>
> > >>> wrote:
> > >>>
> > >>> +1 for the new project structure. Getting rid of our code dump is a
> > >>> good thing.
> > >>>
> > >>> On Fri, Oct 2, 2015 at 10:25 AM, Maximilian Michels <m...@apache.org>
> > >>> wrote:
> > >>>
> > >>>> +1 Matthias, let's limit the overhead this has for the module
> > >>>> maintainers.
> > >>>>
> > >>>> On Fri, Oct 2, 2015 at 12:17 AM, Matthias J. Sax <mj...@apache.org>
> > >>>> wrote:
> > >>>>> I will commit something to flink-storm-compatibility tomorrow that
> > >>>>> contains some internal package restructuring. I think renaming the
> > >>>>> three modules in this commit would be a smart move: both changes
> > >>>>> cause merge conflicts when rebasing open PRs, so this limits the
> > >>>>> pain to a single rebase. If there are no objections, I will commit
> > >>>>> those changes tomorrow.
> > >>>>>
> > >>>>> -Matthias
> > >>>>>
> > >>>>> On 10/01/2015 09:52 PM, Henry Saputra wrote:
> > >>>>>> +1
> > >>>>>>
> > >>>>>> I like the idea of moving the "staging" projects into appropriate
> > >>>>>> modules.
> > >>>>>>
> > >>>>>> While we are at it, I would like to propose renaming
> > >>>>>> "flink-hadoop-compatibility" to "flink-hadoop". It is on my bucket
> > >>>>>> list anyway, but it would be nice to do it as part of the re-org.
> > >>>>>> Supporting Hadoop in the connector implicitly means compatibility
> > >>>>>> with Hadoop.
> > >>>>>> The same goes for renaming "flink-storm-compatibility" to
> > >>>>>> "flink-storm".
> > >>>>>>
> > >>>>>> - Henry
> > >>>>>>
> > >>>>>> On Thu, Oct 1, 2015 at 3:25 AM, Stephan Ewen <se...@apache.org>
> > >>> wrote:
> > >>>>>>> Hi all!
> > >>>>>>>
> > >>>>>>> We are making good headway with reworking the last parts of the
> > >>>>>>> Window API. After that, the streaming API should be good to be
> > >>>>>>> pulled out of staging.
> > >>>>>>>
> > >>>>>>> Since we are reorganizing the projects as part of that, I would
> > >>>>>>> shift a few more things around to bring the structure up to date.
> > >>>>>>>
> > >>>>>>> In this restructure, I would like to get rid of the "flink-staging"
> > >>>>>>> project. Anyone who only uses the Maven artifacts sees no difference
> > >>>>>>> whether a project is in "staging" or not, so that directory
> > >>>>>>> structure does not help much. On the other hand, projects have a
> > >>>>>>> tendency to linger in staging forever (like avro, spargel, hbase,
> > >>>>>>> jdbc, ...).
> > >>>>>>>
> > >>>>>>> The new structure could be
> > >>>>>>>
> > >>>>>>> flink-core
> > >>>>>>> flink-java
> > >>>>>>> flink-scala
> > >>>>>>> flink-streaming-core
> > >>>>>>> flink-streaming-scala
> > >>>>>>>
> > >>>>>>> flink-runtime
> > >>>>>>> flink-runtime-web
> > >>>>>>> flink-optimizer
> > >>>>>>> flink-clients
> > >>>>>>>
> > >>>>>>> flink-shaded
> > >>>>>>>   -> flink-shaded-hadoop
> > >>>>>>>   -> flink-shaded-hadoop2
> > >>>>>>>   -> flink-shaded-include-yarn-tests
> > >>>>>>>   -> flink-shaded-curator
> > >>>>>>>
> > >>>>>>> flink-examples
> > >>>>>>>   -> (have all examples, Scala and Java, Batch and Streaming)
> > >>>>>>>
> > >>>>>>> flink-batch-connectors
> > >>>>>>>   -> flink-avro
> > >>>>>>>   -> flink-jdbc
> > >>>>>>>   -> flink-hadoop-compatibility
> > >>>>>>>   -> flink-hbase
> > >>>>>>>   -> flink-hcatalog
> > >>>>>>>
> > >>>>>>> flink-streaming-connectors
> > >>>>>>>   -> flink-connector-twitter
> > >>>>>>>   -> flink-streaming-examples
> > >>>>>>>   -> flink-connector-flume
> > >>>>>>>   -> flink-connector-kafka
> > >>>>>>>   -> flink-connector-elasticsearch
> > >>>>>>>   -> flink-connector-rabbitmq
> > >>>>>>>   -> flink-connector-filesystem
> > >>>>>>>
> > >>>>>>> flink-libraries
> > >>>>>>>   -> flink-gelly
> > >>>>>>>   -> flink-gelly-scala
> > >>>>>>>   -> flink-ml
> > >>>>>>>   -> flink-table
> > >>>>>>>   -> flink-language-binding
> > >>>>>>>   -> flink-python
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> flink-scala-shell
> > >>>>>>>
> > >>>>>>> flink-test-utils
> > >>>>>>> flink-tests
> > >>>>>>> flink-fs-tests
> > >>>>>>>
> > >>>>>>> flink-contrib
> > >>>>>>>   -> flink-storm-compatibility
> > >>>>>>>   -> flink-storm-compatibility-examples
> > >>>>>>>   -> flink-streaming-utils
> > >>>>>>>   -> flink-tweet-inputformat
> > >>>>>>>   -> flink-operator-stats
> > >>>>>>>   -> flink-tez
> > >>>>>>>
> > >>>>>>> flink-quickstart
> > >>>>>>>   -> flink-quickstart-java
> > >>>>>>>   -> flink-quickstart-scala
> > >>>>>>>   -> flink-tez-quickstart
> > >>>>>>>
> > >>>>>>> flink-yarn
> > >>>>>>> flink-yarn-tests
> > >>>>>>>
> > >>>>>>> flink-dist
> > >>>>>>>
> > >>>>>>> flink-benchmark
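> > >>>>>>>
> > >>>>>>> In Maven terms this is mostly just a flatter <modules> list in the
> > >>>>>>> root pom (a rough sketch, not the final poms):
> > >>>>>>>
> > >>>>>>>   <!-- root pom.xml: former staging projects are listed directly,
> > >>>>>>>        no flink-staging parent in between (illustrative excerpt) -->
> > >>>>>>>   <modules>
> > >>>>>>>     <module>flink-core</module>
> > >>>>>>>     <module>flink-streaming-core</module>
> > >>>>>>>     <module>flink-batch-connectors</module>
> > >>>>>>>     <module>flink-streaming-connectors</module>
> > >>>>>>>     <module>flink-libraries</module>
> > >>>>>>>     <module>flink-contrib</module>
> > >>>>>>>     <!-- ... -->
> > >>>>>>>   </modules>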
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> Let me know if that makes sense!
> > >>>>>>>
> > >>>>>>> Greetings,
> > >>>>>>> Stephan
> > >>>>>
> > >>>>
> > >>>
> > >>
> > >
> >
> >
>
