I will commit something to flink-storm-compatibility tomorrow that
contains some internal package restructuring. I think renaming the
three modules in the same commit would be a smart move, since both
changes result in merge conflicts when rebasing open PRs; that way we
limit the pain to a single occasion. If there are no objections, I will
commit those changes tomorrow.

-Matthias

On 10/01/2015 09:52 PM, Henry Saputra wrote:
> +1
> 
> I like the idea of moving the "staging" projects into appropriate modules.
> 
> While we are at it, I would like to propose renaming
> "flink-hadoop-compatibility" to "flink-hadoop". It has been on my bucket
> list, but it would be nice if it were part of the re-org.
> Supporting Hadoop in the connector implicitly means compatibility with Hadoop.
> The same goes for renaming "flink-storm-compatibility" to "flink-storm".
> 
> - Henry
> 
> On Thu, Oct 1, 2015 at 3:25 AM, Stephan Ewen <se...@apache.org> wrote:
>> Hi all!
>>
>> We are making good headway with reworking the last parts of the Window API.
>> After that, the streaming API should be ready to be pulled out of staging.
>>
>> Since we are reorganizing the projects as part of that, I would shuffle a
>> few more things around to bring the structure up to date.
>>
>> With this restructuring, I would like to get rid of the "flink-staging"
>> project. Anyone who only uses the Maven artifacts sees no difference
>> whether a project is in "staging" or not, so that directory structure
>> does not help much.
>> On the other hand, projects have a tendency to linger in staging forever
>> (like avro, spargel, hbase, jdbc, ...).
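>>
>> As an illustration (the version below is only a placeholder), a user's
>> build references just the artifact coordinates and never the directory
>> the module lives in, e.g. with sbt:
>>
>>   // the coordinates stay stable no matter where the module sits in the
>>   // source tree ("staging" or not); the version is illustrative only
>>   libraryDependencies += "org.apache.flink" % "flink-avro" % "0.10-SNAPSHOT"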
>>
>> The new structure could be
>>
>> flink-core
>> flink-java
>> flink-scala
>> flink-streaming-core
>> flink-streaming-scala
>>
>> flink-runtime
>> flink-runtime-web
>> flink-optimizer
>> flink-clients
>>
>> flink-shaded
>>   -> flink-shaded-hadoop
>>   -> flink-shaded-hadoop2
>>   -> flink-shaded-include-yarn-tests
>>   -> flink-shaded-curator
>>
>> flink-examples
>>   -> (have all examples, Scala and Java, Batch and Streaming)
>>
>> flink-batch-connectors
>>   -> flink-avro
>>   -> flink-jdbc
>>   -> flink-hadoop-compatibility
>>   -> flink-hbase
>>   -> flink-hcatalog
>>
>> flink-streaming-connectors
>>   -> flink-connector-twitter
>>   -> flink-streaming-examples
>>   -> flink-connector-flume
>>   -> flink-connector-kafka
>>   -> flink-connector-elasticsearch
>>   -> flink-connector-rabbitmq
>>   -> flink-connector-filesystem
>>
>> flink-libraries
>>   -> flink-gelly
>>   -> flink-gelly-scala
>>   -> flink-ml
>>   -> flink-table
>>   -> flink-language-binding
>>   -> flink-python
>>
>>
>> flink-scala-shell
>>
>> flink-test-utils
>> flink-tests
>> flink-fs-tests
>>
>> flink-contrib
>>   -> flink-storm-compatibility
>>   -> flink-storm-compatibility-examples
>>   -> flink-streaming-utils
>>   -> flink-tweet-inputformat
>>   -> flink-operator-stats
>>   -> flink-tez
>>
>> flink-quickstart
>>   -> flink-quickstart-java
>>   -> flink-quickstart-scala
>>   -> flink-tez-quickstart
>>
>> flink-yarn
>> flink-yarn-tests
>>
>> flink-dist
>>
>> flink-benchmark
>>
>>
>> Let me know if that makes sense!
>>
>> Greetings,
>> Stephan
