[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-08-30 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
That's great, thanks Aljoscha! 


[GitHub] flink pull request #2269: [FLINK-4190] Generalise RollingSink to work with a...

2016-08-30 Thread joshfg
Github user joshfg closed the pull request at:

https://github.com/apache/flink/pull/2269


[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-08-26 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
Hi Aljoscha, just wanted to remind you about this - any idea when the 
changes will be merged in? Thanks!


[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-07-25 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
OK, I've migrated `BucketingSinkITCase` and 
`BucketingSinkMultipleActiveBucketsCase` over to `BucketingSinkTest` using the 
test harness with `TimeServiceProvider`. I've left the two fault-tolerance IT 
cases as they are, because it looks like they need to run a proper Flink job 
with custom sources/mappers. Does that sound OK?

If you think it's ready to merge, should I squash all the commits into a 
single commit for FLINK-4190?
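
For context, here is a rough sketch of what the harness-based test style looks 
like, assembled from the classes visible elsewhere in this thread 
(`BucketingSink`, `StreamSink`, `OneInputStreamOperatorTestHarness`). It is not 
code from the PR: the fluent setters and `BasePathBucketer` reflect the 
`BucketingSink` API as it appears in later Flink releases, and the 
`TimeServiceProvider` wiring mentioned above is omitted because its exact form 
varies between Flink versions.
```java
import org.apache.flink.streaming.api.operators.StreamSink;
import org.apache.flink.streaming.connectors.fs.bucketing.BasePathBucketer;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
import org.junit.Test;

public class BucketingSinkHarnessSketch {

    @Test
    public void writesElementsThroughTheHarness() throws Exception {
        // Configure the sink to write everything under a single base path.
        BucketingSink<String> sink = new BucketingSink<String>("/tmp/bucketing-sketch")
            .setBucketer(new BasePathBucketer<String>())
            .setPendingPrefix("")
            .setPendingSuffix("");

        // Wrap the sink function in a StreamSink operator and drive it with the
        // test harness, so no cluster or full DataStream job is needed.
        OneInputStreamOperatorTestHarness<String, Object> harness =
            new OneInputStreamOperatorTestHarness<>(new StreamSink<>(sink));

        harness.open();
        harness.processElement(new StreamRecord<>("element-1", 1L)); // (value, timestamp)
        harness.processElement(new StreamRecord<>("element-2", 2L));
        harness.close(); // flushes and finalises any open part files
    }
}
```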


[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-07-25 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
That works, thanks! :)


[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-07-25 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
Ah, I see, that makes sense.
I've begun refactoring the tests here: 
https://github.com/joshfg/flink/blob/flink-4190/flink-streaming-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkTest.java

However, I've run into this strange exception in the 
`testNonRollingSequenceFileWithoutCompressionWriter` test:
```
java.lang.IllegalStateException: Key Class has not been initialized.
at 
org.apache.flink.streaming.connectors.fs.SequenceFileWriter.open(SequenceFileWriter.java:84)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.openNewPartFile(BucketingSink.java:500)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.invoke(BucketingSink.java:396)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:39)
at 
org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness.processElement(OneInputStreamOperatorTestHarness.java:226)
at 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkTest.testNonRollingSequenceFileWithoutCompressionWriter(BucketingSinkTest.java:220)
```

Any ideas what could be causing that? I've copied the HDFS cluster 
initialisation exactly as it was in the original tests...
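
A hedged note on the likely cause, inferred from the stack trace rather than 
from any fix confirmed in this thread: `SequenceFileWriter.open()` fails when 
its key class was never set, and that class normally arrives via 
`setInputType(...)`, which `DataStream#addSink` invokes on sink functions that 
implement `InputTypeConfigurable`. A harness-driven test bypasses `addSink`, so 
the call has to be made by hand. The sketch below assumes `BucketingSink` 
forwards `setInputType` to its writer the way `RollingSink` does; the concrete 
key/value types and the base path are illustrative only.
```java
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.connectors.fs.SequenceFileWriter;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class SequenceFileTypeInfoSketch {

    // Hypothetical helper: configures a sequence-file sink the way
    // DataStream#addSink would, i.e. by handing the input TypeInformation to
    // the sink (and, through it, to the SequenceFileWriter).
    static BucketingSink<Tuple2<IntWritable, Text>> configureSink(String basePath) {
        BucketingSink<Tuple2<IntWritable, Text>> sink =
            new BucketingSink<Tuple2<IntWritable, Text>>(basePath)
                .setWriter(new SequenceFileWriter<IntWritable, Text>());

        TypeInformation<Tuple2<IntWritable, Text>> typeInfo =
            TypeExtractor.getForObject(new Tuple2<>(new IntWritable(0), new Text("")));

        // Without this call the writer's key class stays null and open() throws
        // "Key Class has not been initialized."
        sink.setInputType(typeInfo, new ExecutionConfig());
        return sink;
    }
}
```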





[GitHub] flink issue #2269: [FLINK-4190] Generalise RollingSink to work with arbitrar...

2016-07-25 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2269
  
Thanks! Oh nice, this looks like a better solution for checking for bucket 
inactivity...
For the tests, is there any reason not to fold all of them into the new 
`BucketingSinkTest`? Currently there are four: `BucketingSinkITCase`, 
`BucketingSinkFaultToleranceITCase`, `BucketingSinkFaultTolerance2ITCase` and 
`BucketingSinkMultipleActiveBucketsCase`.

Also, do you know what the purpose of using `MiniDFSCluster` in the tests is? 
Could we rewrite the other tests in the same way as your example test, without 
running a local HDFS cluster?
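
For readers following along: `MiniDFSCluster` is Hadoop's in-process HDFS, 
presumably used here so the sink is exercised against real HDFS semantics 
rather than the local file system. A rough sketch of how such tests typically 
bring it up, not taken from the PR itself (the base directory and class name 
are illustrative):
```java
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsSketch {

    public static void main(String[] args) throws Exception {
        // Start an in-process HDFS cluster; its data lives under a throw-away
        // local directory.
        Configuration conf = new Configuration();
        File baseDir = new File(System.getProperty("java.io.tmpdir"), "minidfs")
            .getAbsoluteFile();
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());

        MiniDFSCluster hdfsCluster = new MiniDFSCluster.Builder(conf).build();
        FileSystem dfs = hdfsCluster.getFileSystem();

        // Tests would point the sink's base path at a path under this URI.
        System.out.println("Mini HDFS running at " + dfs.getUri());

        hdfsCluster.shutdown();
    }
}
```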


[GitHub] flink pull request #2269: [FLINK-4190] Generalise RollingSink to work with a...

2016-07-19 Thread joshfg
GitHub user joshfg opened a pull request:

https://github.com/apache/flink/pull/2269

[FLINK-4190] Generalise RollingSink to work with arbitrary buckets

I've created a new bucketing package with a `BucketingSink`, which improves 
on the existing `RollingSink` by enabling arbitrary bucketing, rather than just 
rolling files based on system time.

The main changes to support this are:

- The `Bucketer` interface now takes the sink's input element as a generic 
parameter, enabling us to bucket based on attributes of the sink's input.
- While maintaining the same rolling mechanics as the existing 
implementation (e.g. rolling when the file size reaches a threshold), the sink 
implementation can now have many 'active' buckets at any point in time. The 
checkpointing mechanics have been extended to maintain the state of 
multiple active buckets and files, instead of just one.
- For use cases where the buckets being written to change over time, 
the sink now needs to determine when a bucket has become 'inactive', in order 
to flush and close the file. In the existing implementation, this is simply 
when the bucket path changes. Instead, a bucket is now considered inactive if 
it hasn't been written to recently. To support this there are two additional 
user-configurable settings: `inactiveBucketCheckInterval` and 
`inactiveBucketThreshold` (see the sketch after this list).
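
To make the bucketing and inactivity settings concrete, here is a minimal 
usage sketch (not code from the PR): the `Bucketer` signature and the fluent 
setters reflect the API as it appears in later Flink releases, the `MyEvent` 
type and `UserIdBucketer` are hypothetical, and the thresholds are arbitrary 
example values.
```java
import org.apache.flink.streaming.connectors.fs.Clock;
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.fs.Path;

public class BucketingSketch {

    // Hypothetical input type, only here to make the sketch self-contained.
    public static class MyEvent {
        public String userId;
        public String payload;
    }

    // A Bucketer that routes each element by an attribute of the element
    // itself, which is what passing the input element as a generic parameter
    // enables.
    public static class UserIdBucketer implements Bucketer<MyEvent> {
        private static final long serialVersionUID = 1L;

        @Override
        public Path getBucketPath(Clock clock, Path basePath, MyEvent element) {
            // One directory per user id, independent of processing time.
            return new Path(basePath, "user-" + element.userId);
        }
    }

    public static BucketingSink<MyEvent> buildSink() {
        return new BucketingSink<MyEvent>("hdfs:///data/events")
            .setBucketer(new UserIdBucketer())
            .setBatchSize(128L * 1024L * 1024L)        // roll a part file at ~128 MB
            .setInactiveBucketCheckInterval(60_000L)   // how often to scan for idle buckets
            .setInactiveBucketThreshold(10 * 60_000L); // close buckets idle for ~10 minutes
    }
}
```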

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joshfg/flink flink-4190

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2269.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2269


commit 2011e47de6c8b3c087772c84b4b3e44210dbe50c
Author: Josh <joshformangorn...@gmail.com>
Date:   2016-07-12T17:38:54Z

[FLINK-4190] Generalise RollingSink to work with arbitrary buckets




[GitHub] flink issue #2157: [FLINK-4115] FsStateBackend filesystem verification can c...

2016-07-05 Thread joshfg
Github user joshfg commented on the issue:

https://github.com/apache/flink/pull/2157
  
Cool, thanks!

