[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-27 Thread mlipkovich
Github user mlipkovich commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r141424281
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --

Thanks for your comment, Aljoscha.
So there are at least three ways to achieve this: mark the dependency as 'provided', move the Hadoop Snappy codec related classes to the flink-java module, or move them to a separate module as suggested by @haohui, but I'm not sure what should go inside that module.
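For the first option, a sketch of what the pom change could look like. The dependency coordinates are taken from the diff above; the scope element is the suggestion under discussion, not code from the pull request:

```xml
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-shaded-hadoop2</artifactId>
	<version>${project.version}</version>
	<!-- suggested: available at compile time, but kept off the
	     transitive runtime classpath of flink-core consumers -->
	<scope>provided</scope>
</dependency>
```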


---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-27 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r141312154
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --

As of recently, everything except the Hadoop compatibility package is free of Hadoop dependencies. It should stay that way so that we can provide a Flink distribution without Hadoop dependencies, because those dependencies were causing problems for some users.


---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-25 Thread mlipkovich
Github user mlipkovich commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r140833786
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --

Yes, it is a good point to make Hadoop Snappy the default codec. I think we could still support Xerial Snappy since it comes for free. I will make these changes once we agree on the dependencies.

Regarding the separate module: what would be the content of that module?
As I understand it, a user who wants to read HDFS files will need the flink-java module anyway, since it contains Hadoop wrappers like HadoopInputSplit. Do you think it makes sense to put this Hadoop codec there?


---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-25 Thread haohui
Github user haohui commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r140696992
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --


Internally we have several users who use Flink to read files generated by Hadoop (e.g., lz4 / gz / snappy). I think support for the Hadoop formats is quite important.

I'm not sure supporting the Xerial Snappy format is a good idea. The two file formats are actually incompatible -- it would be quite confusing for users to find out that they can't access the files using Spark / MR / Hive due to a missing configuration.

I suggest that we at least make the Hadoop file format the default -- or just get rid of the Xerial version of the file format altogether.

Putting the dependency in provided scope sounds fine to me -- if we need even tighter control over the dependency, we can start thinking about having a separate module for it.

What do you think?



---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-24 Thread mlipkovich
Github user mlipkovich commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r140652438
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --

What do you think about adding this dependency with compile-time scope only?

Regarding the difference between the codecs: as I understand it, the issue is that Snappy-compressed files are not splittable. So Hadoop splits raw files into blocks and compresses each block separately using regular Snappy. If you download a whole Hadoop-Snappy-compressed file, regular Snappy will not be able to decompress it, since it is not aware of the block boundaries.
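The block framing described above can be sketched as follows. This is a deliberately simplified model (one compressed chunk per block, and the payloads here are not actually Snappy-compressed); it is meant to illustrate why a raw Snappy decompressor cannot make sense of the length-prefixed block stream, not to reproduce Hadoop's exact BlockCompressorStream layout:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Simplified Hadoop-style block framing: per block, a 4-byte big-endian
// uncompressed size, a 4-byte compressed size, then the compressed bytes.
// A raw (xerial-style) Snappy decompressor has no notion of these length
// prefixes, which is why the two formats are incompatible.
public class HadoopSnappyFraming {

    /** Walks the framing and returns the recorded uncompressed block sizes. */
    static List<Integer> readBlockSizes(byte[] framed) throws IOException {
        List<Integer> sizes = new ArrayList<>();
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(framed));
        while (in.available() > 0) {
            int uncompressed = in.readInt(); // original block length
            int compressed = in.readInt();   // length of the compressed payload
            in.skipBytes(compressed);        // a real reader would decompress here
            sizes.add(uncompressed);
        }
        return sizes;
    }

    /** Builds a synthetic framed stream (payloads left uncompressed for brevity). */
    static byte[] frame(byte[]... blocks) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (byte[] b : blocks) {
            out.writeInt(b.length); // pretend uncompressed size == payload size
            out.writeInt(b.length);
            out.write(b);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = frame(new byte[100], new byte[42]);
        System.out.println(readBlockSizes(framed)); // [100, 42]
    }
}
```

Feeding `framed` to a plain Snappy decompressor would fail immediately: the first four bytes are a length prefix, not a Snappy header.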


---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-24 Thread haohui
Github user haohui commented on a diff in the pull request:

https://github.com/apache/flink/pull/4683#discussion_r140649772
  
--- Diff: flink-core/pom.xml ---
@@ -52,6 +52,12 @@ under the License.
 		<artifactId>flink-shaded-asm</artifactId>
 	</dependency>
 
+	<dependency>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-shaded-hadoop2</artifactId>
+		<version>${project.version}</version>
+	</dependency>
--- End diff --

Including Hadoop as a dependency in flink-core can be problematic for a number of downstream projects.

I wonder, what is the exact difference between the Hadoop and vanilla Snappy codecs? Is it just the additional framing that Hadoop adds around the Snappy codec?





---


[GitHub] flink pull request #4683: [FLINK-5944] Support reading of Snappy files

2017-09-19 Thread mlipkovich
GitHub user mlipkovich opened a pull request:

https://github.com/apache/flink/pull/4683

[FLINK-5944] Support reading of Snappy files

## What is the purpose of the change

Support reading of Snappy compressed text files (both Xerial and Hadoop Snappy)

## Brief change log

  - *Added InputStreamFactories for Xerial and Hadoop Snappy*
  - *Added a config parameter to control whether Xerial or Hadoop Snappy should be used*
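The second bullet can be sketched roughly as follows. The option key, enum, and method names here are invented for illustration; they are not the actual classes or configuration keys added by this pull request:

```java
// Hypothetical sketch of a config-driven choice between the two Snappy
// variants. Names are illustrative only, not the PR's actual API.
public class SnappyVariantConfig {

    enum SnappyVariant { XERIAL, HADOOP }

    /**
     * Resolves which Snappy flavor to use from a hypothetical boolean
     * config value, e.g. "compression.snappy.use-hadoop-framing".
     */
    static SnappyVariant resolve(String configValue) {
        return Boolean.parseBoolean(configValue)
                ? SnappyVariant.HADOOP
                : SnappyVariant.XERIAL;
    }

    public static void main(String[] args) {
        System.out.println(resolve("true"));  // HADOOP
        System.out.println(resolve("false")); // XERIAL
    }
}
```

The input format would then pick the matching InputStreamFactory based on the resolved variant.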

## Verifying this change

  - *Manually verified the change by running word count on text files compressed with the different Snappy variants*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): no
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no

## Documentation

  - Does this pull request introduce a new feature? yes
  - If yes, how is the feature documented? JavaDocs 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mlipkovich/flink FLINK-5944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4683.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4683


commit c4d4016f1e6b44833d24994c97532b4c5243e4d2
Author: Mikhail Lipkovich 
Date:   2017-09-19T13:34:10Z

[FLINK-5944] Support reading of Snappy files




---