Github user asfgit closed the pull request at:
https://github.com/apache/flink/pull/450
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77361454
Well, I'm reducing the number of files because it's a good idea in general
to remove `flink-hbase` and `flink-streaming-connectors` from dist.
And I'm going to cha
Github user uce commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77344362
I am confused. Which change do you mean? My understanding was the
following: we are going to build with openjdk6 to resolve the issue *instead*
of reducing the number of files
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77344029
No, the change is not yet merged to master, and there is an unresolved issue
with it.
The affected user needs the feature until next Tuesday .. so I have a few
da
Github user uce commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77343809
I agree. Reducing the number of files would just postpone the problem
anyways.
This PR can be closed then, right?
Github user StephanEwen commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77157047
Makes sense, +1 for building the releases with openjdk6 (half of the code
is compiled by the Scala Compiler anyways)
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77117018
According to this discussion:
https://issues.apache.org/jira/browse/SPARK-1520
Jars built by Java 6 can have more than 65k entries.
If that's the case, we could als
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/450#issuecomment-77031055
Damn. The good news is that my test is working; the bad news is that the
Hadoop 2.6.0 version is causing an oversized uberjar:
> The number of files in the uberjar (
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/450
[FLINK-1637] Reduce number of files in uberjar for java 6
It seems that we've recently surpassed the magic number of 65536 files in
our YARN uberjar.
Java 6 is not able to read jar files with so
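The limit the thread keeps referring to comes from the classic (non-Zip64) ZIP central directory, which stores the entry count in a 16-bit field; jars beyond 65535 entries need Zip64, which Java 6 cannot read. As a rough illustration (not part of this PR or the Flink codebase), the sketch below builds a tiny throwaway jar and counts its entries with `JarFile.size()`; the class name, helper, and constant are my own.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarEntryLimit {
    // The classic ZIP central directory stores the entry count in a
    // 16-bit field, so 65535 is the ceiling a Java 6 runtime can read.
    static final int JAVA6_MAX_ENTRIES = 65535;

    static boolean exceedsJava6Limit(int entryCount) {
        return entryCount > JAVA6_MAX_ENTRIES;
    }

    public static void main(String[] args) throws Exception {
        // Build a small throwaway jar so the example is self-contained.
        File tmp = File.createTempFile("demo", ".jar");
        tmp.deleteOnExit();
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(tmp))) {
            for (int i = 0; i < 3; i++) {
                out.putNextEntry(new JarEntry("entry" + i + ".txt"));
                out.closeEntry();
            }
        }
        // JarFile.size() reports the number of entries in the central directory.
        try (JarFile jar = new JarFile(tmp)) {
            int count = jar.size();
            System.out.println("entries=" + count);                         // prints entries=3
            System.out.println("exceedsJava6Limit=" + exceedsJava6Limit(count));
        }
    }
}
```

Pointing the same `JarFile.size()` check at a real uberjar is a quick way to verify whether it has crossed the 65535-entry boundary before a Java 6 user hits it at runtime.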