GitHub user StephanEwen opened a pull request:

    https://github.com/apache/flink/pull/3877

    [backport] [FLINK-6514] [build] Create a proper separate Hadoop uber jar for 'flink-dist' assembly

    Backport of #3876 to `release-1.3`
    
    This fixes the issue that Flink cannot be started locally if built with Maven 3.3+.
    
    This pull request bundles two big fixes, because they do not build/pass tests individually: the wrong Mesos dependencies were the reason that the broken Hadoop fat jar build still passed the YARN tests.
    
    # Hadoop Uber Jar
    
      - This builds a proper Hadoop Uber Jar with all of Hadoop's needed dependencies. The prior build was missing many important dependencies in the Hadoop Uber Jar.
    
      - The Hadoop jar is no longer excluded from `flink-dist` by setting the dependency to `provided`, but by an explicit exclusion (sketched below). That way, Hadoop's transitive dependencies are not excluded from other dependencies as well. Before this patch, various decompression codecs and Avro were broken in a Flink build, due to the accidental exclusion of their transitive dependencies.
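
    A minimal sketch of what such an explicit exclusion looks like in a Maven POM. The enclosing `flink-runtime_2.10` dependency entry is illustrative; only the `flink-shaded-hadoop2` artifact name comes from this pull request:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-runtime_2.10</artifactId>
            <version>${project.version}</version>
            <exclusions>
                <!-- exclude only the shaded Hadoop artifact from this path;
                     shared transitive dependencies (e.g. jackson, which avro
                     needs) are still resolved through their own paths -->
                <exclusion>
                    <groupId>org.apache.flink</groupId>
                    <artifactId>flink-shaded-hadoop2</artifactId>
                </exclusion>
            </exclusions>
        </dependency>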
    
    # Dependency fixing
    
      - This also fixes the dependencies of `flink-mesos`, which had promoted all of Hadoop's transitive dependencies to dependencies of its own during shading (see the sketch after this list). Because of that, Flink carried various unnecessary dependencies in its `flink-dist` jar.
    
      - Incidentally, that brought Hadoop's accidentally excluded dependencies back in, but into the `flink-dist` jar, not the `shaded-hadoop2` jar.
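
    A minimal sketch of the corresponding `flink-mesos` change, per the FLINK-6546 commit message below: flink-related dependencies are marked `provided`, so the shade plugin neither bundles them nor promotes their transitive dependencies. The `flink-runtime_2.10` entry is again illustrative:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-runtime_2.10</artifactId>
            <version>${project.version}</version>
            <!-- 'provided': the shade plugin does not include this dependency
                 in the shaded jar and does not promote its transitive
                 dependencies into flink-mesos's own dependency list -->
            <scope>provided</scope>
        </dependency>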
    
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/StephanEwen/incubator-flink fix_fat_jar_13

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/3877.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3877
    
----
commit ac15bc32a3d786b50e8864a903d31d0b3e0c3042
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T13:12:10Z

    [hotfix] [build] Drop transitive jersey/jettison/servlet dependencies pulled via Hadoop

commit 5e94787ce4e14f5da88e71418200b4bbe517483b
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T13:12:50Z

    [FLINK-6546] [build] Fix dependencies of flink-mesos
    
      - This makes all flink-related dependencies 'provided' so that their
        transitive dependencies are not promoted
    
      - Drops the unnecessary dependency on the Hadoop artifact
    
      - Adds directly referenced libraries, like jackson
    
      - Deactivates default logging of tests

commit b568ccfdf7366056d29ee43d14c606cfc4448bab
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T15:32:03Z

    [build] Reduce flink-avro's compile dependency from 'flink-java' to 'flink-core'

commit 84c150a5798f029bb9aced998ad6b81dd8dc8de5
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T15:35:43Z

    [FLINK-6514] [build] Remove 'flink-shaded-hadoop2' from 'flink-dist' via exclusions
    
    This is more tedious/manual than setting it to 'provided' once, but it
    is also correct.
    
    For example, in the case of Hadoop 2.3, having 'flink-shaded-hadoop2' as
    'provided' removes other needed dependencies as well, such as
    'org.codehaus.jackson' from avro.

commit 99658870865c15ad0996066cf94c721f30bc86ca
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T11:52:25Z

    [FLINK-6514] [build] Merge bin and lib assembly

commit 93e37c666aba50988a48b9273d7b531434c5d5b1
Author: Stephan Ewen <se...@apache.org>
Date:   2017-05-11T15:00:03Z

    [FLINK-6514] [build] Create a proper separate Hadoop uber jar for 'flink-dist' assembly

----

