Currently, the way that resources (jars) are distributed within a cluster is
runner-dependent. There is a long-term plan to use the Runner API + Fn API
+ Docker to improve portability in this space across runners.

On Fri, Jun 2, 2017 at 11:50 AM, Will Walters <[email protected]>
wrote:

> Yeah, we're working on altering the build file to include all dependencies
> in one huge jar. Is there a better way than this to run Beam jobs on a
> cluster? Putting everything into a jar seems like a clunky solution.
>
>
> On Friday, June 2, 2017 11:41 AM, Lukasz Cwik <[email protected]> wrote:
>
>
> How to flatten all the dependencies into one jar is build-system dependent.
> If you are using Maven, I would look into the Maven Shade Plugin (
> https://maven.apache.org/plugins/maven-shade-plugin/).
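A minimal Shade configuration sketch for the pom.xml (the plugin version is
only an example; the ServicesResourceTransformer merges META-INF/services
files, which service-loader-based runner registration relies on):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <!-- Run during 'mvn package' so the bundled jar is built automatically -->
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merge META-INF/services entries instead of letting one jar's copy win -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```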
> Jar files are also just zip files, so you could merge them manually as
> well, but you'll need to deal with conflicting dependency versions (copies
> of the same class file that contain different, potentially incompatible
> versions of the code).
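To illustrate the "jars are just zip files" point, here is a small sketch in
Python (the `merge_jars` helper is hypothetical, not part of any tool) that
merges archives, keeps the first copy of each entry, and reports duplicates so
that conflicting versions are at least visible rather than silently dropped:

```python
import zipfile

def merge_jars(jar_paths, out_path):
    """Merge several jar/zip archives into one output archive.

    The first copy of each entry wins; later copies are recorded as
    (entry, first_jar, later_jar) conflicts for the caller to inspect.
    """
    seen = {}       # entry name -> jar it was first taken from
    conflicts = []  # duplicate entries (potential version conflicts)
    with zipfile.ZipFile(out_path, "w") as out:
        for jar in jar_paths:
            with zipfile.ZipFile(jar) as zf:
                for info in zf.infolist():
                    if info.filename.endswith("/"):
                        continue  # skip directory entries
                    if info.filename in seen:
                        conflicts.append((info.filename, seen[info.filename], jar))
                        continue
                    seen[info.filename] = jar
                    out.writestr(info, zf.read(info))
    return conflicts
```

In a real build you would rather let the Shade plugin do this, since it also
understands things like merging META-INF/services files.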
>
> On Fri, Jun 2, 2017 at 11:30 AM, Will Walters <[email protected]>
> wrote:
>
> Hello,
>
> My team is trying to run the Beam examples on a Hadoop cluster and is
> running into issues. Our current method is to compile all of the Beam files
> into several jars, move those jars onto the cluster, and use Hadoop's 'jar'
> command to run a class from there. The issue we've been running into is
> that certain dependencies are not included in the jars.
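The workflow described above might look roughly like the following (the jar
name, gateway host, main class, and pipeline options are all hypothetical
examples, not taken from this thread):

```shell
# Build a single self-contained jar (e.g. via the Shade plugin),
# copy it to the cluster, then launch a main class with Hadoop's 'jar' command.
mvn package
scp target/word-count-bundled.jar gateway:/tmp/
hadoop jar /tmp/word-count-bundled.jar org.apache.beam.examples.WordCount \
    --runner=SparkRunner --output=/tmp/counts
```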
>
> Is there a way to force the compilation to include all dependencies? Is
> there a different way we should be going about this?
>
> Thank you,
> Will Walters.
>
