Is there any example of how to add an external jar to Oozie when running a
Spark action in Oozie?

On Wed, Feb 24, 2016 at 3:37 PM, Liping Zhang <[email protected]> wrote:

> Actually I'm still a little confused about the 4 ways mentioned in "One
> Last Thing" in
> http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
> I tried all of them, but none worked. (I'm using the CDH Hue Oozie
> workflow editor.) Here is what I tried for each of the 4 ways:
>
> For way 1:
> It recommends "oozie.libpath=/path/to/jars,another/path/to/jars".
> I added
> oozie.libpath=hdfs://ip-10-0-4-248.us-west-1.compute.internal:8020/user/oozie/share/lib/lib_20151201085935/spark
> or
> oozie.libpath=hdfs://ip-10-0-4-248.us-west-1.compute.internal:8020/user/oozie/share/lib/lib_20151201085935/spark/guava-16.0.1.jar
> and oozie.use.system.libpath=true is on by default.
> Neither works.
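> For reference, my understanding is that way 1 expects oozie.libpath to
> point at a plain HDFS directory of jars owned by the user, while the
> managed sharelib is already covered by oozie.use.system.libpath. A
> minimal job.properties sketch (the /user/zlpmichelle/libs path is
> hypothetical):

```properties
# job.properties sketch (hypothetical jar directory) -- point
# oozie.libpath at a plain HDFS directory of jars, not at a path
# inside the managed sharelib.
nameNode=hdfs://ip-10-0-4-248.us-west-1.compute.internal:8020
jobTracker=ip-10-0-4-248.us-west-1.compute.internal:8032
oozie.use.system.libpath=true
oozie.libpath=${nameNode}/user/zlpmichelle/libs
oozie.wf.application.path=${nameNode}/user/zlpmichelle/apps/sparktest
```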
>
> For way 2:
> I added guava-16.0.1.jar into the “lib” directory next to the current
> workspace's workflow.xml in HDFS; it doesn't work.
>
> For way 3:
> I cannot find any <archive> tag in a Spark action that takes the path to
> a single jar, so I have no way to try way 3.
>
> For way 4:
> I added guava-16.0.1.jar to the ShareLib (e.g.
> hdfs://ip-10-0-4-248.us-west-1.compute.internal:8020/user/oozie/share/lib/lib_20151201085935/spark)
> and set oozie.use.system.libpath=true in job.properties, it still doesn't
> work.
>
> Could you please give any suggestions? Thanks very much for any help! It
> is much appreciated!
>
>
> On Wed, Feb 24, 2016 at 3:13 PM, Liping Zhang <[email protected]>
> wrote:
>
>> Thanks Robert!
>>
>> I tried ways 1, 2, and 4 introduced in
>> http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/,
>> but it still doesn't work.
>>
>>
>>
>> Here are the details I posted in the CDH support forum, but no one has
>> answered so far.
>>
>>
>> http://community.cloudera.com/t5/Batch-Processing-and-Workflow/how-to-add-external-guava-16-0-1-jar-in-CDH-oozie-classpath/m-p/37803#U37803
>>
>>
>> Can you give a detailed example with command steps?  Thanks!
>>
>> On Wed, Feb 24, 2016 at 2:21 PM, Liping Zhang <[email protected]>
>> wrote:
>>
>>> Thanks very much, Robert, for your quick answer!
>>>
>>> I uploaded guava-16.0.1.jar into the HDFS
>>> /user/oozie/share/lib/lib_20151201085935/spark
>>> dir and restarted Oozie from CM. Under that dir there is now only
>>> guava-16.0.1.jar, not guava-14.0.1.jar.
>>>
>>> However, it still fails with the same "main() threw exception,
>>> com.google.common.reflect.TypeToken.isPrimitive()Z" exception. Is there
>>> anything else I need to configure in workflow.xml or in job.properties in
>>> the HDFS workspace?
>>>
>>> Any help is appreciated!
>>>
>>> On Wed, Feb 24, 2016 at 2:12 PM, Robert Kanter <[email protected]>
>>> wrote:
>>>
>>>> Hi Liping,
>>>>
>>>> If you change jars, or anything with the sharelib directories, you have
>>>> to
>>>> either restart the Oozie server or run the 'oozie admin -sharelibupdate'
>>>> command for it to notice.
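>>>> As a concrete sketch of those two options (the Oozie server URL here
>>>> is an assumption; substitute your own):

```shell
# Option A: refresh the sharelib without restarting the Oozie server.
# -oozie points at the Oozie server URL (hypothetical host shown here).
oozie admin -oozie http://ip-10-0-4-248.us-west-1.compute.internal:11000/oozie -sharelibupdate

# Then verify which jars the Spark sharelib now contains.
oozie admin -oozie http://ip-10-0-4-248.us-west-1.compute.internal:11000/oozie -shareliblist spark
```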
>>>>
>>>> That said, CDH Spark requires Guava 14.0.1, which is what's in the Spark
>>>> sharelib directory already.  If you simply put Guava 16.0.1 in there,
>>>> you
>>>> may run into other problems.
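>>>> If it helps with debugging, one way to check whether a given Guava jar
>>>> actually defines that method is javap (the jar path is an example);
>>>> TypeToken.isPrimitive() only appeared in newer Guava releases, which
>>>> is why an older Guava on the classpath throws NoSuchMethodError:

```shell
# Does this Guava jar define TypeToken.isPrimitive()?
# No matching output means the method is absent (as in guava-14.0.1),
# so loading that jar first would produce the NoSuchMethodError seen here.
javap -classpath guava-16.0.1.jar com.google.common.reflect.TypeToken | grep isPrimitive
```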
>>>>
>>>>
>>>> - Robert
>>>>
>>>> On Wed, Feb 24, 2016 at 12:34 PM, Liping Zhang <[email protected]>
>>>> wrote:
>>>>
>>>> > I added guava-16.0.1.jar into the
>>>> > HDFS /user/oozie/share/lib/lib_20151201085935/spark dir, chowned it
>>>> > to "oozie:oozie" and chmodded it 777, but it still could not find the jar.
>>>> >
>>>> > job.properties:
>>>> >
>>>> > oozie.use.system.libpath=True
>>>> > security_enabled=False
>>>> > dryrun=False
>>>> > jobTracker=ip-10-0-4-248.us-west-1.compute.internal:8032
>>>> > nameNode=hdfs://ip-10-0-4-248.us-west-1.compute.internal:8020
>>>> >
>>>> >
>>>> > workflow.xml:
>>>> > <workflow-app name="sparktest-cassandra" xmlns="uri:oozie:workflow:0.5">
>>>> >     <start to="spark-b23b"/>
>>>> >     <kill name="Kill">
>>>> >         <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
>>>> >     </kill>
>>>> >     <action name="spark-b23b">
>>>> >         <spark xmlns="uri:oozie:spark-action:0.1">
>>>> >             <job-tracker>${jobTracker}</job-tracker>
>>>> >             <name-node>${nameNode}</name-node>
>>>> >             <master>local[4]</master>
>>>> >             <mode>client</mode>
>>>> >             <name>sparktest-cassandra</name>
>>>> >             <class>TestCassandra</class>
>>>> >             <jar>lib/sparktest.jar</jar>
>>>> >             <spark-opts>--driver-class-path /opt/cloudera/parcels/CDH/jars/guava-16.0.1.jar --jars lib/*.jar</spark-opts>
>>>> >             <arg>s3n://gridx-output/sparktest/</arg>
>>>> >             <arg>10</arg>
>>>> >             <arg>3</arg>
>>>> >             <arg>2</arg>
>>>> >         </spark>
>>>> >         <ok to="End"/>
>>>> >         <error to="Kill"/>
>>>> >     </action>
>>>> >     <end name="End"/>
>>>> > </workflow-app>
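>>>> > A possible variant of the spark-opts element (a sketch, assuming
>>>> > Spark 1.3+ where the userClassPathFirst conf keys exist) would ship
>>>> > Guava in the workflow's lib/ dir and ask Spark to prefer the user's
>>>> > jars over the cluster classpath:

```xml
<!-- Sketch: ship guava-16.0.1.jar in the workflow's lib/ dir and ask
     Spark to load user jars before the cluster classpath (Spark 1.3+). -->
<spark-opts>--jars lib/guava-16.0.1.jar
    --conf spark.driver.userClassPathFirst=true
    --conf spark.executor.userClassPathFirst=true</spark-opts>
```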
>>>> >
>>>> > Any help is appreciated!
>>>> >
>>>> > On Wed, Feb 24, 2016 at 12:34 PM, Liping Zhang <[email protected]
>>>> >
>>>> > wrote:
>>>> >
>>>> > > Dear,
>>>> > >
>>>> > > We used CDH 5.5.0 Hue Oozie to run a Spark action, and the
>>>> > > job uses spark-cassandra-connector_2.10-1.5.0-M2.jar.
>>>> > >
>>>> > > The job runs successfully with the spark-submit command, but it
>>>> > > fails to run in Oozie.
>>>> > >
>>>> > > It says Oozie cannot find guava-16.0.1.jar (guava-16.0.1.jar is
>>>> > > a dependency of DSE 4.8.3 Cassandra), throwing the exception
>>>> > > below.
>>>> > >
>>>> > > Do you know how to add an external guava-16.0.1.jar to the
>>>> > > Oozie classpath?
>>>> > >
>>>> > > Thanks!
>>>> > > -----
>>>> > >
>>>> > > >>> Invoking Spark class now >>>
>>>> > >
>>>> > > <<< Invocation of Main class completed <<<
>>>> > >
>>>> > > Failing Oozie Launcher, Main class
>>>> > > [org.apache.oozie.action.hadoop.SparkMain], main() threw exception,
>>>> > > com.google.common.reflect.TypeToken.isPrimitive()Z
>>>> > > java.lang.NoSuchMethodError:
>>>> > > com.google.common.reflect.TypeToken.isPrimitive()Z
>>>> > > at com.datastax.driver.core.TypeCodec.<init>(TypeCodec.java:142)
>>>> > > at com.datastax.driver.core.TypeCodec.<init>(TypeCodec.java:136)
>>>> > > at com.datastax.driver.core.TypeCodec$BlobCodec.<init>(TypeCodec.java:609)
>>>> > > at com.datastax.driver.core.TypeCodec$BlobCodec.<clinit>(TypeCodec.java:606)
>>>> > > at com.datastax.driver.core.CodecRegistry.<clinit>(CodecRegistry.java:147)
>>>> > > at com.datastax.driver.core.Configuration$Builder.build(Configuration.java:259)
>>>> > > at com.datastax.driver.core.Cluster$Builder.getConfiguration(Cluster.java:1135)
>>>> > > at com.datastax.driver.core.Cluster.<init>(Cluster.java:111)
>>>> > > at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:178)
>>>> > > at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1152)
>>>> > > at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:85)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:150)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:150)
>>>> > > at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
>>>> > > at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
>>>> > > at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
>>>> > > at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:241)
>>>> > > at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:263)
>>>> > > at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:36)
>>>> > > at TestCassandra$.main(TestCassandra.scala:44)
>>>> > > at TestCassandra.main(TestCassandra.scala)
>>>> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> > >
>>>> > > -----
>>>> > >
>>>> > > --
>>>> > > Cheers,
>>>> > > -----
>>>> > > Big Data - Big Wisdom - Big Value
>>>> > > --------------
>>>> > > Michelle Zhang (张莉苹)
>>>> > >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > Cheers,
>>>> > -----
>>>> > Big Data - Big Wisdom - Big Value
>>>> > --------------
>>>> > Michelle Zhang (张莉苹)
>>>> >
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>



