Hi Ravi,

It looks like you're invoking the PhoenixInputFormat class directly from
Spark, which actually bypasses the phoenix-spark integration completely.

Others on the list might be more helpful with regard to the Java
implementation, but I suspect that if you start with the DataFrame API,
following something similar to the PySpark example in the documentation
[1], you'll be able to load your data. If and when you get something
working, please reply to the list or submit a patch with the Java code that
worked for you, and we can update the documentation as well.
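
For reference, here's a rough, untested sketch of what that might look
like in Java. The table names "INPUT_TABLE" / "OUTPUT_TABLE" and the
zkUrl value are placeholders you'd replace with your own. Going through
the DataFrame writer rather than PhoenixOutputFormat directly should
also sidestep the output-directory check you're hitting:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.SaveMode;

    public class PhoenixSparkJob {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("phoenix-spark-java");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Load a Phoenix table as a DataFrame. "INPUT_TABLE" and the
        // zkUrl are placeholders; point them at your own table and
        // ZooKeeper quorum.
        DataFrame df = sqlContext.read()
            .format("org.apache.phoenix.spark")
            .option("table", "INPUT_TABLE")
            .option("zkUrl", "localhost:2181")
            .load();

        // ... do your column manipulation here ...

        // Write the result back through the DataFrame API. The target
        // table ("OUTPUT_TABLE" here) must already exist in Phoenix,
        // and the phoenix-spark writer expects SaveMode.Overwrite.
        df.write()
            .format("org.apache.phoenix.spark")
            .mode(SaveMode.Overwrite)
            .option("table", "OUTPUT_TABLE")
            .option("zkUrl", "localhost:2181")
            .save();

        sc.stop();
      }
    }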

Thanks!

Josh

[1] https://phoenix.apache.org/phoenix_spark.html

On Wed, Feb 1, 2017 at 3:02 AM, Ravi Kumar Bommada <braviku...@juniper.net>
wrote:

> Hi,
>
> I’m trying to write a sample phoenix-spark job in Java to read a few
> columns from HBase and write them back to HBase after some manipulation.
> While running this job I’m getting an exception saying
> “org.apache.hadoop.mapred.InvalidJobConfException: Output directory not
> set”, though I had set the output format to PhoenixOutputFormat. Please
> find the code and exception attached. The command to submit the job is
> mentioned below; any leads would be appreciated.
>
> Spark job submit command:
>
> spark-submit --class bulk_test.PhoenixSparkJob --driver-class-path
> /home/cloudera/Desktop/phoenix-client-4.5.2-1.clabs_phoenix1.2.0.p0.774.jar
> --master local myjar.jar
>
>
> Regards,
>
> Ravi Kumar B
> Mob: +91 9591144511
