Maybe the Guava version in your Spark lib folder is not compatible (Spark ships 
its own, older Guava library, which takes precedence over jars passed via 
--jars)? In that case I propose building a fat/uber jar, ideally with a shaded 
Guava dependency.
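One way to confirm that suspicion before rebuilding: ask the classloader which jar it actually resolves Preconditions from. A quick sketch you can paste into spark-shell (the helper name `whereIs` is mine, the mechanism is standard JVM classloading):

```scala
// Sketch: report the URL of the jar a class is actually loaded from.
// Returns None if the class is not visible on the classpath at all.
def whereIs(className: String): Option[java.net.URL] = {
  val resource = className.replace('.', '/') + ".class"
  Option(Thread.currentThread().getContextClassLoader.getResource(resource))
}

// In spark-shell:
//   whereIs("com.google.common.base.Preconditions").foreach(println)
```

If the printed URL points at a jar under $SPARK_HOME/jars rather than your guava-19.0.jar, Spark's bundled Guava is shadowing yours, and shading (e.g. sbt-assembly's ShadeRule.rename or the maven-shade-plugin's relocation feature) in an uber jar is the usual way out.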

> On 18.12.2018 at 11:26, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
> 
> Hi,
> 
> I am writing a small test code in spark-shell with attached jar dependencies
> 
> spark-shell --jars 
> /home/hduser/jars/bigquery-connector-0.13.4-hadoop3.jar,/home/hduser/jars/gcs-connector-1.9.4-hadoop3.jar,/home/hduser/jars/other/guava-19.0.jar,/home/hduser/jars/google-api-client-1.4.1-beta.jar,/home/hduser/jars/google-api-client-json-1.2.3-alpha.jar,/home/hduser/jars/google-api-services-bigquery-v2-rev20181202-1.27.0.jar
> 
>  to read an already existing table in Google BigQuery as follows:
> 
> import com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration
> import com.google.cloud.hadoop.io.bigquery.BigQueryFileFormat
> import com.google.cloud.hadoop.io.bigquery.GsonBigQueryInputFormat
> import com.google.cloud.hadoop.io.bigquery.output.BigQueryOutputConfiguration
> import com.google.cloud.hadoop.io.bigquery.output.IndirectBigQueryOutputFormat
> import com.google.gson.JsonObject
> import org.apache.hadoop.io.LongWritable
> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
> // Assumes you have a spark context (sc) -- running from spark-shell REPL.
> // Marked as transient since configuration is not Serializable. This should
> // only be necessary in spark-shell REPL.
> @transient
> val conf = sc.hadoopConfiguration
> // Input parameters.
> val fullyQualifiedInputTableId = "axial-glow-224522.accounts.ll_18740868"
> val projectId = conf.get("fs.gs.project.id")
> val bucket = conf.get("fs.gs.system.bucket")
> // Input configuration.
> conf.set(BigQueryConfiguration.PROJECT_ID_KEY, projectId)
> conf.set(BigQueryConfiguration.GCS_BUCKET_KEY, bucket)
> BigQueryConfiguration.configureBigQueryInput(conf, fullyQualifiedInputTableId)
> 
> The problem I have is that even after loading the jars with spark-shell --jars,
> 
> I am getting the following error at the last line:
> 
> scala> BigQueryConfiguration.configureBigQueryInput(conf, 
> fullyQualifiedInputTableId)
> 
> java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
>   at 
> com.google.cloud.hadoop.io.bigquery.BigQueryStrings.parseTableReference(BigQueryStrings.java:68)
>   at 
> com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration.configureBigQueryInput(BigQueryConfiguration.java:260)
>   ... 49 elided
> 
> It says it cannot find method
> 
> java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument
> 
> but I checked it and it is in the following jar file
> 
> jar tvf guava-19.0.jar| grep common.base.Preconditions
>   5249 Wed Dec 09 15:58:14 UTC 2015 com/google/common/base/Preconditions.class
> 
> I have tried different versions of the Guava jar but none works!
> 
> The code is based on the following:
> 
> https://cloud.google.com/dataproc/docs/tutorials/bigquery-connector-spark-example
> 
> Thanks
> Dr Mich Talebzadeh
>  
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
> 
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
