All of them should be "provided".
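
For example, a minimal build.sbt along those lines (a sketch, assuming
Spark 2.0.0 on Scala 2.11 and an uber-jar built with sbt-assembly; the
name/version lines are taken from the script below):

name := "scala"
version := "1.0"
scalaVersion := "2.11.7"
// Provided: on the compile classpath, but left out of the uber-jar,
// since spark-submit supplies these jars at run-time.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % Provided
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" % Provided
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % Provided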

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Sun, Aug 14, 2016 at 12:26 PM, Mich Talebzadeh
<mich.talebza...@gmail.com> wrote:
> LOL
>
> Well, the issue here was the dependencies scripted in that shell script,
> which was modified to add "provided".
>
> The script itself still works; just the content of one of the functions
> had to be edited:
>
> function create_sbt_file {
> SBT_FILE=${GEN_APPSDIR}/scala/${APPLICATION}/${FILE_NAME}.sbt
> [ -f ${SBT_FILE} ] && rm -f ${SBT_FILE}
> cat >> $SBT_FILE << !
> name := "scala"
> version := "1.0"
> scalaVersion := "2.11.7"
> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % "provided"
> .....
> .....
> !
> }
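>
> For reference, a sketch of how the function might be driven (the variable
> values are assumptions, inferred from the paths elsewhere in this thread):
>
> GEN_APPSDIR=/data6/hduser
> APPLICATION=ETL_scratchpad_dummy
> FILE_NAME=ETL_scratchpad_dummy
> create_sbt_file
> cd ${GEN_APPSDIR}/scala/${APPLICATION}
> sbt clean package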
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> Disclaimer: Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed. The
> author will in no case be liable for any monetary damages arising from such
> loss, damage or destruction.
>
>
>
>
> On 14 August 2016 at 20:17, Jacek Laskowski <ja...@japila.pl> wrote:
>>
>> Hi Mich,
>>
>> Yeah, you don't have to worry about it...and that's why you're asking
>> these questions ;-)
>>
>> Pozdrawiam,
>> Jacek Laskowski
>>
>>
>> On Sun, Aug 14, 2016 at 12:06 PM, Mich Talebzadeh
>> <mich.talebza...@gmail.com> wrote:
>> > The magic does all that (including compiling and submitting with the
>> > jar file). It is flexible, as it does all this for any Scala program:
>> > it creates sub-directories, compiles, submits, etc., so I don't have
>> > to worry about it.
>> >
>> > HTH
>> >
>> > Dr Mich Talebzadeh
>> >
>> >
>> > On 14 August 2016 at 20:01, Jacek Laskowski <ja...@japila.pl> wrote:
>> >>
>> >> Hi,
>> >>
>> >> You should have all the deps marked "provided", since they're provided
>> >> by the Spark infrastructure after you spark-submit the uber-jar for
>> >> the app.
>> >>
>> >> What's the "magic" in local.ksh? Why don't you run sbt assembly and do
>> >> spark-submit with the uber-jar?
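>> >>
>> >> For instance (a sketch: the class name comes from your code; the jar
>> >> path assumes sbt-assembly defaults for a project named "scala",
>> >> version 1.0, on Scala 2.11):
>> >>
>> >> sbt assembly
>> >> spark-submit --class ETL_scratchpad_dummy \
>> >>   target/scala-2.11/scala-assembly-1.0.jar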
>> >>
>> >> Pozdrawiam,
>> >> Jacek Laskowski
>> >>
>> >>
>> >> On Sun, Aug 14, 2016 at 11:52 AM, Mich Talebzadeh
>> >> <mich.talebza...@gmail.com> wrote:
>> >> > Thanks Jacek,
>> >> >
>> >> > I thought there was some dependency issue. This did the trick:
>> >> >
>> >> > libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>> >> > libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>> >> > libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % "provided"
>> >> >
>> >> > I use a shell script that builds the jar file depending on type (sbt,
>> >> > mvn, assembly) and submits it via spark-submit:
>> >> >
>> >> > ./local.ksh -A ETL_scratchpad_dummy -T sbt
>> >> >
>> >> > As I understand it, "provided" means that the dependencies are
>> >> > available at compile time but are not packaged into the jar file;
>> >> > they are supplied at run-time (spark-submit) by the environment.
>> >> >
>> >> > Having said that, am I correct that the error message below
>> >> >
>> >> > [error] bad symbolic reference. A signature in HiveContext.class
>> >> > refers to type Logging
>> >> > [error] in package org.apache.spark which is not available.
>> >> > [error] It may be completely missing from the current classpath, or
>> >> > the version on
>> >> > [error] the classpath might be incompatible with the version used
>> >> > when compiling HiveContext.class.
>> >> > [error] one error found
>> >> > [error] (compile:compileIncremental) Compilation failed
>> >> >
>> >> > meant that some form of library incompatibility was happening at
>> >> > compile time?
>> >> >
>> >> > Cheers
>> >> >
>> >> >
>> >> > Dr Mich Talebzadeh
>> >> >
>> >> > On 14 August 2016 at 19:11, Jacek Laskowski <ja...@japila.pl> wrote:
>> >> >>
>> >> >> Go to spark-shell and do :imports. You'll see all the imports and
>> >> >> you could copy and paste them into your app (but there are not many,
>> >> >> honestly, and that won't help you much).
>> >> >>
>> >> >> HiveContext lives in spark-hive. You don't need both spark-sql and
>> >> >> spark-hive, since the latter uses the former as a dependency (unless
>> >> >> you're using types that come from the other dependencies). You don't
>> >> >> need spark-core either. Make the dependencies simpler by:
>> >> >>
>> >> >> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
>> >> >>
>> >> >> and mark it % Provided.
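>> >> >>
>> >> >> That is, something like:
>> >> >>
>> >> >> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % Provided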
>> >> >>
>> >> >> The reason for "provided" is that you don't need those jars inside
>> >> >> the uber-jar that you're going to spark-submit.
>> >> >>
>> >> >> Don't forget to reload the sbt session you're compiling in. Unsure
>> >> >> how you do it, so quit your sbt session and do `sbt compile`.
>> >> >>
>> >> >> Ask away if you need more details.
>> >> >>
>> >> >> Pozdrawiam,
>> >> >> Jacek Laskowski
>> >> >>
>> >> >>
>> >> >> On Sun, Aug 14, 2016 at 9:26 AM, Mich Talebzadeh
>> >> >> <mich.talebza...@gmail.com> wrote:
>> >> >> > The issue is that in spark-shell this works OK:
>> >> >> >
>> >> >> > Spark context Web UI available at http://50.140.197.217:55555
>> >> >> > Spark context available as 'sc' (master = local, app id =
>> >> >> > local-1471191662017).
>> >> >> > Spark session available as 'spark'.
>> >> >> > Welcome to
>> >> >> >       ____              __
>> >> >> >      / __/__  ___ _____/ /__
>> >> >> >     _\ \/ _ \/ _ `/ __/  '_/
>> >> >> >    /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
>> >> >> >       /_/
>> >> >> > Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
>> >> >> > Type in expressions to have them evaluated.
>> >> >> > Type :help for more information.
>> >> >> > scala> import org.apache.spark.SparkContext
>> >> >> > scala> import org.apache.spark.SparkConf
>> >> >> > scala> import org.apache.spark.sql.Row
>> >> >> > scala> import org.apache.spark.sql.hive.HiveContext
>> >> >> > scala> import org.apache.spark.sql.types._
>> >> >> > scala> import org.apache.spark.sql.SparkSession
>> >> >> > scala> import org.apache.spark.sql.functions._
>> >> >> >
>> >> >> > The code itself
>> >> >> >
>> >> >> >
>> >> >> > scala> val conf = new SparkConf().
>> >> >> >      |   setAppName("ETL_scratchpad_dummy").
>> >> >> >      |   set("spark.driver.allowMultipleContexts", "true").
>> >> >> >      |   set("enableHiveSupport","true")
>> >> >> > conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@33215ffb
>> >> >> >
>> >> >> > scala> val sc = new SparkContext(conf)
>> >> >> > sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@3cbfdf5c
>> >> >> >
>> >> >> > scala> val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>> >> >> > warning: there was one deprecation warning; re-run with -deprecation for details
>> >> >> > HiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@2152fde5
>> >> >> >
>> >> >> > scala> HiveContext.sql("use oraclehadoop")
>> >> >> > res0: org.apache.spark.sql.DataFrame = []
>> >> >> >
>> >> >> > I think something is missing here, probably a dependency.
>> >> >> >
>> >> >> >
>> >> >> > Dr Mich Talebzadeh
>> >> >> >
>> >> >> >
>> >> >> > On 14 August 2016 at 17:16, Koert Kuipers <ko...@tresata.com>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> HiveContext is gone.
>> >> >> >>
>> >> >> >> SparkSession now combines the functionality of SQLContext and
>> >> >> >> HiveContext (if Hive support is available).
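>> >> >> >>
>> >> >> >> A minimal sketch (the app name is taken from your code; assumes a
>> >> >> >> Spark 2.0 build with Hive support):
>> >> >> >>
>> >> >> >> import org.apache.spark.sql.SparkSession
>> >> >> >>
>> >> >> >> // One entry point replaces SparkContext + HiveContext
>> >> >> >> val spark = SparkSession.builder()
>> >> >> >>   .appName("ETL_scratchpad_dummy")
>> >> >> >>   .enableHiveSupport()
>> >> >> >>   .getOrCreate()
>> >> >> >>
>> >> >> >> spark.sql("use oraclehadoop")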
>> >> >> >>
>> >> >> >> On Sun, Aug 14, 2016 at 12:12 PM, Mich Talebzadeh
>> >> >> >> <mich.talebza...@gmail.com> wrote:
>> >> >> >>>
>> >> >> >>> Thanks Koert,
>> >> >> >>>
>> >> >> >>> I did that before as well. Anyway, these are the dependencies:
>> >> >> >>>
>> >> >> >>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>> >> >> >>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>> >> >> >>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> and the error:
>> >> >> >>>
>> >> >> >>> [info] Compiling 1 Scala source to
>> >> >> >>> /data6/hduser/scala/ETL_scratchpad_dummy/target/scala-2.10/classes...
>> >> >> >>> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:4:
>> >> >> >>> object hive is not a member of package org.apache.spark.sql
>> >> >> >>> [error] import org.apache.spark.sql.hive.HiveContext
>> >> >> >>> [error]                             ^
>> >> >> >>> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:20:
>> >> >> >>> object hive is not a member of package org.apache.spark.sql
>> >> >> >>> [error]   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> Dr Mich Talebzadeh
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On 14 August 2016 at 17:00, Koert Kuipers <ko...@tresata.com>
>> >> >> >>> wrote:
>> >> >> >>>>
>> >> >> >>>> You cannot mix Spark 1 and Spark 2 jars.
>> >> >> >>>>
>> >> >> >>>> Change this:
>> >> >> >>>>
>> >> >> >>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>> >> >> >>>>
>> >> >> >>>> to:
>> >> >> >>>>
>> >> >> >>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
>> >> >> >>>>
>> >> >> >>>> On Sun, Aug 14, 2016 at 11:58 AM, Mich Talebzadeh
>> >> >> >>>> <mich.talebza...@gmail.com> wrote:
>> >> >> >>>>>
>> >> >> >>>>> Hi,
>> >> >> >>>>>
>> >> >> >>>>> In Spark 2 I am using sbt or mvn to compile my Scala program.
>> >> >> >>>>> This used to compile and run perfectly with Spark 1.6.1, but
>> >> >> >>>>> now it is throwing an error.
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>> I believe the problem is here. I have:
>> >> >> >>>>>
>> >> >> >>>>> name := "scala"
>> >> >> >>>>> version := "1.0"
>> >> >> >>>>> scalaVersion := "2.11.7"
>> >> >> >>>>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>> >> >> >>>>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>> >> >> >>>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>> >> >> >>>>>
>> >> >> >>>>> However the error I am getting is:
>> >> >> >>>>>
>> >> >> >>>>> [error] bad symbolic reference. A signature in HiveContext.class
>> >> >> >>>>> refers to type Logging
>> >> >> >>>>> [error] in package org.apache.spark which is not available.
>> >> >> >>>>> [error] It may be completely missing from the current classpath,
>> >> >> >>>>> or the version on
>> >> >> >>>>> [error] the classpath might be incompatible with the version used
>> >> >> >>>>> when compiling HiveContext.class.
>> >> >> >>>>> [error] one error found
>> >> >> >>>>> [error] (compile:compileIncremental) Compilation failed
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>> And this is the code:
>> >> >> >>>>>
>> >> >> >>>>> import org.apache.spark.SparkContext
>> >> >> >>>>> import org.apache.spark.SparkConf
>> >> >> >>>>> import org.apache.spark.sql.Row
>> >> >> >>>>> import org.apache.spark.sql.hive.HiveContext
>> >> >> >>>>> import org.apache.spark.sql.types._
>> >> >> >>>>> import org.apache.spark.sql.SparkSession
>> >> >> >>>>> import org.apache.spark.sql.functions._
>> >> >> >>>>>
>> >> >> >>>>> object ETL_scratchpad_dummy {
>> >> >> >>>>>   def main(args: Array[String]) {
>> >> >> >>>>>     val conf = new SparkConf().
>> >> >> >>>>>       setAppName("ETL_scratchpad_dummy").
>> >> >> >>>>>       set("spark.driver.allowMultipleContexts", "true").
>> >> >> >>>>>       set("enableHiveSupport","true")
>> >> >> >>>>>     val sc = new SparkContext(conf)
>> >> >> >>>>>     //import sqlContext.implicits._
>> >> >> >>>>>     val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>> >> >> >>>>>     HiveContext.sql("use oraclehadoop")
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>> Has anyone come across this?
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>> Dr Mich Talebzadeh
>> >> >> >>>>>
>> >> >> >>>>>
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>
>> >> >> >>
>> >> >> >
>> >> >
>> >> >
>> >
>> >
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
