Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Mark Hamstra
It sounds like you should be writing an application rather than trying to force
the spark-shell to do more than it was intended for.
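
A minimal sketch of that application route, assuming the elasticsearch-spark connector is on the classpath; the object name, es.* values and index are placeholders, not anything taken from this thread:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._                    // elasticsearch-spark's Scala API (sc.esRDD)

object EsCountingApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("EsCountingApp")                  // master is supplied by spark-submit
      .set("es.nodes", "es1.example.com")           // placeholder host
      .set("es.port", "9200")
    val sc = new SparkContext(conf)                 // the application owns its single context
    try {
      println(sc.esRDD("myindex/mytype").count())   // placeholder index/type
    } finally {
      sc.stop()
    }
  }
}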


Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Kevin Burton
I sort of agree, but the problem is that some of this should be code.

Some of our ES indexes have 100-200 columns.

Defining which ones are arrays on the command line is going to get ugly
fast.
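
A hedged sketch of keeping that mapping in code instead of on the command line: build the list of array fields programmatically and pass it to elasticsearch-hadoop through its es.read.field.as.array.include setting. The field names and host below are placeholders, and this assumes you are in the shell with an existing sc:

import org.apache.spark.{SparkConf, SparkContext}

// Wherever the index schema actually lives; these names are placeholders.
val arrayFields = Seq("tags", "authors", "links")

val conf = new SparkConf()
  .setAppName("es-wide-index")
  .setMaster("local[*]")                                            // placeholder; match your deployment
  .set("es.nodes", "es1.example.com")
  .set("es.read.field.as.array.include", arrayFields.mkString(","))

sc.stop()                            // Spark 2.0 allows only one running context per JVM
val esSc = new SparkContext(conf)    // fresh context that carries the ES settings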




Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Sean Owen
You would generally use --conf to set this on the command line if using the
shell.
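
For example (a hedged sketch: elasticsearch-hadoop documents that its es.* keys can be passed with a spark. prefix so that Spark's launcher accepts them; the host and port are placeholders):

spark-shell --conf spark.es.nodes=es1.example.com --conf spark.es.port=9200

As I understand it, the connector strips the spark. prefix when it reads the SparkConf, so these arrive as es.nodes and es.port.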


Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Kevin Burton
The problem is that without a new SparkContext carrying a custom conf,
elasticsearch-hadoop refuses to read in settings about the ES setup...

If I do a sc.stop(), then create a new one, it seems to work fine.

But it isn't really documented anywhere, and all the existing documentation
is now invalid because you get an exception when you try to create a new
SparkContext.
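
A hedged sketch of that workaround as it looks in the 2.0 shell; the es.* key and value are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

sc.stop()                                                        // stop the context the shell created for us
val conf = new SparkConf().set("es.nodes", "es1.example.com")    // placeholder ES setting
val sc = new SparkContext(conf)                                  // master etc. are typically inherited from how the shell was launched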


Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Mich Talebzadeh
I think this works in a shell, but you need to allow multiple Spark contexts:

Spark context Web UI available at http://50.140.197.217:5
Spark context available as 'sc' (master = local, app id = local-1473789661846).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.SparkContext
import org.apache.spark.SparkContext

scala> val conf = new SparkConf().setMaster("local[2]").setAppName("CountingSheep").set("spark.driver.allowMultipleContexts", "true")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@bb5f9d

scala> val sc = new SparkContext(conf)
sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@4888425d


HTH


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Marcelo Vanzin
You're running spark-shell. It already creates a SparkContext for you and
makes it available in a variable called "sc".

If you want to change the config of spark-shell's context, you need to use
command-line options. (Or stop the existing context first, although I'm not
sure how well that will work.)
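
For reference, the 2.0 shell builds that sc through SparkSession (the SparkSession$Builder.getOrCreate frame in Kevin's stack trace is exactly that). A hedged sketch of how an application, rather than the shell, would do the same with custom settings; the config key and value are placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("es-app")
  .master("local[2]")                        // or leave this to spark-submit
  .config("es.nodes", "es1.example.com")     // arbitrary settings land in the underlying SparkConf
  .getOrCreate()

val sc = spark.sparkContext                  // the one SparkContext behind the session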



-- 
Marcelo


Re: Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Sean Owen
But you're in the shell there, which already has a SparkContext for you as
sc.



Spark 2.0.0 won't let you create a new SparkContext?

2016-09-13 Thread Kevin Burton
I'm rather confused here as to what to do about creating a new SparkContext.

Spark 2.0 prevents it... (exception included below)

Yet a TON of examples I've seen basically tell you to create a new
SparkContext as standard practice:

http://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("CountingSheep")
val sc = new SparkContext(conf)


I'm specifically running into a problem in that ES-Hadoop won't work with
its settings, and I think it's related to this problem.

Do we have to call sc.stop() first and THEN create a new spark context?

That works, but I can't find any documentation anywhere telling us the
right course of action.



scala> val sc = new SparkContext();
org.apache.spark.SparkException: Only one SparkContext may be running in
this JVM (see SPARK-2243). To ignore this error, set
spark.driver.allowMultipleContexts = true. The currently running
SparkContext was created at:
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
<init>(<console>:15)
<init>(<console>:31)
<init>(<console>:33)
.<init>(<console>:37)
.<clinit>(<console>)
.$print$lzycompute(<console>:7)
.$print(<console>:6)
$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
  at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2221)
  at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2217)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2217)
  at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2290)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:121)
  ... 48 elided


-- 

We’re hiring if you know of any awesome Java Devops or Linux Operations
Engineers!

Founder/CEO Spinn3r.com
Location: San Francisco, CA
blog: http://burtonator.wordpress.com
… or check out my Google+ profile