[ https://issues.apache.org/jira/browse/SPARK-17907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17907.
-------------------------------
    Resolution: Invalid

The error tells you the problem: 

{code}
ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=PYSPARK, master=spark://172.31.28.208:7077) created by __init__ at <ipython-input-2-c7c8de510121>:6
{code}

You already have an active SparkContext and can't create a second one in the 
same session; the console output says as much.
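
Within one Python process you have to either reuse the existing context or 
stop it before creating another one. A minimal sketch, reusing the master URL 
and settings from your report:

{code}
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("spark://172.31.28.208:7077")
        .setAppName("sankar")
        .set("spark.executor.memory", "1g")
        .set("spark.cores.max", "1"))

# Reuse the already-registered context if one exists, otherwise create one.
sc = SparkContext.getOrCreate(conf)

# Or tear down the old context first and build a fresh one:
# sc.stop()
# sc = SparkContext(conf=conf)
{code}

Note that spark.driver.allowMultipleContexts does not help here: as the 
traceback shows, the check that raises the ValueError is on the Python side 
of PySpark and is enforced regardless of that setting.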

Have a look at 
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark please.

> Not allowing more spark console
> -------------------------------
>
>                 Key: SPARK-17907
>                 URL: https://issues.apache.org/jira/browse/SPARK-17907
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.0.0
>            Reporter: Sankar Mittapally
>
> We are exploring PySpark on a Spark cluster. We are able to initiate a 
> single spark console connection, but while trying to establish a new 
> connection we get the following error.
> {code}
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> <ipython-input-15-05f9533b85b9> in <module>()
>       4         .set("spark.executor.memory", "1g")
>       5         .set("spark.cores.max","1").set("spark.driver.allowMultipleContexts", "true") )
> ----> 6 sc = SparkContext(conf = conf)
> 
> /opt/spark-2.0.0-bin-hadoop2.7/python/pyspark/context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
>     110         """
>     111         self._callsite = first_spark_call() or CallSite(None, None, None)
> --> 112         SparkContext._ensure_initialized(self, gateway=gateway)
>     113         try:
>     114             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
> 
> /opt/spark-2.0.0-bin-hadoop2.7/python/pyspark/context.py in _ensure_initialized(cls, instance, gateway)
>     257                         " created by %s at %s:%s "
>     258                         % (currentAppName, currentMaster,
> --> 259                             callsite.function, callsite.file, callsite.linenum))
>     260                 else:
>     261                     SparkContext._active_spark_context = instance
> 
> ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=PYSPARK, master=spark://172.31.28.208:7077) created by __init__ at <ipython-input-2-c7c8de510121>:6
> {code}
> The command we are using:
> {code}
> from pyspark import SparkConf, SparkContext
> 
> conf = (SparkConf()
>         .setMaster("spark://172.31.28.208:7077")
>         .setAppName("sankar")
>         .set("spark.executor.memory", "1g")
>         .set("spark.cores.max", "1")
>         .set("spark.driver.allowMultipleContexts", "true"))
> sc = SparkContext(conf=conf)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
