[ https://issues.apache.org/jira/browse/SPARK-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259278#comment-14259278 ]

Matt Chapman commented on SPARK-4974:
-------------------------------------

With my patch in the PR, I ran into trouble later, so that's clearly not the 
right solution. But now I'm having trouble reproducing the issue even myself, 
though I did find two other reports of the same issue with a Google search, so 
it's not just my own gremlins. I'm happy to keep working on the solution if 
someone can provide some guidance, since I'm totally new to Spark and only 
moderately competent with Python.
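For anyone else new to Python's import machinery: the traceback below is the classic partially-initialized-module failure. When pyspark/__init__.py is still executing and pyspark/context.py asks the half-built pyspark package for the accumulators name, that name doesn't exist yet. A minimal standalone sketch of the same failure shape, using two hypothetical modules (mod_a/mod_b, nothing to do with the actual pyspark sources):

```python
import os
import sys
import tempfile

# Two hypothetical modules that import names from each other at the top
# level -- the same shape as pyspark/__init__.py and pyspark/context.py
# in the traceback below.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write("from mod_b import value_b\nvalue_a = 1\n")
with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write("from mod_a import value_a\nvalue_b = 2\n")

sys.path.insert(0, tmp)
err = None
try:
    # mod_a starts executing, imports mod_b, which then asks the
    # half-initialized mod_a for value_a -- that name isn't bound yet.
    import mod_a
except ImportError as exc:
    err = exc

print(err)  # "cannot import name 'value_a' ..."
```

The usual fixes are to defer one of the imports (move it inside a function) or to import the submodule itself rather than a name from the package, but which is appropriate here would need someone familiar with the pyspark layout.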

> Circular dependency in pyspark/context.py causes import failure.
> ----------------------------------------------------------------
>
>                 Key: SPARK-4974
>                 URL: https://issues.apache.org/jira/browse/SPARK-4974
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>         Environment: Python 2.7.8
> Ubuntu 14.10
>            Reporter: Matt Chapman
>
> Steps to reproduce:
> 1. Run a Python CLI from the 'python/' directory. (Reproduced with the 
> default Python CLI and also IPython.)
> 2. Run this code:
> {code}
> >>> import sys
> >>> sys.path.append('build/')
> >>> import pyspark.sql
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "pyspark/__init__.py", line 63, in <module>
>     from pyspark.context import SparkContext
>   File "pyspark/context.py", line 25, in <module>
>     from pyspark import accumulators
> ImportError: cannot import name accumulators
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
