[ https://issues.apache.org/jira/browse/SPARK-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731619#comment-14731619 ]

Alex Rovner commented on SPARK-10397:
-------------------------------------

Pull: https://github.com/apache/spark/pull/8608

{noformat}
>>> sc
{'_accumulatorServer': <pyspark.accumulators.AccumulatorServer instance at 0x10d03a680>,
 '_batchSize': 0,
 '_callsite': CallSite(function='<module>', file='/Users/alex.rovner/git/spark/python/pyspark/shell.py', linenum=43),
 '_conf': {'_jconf': JavaObject id=o0},
 '_javaAccumulator': JavaObject id=o11,
 '_jsc': JavaObject id=o8,
 '_pickled_broadcast_vars': set([]),
 '_python_includes': [],
 '_temp_dir': u'/private/var/folders/hj/v4zb0_f159q8mt4w3j8m2_mr0000gp/T/spark-a9cc47a9-db90-49a3-a82e-263f0b56268c/pyspark-773c7490-2b2d-4418-a030-256a5b9c1fe1',
 '_unbatched_serializer': PickleSerializer(),
 'appName': u'PySparkShell',
 'environment': {},
 'master': u'local[*]',
 'profiler_collector': None,
 'pythonExec': 'python2.7',
 'pythonVer': '2.7',
 'serializer': AutoBatchedSerializer(PickleSerializer()),
 'sparkHome': None}
>>> print sc
{'_accumulatorServer': <pyspark.accumulators.AccumulatorServer instance at 0x10d03a680>,
 '_batchSize': 0,
 '_callsite': CallSite(function='<module>', file='/Users/alex.rovner/git/spark/python/pyspark/shell.py', linenum=43),
 '_conf': {'_jconf': JavaObject id=o0},
 '_javaAccumulator': JavaObject id=o11,
 '_jsc': JavaObject id=o8,
 '_pickled_broadcast_vars': set([]),
 '_python_includes': [],
 '_temp_dir': u'/private/var/folders/hj/v4zb0_f159q8mt4w3j8m2_mr0000gp/T/spark-a9cc47a9-db90-49a3-a82e-263f0b56268c/pyspark-773c7490-2b2d-4418-a030-256a5b9c1fe1',
 '_unbatched_serializer': PickleSerializer(),
 'appName': u'PySparkShell',
 'environment': {},
 'master': u'local[*]',
 'profiler_collector': None,
 'pythonExec': 'python2.7',
 'pythonVer': '2.7',
 'serializer': AutoBatchedSerializer(PickleSerializer()),
 'sparkHome': None}
>>> 

{noformat}
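The output above suggests the change pretty-prints the context's attribute dict for both `repr()` and `print`. A minimal sketch of that idea (the class name and constructor below are hypothetical stand-ins for illustration, not PySpark's actual implementation):

```python
from pprint import pformat

# Hypothetical stand-in for SparkContext, illustrating the idea of a
# self-descriptive __repr__. This is NOT PySpark's real class.
class SelfDescribingContext(object):
    def __init__(self, master, appName):
        self.master = master
        self.appName = appName

    def __repr__(self):
        # Pretty-print the instance's attribute dict, mirroring the
        # output shown above; covers both `sc` and `print sc` since
        # str() falls back to __repr__ when __str__ is not defined.
        return pformat(self.__dict__)

sc = SelfDescribingContext("local[*]", "PySparkShell")
print(sc)
```

Because `pformat` sorts dict keys, the output is deterministic regardless of attribute insertion order.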

> Make Python's SparkContext self-descriptive on "print sc"
> ---------------------------------------------------------
>
>                 Key: SPARK-10397
>                 URL: https://issues.apache.org/jira/browse/SPARK-10397
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 1.4.0
>            Reporter: Sergey Tryuber
>            Priority: Trivial
>
> When I execute in Python shell:
> {code}
> print sc
> {code}
> I receive something like:
> {noformat}
> <pyspark.context.SparkContext object at 0x35c0190>
> {noformat}
> But this is inconvenient, especially for users who want to create a 
> good-looking, self-descriptive IPython Notebook and would like to see some 
> information about their Spark cluster.
> In contrast, the H2O context does have this feature, and it is very helpful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
