[ 
https://issues.apache.org/jira/browse/SPARK-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485521#comment-14485521
 ] 

Davies Liu commented on SPARK-6677:
-----------------------------------

What is the expected output?

I got this:
{code}
key: a
res1 data as row: [Row(foo=1, key=u'a')]
res2 data as row: [Row(bar=3, key=u'a', other=u'foobar')]
res1 and res2 fields: (u'foo', u'key') (u'bar', u'key', u'other')
res1 data as tuple: 1 a
res2 data as tuple: 3 a foobar
key: c
res1 data as row: []
res2 data as row: [Row(bar=4, key=u'c', other=u'barfoo')]
key: b
res1 data as row: [Row(foo=2, key=u'b')]
res2 data as row: []
{code}
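For reference, the grouping shown above can be sketched in plain Python (no Spark needed), using sample rows taken from the printed output; the `cogroup` helper below is a hypothetical stand-in mimicking Spark's cogroup-by-key semantics, not PySpark's actual code:

```python
from collections import defaultdict

# Rows mirroring the output above (values copied from the comment).
res1 = [{"foo": 1, "key": "a"}, {"foo": 2, "key": "b"}]
res2 = [{"bar": 3, "key": "a", "other": "foobar"},
        {"bar": 4, "key": "c", "other": "barfoo"}]

def cogroup(left, right, key="key"):
    """Group both datasets by key, returning {key: (left_rows, right_rows)}."""
    groups = defaultdict(lambda: ([], []))
    for row in left:
        groups[row[key]][0].append(row)
    for row in right:
        groups[row[key]][1].append(row)
    return dict(groups)

grouped = cogroup(res1, res2)
for k, (l, r) in sorted(grouped.items()):
    print("key:", k, "res1 data:", l, "res2 data:", r)
```

With the sample data above this yields one group per key (a, b, c), with an empty list on whichever side has no matching rows, matching the shape of the output in the comment.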

> pyspark.sql nondeterministic issue with row fields
> --------------------------------------------------
>
>                 Key: SPARK-6677
>                 URL: https://issues.apache.org/jira/browse/SPARK-6677
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.3.0
>         Environment: spark version: spark-1.3.0-bin-hadoop2.4
> python version: Python 2.7.6
> operating system: MacOS, x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Stefano Parmesan
>              Labels: pyspark, row, sql
>
> The following issue happens only when running pyspark in the Python 
> interpreter; it works correctly with spark-submit.
> Reading two JSON files containing objects with different structures 
> sometimes leads to the definition of wrong Rows, where the fields of one 
> file are used for the other.
> I was able to write sample code that reproduces this issue one out of 
> three times; the code snippet is available at the following link, together 
> with some (very simple) data samples:
> https://gist.github.com/armisael/e08bb4567d0a11efe2db
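As a plain-Python illustration of the symptom described above (not PySpark's actual implementation), a record-to-Row schema cache wrongly shared across two JSON sources would attach one file's fields to the other's records; `to_row` and its cache are hypothetical:

```python
import json
from collections import namedtuple

doc1 = '{"foo": 1, "key": "a"}'                      # record from file 1
doc2 = '{"bar": 3, "key": "a", "other": "foobar"}'   # record from file 2

def to_row(obj, _cache={}):
    # Hypothetical bug: the first schema seen is cached and reused for
    # every source, so file 2's records get file 1's fields.
    if "Row" not in _cache:
        _cache["Row"] = namedtuple("Row", sorted(obj))
    Row = _cache["Row"]
    return Row(*(obj.get(f) for f in Row._fields))

r1 = to_row(json.loads(doc1))  # Row(foo=1, key='a') -- correct
r2 = to_row(json.loads(doc2))  # wrong: gets fields ('foo', 'key')
```

A per-source schema (or re-inferring the schema for each file) would avoid the misattribution sketched here.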



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
