[ https://issues.apache.org/jira/browse/SPARK-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-20527.
----------------------------------
    Resolution: Duplicate

I tried to reproduce this, but I could not reproduce it exactly as reported in the 
current master. In my run, both field orderings fail, as below:

{code}
from datetime import datetime

from pyspark.sql import Row

spark.sql('CREATE DATABASE IF NOT EXISTS test_trash')
rows = [
    dict(number=1, letters='real1', some_date=datetime(2017, 12, 1, 3, 15)),
    dict(number=2, letters='real2', some_date=datetime(2017, 12, 2, 3, 15)),
    dict(number=3, letters='real3', some_date=datetime(2017, 12, 3, 3, 15)),
]
rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
df = spark.createDataFrame(rows_rdd)
df.write.mode(saveMode='overwrite').saveAsTable('test_trash.thingy')
{code}

{code}
##############################
# the report says this works as expected (it fails in my run; see below)

schema = spark.sql('SELECT number, some_date FROM test_trash.thingy').schema

rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
df = spark.createDataFrame(rows_rdd, schema)
assert df.count() == 3  # force evaluation so that the schema is actually applied

print("Made it here!")
{code}

produces the following output:

{code}
AttributeError: 'str' object has no attribute 'tzinfo'
        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        ...
{code}


{code}
########################################################################
# this fails, and the only difference is the order of the query fields

schema = spark.sql('SELECT some_date, number FROM test_trash.thingy').schema

rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
df = spark.createDataFrame(rows_rdd, schema)
try:
    df.count()  # force evaluation so that the schema is actually applied
except:
    print("Oh no! Why?!")
    raise
else:
    print("Everything worked just fine.")
{code}

produces the output below:

{code}
AttributeError: 'int' object has no attribute 'tzinfo'
        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        ...
{code}

It looks like the problem is that PySpark sorts the fields of a {{Row}} built from keyword arguments, as below:

{code}
>>> Row(b=2, a=2)
Row(a=2, b=2)
{code}

so the values end up mismatched against the schema whenever the query's field 
order differs from the sorted order. I think this duplicates SPARK-12467.
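
To make the mechanism concrete, here is a minimal sketch (plain Python; the field names and values are taken from the reproduction above, and the pairing shown is illustrative) of how {{StructType.toInternal}} matches values to fields purely by position via {{zip(self.fields, obj)}}, as seen in the traceback:

{code}
from datetime import datetime

# Values as Row(**r) stores them: field names sorted alphabetically,
# i.e. (letters, number, some_date).
row_values = ('real1', 1, datetime(2017, 12, 1, 3, 15))

# Field order taken from the SELECT instead, e.g. SELECT some_date, number.
schema_fields = ['some_date', 'number']

# zip() pairs them positionally, so a non-datetime value can land on the
# TimestampType field, which then fails in toInternal with
# "AttributeError: ... object has no attribute 'tzinfo'".
for name, value in zip(schema_fields, row_values):
    print('%s <- %r' % (name, value))
# some_date <- 'real1'
# number <- 1
{code}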

I am resolving this as a duplicate of that issue. Please reopen it if I 
misunderstood.
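
As a side note (my own sketch, not from the report): constructing the {{Row}} class with an explicit field order avoids the alphabetical sorting that {{Row(**kwargs)}} applies, so the values line up with the queried schema:

{code}
from pyspark.sql import Row

# Workaround sketch, assuming the `rows` and `schema` variables from the
# reproduction above: fix the field order explicitly instead of relying
# on Row(**kwargs).
Thingy = Row('some_date', 'number')  # order matches the SELECT / schema
rows_rdd = spark.sparkContext.parallelize(rows).map(
    lambda r: Thingy(r['some_date'], r['number']))
df = spark.createDataFrame(rows_rdd, schema)
df.count()  # should no longer raise
{code}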

> Schema issues when fields are queried in a different order
> ----------------------------------------------------------
>
>                 Key: SPARK-20527
>                 URL: https://issues.apache.org/jira/browse/SPARK-20527
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: John Berryman
>
> In the code below I
> * create a dataframe
> * shove it into hive
> * use spark.sql to query it with different field orderings, and retrieve the schema
> * try to apply the same data to the schema
> Observation: the order of the fields in the spark.sql query matters; in one order the schema is successfully applied, in the other order we get an error.
> Expectation: since the ``rows`` all have named fields, I would expect that the ordering should not matter; the error appears to be a bug.
> Here is the code to trigger this behavior:
> {code}
> #######
> # setup
> from datetime import datetime
> from pyspark.sql import Row
> spark.sql('CREATE DATABASE test_trash')
> rows = [
>     dict(number=1, letters='real1', some_date=datetime(2017,12,1,3,15)),
>     dict(number=2, letters='real2', some_date=datetime(2017,12,2,3,15)),
>     dict(number=3, letters='real3', some_date=datetime(2017,12,3,3,15)),
> ]
> rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
> df = spark.createDataFrame(rows_rdd)
> df.write.mode(saveMode='overwrite').saveAsTable('test_trash.thingy')
> ##############################
> # this works as I would expect
> schema = spark.sql('SELECT number, some_date FROM test_trash.thingy').schema
> rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
> df = spark.createDataFrame(rows_rdd, schema)
> assert df.count() == 3 # force evaluation so that the schema is evaluated
> print "Made it here! "
> ########################################################################
> # this fails and the only difference is in the order of the query fields
> schema = spark.sql('SELECT some_date, number FROM test_trash.thingy').schema
> rows_rdd = spark.sparkContext.parallelize(rows).map(lambda r: Row(**r))
> df = spark.createDataFrame(rows_rdd, schema)
> try:
>     df.count()  # force evaluation so that the schema is evaluated
> except:
>     print "Oh no! Why?!"
>     raise
> else:
>     print "Everything worked just fine."
> {code}
> results in
> {code}
> Made it here! 
> Oh no! Why?!
> Py4JJavaError                             Traceback (most recent call last)
> <ipython-input-90-353e9a10b2b7> in <module>()
>      35 df = spark.createDataFrame(rows_rdd, schema)
>      36 try:
> ---> 37     df.count()  # force evaluation so that the schema is evaluated
>      38 except:
>      39     print "Oh no! Why?!"
> /usr/local/spark/python/pyspark/sql/dataframe.py in count(self)
>     378         2
>     379         """
> --> 380         return int(self._jdf.count())
>     381 
>     382     @ignore_unicode_prefix
> /usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
>    1131         answer = self.gateway_client.send_command(command)
>    1132         return_value = get_return_value(
> -> 1133             answer, self.gateway_client, self.target_id, self.name)
>    1134 
>    1135         for temp_arg in temp_args:
> /usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
>      61     def deco(*a, **kw):
>      62         try:
> ---> 63             return f(*a, **kw)
>      64         except py4j.protocol.Py4JJavaError as e:
>      65             s = e.java_exception.toString()
> /usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
>     317                 raise Py4JJavaError(
>     318                     "An error occurred while calling {0}{1}{2}.\n".
> --> 319                     format(target_id, ".", name), value)
>     320             else:
>     321                 raise Py4JError(
> Py4JJavaError: An error occurred while calling o2469.count.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 127.0 failed 1 times, most recent failure: Lost task 0.0 in stage 127.0 (TID 100, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
>     process()
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
>     serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
>     vs = list(itertools.islice(iterator, batch))
>   File "/usr/local/spark/python/pyspark/sql/types.py", line 576, in toInternal
>     return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
>   File "/usr/local/spark/python/pyspark/sql/types.py", line 576, in <genexpr>
>     return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 436, in toInternal
>     return self.dataType.toInternal(obj)
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
>     seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
> AttributeError: 'int' object has no attribute 'tzinfo'
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
>       at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
>       at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at scala.Option.foreach(Option.scala:257)
>       at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>       at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
>       at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
>       at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
>       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
>       at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
>       at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
>       at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
>       at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2405)
>       at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2404)
>       at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2778)
>       at org.apache.spark.sql.Dataset.count(Dataset.scala:2404)
>       at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>       at py4j.Gateway.invoke(Gateway.java:280)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:214)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
>     process()
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
>     serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
>     vs = list(itertools.islice(iterator, batch))
>   File "/usr/local/spark/python/pyspark/sql/types.py", line 576, in toInternal
>     return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
>   File "/usr/local/spark/python/pyspark/sql/types.py", line 576, in <genexpr>
>     return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 436, in toInternal
>     return self.dataType.toInternal(obj)
>   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
>     seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
> AttributeError: 'int' object has no attribute 'tzinfo'
>       at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
>       at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
>       at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
>       at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       ... 1 more
> {code}


