Hi all, I have just come across a problem with a table that has a few bigint columns: if I read that table into a DataFrame and then collect it in PySpark, the bigints come back as plain Python integers.
(The problem is that when I write it back to another table, I detect the Hive type programmatically from the Python type, so those columns get turned into ints.) Is this intended behaviour or a bug? Thanks,
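
For reference, here is a minimal sketch of the round trip, assuming a SparkSession with Hive support; the table names "src"/"dst" and the column "id" are hypothetical. Since Python has no separate bigint type, one way around the inference problem is to declare the schema explicitly instead of deriving it from the collected values:

from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StructField, StructType

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# "id" is a bigint column in the Hive table "src" (hypothetical names)
rows = spark.table("src").collect()
print(type(rows[0].id))   # <class 'int'> -- the bigint arrives as a plain Python int

# Inferring the Hive type from the Python value would map the column back
# to int, so declare the schema explicitly when writing the rows back out:
schema = StructType([StructField("id", LongType(), True)])
spark.createDataFrame([(r.id,) for r in rows], schema) \
     .write.saveAsTable("dst")   # "id" stays a bigint in the new table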