Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140167087
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140166654
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140153330
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140148777
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140148164
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user logannc commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140147933
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140144263
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140143875
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140142458
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r140141889
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1761,12 +1761,37 @@ def toPandas(self):
raise ImportError("%s\n%s" % (e.message, msg))
Github user a10y commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r139450187
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1810,17 +1810,20 @@ def _to_scala_map(sc, jm):
return sc._jvm.PythonUtils.toScalaMap(jm)
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r134925269
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1762,7 +1762,7 @@ def toPandas(self):
else:
--- End diff --
If we use this
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r134033952
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1762,7 +1762,7 @@ def toPandas(self):
else:
--- End diff --
If we wanted to
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r134031415
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1731,7 +1731,7 @@ def toDF(self, *cols):
return DataFrame(jdf, self.sql_ctx)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18945#discussion_r133921465
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1731,7 +1731,7 @@ def toDF(self, *cols):
return DataFrame(jdf, self.sql_ctx)
GitHub user logannc opened a pull request:
https://github.com/apache/spark/pull/18945
Add option to convert nullable int columns to float columns in toPandas to prevent needless Exceptions during routine use.
Add the `strict=True` kwarg to DataFrame.toPandas to allow
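The motivation behind this PR is a pandas limitation: NumPy integer dtypes have no null representation, so converting a Spark integer column that contains nulls must either upcast the column to float (using NaN for null) or fail. A minimal sketch of that behavior, with a hypothetical strict-conversion check for illustration (the column name, values, and helper function here are assumptions, not code from the PR):

```python
import numpy as np
import pandas as pd

# An integer column with a missing value: pandas cannot keep an int
# dtype, so it silently upcasts the whole column to float64 and stores
# the missing entry as NaN.
df = pd.DataFrame({"age": [25, None, 31]})
print(df["age"].dtype)  # float64

# A "strict" mode, in the spirit of the proposed kwarg, would raise
# instead of silently changing the column's type. Hypothetical helper:
def to_int_strict(series):
    if series.isnull().any():
        raise ValueError(
            "column contains nulls; cannot convert to int without upcasting")
    return series.astype(np.int64)
```

The trade-off the kwarg exposes: silent upcasting keeps `toPandas` working on routine data with nulls, while strict mode surfaces the type change for callers that depend on exact integer dtypes.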