[ https://issues.apache.org/jira/browse/SPARK-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15212563#comment-15212563 ]
Luke Miner commented on SPARK-14141:
------------------------------------

Is there any way to do this process in chunks: read a chunk of data into a dict, then append it to a pandas dataframe with the pre-specified datatypes? The big advantage of a pandas dataframe with categorical datatypes is that it can have a much smaller memory footprint. However, if everything is loaded into a huge dict beforehand, there's much less of an upside.

> Let user specify datatypes of pandas dataframe in toPandas()
> ------------------------------------------------------------
>
>                 Key: SPARK-14141
>                 URL: https://issues.apache.org/jira/browse/SPARK-14141
>             Project: Spark
>          Issue Type: New Feature
>          Components: Input/Output, PySpark, SQL
>            Reporter: Luke Miner
>            Priority: Minor
>
> It would be nice to specify the dtypes of the pandas dataframe during the toPandas() call. Something like:
> bq. pdf = df.toPandas(dtypes={'a': 'float64', 'b': 'datetime64', 'c': 'bool', 'd': 'category'})
> Since dtypes like `category` are more memory efficient, you could potentially load many more rows into a pandas dataframe with this option without running out of memory.
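The chunked approach the comment asks about can be sketched in plain pandas. This is only an illustration, not Spark's implementation: the `chunks` list of row dicts is hypothetical stand-in data for whatever a driver-side iterator over the Spark rows would yield. The key detail is pinning the categorical's categories up front with a shared `CategoricalDtype`, since `pd.concat` only preserves the `category` dtype when every chunk carries an identical one; this way no full-size dict of Python objects is ever materialized.

```python
import pandas as pd

# Pre-specified dtypes, as in the requested toPandas(dtypes=...) API.
# Pinning the categories up front keeps the dtype identical across chunks,
# so pd.concat preserves it instead of falling back to object.
cat = pd.CategoricalDtype(categories=["x", "y"])
dtypes = {"a": "float64", "c": "bool", "d": cat}

# Hypothetical chunks of rows, standing in for an iterator over Spark rows.
chunks = [
    [{"a": 1.0, "c": True, "d": "x"}, {"a": 2.0, "c": False, "d": "y"}],
    [{"a": 3.0, "c": True, "d": "x"}],
]

# Convert each chunk to the compact dtypes immediately, so only one chunk
# of boxed Python objects is alive at a time.
frames = [pd.DataFrame(rows).astype(dtypes) for rows in chunks]
pdf = pd.concat(frames, ignore_index=True)
```

With fixed categories, each value in column `d` is stored as a small integer code rather than a Python string, which is where the memory saving the comment describes comes from.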