[ https://issues.apache.org/jira/browse/SPARK-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607190#comment-14607190 ]
Felix Cheung commented on SPARK-8277:
-------------------------------------

What would be a better approach? Would it work to serialize the R native DataFrame into bytes and then run a version of SQLUtils.bytesToRow?

> SparkR createDataFrame is slow
> ------------------------------
>
>                 Key: SPARK-8277
>                 URL: https://issues.apache.org/jira/browse/SPARK-8277
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.4.0
>            Reporter: Shivaram Venkataraman
>
> For example, calling `createDataFrame` on the data from
> http://s3-us-west-2.amazonaws.com/sparkr-data/flights.csv takes a really long
> time.
> This is mainly because we try to convert a DataFrame to a List in order to
> parallelize it by rows, and the conversion from DF to list is very slow for
> large data frames.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
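The contrast between the two approaches discussed above (materializing one object per row before parallelizing, versus serializing the whole columnar structure into bytes in one shot) can be sketched outside SparkR. This is only an illustrative Python analogy, not Spark's actual code path; `pickle` stands in for SparkR's serializer, and the column names are made up:

```python
import pickle

# A small in-memory "data frame" stored column-wise, as R does.
columns = {
    "origin": ["SFO", "JFK", "LAX"] * 1000,
    "delay":  [12, 0, 35] * 1000,
}
n = len(columns["origin"])

# Slow path (analogous to what the issue describes): convert the columnar
# data into one per-row object before shipping it out, which costs one
# allocation and one dict build per row.
rows = [{name: col[i] for name, col in columns.items()} for i in range(n)]

# Direction the comment suggests: serialize the whole structure into a
# single byte payload, then decode it on the receiving side (in Spark this
# would be a bytesToRow-style routine on the JVM).
payload = pickle.dumps(columns)
decoded = pickle.loads(payload)

assert len(rows) == n
assert decoded == columns
```

The point of the sketch is that the second path does a constant number of serialization calls regardless of row count, while the first does O(nrow) per-row conversions, which is where the slowdown on large data frames like flights.csv comes from.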