[ https://issues.apache.org/jira/browse/SPARK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Davies Liu updated SPARK-13352:
-------------------------------
    Fix Version/s:     (was: 1.6.2)

> BlockFetch does not scale well on large block
> ---------------------------------------------
>
>                 Key: SPARK-13352
>                 URL: https://issues.apache.org/jira/browse/SPARK-13352
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Core
>            Reporter: Davies Liu
>            Assignee: Zhang, Liye
>            Priority: Critical
>             Fix For: 2.0.0
>
> BlockManager.getRemoteBytes() performs poorly on large blocks:
> {code}
> test("block manager") {
>   val N = 500 << 20  // 500 MB
>   val bm = sc.env.blockManager
>   val blockId = TaskResultBlockId(0)
>   val buffer = ByteBuffer.allocate(N)
>   buffer.limit(N)
>   bm.putBytes(blockId, buffer, StorageLevel.MEMORY_AND_DISK_SER)
>   val result = bm.getRemoteBytes(blockId)
>   assert(result.isDefined)
>   assert(result.get.limit() === N)
> }
> {code}
> Here are the runtimes for different block sizes:
> {code}
> 50M     3 seconds
> 100M    7 seconds
> 250M   33 seconds
> 500M    2 min
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
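The timing table above suggests super-linear cost: if getRemoteBytes() scaled linearly, effective throughput (MB/s) would stay roughly constant as the block grows, but it drops by roughly 4x between 50 MB and 500 MB. A minimal back-of-the-envelope check of the reported numbers (plain Scala, not Spark code; `BlockFetchScaling` is a hypothetical name used only for this sketch):

```scala
// Sketch: recompute effective throughput from the timings reported in the
// issue. The (size, seconds) pairs below are taken verbatim from the table.
object BlockFetchScaling {
  def main(args: Array[String]): Unit = {
    // (block size in MB, reported fetch time in seconds)
    val timings = Seq((50, 3.0), (100, 7.0), (250, 33.0), (500, 120.0))
    val throughputs = timings.map { case (mb, secs) => (mb, mb / secs) }
    throughputs.foreach { case (mb, mbps) =>
      println(f"$mb%4d MB: $mbps%5.1f MB/s")
    }
    // Throughput falls monotonically with block size, so the fetch cost
    // grows faster than O(n) -- consistent with the reported slowdown.
    val rates = throughputs.map(_._2)
    assert(rates == rates.sorted.reverse)
  }
}
```

The computed rates (about 16.7, 14.3, 7.6, and 4.2 MB/s) make the scaling problem visible at a glance, which is what motivates the Critical priority on this issue.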