GitHub user nishkamravi2 opened a pull request:
https://github.com/apache/spark/pull/1095
Fix for SPARK-2151
An int format is expected for the input memory parameter when spark-submit
is invoked in standalone cluster mode. Make it consistent with the rest of Spark.
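The change described above accepts memory strings such as "30g" or "512M"
(see the last commit below) rather than a bare int. A minimal sketch of such
a conversion is shown here; the class and method names are hypothetical
illustrations, not the actual Spark code in this PR:

```java
// Hypothetical sketch: convert a memory string like "30g" or "512M"
// into megabytes, falling back to the old int-of-megabytes behavior
// when no suffix is given. Not the actual Spark implementation.
public class MemoryParam {

    // Returns the memory amount in megabytes.
    static int memoryStringToMb(String str) {
        String lower = str.toLowerCase().trim();
        if (lower.endsWith("k")) {
            // Kilobytes: divide by 1024 to get megabytes.
            return (int) (Long.parseLong(lower.substring(0, lower.length() - 1)) / 1024);
        } else if (lower.endsWith("m")) {
            // Already megabytes.
            return Integer.parseInt(lower.substring(0, lower.length() - 1));
        } else if (lower.endsWith("g")) {
            // Gigabytes: multiply by 1024 to get megabytes.
            return Integer.parseInt(lower.substring(0, lower.length() - 1)) * 1024;
        } else {
            // No suffix: treat the value as megabytes, matching the old int input.
            return Integer.parseInt(lower);
        }
    }

    public static void main(String[] args) {
        System.out.println(memoryStringToMb("30g"));  // 30720
        System.out.println(memoryStringToMb("512M")); // 512
    }
}
```

Lowercasing first makes the suffix check case-insensitive, so "512M" and
"512m" are treated identically.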
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nishkamravi2/spark master
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/1095.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1095
----
commit 681b36f5fb63e14dc89e17813894227be9e2324f
Author: nravi <[email protected]>
Date: 2014-05-08T07:05:33Z
Fix for SPARK-1758: failing test
org.apache.spark.JavaAPISuite.wholeTextFiles
The prefix "file:" is missing in the string inserted as key in HashMap
commit 5108700230fd70b995e76598f49bdf328c971e77
Author: nravi <[email protected]>
Date: 2014-06-03T22:25:22Z
Fix in Spark for the concurrent thread modification issue (SPARK-1097,
HADOOP-10456)
commit 6b840f017870207d23e75de224710971ada0b3d0
Author: nravi <[email protected]>
Date: 2014-06-03T22:34:02Z
Undo the fix for SPARK-1758 (the problem is fixed)
commit df2aeb179fca4fc893803c72a657317f5b5539d7
Author: nravi <[email protected]>
Date: 2014-06-09T19:02:59Z
Improved fix for the ConcurrentModification issue (SPARK-1097, HADOOP-10456)
commit eb663ca20c73f9c467192c95fc528c6f55f202be
Author: nravi <[email protected]>
Date: 2014-06-09T19:04:39Z
Merge branch 'master' of https://github.com/apache/spark
commit 5423a03ddf4d747db7261d08a64e32f44e8be95e
Author: nravi <[email protected]>
Date: 2014-06-10T20:06:07Z
Merge branch 'master' of https://github.com/apache/spark
commit 3bf8fad85813037504189cf1323d381fefb6dfbe
Author: nravi <[email protected]>
Date: 2014-06-16T05:47:00Z
Merge branch 'master' of https://github.com/apache/spark
commit 2b630f94079b82df3ebae2b26a3743112afcd526
Author: nravi <[email protected]>
Date: 2014-06-16T06:00:31Z
Accept memory input as "30g", "512M" instead of an int value, to be
consistent with the rest of Spark
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---