[ https://issues.apache.org/jira/browse/SPARK-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337550#comment-14337550 ]

Sean Owen commented on SPARK-3691:
----------------------------------

I think this is basically "local" mode. That would map to what MapReduce 
provides, as far as I understand what you mean. You can also set up a one-node 
YARN+HDFS cluster if you like. VMs exist with Spark preinstalled, and Spark 
supports launching on EC2. What is this seeking beyond that?
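
For reference, here's a minimal sketch of the kind of test downstream projects 
run against local mode today; it assumes only that spark-core is on the test 
classpath, and all names are illustrative. The comment also notes the master 
string for the undocumented local-cluster mode mentioned in the description:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative smoke test, not an official API example.
object LocalModeSmokeTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("local-mode-smoke-test")
      // "local[2]" runs the driver and two executor threads in one JVM.
      // The undocumented local-cluster mode instead takes
      // "local-cluster[numWorkers,coresPerWorker,memoryPerWorkerMB]",
      // e.g. "local-cluster[2,1,1024]", and expects a Spark build on the box.
      .setMaster("local[2]")
    val sc = new SparkContext(conf)
    try {
      val sum = sc.parallelize(1 to 100, 4).reduce(_ + _)
      assert(sum == 5050, s"unexpected sum: $sum")
    } finally {
      sc.stop()
    }
  }
}
{code}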

> Provide a mini cluster for testing system built on Spark
> --------------------------------------------------------
>
>                 Key: SPARK-3691
>                 URL: https://issues.apache.org/jira/browse/SPARK-3691
>             Project: Spark
>          Issue Type: Test
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Xuefu Zhang
>
> Most Hadoop components, such as MR, DFS, Tez, and YARN, provide a mini 
> cluster that can be used to test external systems that rely on those 
> frameworks, such as Pig and Hive. While Spark's local mode can be used for 
> such testing and is friendly for debugging, it's too far from a real Spark 
> cluster, so a lot of problems cannot be discovered. Thus, an equivalent of 
> the Hadoop MR mini cluster in Spark would be very helpful for testing 
> systems such as Hive/Pig on Spark.
>
> Spark's local-cluster mode was considered for this purpose, but it doesn't 
> fit well because it requires a Spark installation on the box where the 
> tests run. Also, local-cluster isn't exposed as a public, documented option.
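
For context, the Hadoop mini clusters the description refers to are embedded 
directly in the test JVM, roughly like this (a sketch, assuming the 
hadoop-hdfs test artifact is on the classpath; names are illustrative):

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hdfs.MiniDFSCluster

// Illustrative use of Hadoop's MiniDFSCluster: a real NameNode and DataNode
// run inside the test JVM, with no separate installation required.
object MiniDfsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    val cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build()
    try {
      cluster.waitActive()
      val fs = cluster.getFileSystem
      val dir = new Path("/tmp/mini-cluster-test")
      fs.mkdirs(dir)
      assert(fs.exists(dir))
    } finally {
      cluster.shutdown()
    }
  }
}
{code}

The Spark equivalent being requested here would let Hive/Pig tests stand up a 
small but real Spark cluster in the same way.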



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
