Hey,

On Thu, Dec 12, 2013 at 5:10 PM, Pinak Pani <[email protected]> wrote:

> Do you mean it has been decided not to support YARN 2.2 in any future
> release of version 0.8?
>
>
Well, AFAIK, yes. But it might get into 0.9.
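
The errors in the log below are all the same underlying problem: the YARN
client API changed incompatibly between the 2.0.x-alpha line and 2.2.0.
Spark 0.8's yarn module talks to the ResourceManager through the old
AMRMProtocol RPC proxy and extends YarnClientImpl, and in 2.2 those were
removed or moved in favour of the client libraries under
org.apache.hadoop.yarn.client.api (AMRMClient, YarnClient, NMClient). As a
rough sketch only (this uses the hadoop-yarn-client 2.2.0 API, it is not
Spark code, and the object name is made up), AM registration in 2.2 looks
like:

    // Sketch of the YARN 2.2 AM-side client API that replaced AMRMProtocol
    // (the "not found: type AMRMProtocol" errors below). Only meaningful
    // when run inside a container the RM launched for this AM.
    import org.apache.hadoop.yarn.client.api.AMRMClient
    import org.apache.hadoop.yarn.conf.YarnConfiguration

    object AmSketch {
      def main(args: Array[String]): Unit = {
        val conf = new YarnConfiguration()
        // AMRMClient supersedes the hand-rolled
        // rpc.getProxy(classOf[AMRMProtocol], ...) seen in the errors.
        val amClient: AMRMClient[AMRMClient.ContainerRequest] =
          AMRMClient.createAMRMClient()
        amClient.init(conf)
        amClient.start()
        // Register this AM with the ResourceManager: host, RPC port, tracking URL.
        amClient.registerApplicationMaster("localhost", 0, "")
        // ... request containers via amClient.addContainerRequest(...),
        // then poll amClient.allocate(progress) for allocations ...
        // (a real AM would call unregisterApplicationMaster(...) before stopping)
        amClient.stop()
      }
    }

None of those classes exist in 2.0.5-alpha, which is why the same yarn
module cannot compile against both versions.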


> http://mail-archives.apache.org has a big usability issue: you do not get
> a URL at the thread level, only at the month level. Can you please tell me
> the subject of the mail you are referring to? I will search the threads.
>
>
Scala 2.10 Merge.


> Thanks.
>
>
> On Thu, Dec 12, 2013 at 4:45 PM, Prashant Sharma <[email protected]> wrote:
>
>> I don't think YARN 2.2 is supported in 0.8, and very soon it will not be
>> supported in master either. Read this thread:
>> http://mail-archives.apache.org/mod_mbox/spark-dev/201312.mbox/browser
>>
>>
>> On Thu, Dec 12, 2013 at 4:24 PM, Pinak Pani <[email protected]> wrote:
>>
>>> I am trying to set up Spark with YARN 2.2.0. My Hadoop is plain Hadoop
>>> from the Apache Hadoop website. When I run the SBT build against 2.2.0 it
>>> fails, while it compiles (with a lot of warnings) against Hadoop
>>> 2.0.5-alpha.
>>>
>>> How can I compile Spark against YARN 2.2.0?
>>>
>>> There is a related thread here:
>>> https://groups.google.com/forum/#!topic/spark-users/8Gm6ByvdNME
>>> It did not help, though.
>>>
>>> Also, I am new to SBT.
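>>>
>>> For reference, the same command does compile for me (with lots of
>>> warnings) when pointed at a Hadoop version the 0.8 yarn module builds
>>> against:
>>>
>>> SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt clean assembly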
>>>
>>> Here is the error log:
>>>
>>> [root@ip-10-110-241-90 spark-0.8.0-incubating]# SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt clean assembly
>>> [info] Loading project definition from
>>> /tmp/spark/spark-0.8.0-incubating/project/project
>>>
>>> [-- snip --]
>>>
>>> [warn]     jobCommitter.cleanupJob(jobTaskContext)
>>> [warn]                  ^
>>> [warn]
>>> /tmp/spark/spark-0.8.0-incubating/core/src/main/scala/org/apache/spark/scheduler/InputFormatInfo.scala:98:
>>> constructor Job in class Job is deprecated: see corresponding Javadoc for
>>> more information.
>>> [warn]     val job = new Job(conf)
>>> [warn]               ^
>>> [warn] 9 warnings found
>>> [warn] Note: Some input files use unchecked or unsafe operations.
>>> [warn] Note: Recompile with -Xlint:unchecked for details.
>>> [info] Compiling 8 Scala sources to
>>> /tmp/spark/spark-0.8.0-incubating/yarn/target/scala-2.9.3/classes...
>>> [info] Compiling 50 Scala sources to
>>> /tmp/spark/spark-0.8.0-incubating/streaming/target/scala-2.9.3/classes...
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:42:
>>> not found: type AMRMProtocol
>>> [error]   private var resourceManager: AMRMProtocol = null
>>> [error]                                ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:126:
>>> not found: type AMRMProtocol
>>> [error]   private def registerWithResourceManager(): AMRMProtocol = {
>>> [error]                                              ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:119:
>>> value AM_CONTAINER_ID_ENV is not a member of object
>>> org.apache.hadoop.yarn.api.ApplicationConstants
>>> [error]     val containerIdString =
>>> envs.get(ApplicationConstants.AM_CONTAINER_ID_ENV)
>>> [error]                                                           ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:131:
>>> not found: type AMRMProtocol
>>> [error]     return rpc.getProxy(classOf[AMRMProtocol], rmAddress,
>>> conf).asInstanceOf[AMRMProtocol]
>>> [error]                                 ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:131:
>>> not found: type AMRMProtocol
>>> [error]     return rpc.getProxy(classOf[AMRMProtocol], rmAddress,
>>> conf).asInstanceOf[AMRMProtocol]
>>> [error]
>>>              ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:138:
>>> value setApplicationAttemptId is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest
>>> [error]     appMasterRequest.setApplicationAttemptId(appAttemptId)
>>> [error]                      ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:30:
>>> AMRMProtocol is not a member of org.apache.hadoop.yarn.api
>>> [error] import org.apache.hadoop.yarn.api.AMRMProtocol
>>> [error]        ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:504:
>>> not found: type AMRMProtocol
>>> [error]                    resourceManager: AMRMProtocol, appAttemptId:
>>> ApplicationAttemptId,
>>> [error]                                     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:24:
>>> AMResponse is not a member of org.apache.hadoop.yarn.api.records
>>> [error] import org.apache.hadoop.yarn.api.records.{AMResponse,
>>> ApplicationAttemptId, ContainerId, Priority, Resource, ResourceRequest,
>>> ContainerStatus, Container}
>>> [error]        ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:493:
>>> not found: type AMRMProtocol
>>> [error]                    resourceManager: AMRMProtocol, appAttemptId:
>>> ApplicationAttemptId,
>>> [error]                                     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:485:
>>> not found: type AMRMProtocol
>>> [error]                    resourceManager: AMRMProtocol, appAttemptId:
>>> ApplicationAttemptId,
>>> [error]                                     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:325:
>>> value setAppAttemptId is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
>>> [error]     finishReq.setAppAttemptId(appAttemptId)
>>> [error]               ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:326:
>>> value setFinishApplicationStatus is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
>>> [error]     finishReq.setFinishApplicationStatus(status)
>>> [error]               ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:31:
>>> YarnClientImpl is not a member of org.apache.hadoop.yarn.client
>>> [error] import org.apache.hadoop.yarn.client.YarnClientImpl
>>> [error]        ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:42:
>>> not found: type YarnClientImpl
>>> [error] class Client(conf: Configuration, args: ClientArguments) extends
>>> YarnClientImpl with Logging {
>>> [error]
>>>  ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:51:
>>> not found: value init
>>> [error]     init(yarnConf)
>>> [error]     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:52:
>>> not found: value start
>>> [error]     start()
>>> [error]     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:55:
>>> value getNewApplication is not a member of AnyRef with
>>> org.apache.spark.Logging with ScalaObject
>>> [error]     val newApp = super.getNewApplication()
>>> [error]                        ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:66:
>>> value setUser is not a member of
>>> org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
>>> [error]
>>> appContext.setUser(UserGroupInformation.getCurrentUser().getShortUserName())
>>> [error]                ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:286:
>>> value submitApplication is not a member of AnyRef with
>>> org.apache.spark.Logging with ScalaObject
>>> [error]     super.submitApplication(appContext)
>>> [error]           ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:76:
>>> value getYarnClusterMetrics is not a member of AnyRef with
>>> org.apache.spark.Logging with ScalaObject
>>> [error]     val clusterMetrics: YarnClusterMetrics =
>>> super.getYarnClusterMetrics
>>> [error]                                                    ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:79:
>>> value getQueueInfo is not a member of AnyRef with org.apache.spark.Logging
>>> with ScalaObject
>>> [error]     val queueInfo: QueueInfo = super.getQueueInfo(args.amQueue)
>>> [error]                                      ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:216:
>>> value getMinimumResourceCapability is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse
>>> [error]     val minResMemory: Int =
>>> newApp.getMinimumResourceCapability().getMemory()
>>> [error]                                    ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:273:
>>> value setResource is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]     amContainer.setResource(capability)
>>> [error]                 ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:278:
>>> value setContainerTokens is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]
>>> amContainer.setContainerTokens(ByteBuffer.wrap(dob.getData()))
>>> [error]                 ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:292:
>>> value getApplicationReport is not a member of AnyRef with
>>> org.apache.spark.Logging with ScalaObject
>>> [error]       val report = super.getApplicationReport(appId)
>>> [error]                          ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:34:
>>> ProtoUtils is not a member of org.apache.hadoop.yarn.util
>>> [error] import org.apache.hadoop.yarn.util.{Apps, ConverterUtils,
>>> Records, ProtoUtils}
>>> [error]        ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:48:
>>> not found: type ContainerManager
>>> [error]   var cm: ContainerManager = null
>>> [error]           ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:202:
>>> not found: type ContainerManager
>>> [error]   def connectToCM: ContainerManager = {
>>> [error]                    ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:63:
>>> value setContainerId is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]     ctx.setContainerId(container.getId())
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:64:
>>> value setResource is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]     ctx.setResource(container.getResource())
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:103:
>>> value setUser is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]
>>> ctx.setUser(UserGroupInformation.getCurrentUser().getShortUserName())
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:108:
>>> value setContainerTokens is not a member of
>>> org.apache.hadoop.yarn.api.records.ContainerLaunchContext
>>> [error]     ctx.setContainerTokens(ByteBuffer.wrap(dob.getData()))
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:212:
>>> not found: value ProtoUtils
>>> [error]
>>> user.addToken(ProtoUtils.convertFromProtoFormat(containerToken, cmAddress))
>>> [error]                     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala:216:
>>> not found: type ContainerManager
>>> [error]         .doAs(new PrivilegedExceptionAction[ContainerManager] {
>>> [error]                                             ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:49:
>>> not found: type AMRMProtocol
>>> [error] private[yarn] class YarnAllocationHandler(val conf:
>>> Configuration, val resourceManager: AMRMProtocol,
>>> [error]
>>>                         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:87:
>>> value getAMResponse is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
>>> [error]     val amResp =
>>> allocateWorkerResources(workersToRequest).getAMResponse
>>> [error]                                                            ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:296:
>>> value getHostName is not a member of
>>> org.apache.hadoop.yarn.api.records.ResourceRequest
>>> [error]       val candidateHost = container.getHostName
>>> [error]                                     ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:374:
>>> value setApplicationAttemptId is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
>>> [error]     req.setApplicationAttemptId(appAttemptId)
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:376:
>>> value addAllAsks is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
>>> [error]     req.addAllAsks(resourceRequests)
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:379:
>>> value addAllReleases is not a member of
>>> org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
>>> [error]     req.addAllReleases(releasedContainerList)
>>> [error]         ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:391:
>>> value getHostName is not a member of
>>> org.apache.hadoop.yarn.api.records.ResourceRequest
>>> [error]       logInfo("rsrcRequest ... host : " + req.getHostName + ",
>>> numContainers : " + req.getNumContainers +
>>> [error]                                               ^
>>> [error]
>>> /tmp/spark/spark-0.8.0-incubating/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala:441:
>>> value setHostName is not a member of
>>> org.apache.hadoop.yarn.api.records.ResourceRequest
>>> [error]     rsrcRequest.setHostName(hostname)
>>> [error]                 ^
>>> [error] 43 errors found
>>>
>>> [-- snip --]
>>>
>>> [warn] Merging
>>> 'org/yaml/snakeyaml/constructor/SafeConstructor$ConstructYamlBool.class'
>>> with strategy 'first'
>>> [warn] Merging
>>> 'org/yaml/snakeyaml/emitter/Emitter$ExpectStreamStart.class' with strategy
>>> 'first'
>>> [warn] Strategy 'concat' was applied to 2 files
>>> [warn] Strategy 'discard' was applied to 2 files
>>> [warn] Strategy 'first' was applied to 794 files
>>> [info] Checking every *.class/*.jar file's SHA-1.
>>> [info] SHA-1: deebf2bd4f022965649cfe78d51ff1c8780c92a2
>>> [info] Packaging
>>> /tmp/spark/spark-0.8.0-incubating/examples/target/scala-2.9.3/spark-examples-assembly-0.8.0-incubating.jar
>>> ...
>>> [info] Done packaging.
>>> [error] (yarn/compile:compile) Compilation failed
>>> [error] Total time: 668 s, completed 12 Dec, 2013 10:15:25 AM
>>>
>>>
>>
>>
>> --
>> s
>>
>
>


-- 
s
