Hi, Karthik,

First of all, which platform do you want to use: Zeppelin or Hive-on-Spark?
Hive-on-Spark is a subproject, developed by the Hive community, for running
Hive on a Spark cluster; it has no relationship to Zeppelin. If you want to
use Zeppelin with HiveContext, just use 'hc', but you will not see the
queries in your ResourceManager UI because they run inside Zeppelin through
Spark.
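
For example, a minimal sketch of a notebook paragraph that uses 'hc' (the
query here is only an illustration; replace it with your own):

  %spark
  // 'hc' is the HiveContext instance mentioned above, available when the
  // Spark interpreter is built with Hive support
  hc.sql("SHOW TABLES").show()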

Regards,
JL

On Tue, Jul 14, 2015 at 2:06 AM, moon soo Lee <m...@apache.org> wrote:

> Hi,
>
> Now it looks like an HDFS permission problem.
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:
> drwxr-xr-x
>
> You'll need to give the 'root' user permission to access the '/user'
> directory ('root' is not a superuser in your HDFS).
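>
> For example, one way to fix it (a sketch only, assuming the 'hdfs' user is
> your HDFS superuser and Zeppelin is running as 'root'):
>
>   # create a home directory for 'root' and hand ownership to it
>   sudo -u hdfs hdfs dfs -mkdir -p /user/root
>   sudo -u hdfs hdfs dfs -chown root:root /user/root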
>
> Hope this helps.
>
> Best,
> moon
>
> On Mon, Jul 13, 2015 at 9:36 AM Vadla, Karthik <karthik.va...@intel.com>
> wrote:
>
>>  Hi Moon,
>>
>>
>>
>> Yes, I did set the master property and export the Hadoop config:
>>
>>
>>
>> master = yarn-client      (in the Interpreter settings)
>>
>> export HADOOP_CONF_DIR=/etc/hadoop/conf      (in the zeppelin-env.sh file)
>>
>>
>>
>> It is throwing the error below.
>>
>>
>>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=root, access=WRITE,
>> inode="/user":hdfs:supergroup:drwxr-xr-x
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6596)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6578)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6530)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4334)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4304)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
>>
>>        at
>> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
>>
>>        at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
>>
>>        at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>
>>        at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>
>>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
>>
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>>
>>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
>>
>>        at java.security.AccessController.doPrivileged(Native Method)
>>
>>        at javax.security.auth.Subject.doAs(Subject.java:415)
>>
>>        at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>>
>>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
>>
>>
>>
>>        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>
>>        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>
>>        at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>
>>        at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>
>>        at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>        at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>>        at java.lang.reflect.Method.invoke(Method.java:483)
>>
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>
>>        at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)
>>
>>        at
>> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2760)
>>
>>        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2731)
>>
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>
>>        at
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>
>>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1817)
>>
>>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:597)
>>
>>        at
>> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:224)
>>
>>        at
>> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
>>
>>        at
>> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
>>
>>        at
>> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:58)
>>
>>        at
>> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
>>
>>        at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
>>
>>        at
>> org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:301)
>>
>>        at
>> org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
>>
>>        at
>> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:423)
>>
>>        at
>> org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
>>
>>        at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
>>
>>        at
>> org.apache.zeppelin.spark.PySparkInterpreter.getSparkInterpreter(PySparkInterpreter.java:353)
>>
>>        at
>> org.apache.zeppelin.spark.PySparkInterpreter.getJavaSparkContext(PySparkInterpreter.java:374)
>>
>>        at
>> org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:140)
>>
>>        at
>> org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
>>
>>        at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
>>
>>        at
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:92)
>>
>>        at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:275)
>>
>>        at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
>>
>>        at
>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
>>
>>        at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>
>>        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>
>>        at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>
>>        at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>
>>        at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>
>>        at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>
>>        at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>>
>>
>>
>>
>> Thanks
>>
>> Karthik
>>
>> *From:* moon soo Lee [mailto:m...@apache.org]
>> *Sent:* Sunday, July 12, 2015 9:05 AM
>> *To:* users@zeppelin.incubator.apache.org
>> *Subject:* Re: Yarn configuration on Zeppelin
>>
>>
>>
>> Hi,
>>
>>
>>
>> Did you set the 'master' property to 'yarn-client' in the 'Interpreter' menu?
>>
>> You'll also need to export HADOOP_CONF_DIR in the bin/zeppelin-env.sh file.
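>>
>> For example, a minimal sketch of the two settings (the /etc/hadoop/conf
>> path is an assumption; point HADOOP_CONF_DIR at wherever your cluster
>> keeps its Hadoop client configuration):
>>
>>   # 'master' property of the spark interpreter, in the Interpreter menu:
>>   master = yarn-client
>>
>>   # in zeppelin-env.sh:
>>   export HADOOP_CONF_DIR=/etc/hadoop/conf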
>>
>>
>>
>> Hope this helps.
>>
>>
>>
>> Thanks,
>>
>> moon
>>
>>
>>
>> On Fri, Jul 10, 2015 at 1:26 PM Vadla, Karthik <karthik.va...@intel.com>
>> wrote:
>>
>>  Hi All,
>>
>>
>>
>> I have built my Zeppelin binaries with the YARN profile, using the Maven
>> command below:
>>
>> *mvn clean package -Pspark-1.3 -Ppyspark -Dhadoop.version=2.6.0-cdh5.4.2
>> -Phadoop-2.6 -Pyarn -DskipTests*
>>
>>
>>
>> I have enabled the *hive-on-spark* option in Cloudera Manager and copied
>> *hive-site.xml* to my Zeppelin *conf/* folder.
>>
>> But I still can't see any queries run on Spark from the Zeppelin notebook
>> in my YARN ResourceManager web UI (master)
>> <http://master.trinity2.cluster.gao-nova:8088/>.
>>
>>
>>
>> Do I need to do any specific configuration?
>>
>>
>>
>> Reading some previous posts, I got the idea that Zeppelin uses
>> HiveServer2. Can anyone tell me where I can find the configuration folder
>> and which files I need to copy?
>>
>>
>>
>> Appreciate your help
>>
>>
>>
>> Thanks
>>
>> Karthik Vadla
>>
>>


-- 
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
