[ https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16722634#comment-16722634 ]
Bharat Viswanadham edited comment on HDDS-851 at 12/17/18 1:47 AM:
-------------------------------------------------------------------

Thank you, [~elek], for the contribution. I have verified the web UI, but when I try running some ozone commands I get the following error. I also don't see any logs in the /opt/hadoop/logs directory to help debug this.

And another question: why do we do this?

{code}
# Download and unpack Apache RAT into the build directory if it is not
# already there, then run the license-header check over the source tree.
if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
  wget "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" -O "$DIR/build/apache-rat.tar.gz"
  cd "$DIR/build"
  tar zvxf apache-rat.tar.gz
  cd -
fi
java -jar "$DIR/build/apache-rat-0.12/apache-rat-0.12.jar" "$DIR" -e .dockerignore -e public -e apache-rat-0.12 -e .git -e .gitignore
{code}

hadoop@bc20ba918f6e:~$ ozone sh volume create /vol1

{code:java}
2018-12-17 01:44:23 WARN  NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-12-17 01:44:24 ERROR OzoneClientFactory:294 - Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient exception: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
	at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
	at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:108)
	at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:71)
	at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:41)
	at picocli.CommandLine.execute(CommandLine.java:919)
	at picocli.CommandLine.access$700(CommandLine.java:104)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
	at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
	at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
	at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
	at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
	at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
	at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:80)
Caused by: java.io.IOException: Getting service list failed, error: INTERNAL_ERROR
	at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:777)
	at org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:155)
	at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:127)
	... 19 more
Getting service list failed, error: INTERNAL_ERROR
{code}
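Since nothing shows up under /opt/hadoop/logs, the container console output seems to be the only place to look. A minimal way to inspect it (a sketch, assuming the all-in-one image logs to stdout; "ozone" is just a hypothetical container name):

{code}
# Start the all-in-one image with a known name, then follow its console logs.
docker run -d --name ozone -p 9878:9878 -p 9876:9876 -p 9874:9874 apache/ozone
docker logs -f ozone        # follow the om/scm/datanode console output
docker exec -it ozone bash  # open a shell inside the container to poke around
{code}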
-d "$DIR/build/apache-rat-0.12" ]; then wget "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" -O "$DIR/build/apache-rat.tar.gz" cd $DIR/build tar zvxf apache-rat.tar.gz cd - fi java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e .dockerignore -e public -e apache-rat-0.12 -e .git -e .gitignore{code} hadoop@bc20ba918f6e:~$ ozone sh volume create /vol1 {code:java} 2018-12-17 01:44:23 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2018-12-17 01:44:24 ERROR OzoneClientFactory:294 - Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient exception: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291) at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169) at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:108) at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:71) at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:41) at picocli.CommandLine.execute(CommandLine.java:919) at picocli.CommandLine.access$700(CommandLine.java:104) at picocli.CommandLine$RunLast.handle(CommandLine.java:1083) at picocli.CommandLine$RunLast.handle(CommandLine.java:1051) at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959) at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242) at picocli.CommandLine.parseWithHandler(CommandLine.java:1181) at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61) at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52) at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:80) Caused by: java.io.IOException: Getting service list failed, error: INTERNAL_ERROR at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:777) at org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:155) at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:127) ... 19 more Getting service list failed, error: INTERNAL_ERROR {code} > Provide official apache docker image for Ozone > ---------------------------------------------- > > Key: HDDS-851 > URL: https://issues.apache.org/jira/browse/HDDS-851 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Reporter: Elek, Marton > Assignee: Elek, Marton > Priority: Major > Attachments: docker-ozone-latest.tar.gz, ozonedocker.png > > > Similar to the apache/hadoop:2 and apache/hadoop:3 images I propose to > provide apache/ozone docker images which includes the voted release binaries. > The image can follow all the conventions from HADOOP-14898 > 1. BRANCHING > I propose to create new docker branches: > docker-ozone-0.3.0-alpha > docker-ozone-latest > And ask INFRA to register docker-ozone-(.*) in the dockerhub to create > apache/ozone: images > 2. 
> 2. RUNNING
>
> I propose to create a default runner script which starts om + scm + datanode
> + s3g all together. With this approach you can start a full ozone cluster as
> easily as:
>
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
>
> That's all. This is an all-in-one docker image which is ready to try out.
>
> 3. RUNNING with compose
>
> I propose to include a default docker-compose + config file in the image. To
> start a multi-node pseudo cluster it will be enough to execute:
>
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
>
> That's all, and you have a multi-(pseudo)node ozone cluster which can be
> scaled up and down (see the sketch after section 4 below).
>
> 4. k8s
>
> Later we can also provide k8s resource files with the same approach:
>
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}
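> On the scaling mentioned in section 3: the datanode count of the pseudo
> cluster could be adjusted with docker-compose (a sketch, assuming the
> bundled docker-compose.yaml names the datanode service "datanode"):
>
> {code}
> docker-compose up -d --scale datanode=3   # grow the pseudo cluster
> docker-compose up -d --scale datanode=1   # shrink it back
> {code}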