Hi,

Can you try launching the application with the apex CLI instead of the UI?
That might help determine whether the problem is in the Hadoop install or
in the gateway:

http://apex.apache.org/docs/apex/apex_cli/
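
For example, something like this (assuming the `apex` script from your Apex
build is on the PATH, or use its full path; the .apa path below is just a
placeholder):

```
# start the Apex CLI (it picks up the cluster settings via the hadoop binary)
apex

# at the prompt, launch an app package directly:
apex> launch /path/to/your-app.apa
```

If the launch succeeds from the CLI, Hadoop and the app package are fine and
the problem is likely on the gateway side.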

Thanks,
Thomas


On Mon, Sep 26, 2016 at 5:04 PM, David Yan <[email protected]> wrote:

> Added back users@ to the thread.
>
> On Mon, Sep 26, 2016 at 5:03 PM, David Yan <[email protected]> wrote:
>
>> Can you try one of the released hadoop versions, like 2.7.3 or 2.7.2 or
>> 2.6.4?
>> We will check that sha commit on our side as well.
>>
>> David
>>
>> On Mon, Sep 26, 2016 at 4:40 PM, <[email protected]> wrote:
>>
>>> On 2016-09-26 16:02, David Yan wrote:
>>>
>>>> From your first email, you said you're using hadoop 2.7.4, but that
>>>> hadoop version has not been released.
>>>>
>>>
>>>> Did you build it yourself? If so, can you provide a github tag or
>>>> something similar so we can build that ourselves so that we can
>>>> reproduce the issue?
>>>>
>>>
>>> Yes.  Built from the apache git repo, branch branch-2.7, date Aug 29; sha is:
>>>
>>> 32a86f199cd8e7f32c264af55e3459e4b4751963
>>>
>>> I changed 1 file:
>>>
>>> diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
>>> index 1765363..c35df02 100644
>>> --- a/hadoop-project/pom.xml
>>> +++ b/hadoop-project/pom.xml
>>> @@ -74,3 +74,3 @@
>>>      <!-- define the protobuf JAR version -->
>>> -    <protobuf.version>2.5.0</protobuf.version>
>>> +    <protobuf.version>2.6.1</protobuf.version>
>>>      <protoc.path>${env.HADOOP_PROTOC_PATH}</protoc.path>
>>> @@ -95,3 +95,3 @@
>>>      <!-- Plugin versions and config -->
>>> -    <maven-surefire-plugin.argLine>-Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError</maven-surefire-plugin.argLine>
>>> +    <maven-surefire-plugin.argLine>-Xmx4096m -XX:+HeapDumpOnOutOfMemoryError</maven-surefire-plugin.argLine>
>>>      <maven-surefire-plugin.version>2.17</maven-surefire-plugin.version>
>>>
>>>
>>>> Also, what Apex or DT RTS version are you using?
>>>>
>>>
>>> 3.4.0, also built from Apache source, using tar file:
>>> apex-3.4.0-source-release.tar.gz
>>>
>>> Likewise for malhar, using apache-apex-malhar-3.4.0-source-release.tar.gz
>>>
>>> I had to change one thing to get it to build.
>>> This disables the maven check against deploying from a snapshot.
>>>
>>> apache-apex-malhar-3.4.0> diff -p -C1 contrib/pom.xml contrib/pom.xml.orig
>>> *** contrib/pom.xml     Tue Sep 13 12:02:27 2016
>>> --- contrib/pom.xml.orig        Fri May 20 00:19:42 2016
>>> ***************
>>> *** 199,201 ****
>>>        </plugin>
>>> - <!-- SOFTIRON patch
>>>        <plugin>
>>> --- 199,200 ----
>>> ***************
>>> *** 214,216 ****
>>>        </plugin>
>>> ! -->
>>>         <plugin>
>>> --- 213,215 ----
>>>        </plugin>
>>> !
>>>         <plugin>
>>>
>>>
>>> Note: I also tried malhar v3.5 on top of apex 3.4, and got the same
>>> results.
>>>
>>> I think that covers everything.
>>>
>>> thanks again,
>>> -david
>>>
>>>
>>>
>>>
>>>> Thanks!
>>>>
>>>> David
>>>>
>>>> On Mon, Sep 26, 2016 at 3:15 PM, <[email protected]> wrote:
>>>>
>>>> On 2016-09-26 14:52, David Yan wrote:
>>>>>
>>>>> Also, one of the first steps in the installation wizard is to
>>>>>> enter
>>>>>> the hadoop location (the same screen as the DFS directory).
>>>>>> Can you please double check the hadoop location points to the
>>>>>> correct
>>>>>> hadoop binary in your system?
>>>>>>
>>>>>
>>>>> That, I've already done.  It's definitely correct
>>>>> (/opt/hadoop/bin/hadoop)
>>>>>
>>>>> -dbs
>>>>>
>>>>>
>>>>> David
>>>>>
>>>>> On Mon, Sep 26, 2016 at 2:45 PM, David Yan <[email protected]>
>>>>> wrote:
>>>>>
>>>>> Do you see any exception stacktrace in the log when this error
>>>>> occurred:
>>>>>
>>>>> Wrong FS: hdfs://namenode:9000/user/dtadmin/datatorrent, expected:
>>>>> file:///
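>>>>>
>>>>> (That "expected: file:///" usually means the Hadoop configuration the
>>>>> gateway loaded still has fs.defaultFS at its built-in default of
>>>>> file:///. It may be worth double-checking that the core-site.xml the
>>>>> gateway reads contains something like the following, with the namenode
>>>>> host and port from your cluster:)
>>>>>
>>>>> ```xml
>>>>> <property>
>>>>>   <name>fs.defaultFS</name>
>>>>>   <value>hdfs://namenode:9000</value>
>>>>> </property>
>>>>> ```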
>>>>>
>>>>> David
>>>>>
>>>>> On Mon, Sep 26, 2016 at 2:39 PM, <[email protected]> wrote:
>>>>> From    David Yan <[email protected]>
>>>>> Date    Sat 00:12
>>>>>
>>>>> Can you please provide any exception stacktrace in the dtgateway.log
>>>>> file when that happens?
>>>>>
>>>>> I reran the dtgateway installation wizard, which failed (same as
>>>>> before) with error msg:
>>>>>
>>>>> | DFS directory cannot be written to with error message "Mkdirs
>>>>> failed to create /user/dtadmin/datatorrent (exists=false,
>>>>> cwd=file:/opt/datatorrent/releases/3.4.0)"
>>>>>
>>>>> The corresponding exception in dtgateway.log is:
>>>>>
>>>>> | 2016-09-26 13:43:54,767 ERROR com.datatorrent.gateway.I: DFS Directory cannot be written to with exception:
>>>>> | java.io.IOException: Mkdirs failed to create /user/dtadmin/datatorrent (exists=false, cwd=file:/opt/datatorrent/releases/3.4.0)
>>>>> |     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:455)
>>>>> |     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>>>>> |     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
>>>>> |     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
>>>>> |     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789)
>>>>> |     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:778)
>>>>> |     at com.datatorrent.stram.client.FSAgent.createFile(FSAgent.java:77)
>>>>> |     at com.datatorrent.gateway.I.h(gc:324)
>>>>> |     at com.datatorrent.gateway.I.h(gc:284)
>>>>> |     at com.datatorrent.gateway.resources.ws.v2.ConfigResource.h(fc:136)
>>>>> |     at com.datatorrent.gateway.resources.ws.v2.ConfigResource.h(fc:171)
>>>>> |     at com.datatorrent.gateway.resources.ws.v2.ConfigResource.setConfigProperty(fc:34)
>>>>> |     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> |     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>> |     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> |     at java.lang.reflect.Method.invoke(Method.java:498)
>>>>> |     at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>>>>> |     at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>>>>> |     at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>>>>> |     at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>>>>> |     at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>>>>> |     at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>>>>> |     at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>>>>> |     at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>>>>> |     at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>>>>> |     at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
>>>>> |     at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
>>>>> |     at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
>>>>> |     at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669)
>>>>> |     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:457)
>>>>> |     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
>>>>> |     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>>>>> |     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>>>>> |     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>>>>> |     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>>>>> |     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>>>>> |     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>>>>> |     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>>>>> |     at org.eclipse.jetty.server.Server.handle(Server.java:368)
>>>>> |     at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>>>>> |     at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
>>>>> |     at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
>>>>> |     at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
>>>>> |     at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
>>>>> |     at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
>>>>> |     at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
>>>>> |     at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
>>>>> |     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>>>>> |     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>>>>> |     at java.lang.Thread.run(Thread.java:745)
>>>>> | 2016-09-26 13:43:54,772 INFO com.datatorrent.gateway.resources.ws.v2.WSResource: Caught exception in processing web service: com.datatorrent.stram.client.DTConfiguration$ConfigException: DFS directory cannot be written to with error message "Mkdirs failed to create /user/dtadmin/datatorrent (exists=false, cwd=file:/opt/datatorrent/releases/3.4.0)"
>>>>>
>>>>> Also, when I shutdown the gateway, I get a different exception in
>>>>> dtgateway.log:
>>>>>
>>>>> | 2016-09-26 13:40:44,366 INFO com.datatorrent.gateway.DTGateway: Shutting down
>>>>> | 2016-09-26 13:40:44,427 ERROR com.datatorrent.gateway.I: DFS Directory cannot be written to with exception:
>>>>> | java.io.IOException: Filesystem closed
>>>>> |     at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>>>>> |     at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2041)
>>>>> |     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
>>>>> |     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
>>>>> |     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>>>> |     at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
>>>>> |     at com.datatorrent.stram.client.FSAgent.deleteFile(FSAgent.java:94)
>>>>> |     at com.datatorrent.gateway.I.h(gc:30)
>>>>> |     at com.datatorrent.gateway.l.K(pc:158)
>>>>> |     at com.datatorrent.gateway.l.h(pc:65)
>>>>> |     at com.datatorrent.gateway.K.run(yc:156)
>>>>> |     at java.lang.Thread.run(Thread.java:745)
>>>>> | 2016-09-26 13:40:44,444 INFO com.datatorrent.gateway.DTGateway: Shutdown complete
>>>>>
>>>>> Thanks much.
>>>>> -david
>>>>>
>>>>> On Tue, Sep 13, 2016 at 4:06 PM, <[email protected]> wrote:
>>>>>
>>>>> I have the dtgateway running (community edition) with
>>>>> apex/malhar 3.4 and hadoop 2.7.4 on SuSE SLES 12.1 on an ARM64
>>>>> server (aarch64).
>>>>>
>>>>> In the Installation Wizard, I set the DFS location to:
>>>>>
>>>>> hdfs://namenode:9000/user/dtadmin/datatorrent
>>>>>
>>>>> and the gateway saves the hadoop configuration and restarts
>>>>> successfully, but  I get an error:
>>>>>
>>>>> Wrong FS: hdfs://namenode:9000/user/dtadmin/datatorrent,
>>>>> expected: file:///
>>>>>
>>>>> the 'service dtgateway status' command says it's running.
>>>>> The log file has the same error as above, but nothing else
>>>>> useful.
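>>>>>
>>>>> (Aside: as far as I can tell, that "Wrong FS ... expected: file:///"
>>>>> check is just a URI-scheme comparison inside Hadoop's
>>>>> FileSystem.checkPath(), meaning the gateway ended up holding a local
>>>>> FileSystem instead of an HDFS one. A toy sketch of the comparison, not
>>>>> the actual Hadoop code:)

```java
import java.net.URI;

public class WrongFsSketch {
    // A FileSystem rejects a Path whose URI scheme differs from its own;
    // "expected: file:///" means the gateway's Configuration resolved
    // fs.defaultFS to the default file:///, i.e. a LocalFileSystem.
    static boolean schemeMatches(String fsUri, String pathUri) {
        return URI.create(fsUri).getScheme().equals(URI.create(pathUri).getScheme());
    }

    public static void main(String[] args) {
        // the mismatch reported above
        System.out.println(schemeMatches("file:///",
                "hdfs://namenode:9000/user/dtadmin/datatorrent")); // false
    }
}
```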
>>>>>
>>>>> Is there a manual way to move the demo apps into the gateway?
>>>>>
>>>>> If I ignore this error and try to upload or import a demo app,
>>>>> I get the same error.
>>>>>
>>>>> What's odd is that the gateway does write into the HDFS
>>>>> directory:
>>>>>
>>>>> Found 6 items
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/appPackages
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/apps
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/conf
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/dashboards
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/licenses
>>>>> drwxrwxrwt   - dtadmin hadoop          0 2016-09-13 15:35 /user/dtadmin/datatorrent/systemAlerts
>>>>>
>>>>> I've tried playing with different ways to configure the DFS
>>>>> location, but nothing works.
>>>>> If I use an actual local filesystem directory, I get a
>>>>> different error from the installation wizard (Non-DFS file system
>>>>> is used: org.apache.hadoop.fs.LocalFileSystem), which makes sense.  But
>>>>> if I ignore this error, the app wizard successfully writes packages
>>>>> into the local directory, but I can't launch them (no surprise
>>>>> there).
>>>>>
>>>>> Anybody have any ideas what I can do?
>>>>>
>>>>> thanks much.
>>>>> -david
>>>>>
>>>>
>>>
>>
>
