Is the Apex gateway installed on an edge node of the MapR cluster? Most likely this is a compatibility issue between the jars installed on the gateway and those on the actual cluster. The root cause of the application failure is the following exception in the Apex application master:

java.lang.NoSuchFieldError: IS_WINDOWS
    at org.apache.hadoop.security.login.GenericOSLoginModule.getOSLoginModuleName(GenericOSLoginModule.java:56)
    at org.apache.hadoop.security.login.GenericOSLoginModule.<clinit>(GenericOSLoginModule.java:64)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:710)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:724)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:688)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:572)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:447)
    at com.datatorrent.common.util.FSStorageAgent.<init>(FSStorageAgent.java:84)
    at com.datatorrent.common.util.AsyncFSStorageAgent.<init>(AsyncFSStorageAgent.java:64)
    at com.datatorrent.common.util.AsyncFSStorageAgent.readResolve(AsyncFSStorageAgent.java:151)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1104)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1810)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at java.util.HashMap.readObject(HashMap.java:1396)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at com.datatorrent.stram.plan.logical.LogicalPlan.read(LogicalPlan.java:2324)
    at com.datatorrent.stram.StreamingAppMasterService.serviceInit(StreamingAppMasterService.java:541)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at com.datatorrent.stram.StreamingAppMaster.main(StreamingAppMaster.java:102)
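A NoSuchFieldError during class initialization means the failing class was compiled against a version of another class that declares the field (here, IS_WINDOWS), while the version actually loaded at runtime does not. One way to confirm which version of a class ends up on the classpath is a quick reflection probe. The sketch below is illustrative only: the hasField helper is hypothetical, and java.io.File is used as a stand-in target (a real check would probe the relevant Hadoop class on the cluster's classpath):

```java
import java.lang.reflect.Field;

public class FieldCheck {
    // Hypothetical helper: true if the class actually loaded at runtime
    // declares a public field with the given name.
    static boolean hasField(Class<?> cls, String name) {
        try {
            cls.getField(name);
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.io.File declares a public field "separator" ...
        System.out.println(hasField(java.io.File.class, "separator"));
        // ... but nothing named "IS_WINDOWS", so a caller compiled against
        // a class version that had such a field would fail at runtime.
        System.out.println(hasField(java.io.File.class, "IS_WINDOWS"));
    }
}
```

Running this kind of probe inside the failing JVM (or logging `cls.getProtectionDomain().getCodeSource()`) also reveals which jar the mismatched class was loaded from.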


Vlad

On 9/20/16 23:23, Bhupesh Chawda wrote:
Hi Jaspal,

I think the issue might be related to using mapr specific jars in the application package.

Can you check whether you are able to run any other app on the cluster, for example the pi demo in Malhar?

Also check the jars included in your .apa file using "mvn dependency:tree" and verify that no Hadoop jars have been bundled.
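A common way to keep Hadoop jars out of the bundled package is to mark them as "provided" in the pom.xml, so Maven compiles against them but does not package them. A hedged sketch, assuming hadoop-common is the artifact showing up in the tree (adjust the coordinates to whatever "mvn dependency:tree -Dincludes=org.apache.hadoop" actually reports for your project):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <!-- version shown is a placeholder; use the one your cluster runs -->
  <version>2.7.0</version>
  <!-- "provided": available at compile time, excluded from the .apa;
       the cluster supplies its own (MapR-patched) copy at runtime -->
  <scope>provided</scope>
</dependency>
```

With mismatched copies excluded, the application master picks up the cluster's own Hadoop classes instead of stale ones from the package.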

~ Bhupesh


On Wed, Sep 21, 2016 at 1:45 AM, Jaspal Singh <jaspal.singh1...@gmail.com> wrote:

    Hi Team,

    We are trying to build a DT application to read messages from MapR
    Streams and write them to HDFS. It is the same application we used
    with Kafka as the source, with the following changes:

    1. Added the MapR example code jar to the pom.xml.
    2. Changed the topic name to the fully qualified path of the stream
    plus the topic name.
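    (For context, a fully qualified MapR Streams topic combines the stream's
    filesystem path and the topic name, separated by a colon. A hedged sketch
    of what the second change might look like in a dt-site properties file;
    the operator name, property name, and paths here are purely hypothetical,
    so substitute your operator's actual topic property:

```xml
<property>
  <!-- hypothetical operator/property names for illustration -->
  <name>dt.operator.MessageReader.prop.topic</name>
  <!-- MapR Streams form: /path/to/stream:topic-name -->
  <value>/user/jaspal/streams/mystream:mytopic</value>
</property>
```

    The same /stream:topic string would go wherever the Kafka version of the
    application previously took a bare topic name.)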

    After packaging the code into an .apa file, we are able to launch the
    application, but it remains in ACCEPTED state and then goes to
    FAILED state after a while. We are having a hard time figuring
    out the issue with the project, so we need some assistance from
    anyone who has encountered a similar issue.

    dtgateway.log and pom.xml attached.


    Really appreciate any inputs!


    Thanks
    Jaspal


