Jaspal,

Ensure that the app package does not include any hadoop jars as
dependencies. Check target/deps/.
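
The "Invalid ContainerId: container_e32_..." failure in the quoted log below is the typical symptom of this: Hadoop 2.6+ emits epoch-prefixed container IDs, and an older bundled ConverterUtils splits the ID on underscores and calls Long.parseLong on the "e32" segment. A minimal sketch of that parse failure (the class and method names here are illustrative, not Hadoop code):

```java
public class ContainerIdParseDemo {

    // Older ConverterUtils code parses each underscore-separated segment of a
    // container ID as a long. Epoch-prefixed IDs (Hadoop 2.6+) contain a
    // segment like "e32" that cannot be parsed, which surfaces as the
    // NumberFormatException in the AppMaster log.
    static String parseSegment(String segment) {
        try {
            return "parsed: " + Long.parseLong(segment);
        } catch (NumberFormatException e) {
            return "NumberFormatException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Pre-epoch segments parse fine.
        System.out.println(parseSegment("000001")); // parsed: 1
        // The epoch segment does not, matching the stack trace below.
        System.out.println(parseSegment("e32"));
    }
}
```

So the fix is not in the ID itself but in making sure the cluster's own (newer) hadoop jars are the ones on the classpath at runtime.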

You can exclude those dependencies in the Maven POM file.
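
A sketch of what such an exclusion can look like, assuming a dependency that pulls in hadoop jars transitively (the hbase-client artifact and version here are illustrative, not taken from your project; the `*` wildcard needs Maven 3.2.1+):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>1.1.1</version>
  <exclusions>
    <!-- Keep the cluster-provided hadoop jars off the app's classpath -->
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

After changing the POM, rebuild and re-check target/deps/ to confirm no hadoop jars are packaged.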

Regards,
Ashwin.

On Wed, Oct 19, 2016 at 12:09 PM, Jaspal Singh <jaspal.singh1...@gmail.com>
wrote:

> Hi Ashwin, that's the correct flow!
>
> Team - Another thing: we are receiving the below exception in the logs, and
> the application is FAILING when we try to make an update to an HBase table.
> Any idea what could be the reason?
>
> Container: container_e32_1476503307399_0172_02_000001 on
> dbslt0079.uhc.com_8091
>
> =================================================================================
>
> LogType:AppMaster.stderr
>
> Log Upload Time:Wed Oct 19 11:35:36 -0500 2016
>
> LogLength:1246
>
> Log Contents:
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in [jar:file:/opt/mapr/tmp/hadoop-mapr/nm-local-dir/
> usercache/mapr/appcache/application_1476503307399_0172/filecache/26/slf4j-
> log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-
> log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid
> ContainerId: container_e32_1476503307399_0172_02_000001
>
>         at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
> ConverterUtils.java:182)
>
>         at com.datatorrent.stram.StreamingAppMaster.main(
> StreamingAppMaster.java:91)
>
> Caused by: java.lang.NumberFormatException: For input string: "e32"
>
>         at java.lang.NumberFormatException.forInputString(
> NumberFormatException.java:65)
>
>         at java.lang.Long.parseLong(Long.java:589)
>
>         at java.lang.Long.parseLong(Long.java:631)
>
>         at org.apache.hadoop.yarn.util.ConverterUtils.
> toApplicationAttemptId(ConverterUtils.java:137)
>
>         at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(
> ConverterUtils.java:177)
>
>         ... 1 more
>
> End of LogType:AppMaster.stderr
>
>
> Thanks!!
>
> On Tue, Oct 18, 2016 at 7:47 PM, Ashwin Chandra Putta <
> ashwinchand...@gmail.com> wrote:
>
>> Jaspal,
>>
>> If I understand this correctly, the flow looks like this.
>>
>> REST API --> Kafka topic --> Apex Kafka Input Operator.
>>
>> If this is the flow, then the Kafka input operator should be reading
>> messages from Kafka without losing them. No retry attempts are
>> necessary.
>>
>> Let me know if the understanding of the flow is incorrect.
>>
>> Regards,
>> Ashwin.
>>
>> On Tue, Oct 18, 2016 at 2:49 PM, Jaspal Singh <jaspal.singh1...@gmail.com
>> > wrote:
>>
>>> Hi Team,
>>>
>>> We are pushing messages to Kafka, which the DataTorrent application
>>> consumes, using a REST API service as the producer. If, due to some issue
>>> with the service, a message couldn't be pushed/processed, we want to
>>> "retry it for n times" before it is dropped.
>>>
>>> Is there any retry functionality built into DataTorrent, or do we have
>>> to write some code logic for the same?
>>>
>>>
>>> Thanks!!
>>>
>>
>>
>>
>> --
>>
>> Regards,
>> Ashwin.
>>
>
>


-- 

Regards,
Ashwin.
