Ok, versions seem compatible. Can you do an ls on the target/deps/
directory and send the list of included jars?
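For reference, a minimal sketch of that check (paths assume the standard app package build layout under target/; the mkdir is only so the listing works even on a fresh checkout):

```shell
# List the jars bundled into the app package under target/deps/
# and flag any hadoop jars, which should not be bundled.
mkdir -p target/deps            # no-op if the build already created it
ls target/deps/
ls target/deps/ | grep -i 'hadoop' || echo "no hadoop jars bundled"
```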

Regards,
Ashwin.

On Wed, Oct 19, 2016 at 1:12 PM, Bandaru, Srinivas <
srinivas.band...@optum.com> wrote:

> Mapr RD version:
>
> <groupId>org.apache.hbase</groupId>
> <artifactId>hbase-client</artifactId>
> <version>1.1.1-mapr-1602</version>
>
> Hadoop version : hadoop-2.7.0 (Mapr distribution 5.1)
>
>
>
> Thanks,
>
> Srinivas
>
>
>
> *From:* Ashwin Chandra Putta [mailto:ashwinchand...@gmail.com]
> *Sent:* Wednesday, October 19, 2016 3:05 PM
>
> *To:* users@apex.apache.org
> *Subject:* Re: Retry functionality in Datatorrent
>
>
>
> What is your HBase client version? Also, what is the version of hadoop on
> your cluster?
>
>
>
> Make sure that the HBase client version is compatible with the version of
> hadoop you are running.
>
>
>
> Regards,
>
> Ashwin.
>
>
>
> On Wed, Oct 19, 2016 at 12:54 PM, Bandaru, Srinivas <
> srinivas.band...@optum.com> wrote:
>
> We have an HBase client dependency and do not have any Hadoop dependencies
> in the project.
>
>
>
> *From:* Ashwin Chandra Putta [mailto:ashwinchand...@gmail.com]
> *Sent:* Wednesday, October 19, 2016 2:40 PM
> *To:* users@apex.apache.org
> *Subject:* Re: Retry functionality in Datatorrent
>
>
>
> Jaspal,
>
>
>
> Ensure that the app package does not include any hadoop jars as
> dependencies; check target/deps/.
>
>
>
> You can exclude those dependencies in the maven pom file.
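> For example, excluding the transitive hadoop artifacts from the HBase client
> dependency might look like this (a sketch; wildcard exclusions need Maven
> 3.2.1+, and the version shown is the one from earlier in this thread):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>1.1.1-mapr-1602</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```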
>
>
>
> Regards,
>
> Ashwin.
>
>
>
> On Wed, Oct 19, 2016 at 12:09 PM, Jaspal Singh <jaspal.singh1...@gmail.com>
> wrote:
>
> Hi Ashwin, that's the correct flow!
>
> Team - another thing: we are receiving the below exception in the logs, and
> the application is FAILING when we try to make an update to an HBase table.
> Any idea what the reason could be?
>
> Container: container_e32_1476503307399_0172_02_000001 on dbslt0079.uhc.com_8091
> =================================================================================
>
> LogType:AppMaster.stderr
>
> Log Upload Time:Wed Oct 19 11:35:36 -0500 2016
>
> LogLength:1246
>
> Log Contents:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/mapr/tmp/hadoop-mapr/nm-local-dir/usercache/mapr/appcache/application_1476503307399_0172/filecache/26/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid ContainerId: container_e32_1476503307399_0172_02_000001
>         at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
>         at com.datatorrent.stram.StreamingAppMaster.main(StreamingAppMaster.java:91)
> Caused by: java.lang.NumberFormatException: For input string: "e32"
>         at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>         at java.lang.Long.parseLong(Long.java:589)
>         at java.lang.Long.parseLong(Long.java:631)
>         at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>         at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>         ... 1 more
>
> End of LogType:AppMaster.stderr
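> The parse failure above can be reproduced in isolation: Hadoop 2.6 added an
> epoch token (the "e32" here) to container ids, and older ContainerId parsing
> code splits on "_" and expects that position to hold a numeric cluster
> timestamp. A sketch of the failure mode (parseClusterTimestamp is a
> hypothetical simplification, not the actual ConverterUtils source):

```java
// Sketch of why a pre-Hadoop-2.6-style ContainerId parser rejects
// epoch-bearing ids such as container_e32_1476503307399_0172_02_000001.
public class ContainerIdParseDemo {

    // Pre-2.6 layout assumption: container_<clusterTs>_<appId>_<attempt>_<id>,
    // so the token after "container" is parsed as a numeric cluster timestamp.
    static long parseClusterTimestamp(String containerId) {
        String[] parts = containerId.split("_");
        return Long.parseLong(parts[1]);
    }

    public static void main(String[] args) {
        // Hadoop 2.6+ inserts an epoch token: container_e32_<clusterTs>_...
        try {
            parseClusterTimestamp("container_e32_1476503307399_0172_02_000001");
        } catch (NumberFormatException e) {
            // Same message as in the AppMaster log above.
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

> Which is consistent with the advice above: the hadoop jars on the classpath
> must match the 2.6+/MapR 5.1 cluster, or better, not be bundled at all.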
>
>
>
> Thanks!!
>
>
>
> On Tue, Oct 18, 2016 at 7:47 PM, Ashwin Chandra Putta <
> ashwinchand...@gmail.com> wrote:
>
> Jaspal,
>
>
>
> If I understand this correctly, the flow looks like this.
>
>
>
> REST API --> Kafka topic --> Apex Kafka Input Operator.
>
>
>
> If this is the flow, then the Kafka input operator should be reading
> messages from Kafka without losing them; no retry attempts are necessary.
>
>
> Let me know if my understanding of the flow is incorrect.
>
>
>
> Regards,
>
> Ashwin.
>
>
>
> On Tue, Oct 18, 2016 at 2:49 PM, Jaspal Singh <jaspal.singh1...@gmail.com>
> wrote:
>
> Hi Team,
>
> We are pushing messages from Kafka to a Datatorrent application using a REST
> API service as a producer. If, due to some issue with the service, a message
> couldn't be pushed/processed, we want to retry it n times before it is
> dropped.
>
> Is there any retry functionality built into Datatorrent, or do we have to
> write some code logic for this ourselves?
>
> Thanks!!
>
> --
>
>
>
> Regards,
>
> Ashwin.
>
> --
>
>
>
> Regards,
>
> Ashwin.
>
>
> This e-mail, including attachments, may include confidential and/or
> proprietary information, and may be used only by the person or entity
> to which it is addressed. If the reader of this e-mail is not the intended
> recipient or his or her authorized agent, the reader is hereby notified
> that any dissemination, distribution or copying of this e-mail is
> prohibited. If you have received this e-mail in error, please notify the
> sender by replying to this message and delete this e-mail immediately.
>
> --
>
>
>
> Regards,
>
> Ashwin.
>



-- 

Regards,
Ashwin.
