Great, Mike!
Thank you both for the suggestions. I'll try to implement the ideas.

A little bit more about the scenario:

   - We are using Fluo 1.2
   - Spark is at version 1.6 (unfortunately), with JDK 1.8
   - and Accumulo is at version 1.7.

When we try fewer messages, everything goes well.
I'll let you know as soon as I have any results.
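
For reference, this is roughly the shape of our Spark job, following the first
strategy from the blog post. It is a simplified sketch, not our exact code:
RowLoader stands in for our real Loader that writes the 62 columns, and the
Zookeeper string, application name, and loader thread/queue settings are just
placeholders for the values we are actually tuning.

    import java.util.Iterator;

    import org.apache.fluo.api.client.FluoClient;
    import org.apache.fluo.api.client.FluoFactory;
    import org.apache.fluo.api.client.Loader;
    import org.apache.fluo.api.client.LoaderExecutor;
    import org.apache.fluo.api.client.TransactionBase;
    import org.apache.fluo.api.config.FluoConfiguration;
    import org.apache.spark.api.java.JavaRDD;

    public class SparkFluoLoad {

      // Stand-in for our real Loader that writes the 62 columns of one input line.
      static class RowLoader implements Loader {
        private final String line;

        RowLoader(String line) {
          this.line = line;
        }

        @Override
        public void load(TransactionBase tx, Loader.Context context) {
          // tx.set(row, column, value) calls for each of the 62 columns go here.
        }
      }

      public static void load(JavaRDD<String> lines) {
        lines.foreachPartition((Iterator<String> partition) -> {
          FluoConfiguration conf = new FluoConfiguration();
          conf.setInstanceZookeepers("zkhost/fluo"); // placeholder connection info
          conf.setApplicationName("ingest");         // placeholder application name
          conf.setLoaderThreads(10);                 // the knobs we have been adjusting
          conf.setLoaderQueueSize(50);

          // One client and executor per partition; close() waits for queued loaders.
          try (FluoClient client = FluoFactory.newClient(conf);
              LoaderExecutor le = client.newLoaderExecutor()) {
            while (partition.hasNext()) {
              le.execute(new RowLoader(partition.next()));
            }
          }
        });
      }
    }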
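
For the thread dumps, we will start by running jstack against the Spark driver
and executor PIDs a few times, as Keith suggested. Where attaching to an
executor is awkward, something like this (plain java.lang.management, nothing
Fluo-specific, just a rough sketch) should give roughly the same stack
information from inside the JVM:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumps {

      // Prints a jstack-like dump of all threads in the current JVM to stderr.
      // We would call this periodically from the Spark job once loading stalls.
      public static void dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds(), Integer.MAX_VALUE)) {
          if (info == null) {
            continue; // thread exited between the two calls
          }
          System.err.println("\"" + info.getThreadName() + "\" " + info.getThreadState());
          for (StackTraceElement frame : info.getStackTrace()) {
            System.err.println("    at " + frame);
          }
          System.err.println();
        }
      }
    }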

Alan Camillo
*BlueShift* | IT Director
Cel.: +55 11 98283-6358
Tel.: +55 11 4605-5082

2018-03-13 15:04 GMT-03:00 Mike Walch <mwa...@apache.org>:

> I opened a PR to add some troubleshooting docs to the website.
>
> https://github.com/apache/fluo-website/pull/142
>
> On Tue, Mar 13, 2018 at 10:59 AM, Keith Turner <ke...@deenlo.com> wrote:
>
> > On Tue, Mar 13, 2018 at 7:11 AM, Alan Camillo <a...@blueshift.com.br>
> > wrote:
> > > Hey fellas!
> > > Sorry to demand so much of you, but we are really trying to get Fluo
> > > working here and we are facing some issues.
> > >
> > > Recently we decided to use Apache Spark to start the process of
> > > ingesting 300 million rows with 62 columns each.
> > >
> > > We studied
> > > https://fluo.apache.org/blog/2016/12/22/spark-load/ carefully and
> > > decided to implement the first strategy described: executing load
> > > transactions in Spark.
> > >
> > > That way we could reuse the code we built for the application
> > > transactions. But it is not going well: Fluo stops inserting after a
> > > while and we are not able to tell why.
> > > We tried adjusting the loader queue and size to see what happens, but
> > > nothing really helps.
> > > I need help debugging Fluo and understanding what's going on. Can
> > > someone point me in a direction?
> >
> > Can you jstack the spark process a few times and see if Fluo code is
> > stuck anywhere?
> >
> > >
> > > Thanks!
> > > Alan Camillo
> >
>
