Hi Venky,

The records are not skipped intentionally; after shutting down the application
we do get the tuples.
But sometimes a few tuples are missing. We have identified those missing
tuples and tested them; we do not have any condition that would drop those records.

The other issue is that it is random: the same record is not missing in
every run. In the DT console we can see that the writer operator processed
the required records, but the count does not match the data written to HDFS.

At the operator the count is 500, but in HDFS we have 450 before shutdown.
Once we shut down the application the data is flushed to HDFS, but we end up
with only around 495-500; a very few tuples are missing, though not every time.
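The gap between the operator's processed count and what is visible in HDFS is what you would expect from buffered writes: records sit in an in-memory buffer until it fills or is explicitly flushed. Below is a minimal, self-contained sketch of that effect using plain java.io (not the Apex or HDFS API; class and method names here are illustrative only):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Demonstrates why a writer's processed-tuple count can run ahead of what
// is visible in the destination: only full buffers are drained to the sink
// until an explicit flush (analogous to hflush()/close on shutdown).
public class BufferedWriteDemo {

    // Writes 500 records through a 1 KB buffer and returns
    // {recordsVisibleBeforeFlush, recordsVisibleAfterFlush}.
    static int[] demo() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stands in for the HDFS file
        BufferedOutputStream out = new BufferedOutputStream(sink, 1024);

        byte[] record = "tuple\n".getBytes();
        for (int i = 0; i < 500; i++) {
            out.write(record); // "operator processed" count reaches 500
        }
        // Before flushing, the sink only holds the buffers that filled up
        // and were drained; the tail of the data is still in memory.
        int before = sink.size() / record.length;

        out.flush(); // on shutdown, closing the stream forces this
        int after = sink.size() / record.length;
        return new int[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        int[] r = demo();
        System.out.println("visible before flush: " + r[0]);
        System.out.println("visible after flush:  " + r[1]);
    }
}
```

If the shortfall disappears after a clean flush/close, the records were never lost, only buffered; records still missing after shutdown would point elsewhere (e.g. window/checkpoint handling in the writer).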

Chiranjeevi V.

On Wed, Aug 9, 2017 at 10:01 PM, Venkatesh Kottapalli <
venkat...@datatorrent.com> wrote:

> You can try adding logs in the operator to see where your records are
> getting skipped.
> -Venky.
> > On Aug 9, 2017, at 7:08 AM, chiranjeevi vasupilli <chiru....@gmail.com>
> wrote:
> >
> > Hi Team,
> >
> > In my use case I am seeing a random issue when writing data to HDFS.
> >
> > I am using AbstractFileOutputOperator to write the data to HDFS; an
> upstream operator generates the data. In the DT console we can see the
> writer operator processed 100 tuples, but in HDFS we see only 80-90
> records. When we kill/shutdown the application we get 95-100 records.
> >
> > It is random behavior; the same tuple is not missing in every run.
> Please suggest how we can write all incoming tuples to HDFS without
> killing/shutting down the app.
> >
> > let me know if you need more information.
> >
> > Thanks
> > Chiranjeevi V

