Glad to hear the exception was resolved!

Thanks for reporting the results.

On Thu, Jun 2, 2016 at 12:58 PM, pradeepbill <[email protected]> wrote:

> OK, I can confirm that I don't see the exception below after using the 0.6.1
> jars. Thanks again, Bryan. I don't know what will happen when I get a burst
> of data (yet to hit that), but with the incubating jars I could not process
> anything at all because of the exception below.
>
>
>  java.io.NotSerializableException:
> > org.apache.nifi.spark.NiFiReceiver$ReceiveRunnable$1
> > Serialization stack:
>
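[Editor's note: the `$1` suffix in `NiFiReceiver$ReceiveRunnable$1` indicates an anonymous inner class. Such classes capture a hidden reference to their enclosing instance, so if that instance is not serializable, serializing the inner object fails with exactly this exception. A minimal JDK-only sketch of the pattern; the class names here are illustrative, not NiFi's actual code:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class AnonymousClassSerialization {

    // A Runnable that is also Serializable, so instances can be shipped
    // between JVMs the way Spark ships closures to executors.
    interface SerializableRunnable extends Runnable, Serializable {}

    // Stand-in for a receiver holding live, non-serializable state.
    static class Outer {
        // Compiles to Outer$1 and keeps a hidden this$0 reference to the
        // enclosing Outer, which is not Serializable -- so serializing
        // the returned Runnable fails even though it implements Serializable.
        SerializableRunnable anonymous() {
            return new SerializableRunnable() {
                @Override public void run() { }
            };
        }

        // Static nested class: no hidden reference to Outer, serializes fine.
        static class Task implements SerializableRunnable {
            @Override public void run() { }
        }
    }

    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {  // NotSerializableException is an IOException
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes(new Outer().anonymous())); // false
        System.out.println(serializes(new Outer.Task()));        // true
    }
}
```

[The usual fix, and presumably what the 0.6.1 jars contain, is to replace the anonymous class with a static nested or top-level class that captures only serializable state.]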
> OK, can you try getting the 0.6.1 version of those jars and see if the
> problem still occurs?
>
> I want to make sure it still happens in the latest code before we change
> anything, because I know we fixed a similar serialization issue before.
>
>
> http://central.maven.org/maven2/org/apache/nifi/nifi-site-to-site-client/0.6.1/nifi-site-to-site-client-0.6.1.pom
>
> On Wednesday, June 1, 2016, pradeepbill <pradeep.bill@> wrote:
>
> > I am using the jars below (please note the respective versions). There is
> > no real default storage level, but the examples listed below use the
> > MEMORY_ONLY storage level. I had to change it because MEMORY_ONLY does not
> > always work: sometimes, when there is a spill, we may need DISK as well.
> >
> >
> >
> https://community.hortonworks.com/articles/12708/nifi-feeding-data-to-spark-streaming.html
> > https://blogs.apache.org/nifi/entry/stream_processing_nifi_and_spark
> >
> >
> >
> >
> > /home/parumalla/nifi-jars/nifi-site-to-site-client-0.0.2-incubating.jar,
> > /home/parumalla/nifi-jars/nifi-spark-receiver-0.0.2-incubating.jar,
> > /home/parumalla/nifi-jars/spark-assembly.jar,
> > /home/parumalla/nifi-jars/spark-streaming_2.10-1.2.0.jar,
> > /home/parumalla/nifi-jars/nifi-api-0.5.1.jar,
> > /home/parumalla/nifi-jars/nifi-utils-0.0.2-incubating.jar,
> > /home/parumalla/nifi-jars/nifi-client-dto-0.0.2-incubating.jar
> >
> >
> > Thanks
> > Pradeep
> >
> >
> >
> > --
> > View this message in context:
> >
> http://apache-nifi-developer-list.39713.n7.nabble.com/back-pressure-tp10801p10898.html
> > Sent from the Apache NiFi Developer List mailing list archive at
> > Nabble.com.
> >
>
>
> --
> Sent from Gmail Mobile
>
> --
> View this message in context:
> http://apache-nifi-developer-list.39713.n7.nabble.com/back-pressure-tp10801p10986.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>
