Another approach would be to use ZooKeeper. If you have ZooKeeper
running somewhere in the cluster, you can simply create a path like
*/dynamic-list* in it and then write objects/values to it; you can even
create/access nested znodes.
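For what it's worth, the dynamic part of the question can be sketched in plain Python (no Spark here; the segment names and rows below are made up for illustration). The same pattern applies on an RDD by building a map of accumulators keyed by segment name, e.g. `{name: sc.accumulator(0) for name in segments}`, and bumping the matching entry inside `foreach`:

```python
from collections import defaultdict

# Hypothetical segment list and rows -- names are illustrative only.
SEGMENTS = ["mobile", "desktop", "tablet"]

rows = [
    "mobile user visited page",
    "desktop user visited page",
    "mobile user clicked ad",
]

# One counter per segment, created dynamically from the single list above.
# Adding a new segment means editing SEGMENTS only, not the counting code.
counters = defaultdict(int)

for row in rows:
    for segment in SEGMENTS:
        if segment in row:
            counters[segment] += 1

print(dict(counters))  # -> {'mobile': 2, 'desktop': 1}
```

If the segment list itself lives in ZooKeeper under */dynamic-list*, the driver can read it at job start and build the accumulator map from whatever it finds there, so no code change is needed when segments are added.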

Thanks
Best Regards

On Fri, Jun 5, 2015 at 7:06 PM, Cosmin Cătălin Sanda <
cosmincata...@gmail.com> wrote:

> Hi,
>
> I am trying to gather some statistics from an RDD using accumulators.
> Essentially, I am counting how many times specific segments appear in each
> row of the RDD. This works fine and I get the expected results, but the
> problem is that each time I add a new segment to look for, I have to
> explicitly create an Accumulator for it and explicitly use the Accumulator
> in the foreach method.
>
> Is there a way to use a dynamic list of Accumulators in Spark? I want to
> control the segments from a single place in the code and have the
> accumulators dynamically created and used based on that metrics list.
>
> BR,
> ------------------------------------
> *Cosmin Catalin SANDA*
> Software Systems Engineer
> Phone: +45.27.30.60.35
>
>
