HDFS can be replaced by other filesystem plugins (e.g., IgniteFS, S3), so the easiest approach is to write a filesystem plugin. This is not a plug-in for Spark itself but part of the Hadoop filesystem layer that Spark uses.
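Concretely, Hadoop lets you map a URI scheme to a `FileSystem` implementation class via `fs.<scheme>.impl`, and Spark forwards Hadoop properties prefixed with `spark.hadoop.`. A minimal sketch of what the wiring could look like, assuming a hypothetical `RedisFileSystem` class (the class name, scheme, and checkpoint path below are illustrative, not an existing library):

```
# Hypothetical: register a custom FileSystem implementation for the
# "redis://" scheme (com.example.RedisFileSystem is illustrative).
spark.hadoop.fs.redis.impl=com.example.RedisFileSystem

# Checkpointing would then target the custom scheme, e.g.:
#   streamingContext.checkpoint("redis://host:6379/spark-checkpoints")
```

The implementation class would need to extend `org.apache.hadoop.fs.FileSystem` and implement its open/create/delete/list operations against Redis; how feasible that is depends on how well Redis maps onto filesystem semantics.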
> On 13. Oct 2017, at 17:41, Anand Chandrashekar <anandchan...@gmail.com> wrote:
>
> Greetings!
>
> I would like to accomplish a custom Kafka checkpoint strategy (instead of HDFS, I would like to use Redis). Is there a strategy I can use to change this behavior? Any advice will help. Thanks!
>
> Regards,
> Anand.