JulianJaffePinterest commented on issue #9780:
URL: https://github.com/apache/druid/issues/9780#issuecomment-776613079


   I cleaned up the docs a little more and added a few more tests. I don't have 
access to Azure or GCS blob storage, and not all of the classes the constructors 
require are serializable, so I haven't implemented working default segment 
writer registration functions for those providers. While I do have access to 
S3, I use custom SegmentPushers and SegmentKillers, so the same caveats apply 
to the shipped S3 segment writer functions. The local and HDFS deep storage 
segment writers work out of the box, and I'm hopeful that people who use the 
various cloud providers' blob storage for deep storage will contribute working 
default implementations as they build them 🤞.
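   For context on the serializability caveat: Spark serializes closures before shipping them to executors, so any default segment writer function has to be `java.io.Serializable`. A minimal sketch of the constraint (the `SegmentWriterFn` interface and `serialize` helper here are hypothetical illustrations, not the connector's actual API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

public class SerializableWriterDemo {
    // Hypothetical shape of a "default segment writer function": Spark
    // serializes closures before shipping them to executors, so the
    // function type must extend java.io.Serializable (a plain
    // java.util.function.Function is not Serializable).
    interface SegmentWriterFn extends Function<String, String>, Serializable {}

    // Java serialization round-trip: throws NotSerializableException if
    // the object graph contains anything non-serializable, which is the
    // failure mode when a constructor argument isn't serializable.
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // This lambda captures nothing non-serializable, so it round-trips.
        SegmentWriterFn writer = segmentPath -> "pushed:" + segmentPath;
        byte[] bytes = serialize(writer);
        System.out.println("serializable=" + (bytes.length > 0));
    }
}
```

   A real implementation would hit the problem the moment the lambda captured, say, a non-serializable cloud storage client built by a constructor, which is exactly why the Azure/GCS defaults were left unimplemented.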
   
   With the admittedly large exception of some of the deep storage writers, this 
is pretty much done. I think this process has demonstrated fairly conclusively 
that there isn't sufficient community support for maintaining Spark connectors 
as part of mainline Druid, so I'll leave this issue open, but the next step will 
likely be to move these connectors to a standalone repository and ask for a 
pointer to be added to the Druid docs.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


