alborotogarcia edited a comment on issue #12087: URL: https://github.com/apache/druid/issues/12087#issuecomment-1000832314
Alright, it finally got solved. Looking at the middle-manager logs, it turns out that `druid-s3-extensions` needs to be configured properly even when it is not used for deep storage (FWIW, I was missing the AWS region, since I was trying to use MinIO S3 buckets, although in the end I stuck with HDFS deep storage). Also mind the required write permissions on HDFS/S3.

Question @asdf2014: even though all the segments are listed on HDFS, not all of them remain available in Druid after a while. How can I keep segments available for a given period (e.g. a 24h window) when ingesting from Kafka? Should I read them back from HDFS as another datasource?
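Not part of the original comment, but for anyone hitting the same MinIO problem: the fix usually amounts to pointing `druid-s3-extensions` at the MinIO endpoint with path-style access, and giving the AWS SDK a region (MinIO ignores it, but the SDK refuses to start without one). A sketch of the relevant settings, assuming MinIO is reachable at `minio:9000` (the host, credentials, and extension list are illustrative, not from the original issue):

```properties
# common.runtime.properties — load the S3 extension even if deep storage is HDFS
druid.extensions.loadList=["druid-s3-extensions", "druid-hdfs-storage"]

# Point the S3 client at MinIO instead of AWS
druid.s3.endpoint.url=http://minio:9000
druid.s3.protocol=http
druid.s3.enablePathStyleAccess=true
druid.s3.accessKey=YOUR_ACCESS_KEY
druid.s3.secretKey=YOUR_SECRET_KEY
```

The region can be supplied as a JVM system property in each service's `jvm.config`, e.g. `-Daws.region=us-east-1` (any valid region string works with MinIO).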
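On the retention question: in Druid, how long segments stay loaded on Historicals (and therefore queryable) is governed by the Coordinator's retention rules, not by what sits in deep storage, so reading the segments back from HDFS as a second datasource should not be necessary. A hedged sketch of a rule set that keeps the last 24 hours loaded and drops anything older, configurable per datasource in the web console or via `POST /druid/coordinator/v1/rules/<datasource>` (replicant count and tier name are illustrative):

```json
[
  {
    "type": "loadByPeriod",
    "period": "P1D",
    "includeFuture": true,
    "tieredReplicants": { "_default_tier": 2 }
  },
  { "type": "dropForever" }
]
```

Note that `dropForever` only unloads segments from Historicals; the segment files stay in deep storage until a kill task removes them.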
