Hello,

I'm planning to use Elasticsearch with Logstash for log management and 
search. However, one thing I'm unable to find an answer for is how to make 
sure that data cannot be modified once it reaches Elasticsearch.

"action.destructive_requires_name" prevents deleting all indices at once, 
but they can still be deleted. Are there any options to prevent deleting 
indices altogether? 
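For context, this is the setting I'm referring to; a minimal sketch of how I'd enable it (static config, assuming it goes in elasticsearch.yml):

```yaml
# elasticsearch.yml
# Require explicit index names for destructive actions,
# so "DELETE /_all" and wildcard deletes like "DELETE /logstash-*" are rejected.
action.destructive_requires_name: true
```

This only blocks wildcard/_all deletion, though; an explicitly named index can still be deleted, which is exactly what I'd like to prevent.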

And on the document level, is it possible to disable 'delete' *AND* 
'update' operations without making the entire index read-only (i.e. 
'index.blocks.read_only')?
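For clarity, this is the read-only block I'd like to avoid relying on, since it also freezes legitimate indexing. A sketch of how I'd set it (the index name and host are just examples for a default local cluster):

```shell
# Mark a single day's Logstash index as read-only via the
# dynamic index settings API (example index name)
curl -XPUT 'http://localhost:9200/logstash-2015.01.01/_settings' -d '{
  "index.blocks.read_only": true
}'
```

What I'm after instead is something that still allows new documents to be indexed but rejects updates and deletes of existing ones.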

Lastly, does setting 'index.blocks.read_only' guarantee that the index 
files on disk are not changed, so that they can be monitored with a file 
integrity monitoring solution? Many regulatory and compliance bodies have 
requirements for ensuring log integrity.

Thanks

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/dfc73db4-18ac-405e-8929-68be32b01a6c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.