There are several approaches. I would check the HDFS trash folder of the user
who deleted the file. Expiration of items in trash is controlled by the
fs.trash.interval property in core-site.xml.
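
A minimal sketch of the relevant setting (the value shown here is an example, not a recommendation):

```xml
<!-- core-site.xml: a non-zero fs.trash.interval enables the trash feature;
     deleted files are moved to the user's .Trash directory and kept for
     this many minutes before being permanently removed. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value> <!-- minutes; 1440 = 24 hours -->
</property>
```

While the interval has not expired, a deleted file can typically be restored by moving it back out of the user's trash, e.g. with `hdfs dfs -mv` from under `/user/<username>/.Trash/Current/`.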
Artem Ervits
On Feb 26, 2015 1:31 PM, "Krish Donald" <[email protected]> wrote:

> Hi,
>
> As per my understanding, we don't take a backup of a Hadoop cluster, as the
> size is generally very large.
>
> However, if somebody has dropped a table by mistake, how should we recover
> the data?
>
> How do we take a backup of each individual Hadoop ecosystem component?
>
> Thanks
> Krish
>