Thank you so much, guys! I used s3n and that's fixed it. I will try s3a
too. Thanks so much for that thread, Ted.
On 5 May 2016 at 14:07, Ted Yu wrote:
> Lex:
> Please also see this thread about s3n versus s3a:
> http://search-hadoop.com/m/uOzYtE1Fy22eEWfe1=Re+S3+Hadoop+FileSystems
On Wed, May 4, 2016 at 9:01 PM, Matteo Bertozzi
wrote:
> never seen that problem before, but a couple of suggestions you can try.
>
> Instead of the old s3 driver, you can use s3n or s3a if you have it
> available (those are the ones I tested),
> and instead of using hbase.rootdir use -copy-from
> ExportSnapshot -snapshot SNAPSHOT_NAME -copy-to
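Matteo's suggestion above can be sketched as a full command line. This is a
hedged example, not from the thread: the bucket name, mapper count, and the
fs.s3a credential properties are placeholders you would substitute for your
own setup:

```shell
# Export a snapshot directly to S3 over the s3a filesystem.
# "my-backup-bucket" and the mapper count are hypothetical placeholders.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot domain-aws-test \
  -copy-to s3a://my-backup-bucket/hbase \
  -mappers 16
```

Credentials for s3a are typically supplied via the Hadoop configuration
properties fs.s3a.access.key and fs.s3a.secret.key (or an instance profile
when running on EC2), rather than embedded in the URL.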
Hi all,
I'm having a couple of problems with exporting HBase snapshots to S3. I am
running HBase version 1.2.0.
I have a table called "domain", and I have created a snapshot of it:
hbase(main):003:0> snapshot 'domain', 'domain-aws-test'
0 row(s) in 0.3310 seconds
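As a quick sanity check (not shown in the original message), the snapshot
can be verified from the HBase shell before exporting it:

```
hbase(main):004:0> list_snapshots
```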
---
I am attempting to
Another practice is to send the snapshots to S3. That works great for
disaster recovery, especially if you are running your HBase cluster on
EC2 or if you have the means to use AWS Direct Connect from your private
infrastructure.
cheers,
esteban.
--
Cloudera, Inc.
On Thu, Apr 9, 2015 at 3:38
Indeed you will be sending 1.2TB over the wire. I think the common practice
is to export a snapshot from the local HDFS to a remote HDFS (or an
HDFS-like store, such as S3). The idea is you get full bi-directional
bandwidth (modulo top-of-rack switching) between all peers in both clusters.
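The HDFS-to-HDFS export described above can be sketched with the
ExportSnapshot tool. The snapshot name, the remote NameNode hostname, and
the tuning values below are hypothetical placeholders, not from the thread:

```shell
# Copy a snapshot from the local cluster to a remote HDFS cluster.
# "backup-nn" and the snapshot name are hypothetical placeholders.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my-table-snapshot \
  -copy-to hdfs://backup-nn:8020/hbase \
  -mappers 8 \
  -bandwidth 100   # throttle each mapper to roughly 100 MB/s
```

The -mappers and -bandwidth flags let you trade export speed against the
load placed on the source cluster, which matters when moving something on
the order of 1.2 TB.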
On Thu, Apr 9, 2015
Hi,
what is the reason to back up HDFS? It's distributed, reliable,
fault-tolerant, etc.
NFS would be expensive for keeping TBs of data.
What problem are you trying to solve?
2015-04-09 20:35 GMT+02:00 Afroz Ahmad ahmad@gmail.com:
We are planning to use the snapshot feature that takes a backup of a table
with 1.2 TB of data. We are planning to export the data using
ExportSnapshot and copy the resulting files to an NFS mount periodically.
Our infrastructure team is very concerned about the amount of data that
will be going