Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-30 Thread Marcos Ortiz Valmaseda
Like I said before, I need to store all the click streams of an advertising network for later deep analysis of this huge dataset. We want to store the data in two places: first in Amazon S3, then in HBase. But I think that we don't need S3 if we can store it in a proper HBase cluster using the asynchbase
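For reference, a minimal sketch of a direct write with asynchbase, assuming a ZooKeeper quorum at zk-host and a clicks table with a d family (all names here are placeholders, not from the thread):

    import org.hbase.async.HBaseClient;
    import org.hbase.async.PutRequest;

    public class ClickWriter {
        public static void main(String[] args) throws Exception {
            final HBaseClient client = new HBaseClient("zk-host");
            final byte[] table  = "clicks".getBytes();
            final byte[] family = "d".getBytes();
            // One row per click event; key the rows so related events sort together.
            final byte[] key       = "ad42|2012-05-30T12:00:00|user123".getBytes();
            final byte[] qualifier = "payload".getBytes();
            final byte[] value     = "click payload goes here".getBytes();
            final PutRequest put = new PutRequest(table, key, family, qualifier, value);
            // put() is asynchronous; joining the Deferred blocks until the write is acked.
            client.put(put).joinUninterruptibly();
            client.shutdown().joinUninterruptibly();
        }
    }

In a real ingester you would issue many puts and only join periodically, which is where asynchbase's non-blocking client pays off.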

Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Marcos Ortiz
Regards to all the list. We are using Amazon S3 to store millions of files in a certain format, and we want to read the content of these files and then upload it to an HBase cluster. Has anyone done this? Can you recommend an efficient way to do it? Best wishes. -- Marcos Luis

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Amandeep Khurana
Marcos, You could do a distcp from S3 to HDFS and then do a bulk import into HBase. Are you running HBase on EC2 or on your own hardware? -Amandeep On Thursday, May 24, 2012 at 11:52 AM, Marcos Ortiz wrote: Regards to all the list. We are using Amazon S3 to store millions of files in
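For example, the two steps could look like this (bucket, paths, column mapping, and the HBase jar version are placeholders; importtsv assumes tab-separated input with the row key in the first column):

    # Copy the files from S3 into HDFS (s3n:// was the usual scheme at the time)
    hadoop distcp s3n://my-bucket/clicks/ hdfs:///user/hadoop/clicks/

    # Write HFiles with importtsv, then hand them to the RegionServers
    hadoop jar $HBASE_HOME/hbase-0.92.1.jar importtsv \
      -Dimporttsv.columns=HBASE_ROW_KEY,d:payload \
      -Dimporttsv.bulk.output=/user/hadoop/hfiles \
      clicks /user/hadoop/clicks
    hadoop jar $HBASE_HOME/hbase-0.92.1.jar completebulkload /user/hadoop/hfiles clicks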

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Marcos Ortiz
Thanks a lot for your answer, Amandeep. On 05/24/2012 02:55 PM, Amandeep Khurana wrote: Marcos, You could do a distcp from S3 to HDFS and then do a bulk import into HBase. The quantity of files is very large, so we want to combine some files and then construct the HFiles to upload to
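If the data doesn't fit importtsv's model, a rough sketch of a custom job that writes HFiles for bulk load, assuming a hypothetical "rowkey <TAB> payload" line format and a clicks table with family d:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class HFileDriver {
        static class ClickMapper
                extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
            protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] f = line.toString().split("\t", 2);
                byte[] row = f[0].getBytes();
                Put put = new Put(row);
                put.add("d".getBytes(), "payload".getBytes(), f[1].getBytes());
                ctx.write(new ImmutableBytesWritable(row), put);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = new Job(conf, "clicks-to-hfiles");
            job.setJarByClass(HFileDriver.class);
            job.setMapperClass(ClickMapper.class);
            job.setMapOutputKeyClass(ImmutableBytesWritable.class);
            job.setMapOutputValueClass(Put.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // combined logs
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HFiles land here
            // Sets the partitioner and reducer so HFiles match region boundaries.
            HFileOutputFormat.configureIncrementalLoad(job, new HTable(conf, "clicks"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The output directory is then loaded with completebulkload as above. For the many-small-files problem, a combining input format (e.g. a CombineFileInputFormat subclass) cuts the number of map tasks.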

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Amandeep Khurana
Marcos, can you elaborate on your use case a little bit? What is the nature of the data in S3, and why do you want to use HBase? Why do you want to combine HFiles and upload them back to S3? It'll help us answer your questions better. Amandeep On May 24, 2012, at 12:19 PM, Marcos Ortiz mlor...@uci.cu wrote:

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Marcos Ortiz
On 05/24/2012 03:21 PM, Amandeep Khurana wrote: Marcos, can you elaborate on your use case a little bit? What is the nature of the data in S3, and why do you want to use HBase? Why do you want to combine HFiles and upload them back to S3? It'll help us answer your questions better. Amandeep Ok, let me

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Amandeep Khurana
Thanks for that description. I'm not entirely sure why you want to use HBase here. You've got logs coming in that you want to process in batch to do calculations on. This can be done by running MR jobs on the flat files themselves. You could use Java MR, Hive, or Pig to accomplish this. Why do you want
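To make that concrete, a self-contained Java MR job that computes clicks per ad straight from the flat files, assuming a hypothetical tab-separated log format whose first field is the ad id:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ClicksPerAd {
        static class M extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            protected void map(LongWritable off, Text line, Context ctx)
                    throws IOException, InterruptedException {
                // First tab-separated field is assumed to be the ad id.
                ctx.write(new Text(line.toString().split("\t", 2)[0]), ONE);
            }
        }
        static class R extends Reducer<Text, LongWritable, Text, LongWritable> {
            protected void reduce(Text adId, Iterable<LongWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable c : counts) sum += c.get();
                ctx.write(adId, new LongWritable(sum));
            }
        }
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "clicks-per-ad");
            job.setJarByClass(ClicksPerAd.class);
            job.setMapperClass(M.class);
            job.setCombinerClass(R.class);
            job.setReducerClass(R.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // flat log files
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The same aggregation is a one-line GROUP BY once Hive or Pig maps the files to a table.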

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Marcos Ortiz
On 05/24/2012 04:47 PM, Amandeep Khurana wrote: Thanks for that description. I'm not entirely sure why you want to use HBase here. You've got logs coming in that you want to process in batch to do calculations on. This can be done by running MR jobs on the flat files themselves. You could use Java

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Ian Varley
This is a question I see coming up a lot. Put differently: what characteristics make it useful to use HBase on top of HDFS, as opposed to just flat files in HDFS directly? Quantity isn't really an answer, b/c HDFS does fine with quantity (better, even). The basic answers are that HBase is
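For example, the single-row random read HBase gives you that flat files in HDFS don't (table, family, and row key below are placeholders):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;

    public class RandomRead {
        public static void main(String[] args) throws Exception {
            HTable table = new HTable(HBaseConfiguration.create(), "clicks");
            // Fetch one click event by key, no full scan needed.
            Get get = new Get("ad42|2012-05-30T12:00:00|user123".getBytes());
            Result r = table.get(get);
            byte[] payload = r.getValue("d".getBytes(), "payload".getBytes());
            System.out.println(payload == null ? "row not found" : new String(payload));
            table.close();
        }
    }

The equivalent lookup against flat files means a scan (or an MR job) over the whole dataset.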

Re: Efficient way to read a large number of files in S3 and upload their content to HBase

2012-05-24 Thread Marcos Ortiz
On 05/24/2012 05:12 PM, Ian Varley wrote: This is a question I see coming up a lot. Put differently: what characteristics make it useful to use HBase on top of HDFS, as opposed to just flat files in HDFS directly? Quantity isn't really an answer, b/c HDFS does fine with quantity (better,