Hi,

libhdfs picks up its configuration from the same source as the other
components (i.e., from hdfs-site.xml, via the launch classpath).
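
For example, something like this in hdfs-site.xml would cap the number of
times the client retries locating a block before failing a read (the
dfs.client.max.block.acquire.failures property defaults to 3; the exact
knobs available vary by version, so check your release's hdfs-default.xml):

```xml
<property>
  <name>dfs.client.max.block.acquire.failures</name>
  <value>3</value>
  <description>Max retries for the client to acquire a block's
  locations before the read fails.</description>
</property>
```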

Losing a complete rack on a rack-aware, healthy HDFS cluster will not
cause read or write failures, since the default block placement policy
keeps at least one replica on a second rack.
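
If you also want an application-level cap on retries (libhdfs already
retries internally), you can wrap your read call yourself. A minimal
sketch in plain C — the function and names here are hypothetical, not
part of the libhdfs API; in practice the callback would wrap hdfsPread:

```c
#include <stdio.h>
#include <unistd.h>

/* Signature of the underlying read; ctx would carry the hdfsFS/hdfsFile
 * handles in a real libhdfs program. */
typedef int (*read_fn)(void *ctx, char *buf, int len);

/* Retry `fn` up to `max_attempts` times, sleeping between attempts.
 * Returns bytes read (>= 0) on success, -1 if every attempt failed. */
static int read_with_retry(read_fn fn, void *ctx, char *buf, int len,
                           int max_attempts) {
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int n = fn(ctx, buf, len);
        if (n >= 0)
            return n;
        fprintf(stderr, "attempt %d/%d failed, retrying\n",
                attempt, max_attempts);
        sleep(1); /* in real code, back off exponentially */
    }
    return -1;
}
```

This throttles only your own layer of retries; the client library's
internal retry behavior is still governed by its configuration.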

On Mon, Mar 19, 2012 at 2:14 AM, Rita <rmorgan...@gmail.com> wrote:
> In libhdfs, how can I throttle the number of retries?
>
>
> On Sun, Mar 18, 2012 at 1:12 PM, Marcos Ortiz <mlor...@uci.cu> wrote:
>>
>> HDFS is built precisely with these concerns in mind.
>> If you are reading a 60 GB file and a rack goes down, the system
>> will transparently serve you another copy, based on your
>> replication factor.
>> A block can also become unavailable due to corruption; in that case
>> it can be re-replicated to other live machines, and the fsck utility
>> will report the error.
>>
>> Regards
>>
>>
>> On 3/18/2012 9:46 AM, Rita wrote:
>>>
>>> My replication factor is 3, and if I were reading data through libhdfs
>>> using C, is there a retry method? I am reading a 60 GB file; what will
>>> happen if a rack goes down and the next block isn't available? Will the
>>> API retry? Is there a way to configure this option?
>>>
>>>
>>> --
>>> --- Get your facts first, then you can distort them as you please.--
>>
>>
>> --
>> Marcos Luis Ortíz Valmaseda (@marcosluis2186)
>>  Data Engineer at UCI
>>  http://marcosluis2186.posterous.com
>>
>
>
>
>



-- 
Harsh J
