Currently a row belongs to exactly one region, and a single region server 
serves that region at any given moment. 
So when that row is updated, a lock is acquired on that row and held until the 
data is updated in memory (note that a put is written to the memstore on the 
region server and also persisted in the write-ahead log, the WAL). Subsequent 
puts to that row have to wait for that lock.
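
Roughly, from the client side it looks like this (just a sketch against the 
0.20-era Java client; the table name 't', family 'cf' and row key are made up):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  HTable table = new HTable(new HBaseConfiguration(), "t");
  Put put = new Put(Bytes.toBytes("row-1"));
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
  // On the region server the put is applied under that row's lock:
  // appended to the WAL and written to the memstore, then the lock is
  // released. A second client putting to "row-1" at the same moment
  // simply waits on that lock.
  table.put(put);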

HBase is fully consistent. That said, all locking takes place at the row level 
only; there is no range locking, so you have to take that into account when you 
scan. 
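
So a scan gives you rows that are each internally consistent, but the scan as a 
whole is not a snapshot. Something like this (same assumptions as above; the 
start/stop keys are made up):

  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;

  Scan scan = new Scan(Bytes.toBytes("row-a"), Bytes.toBytes("row-z"));
  ResultScanner scanner = table.getScanner(scan);
  try {
    for (Result row : scanner) {
      // Each Result is a consistent view of one row, but rows the
      // scanner hasn't reached yet (or has already passed) can be
      // updated concurrently - there is no lock over the scanned range.
      System.out.println(Bytes.toString(row.getRow()));
    }
  } finally {
    scanner.close();
  }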

I'm not sure I understand the resource releasing issue. HTable.close() flushes 
the current write buffer (you have a client-side write buffer if you set 
autoFlush to false). 
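
For example (again just a sketch with made-up names; autoFlush is on by 
default, so this only matters if you turn it off):

  HTable table = new HTable(new HBaseConfiguration(), "t");
  table.setAutoFlush(false);      // puts go to a client-side write buffer

  Put put = new Put(Bytes.toBytes("row-1"));
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
  table.put(put);                 // buffered locally, not yet on the server

  table.flushCommits();           // push the buffered puts to the servers
  table.close();                  // flushes whatever is left in the buffer

So a lazy developer who never calls close() and just lets the HTable get GC'd 
risks leaving whatever was still sitting in the write buffer unsent (unless the 
buffer happened to fill up and flush on its own).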

Cosmin


On Jul 16, 2010, at 1:33 PM, Michael Segel wrote:

> 
> Ok,
> 
> First, I'm writing this before I've had my first cup of coffee, so I'm 
> apologizing in advance if this is a brain-dead question....
> 
> Coming from a relational background, I may be asking questions that don't 
> make sense in the HBase world.
> 
> 
> When does HBase acquire a lock on a row and how long does it persist? Does 
> the lock only hit the current row, or does it also lock the adjacent rows?
> Does HBase support the concept of 'dirty reads'? 
> 
> The issue is what happens when two jobs hit the same table at the same time 
> and update/read the same rows.
> 
> A developer came across a problem and the fix was to use the HTable.close() 
> method to release any resources.
> 
> I am wondering whether you have to clean up explicitly, or whether a lazy 
> developer can just let the object go out of scope and get GC'd.
> 
> Thx
> 
> -Mike
> 
