[jira] Commented: (HBASE-1845) MultiGet, MultiDelete, and MultiPut - batched to the appropriate region servers

2010-06-08 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876559#action_12876559
 ] 

ryan rawson commented on HBASE-1845:


multi-get is more of a parallel random get, not like a scan at all.
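For illustration, a minimal sketch of what such a batched multi-get could look like, assuming hypothetical RegionLocator/ServerRpc helpers (not the actual HBase client classes): group the Gets by the region server hosting each row, then issue one batched call per server in parallel.

{code}
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch: bucket per-row Gets by the server hosting each row's
// region, then issue one batched call per server in parallel. RegionLocator
// and ServerRpc are stand-ins, not real HBase client classes.
public class MultiGetSketch {
  interface RegionLocator { String serverFor(byte[] row); }
  interface ServerRpc { List<byte[]> multiGet(List<byte[]> rows); }

  static List<byte[]> parallelGet(List<byte[]> rows, RegionLocator locator,
      Map<String, ServerRpc> servers) throws Exception {
    // 1. Group rows by hosting region server.
    Map<String, List<byte[]>> byServer = new HashMap<String, List<byte[]>>();
    for (byte[] row : rows) {
      String server = locator.serverFor(row);
      if (!byServer.containsKey(server)) byServer.put(server, new ArrayList<byte[]>());
      byServer.get(server).add(row);
    }
    // 2. One batched RPC per server, all servers queried concurrently.
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, byServer.size()));
    List<Future<List<byte[]>>> futures = new ArrayList<Future<List<byte[]>>>();
    for (final Map.Entry<String, List<byte[]>> e : byServer.entrySet()) {
      final ServerRpc rpc = servers.get(e.getKey());
      futures.add(pool.submit(new Callable<List<byte[]>>() {
        public List<byte[]> call() { return rpc.multiGet(e.getValue()); }
      }));
    }
    // 3. Collect results; unlike a scan, there is no ordering across servers.
    List<byte[]> results = new ArrayList<byte[]>();
    for (Future<List<byte[]>> f : futures) results.addAll(f.get());
    pool.shutdown();
    return results;
  }
}
{code}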



On Mon, Jun 7, 2010 at 11:14 PM, Jeff Hammerbacher (JIRA)



 MultiGet, MultiDelete, and MultiPut - batched to the appropriate region 
 servers
 ---

 Key: HBASE-1845
 URL: https://issues.apache.org/jira/browse/HBASE-1845
 Project: HBase
  Issue Type: New Feature
Reporter: Erik Holstad
 Fix For: 0.21.0

 Attachments: batch.patch, hbase-1845_0.20.3.patch, 
 hbase-1845_0.20.5.patch, multi-v1.patch


 I've started to create a general interface for doing these batch/multi calls 
 and would like to get some input and thoughts about how we should handle this 
 and what the protocol should
 look like. 
 First naive patch, coming soon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-2558) [mvn] Our javadoc overview -- Getting Started, requirements, etc. -- is not carried across by mvn javadoc:javadoc target

2010-06-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2558:
-

Attachment: 2558-v3.txt

Brings over the doc from 0.20.  I fixed it up since the xdoc format changed.  Did a home 
page and fixed up the nav menus.  This should do for now.  Going to commit.

 [mvn] Our javadoc overview -- Getting Started, requirements, etc. -- is not 
 carried across by mvn javadoc:javadoc target
 --

 Key: HBASE-2558
 URL: https://issues.apache.org/jira/browse/HBASE-2558
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Blocker
 Fix For: 0.21.0

 Attachments: 2558-v2.txt, 2558-v3.txt, 2558.txt


 I see this http://jira.codehaus.org/browse/MJAVADOC-278 which does not bode 
 well.  I messed around upping the plugin version and explicitly specifying 
 the overview.html file to use but no luck.  Come back and figure this or an 
 alternative.
 Made it a blocker.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-50) Snapshot of table

2010-06-08 Thread Li Chongxin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-50?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876699#action_12876699
 ] 

Li Chongxin commented on HBASE-50:
--

bq. ... but also after snapshot is done your design should include 
description of how files are archived, rather than deleted...

Are you talking about files that are no longer used by the hbase table but are 
still referenced by a snapshot? I think this is described in chapter 6, 'Snapshot 
Maintenance'. For example, hfiles are archived in the delete directory, and section 
6.4 describes how these files will be cleaned up.

bq. ..In fact you'll probably be doing a snapshot of at least a subset of 
.META. on every table snapshot I'd imagine - at least the entries for the 
relevant table.

The .META. entries for the snapshotted table have been dumped, haven't they? Why 
would we still need a snapshot of a subset of .META.?

bq. So, do you foresee your restore-from-snapshot running split over the logs 
as part of the restore? That makes sense to me.

Yes, restore-from-snapshot has to run split over the WAL logs. It will take 
some time, so restore-from-snapshot will not be very fast.

bq. Why you think we need a Reference to the hfile? Why not just a file that 
lists the names of all the hfiles? We don't need to execute the snapshot, do 
we? Restoring from a snapshot would be a bunch of file renames and wal 
splitting?

At first I thought the snapshot should probably keep the table directory structure 
for later use. For example, a reader like HalfStoreFileReader could be provided 
so that we could read from the snapshot directly. But yes, we actually don't 
execute the snapshot, so keeping a list of all the hfiles (actually one list per 
RS, right?) should be enough. Also, restoring from a snapshot is not just file 
renames: since an hfile might be referenced by several snapshots, we should 
probably do a real copy when restoring, right?
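A minimal sketch of the per-RS hfile list idea, assuming a plain-text manifest format (layout and names are illustrative, not part of the design doc). The copy on restore is what keeps an hfile shared by several snapshots intact.

{code}
import java.io.*;
import java.util.*;

// Hypothetical sketch of the per-RS hfile list: one manifest file per region
// server under the snapshot directory, one referenced hfile path per line.
public class SnapshotManifestSketch {

  static void writeManifest(File snapshotDir, String regionServer,
      List<String> hfilePaths) throws IOException {
    PrintWriter out =
        new PrintWriter(new FileWriter(new File(snapshotDir, regionServer + ".files")));
    try {
      for (String p : hfilePaths) out.println(p);
    } finally {
      out.close();
    }
  }

  // Restore copies (not renames) each listed hfile back into the table dir,
  // because the same hfile may be referenced by several snapshots.
  static void restore(File manifest, File tableDir) throws IOException {
    BufferedReader in = new BufferedReader(new FileReader(manifest));
    try {
      String path;
      while ((path = in.readLine()) != null) {
        copy(new File(path), new File(tableDir, new File(path).getName()));
      }
    } finally {
      in.close();
    }
  }

  static void copy(File src, File dst) throws IOException {
    InputStream in = new FileInputStream(src);
    OutputStream out = new FileOutputStream(dst);
    try {
      byte[] buf = new byte[8192];
      for (int n; (n = in.read(buf)) > 0;) out.write(buf, 0, n);
    } finally {
      in.close();
      out.close();
    }
  }
}
{code}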

bq. Shall we name the new .META. column family snapshot rather than reference?

sure

bq. On the filename '.deleted', I think it a mistake to give it a '.' prefix 
especially given its in the snapshot dir...

Ok, I will rename the snapshot dir to '.snapshot'. For the dir '.deleted', what 
name do you think we should use? Because there might be several snapshots under 
the '.snapshot' dir, each with its own snapshot name, I named this dir '.deleted' 
to distinguish it from a snapshot name.

bq. Do you need a new catalog table called snapshots to keep list of snapshots, 
of what a snapshot comprises and some other metadata such as when it was made, 
whether it succeeded, who did it and why?

It'll be much more convenient if a catalog table 'snapshot' can be created. 
Will this impact the normal operation of hbase?

bq. Section 7.4 is missing split of WAL files. Perhaps this can be done in a MR 
job? 

I'll add the split of the WAL logs. Yes, an MR job can be used. Which method do 
you think is better: reading from the imported file and inserting into the table 
via the hbase api, or just copying the hfiles into place and updating .META.?

bq. Lets not have the master run the snapshot... let the client run it?
bq. Snapshot will be doing same thing whether table is partially online or not..

I put these two issues together because I think they are related. In the current 
design, if a table is open, the snapshot is performed by each RS that serves the 
table's regions. Otherwise, if a table is closed, the snapshot is performed by the 
master because the table is not served by any RS. The first comment is about a 
closed table, so the master performs the snapshot because the client does not have 
access to the underlying dfs. For the second one, I was thinking that if a table is 
partially online, its regions might be partially served by RSs and partially 
offline, right? Then who performs the snapshot? If the RSs, the regions that are 
offline will be missed. If the master, the regions that are online might lose data 
in the memstore. I'm confused.

bq. It's a synchronous way. Do you think this is appropriate? Yes. I'm w/ JG on 
this.

This is another problem confusing me. In the current design (which is synchronous), 
a snapshot is started only when all the RSs are ready for it, and then all RSs 
perform the snapshot concurrently. This guarantees the snapshot is not started if 
one RS fails. If we switch to an asynchronous approach, should each RS start the 
snapshot immediately when it is ready?
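For illustration, a minimal sketch of the synchronous scheme using a hypothetical ZooKeeper barrier; the paths and node names are assumptions, not the actual design.

{code}
import java.util.List;
import org.apache.zookeeper.*;

// Minimal sketch of the synchronous scheme: each RS marks itself ready under a
// per-snapshot znode, and the snapshot only starts once every participant is
// ready. Paths and node names are assumptions, not the actual design.
public class SnapshotBarrierSketch {

  /** Region server side: announce readiness for this snapshot. */
  static void markReady(ZooKeeper zk, String snapshotName, String serverName)
      throws KeeperException, InterruptedException {
    // Parent znodes are assumed to already exist.
    zk.create("/hbase/snapshot/" + snapshotName + "/ready/" + serverName,
        new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }

  /** Coordinator side: true once all participating servers have checked in. */
  static boolean allReady(ZooKeeper zk, String snapshotName, int expectedServers)
      throws KeeperException, InterruptedException {
    List<String> ready =
        zk.getChildren("/hbase/snapshot/" + snapshotName + "/ready", false);
    return ready.size() >= expectedServers;
  }
}
{code}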

 Snapshot of table
 -

 Key: HBASE-50
 URL: https://issues.apache.org/jira/browse/HBASE-50
 Project: HBase
  Issue Type: New Feature
Reporter: Billy Pearson
Assignee: Li Chongxin
Priority: Minor
 Attachments: HBase Snapshot Design Report V2.pdf, snapshot-src.zip


 Having an option to take a snapshot of a table would be very useful in 
 production.
 What I would like to see this option do is do a 

[jira] Commented: (HBASE-2691) LeaseStillHeldException totally ignored by RS, wrongly named

2010-06-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876738#action_12876738
 ] 

stack commented on HBASE-2691:
--

bq. What about ServerAlreadyExistingException and 
ServerAlreadyConsideredDeadException? (I'm not good at naming stuff)

Doing the above and purging LeaseStillHeldException as you suggest is a good idea.  
It solves the problem of differentiating the different startup/dead-server 
circumstances.

Regards naming, they ain't too bad.  The latter could be YouAreDeadException 
(with its message holding info on why it's considered dead).  The former could 
be PleaseHoldException (its message would explain the holdup).
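A minimal sketch of what the two proposed exceptions might look like; the names come from the comment above and do not exist in the codebase at this point.

{code}
import java.io.IOException;

/** Thrown when the master already considers this region server dead. */
class YouAreDeadException extends IOException {
  public YouAreDeadException(String whyConsideredDead) {
    super(whyConsideredDead);
  }
}

/** Thrown when the master is not yet ready to accept the report; retry later. */
class PleaseHoldException extends IOException {
  public PleaseHoldException(String whyTheHoldup) {
    super(whyTheHoldup);
  }
}
{code}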

 LeaseStillHeldException totally ignored by RS, wrongly named
 

 Key: HBASE-2691
 URL: https://issues.apache.org/jira/browse/HBASE-2691
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0


 Currently region servers don't handle 
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException in any way that's 
 useful, so what happens right now is that the RS tries to report to the master 
 and this happens:
 {code}
 2010-06-07 17:20:54,368 WARN  [RegionServer:0] 
 regionserver.HRegionServer(553): Attempt=1
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:541)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:173)
 at java.lang.Thread.run(Thread.java:637)
 {code}
 Then it will retry until the watch is triggered telling it that the session's 
 expired! Instead, we should be a lot more proactive and initiate the abort 
 procedure.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2558) [mvn] Our javadoc overview -- Getting Started, requirements, etc. -- is not carried across by mvn javadoc:javadoc target

2010-06-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876740#action_12876740
 ] 

stack commented on HBASE-2558:
--

I just committed a bit more cleanup, added a faq, shortened the news (added an 
old-news doc), etc.  Now I'm definitely done till I get feedback.  Hopefully hudson 
will start publishing the site now that trunk is working again.

 [mvn] Our javadoc overview -- Getting Started, requirements, etc. -- is not 
 carried across by mvn javadoc:javadoc target
 --

 Key: HBASE-2558
 URL: https://issues.apache.org/jira/browse/HBASE-2558
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Blocker
 Fix For: 0.21.0

 Attachments: 2558-v2.txt, 2558-v3.txt, 2558.txt


 I see this http://jira.codehaus.org/browse/MJAVADOC-278 which does not bode 
 well.  I messed around upping the plugin version and explicitly specifying 
 the overview.html file to use but no luck.  Come back and figure this or an 
 alternative.
 Made it a blocker.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2693) [doc] We speak about ganglia 3.0 vs 3.1 in head of conf file; should be up in the metrics article

2010-06-08 Thread stack (JIRA)
[doc] We speak about ganglia 3.0 vs 3.1 in head of conf file; should be up in 
the metrics article
-

 Key: HBASE-2693
 URL: https://issues.apache.org/jira/browse/HBASE-2693
 Project: HBase
  Issue Type: Bug
Reporter: stack


Need to also be clear that if a patched hadoop is needed, then the patched hadoop 
needs to be put under hbase/lib.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2694) Move RS to Master region open/close messaging into ZooKeeper

2010-06-08 Thread Jonathan Gray (JIRA)
Move RS to Master region open/close messaging into ZooKeeper


 Key: HBASE-2694
 URL: https://issues.apache.org/jira/browse/HBASE-2694
 Project: HBase
  Issue Type: Sub-task
  Components: master, regionserver
Reporter: Jonathan Gray
Priority: Critical
 Fix For: 0.21.0


As a first step towards HBASE-2485, this issue is about changing the message 
flow of opening and closing of regions without actually changing the 
implementation of what happens on both the Master and RegionServer sides.  This 
way we can debug the messaging changes before the introduction of more 
significant changes to the master architecture and handling of regions in 
transition.
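A minimal sketch of the kind of ZK-based messaging this implies, assuming an illustrative /hbase/unassigned znode layout and state encoding; the real layout is what this issue (and HBASE-2485) will define.

{code}
import org.apache.zookeeper.*;

// Hypothetical sketch of a region server reporting an open through ZK instead
// of a master heartbeat message. Path and state encoding are illustrative only.
public class RegionTransitionSketch {

  static void reportOpened(ZooKeeper zk, String regionName, String serverName)
      throws KeeperException, InterruptedException {
    String znode = "/hbase/unassigned/" + regionName;
    byte[] state =
        ("OPENED " + serverName + " " + System.currentTimeMillis()).getBytes();
    if (zk.exists(znode, false) == null) {
      zk.create(znode, state, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } else {
      zk.setData(znode, state, -1);  // -1 ignores the znode version
    }
    // The master, watching /hbase/unassigned, reacts to the change rather than
    // waiting for the next heartbeat.
  }
}
{code}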

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2696) ZooKeeper cleanup and refactor

2010-06-08 Thread Jonathan Gray (JIRA)
ZooKeeper cleanup and refactor
--

 Key: HBASE-2696
 URL: https://issues.apache.org/jira/browse/HBASE-2696
 Project: HBase
  Issue Type: Sub-task
  Components: zookeeper
Reporter: Jonathan Gray
Priority: Critical
 Fix For: 0.21.0


Currently almost everything we do with ZooKeeper is stuffed into a single class 
{{ZookeeperWrapper}}.

This issue will deal with cleaning up our usage of ZK, adding some new 
abstractions to help with the master changes, splitting up watchers from 
utility methods, and nailing down the contracts of our ZK methods with respect 
to setting watchers, throwing exceptions, etc...
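A minimal sketch of the split being proposed, with hypothetical ZKListener/ZKUtil-style interfaces; the names are assumptions, not the final abstractions.

{code}
// Event side: components register for the callbacks they care about instead of
// implementing a raw ZooKeeper Watcher.
interface ZKListener {
  void nodeCreated(String path);
  void nodeDeleted(String path);
  void nodeDataChanged(String path);
}

// Utility side: plain helpers whose watch/exception behaviour is documented.
interface ZKUtilSketch {
  /** Returns the data at path; sets a watch only if {@code watch} is true. */
  byte[] getData(String path, boolean watch) throws Exception;

  /** Creates path if absent; never sets a watch. */
  void ensureExists(String path) throws Exception;
}
{code}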

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2698) Reimplement cluster startup to use new load balancer

2010-06-08 Thread Jonathan Gray (JIRA)
Reimplement cluster startup to use new load balancer


 Key: HBASE-2698
 URL: https://issues.apache.org/jira/browse/HBASE-2698
 Project: HBase
  Issue Type: Sub-task
  Components: master, regionserver
Reporter: Jonathan Gray
Priority: Critical
 Fix For: 0.21.0


The new load balancer provides a special call for cluster startup.  Ideally 
this takes into account block locations so we can get good data locality, but 
primarily this will make it so cluster startup does not rely on the MetaScanner 
and normal assignment process as it does today.
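A minimal sketch of a locality-aware startup bulk assignment, assuming per-region block counts per server are available; the types and names are illustrative, not the actual balancer API.

{code}
import java.util.*;

// Each region goes to the live server holding the most of its hfile blocks,
// with the number of regions already planned for a server as the tie-breaker.
public class StartupAssignmentSketch {

  static Map<String, List<String>> bulkAssign(
      List<String> regions,
      Map<String, Map<String, Integer>> localBlocks,  // region -> server -> local block count
      List<String> liveServers) {                     // assumed non-empty
    Map<String, List<String>> plan = new HashMap<String, List<String>>();
    for (String s : liveServers) plan.put(s, new ArrayList<String>());

    for (String region : regions) {
      Map<String, Integer> counts = localBlocks.get(region);
      String best = null;
      int bestCount = -1;
      for (String server : liveServers) {
        int c = (counts != null && counts.containsKey(server)) ? counts.get(server) : 0;
        // Prefer locality; break ties by the number of regions already planned.
        if (c > bestCount
            || (c == bestCount && plan.get(server).size() < plan.get(best).size())) {
          best = server;
          bestCount = c;
        }
      }
      plan.get(best).add(region);
    }
    return plan;
  }
}
{code}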

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2670) Reader atomicity broken in trunk

2010-06-08 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876789#action_12876789
 ] 

ryan rawson commented on HBASE-2670:


#1 is the old memstore scanner - it would get an 'entire row' at a time and 
then chomp on its internal array.  This made memstore scans really slow.  The 
whole index hbase saga.

#2 is workable, and is what is implemented in HBASE-2616 (which is now in branch 
and trunk).

The problem with #2, and you will see it in HRegionScanner, is that we no longer 
update the read point between rows.  Since by the time we figure out we are 
on the next row we've already peek()ed the value off the scanner, we can't 
switch to a different read point, or else you will get the first KeyValue from 
one read point and the rest from another (sounds familiar?).

As for why I committed without awaiting review: the hudson instance at 
hudson.hbase.org is one of the few machines that trips this failure frequently.  
I haven't been able to repro it in my dev environment, ever.  And now we have 
11 successful branch builds of 0.20, indicating that our bug is now fixed.
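A self-contained sketch of the mid-row read-point hazard described above; the KeyValue and valueAt names are illustrative stand-ins, not the HBase scanner code. Only moving the read point at a row boundary, before any KeyValue of the next row is consumed, avoids the torn row.

{code}
import java.util.*;

public class ReadPointSketch {

  static final class KeyValue {
    final String value;
    final long writeNumber;
    KeyValue(String value, long writeNumber) {
      this.value = value;
      this.writeNumber = writeNumber;
    }
  }

  /** Newest value visible at the given read point. */
  static String valueAt(List<KeyValue> versions, long readPoint) {
    String latest = null;
    long best = -1;
    for (KeyValue kv : versions) {
      if (kv.writeNumber <= readPoint && kv.writeNumber > best) {
        latest = kv.value;
        best = kv.writeNumber;
      }
    }
    return latest;
  }

  public static void main(String[] args) {
    // One row, two columns, each written atomically at write numbers 5 and 6.
    List<KeyValue> colA = Arrays.asList(new KeyValue("old", 5), new KeyValue("new", 6));
    List<KeyValue> colB = Arrays.asList(new KeyValue("old", 5), new KeyValue("new", 6));

    // Whole row read at one read point: consistent ("old old").
    System.out.println(valueAt(colA, 5) + " " + valueAt(colB, 5));

    // First column peek()ed at read point 5, the rest read at 6: torn row
    // ("old new") -- the mix that TestAcidGuarantees flags.
    System.out.println(valueAt(colA, 5) + " " + valueAt(colB, 6));
  }
}
{code}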

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2670) Reader atomicity broken in trunk

2010-06-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876795#action_12876795
 ] 

Todd Lipcon commented on HBASE-2670:


This isn't fixed in trunk, see: 
http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1308/
Specifically:
http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1308/testReport/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/

 Reader atomicity broken in trunk
 

 Key: HBASE-2670
 URL: https://issues.apache.org/jira/browse/HBASE-2670
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker

 There appears to be a bug in HBASE-2248 as committed to trunk. See following 
 failing test:
 http://hudson.zones.apache.org/hudson/job/HBase-TRUNK/1296/testReport/junit/org.apache.hadoop.hbase/TestAcidGuarantees/testAtomicity/
 Think this is the same bug we saw early on in 2248 in the 0.20 branch, looks 
 like the fix didn't make it over.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HBASE-2684) TestMasterWrongRS flaky in trunk

2010-06-08 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans reassigned HBASE-2684:
-

Assignee: Jean-Daniel Cryans

 TestMasterWrongRS flaky in trunk
 

 Key: HBASE-2684
 URL: https://issues.apache.org/jira/browse/HBASE-2684
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Jean-Daniel Cryans

 I think this is just a flaky test. I saw:
 java.lang.AssertionError: expected:<2> but was:<3>
 on the first:
 assertEquals(2, cluster.getLiveRegionServerThreads().size());
 My guess is that the 2-second sleep is not good enough. We should probably 
 either force a heartbeat somehow, or hook in so we can wait until there's 
 been a heartbeat, rather than sleeping a hardcoded amount of time.
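A minimal sketch of the wait-for-condition idea suggested above, polling instead of a single fixed sleep; the helper name and timeout are assumptions, and it would live in the test.

{code}
static void waitForLiveRegionServers(MiniHBaseCluster cluster, int expected,
    long timeoutMs) throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (cluster.getLiveRegionServerThreads().size() != expected) {
    if (System.currentTimeMillis() > deadline) {
      throw new AssertionError("expected " + expected + " live region servers, got "
          + cluster.getLiveRegionServerThreads().size());
    }
    Thread.sleep(100);  // re-check instead of sleeping a hardcoded total
  }
}
{code}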

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2699) Reimplement load balancing to be a background process and to not use heartbeats

2010-06-08 Thread Jonathan Gray (JIRA)
Reimplement load balancing to be a background process and to not use heartbeats
---

 Key: HBASE-2699
 URL: https://issues.apache.org/jira/browse/HBASE-2699
 Project: HBase
  Issue Type: Sub-task
  Components: master, regionserver
Reporter: Jonathan Gray
Priority: Critical
 Fix For: 0.21.0




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2684) TestMasterWrongRS flaky in trunk

2010-06-08 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876821#action_12876821
 ] 

HBase Review Board commented on HBASE-2684:
---

Message from: st...@duboce.net

---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/163/#review158
---

Ship it!


+1 after making below suggested fix.


/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterWrongRS.java
http://review.hbase.org/r/163/#comment791

This looks wrong.  Shouldn't the index be '1' for the second server?  It's 
the index you used at the start of the test to get a reference to secondServer.

Otherwise, looks good.. go ahead and commit


- stack





 TestMasterWrongRS flaky in trunk
 

 Key: HBASE-2684
 URL: https://issues.apache.org/jira/browse/HBASE-2684
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Jean-Daniel Cryans

 I think this is just a flaky test. I saw:
 java.lang.AssertionError: expected:<2> but was:<3>
 on the first:
 assertEquals(2, cluster.getLiveRegionServerThreads().size());
 My guess is that the 2-second sleep is not good enough. We should probably 
 either force a heartbeat somehow, or hook in so we can wait until there's 
 been a heartbeat, rather than sleeping a hardcoded amount of time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HBASE-2700) Handle master failover for regions in transition

2010-06-08 Thread Jonathan Gray (JIRA)
Handle master failover for regions in transition


 Key: HBASE-2700
 URL: https://issues.apache.org/jira/browse/HBASE-2700
 Project: HBase
  Issue Type: Sub-task
  Components: master, zookeeper
Reporter: Jonathan Gray
Priority: Critical
 Fix For: 0.21.0


To this point in HBASE-2692 tasks we have moved everything for regions in 
transition into ZK, but we have not fully handled the master failover case.  
This is to deal with that and to write tests for it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HBASE-2684) TestMasterWrongRS flaky in trunk

2010-06-08 Thread HBase Review Board (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876825#action_12876825
 ] 

HBase Review Board commented on HBASE-2684:
---

Message from: Jean-Daniel Cryans jdcry...@apache.org


bq.  On 2010-06-08 14:40:49, stack wrote:
bq.   
/trunk/src/test/java/org/apache/hadoop/hbase/master/TestMasterWrongRS.java, 
line 71
bq.   http://review.hbase.org/r/163/diff/2/?file=1096#file1096line71
bq.  
bq.   This looks wrong.  Shouldn't the index be '1' for the second server? 
 It's the index you used at the start of the test to get a reference to secondServer.
bq.   
bq.   Otherwise, looks good.. go ahead and commit

After chatting with Stack and looking at the code, it appears I'm actually 
doing the right thing (he had another method in mind that did not remove the 
region server from the list).


- Jean-Daniel


---
This is an automatically generated e-mail. To reply, visit:
http://review.hbase.org/r/163/#review158
---





 TestMasterWrongRS flaky in trunk
 

 Key: HBASE-2684
 URL: https://issues.apache.org/jira/browse/HBASE-2684
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Todd Lipcon
Assignee: Jean-Daniel Cryans

 I think this is just a flaky test. I saw:
 java.lang.AssertionError: expected:<2> but was:<3>
 on the first:
 assertEquals(2, cluster.getLiveRegionServerThreads().size());
 My guess is that the 2-second sleep is not good enough. We should probably 
 either force a heartbeat somehow, or hook in so we can wait until there's 
 been a heartbeat, rather than sleeping a hardcoded amount of time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-2702) Add client API to expose bloom filter check

2010-06-08 Thread Jonathan Gray (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Gray updated HBASE-2702:
-

Labels: noob  (was: )

 Add client API to expose bloom filter check
 ---

 Key: HBASE-2702
 URL: https://issues.apache.org/jira/browse/HBASE-2702
 Project: HBase
  Issue Type: New Feature
  Components: client, regionserver
Reporter: Jonathan Gray
Priority: Minor

 There may be some valid use cases where a user wants to check for existence 
 of a given row or row+col using bloom filters but without performing the Get 
 if the bloom gets a hit.
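A minimal sketch of what such a client call could look like; this is a hypothetical interface, as no such API exists at this point.

{code}
import java.io.IOException;

public interface BloomCheck {
  /** True if the row may exist (bloom hit); false means it definitely does not. */
  boolean exists(byte[] row) throws IOException;

  /** The same check scoped to a single column. */
  boolean exists(byte[] row, byte[] family, byte[] qualifier) throws IOException;
}
{code}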

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-2703) ui not working in distributed context

2010-06-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2703:
-

Attachment: 2703.txt

This patch seems to fix it. The main change is this in bin/hbase:

{code}
@@ -158,11 +158,15 @@ fi
 
 # For releases, add hbase & webapps to CLASSPATH
 # Webapps must come first else it messes up Jetty
-if [ -d $HBASE_HOME/webapps ]; then
+if [ -d $HBASE_HOME/hbase-webapps ]; then
   CLASSPATH=${CLASSPATH}:$HBASE_HOME
 fi
 for f in $HBASE_HOME/hbase*.jar; do
-  if [ -f $f ]; then
+  if [[ $f = *sources.jar ]]
+  then
+    : # Skip sources.jar
+  elif [ -f $f ]
+  then
     CLASSPATH=${CLASSPATH}:$f;
   fi
 done
{code}

The first change seems to fix it.  The second gets rid of a needless unjarring 
that was going on because the sources jar has jsps in it.

I'm still at this.  I might add a bit more to the patch.  It looks like 
WEB-INF/web.xml is being generated into our src/main/resources...  rather than 
under target/.  Will try to fix that while in here. 

 ui not working in distributed context
 -

 Key: HBASE-2703
 URL: https://issues.apache.org/jira/browse/HBASE-2703
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 0.21.0

 Attachments: 2703.txt


 The UI is not showing when you put hbase on a cluster; this is since we renamed 
 the webapps dir to hbase-webapps.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HBASE-2691) LeaseStillHeldException totally ignored by RS, wrongly named

2010-06-08 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-2691:
--

Attachment: HBASE-2691.patch

Patch that will be committed.

 LeaseStillHeldException totally ignored by RS, wrongly named
 

 Key: HBASE-2691
 URL: https://issues.apache.org/jira/browse/HBASE-2691
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.20.6, 0.21.0

 Attachments: HBASE-2691.patch


 Currently region servers don't handle 
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException in any way that's 
 useful, so what happens right now is that the RS tries to report to the master 
 and this happens:
 {code}
 2010-06-07 17:20:54,368 WARN  [RegionServer:0] 
 regionserver.HRegionServer(553): Attempt=1
 org.apache.hadoop.hbase.Leases$LeaseStillHeldException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
 at 
 org.apache.hadoop.hbase.RemoteExceptionHandler.checkIOException(RemoteExceptionHandler.java:66)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:541)
 at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:173)
 at java.lang.Thread.run(Thread.java:637)
 {code}
 Then it will retry until the watch is triggered telling it that the session's 
 expired! Instead, we should be a lot more proactive and initiate the abort 
 procedure.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.