No problem. I had the test script already written, so I could test in seconds.

Looks good here.

Billy



"Bryan Duxbury" <[EMAIL PROTECTED]> wrote in 
message news:[EMAIL PROTECTED]
>I posted another version of the patch that fixes this problem, I think.
>Give it another try?
>
> (Sorry for relying on you to do the testing - I figure you already have
> the framework set up, and I'm currently trapped in an airport.)
>
> -Bryan
>
> On Dec 29, 2007, at 8:44 PM, Billy wrote:
>
>> Found this in the REST log. I am running REST outside of the master and
>> logging it:
>>
>> 07/12/29 22:36:00 WARN rest: /api/search_index/scanner/3977a5e4:
>> java.lang.ArrayIndexOutOfBoundsException: 3
>>         at org.apache.hadoop.hbase.rest.ScannerHandler.doDelete(ScannerHandler.java:132)
>>         at org.apache.hadoop.hbase.rest.Dispatcher.doDelete(Dispatcher.java:146)
>>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:715)
>>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
>>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:427)
>>         at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(WebApplicationHandler.java:475)
>>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:567)
>>         at org.mortbay.http.HttpContext.handle(HttpContext.java:1565)
>>         at org.mortbay.jetty.servlet.WebApplicationContext.handle(WebApplicationContext.java:635)
>>         at org.mortbay.http.HttpContext.handle(HttpContext.java:1517)
>>         at org.mortbay.http.HttpServer.service(HttpServer.java:954)
>>         at org.mortbay.http.HttpConnection.service(HttpConnection.java:814)
>>         at org.mortbay.http.HttpConnection.handleNext(HttpConnection.java:981)
>>         at org.mortbay.http.HttpConnection.handle(HttpConnection.java:831)
>>         at org.mortbay.http.SocketListener.handleConnection(SocketListener.java:244)
>>         at org.mortbay.util.ThreadedServer.handle(ThreadedServer.java:357)
>>         at org.mortbay.util.ThreadPool$PoolThread.run(ThreadPool.java:534)
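The trace points at the path parsing in ScannerHandler.doDelete: index 3 is read from the split request path without a length check. A minimal defensive sketch, purely illustrative and not the actual HADOOP-2504 patch (the segment layout is an assumption based on the URI /api/search_index/scanner/3977a5e4), might look like:

// Illustrative only: guard the pathSegments access that appears to throw
// ArrayIndexOutOfBoundsException: 3. This slots into ScannerHandler and is
// a sketch, not the committed fix.
public void doDelete(HttpServletRequest request,
    HttpServletResponse response, String[] pathSegments)
    throws ServletException, IOException {
  // Expecting something like ["api", "search_index", "scanner", "<id>"].
  if (pathSegments.length < 4 || pathSegments[3] == null
      || pathSegments[3].length() == 0) {
    response.sendError(HttpServletResponse.SC_BAD_REQUEST,
        "scanner id missing from request path");
    return;
  }
  String scannerId = pathSegments[3];
  // ... look up the scanner by id and close it ...
}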
>>
>>
>> "Bryan Duxbury" <[EMAIL PROTECTED]> wrote in
>> message 
>> news:[EMAIL PROTECTED]
>>> I've created an issue and submitted a patch to fix the problem.
>>> Billy, can you download the patch and check to see if it works alright?
>>>
>>> https://issues.apache.org/jira/browse/HADOOP-2504
>>>
>>> -Bryan
>>>
>>> On Dec 29, 2007, at 3:36 PM, Billy wrote:
>>>
>>>> I checked and added the delete option to my scanner code, based on the
>>>> API from the wiki, but it looks like it's not working at this time,
>>>> judging by the code and the response I got from the REST interface. I
>>>> get a "Not hooked back up yet" response. Any idea on when this will be
>>>> fixed?
>>>>
>>>> Thanks
>>>>
>>>> src/contrib/hbase/src/java/org/apache/hadoop/hbase/rest/ScannerHandler.java
>>>>
>>>> public void doDelete(HttpServletRequest request,
>>>>     HttpServletResponse response, String[] pathSegments)
>>>>     throws ServletException, IOException {
>>>>   doMethodNotAllowed(response, "Not hooked back up yet!");
>>>> }
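For what it's worth, here is a sketch of what a hooked-up doDelete might do, assuming a scanner registry keyed by id; the "scanners" map, its key scheme, and the use of HScannerInterface are assumptions for illustration, not the actual patch:

// Hypothetical sketch of a working delete: remove the scanner from a
// registry and close it to release server-side state.
public void doDelete(HttpServletRequest request,
    HttpServletResponse response, String[] pathSegments)
    throws ServletException, IOException {
  String scannerId = pathSegments[3];  // e.g. "3977a5e4"
  HScannerInterface scanner = scanners.remove(scannerId);  // assumed registry
  if (scanner == null) {
    response.sendError(HttpServletResponse.SC_NOT_FOUND,
        "no scanner with id " + scannerId);
    return;
  }
  scanner.close();
  response.setStatus(HttpServletResponse.SC_OK);
}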
>>>>
>>>>
>>>> "Bryan Duxbury" <[EMAIL PROTECTED]> wrote 
>>>> in
>>>> message
>>>> news:[EMAIL PROTECTED]
>>>>> Are you closing the scanners when you're done? If not, those might be
>>>>> hanging around for a long time. I don't think we've built in the proper
>>>>> timeout logic to make that work by itself.
>>>>>
>>>>> -Bryan
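As a concrete illustration of closing a scanner through the REST interface, something like the following would issue the HTTP DELETE explicitly; the host, port, and scanner id are placeholders, and the path shape follows the HbaseRest wiki page:

// Hedged example: explicitly close a REST scanner with an HTTP DELETE.
import java.net.HttpURLConnection;
import java.net.URL;

public class CloseScannerExample {
  public static void main(String[] args) throws Exception {
    // Placeholder host/port and scanner id - substitute real values.
    URL url = new URL("http://localhost:60050/api/search_index/scanner/3977a5e4");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("DELETE");
    int code = conn.getResponseCode();  // expect 2xx once delete is hooked up
    conn.disconnect();
    System.out.println("DELETE returned HTTP " + code);
  }
}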
>>>>>
>>>>> On Dec 21, 2007, at 5:10 PM, Billy wrote:
>>>>>
>>>>>> I was thinking the same thing. I have been running REST outside of
>>>>>> the master on each server for about 5 hours now, using the master as
>>>>>> a backup if the local REST interface fails. You are right: I have
>>>>>> seen slightly faster processing times doing this vs. using just the
>>>>>> master.
>>>>>>
>>>>>> Seems the problem is not with the master itself; it looks like REST
>>>>>> is using up more and more memory. I am not sure, but I think it has
>>>>>> to do with inserts; maybe not, but the memory usage keeps going up.
>>>>>> I am running a scanner with 2 threads reading rows, processing the
>>>>>> data, and inserting it into a separate table to build an inverted
>>>>>> index.
>>>>>>
>>>>>> I will restart everything when this job is done and try doing just
>>>>>> inserts, to see whether it is the scanner or the inserts.
>>>>>>
>>>>>> The master is holding at about 75 MB, and the REST interfaces are up
>>>>>> to 400 MB and slowly climbing on the ones running the jobs.
>>>>>>
>>>>>> I am still testing; I will see what else I can come up with.
>>>>>>
>>>>>> Billy
>>>>>>
>>>>>>
>>>>>> "stack" <[EMAIL PROTECTED]> wrote in
>>>>>> message
>>>>>> news:[EMAIL PROTECTED]
>>>>>>> Hey Billy:
>>>>>>>
>>>>>>> The master itself should use little memory and, though it is not out
>>>>>>> of the realm of possibilities, it should not have a leak.
>>>>>>>
>>>>>>> Are you running with the default heap size?  You might want to
>>>>>>> give it
>>>>>>> more memory if you are (See
>>>>>>> http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3 for how).
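One quick way to confirm what heap a JVM actually got is a minimal check like the one below; this is purely illustrative and nothing HBase-specific:

// Illustrative: print the JVM's configured max heap, to confirm whether a
// process is running with the default size or the value you set.
public class HeapCheck {
  public static void main(String[] args) {
    long maxBytes = Runtime.getRuntime().maxMemory();
    System.out.println("max heap: " + (maxBytes / (1024 * 1024)) + " MB");
  }
}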
>>>>>>>
>>>>>>> If you are uploading everything via the REST server running on the
>>>>>>> master, the problem, as you speculate, could be in the REST servlet
>>>>>>> itself (though from a cursory glance it doesn't look like it should
>>>>>>> be holding on to anything).  You could try running the REST server
>>>>>>> independent of the master.  Grep for 'Starting the REST Server' in
>>>>>>> this page, http://wiki.apache.org/lucene-hadoop/Hbase/HbaseRest, for
>>>>>>> how (if you are only running one REST instance, your upload might go
>>>>>>> faster if you run multiple).
>>>>>>>
>>>>>>> St.Ack
>>>>>>>
>>>>>>>
>>>>>>> Billy wrote:
>>>>>>>> I forgot to say that after a restart the master only uses about
>>>>>>>> 70 MB of memory.
>>>>>>>>
>>>>>>>> Billy
>>>>>>>>
>>>>>>>> "Billy" <[EMAIL PROTECTED]>
>>>>>>>> wrote
>>>>>>>> in message news:[EMAIL PROTECTED]
>>>>>>>>
>>>>>>>>> I am not sure about this, but why does the master server use up so
>>>>>>>>> much memory? I have been running a script that has been inserting
>>>>>>>>> data into a table for a little over 24 hours, and the master
>>>>>>>>> crashed because of java.lang.OutOfMemoryError: Java heap space.
>>>>>>>>>
>>>>>>>>> So my question is why the master uses so much memory; at most it
>>>>>>>>> should store the -ROOT- and .META. tables in memory, plus the
>>>>>>>>> block-to-table mapping.
>>>>>>>>>
>>>>>>>>> Is it cache or a memory leak?
>>>>>>>>>
>>>>>>>>> I am using the REST interface, so could that be the reason?
>>>>>>>>>
>>>>>>>>> Going by the highest edit ids on all the region servers, I
>>>>>>>>> inserted about 51,932,760 edits, and the master ran out of memory
>>>>>>>>> with a heap of about 1 GB.
>>>>>>>>>
>>>>>>>>> The other side to this is that the data I inserted is only taking
>>>>>>>>> up 886.61 MB, and that's with dfs.replication set to 2, so half of
>>>>>>>>> that is only about 440 MB of data, compressed at the block level.
>>>>>>>>> From what I understand, the master should have low memory and CPU
>>>>>>>>> usage, and the namenode on Hadoop should be the memory hog, since
>>>>>>>>> it has to keep up with all the data about the blocks.