I applied the patch to my trunk version, revision 605657,

but here's what I am getting:

Called (asking for the results: column from my search_index table):
PUT /api/search_index/scanner?column=results:

Returned location:
/api/search_index/scanner/3977a5e4

Called:
DELETE /api/search_index/scanner/3977a5e4

Returned:
HTTP/1.1 500 3
Date: Sun, 30 Dec 2007 04:36:00 GMT
Server: Jetty/5.1.4 (Linux/2.6.9-67.0.1.ELsmp i386 java/1.5.0_12
Connection: close
Content-Type: text/html
Content-Length: 1230

<html>
<head>
<title>Error 500 3</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2><pre>3</pre>
<p>RequestURI=/api/search_index/scanner/3977a5e4</p>
<p><i><small><a href="http://jetty.mortbay.org">Powered by 
Jetty://</a></small></i></p>
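
For reference, the call sequence above can be reproduced with a small 
standalone Java client. A minimal sketch (the localhost:60050 address and 
the plain HttpURLConnection usage are assumptions; only the 
/api/search_index/scanner paths come from the session above):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: open a scanner on the results: column family, then
// DELETE the scanner resource named by the Location header.
public class ScannerDeleteCheck {
  public static void main(String[] args) throws IOException {
    String base = "http://localhost:60050";   // assumed REST server address

    // PUT /api/search_index/scanner?column=results:  (opens a scanner)
    HttpURLConnection put = (HttpURLConnection)
        new URL(base + "/api/search_index/scanner?column=results:").openConnection();
    put.setRequestMethod("PUT");
    put.setDoOutput(true);
    put.getOutputStream().close();            // empty request body
    String scannerUri = put.getHeaderField("Location");
    System.out.println("opened scanner: " + scannerUri
        + " (" + put.getResponseCode() + ")");
    put.disconnect();

    // DELETE the returned scanner, e.g. /api/search_index/scanner/3977a5e4
    URL delUrl = scannerUri.startsWith("http")
        ? new URL(scannerUri) : new URL(base + scannerUri);
    HttpURLConnection del = (HttpURLConnection) delUrl.openConnection();
    del.setRequestMethod("DELETE");
    System.out.println("DELETE returned " + del.getResponseCode());
    del.disconnect();
  }
}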


Billy


"Bryan Duxbury" <[EMAIL PROTECTED]> wrote in 
message news:[EMAIL PROTECTED]
> I've created an issue and submitted a patch to fix the problem.
> Billy, can you download the patch and check to see if it works alright?
>
> https://issues.apache.org/jira/browse/HADOOP-2504
>
> -Bryan
>
> On Dec 29, 2007, at 3:36 PM, Billy wrote:
>
>> I checked and added the delete option to my code for the scanner,
>> based on the API from the wiki, but it looks like it's not working at
>> this time, based on the code and the response I got from the REST
>> interface. I get a "Not hooked back up yet" response. Any idea on when
>> this will be fixed?
>>
>> Thanks
>>
>> src/contrib/hbase/src/java/org/apache/hadoop/hbase/rest/ScannerHandler.java
>>
>> public void doDelete(HttpServletRequest request,
>>     HttpServletResponse response, String[] pathSegments)
>>     throws ServletException, IOException {
>>   doMethodNotAllowed(response, "Not hooked back up yet!");
>> }
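
(Presumably the HADOOP-2504 patch wires this back up by looking the scanner 
up by the id in the URL and closing it. Below is purely a hypothetical 
sketch of that idea, not the actual patch; the scanners map, the 
ScannerRecord type, and the pathSegments index are all assumptions:)

// Hypothetical sketch only -- not the actual HADOOP-2504 patch. Assumes the
// handler keeps a registry of open scanners keyed by the id in the URL.
public void doDelete(HttpServletRequest request,
    HttpServletResponse response, String[] pathSegments)
    throws ServletException, IOException {
  String scannerId = pathSegments[2];            // e.g. "3977a5e4" (index assumed)
  ScannerRecord sr = scanners.remove(scannerId); // assumed scanner registry
  if (sr == null) {
    response.sendError(HttpServletResponse.SC_NOT_FOUND);
    return;
  }
  sr.getScanner().close();                       // release the server-side scanner
  response.setStatus(HttpServletResponse.SC_OK);
}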
>>
>>
>> "Bryan Duxbury" <[EMAIL PROTECTED]> wrote in
>> message 
>> news:[EMAIL PROTECTED]
>>> Are you closing the scanners when you're done? If not, those might be
>>> hanging around for a long time. I don't think we've built in the
>>> proper
>>> timeout logic to make that work by itself.
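
(A minimal sketch of that cleanup pattern over the REST interface: whatever 
happens while reading rows, issue a DELETE on the scanner URI from a finally 
block. The ScannerCleanup class name and the localhost:60050 address are 
assumptions:)

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: best-effort close of a REST scanner, meant to be called from a
// finally block after row processing is done (or has failed).
public final class ScannerCleanup {
  public static void closeScanner(String scannerUri) {
    try {
      HttpURLConnection del = (HttpURLConnection)
          new URL("http://localhost:60050" + scannerUri).openConnection();
      del.setRequestMethod("DELETE");
      del.getResponseCode();     // actually sends the DELETE
      del.disconnect();
    } catch (Exception e) {
      e.printStackTrace();       // cleanup should not mask the real error
    }
  }
}

Usage would be along the lines of: try { readRows(scannerUri); } finally { 
ScannerCleanup.closeScanner(scannerUri); }, where readRows is whatever 
issues the GETs against the scanner.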
>>>
>>> -Bryan
>>>
>>> On Dec 21, 2007, at 5:10 PM, Billy wrote:
>>>
>>>> I was thinking the same thing. I've been running REST outside of the
>>>> master on each server for about 5 hours now, with the master as a
>>>> backup if the local REST interface fails. You are right, I've seen a
>>>> little faster processing time from doing this vs. using just the
>>>> master.
>>>>
>>>> Seems the problem is not with the master itself; it looks like REST
>>>> is using up more and more memory. I'm not sure, but I think it has
>>>> to do with inserts. Maybe not, but the memory usage is going up. I
>>>> am running a scanner with 2 threads reading rows, processing the
>>>> data, and inserting it into a separate table, building an inverted
>>>> index.
>>>>
>>>> I will restart everything when this job is done and try doing just
>>>> inserts, to see if it's the scanner or the inserts.
>>>>
>>>> The master is holding at about 75 MB, and the REST interfaces are up
>>>> to 400 MB and slowly going up on the ones running the jobs.
>>>>
>>>> I am still testing; I will see what else I can come up with.
>>>>
>>>> Billy
>>>>
>>>>
>>>> "stack" <[EMAIL PROTECTED]> wrote in 
>>>> message
>>>> news:[EMAIL PROTECTED]
>>>>> Hey Billy:
>>>>>
>>>>> The master itself should use little memory, and though it is not
>>>>> out of the realm of possibilities, it should not have a leak.
>>>>>
>>>>> Are you running with the default heap size?  You might want to
>>>>> give it
>>>>> more memory if you are (See
>>>>> http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3 for how).
>>>>>
>>>>> If you are uploading everything via the REST server running on the
>>>>> master, the problem, as you speculate, could be in the REST servlet
>>>>> itself (though from a cursory glance it doesn't look like it should
>>>>> be holding on to anything).  You could try running the REST server
>>>>> independent of the master.  Grep for 'Starting the REST Server' in
>>>>> this page, http://wiki.apache.org/lucene-hadoop/Hbase/HbaseRest,
>>>>> for how (if you are only running one REST instance, your upload
>>>>> might also go faster if you run multiple).
>>>>>
>>>>> St.Ack
>>>>>
>>>>>
>>>>> Billy wrote:
>>>>>> I forgot to say that once restarted, the master only uses about
>>>>>> 70 MB of memory.
>>>>>>
>>>>>> Billy
>>>>>>
>>>>>> "Billy" <[EMAIL PROTECTED]> 
>>>>>> wrote
>>>>>> in message news:[EMAIL PROTECTED]
>>>>>>
>>>>>>> I'm not sure about this, but why does the master server use up so
>>>>>>> much memory?  I've been running a script that has been inserting
>>>>>>> data into a table for a little over 24 hours, and the master
>>>>>>> crashed because of java.lang.OutOfMemoryError: Java heap space.
>>>>>>>
>>>>>>> So my question is: why does the master use up so much memory?  At
>>>>>>> most it should store the -ROOT- and .META. tables in memory, plus
>>>>>>> the block-to-table mapping.
>>>>>>>
>>>>>>> Is it cache or a memory leak?
>>>>>>>
>>>>>>> I am using the REST interface, so could that be the reason?
>>>>>>>
>>>>>>> Going by the high edit ids on all the region servers, I inserted
>>>>>>> about 51,932,760 edits, and the master ran out of memory with a
>>>>>>> heap of about 1 GB.
>>>>>>>
>>>>>>> The other side to this is that the data I inserted is only taking
>>>>>>> up 886.61 MB, and that's with dfs.replication set to 2, so half of
>>>>>>> that is only about 440 MB of data, compressed at the block level.
>>>>>>>
>>>>>>> From what I understand, the master should have lower memory and
>>>>>>> CPU usage, and the namenode on Hadoop should be the memory hog,
>>>>>>> since it has to keep up with all the data about the blocks.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
> 


