I forgot to say that once restarted, the master only uses about 70 MB of memory.

Billy

"Billy" <[EMAIL PROTECTED]> wrote in 
message news:[EMAIL PROTECTED]
> I'm not sure about this, but why does the master server use up so much
> memory? I've been running a script that has been inserting data into a
> table for a little over 24 hours, and the master crashed with
> java.lang.OutOfMemoryError: Java heap space.
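>
> In case it helps with a diagnosis, this is roughly how the heap is set in
> my conf/hbase-env.sh (a sketch of my setup; I'm assuming the stock
> variable name here, which may differ between versions):
>
>   # conf/hbase-env.sh
>   # The maximum amount of heap to use, in MB. Default is 1000.
>   export HBASE_HEAPSIZE=1000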
>
> So my question is: why does the master use so much memory? At most it
> should keep the -ROOT- and .META. tables in memory, plus the
> block-to-table mapping.
>
> Is it a cache or a memory leak?
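>
> (To check that myself, I suppose I could take a histogram of live objects
> on the master's heap with jmap; a sketch, assuming a JDK whose jmap
> supports -histo, and where <master-pid> stands in for the master's actual
> process id:)
>
>   jmap -histo:live <master-pid>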
>
> I am using the REST interface, so could that be the reason?
>
> Going by the highest edit ids on all the region servers, I inserted about
> 51,932,760 edits, and the master ran out of memory with a heap of about
> 1 GB.
>
> The other side of this is that the data I inserted is only taking up
> 886.61 MB, and that's with dfs.replication set to 2, so half of that is
> only about 443 MB of data compressed at the block level.
> From what I understand, the master should have low memory and CPU usage;
> the namenode on Hadoop should be the memory hog, since it has to keep
> track of all the block metadata.
>


