Gents, I'm working on something in the background that may well help here.

Right now we (Aconex) have open-sourced a performance library for Java 
called Parfait (ASL 2.0 licensed, see [1]). It is generic, but was initially 
targeted at writing data out in a format compatible with SGI's open-sourced 
Performance Co-Pilot (PCP) (see [2]).

PCP is designed for monitoring WTF goes on in large clusters (see [3], 
hopefully that's big enough for you...). It integrates application, OS and 
hardware metrics to give a holistic view of what is going on. It logs data 
into archives, allowing one to retrospectively go over an event and analyse 
it. One can then build inference rules to test theories, and run those rules 
over the archives to look back in time for similar events. The same rules 
can also be run against live data to trigger things like Nagios alarms. We 
use this extensively at Aconex; I could not possibly live without it. Hard 
data gets you to the bottom of problems quickly. Looking through all those 
log4j log lines would do my head in; I tip my hat to you for trying. PCP is 
totally cross-platform (Linux, Mac, Windows).

Parfait can poll JMX counters, or counters can be updated directly. I'm 
working on a MetricContext that exports all HBase and Hadoop JMX counters 
into Parfait. The goal is to have PCP visualize data more effectively for 
HBase/Hadoop clusters. For an example of the sort of visualization I'd love 
to have for HBase and Hadoop, see the simple working picture of a 3D 
visualisation at [4] below. That one is basic, but imagine a 3D vis of all 
the HBase region servers showing HBase-specific metrics, played back in 
real time, or retrospectively at any pace you want.
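
To make the polling side concrete, here is a minimal sketch of the kind of 
JMX read loop involved, using only the standard javax.management API. The 
ObjectName and attribute below are illustrative (the real names depend on 
the HBase/Hadoop version in use), and the hand-off into Parfait is reduced 
to a comment:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxPollSketch {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Illustrative region server statistics bean; the actual
            // ObjectName varies between HBase/Hadoop versions.
            ObjectName rs = new ObjectName(
                "hadoop:service=RegionServer,name=RegionServerStatistics");
            while (true) {
                Object requests = mbs.getAttribute(rs, "requests");
                // A real MetricContext would push this value into a Parfait
                // monitorable here, which Parfait then exports to PCP.
                System.out.println("requests = " + requests);
                Thread.sleep(1000);
            }
        }
    }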

I originally posted this to the Hadoop list back in September (see [5]), 
but no one seemed that interested, which is a bit odd. I had planned to 
make more progress on this, but the Mavenization got in the way.

At any rate, I'd like to discuss further what requirements you think you 
need for analysing these types of problems. The log4j logging is good 
(great for some types of basic analysis), but I think we can do better, and 
I think Parfait & PCP could really help you guys in production a LOT.

[1] Parfait - http://code.google.com/p/parfait/
[2] PCP - http://oss.sgi.com/projects/pcp/
[3] NASA's SGI Columbia Supercomputer - 
http://www.nas.nasa.gov/News/Images/Images/AC04-0208-9.jpg 
[4] Clusterviz - http://people.apache.org/~psmith/clustervis.png
[5] Original Hadoop mail - 
http://markmail.org/search/?q=3D%20Cluster%20Performance%20Visualization#query:3D%20Cluster%20Performance%20Visualization+page:1+mid:4t52nnla4snntwow+state:results

On 07/04/2010, at 5:41 AM, Lars George wrote:

> I agree with Jon here; parsing these files, especially without central
> logging, is bad. I tried Splunk, and that sort of worked as well for
> quickly scanning for exceptions. A problem was multiline stack traces
> (which they usually all are): they got mixed up when multiple servers
> sent events at the same time, and the Splunk data got all garbled.
> But something like that, yeah.
> 
> Maybe with the new Multiput style stuff the WAL is not such a big
> overhead anymore?
> 
> Lars
> 
> On Tue, Apr 6, 2010 at 7:12 PM, Jonathan Gray <jg...@facebook.com> wrote:
>> I like this idea.
>> 
>> Putting major cluster events in some form into ZK.  Could be used for
>> jobs as Todd says.  Can also be used as a cluster history report on the
>> web UI and such.  A higher-level historian.
>> 
>> I'm a fan of anything that moves us away from having to parse hundreds
>> or thousands of lines of logs to see what has happened.
>> 
>> JG
>> 
>>> -----Original Message-----
>>> From: Todd Lipcon [mailto:t...@cloudera.com]
>>> Sent: Tuesday, April 06, 2010 9:49 AM
>>> To: hbase-dev@hadoop.apache.org
>>> Subject: Re: Should HTable.put() return a Future?
>>> 
>>> On Tue, Apr 6, 2010 at 9:46 AM, Jean-Daniel Cryans
>>> <jdcry...@apache.org> wrote:
>>> 
>>>> Yes it is, you will be missing a RS ;)
>>>> 
>>>> 
>>> How do you detect this, though?
>>> 
>>> It might be useful to add a counter in ZK for region server crashes.
>>> If the master ever notices that a RS goes down, it increments it. Then
>>> we can check the before/after for a job and know when we might have
>>> lost some data.
>>> 
>>> -Todd
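
To make Todd's counter idea concrete, here's a rough sketch using the plain 
ZooKeeper client API. The znode path is hypothetical and error handling is 
stripped down; the conditional setData on the znode version is what keeps 
concurrent increments from stomping on each other:

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class RsCrashCounter {
        // Hypothetical znode holding the crash count as a decimal string.
        private static final String PATH = "/hbase/rsCrashCount";
        private final ZooKeeper zk;

        public RsCrashCounter(ZooKeeper zk) { this.zk = zk; }

        // Master side: bump the counter when it notices a RS go down.
        public void increment() throws Exception {
            while (true) {
                Stat stat = new Stat();
                long count = Long.parseLong(
                    new String(zk.getData(PATH, false, stat)));
                try {
                    // Conditional write on the znode version; fails if
                    // someone else updated the counter in between.
                    zk.setData(PATH, Long.toString(count + 1).getBytes(),
                        stat.getVersion());
                    return;
                } catch (KeeperException.BadVersionException e) {
                    // Lost the race; re-read and retry.
                }
            }
        }

        // Job side: read before the job and after it, then compare.
        public long read() throws Exception {
            return Long.parseLong(new String(zk.getData(PATH, false, null)));
        }
    }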
>>> 
>>> 
>>>> General rule when uploading without WAL is if there's a failure, the
>>>> job is screwed and that's the tradeoff for speed.
>>>> 
>>>> J-D
>>>> 
>>>>> On Tue, Apr 6, 2010 at 9:36 AM, Todd Lipcon <t...@cloudera.com>
>>>>> wrote:
>>>>> On Tue, Apr 6, 2010 at 9:31 AM, Jean-Daniel Cryans
>>>>> <jdcry...@apache.org> wrote:
>>>>> 
>>>>>> The issue isn't with the write buffer here, it's the WAL. Your edits
>>>>>> are in the MemStore so as far as your clients can tell, the data is
>>>>>> all persisted. In this case you would need to know when all the
>>>>>> memstores that contain your data are flushed... Best practice when
>>>>>> turning off WAL is force flushing the tables after the job is done,
>>>>>> else you can't guarantee durability for the last edits.
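
For what J-D describes, a force flush can be requested through the admin 
API. A minimal sketch against the 0.20-era client (the table name is 
illustrative, and note the flush request itself is asynchronous, so you may 
still want to wait or verify afterwards):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class FlushAfterJob {
        public static void main(String[] args) throws Exception {
            HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
            // Ask the regions of the table to flush their MemStores to
            // disk, so edits written with the WAL off are persisted.
            admin.flush("mytable");
        }
    }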
>>>>>> 
>>>>>> 
>>>>> You still can't guarantee durability for any of the edits, since a
>>>>> failure in the middle of your job is undetectable :)
>>>>> 
>>>>> -Todd
>>>>> 
>>>>> 
>>>>>> J-D
>>>>>> 
>>>>>> On Tue, Apr 6, 2010 at 4:02 AM, Lars George <lars.geo...@gmail.com>
>>>>>> wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I have an issue where I do a bulk import: since the WAL is off and
>>>>>>> the default write buffer is used (TableOutputFormat), I am running
>>>>>>> into situations where the MR job completes successfully but not all
>>>>>>> data is actually restored. The issue seems to be a failure on the
>>>>>>> RS side, as it cannot flush the write buffers because the MR job
>>>>>>> overloads the cluster (usually the RS hosting .META. is the
>>>>>>> breaking point) or causes the underlying DFS to go slow, and that
>>>>>>> has repercussions all the way up to the RSs.
>>>>>>> 
>>>>>>> My question is: as with any other asynchronous IO, would it make
>>>>>>> sense to return a Future from put() that would help in checking the
>>>>>>> status of the actual server-side async flush operation? Or am I
>>>>>>> misguided here? Please advise.
>>>>>>> 
>>>>>>> Lars
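
On the Future question: the client-side half is easy to sketch by pushing 
puts through an executor, but note it only tells you the put call itself 
completed, not that the server-side memstore was flushed, which is the part 
that matters in this thread. A rough illustration (the wrapper class is 
made up, and a single-threaded executor is used because HTable is not 
thread-safe):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;

    public class AsyncPutSketch {
        private final ExecutorService pool =
            Executors.newSingleThreadExecutor();
        private final HTable table;

        public AsyncPutSketch(HTable table) { this.table = table; }

        public Future<Void> put(final Put put) {
            return pool.submit(new Callable<Void>() {
                public Void call() throws Exception {
                    table.put(put);  // done when the client call returns
                    return null;
                }
            });
        }
    }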
>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Todd Lipcon
>>>>> Software Engineer, Cloudera
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>> 
