On Sun, Jan 18, 2009 at 11:54:14PM +0800, Spike Spiegel wrote:
> On Sun, Jan 18, 2009 at 10:27 PM, Carlo Marcelo Arenas Belon
> <care...@sajinet.com.pe> wrote:
> > http://bugzilla.ganglia.info/cgi-bin/bugzilla/attachment.cgi?id=188&action=view
> > that should apply cleanly to 3.1.1, 3.0.7 and 2.5.7
> 
> Looks fine to me, although I'd argue that without the other changes
> supporting the multi-patch becomes a risk, for the reasons I already
> explained.

since the "multi-patch" is not yet committed, it is not an issue yet,
and considering that the buffer overflow fix will need to be backported as
far as 2.5.7 (which will never have any features added, as it is already out
of maintenance), decoupling the changes this way is recommended.

> I'd feel much better if we agreed on a solution to
> propagate errors back to the client and implement that alongside with
> not returning the entire tree.

agree, but that is to be done in the context of getting the "multi-patch"
committed and backported, not in fixing this buffer overflow in the
interactive port, which is what BUG223 is about.

> >> Is that what you meant when you said "banging to resulting binary"?
> >
> > partially; scripts would be able to help only after the testing
> > parameters had been defined, and at least for this test might be limited
> > by the fact that the interactive port is mainly used by the web
> > frontend.
> 
> I'm not sure I follow. In this case especially, but also in general,
> as far as testing gmetad goes you "only" need to make sure that:
> 1) it can pull data
> 2) it can store data
> 3) it can serve data

  3a) non interactively (usually through TCP/8651)
  3b) interactively (usually through TCP/8652)

4) it can summarize data (this requires at least another gmetad)
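scripts 2 and 3b above could start from something like this minimal
sketch, which just drains whatever gmetad sends on a query port (the
hostname, the stock ports 8651/8652 and the timeout are placeholders to
adjust to the gmetad.conf under test):

```python
import socket

def read_port(host, port, request=None):
    """Read everything gmetad returns on one of its query ports.

    Port 8651 (3a) dumps the whole tree on connect; port 8652 (3b)
    expects a request path first.  Values here are the stock defaults
    and should match the gmetad.conf under test.
    """
    with socket.create_connection((host, port), timeout=5) as s:
        if request is not None:
            # interactive port: send the path, newline-terminated
            s.sendall((request + "\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. read_port("127.0.0.1", 8651) for 3a,
#      read_port("127.0.0.1", 8652, "/cluster/host") for 3b
```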

> with a gmond running that you can point gmetad at, you need 3
> scripts to test the above:
> 1) one script to generate udp traffic to send to gmond

gmetric could help here
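for instance, a small wrapper around gmetric along these lines
(assuming gmetric is in the PATH with its usual -n/-v/-t/-u options and
its gmond.conf points at the gmond under test; the metric list is only
an illustration):

```python
import random
import subprocess

# (name, gmetric type, units) -- illustrative entries only; use
# whatever metrics the cluster under test actually reports
METRICS = [("cpu_user", "float", "%"), ("load_one", "float", "")]

def gmetric_cmd(name, value, vtype, units):
    """Build the gmetric invocation for one synthetic sample."""
    return ["gmetric", "-n", name, "-v", str(value),
            "-t", vtype, "-u", units]

def send_random_samples():
    """Send one random sample per metric and remember what was sent,
    so the reader scripts (2 and 3) can check it comes back out."""
    sent = {}
    for name, vtype, units in METRICS:
        value = round(random.uniform(0, 100), 2)
        subprocess.check_call(gmetric_cmd(name, value, vtype, units))
        sent[name] = value
    return sent
```

keeping the sent values around is what makes the round trip testable:
the same dictionary can be compared against what scripts 2 and 3 read
back.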

> 2) one script to read the data off gmetad
> 3) one script to read the data off of the rrd files
> 
> the script that generates the synthetic data can be completely random
> as long as it has a list of valid elements and value ranges, which you
> can also use to test for errors by futzing with those parameters. The
> same auto-generated data should appear in the output of the second
> script. Given that you control the sampling and polling rate, I see
> this as quite feasible and a good integration test. Load testing would
> be different, as with lots of rapid reads and writes it would be harder
> to ensure you're seeing what you're supposed to be seeing.
> 
> am I missing something obvious?

for this problem you need only the scripts from 3b, but you first need to
define which cases you are looking for and which are considered valid, so
that a script can help validate them. From what I found while trying some
fuzzing, we still have a problem (probably introduced with the buffer
overflow patch) when the request is too long (over 2048 bytes), as shown
by:

  $ echo "/`python -c \"print \\"%s/%s/%s\\" % ('a'*1700,'b'*300,'c'*48)\"`" \
      | netcat 127.0.0.1 8652
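a scripted version of that probe makes it easy to bisect the exact
length at which the interactive port starts misbehaving (the host, port
and candidate lengths below are placeholders for the setup under test):

```python
import socket

def probe(host, port, path_len):
    """Send a request path of path_len bytes to the interactive port
    and return the reply length, or -1 if the connection failed
    (a crash or hang shows up as a reset or a timeout)."""
    request = "/" + "a" * path_len + "\n"
    try:
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(request.encode())
            reply = b""
            while True:
                data = s.recv(4096)
                if not data:
                    break
                reply += data
        return len(reply)
    except OSError:
        return -1

# try lengths straddling the 2048-byte request buffer, e.g.:
# for n in (512, 2040, 2048, 2049, 4096):
#     print(n, probe("127.0.0.1", 8652, n))
```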

Carlo

_______________________________________________
Ganglia-developers mailing list
Ganglia-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-developers
