Glenn, I think Chaitanya's Grafana Dashboard Generator may solve this problem:

https://github.com/bhattchaitanya/Grafana-Dashboard-Generator/wiki



On 11/04/15 01:45, Glenn Caccia wrote:
Thinking about this more, you could use a dynamic rootMetricsPrefix, something
like:

jmeter.${__TestPlanName}.${__time}.

That could then be used across all scripts and would satisfy the basic
requirement from a storage perspective, but Grafana itself still can't easily
handle the requirement from a display perspective.  Since queries are
hard-coded into a graph, you'd be stuck either creating a new dashboard for
each test run or manually editing a dashboard after each run.  It would be a
mess to work with.
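As a concrete illustration of what a prefix like jmeter.${__TestPlanName}.${__time}. would expand to, here is a small sketch outside JMeter (the function name and the sanitization rule are assumptions for this example, not anything JMeter's own functions do):

```python
import re

def run_prefix(test_plan_name: str, start_ms: int) -> str:
    """Build a per-run Graphite prefix mirroring the suggested
    jmeter.${__TestPlanName}.${__time}. pattern (illustrative only)."""
    # Graphite treats '.' as a path separator, so strip dots and
    # whitespace from the plan name before embedding it in the path.
    safe_name = re.sub(r"[.\s]+", "_", test_plan_name.strip())
    return f"jmeter.{safe_name}.{start_ms}."

# Example: a plan named "Checkout Flow.jmx" started at a fixed epoch
prefix = run_prefix("Checkout Flow.jmx", 1428655500000)
```

Because the epoch timestamp differs per run, every run lands under its own metric subtree, which is what makes the storage side work while leaving the dashboard side awkward.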

From: Glenn Caccia <[email protected]>
To: JMeter Users List <[email protected]>
Sent: Friday, April 10, 2015 1:23 PM
Subject: Re: Thoughts on InfluxDB/Grafana integration
You could do that, but it would then require remembering to change the root
value for each new run, which in turn would require changing your dashboard
queries to pick up the new run.  I don't think that's a solution I would want
to maintain.  I would definitely use variations on the rootMetricsPrefix to
distinguish between test scripts, however.  The InfluxDB/Grafana solution is
great for real-time analysis, which is certainly important, but seems to fall
short on the need to easily compare runs.

From: Philippe Mouawad <[email protected]>
To: JMeter Users List <[email protected]>
Sent: Friday, April 10, 2015 11:54 AM
Subject: Re: Thoughts on InfluxDB/Grafana integration
Hi,
What about playing with rootMetricsPrefix to do that?

Regarding SQL, did you know that you can now easily build a JDBC backend to
store results in a database?  You could contribute this to core.
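To make the suggestion concrete, here is a minimal sketch of the kind of schema such a backend might write, with sqlite3 standing in for a real JDBC driver; every table and column name here is an assumption for illustration, not an existing JMeter schema:

```python
import sqlite3

# Illustrative schema only: column names are assumptions, not JMeter's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample_result (
        test_plan  TEXT NOT NULL,    -- which script produced the sample
        run_id     INTEGER NOT NULL, -- test start time, identifies the run
        label      TEXT NOT NULL,    -- sampler label
        elapsed_ms INTEGER NOT NULL, -- response time
        success    INTEGER NOT NULL  -- 1 = pass, 0 = fail
    )
""")

# A backend listener would insert one row per sample result it receives.
rows = [
    ("plan_b", 1428655500000, "login",  120, 1),
    ("plan_b", 1428655500000, "search", 340, 1),
    ("plan_b", 1428741900000, "login",   95, 1),
]
conn.executemany("INSERT INTO sample_result VALUES (?, ?, ?, ?, ?)", rows)

# Listing distinct run_ids then gives exactly the "prior runs" view
# that flat Graphite metric paths make awkward.
runs = [r[0] for r in conn.execute(
    "SELECT DISTINCT run_id FROM sample_result "
    "WHERE test_plan = ? ORDER BY run_id", ("plan_b",))]
```

With a run identifier as a first-class column, "show me all prior runs of this test plan" becomes a one-line query instead of a time-filter hunt.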


Regards



On Friday, April 10, 2015, Glenn Caccia <[email protected]> wrote:

   I've successfully installed InfluxDB and Grafana and did some basic
testing where I can now see results in Grafana.  I'm beginning to wonder
about the benefits of this system.  A while ago I had toyed around with the
idea of using Elasticsearch as a backend for JMeter test results and using
Kibana to view results.  I ultimately dropped the idea because of the
limitations of how data is structured.  I see the exact same issue with
InfluxDB and Grafana (either that, or I don't fully understand what can be
done in these tools).
What I want when viewing results is the ability to work with results in
terms of projects, test plans, and results from a particular test run.  For
example, I want to see results for project A, test plan B and compare
results from the prior run with the current run.  With the InfluxDB/Grafana
solution, there is no concept of a run.  If I run a test one day and then run
the same test the next day, I can't compare the results in the same view.  I
can certainly widen my time filter to see both inline (with a big gap in
between), or view one and then the other, but I can't stack them in separate
graphs and see them at the same time, or display them in the same graph.
Likewise, if I want to see what performance was like the last time a test was
run, and I don't know when that was, I have to do a bit of searching by
playing with the time filter.
A while ago I worked for a company that used SQL Server for a lot of their
data storage needs.  This gave me access to the SQL Server Report Builder
tool.  I was able to create a solution where JMeter results were loaded
into SQL Server and we had a report interface where you could choose your
project, choose your test plan and then see the dates/times for all prior
runs.  From this, you could choose which run(s) to view.  I don't have
access to tools like that with my current company, but I miss that kind of
ability to structure and access test results.  A similar approach
to storing and presenting results can be seen with Loadosophia.
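The report flow described above (choose a project, choose a test plan, pick runs, compare) can be sketched with a toy schema; all names and numbers here are invented for illustration, since the original SQL Server schema isn't shown in the thread:

```python
import sqlite3

# Hypothetical results table; the columns mirror the project /
# test plan / run hierarchy described above, nothing more.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE result (
        project    TEXT,
        test_plan  TEXT,
        run_start  INTEGER,  -- date/time identifying the run
        label      TEXT,
        elapsed_ms INTEGER
    )
""")
conn.executemany("INSERT INTO result VALUES (?, ?, ?, ?, ?)", [
    ("project_a", "plan_b", 20150409, "login", 100),
    ("project_a", "plan_b", 20150409, "login", 140),
    ("project_a", "plan_b", 20150410, "login",  90),
    ("project_a", "plan_b", 20150410, "login", 110),
])

# Prior run vs. current run for one project/test plan, side by side
# in a single query: average response time per run.
cmp = conn.execute("""
    SELECT run_start, AVG(elapsed_ms)
    FROM result
    WHERE project = ? AND test_plan = ? AND label = ?
    GROUP BY run_start ORDER BY run_start
""", ("project_a", "plan_b", "login")).fetchall()
```

This is the run-vs-run comparison that a time-series store without a run concept can't express directly.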
In short, it seems this new solution is primarily useful for analyzing
results from the current test run (which can already be done with existing
listeners) but falls short as a tool for comparing results across runs or
checking results from prior runs.  Am I missing something, or is that a fair
conclusion?




---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
