[ https://issues.apache.org/jira/browse/TINKERPOP-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stephen mallette closed TINKERPOP-1233.
---------------------------------------
    Resolution: Won't Do

I'm going to close this. It doesn't feel actionable to me. If we want to 
implement something here, we should create a specific issue for that 
particular thing.

> Gremlin-Benchmark wish list.
> ----------------------------
>
>                 Key: TINKERPOP-1233
>                 URL: https://issues.apache.org/jira/browse/TINKERPOP-1233
>             Project: TinkerPop
>          Issue Type: Improvement
>          Components: benchmark
>    Affects Versions: 3.2.0-incubating
>            Reporter: Marko A. Rodriguez
>            Priority: Major
>
> [~twilmes] has developed {{gremlin-benchmark}}, which is slated for 3.2.0 
> (TINKERPOP-1016). This is really good, as we can now ensure the Gremlin 
> traversal machine only speeds up with each release. Here is a collection of 
> things I would like to be able to do with {{gremlin-benchmark}}.
> ----
> *Benchmarks in the Strategy Tests*
> {code}
> // ensure that traversalA is at least 1.5 times faster than traversalB
> assertTrue(Benchmark.compare(traversalA, traversalB) >= 1.50d)
> {code}
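> A rough sketch of what such a {{compare}} could boil down to (names 
> hypothetical, and a real harness would want JMH-style warmup and forking; 
> suppliers are used because a traversal can only be iterated once):
> {code}
> import java.util.function.Supplier;
> import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
>
> public final class Benchmark {
>     // how many times faster a is than b (1.5 = a takes 2/3 the time of b)
>     public static double compare(final Supplier<Traversal<?, ?>> a,
>                                  final Supplier<Traversal<?, ?>> b) {
>         return (double) time(b) / (double) time(a);
>     }
>
>     private static long time(final Supplier<Traversal<?, ?>> traversal) {
>         for (int i = 0; i < 10; i++) traversal.get().iterate();   // JIT warmup
>         final long start = System.nanoTime();
>         for (int i = 0; i < 100; i++) traversal.get().iterate();  // measured runs
>         return System.nanoTime() - start;
>     }
> }
> {code}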
> With this, I can have an {{OptimizationStrategy}} applied to {{traversalA}} 
> and not to {{traversalB}} and prove via "mvn clean install" that the strategy 
> is in fact "worth it." I bet there are other good static methods we could 
> create. Hell, why not just have a {{BenchmarkAsserts}} that we can statically 
> import like JUnit's {{Assert}}. Then it's just:
> {code}
> assertFaster(traversalA, traversalB, 1.50d)
> assertSmaller(traversalA, traversalB) // memory usage or object creation?
> assertTime(traversal, 1000, TimeUnit.MILLISECONDS) // has to complete in 1 second?
> ... ?
> {code}
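> Building on the {{compare}} sketch above, the statically importable asserts 
> might be as simple as this (again hypothetical; {{assertSmaller}} is left out 
> because allocation is harder to measure than time):
> {code}
> import java.util.concurrent.TimeUnit;
> import java.util.function.Supplier;
> import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
> import static org.junit.Assert.assertTrue;
>
> public final class BenchmarkAsserts {
>     // fails unless a is at least `factor` times faster than b
>     public static void assertFaster(final Supplier<Traversal<?, ?>> a,
>                                     final Supplier<Traversal<?, ?>> b,
>                                     final double factor) {
>         final double ratio = Benchmark.compare(a, b);
>         assertTrue("measured only " + ratio + "x speedup", ratio >= factor);
>     }
>
>     // fails unless a single run completes within the given duration
>     public static void assertTime(final Supplier<Traversal<?, ?>> traversal,
>                                   final long duration, final TimeUnit unit) {
>         final long start = System.nanoTime();
>         traversal.get().iterate();
>         assertTrue(System.nanoTime() - start <= unit.toNanos(duration));
>     }
> }
> {code}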
> It's a little scary as not all computers are the same, but it would be nice to 
> know that we have tests for space and time costs.
> ----
> *Benchmarks saved locally over the course of a release*
> This is tricky, but it would be cool if local files (not committed to GitHub) 
> were created like this:
> {code}
> tinkerpop3/gremlin-benchmark/benchmarks/g_V_out_out_12:23:66UT23.txt
> {code}
> Then a test case could ensure that all newer runs of that benchmark are 
> faster than older ones. If it's, let's say, 10%+ slower, an {{Exception}} is 
> thrown and the test fails.
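> A sketch of that bookkeeping, assuming one timestamped file per run holding 
> the measured nanoseconds (all names hypothetical):
> {code}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> import java.util.Comparator;
> import java.util.Optional;
> import java.util.stream.Stream;
>
> public final class BenchmarkHistory {
>     // records this run and throws if it is more than `tolerance`
>     // (e.g. 0.10 for 10%) slower than the most recent recorded run
>     public static void record(final String name, final long nanos,
>                               final double tolerance) throws IOException {
>         final Path dir = Paths.get("gremlin-benchmark", "benchmarks");
>         Files.createDirectories(dir);
>         final Optional<Path> last;
>         try (Stream<Path> files = Files.list(dir)) {
>             // millisecond timestamps of equal length sort lexicographically
>             last = files.filter(f -> f.getFileName().toString().startsWith(name + "_"))
>                         .max(Comparator.comparing(f -> f.getFileName().toString()));
>         }
>         Files.write(dir.resolve(name + "_" + System.currentTimeMillis() + ".txt"),
>                     Long.toString(nanos).getBytes());
>         if (last.isPresent()) {
>             final long previous = Long.parseLong(
>                     new String(Files.readAllBytes(last.get())).trim());
>             if (nanos > previous * (1.0 + tolerance))
>                 throw new IllegalStateException(name + " regressed: "
>                         + nanos + "ns vs " + previous + "ns");
>         }
>     }
> }
> {code}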
> What else can we do? Can we know whether a certain area of code is faster? 
> For instance, strategy application or requirements aggregation? If we can 
> introspect like that, that would be stellar.
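> For the strategy-application case at least, that phase can already be timed 
> in isolation, roughly like this (assuming a {{GraphTraversalSource}} {{g}} and 
> no proper harness):
> {code}
> import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
>
> // time only the strategy-application phase of a traversal
> final Traversal.Admin<?, ?> traversal = g.V().out().out().asAdmin();
> final long start = System.nanoTime();
> traversal.applyStrategies();
> System.out.println("strategy application: " + (System.nanoTime() - start) + "ns");
> {code}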



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
