On Thu, Sep 15, 2016 at 12:33 PM, Sanne Grinovero <sa...@infinispan.org>
wrote:

> I was actually planning to start a similar topic, but from the point of
> view of user's testing needs.
>
> I've recently created Hibernate OGM support for Hot Rod, and it wasn't as
> easy to test as the other NoSQL databases; luckily I have some knowledge and
> contacts in Infinispan ;) but I had to develop several helpers and refine
> the approach to testing over multiple iterations.
>
> I ended up developing a JUnit rule - handy for individual test runs in the
> IDE - together with a Maven lifecycle extension and an Arquillian
> extension, which I needed to run both the Hot Rod server and a WildFly
> instance to host my client app.
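>
> In case it's useful as a reference, the rule essentially boils down to
> something like this (a minimal sketch, not my exact code: the port and
> configuration values are just examples):
>
>    import org.infinispan.manager.DefaultCacheManager;
>    import org.infinispan.manager.EmbeddedCacheManager;
>    import org.infinispan.server.hotrod.HotRodServer;
>    import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;
>    import org.junit.rules.ExternalResource;
>
>    /** JUnit rule starting an in-process Hot Rod server around a test. */
>    public class HotRodServerRule extends ExternalResource {
>
>       private EmbeddedCacheManager cacheManager;
>       private HotRodServer server;
>
>       @Override
>       protected void before() {
>          // Embedded cache manager backing the server
>          cacheManager = new DefaultCacheManager();
>          server = new HotRodServer();
>          // Example port; real tests should pick a free one
>          server.start(new HotRodServerConfigurationBuilder()
>                .host("127.0.0.1").port(11222).build(), cacheManager);
>       }
>
>       @Override
>       protected void after() {
>          server.stop();
>          cacheManager.stop();
>       }
>
>       public HotRodServer getServer() {
>          return server;
>       }
>    }
>
> Used as a @ClassRule it gives each test class a fresh in-process server,
> which is what makes individual runs from the IDE painless.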
>
> At some point I was also in trouble with conflicting dependencies, so I
> considered making a Maven plugin to manage the server lifecycle as a proper
> integration-test phase. I ultimately didn't, as I found an easier solution,
> but it would be great if Infinispan could provide such helpers to end users
> too.. Forking the ANT scripts from the Infinispan project to assemble and
> start my own server (as you do..) seems quite cumbersome for users ;)
>
> Especially since the server is not even available via Maven coordinates.
>
The server is available at [1]

[1]
http://central.maven.org/maven2/org/infinispan/server/infinispan-server-build/9.0.0.Alpha4/



> I'm of course happy to contribute my battle-tested test helpers to
> Infinispan, but they are meant for JUnit users.
>
> Finally, compared to developing OGM integrations for other NoSQL stores,
> it's really hard work when there is no "viewer" of the cache content.
>
> We need some kind of interactive console to explore the stored data; I felt
> like I was driving blind, developing against a black box: when something
> doesn't work as expected it's challenging to figure out whether the bug is
> in the storage method or in the reading method, or maybe the encoding isn't
> quite right, or it's the query options being used.. sometimes it's the flags
> or the configuration properties (hell, I've been swearing a lot at some of
> these flags!)
>
> Thanks,
> Sanne
>
> On 15 Sep 2016 11:07, "Tristan Tarrant" <ttarr...@redhat.com> wrote:
>
>> Recently I've had a chat with Galder, Will and Vittorio about how we
>> test the Hot Rod server module and the various clients. We also
>> discussed some of this in the past, but we now need to move forward with
>> a better strategy.
>>
>> First up is the Hot Rod server module testsuite: it is the only part of
>> the code which still uses Scala. Will has a partial port of it to Java,
>> but we're wondering if it is worth completing that work, seeing that
>> most of the tests in that testsuite, in particular those related to the
>> protocol itself, are actually duplicated by the Java Hot Rod client's
>> testsuite, which also happens to be our reference implementation of a
>> client and is much more extensive.
>> The only downside of removing it is that verification would require
>> running the client testsuite, instead of being self-contained.
>>
>> Next up is how we test clients.
>>
>> The Java client, partially described above, runs all of the tests
>> against ad-hoc embedded servers. Some of these tests, in particular
>> those related to topology, start and stop new servers on the fly.
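>>
>> For illustration, a test in that suite essentially boils down to the
>> following (a simplified sketch: the real tests go through shared TestNG
>> base classes, and host/port values here are just examples):
>>
>>    import org.infinispan.client.hotrod.RemoteCache;
>>    import org.infinispan.client.hotrod.RemoteCacheManager;
>>    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>>    import org.infinispan.manager.DefaultCacheManager;
>>    import org.infinispan.manager.EmbeddedCacheManager;
>>    import org.infinispan.server.hotrod.HotRodServer;
>>    import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;
>>
>>    public class AdHocServerExample {
>>       public static void main(String[] args) {
>>          // Ad-hoc embedded server; no server distribution involved
>>          EmbeddedCacheManager cm = new DefaultCacheManager();
>>          HotRodServer server = new HotRodServer();
>>          server.start(new HotRodServerConfigurationBuilder()
>>                .host("127.0.0.1").port(11222).build(), cm);
>>
>>          // The client under test points at the embedded server
>>          RemoteCacheManager rcm = new RemoteCacheManager(new ConfigurationBuilder()
>>                .addServer().host("127.0.0.1").port(11222).build());
>>          RemoteCache<String, String> cache = rcm.getCache();
>>          cache.put("k", "v");
>>          assert "v".equals(cache.get("k"));
>>
>>          rcm.stop();
>>          server.stop();
>>          cm.stop();
>>       }
>>    }
>>
>> The topology tests follow the same pattern but start and stop additional
>> HotRodServer instances mid-test to trigger cluster view changes.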
>>
>> The server integration testsuite performs yet another set of tests, some
>> of which overlap with the above, but using the actual full-blown server. It
>> doesn't test for topology changes.
>>
>> The C++ client wraps the native client in a Java wrapper generated by
>> SWIG and runs the Java client testsuite. It then checks against a
>> blacklist of known failures. It also has a small number of native tests
>> which use the server distribution.
>>
>> The Node.js client has its own home-grown testsuite which also uses the
>> server distribution.
>>
>> Duplication aside (in some cases it is unavoidable), it is impossible
>> to confidently say that each client is properly tested.
>>
>> Since complete unification is impossible because of the different
>> testing harnesses used by the various platforms/languages, I propose the
>> following:
>>
>> - we identify and group the tests depending on their scope (basic
>> protocol ops, bulk ops, topology/failover, security, etc). A client
>> which implements the functionality of a group MUST pass all of the tests
>> in that group with NO exceptions
>> - we assign a unique identifier to each group/test combination (e.g.
>> HR.BASIC.PUT, HR.BASIC.PUT_FLAGS_SKIP_LOAD, etc). These should be
>> collected in a "test book" (some kind of structured file) for comparison
>> with client test runs
>> - we refactor the Java client testsuite according to the above grouping /
>> naming strategy, so that testsuites which use the wrapping approach
>> (i.e. C++ with SWIG) can consume it by directly specifying the supported
>> groups (see the sketch after this list)
>> - other clients get reorganized so that they support the above grouping
>>
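>> To make the last two points concrete, the group/test identifiers could
>> map onto TestNG groups along these lines (all names purely illustrative,
>> nothing is final; the embedded-server wiring is the same as in the
>> sketch above):
>>
>>    import static org.testng.Assert.assertEquals;
>>
>>    import org.infinispan.client.hotrod.Flag;
>>    import org.infinispan.client.hotrod.RemoteCache;
>>    import org.infinispan.client.hotrod.RemoteCacheManager;
>>    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>>    import org.infinispan.manager.DefaultCacheManager;
>>    import org.infinispan.manager.EmbeddedCacheManager;
>>    import org.infinispan.server.hotrod.HotRodServer;
>>    import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;
>>    import org.testng.annotations.AfterClass;
>>    import org.testng.annotations.BeforeClass;
>>    import org.testng.annotations.Test;
>>
>>    public class BasicOpsTest {
>>
>>       private EmbeddedCacheManager cm;
>>       private HotRodServer server;
>>       private RemoteCacheManager rcm;
>>       private RemoteCache<String, String> cache;
>>
>>       @BeforeClass
>>       public void setUp() {
>>          cm = new DefaultCacheManager();
>>          server = new HotRodServer();
>>          server.start(new HotRodServerConfigurationBuilder()
>>                .host("127.0.0.1").port(11222).build(), cm);
>>          rcm = new RemoteCacheManager(new ConfigurationBuilder()
>>                .addServer().host("127.0.0.1").port(11222).build());
>>          cache = rcm.getCache();
>>       }
>>
>>       @AfterClass
>>       public void tearDown() {
>>          rcm.stop();
>>          server.stop();
>>          cm.stop();
>>       }
>>
>>       // Test book ID HR.BASIC.PUT
>>       @Test(groups = "HR.BASIC")
>>       public void HR_BASIC_PUT() {
>>          cache.put("k", "v");
>>          assertEquals(cache.get("k"), "v");
>>       }
>>
>>       // Test book ID HR.BASIC.PUT_FLAGS_SKIP_LOAD
>>       @Test(groups = "HR.BASIC")
>>       public void HR_BASIC_PUT_FLAGS_SKIP_LOAD() {
>>          cache.withFlags(Flag.SKIP_CACHE_LOAD).put("k", "v");
>>          assertEquals(cache.get("k"), "v");
>>       }
>>    }
>>
>> A wrapping testsuite (e.g. the C++ client via SWIG) would then declare
>> which groups it supports and run exactly those, while the "test book"
>> simply maps each method name to its group.
>>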
>> I understand this is quite some work, but the current situation isn't
>> really sustainable.
>>
>> Let me know what your thoughts are
>>
>>
>> Tristan
>> --
>> Tristan Tarrant
>> Infinispan Lead
>> JBoss, a division of Red Hat
>>
>
>
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
