On Mon, May 8, 2017 at 2:32 PM, Galder Zamarreño <gal...@redhat.com> wrote:
> Btw, thanks Anna for working on this!
>
> I've had a look at the list and I have some questions:
>
> * HotRodAsyncReplicationTest: I don't think it should be a client TCK test. 
> There's nothing the client does differently compared to executing against a 
> sync repl cache. If anything, it's a server TCK test, since it verifies that a 
> put sent by a HR client gets replicated. The same applies to all of the 
> local vs REPL vs DIST tests.
>
> * LockingTest: same story, this is a client+server integration test, I don't 
> think it's a client TCK test. If anything, it's a server TCK test. It 
> verifies that if a client sends a put, the entry is locked.
>
> * MixedExpiry*Test: it's dependent on the server configuration, not really a 
> client TCK test IMO. I think the only client TCK tests that deal with expiry 
> should only verify that the entry is expirable if the client decides to make 
> it expirable.
>

I think they should be included, because this is part of the HotRod
wire specification:

* 0x0002 = use cache-level configured default lifespan
* 0x0004 = use cache-level configured default max idle
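To make that concrete, here is a minimal sketch of how a client would combine these bits in a request header's flags field. Only the hex values come from the wire specification; the constant names are mine:

```java
// Sketch of the expiration flag bits quoted above. Only the hex values
// come from the HotRod wire specification; the constant names are
// illustrative, not real client API.
public class ExpirationFlags {
    public static final int USE_DEFAULT_LIFESPAN = 0x0002;
    public static final int USE_DEFAULT_MAX_IDLE = 0x0004;

    public static void main(String[] args) {
        // A client asking the server to apply the cache-level defaults
        // sets both bits in the request header's flags field.
        int flags = USE_DEFAULT_LIFESPAN | USE_DEFAULT_MAX_IDLE;
        System.out.println("flags = 0x" + Integer.toHexString(flags)); // prints "flags = 0x6"
    }
}
```

So a TCK test for these flags is exercising behaviour the wire protocol itself defines, not just server configuration.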

> * ClientListenerRemoveOnStopTest: Not sure this is a client TCK test. Yeah, 
> it verifies that the client removes its listeners on stop, but it's not a Hot 
> Rod protocol TCK test. Going back to what Radim said, how are you going to 
> verify each client does this? What we can easily verify for all clients is 
> that they send the commands to remove their listeners to the server. Maybe for 
> these and the client-specific logic tests below, as Martin suggested, we 
> go with the approach of just verifying that such tests exist.
>
> * Protobuf marshaller tests: client specific and testing client-side 
> marshalling logic. Same reasons above.
>
> * Near caching tests: client specific and testing client-side near caching 
> logic. Same issues above.
>
> * Topology change tests: I consider these TCK tests because you could think that 
> if the server sends a new topology, the client's next command should have the 
> ID of this topology in its header.
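A client-side check for that behaviour could look roughly like the sketch below. The class and method names are hypothetical, not real client API; the point is only that the topology ID the server last sent must be echoed in the next request header:

```java
// Hypothetical sketch of the behaviour described above: the client records
// the topology ID carried by a server response and writes it into the
// header of its next request. Names are illustrative, not real client API.
public class TopologyTracker {
    // -1 is a placeholder for "no topology received yet" (an assumption).
    private int topologyId = -1;

    // Called when a response carries a new topology from the server.
    public void onTopologyUpdate(int newTopologyId) {
        this.topologyId = newTopologyId;
    }

    // The value a conforming client should put into the topology ID
    // field of its next request header.
    public int nextRequestTopologyId() {
        return topologyId;
    }
}
```

A TCK test could then assert, from the server side, that the header of the command following a topology update carries the new ID.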
>
> * Failover/Retry tests: client specific and testing client-side retry logic. 
> Same issues above, how do you verify it works across the board for all 
> clients?
>
> * Socket timeout tests: again these are client specific...
>
> I think in general it'd be a good idea to try to verify somehow most of the 
> TCK via some server-side logic, as Radim hinted, and where that's not 
> possible, fall back to just verifying that the client has tests to cover certain 
> scenarios.

+1

Dan

>
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
>
>> On 11 Apr 2017, at 14:33, Martin Gencur <mgen...@redhat.com> wrote:
>>
>> Hello all,
>> we have been working on https://issues.jboss.org/browse/ISPN-7120.
>>
>> Anna has finished the first step from the JIRA - collecting information
>> about tests in the Java HotRod client test suite (including server
>> integration tests) and it is now prepared for wider review.
>>
>> She created a spreadsheet [1]. The spreadsheet includes, for each Java
>> test, its name, the suggested target package in the TCK, whether to
>> include it in the TCK or not, and some other notes. The suggested
>> package also defines a grouping of the tests (e.g. tck.query, tck.near,
>> tck.xsite, ...).
>>
>> Let me add that right now the goal is not to create a true TCK [2]. The
>> goal is to make sure that all implementations of the HotRod protocol
>> have sufficient test coverage and, ideally, the same server side for the
>> client-server tests (including the server version and configuration).
>>
>> What are the next steps?
>>
>> * Please review the list (at least a quick look) and see if some of the
>> tests which are NOT suggested for the TCK should be added or vice versa.
>> * I suppose the next step would then be to check other implementations
>> (C#, C++, Node.js, ...) and identify tests which are missing there (there
>> will surely be some).
>> * Gradually implement the missing tests in the other implementations
>>   Note: Here we should ensure that the server is configured in the same
>> way for all implementations. One way to achieve this (thanks Anna for
>> the suggestion!) is to have shell/batch scripts for the CLI which would be
>> executed before the tests. This can probably be done for all implementations,
>> on both UNIX and Windows. I also realize that my PR for ISPN [3] becomes
>> useless because it uses Creaper (Java) and we need a language-neutral
>> solution for configuring the server.
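One way that idea could look in the Java suite is a small launcher that feeds a shared, language-neutral CLI batch file to the server before the tests run; each client implementation would invoke the same batch file from its own harness. The script name and the `--connect --file=` arguments below are assumptions about the server distribution, not verified against a specific version:

```java
import java.io.IOException;

// Hedged sketch: run a shared CLI batch file against the server before the
// TCK suite starts, so every client implementation (Java, C#, C++, ...)
// tests against the same server configuration. The launcher script name
// and CLI arguments are assumptions, not verified against a specific
// server version.
public class ServerSetup {

    // Builds the platform-specific path to the CLI launcher script.
    static String cliPath(String serverHome, boolean windows) {
        return serverHome + (windows ? "\\bin\\ispn-cli.bat" : "/bin/ispn-cli.sh");
    }

    public static void runCliBatch(String serverHome, String cliFile)
            throws IOException, InterruptedException {
        boolean windows =
                System.getProperty("os.name").toLowerCase().contains("win");
        Process p = new ProcessBuilder(cliPath(serverHome, windows),
                        "--connect", "--file=" + cliFile)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("Server setup via CLI failed");
        }
    }
}
```

The `.cli` batch file itself is the shared artifact: since it is just text fed to the server CLI, the non-Java suites can run it the same way without any Java dependency.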
>>
>> Some other notes:
>> * there are some duplicated tests in the hotrod-client and server
>> integration test suites; in this case it probably makes sense to include
>> only the server integration test in the TCK
>> * tests from the hotrod-client module which are supposed to be part of
>> the TCK should be copied to the server integration test suite one day
>> (possibly later)
>>
>> Please let us know what you think.
>>
>> Thanks,
>> Martin
>>
>>
>> [1]
>> https://docs.google.com/spreadsheets/d/1bZBBi5m4oLL4lBTZhdRbIC_EA0giQNDZWzFNPWrF5G4/edit#gid=0
>> [2] https://en.wikipedia.org/wiki/Technology_Compatibility_Kit
>> [3] https://github.com/infinispan/infinispan/pull/5012
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>

