There is an abstraction layer, and it is pretty solid, but (as you know), 
database abstraction subtleties are pretty significant.  The way a database 
gathers statistics, for example, will determine how often you need to do an 
ANALYZE, and the sensitivity of the database's planner to small statistics 
differences might force one query design over another.
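
To make that concrete, here is a minimal JDBC sketch of the kind of statistics 
maintenance I mean (the connection details and the "jobqueue" table name are 
placeholders, not actual LCF code):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AnalyzeSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection parameters; adjust for your installation.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/lcf", "postgres", "postgres");
                 Statement stmt = conn.createStatement()) {
                // After a big batch of inserts or deletes, the planner's
                // statistics go stale; re-gathering them can be the difference
                // between an index scan and a sequential scan on the next query.
                stmt.execute("ANALYZE jobqueue");
            }
        }
    }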

Thus, even if the abstraction layer fully supports a variety of databases, I 
would not feel confident in the performance of any given database backend under 
LCF without some fairly rigorous testing on that specific database.  So if we 
are suggesting Postgresql, we should test on Postgresql, etc.

Another area in which you might run into problems with something like Derby 
is the completeness of its SQL implementation.  I'll have to look into it 
further to see whether that is likely to be a problem; the sketch below shows 
the kind of construct I mean.
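
For example (illustrative only, not actual LCF queries), Postgresql-specific 
constructs like these have no Derby equivalent and would need rewriting:

    public class DerbyGaps {
        // Postgresql-specific SQL that Derby does not accept; the table and
        // column names here are hypothetical, not actual LCF schema.
        static final String DISTINCT_ON =
            "SELECT DISTINCT ON (docid) docid, status FROM jobqueue" +
            " ORDER BY docid, updatetime DESC";
        static final String CASE_INSENSITIVE =
            "SELECT docid FROM jobqueue WHERE docid ILIKE 'http%'";
    }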

Karl

-----Original Message-----
From: ext Simon Willnauer [mailto:simon.willna...@googlemail.com] 
Sent: Monday, May 24, 2010 4:03 PM
To: connectors-dev@incubator.apache.org
Subject: Re: Basic core testing infrastructure

I have to admit I don't know much about the connectors, but it sounds
like they rely heavily on postgresql. Karl, is it somehow
feasible to abstract this out to work with more than just postgresql
in production? Maybe making different test backends pluggable would be
a win-win for the tests and the project itself. Correct me if I'm wrong,
but as a user I would appreciate it if I could start it up with a Derby
DB by default, without the hassle of installing postgres.
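
Something like this is what I have in mind (a hypothetical sketch of the
idea; none of these names exist in LCF today):

    import java.sql.Connection;
    import java.sql.SQLException;

    // Tests (and maybe production) could be parameterized over backends:
    // a Derby or hsqldb implementation for fast self-contained runs, a
    // postgresql one for performance-sensitive integration runs.
    public interface DatabaseBackend {
        Connection openConnection() throws SQLException;
        void createSchema() throws SQLException;  // fresh schema per test
        void dropSchema() throws SQLException;    // cleanup afterwards
    }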

Self-contained testing is the way to go here, or try mocking things out
if you can / want to.

simon

On Mon, May 24, 2010 at 9:54 PM,  <karl.wri...@nokia.com> wrote:
> Hi Robert,
>
> The dependency on postgresql is indeed mainly about performance, as you say, 
> although there are a few kinds of queries that I am sure are somewhat 
> postgresql-specific at this point.  These are mainly for the reporting 
> features, though.  So your idea could work in a limited way.
>
> Obviously an end-to-end test would be the best, though.  But something is 
> better than nothing.
>
> Karl
>
>
> -----Original Message-----
> From: ext Robert Muir [mailto:rcm...@gmail.com]
> Sent: Monday, May 24, 2010 3:43 PM
> To: connectors-dev@incubator.apache.org
> Subject: Re: Basic core testing infrastructure
>
> Hi Karl,
>
> Just a question: I read all the warnings about how dependent LCF is on
> postgres, but how much of this is really only about performance?
>
> When I look at the code, it seems like there is enough abstraction that
> you could add support for, say, hsqldb or similar, even if it's only for
> testing purposes?
>
> This way you could create a 'new world' for each test, rather than
> worrying about cleaning up the database etc.
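>
> Something along these lines, maybe (a rough sketch; the in-memory JDBC URL
> is hsqldb's standard form, but the test body itself is hypothetical):
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>
>     public class NewWorldSketch {
>         public static void main(String[] args) throws Exception {
>             // Each test gets its own throwaway in-memory database; there is
>             // nothing to clean up, since it vanishes with the JVM.
>             try (Connection conn = DriverManager.getConnection(
>                      "jdbc:hsqldb:mem:test1", "SA", "")) {
>                 conn.createStatement().execute(
>                     "CREATE TABLE jobs (id INT PRIMARY KEY, status VARCHAR(32))");
>                 // ... run the code under test against conn ...
>             }
>         }
>     }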
>
> I admit I don't know if what I am saying is even close to practical, given
> how dependent things are on postgres, but it might be a way to make testing
> simpler.
>
> On Mon, May 24, 2010 at 9:49 AM,  <karl.wri...@nokia.com> wrote:
>> To all you lurking Solr committers out there,
>>
>> I would like to throw some cycles towards at least getting Solr-style unit 
>> tests set up for LCF, running under JUnit or something like it.  My thoughts 
>> were as follows:
>>
>> (1)     We presume a blank, already-installed version of Postgresql, 
>> configured to listen on port 5432, with a standard superuser name and 
>> password (see the connection sketch after this list);
>> (2)     We do not attempt to test the UI at this time, because that would 
>> involve presuming an app server is installed, and would also require me to 
>> port my "simple browser simulator" from Python to Java.  Or maybe we can do 
>> this later?
>> (3)     Only the filesystem connector would be used by the core tests.
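>>
>> For (1), the presumed connection would look something like this (a sketch; 
>> the database name and credentials are placeholders for whatever standard we 
>> settle on):
>>
>>     import java.sql.Connection;
>>     import java.sql.DriverManager;
>>     import java.sql.SQLException;
>>
>>     public class TestDbConnection {
>>         // Placeholder credentials standing in for the "standard superuser";
>>         // a real harness would read these from configuration.
>>         static Connection open() throws SQLException {
>>             return DriverManager.getConnection(
>>                 "jdbc:postgresql://localhost:5432/postgres",
>>                 "postgres", "postgres");
>>         }
>>     }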
>>
>> The question is, does this fit well with the Solr testing infrastructure?  
>> Is there a document describing that infrastructure and how to most 
>> effectively write tests for it?  What are the standard pre/post-test cleanup 
>> semantics for the Solr tests, for instance?  (The MetaCarta tests do a 
>> "preclean" stage, which removes any crap left over from a previous failure of 
>> the same test, and they also always clean up after themselves upon success.)
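>>
>> In JUnit terms, I imagine the preclean idea looking roughly like this (a 
>> sketch; cleanSchema() is a hypothetical helper, not existing code):
>>
>>     import org.junit.After;
>>     import org.junit.Before;
>>
>>     public class CoreTestBase {
>>         @Before
>>         public void preclean() throws Exception {
>>             // Remove leftovers from a previous failed run of this test.
>>             cleanSchema();
>>         }
>>
>>         @After
>>         public void cleanup() throws Exception {
>>             // Clean up on the way out as well.
>>             cleanSchema();
>>         }
>>
>>         private void cleanSchema() throws Exception {
>>             // Hypothetical: drop and recreate the test tables here.
>>         }
>>     }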
>>
>> I know the projects are quite different, but if I understand the assumptions 
>> and the "how-to's" for Solr, it will help me enormously, I think...
>>
>> Thanks,
>> Karl
>>
>>
>>
>>
>
>
>
> --
> Robert Muir
> rcm...@gmail.com
>
