Parallel merge of indexes

2020-02-04 Thread Erol Akarsu
I need some help merging indexes in parallel in a much faster way. I am using
the IndexMergeTool provided by Lucene, but it seems very slow. Is there a way
to speed up the process?

What I do is make 16 shards with no replication, then add a replica for every
node and every shard. In the last step, I merge the indexes. The first two
steps finish quickly, but the last merging step takes a long time.
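
For reference, the invocation looks roughly like this (jar versions and index
paths are placeholders for my actual setup):

java -cp lucene-core-8.4.1.jar:lucene-misc-8.4.1.jar \
  org.apache.lucene.misc.IndexMergeTool \
  /indexes/merged /indexes/shard1 /indexes/shard2 /indexes/shard3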

I appreciate your help

Erol Akarsu



Re: Filtered join in Solr?

2020-02-04 Thread Edward Ribeiro
Just for the sake of an imagined scenario, you could use the [subquery] doc
transformer. A query like the one below:

/select?q=family: Smith&fq=watched_movies:[* TO *]&fl=*,
movies:[subquery]&movies.q={!terms f=id v=$row.watched_movies}

Would bring back the results below:

{ "responseHeader":{
"status":0,
"QTime":0,
"params":{
  "movies.q":"{!terms f=id v=$row.watched_movies}",
  "q":"family: Smith",
  "fl":"*, movies:[subquery]",
  "fq":"watched_movies:[* TO *]"}},
  "response":{"numFound":2,"start":0,"docs":[
  {
"id":"user_1",
"name":["Jane"],
"family":["Smith"],
"born":["1990-01-01T00:00:00Z"],
"watched_movies":["1",
  "3"],
"_version_":1657646162820202496,
"movies":{"numFound":2,"start":0,"docs":[
{
  "id":"1",
  "title":["Rambo 1"],
  "release_date":["1978-01-01T00:00:00Z"],
  "_version_":1657646123722997760},
{
  "id":"3",
  "title":["300 Spartaans"],
  "release_date":["2005-01-01T00:00:00Z"],
  "_version_":1657646123726143488}]
}},
  {
"id":"user_2",
"title":["Joe"],
"family":["Smith"],
"born":["1970-01-01T00:00:00Z"],
"watched_movies":["2"],
"_version_":1657646162827542528,
"movies":{"numFound":1,"start":0,"docs":[
{
  "id":"2",
  "title":["Rambo 5"],
  "release_date":["1998-01-01T00:00:00Z"],
  "_version_":1657646123725094912}]
}}]
  }}

But I wasn't able to filter on date (I could filter on a specific date
using movies.fq={!term f=release_date v=2005-01-01T00:00:00Z}, but not on
a range), nor could I facet on the children in the above example. It
probably only works on a single node too. Finally, there are a couple of
parameters that can be important but that I omitted for the sake of brevity
and clarity: movies.limit=100 and movies.sort=release_date DESC. A full
request with everything included is sketched below.
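
One way to send it, letting curl handle the URL-encoding (same collection and
field names as in Radu's example):

curl 'localhost:8983/solr/test/select' \
  --data-urlencode 'q=family:Smith' \
  --data-urlencode 'fq=watched_movies:[* TO *]' \
  --data-urlencode 'fl=*, movies:[subquery]' \
  --data-urlencode 'movies.q={!terms f=id v=$row.watched_movies}' \
  --data-urlencode 'movies.limit=100' \
  --data-urlencode 'movies.sort=release_date DESC'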


Best,
Edward



On Tue, Feb 4, 2020 at 11:17 AM Radu Gheorghe wrote:
>
> Hello Solr users,
>
> How would you design a filtered join scenario?
>
> Say I have a bunch of movies (excuse any inaccuracies, this is an
> imagined scenario):
>
> curl -XPOST -H 'Content-Type: application/json'
> 'localhost:8983/solr/test/update?commitWithin=1000' --data-binary '
> [{
> "id": "1",
> "title": "Rambo 1",
> "release_date": "1978-01-01"
> },
> {
> "id": "2",
> "title": "Rambo 5",
> "release_date": "1998-01-01"
> },
> {
> "id": "3",
> "title": "300 Spartaans",
> "release_date": "2005-01-01"
> }]'
>
> And a bunch of users of certain families who watched those movies:
>
> curl -XPOST -H 'Content-Type: application/json'
> 'localhost:8983/solr/test/update?commitWithin=1000' --data-binary '
> [{
> "id": "user_1",
> "name": "Jane",
> "family": "Smith",
> "born": "1990-01-01",
> "watched_movies": ["1", "3"]
> },
> {
> "id": "user_2",
> "title": "Joe",
> "family": "Smith",
> "born": "1970-01-01",
> "watched_movies": ["2"]
> },
> {
> "id": "user_3",
> "title": "Radu",
> "family": "Gheorghe,
> "born": "1985-01-01",
> "watched_movies": ["1", "2", "3"]
> }]'
>
> They don't have to be in the same collection. The important question
> is how to get:
> - movies watched by users of family Smith
> - after they were born
> - including the matching users
> - I'd like to be able to facet on movie metadata, but I don't need to
> facet on user metadata, just to be able to retrieve those fields
>
> The above query should bring back Rambo 5 and 300, with Joe and Jane
> respectively. I wouldn't get Rambo 1, because although Jane watched
> it, the movie was released before she was born.
>
> Here are some options that I have in mind:
> 1) using the join query parser (or the newer XCJF) to do the join
> itself. Then have some sort of plugin pull the "born" value of each
> corresponding user (via some subquery) and filter movies afterwards.
> Normalized, but likely painfully slow
>
> 2) similar approach to 1), in a streaming expression. Again,
> normalized, but slow (we're talking billions of movies, millions of
> users). And limited support for facets.
>
> 3) have some sort of denormalization. For example, pre-compute
> matching users for every movie, then just use join/XCJF to do the
> actual join. This makes indexing/updates expensive and potentially
> complicated
>
> 4) normalization with nested documents. This is best for searches, but
> pretty much a no-go for indexing/updates. In this imaginary use-case,
> there are binge-watchers who might watch a billion movies in a week,
> making us reindex everything
>
> Do you see better ways?
>
> Thanks in advance and best regards,
> Radu


ID is a required field in SolrSchema . But not found in DataConfig

2020-02-04 Thread Karl Stoney
Hey all,
I'm trying to use the DIH to copy from one collection to another. It appears to
work (data gets copied); however, I've noticed this in the logs:

17:39:58.167 [qtp1472216456-87] INFO  
org.apache.solr.handler.dataimport.config.DIHConfiguration - ID is a required 
field in SolrSchema . But not found in DataConfig

I can't find the appropriate configuration to get rid of it.  Do I need to care?

My config looks like this (reconstructed; the mail archive stripped the XML
tags, leaving only the URL fragment):

<dataConfig>
  <document>
    <!-- assumed shape: a SolrEntityProcessor entity pointing at the source collection -->
    <entity processor="SolrEntityProcessor" url="http://127.0.0.1/solr/at-uk" query="*:*"/>
  </document>
</dataConfig>

Cheers
Karl


Re: Handling All Replicas Down in Solr 8.3 Cloud Collection

2020-02-04 Thread Joseph Lorenzini
Here's roughly what was going on:


   1. Set up a three-node cluster with a collection. The collection has one
   shard and three replicas for that shard.
   2. Shut down two of the nodes and verify the remaining node is the
   leader. Verify the other two nodes are registered as dead in the Solr UI.
   3. Bulk import several million documents into Solr from a CSV file.
   4. Shut down the remaining node.
   5. Start up all three nodes.

Even after three minutes no leader was active. I executed the FORCELEADER
API call, which completed successfully, and waited three minutes -- still no
replica was elected leader. I then compared my Solr 8 cluster to a
different Solr cluster. I noticed that the znode
/collections/example/leaders/shard1
existed on both clusters, but in the Solr 8 cluster the znode was empty. I
manually uploaded a JSON document with the proper settings to that znode
and then called the FORCELEADER API again and waited 3 minutes.

A leader still wasn't elected.

Then, I removed the replica for the node that I had imported all the documents
into and added the replica back in. At that point, a leader was elected.
I am not sure I have exact steps to reproduce, but I did get it working. The
FORCELEADER call I used is sketched below.
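
A minimal sketch of the call (host, collection, and shard names assumed to
match the example above):

curl 'http://localhost:8983/solr/admin/collections?action=FORCELEADER&collection=example&shard=shard1'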

Thanks,
Joe

On Tue, Feb 4, 2020 at 7:54 AM Erick Erickson wrote:

> First, be sure to wait at least 3 minutes before concluding the replicas
> are permanently down, that’s the default wait period for certain leader
> election fallbacks. It’s easy to conclude it’s never going to recover, 180
> seconds is an eternity ;).
>
> You can try the collections API FORCELEADER command. Assuming a leader is
> elected and becomes active, you _may_ have to restart the other two Solr
> nodes.
>
> How did you stop the servers? You mention disaster recovery, so I’m
> thinking you did a “kill -9” or similar? Were you actively indexing at the
> time? Solr _should_ manage the recovery even in that case, I’m mostly
> wondering what the sequence of events that led up to this was…
>
> Best,
> Erick
>
> > On Feb 4, 2020, at 8:38 AM, Joseph Lorenzini  wrote:
> >
> > Hi all,
> >
> > I have a 3 node solr cloud instance with a single collection. The solr
> > nodes are pointed to a 3-node zookeeper ensemble. I was doing some basic
> > disaster recovery testing and have encountered a problem that hasn't been
> > obvious to me on how to fix.
> >
> > After I started back up the three Solr java processes, I can see that they
> > are registered back in the Solr UI. However, each replica is in a down
> > state permanently. There are no logs in either Solr or ZooKeeper that may
> > indicate what the problem would be -- neither exceptions nor warnings.
> >
> > So is there any way to collect more diagnostics to figure out what's
> going
> > on? Short of deleting and recreating the replicas is there any way to fix
> > this?
> >
> > Thanks,
> > Joe
>
>


Re: Exact search in Solr

2020-02-04 Thread yeikel valdes
You can store a non-analyzed version and copy it to an analyzed field.


If you need full-text search, use the analyzed version. Otherwise use the
non-analyzed version.


If you want to search both, you could still do that and boost the
non-analyzed version if needed. A schema sketch follows.
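
A minimal sketch using the Schema API (the collection name "test" and the
field names are assumptions; the exact-match field keeps whitespace
tokenization only):

curl -XPOST -H 'Content-Type: application/json' \
  'localhost:8983/solr/test/schema' --data-binary '{
  "add-field-type": {
    "name": "text_exact",
    "class": "solr.TextField",
    "analyzer": {"tokenizer": {"class": "solr.WhitespaceTokenizerFactory"}}
  },
  "add-field": {"name": "content_exact", "type": "text_exact",
                "indexed": true, "stored": false},
  "add-copy-field": {"source": "content", "dest": "content_exact"}
}'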




 On Tue, 04 Feb 2020 04:50:22 -0500 m...@apache.org wrote 


Hello, Łukasz
The latter, for sure.

On Tue, Feb 4, 2020 at 12:44 PM Antczak, Lukasz
 wrote:

> Hi, Solr experts!
>
> I would like to learn from you if there is a better solution for doing
> 'exact search' in Solr.
> Exact search means no analysis of the text other than tokenization. Query
> "secret" gives back only documents containing exactly "secret" not
> "secrets", "secrection", etc. Text that needs to be searched is content of
> some articles.
>
> Solution 1. - index whole text as string, use regex for searching.
> Solution 2. - index text with just tokenization, no lowercase, stemming,
> etc.
>
> Which solution will be faster? Any other clever ideas to be evaluated?
>
> Regards
> Łukasz Antczak
> --
> *Łukasz Antczak*
> Senior IT Professional
> GS Data Frontiers Team 
>


--
Sincerely yours
Mikhail Khludnev


Filtered join in Solr?

2020-02-04 Thread Radu Gheorghe
Hello Solr users,

How would you design a filtered join scenario?

Say I have a bunch of movies (excuse any inaccuracies, this is an
imagined scenario):

curl -XPOST -H 'Content-Type: application/json'
'localhost:8983/solr/test/update?commitWithin=1000' --data-binary '
[{
"id": "1",
"title": "Rambo 1",
"release_date": "1978-01-01"
},
{
"id": "2",
"title": "Rambo 5",
"release_date": "1998-01-01"
},
{
"id": "3",
"title": "300 Spartaans",
"release_date": "2005-01-01"
}]'

And a bunch of users of certain families who watched those movies:

curl -XPOST -H 'Content-Type: application/json'
'localhost:8983/solr/test/update?commitWithin=1000' --data-binary '
[{
"id": "user_1",
"name": "Jane",
"family": "Smith",
"born": "1990-01-01",
"watched_movies": ["1", "3"]
},
{
"id": "user_2",
"title": "Joe",
"family": "Smith",
"born": "1970-01-01",
"watched_movies": ["2"]
},
{
"id": "user_3",
"title": "Radu",
"family": "Gheorghe,
"born": "1985-01-01",
"watched_movies": ["1", "2", "3"]
}]'

They don't have to be in the same collection. The important question
is how to get:
- movies watched by users of family Smith
- after they were born
- including the matching users
- I'd like to be able to facet on movie metadata, but I don't need to
facet on user metadata, just to be able to retrieve those fields

The above query should bring back Rambo 5 and 300, with Joe and Jane
respectively. I wouldn't get Rambo 1, because although Jane watched
it, the movie was released before she was born.

Here are some options that I have in mind:
1) using the join query parser (or the newer XCJF) to do the join
itself. Then have some sort of plugin pull the "born" value of each
corresponding user (via some subquery) and filter movies afterwards.
Normalized, but likely painfully slow. The join part is sketched after
this list.

2) similar approach to 1), in a streaming expression. Again,
normalized, but slow (we're talking billions of movies, millions of
users). And limited support for facets.

3) have some sort of denormalization. For example, pre-compute
matching users for every movie, then just use join/XCJF to do the
actual join. This makes indexing/updates expensive and potentially
complicated

4) normalization with nested documents. This is best for searches, but
pretty much a no-go for indexing/updates. In this imaginary use-case,
there are binge-watchers who might watch a billion movies in a week,
making us reindex everything
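
For option 1, a minimal sketch of the join itself, assuming the users live in
a collection named "users" and the movies in the main collection (the
per-user "born" filter is exactly the part the join parser can't express on
its own):

q={!join from=watched_movies to=id fromIndex=users}family:Smith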

Do you see better ways?

Thanks in advance and best regards,
Radu


Re: Handling All Replicas Down in Solr 8.3 Cloud Collection

2020-02-04 Thread Erick Erickson
First, be sure to wait at least 3 minutes before concluding the replicas are 
permanently down, that’s the default wait period for certain leader election 
fallbacks. It’s easy to conclude it’s never going to recover, 180 seconds is an 
eternity ;).

You can try the collections API FORCELEADER command. Assuming a leader is 
elected and becomes active, you _may_ have to restart the other two Solr nodes.

How did you stop the servers? You mention disaster recovery, so I’m thinking 
you did a “kill -9” or similar? Were you actively indexing at the time? Solr 
_should_ manage the recovery even in that case, I’m mostly wondering what the 
sequence of events that led up to this was…

Best,
Erick

> On Feb 4, 2020, at 8:38 AM, Joseph Lorenzini  wrote:
> 
> Hi all,
> 
> I have a 3 node solr cloud instance with a single collection. The solr
> nodes are pointed to a 3-node zookeeper ensemble. I was doing some basic
> disaster recovery testing and have encountered a problem that hasn't been
> obvious to me on how to fix.
> 
> After I started back up the three Solr java processes, I can see that they
> are registered back in the Solr UI. However, each replica is in a down
> state permanently. There are no logs in either Solr or ZooKeeper that may
> indicate what the problem would be -- neither exceptions nor warnings.
> 
> So is there any way to collect more diagnostics to figure out what's going
> on? Short of deleting and recreating the replicas is there any way to fix
> this?
> 
> Thanks,
> Joe



Handling All Replicas Down in Solr 8.3 Cloud Collection

2020-02-04 Thread Joseph Lorenzini
Hi all,

I have a 3 node solr cloud instance with a single collection. The solr
nodes are pointed to a 3-node zookeeper ensemble. I was doing some basic
disaster recovery testing and have encountered a problem that hasn't been
obvious to me on how to fix.

After I started back up the three Solr java processes, I can see that they
are registered back in the Solr UI. However, each replica is in a down
state permanently. There are no logs in either Solr or ZooKeeper that may
indicate what the problem would be -- neither exceptions nor warnings.

So is there any way to collect more diagnostics to figure out what's going
on? Short of deleting and recreating the replicas is there any way to fix
this?

Thanks,
Joe


Re: How to compute index size

2020-02-04 Thread Andrzej Białecki
If you’re using Solr 8.2 or newer there’s a built-in index analysis tool that 
gives you a better understanding of what kind of data in your index occupies 
the most disk space, so that you can tweak your schema accordingly: 
https://lucene.apache.org/solr/guide/8_2/collection-management.html#colstatus 


Which is another way of saying that you have to try and see ;)
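
A minimal sketch of the call, assuming a collection named "test"
(sizeInfo=true adds per-core index size details):

curl 'http://localhost:8983/solr/admin/collections?action=COLSTATUS&collection=test&sizeInfo=true'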

> On 3 Feb 2020, at 18:02, David Hastings  wrote:
> 
> Yup, I find the right calculation to be as much RAM as the server can take,
> and as much SSD space as it will hold; when you run out, buy another server
> and repeat.  Machines/RAM/SSDs are cheap.  Just get as much as you can.
> 
> On Mon, Feb 3, 2020 at 11:59 AM Walter Underwood 
> wrote:
> 
>> What he said.
>> 
>> But if you must have a number, assume that the index will be as big as
>> your (text) data. It might be 2X bigger or 2X smaller. Or 3X or 4X, but
>> that is a starting point. Once you start updating, the index might get as
>> much as 2X bigger before merges.
>> 
>> Do NOT try to get by with the smallest possible RAM or disk.
>> 
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Feb 3, 2020, at 5:28 AM, Erick Erickson 
>> wrote:
>>> 
>>> I’ve always had trouble with that advice, that RAM size should be JVM +
>> index size. I’ve seen 300G indexes (as measured by the size of the
>> data/index directory) run in 128G of memory.
>>> 
>>> Here’s the long form:
>> https://lucidworks.com/post/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
>>> 
>>> But the short form is “stress test and see”.
>>> 
>>> To answer your question, though, when people say “index size” they’re
>> usually referring to the size on disk as I mentioned above.
>>> 
>>> Best,
>>> Erick
>>> 
 On Feb 3, 2020, at 4:24 AM, Mohammed Farhan Ejaz 
>> wrote:
 
 Hello All,
 
 I want to size the RAM for my Solr cloud instance. The thumb rule is
>> your
 total RAM size should be = (JVM size + index size)
 
 Now I have a simple question, How do I know my index size? A simple
>> method,
 perhaps from the Solr cloud admin UI or an API?
 
 My assumption so far is the total segment info size is the same as the
 index size.
 
 Thanks & Regards
 Farhan
>>> 
>> 
>> 



Re: Exact search in Solr

2020-02-04 Thread Mikhail Khludnev
Hello, Łukasz
The latter, for sure.

On Tue, Feb 4, 2020 at 12:44 PM Antczak, Lukasz
 wrote:

> Hi, Solr experts!
>
> I would like to learn from you if there is a better solution for doing
> 'exact search' in Solr.
> Exact search means no analysis of the text other than tokenization. Query
> "secret" gives back only documents containing exactly "secret" not
> "secrets", "secrection", etc.  Text that needs to be searched is content of
> some articles.
>
> Solution 1. - index whole text as string, use regex for searching.
> Solution 2. - index text with just tokenization, no lowercase, stemming,
> etc.
>
> Which solution will be faster? Any other clever ideas to be evaluated?
>
> Regards
> Łukasz Antczak
> --
> *Łukasz Antczak*
> Senior IT Professional
> GS Data Frontiers Team 
>


-- 
Sincerely yours
Mikhail Khludnev


Exact search in Solr

2020-02-04 Thread Antczak, Lukasz
Hi, Solr experts!

I would like to learn from you if there is a better solution for doing
'exact search' in Solr.
Exact search means no analysis of the text other than tokenization. Query
"secret" gives back only documents containing exactly "secret" not
"secrets", "secrection", etc.  Text that needs to be searched is content of
some articles.

Solution 1. - index whole text as string, use regex for searching.
Solution 2. - index text with just tokenization, no lowercase, stemming,
etc.

Which solution will be faster? Any other clever ideas to be evaluated?

Regards
Łukasz Antczak
-- 
*Łukasz Antczak*
Senior IT Professional
GS Data Frontiers Team 



How can shards be distributed evenly among nodes

2020-02-04 Thread Yuan Zhao
Hi Team,

We are using the autoscaling policy, and we make use of the UTILIZENODE
feature to move replicas to a new node.
But we found that after replicas are moved, Solr can make sure that the
replicas belonging to the same shard are located on different nodes, but it
cannot make sure that shards are distributed evenly across all the nodes.
That means a node might contain all the shards of an index.
And, more remarkably, the shards were distributed evenly before the
UTILIZENODE command was executed.
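
A minimal sketch of the call we run (node name assumed):

curl 'http://localhost:8983/solr/admin/collections?action=UTILIZENODE&node=test-server:8986_solr'

The replica layout in question looks like this: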

 index_name    | replica_name | shard_name | node_name             | replica_state
---------------+--------------+------------+-----------------------+---------------
 test_index.t2 | core_node6   | shard1     | test-server:8983_solr | active
 test_index.t4 | core_node7   | shard2     | test-server:8983_solr | active
 test_index.t4 | core_node5   | shard1     | test-server:8983_solr | active
 test_index.t2 | core_node4   | shard0     | test-server:8983_solr | active
 test_index.t1 | core_node3   | shard1     | test-server:8984_solr | active
 test_index.t4 | core_node8   | shard2     | test-server:8984_solr | active
 test_index.t3 | core_node8   | shard1     | test-server:8984_solr | active
 test_index.t2 | core_node2   | shard0     | test-server:8984_solr | active
 test_index.t2 | core_node10  | shard1     | test-server:8985_solr | active
 test_index.t1 | core_node18  | shard2     | test-server:8985_solr | active
 test_index.t4 | core_node10  | shard1     | test-server:8985_solr | active
 test_index.t3 | core_node10  | shard0     | test-server:8985_solr | active
 test_index.t1 | core_node14  | shard2     | test-server:8987_solr | active
 test_index.t3 | core_node14  | shard0     | test-server:8987_solr | active
 test_index.t3 | core_node12  | shard1     | test-server:8987_solr | active
 test_index.t1 | core_node16  | shard1     | test-server:8987_solr | active

 Do you have any good solution to this problem?
 The Solr version we are using is 7.4.
 The cluster policy looks like:
 {
"set-cluster-policy" : [{
 "replica" : "<2",
 "shard" : "#EACH",
 "node" : "#ANY",
 "strict" : false
}]
}

-- 
Thanks & regards,
Yuan


Re: Haystack CFP is open, come and tell us how you tune relevance for Lucene/Solr

2020-02-04 Thread Charlie Hull

Hi all,

You have until this Friday to submit a talk to Haystack! Very much 
looking forward to your submissions.


Charlie

On 27/01/2020 21:53, Doug Turnbull wrote:

Just an update: the CFP was extended to Feb 7th, less than 2 weeks away.  ->
http://haystackconf.com

It's your ethical imperative to share! ;)
https://opensourceconnections.com/blog/2020/01/23/opening-up-search-is-an-ethical-imperative/

And no talk is too small, people often underestimate what they're doing,
and very much underestimate how interesting others will find your story!
The best talks often come from the least expected people & orgs.

On Thu, Jan 9, 2020 at 4:13 AM Charlie Hull  wrote:


Hi all,

Haystack, the search relevance conference, is confirmed for 29th & 30th
April 2020 in Charlottesville, Virginia - the CFP is open and we need
your contributions! More information at www.haystackconf.com,
including links to previous talks; the deadline
is 31st January. We'd love to hear your Lucene/Solr relevance stories.

Cheers

Charlie
--

Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk




--
Charlie Hull
OpenSource Connections, previously Flax

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.o19s.com