What does "having a problem" mean? Index time? Query time?
But your problem is most likely the tokenizer as you suspect. Try something
like WhitespaceTokenizer and build up from there.
Three friends:
1. admin/analysis page
2. admin/schema-browser
3. debugQuery=on
The first will show you what the
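As a sketch of the third tool, a debugQuery request can be issued straight from the command line (host, port, core name, and query field below are assumptions, not from the original mail):

```shell
# Illustrative only: assumes Solr at localhost:8983 with a core named "collection1".
QUERY_URL="http://localhost:8983/solr/collection1/select?q=text:solr&debugQuery=on&wt=json"
# debugQuery=on adds the parsed query and per-document score explanations
# to the response, which is the quickest way to see what Solr actually ran.
curl -s --max-time 5 "$QUERY_URL" || echo "no Solr server reachable at localhost:8983" >&2
```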
That's very strange. How much memory are you giving the JVM? And how much
memory is on your machine?
If your index is cutting in half on optimize, then it sounds like you're
re-indexing everything. Optimize will squeeze out all the data left around
by document deletes or updates, so the only
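For reference, an optimize can be triggered over HTTP like this (URL and core name are assumptions):

```shell
# Illustrative only: assumes Solr at localhost:8983 with a core named "collection1".
OPTIMIZE_URL="http://localhost:8983/solr/collection1/update?optimize=true"
# Optimize merges segments and reclaims the space held by deleted/updated docs.
curl -s --max-time 5 "$OPTIMIZE_URL" || echo "no Solr server reachable" >&2
```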
Hmmm, first an aside. If by "commit after every batch of documents" you
mean after every call to server.add(doclist), there's no real need to do
that unless you're striving for really low latency. The usual
recommendation is to use commitWithin when adding and commit only at the
very end of the
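The commitWithin approach can also be sketched over plain HTTP (the URL and the doc's field values are assumptions):

```shell
# Illustrative only: adds a doc with commitWithin=10000 (commit within 10s),
# then issues a single explicit commit at the very end of the run.
UPDATE_URL="http://localhost:8983/solr/update"
curl -s --max-time 5 "$UPDATE_URL?commitWithin=10000" \
     -H "Content-Type: text/xml" \
     --data-binary '<add><doc><field name="id">doc1</field></doc></add>' \
  || echo "no Solr server reachable" >&2
curl -s --max-time 5 "$UPDATE_URL?commit=true" || echo "no Solr server reachable" >&2
```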
There was a discussion of this a bit ago, but the upshot is that the
maintainer hasn't released a version compatible with 4.0 yet. Send him
money G...
FWIW,
Erick
On Fri, Nov 16, 2012 at 11:16 AM, Miguel Ángel Martín
miguelangel.mar...@brainsins.com wrote:
Hi all:
I can open an index
1. Well, it loads the local conf directory up to ZooKeeper so new nodes can
fetch the configuration and store it locally.
2. No, you have to upload the configuration to ZK and (I think) restart the
other servers. It's easy enough to test: just make your changes to the
config, upload it, and look at
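One way to upload a config to ZooKeeper is the zkcli script that ships with Solr (the script path, zkhost, conf directory, and conf name below are all assumptions; adjust for your install):

```shell
# Illustrative only: every path and name here is an assumption.
ZKHOST="localhost:2181"
CONFDIR="./solr/collection1/conf"
CONFNAME="myconf"
# zkcli.sh ships with Solr (under cloud-scripts in 4.x layouts); fix the path.
cloud-scripts/zkcli.sh -cmd upconfig -zkhost "$ZKHOST" \
  -confdir "$CONFDIR" -confname "$CONFNAME" \
  || echo "upconfig failed (zkcli.sh path or ZooKeeper not available?)" >&2
```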
I would _guess_ (but haven't done this with DIH) that simply putting
the body.chain in the updateHandler (<updateHandler class="solr.DirectUpdateHandler2">)
would do what you want.
But that's purely a guess at this point on my part.
Anyone want to correct me?
Best
Erick
On Fri, Nov 16, 2012 at
Hi Spadez,
Nabble has helpfully stripped out your script. Maybe don't use Nabble?
Steve
On Nov 16, 2012, at 5:06 PM, Spadez james_will...@hotmail.com wrote:
Hey guys,
I am after a bash script (or python script) which I can use to trigger a
delta import of XML files via CRON. After a bit
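Since Nabble ate the script, here is a minimal sketch of what a cron-driven delta-import script might look like (URL and handler path are assumptions; this is not the poster's original script):

```shell
#!/bin/bash
# Illustrative sketch: trigger a DataImportHandler delta-import from cron.
# URL and handler path are assumptions; adjust for your install.
SOLR="http://localhost:8983/solr"
IMPORT_URL="$SOLR/dataimport?command=delta-import&clean=false&commit=true"
# Example crontab entry (every 15 minutes):
#   */15 * * * * /path/to/delta-import.sh
curl -s --max-time 30 "$IMPORT_URL" || echo "delta-import request failed" >&2
```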
You can force Solr to use the new configs by reloading a collection:
http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection
This'll cause all shards (and replicas) in a collection to collect new
configs from ZooKeeper.
The main thing to note re Jetty is that the Jetty
On 11/16/2012 12:52 PM, Shawn Heisey wrote:
On 11/16/2012 12:36 PM, Jack Krupansky wrote:
Generally, you don't need the preserveOriginal attribute for WDF.
Generate both the word parts and the concatenated terms, and queries
should work fine without the original. The separated terms will be
bq. fetch the configuration and store it locally.
New nodes don't fetch the configs and store them locally - configs are
loaded straight from zookeeper currently.
- Mark
On 11/16/2012 12:30 PM, Shawn Heisey wrote:
I am extremely interested in the Unicode behavior of ICUTokenizer, but
I cannot disable the punctuation-splitting behavior and let WDF handle
it properly, which causes recall problems. There is no filter that I
can run after tokenization, either.
Manish,
Need to set hasSingleNormFile=0 in the schema
On Sun, Nov 18, 2012 at 9:11 AM, Manish Bafna manish.bafna...@gmail.comwrote:
Hi,
I need to disable HasSingleNormFile in solr, so that multiple norm files
are created. Can anyone please provide information on how to disable this in
solr.
If
I think this means the pattern did not match any files:
<str name="Total Rows Fetched">0</str>
The wiki example includes a '^' at the beginning of the filename pattern. This
anchors the pattern so it must match the complete filename.
http://wiki.apache.org/solr/DataImportHandler#Transformers_Example
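The anchoring behavior can be sanity-checked outside Solr; grep -E below only approximates the Java regex DIH uses, but the anchoring works the same way:

```shell
# Illustrative only: '^.*\.xml$' must match the complete filename,
# not just a substring of it.
echo "report.xml"     | grep -Eq '^.*\.xml$' && echo "matches"
echo "report.xml.bak" | grep -Eq '^.*\.xml$' || echo "no match"
```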
More:
Add rootEntity=true. It