The only way I know is by using a copyfield at index time that copies
everything from fields called E_* to a field with a known name, then use
that field for searching.
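As a sketch, the schema.xml entries could look like this (the destination field name E_all and the text_general type are assumptions, adjust them to your schema):

```xml
<!-- Match all dynamic E_* fields -->
<dynamicField name="E_*" type="text_general" indexed="true" stored="true"/>
<!-- Catch-all field used for searching -->
<field name="E_all" type="text_general" indexed="true" stored="false" multiValued="true"/>
<!-- Copy everything from E_* into the catch-all at index time -->
<copyField source="E_*" dest="E_all"/>
```

Then search against E_all (e.g. q=E_all:foo) instead of the individual dynamic fields.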
On Wed, Dec 21, 2011 at 9:41 AM, Isan Fulia isan.fu...@germinait.comwrote:
Hi,
I have a dynamic field E_*
I want to search
Hi Vijayaragavan, did you apply a patch for grouping in Solr 3.1? It is
available out of the box since 3.3.
Also, the result from grouping will not look exactly like you are
expecting, as results with the same value in the grouping field (in this
case, thread_id) will be collapsed into one group.
Yes, soft commit currently clears Solr's caches.
On Mon, Jan 2, 2012 at 12:01 PM, ramires uy...@beriltech.com wrote:
hi
After soft-commit with the below command all caches are cleared. Is it normal?
curl http://localhost:8984/solr/update -H 'Content-Type: text/xml'
--data-binary '<commit
Hi Mike,
- exact match (disabling stemming): Ideally, users need a way of turning
this on or off for terms in their query (e.g. [ =walking running ] would
stem the word running, but not walking).
Correct, there is no way to do this with Solr just by
activating/deactivating one parameter.
Can those modifications be made on the server side? If so, you could create
an UpdateRequestProcessor. See
http://wiki.apache.org/solr/UpdateRequestProcessor
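For reference, a custom UpdateRequestProcessor is typically wired up in solrconfig.xml like this (the chain name and the com.example class below are placeholders):

```xml
<updateRequestProcessorChain name="myChain">
  <!-- Your custom processor performs the server-side modifications -->
  <processor class="com.example.MyUpdateProcessorFactory"/>
  <!-- Keep the default processors so documents still get logged and indexed -->
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Point your update requests at the chain with update.chain=myChain (or set it on the update handler).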
On Thu, Jan 12, 2012 at 5:19 PM, jmuguruza jmugur...@gmail.com wrote:
If I have individual files in the expected Solr format (having just
As far as I know, the replication is supposed to delete the old index
directory. However, the initial question is why this new index directory is
being created. Are you adding/updating documents in the slave? What about
optimizing it? Are you rebuilding the index from scratch in the master?
Also,
You'll get this same behavior with edismax or the lucene QP. Wildcard queries
are not analyzed (neither by the lowercase filter nor any other).
2012/1/20 Matthias Müller mm4...@googlemail.com
Hi,
I'm using an edismax handler
All fields and queries are lower case (LowerCaseFilterFactory in
schema.xml)
You say warming queries didn't help? What do those look like? Make sure you
facet and sort on all of the fields that your application allows
faceting/sorting on. The same with the filters. Un-inversion of fields is done
only when you commit, but warming queries should help you here.
Tomás
On Fri, Jan
The problem is that in order to make the changes visible to the user you
have to issue a commit. If you commit with every user change (I assume you
may have concurrent users) you may have many commits per second. That's too
much for Solr, as each commit will flush a new segment, reopen an index
You could use the grouping feature, depending on your needs:
http://wiki.apache.org/solr/FieldCollapsing
2012/3/12 André Maldonado andre.maldon...@gmail.com
Hi.
I need to setup an index that have relational data. This index will be for
houses to rent, where the user will search for date,
Make sure the Solr Cell jar is in the classpath. You probably have a line
like this in your solrconfig.xml:
<lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar" />
Make sure that points to the right file.
On Mon, Mar 12, 2012 at 2:59 PM, rdancy rda...@wiley.com wrote:
Hello, I'm running Solr
it should be in
lucidworks-solr-3.2.0_01/dist/lucidworks-solr-cell-3.2.0_01.jar, don't
you have that one?
On Mon, Mar 12, 2012 at 5:44 PM, rdancy rda...@wiley.com wrote:
I see the line - <lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar" />
but I don't see any solr cell jars, only Tika jars.
Well, this is another error. It looks like you are using cores and you are not
adding the core name to the URL. Make sure you do:
http://localhost:8585/solr/[CORENAME]/update/extract?literal.id=1&commit=true
The core name is the one you defined in solr.xml and should always be used
in the URL. If
This looks like you are using a SolrJ version different from the Solr
server version. Make sure that server and client are using
the same Solr version.
On Mon, Mar 19, 2012 at 8:02 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
You probably have a non-char codepoint hanging
Yes, you can use replication: http://wiki.apache.org/solr/SolrReplication
On Wed, Mar 21, 2012 at 5:07 AM, ravicv ravichandra...@gmail.com wrote:
Hi
I have a requirement to index data using Solr on a server and move the index
folder to all clients through a webservice.
Is it possible to move the
However, if the multivalued complex data field is not possible, is it
possible to use the copyField directive to copy fields only when a certain
score is higher than a threshold?
I don't think that's possible out of the box, but you could use a custom
UpdateRequestProcessor for that.
How many different
When you say "and send to dotnet system through webservice", you mean that
the client will be dotnet, but Solr is still going to be Solr, in Java,
right?
I'm sure that if you stop Solr, change the index directory (like if you
unzip the one you brought from the other server) and start Solr again,
as the value.
How would I go about changing the schema ?
Thanks
Ramdev
On 3/21/12 3:24 PM, Tomás Fernández Löbbe tomasflo...@gmail.com wrote:
However, If the multivalued complex data field is not possible. Is it
possible to use copyField directive to copy fields if a certain score
Or if you still want to have stemming, you could use a Spanish stemmer,
like:
<filter class="solr.SnowballPorterFilterFactory" language="Spanish"/>
or
<filter class="solr.SpanishLightStemFilterFactory"/>
Tomás
On Thu, Mar 22, 2012 at 11:09 AM, Juan Pablo Mora jua...@informa.es wrote:
Remove the stemmer
Hi Ben, only new segments are replicated from master to slave. In a
situation where all the segments are new, this will cause the index to be
fully replicated, but this rarely happens with incremental updates. It can
also happen if the slave Solr decides it has an invalid index.
Are you committing
. When it
kicks in I see a new version of the index and then it copies the full 5GB
index.
Thanks
Ben
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: 23 March 2012 14:29
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication
Also, what happens if, instead of adding the 40K docs you add just one and
commit?
2012/3/23 Tomás Fernández Löbbe tomasflo...@gmail.com
Have you changed the mergeFactor or are you using 10 as in the example
solrconfig?
What do you see in the slave's log during replication? Do you see any
Alexandre, in addition to what Erick said, you may want to check in the
slave whether what's 300+GB is the data directory or the index.timestamp
directory.
On Fri, Mar 23, 2012 at 12:25 PM, Erick Erickson erickerick...@gmail.comwrote:
not really, unless perhaps you're issuing commits or optimizes
Can't you simply calculate that at index time and assign the result to a
field, then sort by that field?
On Thu, Mar 29, 2012 at 12:07 PM, Darren Govoni dar...@ontrenet.com wrote:
I'm going to try index time per-field boosting and do the boost
computation at index time and see if that helps.
, 2012-03-29 at 16:29 -0300, Tomás Fernández Löbbe wrote:
Can't you simply calculate that at index time and assign the result to a
field, then sort by that field?
On Thu, Mar 29, 2012 at 12:07 PM, Darren Govoni dar...@ontrenet.com
wrote:
I'm going to try index time per-field boosting
You could check the index version. See
http://wiki.apache.org/solr/SolrReplication. Every time a commit is issued,
the index version is incremented.
but you could also use the backupAfter feature, explained also in
http://wiki.apache.org/solr/SolrReplication
Tomás
On Mon, Apr 16, 2012 at 11:07
I'm wondering if Solr is the best tool for this kind of usage. Solr is a
text search engine, so even if it supports all those features, it is
designed for text search, which doesn't seem to be what you need. What are
the reasons for moving from a DB implementation to Solr?
Don't misunderstand me,
I guess this should be possible by setting echoParams=none (or explicit)
as an invariant. For example:
<requestHandler name="/dataimport" class="solr.DataImportHandler">
  <lst name="invariants">
    <str name="echoParams">none</str>
  </lst>
  ...
</requestHandler>
I haven't tried it, but I think
The warmup process reloads the data from the new index.
Cache in Solr expires with a new searcher, correct. You could have
evictions too if it gets filled.
On Thu, Apr 26, 2012 at 8:33 AM, mizayah miza...@gmail.com wrote:
Please help me understand that.
What wil happen if if have cached data
Is this still true? Assuming that I know that there haven't been updates or
that I don't care about seeing a different version of the document, are the
term QP or the raw QP faster than the real-time get handler?
On Fri, Mar 11, 2011 at 3:12 PM, Yonik Seeley yo...@lucidimagination.comwrote:
On Fri,
With replication every 15 minutes you could still do some autowarming. But
if autowarming was the problem you should see only the first couple of
queries slow, after that it should go back to normal, is this what you are
seeing?
Are your queries very complex? Do you facet in many fields? are
Hi Alexey, responses are inline:
Zookeeper manages not only the cluster state, but also the common
configuration files.
My question is, what are the exact rules of precedence? That is, when will a
Solr node decide to download new configuration files?
When the SolrCore is started.
Will
1. If I have a SolrCloud cluster with two shards and 0 replicas on two
different servers,
when one of the servers restarts, will the Solr instance on that server replay
the transaction log to make sure these operations are persisted to the index
files (commit the transaction log)?
Yes, the Solr
...@gmail.com
wrote:
Hi
On Wed, Nov 7, 2012 at 7:49 AM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
1.If I have a solrcloud cluster with two shards and 0 replica on two
different server.
when one of server restarts will the solr instance on that server
replay
the transaction log
Are you sure you are pointing to the correct conf directory? It sounds like
you are missing the collection name in the path (maybe it should be
../solr/YOURCOLLECTIONNAME/conf?)
On Fri, Nov 9, 2012 at 1:58 PM, Carlos Alexandro Becker
caarl...@gmail.comwrote:
I started my JBoss server with the
the following files:
conf/
-stopwords.txt
-synonyms.txt
data/
index/ (etc.)
solr.xml
zoo.cfg
zoo.cfg is the default of the solrcloud example.
Thanks in advance.
On Fri, Nov 9, 2012 at 3:09 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
Are you sure you
Also, JBoss AS uses Tomcat, right? You may want to look at Mark Miller's
comments here:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201210.mbox/%3ccabcj++j+am6e0ghmm+hpzak5d0exrqhyxaxla6uutw1yqae...@mail.gmail.com%3E
On Fri, Nov 9, 2012 at 4:30 PM, Tomás Fernández Löbbe tomasflo
(CoreContainer.java:789)
[apache-solr-core-4.0.0.jar:4.0.0 1394950 - rmuir - 2012-10-06 03:05:55]
... 19 more
Thanks in advance.
On Fri, Nov 9, 2012 at 5:34 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
Also, JBoss AS uses Tomcat, right? You may want to look at Mark Miller's
comments
Alexandro Becker caarl...@gmail.com
wrote:
Hm, OK, now I just leave my work, next week I'll try to do what you say
and
give you a feedback.
Meanwhile, thank you very much for your help.
On Fri, Nov 9, 2012 at 6:30 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
I
<cores adminPath="/admin/cores"
    zkClientTimeout="${zkClientTimeout:15000}" hostPort="8080"
    hostContext="solr">
  <core instanceDir="." name="collection1"/>
</cores>
</solr>
Thanks in advance.
On Sat, Nov 10, 2012 at 1:39 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
Created https
<core instanceDir="." name="collection1"/>
</cores>
</solr>
I'm pretty sure that I'm missing some simple thing... but I can't figure out
what.
Thanks in advance.
On Mon, Nov 12, 2012 at 12:16 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
I'm not sure what could be the issue here
12, 2012 at 12:16 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
I'm not sure what could be the issue here, maybe there is a problem
with
finding the name of your machine? can you manually find '
http://carlos-OptiPlex-790:8080/solr' ? Maybe if you set the host
If you are in Solr 4 you could use realtime get and list the ids that you
need. For example:
http://host:port/solr/mycore/get?ids=my_id_1,my_id_2...
See http://lucidworks.lucidimagination.com/display/solr/RealTime+Get
Tomás
On Mon, Nov 19, 2012 at 5:27 PM, Otis Gospodnetic
Maybe it would be better if Solr checked the live nodes and not all the
existing nodes in zk. If a server dies and you need to start a new one, it
would go straight to the correct shard without one needing to specify it
manually. Of course, the problem could be if a server goes down for a
minute
- We aren't using shards because our index only contains 1 million simple docs.
We only need multiple servers because of the amount of traffic. In the examples
of SolrCloud I see only examples with shards. Is numShards=1 possible? Is one
big index faster than multiple shards? I need 1 collection with
I will use numshards=1. Are there some instructions on how to install only
zookeeper on a separate server? Or do i have to install solr 4 on that
server?
You don't need to install Solr in that server. See
http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html
How make the connection
Hi Federico, it should work. Make sure you set the "shards.qt" parameter
too (in your case, it should be shards.qt=/terms)
On Thu, Nov 22, 2012 at 6:51 AM, Federico Méndez federic...@gmail.comwrote:
Does anyone know if the TermsComponent supports distributed search through a
SolrCloud installation?
You can either escape the whitespace with \ or search as a phrase:
fieldNonTokenized:foo\ bar
...or...
fieldNonTokenized:"foo bar"
On Thu, Nov 22, 2012 at 9:08 AM, Varun Thacker
varunthacker1...@gmail.comwrote:
I have indexed documents using a fieldType which does not break the word
up. I
- I change my synonyms.txt on a Solr node. How can I get Zookeeper and the
other Solr nodes in sync without a restart?
Well, you can upload the whole collection configuration again with zkcli
(included in the cloud-scripts directory). See
You could use Zookeeper's chroot:
http://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html#sc_bestPractices
You can use chroot in Solr by specifying it in the zkHost parameter, for
example -DzkHost=localhost:2181/namespace1
In order for this to work, you need to first create the initial path
If you need to reload all the cores from a given collection you can use the
Collections API:
http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection
On Thu, Nov 22, 2012 at 3:17 PM, joe.cohe...@gmail.com
joe.cohe...@gmail.com wrote:
Hi,
I'm using solr-4.0.0
I'm trying to
I think that's correct. Queries to the existing nodes will still work with
no ZK.
On Fri, Nov 23, 2012 at 7:16 AM, roySolr royrutten1...@gmail.com wrote:
Thanks Tomás for the information so far.
You said:
You can effectively run with only one zk instance, the problem with this is
that if
If I remember correctly, updated files on the master only get replicated if
there is a change in the index (if the index versions of the master and
the slave are the same, nothing gets replicated, not even the configuration
files). Are you currently updating the index or just the configuration
On Fri, Dec 7, 2012 at 12:09 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
If I remember correctly, updated files in the master only get replicated
if
there is a change in the index
On Fri, Dec 7, 2012 at 1:03 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
Have you committed the changes on the master? Are you sure that the
replication didn't happen
On Fri, Dec 7, 2012 at 2:04 PM, Tomás Fernández Löbbe
tomasflo...@gmail.com
wrote:
hmm then I'm not sure what can be happening. Do you see anything in the
logs
What do you mean? Could you explain your use case?
Tomás
On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek nihe...@gmail.com wrote:
Hello,
Is it possible to define a custom search for a fieldType on a schema?
Regards,
--
M'BAREK Med Nihed
Fernández Löbbe
tomasflo...@gmail.com wrote:
What do you mean? Could you explain your use case?
Tomás
On Thu, Dec 13, 2012 at 9:36 AM, nihed mbarek nihe...@gmail.com wrote:
Hello,
Is it possible to define a custom search for a fieldType on a schema?
Regards
Optimize operations only occur when you explicitly request them. All
nodes should get the command, so if you have set buildOnOptimize in
all nodes (you probably have, as you are using the same configuration) then
all of them should rebuild the spellcheck index.
Tomás
On Mon, Dec 17, 2012
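For reference, buildOnOptimize is set inside the spellcheck component in solrconfig.xml; a sketch (the dictionary name, source field, and index directory below are assumptions):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- Field the spellcheck dictionary is built from -->
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <!-- Rebuild the external spellcheck index on explicit optimize -->
    <str name="buildOnOptimize">true</str>
  </lst>
</searchComponent>
```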
It only rebuilds on explicit optimize operations. A background merge that
merges all segments (to 1) won't fire the rebuild AFAIK.
And Upayavira is right, you can choose to use a DirectSolrSpellChecker,
that way you don't need an external index at all.
On Mon, Dec 17, 2012 at 9:46 AM, Upayavira
If by "cronned commit" you mean auto-commit: auto-commits are local to
each node and are not distributed, so there is nothing like
cluster-wide atomicity there. The commit may be performed in one node
now, and in other nodes in 5 minutes (depending on the maxTime you have
configured).
If you
Try with quotes or escaping the whitespace:
fq=functiontitle_nl:"Management en Organisatie"
...or
fq=functiontitle_nl:Management\ en\ Organisatie
Make sure you use the correct case.
Tomás
On Mon, Dec 31, 2012 at 6:54 AM, PeterKerk vettepa...@hotmail.com wrote:
I'm trying to filter on the field functiontitle_nl when the user
It can't be *really* case independent. You could lowercase everything, but
you'd see the facet value in lowercase too. If you really need to search in
lowercase and display the original content on the facet value you could use
two fields, one for faceting (of type string) and one for filtering (of
AFAIK Solr 4 should be able to read Solr 3.6 indexes. Soon those files will
be updated to the 4.0 format and will not be readable by Solr 3.6 anymore. See
http://wiki.apache.org/lucene-java/BackwardsCompatibility
You should not use a 3.6 SolrJ client with a Solr 4 server.
Tomás
On Wed, Jan 2, 2013
I think it should be
-DzkHost=zoo1:8983,zoo2:8983,zoo3:8983/solrroot
Tomás
On Thu, Jan 3, 2013 at 2:14 PM, Mark Miller markrmil...@gmail.com wrote:
I don't really understand your question. More than one what?
More than one external zk node? Start up an ensemble, and pass a comma sep
list
Yes, you must issue hard commits. You can use autoCommit with
openSearcher=false. AutoCommit is not distributed; it has to be configured
in every node (which it will automatically be, because you are using the exact
same solrconfig for all your nodes).
The other option is to issue an explicit hard
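A sketch of that autoCommit setting in solrconfig.xml (the 60-second interval is just an example value):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- Hard-commit pending documents at most every 60 seconds -->
    <maxTime>60000</maxTime>
    <!-- Flush to disk without opening a new searcher -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```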
But Amit is right: when you use group.main, the number of groups is not
displayed, even if you set group.ngroups.
I think in this case numFound should display the number of groups instead
of the number of docs matching. The other option would be to keep numFound as
the number of docs matching and add
I think fieldValueCache is not per segment, only fieldCache is. However,
unless I'm missing something, this cache is only used for faceting on
multivalued fields
On Thu, Jan 17, 2013 at 8:58 PM, Erick Erickson erickerick...@gmail.comwrote:
filterCache: This is bounded by 1M * (maxDoc) / 8 *
(outside of Solr), and indexing a bulk every 10
minutes.
Thanks.
On Fri, Jan 18, 2013 at 2:15 AM, Tomás Fernández Löbbe
tomasflo...@gmail.com wrote:
I think fieldValueCache is not per segment, only fieldCache is. However,
unless I'm missing something, this cache is only used for faceting
I think the best way will be to pre-process the document (or use a custom
UpdateRequestProcessor). Another option, if you'll only use the "cities"
field for faceting/sorting/searching (you don't need the stored content),
would be to use a regular copyField and a KeepWordFilter on the
"cities" field.
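As a sketch (the source field name and the words file are assumptions), that copyField plus KeepWordFilter approach in schema.xml could look like:

```xml
<!-- Analyzer that keeps only tokens listed in cities.txt -->
<fieldType name="cities_only" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeepWordFilterFactory" words="cities.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>

<field name="cities" type="cities_only" indexed="true" stored="false" multiValued="true"/>
<!-- "body" is a placeholder for whatever field holds the raw text -->
<copyField source="body" dest="cities"/>
```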
Yes and no. SolrCloud won't do it automatically. But it will make it easier
for you to add/remove nodes from a collection. And if you use
CloudSolrServer for queries, the new nodes will automatically be used for
queries once they are ready to respond.
Tomás
On Fri, Feb 1, 2013 at 7:35 AM,
Yes, currently the only option is to shutdown the node. Maybe not the
cleanest way to remove a node. See this jira too:
https://issues.apache.org/jira/browse/SOLR-3512
On Thu, Feb 7, 2013 at 7:20 AM, yriveiro yago.rive...@gmail.com wrote:
Hi,
Exists any way to eject a node from a solr
In step 4, once node 1 gets all the responses, it merges and sorts
them: let's say you requested 15 docs from each shard (because the rows
parameter is 15); at this point node 1 merges the results from all the
responses and gets the top 15 across all of them. The second request is
only to get
It should be easy to extend ExtendedDismaxQParser and do your
pre-processing in the parse() method before calling edismax's parse. Or
maybe you could change the way EDismax is splitting the input query into
clauses by extending the splitIntoClauses method?
Tomás
On Wed, Mar 6, 2013 at 6:37 AM,
A couple of comments about your deployment architecture too. You'll need to
change zoo.cfg to make the Zookeeper ensemble work with two instances
as you are trying to do; have you done that? The example zoo.cfg
configuration is intended for a single ZK instance as described in the SolrCloud
You can also take a look at http://wiki.apache.org/solr/HowToContribute
Tomás
On Mon, Mar 11, 2013 at 9:20 AM, Andy Lester a...@petdance.com wrote:
On Mar 11, 2013, at 11:14 AM, chandresh pancholi
chandreshpancholi...@gmail.com wrote:
I am beginner in this field. It would be great if
Hi Feroz, due to Lucene's backward compatibility policy (
http://wiki.apache.org/lucene-java/BackwardsCompatibility ), a Solr 4.1
instance should be able to read an index generated by a Solr 3.5 instance.
This would not be true if you need to change the schema. Also, be careful
because Solr 4.1
Hi Floyd, I don't think the feature that allows using multiple gaps for a
range facet is committed. See
https://issues.apache.org/jira/browse/SOLR-2366
You can achieve similar functionality by using facet.query. See:
Hi, you are not giving us much information. What's your default operator?
What do you mean by "results are not correct"?
On Tue, Jul 26, 2011 at 3:04 AM, deniz denizdurmu...@gmail.com wrote:
Here is the situation..
when i make search with 3 or more words, the results are correct, however if
i
result was not correct.
Am I missing something?
Floyd
2011/7/26 Tomás Fernández Löbbe tomasflo...@gmail.com
Hi Floyd, I don't think the feature that allows using multiple gaps for a
range facet is committed. See
https://issues.apache.org/jira/browse/SOLR-2366
You can achieve a similar
Hi Michael, I guess this could be solved using grouping as you said.
Documents inside a group can be sorted on a field (in your case, the "version"
field; see parameter group.sort), and you can show only the first one. It
will be more complex to show facets (post-grouping faceting is work in
I guess this is because the Lucene QP is interpreting the 'OR' operator.
You can either:
- use lowercase
- use another query parser, like the term query parser. See
http://lucene.apache.org/solr/api/org/apache/solr/search/TermQParserPlugin.html
Also, if you just removed the or term from the
I think not, but you could get a similar result by using faceting on
the field and setting the parameter facet.mincount=1. It will be slower than
the TermsComponent.
On Wed, Aug 17, 2011 at 1:19 PM, Darren Govoni dar...@ontrenet.com wrote:
Hi,
Is it possible to restrict the /terms
As far as I know, Solr's trunk is pretty stable, so you shouldn't have many
problems with it if you test it correctly. Lucid's search platform is built
upon the trunk (
http://www.lucidimagination.com/products/lucidworks-search-platform/enterprise
).
The one thing I would be concerned about is the
until I got the needed
functionality.
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: Wednesday, August 17, 2011 5:12 PM
To: solr-user@lucene.apache.org
Subject: Re: 'Stable' 4.0 version
As far as I know, Solr's trunk is pretty stable, so you
Hi Jame, the size for the queryResultCache is the number of queries that
will fit into this cache. autowarmCount is the number of queries that are
going to be copied from the old cache to the new cache when a commit occurs
(actually, the queries are going to be executed again against the new
rather than number of queries ?
2011/8/19 Tomás Fernández Löbbe tomasflo...@gmail.com
Hi Jame, the size for the queryResultCache is the number of queries that
will fit into this cache. autowarmCount is the number of queries that are
going to be copied from the old cache to the new cache
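Both settings are configured on the cache element in solrconfig.xml; a sketch with example sizes:

```xml
<!-- size: max number of queries cached; autowarmCount: queries
     re-executed against the new searcher after a commit -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="128"/>
```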
You can do a lot of dependency injection through solrconfig.xml and
schema.xml: specify search components, update processors, filters,
similarity, etc. Solr doesn't use any DI framework; everything is built in
in a pluggable manner. What kind of customizations do you need to apply?
maybe we can
Both of those features are needed at indexing time, right? If it is, the
best place to put it is on an UpdateRequestProcessor. See
http://wiki.apache.org/solr/UpdateRequestProcessor
Tomás
On Mon, Aug 29, 2011 at 11:06 AM, samuele.mattiuzzo samum...@gmail.comwrote:
I've posted a similar
I think I get it. Many of the objects that depend on the configuration
are instantiated by using reflection, is that an option for you?
On Mon, Aug 29, 2011 at 12:33 PM, Federico Fissore fiss...@celi.it wrote:
Tomás Fernández Löbbe, il 29/08/2011 16:39, ha scritto:
You can do a lot
a
SearchComponent that uses that retriever, right?
On Mon, Aug 29, 2011 at 1:30 PM, Federico Fissore fiss...@celi.it wrote:
Tomás Fernández Löbbe, il 29/08/2011 17:58, ha scritto:
I think I get it. Many of the objects that depend on the
configuration
are instantiated by using reflection
You also need to create a class that
extends org.apache.solr.update.processor.UpdateRequestProcessorFactory. This
is the one that you indicate in the solrconfig and it's the one that will
instantiate your UpdateRequestProcessor.
see
auto* is not a leading wildcard query; a leading wildcard query would be
*car. Wildcard queries in general will take more time than regular
queries: the closer the wildcard is to the first character, the more
expensive the query is.
With a regular field type, Solr will allow wildcards (not
Hi Scott, now your queries are going to be created by a QueryParser. You
have a couple of options here; the most common are LuceneQueryParser,
DismaxQueryParser and ExtendedDismaxQueryParser, but there are others. The
QueryParser will be creating all those queries you mentioned, for example,
if you
If you need those kinds of searches then you should probably not be using
the KeywordTokenizerFactory. Is there any reason why you can't switch to a
WhitespaceTokenizer, for example? Then you could use a simple phrase query
for your search case. If you need everything as a token, you could use a
and not
other queries. Is it possible to control the query cache results and
window size for each query separately?
2011/8/19 Tomás Fernández Löbbe tomasflo...@gmail.com
From my understanding, seeing the cache as a set of key-value pairs, this
cache has the query as key and the list of IDs
Hi Andrew, I think this question belongs on the users list more than on the
dev list.
Programmatically, it depends on the client library you are using; if you are
using SolrJ, it should be something like:
SolrQuery query = new SolrQuery();
...
query.setRows(20);
query.setStart(40);
You can
Well, yes. You probably have a string field for that content, right? So the
content is being compared as strings, not as numbers; that's why something
like 1000 is lower than 2. Leading zeros would be an option. Another option
is to separate the field into numeric fields and sort by those (this last
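A sketch of the numeric-field option in schema.xml (the field name "price" is an assumption; trie-based types like solr.TrieIntField sort numerically):

```xml
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
<!-- Sorting on this field compares numbers, so 2 sorts before 1000 as expected -->
<field name="price" type="tint" indexed="true" stored="true"/>
```

Then sort with sort=price asc instead of sorting on the string field.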
Hi Tod, Solr doesn't actually crawl. If you need to feed Solr with that kind
of information you'll have to use a crawling tool or implement that
yourself.
Regards,
Tomás
On Fri, Oct 21, 2011 at 2:48 PM, Tod listac...@gmail.com wrote:
I have a feeling the answer is no since you wouldn't
I was thinking about this: would it make sense to keep the master/slave
architecture, adding documents to both the master and the slaves, doing soft
commits (only) on the slaves and hard commits on the master? That way you
wouldn't be doing any merges on the slaves. Would that make sense?
On Fri, Oct 21, 2011