stemming.
--
Regards,
Shalin Shekhar Mangar.
. coreX is serving the requests, coreY is updated and now you can
swap coreX with coreY so that new requests hit the updated index. I suggest
you look at the swap operation instead of index merge.
--
Regards,
Shalin Shekhar Mangar.
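For reference, the swap is a single CoreAdmin call; a minimal sketch of building it (the host, port, and core names are assumptions for illustration):

```python
# Build the CoreAdmin SWAP request that atomically exchanges two cores.
# Host/port and core names below are assumptions for illustration.
from urllib.parse import urlencode

solr_url = "http://localhost:8983/solr"
params = urlencode({"action": "SWAP", "core": "coreX", "other": "coreY"})
swap_url = f"{solr_url}/admin/cores?{params}"
print(swap_url)
# Against a live Solr you would issue it with urllib.request.urlopen(swap_url).
```

After the swap, requests addressed to coreX are served by what was coreY's index, with no downtime.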
executeWithRetry
[java] INFO: I/O exception (java.net.ConnectException) caught when
processing request: Connection refused
The Connection refused message suggests that your Solr instance is either
not running or you have given the wrong host/port in your driver.
--
Regards,
Shalin Shekhar Mangar.
any benefit from using a mergeFactor
of 50.
--
Regards,
Shalin Shekhar Mangar.
be added to this
and it becomes the latest and is again swapped?
Perhaps it is best if we take a step back and understand why you need two
identical cores?
--
Regards,
Shalin Shekhar Mangar.
of day and
it will again pass it on to COREX. This process continues everyday.
You could use the same approach that Solr 1.3's snapinstaller script used.
It deletes the files and creates hard links to the new index files.
--
Regards,
Shalin Shekhar Mangar.
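A rough sketch of that hard-link trick (temporary directories stand in for the snapshot and the live index; the file name is made up):

```python
# Mimic snapinstaller: delete the old index files, then hard-link the
# snapshot's files into the index directory so no data is actually copied.
import os, tempfile

snap_dir = tempfile.mkdtemp()   # stands in for the downloaded snapshot
index_dir = tempfile.mkdtemp()  # stands in for the live index directory

with open(os.path.join(snap_dir, "_0.cfs"), "w") as f:
    f.write("segment data")

for name in os.listdir(index_dir):            # delete the old index files
    os.remove(os.path.join(index_dir, name))
os.link(os.path.join(snap_dir, "_0.cfs"),     # hard link, not a copy
        os.path.join(index_dir, "_0.cfs"))
```

Because both directory entries point at the same inode, the install is near-instant regardless of index size.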
://wiki.apache.org/solr/CoreAdmin
--
Regards,
Shalin Shekhar Mangar.
I safely remove the index on the
slave and restart the slave and the slave will start over the replication
from scratch?
Yes, replication will copy the latest index from master on restarting the
slave.
--
Regards,
Shalin Shekhar Mangar.
the expected results, let us know and we can figure out the problem.
--
Regards,
Shalin Shekhar Mangar.
in trunk but that was implemented after
the 1.4 release.
--
Regards,
Shalin Shekhar Mangar.
are
queued up but they are still accepted. Are you using the same Solr server
for reads as well as writes?
--
Regards,
Shalin Shekhar Mangar.
/lucene/solr/trunk/src/test/org/apache/solr/core/TestJmxIntegration.java
--
Regards,
Shalin Shekhar Mangar.
you post
the stack trace of any exceptions that you can find in the logs?
--
Regards,
Shalin Shekhar Mangar.
to this problem.
Have you tried using CachedSqlEntityProcessor?
See http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
--
Regards,
Shalin Shekhar Mangar.
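A sketch of what that looks like in data-config.xml, following the wiki example (the table and column names here are hypothetical):

```xml
<!-- The child entity's rows are cached in memory and looked up by the
     'where' key instead of firing one SQL query per parent row. -->
<entity name="item" query="SELECT id, name FROM item">
  <entity name="feature"
          query="SELECT item_id, description FROM feature"
          processor="CachedSqlEntityProcessor"
          where="item_id=item.id"/>
</entity>
```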
approach work for your use-case? You can define a
secret key per core and share it with the application supposed to use that
core. Then you can write a Java Filter placed before SolrDispatchFilter
which can look at the request path and verify access.
--
Regards,
Shalin Shekhar Mangar.
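The check such a filter would perform can be sketched language-independently; here it is in Python, with the core-to-key mapping and the request-path shape as assumptions (in Solr this logic would live in a Java servlet Filter placed ahead of SolrDispatchFilter):

```python
# Verify that a request presents the right secret key for the core it
# targets. The mapping and path layout are assumptions for illustration.
SECRET_KEYS = {"core0": "s3cret0", "core1": "s3cret1"}

def is_allowed(request_path: str, presented_key: str) -> bool:
    parts = request_path.strip("/").split("/")
    core = parts[0] if parts else ""      # e.g. /core0/select -> core0
    return SECRET_KEYS.get(core) == presented_key
```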
that due to heavy load these logs gets removed or does not get
created.
No, heavy load does not cause Solr to stop logging.
--
Regards,
Shalin Shekhar Mangar.
Processing Document # 1
...
Caused by: javax.xml.transform.TransformerConfigurationException: Could not
compile stylesheet
Anyone that can help me out here? Or has a running example using XSLT with
DIH?
Can you send the complete stacktrace?
--
Regards,
Shalin Shekhar Mangar.
, 2010 at 5:19 PM, Erik Hatcher erik.hatc...@gmail.com
wrote:
won't some stemmers leave diacritics in the terms that ought to be
removed
before indexing?
On Feb 21, 2010, at 4:57 PM, Shalin Shekhar Mangar wrote:
Hello,
Looking over the CharFilter franchise, it seems to me that the
ASCIIFoldingFilter is a perfect candidate for being a CharFilter as it
performs character level substitutions like MappingCharFilter. However it is
not a CharFilter. Is there a reason why?
--
Regards,
Shalin Shekhar Mangar.
. So the only way I can think of why this happens is
because there is some persistent cache that survives the solr
restarts. Is this the case? Or why could this be?
Solr does not have a persistent cache. That is the operating system's file
cache at work.
--
Regards,
Shalin Shekhar Mangar.
munged by dismax into video poker video
poker... Which is wrong.
Have you tried the pf parameter?
--
Regards,
Shalin Shekhar Mangar.
On Tue, Feb 9, 2010 at 2:43 PM, Xavier Schepler
xavier.schep...@sciences-po.fr wrote:
Shalin Shekhar Mangar a écrit :
On Mon, Feb 8, 2010 at 9:47 PM, Xavier Schepler
xavier.schep...@sciences-po.fr wrote:
Hey,
I'm thinking about using dynamic fields.
I need one or more user specific
).
SOLR-1768 :)
--
Regards,
Shalin Shekhar Mangar.
might be querying for data set #2
Should I be defining multiple document .. or entity .. entries
Or what ??
You can define multiple entities (all at the root level) to import all your
views at once.
--
Regards,
Shalin Shekhar Mangar.
as they are not
specified explicitly. In this case, however, the problem is that the
${root.id} is case-sensitive. There is no way right now to resolve variables
in a case-insensitive manner.
--
Regards,
Shalin Shekhar Mangar.
will send and retrieve values from its field. It will then be used
to filter result.
How would it impact query performance ?
Can you give an example of such a query?
--
Regards,
Shalin Shekhar Mangar.
On Fri, Feb 5, 2010 at 4:07 AM, Jason Rutherglen jason.rutherg...@gmail.com
wrote:
Robert, thanks for redoing all the Solr analyzers to the new API! It
helps to have many examples to work from, best practices so to speak.
+1
Thank you so much Robert!
--
Regards,
Shalin Shekhar Mangar.
if there is another way through the admin pages to update solr
There is no way in the admin pages to do that right now. You will need to
use curl or post.jar for now.
--
Regards,
Shalin Shekhar Mangar.
to post. that would be enough...
Increase maxFieldLength in your solrconfig.xml. The default is 10,000 tokens.
--
Regards,
Shalin Shekhar Mangar.
in the Solr logs?
--
Regards,
Shalin Shekhar Mangar.
to get only the new records, then DataImportHandler
can index it, otherwise not. Perhaps you can write to Solr when you update
your database and commit periodically?
--
Regards,
Shalin Shekhar Mangar.
)
at org.apache.lucene.document.Field.init(Field.java:305)
at
org.apache.solr.schema.FieldType.createField(FieldType.java:210)
That exception indicates that a field name itself was null. Can you post
your data-config?
--
Regards,
Shalin Shekhar Mangar.
, 2010 10:47:51 AM org.apache.solr.core.SolrCore execute
That is just an INFO level log message. Have you seen an exception saying
that pollInterval cannot be null? If yes, can you please paste the stack
trace.
--
Regards,
Shalin Shekhar Mangar.
, QTime is 1124 but log / handler shows 90571 seconds.
Similar thing happens across all queries...
Any pointers on why this may be happening?
Which Solr version are you using? Is the performance as bad on a
non-virtualized instance too?
--
Regards,
Shalin Shekhar Mangar.
/group) to the document and then
filter on that e.g.
<field column="docType" template="group"/>
--
Regards,
Shalin Shekhar Mangar.
or write a custom
UpdateRequestProcessor to count the number of adds and throw an exception
once the limit is reached. Though the latter gets slightly tricky when you
delete by query or when you replace docs.
--
Regards,
Shalin Shekhar Mangar.
an analyzer.
--
Regards,
Shalin Shekhar Mangar.
. If you have changed
the solrconfig.xml or any other configuration file then that too needs to be
transferred to your production server.
--
Regards,
Shalin Shekhar Mangar.
to the test-files directory. I've added this information to
the wiki at http://wiki.apache.org/solr/TestingSolr
Note that you can also use ant from command-line to run your tests.
--
Regards,
Shalin Shekhar Mangar.
quite a few
people on this forum regarding this topic. Thanks Ahmet for challenging me
and Erik for the authoritative word :)
/me goes off to fix the wiki
--
Regards,
Shalin Shekhar Mangar.
those mammals that are not matched
by the actual query parameter?
I've read this twice but the problem is still not clear to me. I guess you
will have to explain it better to get a meaningful response.
--
Regards,
Shalin Shekhar Mangar.
will not cause the JVM to go out of memory.
--
Regards,
Shalin Shekhar Mangar.
every time.
Sure, make the text field stored, read the old document and create the
new one. Sorry, there is no way to update an indexed document in Solr (yet).
--
Regards,
Shalin Shekhar Mangar.
of the master index
is always less than the slave's index. This causes all the files to be
replicated. If that is the case then you don't need to worry.
--
Regards,
Shalin Shekhar Mangar.
the actual problem so that we can be of more help?
--
Regards,
Shalin Shekhar Mangar.
out that line to disable compression.
--
Regards,
Shalin Shekhar Mangar.
?
The ReplicationHandler configuration in the solrconfig.xml determines
whether the core is a master or a slave. You can use environment variables
to switch between master and slave if you are sharing the same
solrconfig.xml. See http://wiki.apache.org/solr/SolrReplication
--
Regards,
Shalin Shekhar Mangar.
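For example, one shared solrconfig.xml can carry both roles and be switched with system properties at startup, along the lines of the wiki example (the master host name is a placeholder):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
  </lst>
</requestHandler>
```

Start the master with -Denable.master=true and the slaves with -Denable.slave=true.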
can actually track the replication
progress on a slave, but you can't track the backup progress on a master.
You are right. This can be improved. See
https://issues.apache.org/jira/browse/SOLR-1714
--
Regards,
Shalin Shekhar Mangar.
On Fri, Jan 8, 2010 at 3:41 AM, Otis Gospodnetic otis_gospodne...@yahoo.com
wrote:
- Original Message
From: Shalin Shekhar Mangar shalinman...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wed, December 23, 2009 2:45:21 AM
Subject: Re: Adaptive search?
On Wed, Dec 23
constantly.
Core reload swaps the old core with a new core on the same configuration
files with no downtime. See CoreContainer#reload.
--
Regards,
Shalin Shekhar Mangar.
see
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
--
Regards,
Shalin Shekhar Mangar.
/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents
--
Regards,
Shalin Shekhar Mangar.
, as it removes certain sources
of thread contention.
How do I open the IndexReader with readOnly=true?
I can't find anything related to this parameter.
Solr always opens IndexReader with readOnly=true. It was added with SOLR-730
and released in Solr 1.3
--
Regards,
Shalin Shekhar Mangar.
at index-time. If
they are, you will need to re-index your documents after reloading the core.
--
Regards,
Shalin Shekhar Mangar.
. Can you open a jira issue?
--
Regards,
Shalin Shekhar Mangar.
On Wed, Dec 30, 2009 at 12:10 AM, Mohamed Parvez par...@gmail.com wrote:
Ditto. There should have been a DIH command to re-sync the index with the
DB.
But there is such a command; it is called full-import.
--
Regards,
Shalin Shekhar Mangar.
at work.
--
Regards,
Shalin Shekhar Mangar.
in our
system and we do want the list of document IDs matched. Is there a
better/different way of doing the same?
No, I guess not.
--
Regards,
Shalin Shekhar Mangar.
in
size. Is there a way of making the literal a POST variable rather than
a GET?
With Curl? Yes, see the man page.
Will Solr Cell accept it as a POST?
Yes, it will.
--
Regards,
Shalin Shekhar Mangar.
is not mentioned on the Solr wiki.
Thanks Lance. I've added it to the wiki at
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
--
Regards,
Shalin Shekhar Mangar.
it, but still I couldn't find any such property in config file in Solr
1.4 latest download.
May be I am overlooking some simple property. Any help would be
appreciated.
Note that there are things like replication which will not work if you are
using a RAMDirectory.
--
Regards,
Shalin Shekhar Mangar.
to modify SolrIndexSearcher to allow custom collectors soon for
field collapsing but for now you will have to modify it.
What should be my starting point? Custom search handler?
A custom SearchComponent which extends/overrides QueryComponent will do the
job.
--
Regards,
Shalin Shekhar Mangar.
). If you're
interested in the logs, I can send those to you.
What is the issue that you are facing? What is it exactly that you want to
change?
--
Regards,
Shalin Shekhar Mangar.
://wiki.apache.org/solr/FilterQueryGuidance
--
Regards,
Shalin Shekhar Mangar.
with 400 ms response time.
I am attaching solrconfig.xml for both master and slaves.
There is no autowarming on slaves, which is probably OK if you are committing
so often. But do you really need to index new documents so often?
--
Regards,
Shalin Shekhar Mangar.
). If you use NOW/DAY, the query
can be cached for a day.
--
Regards,
Shalin Shekhar Mangar.
. But TermsComponent can only sort by
frequency in descending order or by index order (lexicographical order).
Perhaps the patch in SOLR-1672 is more suitable for your task.
--
Regards,
Shalin Shekhar Mangar.
then some things are not possible
(think indexed fields). It would be far more efficient to just do a full-import
each time instead.
--
Regards,
Shalin Shekhar Mangar.
. See
http://wiki.apache.org/solr/FieldAliasesAndGlobsInParams
--
Regards,
Shalin Shekhar Mangar.
have.
--
Regards,
Shalin Shekhar Mangar.
.
--
Regards,
Shalin Shekhar Mangar.
a text
type as given in the example solrconfig.xml?
--
Regards,
Shalin Shekhar Mangar.
On Thu, Dec 24, 2009 at 2:39 AM, Prasanna R plistma...@gmail.com wrote:
On Tue, Dec 22, 2009 at 11:49 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
I am curious how an approach that simply uses the wildcard query
functionality on an indexed field would work.
It works
some kind of suppression. For example, as
individual clicks get older, you can push them down. Or you can put a
cap on the number of clicks used to rank the query.
We use clicks/views instead of just clicks to avoid this problem.
--
Regards,
Shalin Shekhar Mangar.
On Mon, Dec 21, 2009 at 5:37 PM, Marc Sturlese marc.sturl...@gmail.comwrote:
Should sortMissingLast param be working on trie-fields?
Nope, trie fields do not support sortMissingFirst or sortMissingLast.
--
Regards,
Shalin Shekhar Mangar.
already been stemmed.
Therefore, you'll need to re-index all documents which contained the words
you have specified in protwords.txt.
--
Regards,
Shalin Shekhar Mangar.
and/or plurals?
Or would I need to disable stemming to make this special case disappear?
For specific cases like this, you can add the word to a file and specify it
in schema, for example:
<filter class="solr.SnowballPorterFilterFactory" language="English"
        protected="protwords.txt"/>
--
Regards,
Shalin Shekhar Mangar.
2009/12/17 Steinar Asbjørnsen steinar...@gmail.com
Den 17. des. 2009 kl. 12.42 skrev Shalin Shekhar Mangar:
For specific cases like this, you can add the word to a file and specify
it
in schema, for example:
filter class=solr.SnowballPorterFilterFactory language=English
protected
about different
thing..
Can somebody please explain what it is?
That error message means that you are trying to sort on a tokenized (or
multi-valued) field which is not possible in Solr. Sorting must be done on a
field which has a single token per document.
--
Regards,
Shalin Shekhar Mangar.
.
If you specify a name for your data source, then that name must also be
specified in your entity. I think you meant to use type=JdbcDataSource
instead of name=JdbcDataSource. Do that and it will work.
--
Regards,
Shalin Shekhar Mangar.
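In other words, the attribute that selects the implementation is type; name is only an optional label that entities reference. A minimal sketch (driver, url, and credentials are placeholders):

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/dbname"
            user="db_user" password="db_pass"/>
```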
it will be applied by
default
<updateRequestProcessorChain name="custom" default="true">
</updateRequestProcessorChain>
--
Regards,
Shalin Shekhar Mangar.
people do. The hard part is when some documents are
shared across multiple users.
Bear with me if these are newbie questions please, this is my first day
with
SOLR.
No problem. Welcome to Solr!
--
Regards,
Shalin Shekhar Mangar.
should see the results.
--
Regards,
Shalin Shekhar Mangar.
is a map of (q, sort, n) to an ordered list of Lucene
docids. Assuming queryResultWindowSize is 20 and an average user does not go
beyond 20 results, your memory usage of the values in this map is
approximately 20*sizeof(int)*512. Add some more for keys, map, references etc.
--
Regards,
Shalin Shekhar Mangar.
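Plugging numbers into that estimate (assuming a cache of 512 entries and 4-byte ints):

```python
# Rough queryResultCache value-size estimate: docids kept per entry times
# int size times number of entries. 512 entries is the assumed cache size.
window = 20          # queryResultWindowSize: docids kept per cached query
int_size = 4         # bytes per Lucene docid (a Java int)
entries = 512        # assumed cache size
value_bytes = window * int_size * entries
print(value_bytes)   # 40960 bytes, i.e. about 40 KB for the values alone
```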
cause adjacent terms to be highlighted
which you may not want.
--
Regards,
Shalin Shekhar Mangar.
be collected server side - on Solr.
Do you know how to do that?
The number of hits is logged along with each query at INFO level. You can
analyze the logs to figure out this stat.
--
Regards,
Shalin Shekhar Mangar.
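A quick sketch of pulling the hit counts out of those INFO lines (the sample line is an approximation of what Solr logs per request):

```python
# Extract hits=N from a Solr request-log line; the sample approximates the
# INFO message SolrCore writes for each query.
import re

line = ("INFO: [core0] webapp=/solr path=/select "
        "params={q=title:solr} hits=42 status=0 QTime=3")

match = re.search(r"\bhits=(\d+)", line)
hits = int(match.group(1)) if match else None
print(hits)
```

Run over a whole log file, this gives a per-query hit-count distribution.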
then the answer is no. You must specify the field name.
--
Regards,
Shalin Shekhar Mangar.
the syntax in the last post in the Jira issue been approved so a patch
can be made?
SOLR-1387 is not final. There's also SOLR-1351. A local param based syntax
looks like the right way to go. However, we have not reached consensus yet.
You are welcome to take them forward.
--
Regards,
Shalin Shekhar Mangar.
it is the hierarchical faceting patch.
They may just be having flag-fields for each level e.g. for a solr-user
mail, they may index Email, user, dev in a multi-valued field called
Source and display it in a hierarchical UI.
--
Regards,
Shalin Shekhar Mangar.
that will allow me
to do this.
No, I don't think there is a way to request more than one
facet.prefix on the same field in one request.
--
Regards,
Shalin Shekhar Mangar.
more many listings sites. Typically we index the expiry date in
the document and exclude them through a filter e.g.
fq=expiry_date:[NOW/DAY+1DAYS TO *]
--
Regards,
Shalin Shekhar Mangar.