As I said, it is working, but I'm blocked again. I would like each user to have a
different index in a different dataDir. Is there a way to do this with only
one Solr application and without using the multicore option? Can I
switch the index by table name?
You should be able to do it using ${feed-source.last-update}.
You can find examples and an explanation at
http://wiki.apache.org/solr/DataImportHandler
Regards,
Jayendra
On Mon, Sep 5, 2011 at 8:02 AM, penela pen...@gmail.com wrote:
Hi!
This might probably be a stupid question, but I can't find
Thanks Jan,
I will look into using the JDBC driver.
/Tobias
2011/9/5 Jan Høydahl jan@cominvent.com
Hi,
You should be able to index Notes databases through JDBC, either with DIH
or ManifoldCF. Have not tried myself though.
--
Jan Høydahl, search solution architect
Cominvent AS -
well... the problem was... a silly typo in the config...
the case is closed, guys :P
-
Smart, but he doesn't work... If he worked, he could do it...
--
View this message in context:
http://lucene.472066.n3.nabble.com/Field-with-No-data-tp3312488p3312785.html
Sent from the Solr - User mailing list archive at Nabble.com.
I am using Solr 3.1.0 and running Solr with multicore.
This is the Solr import URL that was working yesterday:
http://localhost:8983/solr/jobs/dataimport?command=delta-import&wt=json
Now it's not working. What could the issue be?
Thanks
Hello, our current goal is finding a solution for a translation company. Their
issue is that very often they have to translate documents that contain parts
copy-pasted from another document that was translated before, so
they end up doing the same work more than once.
I am a newcomer to
The issue is that I don't want to have multiple schema.xml and solrconfig.xml
files. For a multicore implementation, is it possible to use the same
solrconfig.xml and schema.xml files? All my cores will have the same
behaviour, so if I want to change something in the future I want to be able to
do this
Hello,
If I understand correctly,
1) To get the number of matches per price interval in Solr, here are the
different ways one can use:
  * per static price intervals:
    * facet.range
    * facet.query
  * per dynamic price intervals:
    * there is a Solr JIRA to get dynamic
I found the solution. I will use multicore but all cores will have the same
instanceDir, so I will have only one configuration for all cores.
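For anyone searching the archives later, a minimal solr.xml along those lines might look like the sketch below; the core names and data paths are made up for illustration:

```xml
<!-- solr.xml: every core shares one instanceDir (one schema.xml and one
     solrconfig.xml), but each core gets its own dataDir, hence its own index -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="user1" instanceDir="." dataDir="/var/solr/data/user1"/>
    <core name="user2" instanceDir="." dataDir="/var/solr/data/user2"/>
  </cores>
</solr>
```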
Thank you very much for your support!
Closing a searcher while thread(s) is/are still using it is definitely
bad, so this code looks spooky...
But: is it possible something higher up (in Solr) is ensuring this code
runs exclusively? I don't know enough about this part of Solr...
Mike McCandless
http://blog.mikemccandless.com
On
Luis:
First, I managed to invite you to chat by mistake and don't see a way to
cancel it... Sorry.
Anyway, what exactly slows down? Indexing? Search performance on the slaves?
We need some more details to answer your questions, it might help to review:
What if you were to make your field a multi-valued field, and at indexing time,
split up the text into sentences, putting each sentence into the solr document
as one of the values for the mv field? Then I think the normal highlighting
code can be used to pull the entire value (i.e. sentence)
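A rough sketch of the schema side of this idea, with hypothetical field names:

```xml
<!-- schema.xml: one stored value per sentence, so the highlighter
     can return whole values (i.e. whole sentences) -->
<field name="sentences" type="text" indexed="true" stored="true"
       multiValued="true"/>
```

At indexing time the client would split the document into sentences and send each sentence as a separate value:

```xml
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="sentences">This is the first sentence.</field>
    <field name="sentences">And this is the second one.</field>
  </doc>
</add>
```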
Hello, Erik.
Thank you for answering. The performance decreases during indexing: while
replication is in progress, the batch machine cannot receive and process
the indexing requests quickly, and some read-timed-out exceptions appear.
Luckily I just load some hundreds of documents every day
The $page object is an instance of PageTool... which currently is constructed
this way:
public PageTool(SolrQueryRequest request, SolrQueryResponse response)
It doesn't work, currently, with the grouped stuff. It's just some simple math
to get the page size, etc. You can pretty easily
I solved a similar kind of issue (where I actually needed multi-valued
attributes, e.g. people with multiple or hyphenated last names) by including
PositionFilterFactory in the filter list for the analyzer in such fields'
fieldType, thereby setting the position of each value to 1.
JRJ
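In case it helps others, a sketch of what that analyzer chain could look like; the fieldType name and tokenizer choice here are illustrative, not from the original message:

```xml
<fieldType name="text_names" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- PositionFilter gives tokens after the first a position increment
         of 0, so the values stack at the same position -->
    <filter class="solr.PositionFilterFactory"/>
  </analyzer>
</fieldType>
```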
Not quite sure what you are asking.
You can certainly use copyField to copy a field, and then apply regex on the
destination field's fieldType. We do that.
JRJ
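A sketch of that setup, with hypothetical field names and a made-up pattern:

```xml
<!-- schema.xml: copy the raw field into a cleaned-up sibling -->
<copyField source="title" dest="title_clean"/>

<fieldType name="text_regex" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- apply a regex to the copied value, e.g. strip non-alphanumerics -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="[^A-Za-z0-9 ]" replacement="" replace="all"/>
  </analyzer>
</fieldType>
```

The title_clean field would then be declared with the text_regex fieldType.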
-Original Message-
From: alx...@aim.com [mailto:alx...@aim.com]
Sent: Thursday, September 01, 2011 4:16 PM
To:
You seem to have two questions:
1) How to write a script to import data
2) How to schedule that in Windows
For #1, I suggest that you visit the Solr tutorials at
http://lucene.apache.org/solr/tutorial.html to learn what commands might be
used to import data. You might find that you need to
Not that I know of; serving queries as you replicate is the normal
use case.
My first recommendation is just to get some more space...
Best
Erick
On Tue, Sep 6, 2011 at 12:21 AM, shinkanze rajatrastogi...@gmail.com wrote:
Thanks, Erick, for your suggestions.
My other slave is working fine with
Hi all
The question might sound stupid. I have a large synonym file and have
created the synonyms something like below:
allergy test => Doctors, Doctors-Medical, PHYSICIANS, Physicians Surgeons
I have also added the synonym filter to be applied at index time, like
below:
fieldType
Hi Mark,
The implementation is logging anyway; we have subclassed
StreamingUpdateSolrServer and used handleError to log, but inspecting the
stack trace in the handleError method
does not give any clue about the document(s) that failed. We have a solution
that uses Solr as a backend for indexing
: allergy test => Doctors, Doctors-Medical, PHYSICIANS, Physicians
: Surgeons
..
: analyzer type=index
...
: filter class=solr.SynonymFilterFactory synonyms=synonyms.txt
: ignoreCase=true expand=true/
...
: But when I do a search for allergy, I get 0 results
You've
Well, if the documents do get indexed, then all you have to do
is lengthen the timeout for your connection; what is it set to now?
But this isn't expected. The first place I'd look is whether your
indexing machine is allowing the operating system enough memory
to manage its disk caches well. The second
Hi Chris
The terms Doctors, Doctors-Medical are all present in my document body,
title fields, etc., but Allergy Test is not. So what I am doing in the synonym
file is: if a user searches for allergy test, bring back results that match
Doctors etc., i.e.
Explicit mappings match any token sequence
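For reference, the explicit-mapping form in synonyms.txt uses =>; the whole left-hand token sequence has to match before it is mapped to the terms on the right:

```
# explicit mapping: the two-token sequence 'allergy test'
# is mapped to the terms on the right-hand side
allergy test => Doctors, Doctors-Medical, PHYSICIANS, Physicians Surgeons
```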
Hi All,
Has anyone tackled the challenge of question detection in search using solr?
A lot of my users don't do simple keyword searches, but rather ask questions
as their queries. For example:
what are the business hours?
who is the ceo?
what's the weather?
more information about joe
Are there
Hi Simon,
Thanks for your reply and looking into this one.
Yes, we are using Tika/SolrJ as the client process, indexing with the JVM max
heap set to 8 GB on a 64-bit VM with the Server option enabled.
We have a mixed set of emails and documents ranging from a few KB to
700 MB.
It won't work given your current schema. To get the desired results, you would
need to expand your synonyms at both index AND query time. Right now your
schema seems to specify it only at index time.
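One way that could look in schema.xml, sketched with the synonym filter on both analyzers (the fieldType name and tokenizer choice are just for illustration):

```xml
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```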
So, as the other respondent indicated, currently you replace allergy with the
other list
Thanks Hoss.
What you described does make sense; however, we are migrating over from
FAST, and the date-range buckets work differently there. The expectations of
the business users are based on the existing system.
I need to reset their expectations ;-) ...
Thanks for the very detailed
With SolrDeletionPolicy you can choose the number of versions of the index
to store (maxCommitsToKeep; it defaults to 1). So, how can you revert to
an arbitrary version that you have stored? Is there anything in Solr or in
Lucene to pick the version of the index to load?
Thank you
Emmanuel
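For reference, the deletion policy in question is configured in solrconfig.xml roughly like this (the values are examples):

```xml
<!-- solrconfig.xml, inside <mainIndex> in Solr 3.x -->
<deletionPolicy class="solr.SolrDeletionPolicy">
  <str name="maxCommitsToKeep">3</str>
  <str name="maxOptimizedCommitsToKeep">1</str>
</deletionPolicy>
```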
Hi,
Note that if you want more control over the buckets, you may use facet.query
instead. Also, under development is SOLR-2366 which will eventually give a more
powerful gap specification to range facets.
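For example, a hand-picked set of price buckets via facet.query could look like this (field name and bounds are illustrative):

```
q=*:*&rows=0&facet=true
&facet.query=price:[0 TO 100]
&facet.query=price:[100 TO 500]
&facet.query=price:[500 TO *]
```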
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training
I don't know of any way in Solr land. (One of the reasons for being able to
keep >1 commit was to be able to deal with NFS semantics whereby files are
immediately deleted, in contrast to the normal *nix behavior of not deleting
until the last file handle is closed; that way you avoid 'stale file
If you're batching the documents when you send them to Solr with the #add
method, you may be out of luck - Solr doesn't do a very good job of
reporting which document in a batch caused the failure.
If you reverted to CommonsHTTPServer and added a doc at a time there
wouldn't be any ambiguity, but
It won't work given your current schema. To get the desired results, you
would need to expand your synonyms at both index AND query time. Right now
your schema seems to specify it only at index time.
I have a very large schema, spanning close to 10K lines; if I add query-time
expansion as well, it will be huge.
Erick,
thanks for your support.
I would like to know one more thing: why does my replication estimated time
go negative, i.e. "Estimated Time is -45 seconds", while replicating?
thanks
Rajat
Can you let me know how you got the suggestions? Are you using a
file-based spell checker, or an index-based one? What is the exact
structure for them, and how do you set up the Solr configuration for the spell
checker? I am not able to get the spell checker to build.
Please guide.
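For what it's worth, an index-based spellchecker is typically wired up in solrconfig.xml along these lines (the field and directory names are illustrative):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- field whose indexed terms feed the spelling index -->
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>
```

The spelling index can also be built explicitly by adding spellcheck=true&spellcheck.build=true to a request that uses the component.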