Well, it has everything to do with how we have designed the rest of our deployment
environment. The point is basically that there should be no configuration
in Tomcat itself: the webapp should know all its settings and could
basically be deployed to any Tomcat without configuration. Also there
Lance, I know there are many variables; that's why I'm asking where to start
and what to check.
Updates are sent every 5-7 seconds, each update contains between 1 and 50
docs. Commit is done every time (on each update).
Currently queries aren't very frequent - about 1 query every 3-5 seconds,
but th
> I'm not sure why deltas were implemented this way. Possibly it was designed
> to behave like some of our object-to-relational libraries? In any case,
> there are 2 ways to do deltas and you just have to take your pick based on
> what will work best for your situation. I wouldn't consider th
You could use the Solr Command Utility (SCU), which runs on Windows and can be
scheduled to run.
https://github.com/justengland/Solr-Command-Utility
This is a Windows tool that indexes into a core and swaps it in if the
indexing succeeds. It works with Solr.
Let me know if you have any questions.
On
On 3/28/2012 12:46 PM, Artem Shnayder wrote:
Does anyone know of any work done to automatically run a backup prior to a
DataImportHandler full-import?
I've asked this question on #solr and was pointed to
https://wiki.apache.org/solr/SolrReplication?highlight=%28backup%29#HTTP_API
which
is helpfu
If you want to do *anything* across all matches, you probably should be using a
relational database. Search engines, like Solr, are optimized for just the best
matches. Fetching all matches is likely to be slow. Relational databases are
optimized for working with the whole set of matches.
wunde
Hi Mark,
As far as I know, you can either limit server access by only allowing certain IPs,
which is crude, or, as you said, use servlet BASIC authentication.
Solr is just a WAR file deployed in a Servlet container, so you can use all of
the capabilities of Servlet containers to control access. I
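As a sketch of the BASIC-auth option, the standard Servlet mechanism in web.xml looks roughly like this (the role name, realm name, and URL pattern are placeholders; you still have to define the user and role in your container's realm, e.g. tomcat-users.xml):

```xml
<!-- web.xml fragment: protect the Solr admin pages with BASIC auth -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Solr admin</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- placeholder role; map a user to it in your container's realm -->
    <role-name>solr-admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Solr</realm-name>
</login-config>
```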
Hello,
We have been trying out the new SolrCloud from trunk and were able to set up
a multi-slice Solr configuration. But whenever we have multiple
slices/shards, the number of threads used by Solr climbs past 25k
during indexing and it then runs out of memory/native threads. It occurs even
Hi, It does the magic! Thanks a lot!
Although I found the transformer was added there without any reference to it, so I
suppose it is not needed.
Thanks again!
-Original Message-
From: Luca Cavanna [mailto:cavannal...@gmail.com]
Sent: March 28, 2012 23:16
To: solr-user@lucene.apache.org
Cc: Ahme
Hi
What is the best way to retrieve the score of a query across ALL documents in
the index? i.e.
given:
1) docs, [A,B,C,D,E,...M] of M dimensions
2) Query q
searcher outputs (efficiently)
1) the score of q across _all_ M dimensional documents, ordered by index
number. i.e
score(q) = [A=0
It doesn't delete - this has come up before with the idea of adding a param.
The discussion was part of a Jira adding a diff param that was closed though -
go ahead and make a new one if you'd like.
Sent from my iPhone
On Mar 28, 2012, at 6:58 PM, vybe3142 wrote:
> I'll try this again after
How often are updates? And when are commits? How many CPUs? How much
query load? There are so many variables.
Check the mailing list archives and Solr issues, there might be a
similar problem already discussed. Also, attachments do not work with
Apache mailing lists. (Well, ok, they work for direc
I'll try this again after restarting SOLR .
--
View this message in context:
http://lucene.472066.n3.nabble.com/Unload-true-doesn-t-delele-Index-file-when-unloading-a-core-tp3862816p3866259.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
I have an old Solr 1.3 instance and I'm running into an issue. I have a field
configured in such a way that "fifa 12" and "fifa12" should match the same
documents, as can be seen in the screenshot below.
[image: inline image 1]
When I run the query itemNameSearch:fifa12, I get the following result:
[
Thanks for your help, James, I'll try that out.
On Wed, Mar 28, 2012 at 12:30 PM, Dyer, James wrote:
> Unfortunately there isn't a good way to solve this. Your best bet is to
> trigger a backup before the nightly re-index using
> /replication?command=backup
>
> The problem is the backup runs async
Thanks Otis, but that's not an option for me. It "should" be pretty easy to do
this with Solr, so I will keep working on it.
Great, William, I will give this method a try, thanks.
On 28 March 2012 06:11, William Bell wrote:
> I am also very confused at the use case for the Suggester compo
Is there a way to get Solr/Luke to return the aggregated results across
shards? I tried setting the shards parameter
(http://localhost:8983/solr/admin/luke?shards=localhost:8983/solr,localhost:7574/solr)
but only got the results for localhost:8983. I am able to search across the
shards so my url's
I changed my velocity settings to false, restarted Solr and it worked!
Thanks for your help.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Tomcat-Install-tp3865290p3865773.html
Sent from the Solr - User mailing list archive at Nabble.com.
update -
ok - I was reading about replication here:
http://wiki.apache.org/solr/SolrReplication
and noticed comments in the solrconfig.xml file related to HTTP Basic
Authentication and the usage of the following tags:
username
password
*Can I place these tags in the request handler to
Unfortunately there isn't a good way to solve this. Your best bet is to
trigger a backup before the nightly re-index using /replication?command=backup
The problem is the backup runs asynchronously, so it's hard to script a way to
determine whether the backup is finished. What we do is poll t
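Since backup completion has to be detected by polling the replication details, a small helper can be sketched. This is only a sketch: the element names (`lst name="backup"`, `str name="status"`) are what I recall from 3.x/4.x `/replication?command=details` responses, so verify them against your own instance before relying on this.

```python
import xml.etree.ElementTree as ET

def backup_status(details_xml):
    """Return the backup status string from a /replication?command=details
    response, or None if no backup section is present yet."""
    root = ET.fromstring(details_xml)
    # Look for <lst name="backup"> anywhere in the response tree.
    for lst in root.iter('lst'):
        if lst.get('name') == 'backup':
            for s in lst.findall('str'):
                if s.get('name') == 'status':
                    return s.text
    return None

# A polling loop would then look roughly like this (URL is a placeholder):
#   import time, urllib.request
#   url = 'http://localhost:8983/solr/replication?command=details'
#   while backup_status(urllib.request.urlopen(url).read()) != 'success':
#       time.sleep(5)
```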
thank you, James.
--
View this message in context:
http://lucene.472066.n3.nabble.com/preventing-words-from-being-indexed-in-spellcheck-dictionary-tp3861472p3865670.html
Sent from the Solr - User mailing list archive at Nabble.com.
hello,
environment:
running solr 3.5 under jboss 5.1
I have been searching the user list along with the locations below to find
out how you require a user to authenticate into the Solr /admin page. I
thought this would be a common issue - but maybe not ;)
Any help would be appreciated.
than
My typical workflow is a once-a-day full-import with hourly delta-imports.
Ideally, the backup would occur only during the full-import commits. Is
there a way to differentiate in the replication handler?
On Wed, Mar 28, 2012 at 11:54 AM, Dyer, James wrote:
> I don't know of any effort out there t
I don't know of any effort out there to have DIH trigger a backup
automatically. However, you can set the replication handler to automatically
backup after each commit. This might solve your problem if you aren't
committing frequently.
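The backup-after-commit setting mentioned above can be sketched in solrconfig.xml roughly like this. Treat the parameter names as something to verify against the SolrReplication wiki for your version; `backupAfter` is the one I believe controls this:

```xml
<!-- Master-side replication handler: replicate and also take a
     backup whenever a commit happens.  "backupAfter" also accepts
     values like "optimize" and "startup", as far as I recall. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="backupAfter">commit</str>
  </lst>
</requestHandler>
```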
James Dyer
E-Commerce Systems
Ingram Content Group
(615)
Hi Jerry,
thanks for your response:)
This thread ("SolrCloud new...") is new to me, thanks!
How far along are you with your setup? Which problems/errors do you have?
Best regards
Vadim
2012/3/27 jerry.min...@gmail.com :
> Hi Vadim,
>
> I too am experimenting with SolrCloud and need help with setting
Does anyone know of any work done to automatically run a backup prior to a
DataImportHandler full-import?
I've asked this question on #solr and was pointed to
https://wiki.apache.org/solr/SolrReplication?highlight=%28backup%29#HTTP_API
which
is helpful but is not an automatic backup in the context
1) you need apache velocity in the class path for tomcat
2) here's a way of dealing with these that may go quicker than asking on the
mailing list every time they come up -- clip out the pertinent part of the stack
trace (in this case
"java.lang.NoClassDefFoundError: org/apache/velocity/context/C
: Hello, I have configured Solr inside Tomcat and I get the following error
: when I go to browser and click on the solr admin link:
https://wiki.apache.org/solr/Solr3.5#Post-Release_Notes
"Although the solr example configs are designed with the intent that they
can be copied, modified, and re-
Hello,
I am having an issue with a range query in Solr 1.3.
The query price:[1 TO 20] is returning values out of this range, like
23.00 and 55.00. The field type of the price field is sfloat.
When I check this from the admin Debug Query, I am seeing junk instead of the price.
example:
price:[ 1 TO 2
Tommaso,
Thank you so much for looking into this, I am very grateful!
Chris
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-with-UIMA-tp3863324p3865291.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello, I have configured Solr inside Tomcat and I get the following error
when I go to browser and click on the solr admin link:
HTTP Status 500 - Severe errors in solr configuration. Check your log files
for more detailed information on what may be wrong. If you want solr to
continue after config
Is there any good reason for keeping solr_home within the webapp?
It should work, but I would not recommend it. Have you configured
solr_home somewhere?
One way in Tomcat is to do something like this:
--
--
in either: $tomcat_home/conf/Catalina/localhost/solr.xml or in
$tomcat_home/conf/ser
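The context-fragment approach mentioned above (its XML was stripped by the archive) typically looks something like the following sketch; the paths are placeholders for your own layout:

```xml
<!-- $tomcat_home/conf/Catalina/localhost/solr.xml -->
<Context docBase="/path/to/solr.war" debug="0" crossContext="true">
  <!-- JNDI entry Solr reads to locate solr_home -->
  <Environment name="solr/home" type="java.lang.String"
               value="/path/to/solr_home" override="true"/>
</Context>
```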
Hi Chris,
I never tried the Nutch integration, so I can't help with that.
However, I'll try to reproduce your setup and will let you know how it
turns out for me.
Tommaso
2012/3/28 chris3001
> Still not getting there on Solr with UIMA...
> Has anyone taken example 1 (RoomAnnotator) and su
Hi,
you should change your data-config, moving the data that comes from the
FileListEntityProcessor to its entity, one level up. Try this configuration:
Still not getting there on Solr with UIMA...
Has anyone taken example 1 (RoomAnnotator) and successfully tested this by
any chance?
Thanks to Tommaso my curl statement has changed to /update:
curl http://localhost:8080/solr/update?commit=true -H "Content-Type:
text/xml" --data-binary '4Test Room
Hello folks,
I work with Solr 4.0 r1292064 from trunk.
My index grows fast; with 10 million docs I get an index size of 150GB
(25% stored, 75% indexed).
I want to find out which fields (content) are too large, so I can consider countermeasures.
How can I localize/discover the largest fields in my index?
Luke(late
Janne,
You're correct on how the delta import works. You specify 3 queries:
- deletedPkQuery = query should return all "id"s (only) of items that were
deleted since the last run.
- deltaQuery = query should return all "id"s (only) of items that were
added/updated since the last run.
- deltaImp
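The three queries can be sketched in a data-config.xml entity; the table and column names here are invented for illustration, and the `${dataimporter.*}` variables are the ones DIH substitutes at import time:

```xml
<!-- Sketch of a DIH delta-capable entity (hypothetical schema) -->
<entity name="item" pk="id"
        query="SELECT * FROM item"
        deletedPkQuery="SELECT id FROM item_deletions
                        WHERE deleted_at &gt; '${dataimporter.last_index_time}'"
        deltaQuery="SELECT id FROM item
                    WHERE updated_at &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item
                          WHERE id = '${dataimporter.delta.id}'"/>
```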
On 3/28/2012 6:04 AM, Spadez wrote:
I am writing a bash script which will install SOLR. I had two quick
questions:
Is there a way to update SOLR, changing only the files which have changed
from the previous version, or is it a case that I need to fully uninstall
the old version and then reinstal
Hi Henri,
you have not provided very much information, so, here comes a guess:
try ${bdte1} instead of $bdte1 - maybe Velocity resolves $bdte and
concatenates "1" instead of trying the longer value as variable, first.
Chantal
On Wed, 2012-03-28 at 12:04 +0200, henri.gour...@laposte.net wrote:
Tommaso-
Thank you so much for your reply and pointing this out! I will look into it.
However, when I run Nutch I still don't see the new fields:
$ bin/nutch crawl urls -solr http://localhost:8080/solr/ -depth 1 -topN 2
Does that still have to do with the update/extract call?
Thanks again for you
Hi ,
I have a multi-valued field and want to sort the docs by the order in which a
particular text, e.g. 'B1', was added.
How should I query? ad_text is the multivalued field.
t
B1
B2
B3
B2
B1
B3
B1
B2
B3
B3
B2
B1
Hi Chris,
2012/3/28 chris3001
> I am having a hard time integrating UIMA with Solr. I have downloaded the
> Solr 3.5 dist and have it successfully running with nutch and tika on
> windows 7 using solrcell and curl via cygwin. To begin, I copied the 6 jars
> from solr/contrib/uima/lib to the work
Hmmm, seems like it should. But.
1> if this is a Windows box, then any open searchers will prevent the
deletion from happening.
2> if this is a *nix box, and anything has the files open, the files
will hang around
until the process that has them open goes away.
Best
Erick
On Tue, Mar 2
Hi,
I am writing a bash script which will install SOLR. I had two quick
questions:
Is there a way to update SOLR, changing only the files which have changed
from the previous version, or is it a case that I need to fully uninstall
the old version and then reinstall the newest version.
Secondly,
Hi
My company has just decided to use maven to build new projects, which then
includes building solr with maven too.
But then it was decided that solr_home should also be installed somewhere
within the webapp. Now I have the problem that Solr can't find
the config files and so on.
The following code fails on the $bdte1 substring. Both $bdte and $bdte1
appear to be identical!
The code triggers the following error message:
The problem persists with various values of the indices.
Am I missing something?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Velocity-su
Isn't it administratively easier with multiple cores instead of multiple
webapps?
Regards
Sujatha
On Tue, Mar 27, 2012 at 6:24 PM, Erick Erickson wrote:
> It might be administratively easier to have multiple webapps, but
> it shouldn't really matter as far as I know...
>
> Best
> Erick
>
> On