Thanks. That is what we concluded, i.e., to write a wrapper method within our
service that builds the Solr query by examining the example bean.
Thanks again.
The DIH XML config file has to specify a dataSource. In my case, and
possibly for many others, the logon credentials as well as the MySQL server
paths differ between environments (dev, stag, prod). I don't want to
end up with three different DIH config files and three different
handlers
Hi Pranav,
If you are using Tomcat to host Solr, you can define your data source in
context.xml file under tomcat configuration.
You have to refer to this datasource with the same name in all the 3
environments from DIH data-config.xml.
This context.xml file will vary across 3 environments having
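A minimal sketch of the Tomcat approach described above; the JNDI name, MySQL driver, URL, and credentials are all assumed placeholders, and only the context.xml values change per environment:

```xml
<!-- Tomcat context.xml (environment-specific; values here are assumptions) -->
<Resource name="jdbc/solrdb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          username="solr" password="secret"/>

<!-- DIH data-config.xml (identical in all three environments) -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/solrdb"/>
```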
Hi Simon,
I checked my log files one more time to get the error timestamps.
I get the first Error at 14:37:
06.07.2012 14:37:52 org.apache.solr.common.SolrException log
SCHWERWIEGEND: null:ClientAbortException: java.net.SocketException: Broken pipe
at
That's cool. Is there something similar for Jetty as well? We use Jetty!
*Pranav Prakash*
temet nosce
On Wed, Jul 11, 2012 at 1:49 PM, Rahul Warawdekar
rahul.warawde...@gmail.com wrote:
Hi Pranav,
If you are using Tomcat to host Solr, you can define your data source in
context.xml file
http://wiki.eclipse.org/Jetty/Howto/Configure_JNDI_Datasource
http://docs.codehaus.org/display/JETTY/DataSource+Examples
On Wed, Jul 11, 2012 at 2:30 PM, Pranav Prakash pra...@gmail.com wrote:
That's cool. Is there something similar for Jetty as well? We use Jetty!
*Pranav Prakash*
temet
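Following the Jetty JNDI howto linked above, the equivalent Jetty-side sketch looks roughly like this; the class names vary by Jetty version, and the JNDI name and connection details are assumptions, not tested config:

```xml
<!-- jetty.xml (sketch): register the same JNDI name DIH will look up -->
<New id="solrdb" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>jdbc/solrdb</Arg>
  <Arg>
    <New class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
      <Set name="Url">jdbc:mysql://localhost:3306/mydb</Set>
      <Set name="User">solr</Set>
      <Set name="Password">secret</Set>
    </New>
  </Arg>
</New>
```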
Hi guys,
I'm using Solr 3.6 and I just found out there are some changes in the
request handlers configuration and the use of qt. I read the whole
SOLR-3161 issue and the updated wiki, but I'm confused.
I'd like to have a specific handler in order to make auto-complete
suggestions using the
I deviate from the examples by creating multiple cores (artists, tracks,
albums)
my boost is:
<str name="qf">song^4 artist^4 album</str>
<str name="pf">song artist^4</str>
<str name="pf2">artist^8</str>
One thing that is proving tough is coming up with the right
boosting. What I'm
doing is
Thanks for the reply. Yes I wanted the executable to run
after the commit operation. I would like to have the doc XML
though. The further processing might be intensive and so I
didn't want the document to wait for the extra processing
before committing (if I use UpdateRequestProcessor). The
Thanks Ahmet, I did that; it kinda worked (not as well as expected): the
document with ringtone, which was the 1st match, was moved to the 2nd position,
but I was expecting it to be at the very bottom. I tried other boost factors
up to 10E6 but no success.
Another issue is that I have some bad words I
There is a school of thought that suggests you should always set Xms
and Xmx to the same value if you expect your heap to hit Xms. This
results in your process only needing to allocate the memory once,
rather than in a series of little allocations as the heap expands.
I can't explain how this fixed
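The setting above amounts to two JVM flags; a sketch assuming Jetty's start.jar and a 4 GB heap (the size is a placeholder, not a recommendation):

```
# Pin the minimum and maximum heap to the same size so the JVM
# allocates it once at startup instead of growing it piecemeal.
java -Xms4g -Xmx4g -jar start.jar
```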
Hi,
I am working as a student aid for a company that uses Solr as a search
engine.
When using the class mentioned in the subject, it would seem that for a query
string, the stemmer files are loaded several times, once for each part of the
query using the stemmer (instead of being loaded just once, the
On Wed, Jul 11, 2012 at 10:57 AM, Vinicius Carvalho
viniciusccarva...@gmail.com wrote:
Hi there.
I was checking the FAQ and found that Solr does not support partial field
updates, right? So I assume that in order to update a document, one should
first retrieve it by its id and then change the required
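That read-modify-write pattern can be sketched as pure merge logic like this; the field names are made up, the actual fetch/re-add steps against Solr are omitted, and note the caveat that only *stored* fields come back from a query, so unstored fields are lost on reindex:

```python
# Read-modify-write sketch for Solr without partial updates.
def merge_update(stored_doc, changes):
    """Build the full document to re-add: all stored fields plus the changed ones."""
    updated = dict(stored_doc)  # everything Solr returned for this id
    updated.update(changes)     # overwrite only the fields that changed
    return updated              # re-add this whole document, then commit

doc = {"id": "42", "title": "old title", "views": 3}
print(merge_update(doc, {"views": 4}))
# → {'id': '42', 'title': 'old title', 'views': 4}
```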
Hi,
I've recently got NPE with 500 status with my search:
SEVERE: java.lang.NullPointerException
at
org.apache.lucene.index.DocTermOrds$TermOrdsIterator.reset(DocTermOrds.java:623)
at org.apache.lucene.index.DocTermOrds.lookup(DocTermOrds.java:649)
at
--- On Wed, 7/11/12, Vinicius Carvalho viniciusccarva...@gmail.com wrote:
From: Vinicius Carvalho viniciusccarva...@gmail.com
Subject: Re: Boosting tips
To: solr-user@lucene.apache.org
Date: Wednesday, July 11, 2012, 4:24 PM
Thank Ahmet, I did that, it kinda
worked (not as well as
This solves the problem by allocating memory up front, instead of at some
later point when the JVM needs it. At that later time there may not
be enough free memory left on the system to allocate.
On 7/11/2012 11:04 AM, Michael Della Bitta wrote:
There is a school of thought that suggests
Hello all,
I noticed something in one of our logs that periodically polls the status of
a data import.
Can someone help me understand where / how the times for Full Dump
Started are derived?
Here it shows the dataimport dump starting at 1:32:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst
I think the issue here is that DIH uses Woodstox BasicStreamReader
(see
http://woodstox.codehaus.org/3.2.9/javadoc/com/ctc/wstx/sr/BasicStreamReader.html)
which has only minimal DTD support. It might be best to use
ValidatingStreamReader
Thanks for the explanation and bug report Robert!
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Monday, July 09, 2012 3:18 PM
To: solr-user@lucene.apache.org
Subject: Re: problem adding new fields in DIH
Thanks again for reporting this Brent. I opened a JIRA
On 7/2/2012 2:33 AM, Nabeel Sulieman wrote:
Argh! (and hooray!)
I started from scratch again, following the wiki instructions. I did only
one thing differently; put my data directory in /opt instead of /home/dev.
And now it works!
I'm glad it's working now. I just wish I knew exactly what the
On Jul 11, 2012, at 2:52 PM, Shawn Heisey wrote:
On 7/2/2012 2:33 AM, Nabeel Sulieman wrote:
Argh! (and hooray!)
I started from scratch again, following the wiki instructions. I did only
one thing differently; put my data directory in /opt instead of /home/dev.
And now it works!
I'm
On 7/11/2012 2:55 PM, Alexander Aristov wrote:
content:?? doesn't work :)
I would try escaping them: content:\?\?\?\?\?\?
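Escaping like that can be automated; a minimal sketch, where the character set is an approximation of the Lucene query syntax special characters for your Solr version:

```python
# Escape Lucene/Solr query-syntax special characters with a backslash.
SPECIAL = '+-&|!(){}[]^"~*?:\\'

def escape_query(term):
    return ''.join('\\' + ch if ch in SPECIAL else ch for ch in term)

print(escape_query('??????'))   # → \?\?\?\?\?\?
```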
Hi Ahmet
Basically we have an application that does indexing of documents to SOLR. This
application is basically a third party and we didn't do much meddling with it.
There is another application that I'm developing to use some of the fields data
indexed (both when it is new or updated), do
OK, would it work to just add an fq clause fq=-user:10? Or, depending on your Solr
version, fq=*:* -user:10?
Best
Erick
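Either form is just an ordinary request parameter; a sketch of building the two variants client-side (the query and field name are placeholders):

```python
from urllib.parse import urlencode

# Pure negative filter (works on newer Solr versions):
newer = urlencode([("q", "*:*"), ("fq", "-user:10")])

# Explicit positive clause for older Solr versions:
older = urlencode([("q", "*:*"), ("fq", "*:* -user:10")])

print(newer)  # → q=%2A%3A%2A&fq=-user%3A10
```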
On Tue, Jul 10, 2012 at 5:45 AM, davidbougearel
david.bougea...@smile-benelux.com wrote:
OK, sorry for not being clear, and thanks again for your answers.
This isn't a good idea. You _must_ index the unique key in order for
documents having that unique key to be found and deleted when you
add another document with that uniqueKey.
Best
Erick
On Tue, Jul 10, 2012 at 10:32 AM, Noordeen, Roxy
roxy.noord...@wwecorp.com wrote:
-Original
Hi,
We upgraded to Solr 4.0 Alpha and our CPU usage shot up to 400%. In
profiling we are getting the following trace.
In text format
100.0% - java.lang.Thread.run
42 Collapsed methods
98.0% - org.apache.lucene.index.DocumentsWriter.updateDocument
77.0% - org.apache.lucene.index.DocumentsWriterPerThread.updateDocument
76.0% - org.apache.lucene.index.DocFieldProcessor.processDocument
76.0% -
There are about a zillion garbage collection options, see:
http://www.lucidimagination.com/blog/2011/03/27/garbage-collection-bootcamp-1-0/
for a great intro.
Be a bit careful. Allocating more memory to the JVM can cause the GCs to take
a longer time when they do occur. How much memory are you
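Before changing any of those options, GC pause behavior can be measured first; an assumed example command line using era-appropriate HotSpot logging flags (not a recommendation for any particular workload):

```
# Log GC activity so pause times can be inspected before tuning collectors.
java -Xmx4g -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar start.jar
```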
: Basically we have an application that does indexing of documents to
: SOLR. This application is basically a third party and we didn't do much
: meddling with it. There is another application that I'm developing to
: use some of the fields data indexed (both when it is new or updated), do
:
Hi, thanks for the reply. My reply is below:
: Basically we have an application that does indexing of documents to
: SOLR. This application is basically a third party and we didn't do much
: meddling with it. There is another application that I'm developing to
: use some of the fields data
: this is an option I am exploring if the postcommit doesn't work. It's
: just in my head i thought there must be some way to get the document
: through postCommit and that I'm just missing it because I can't find the
: document that says so. Someone told me it is possible if I use SolrJ, so
Hi.
: this is an option I am exploring if the postcommit doesn't work. It's
: just in my head i thought there must be some way to get the document
: through postCommit and that I'm just missing it because I can't find the
: document that says so. Someone told me it is possible if I use SolrJ,
Thanks. Can you explain more the first TermsComponent option to obtain
max(id)? Do I have to modify schema.xml to add a new field? How exactly do I
query for the lowest value of 1 - id?
--
View this message in context:
OK, this is the id, but in fact (sorry about this) my wish is the reverse: I
want to get just the facets for which I have the rights, so I want to put
fq=user:10 in order to get only facets with user:10.
In my fq I can have something like user:10 AND user:3 because it's auto-
generated by the rights of my
On Wed, Jul 11, 2012 at 8:11 PM, Pavitar Singh psi...@sprinklr.com wrote:
We upgraded to Solr 4.0 Alpha and our CPU usage shot up to 400%. In
profiling we are getting the following trace.
That could either be good or bad. Higher CPU can mean higher
concurrency. Have you benchmarked your indexing