Hi, I am currently trying to write an embedded-Jetty Java app that runs
Solr and uses SolrJ, accepting POSTs telling it to do a batch index, or a
deletion, or what have you. At this point I am completely lost trying to
follow http://wiki.apache.org/solr/SolrJetty . In my constructor I am
Apparently the row returns a null 'board_id';
your stacktrace suggests this. Even if it is fixed, I guess it may not
work, because you are storing the id as
board-${test.board_id}
and unless your query returns something like board-some-id it may
not work for you.
Anyway, I shall put in a fix.
I have raised an issue and fixed it:
https://issues.apache.org/jira/browse/SOLR-1228
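For reference, a minimal sketch of the delta-delete configuration under discussion, as it might look in data-config.xml (the table name, column names, and query text are assumptions for illustration). The key point is that deletedPkQuery has to resolve to the same uniqueKey value the entity actually stored:

```xml
<!-- Sketch only: table/column names are assumed, not taken from the thread -->
<entity name="test" pk="board_id"
        transformer="TemplateTransformer"
        query="SELECT board_id, title FROM boards"
        deletedPkQuery="SELECT board_id FROM boards WHERE deleted = 'Y'">
  <!-- id is built from the pk via a template, so a raw board_id returned
       by deletedPkQuery will not match the stored uniqueKey "board-..."
       (this mismatch is what SOLR-1228 addresses) -->
  <field column="id" template="board-${test.board_id}"/>
</entity>
```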
2009/6/18 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com:
apparently the row returns a null 'board_id' --
your stacktrace suggests this. Even if it is fixed I guess it may not
work because you are storing
MilkDud schrieb:
OK, so let's suppose I did index across just the album. Using that
index, how would I be able to handle searches of the form artist name
track name?
What does the user interface look like? Do you have separate fields for
artists and tracks? Or just one field?
If I do the
Otis Gospodnetic schrieb:
[...] nothing prevents the indexing client from sending the same doc
to multiple shards. In some scenarios that's exactly what you want
to do.
What kind of scenario would that be?
One scenario is making use of a small and a large core to provide near
real-time search -
Manepalli, Kalyan schrieb:
I am seeing an issue with the filtercache setting on my solr app
which is causing slower faceting.
Here is the configuration.
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="256"/>
hitratio : 0.00
inserts : 973531
evictions : 972978
size :
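With a hit ratio of 0.00 and evictions almost equal to inserts, the cache is evicting entries before they can ever be reused, which suggests the number of distinct filters far exceeds the cache size. A sketch of a larger configuration (the sizes here are illustrative guesses, to be tuned against the stats page, not recommendations from the thread):

```xml
<!-- Illustrative sizes only; tune until evictions stay near zero -->
<filterCache class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="4096"/>
```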
Rakhi Khatwani schrieb:
[...] how do we do a distributed search across multicores? Is it
just like how we query using multiple shards?
I don't know how we're supposed to use it. I did the following:
http://flunder:8983/solr/xpg/select?q=bla&shards=flunder:8983/solr/xpg,flunder:8983/solr/kk
On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig m...@as-guides.com wrote:
Rakhi Khatwani schrieb:
[...] how do we do a distributed search across multicores? Is it
just like how we query using multiple shards?
I don't know how we're supposed to use it. I did the following:
Rakhi Khatwani schrieb:
On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig m...@as-guides.com
wrote:
I don't know how we're supposed to use it. I did the following:
http://flunder:8983/solr/xpg/select?q=bla&shards=flunder:8983/solr/xpg,flunder:8983/solr/kk
I am getting a page load error...
Hi Michael,
Sorry for the misinterpretation.
In that case, it's the same as querying multiple shards. :)
Thanks,
Raakhi
On Thu, Jun 18, 2009 at 4:09 PM, Michael Ludwig m...@as-guides.com wrote:
Rakhi Khatwani schrieb:
On Thu, Jun 18, 2009 at 3:51 PM, Michael Ludwig
On Jun 17, 2009, at 10:32 PM, Mark Miller wrote:
Right, so if you are on 1.3 or early 1.4 dev, with so many uniques,
you should be using the FieldCache method of faceting. The RAM
depends mostly on the number of documents and the number of unique terms.
With 1.4 you may be using an
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the
fields are not multi-valued, then it is using the FieldCache method.
Which of course does use the filterCache, and works best when the
filterCache size is
Hello all,
I have a simple question :-)
In my project it is mandatory to use Jboss 4.0.1 SP3 and Java 1.5.0_06/08.
The software relies on Solr 1.4.
Now, I am aware that some JSP Admin pages will not be displayed due to some
Java5/6 dependency but this is not a problem because rewriting some of
On Jun 18, 2009, at 4:51 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
apparently the row returns a null 'board_id'
No. I'm working with a test database situation with a single record,
and I simply do a full-import, then change the deleted column to 'Y'
and try a delta-import. The
On Jun 18, 2009, at 4:51 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
apparently the row returns a null 'board_id'
I replied No earlier, but of course you're right here. The
deletedPkQuery I originally used was not returning a board_id column.
And even if it did, that isn't the uniqueKey (id
On Thu, Jun 18, 2009 at 8:35 AM, Mark Millermarkrmil...@gmail.com wrote:
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the fields
are not multi-valued, then it is using the FieldCache method. Which of
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 8:35 AM, Mark Millermarkrmil...@gmail.com wrote:
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the fields
are not multi-valued, then it is using the FieldCache
Mark Miller wrote:
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 8:35 AM, Mark Millermarkrmil...@gmail.com
wrote:
That's why I asked about multi-valued terms. If he's not using the enum
faceting method (which only makes sense with fewer uniques), and the
fields
are not multi-valued, then it is
Hi Giovanni,
Solr 1.4 does work fine in JBoss (all of the features, including all of
the admin pages). For example, I am running it in JBoss 4.0.5.GA on JDK
1.5.0_18 without problems. I am also using Jetty instead of Tomcat, however
instructions for getting it to work in JBoss with
Hi Hossman,
We are also facing a similar issue:
is there any way to boost fields in standard query parser itself?
~Vikrant
hossman wrote:
: The reason I brought the question back up is that hossman said:
...
: I tried it and it didn't work, so I was curious if I was still doing
I am faceting on single values only. I ran a load test against the Solr app and
found that under increased load the faceting just gets slower and slower. That
is why I wanted to investigate the filterCache and any other features to tweak
the performance.
As suggested by Mark in the earlier email, I
Hi Vicky,
Vicky_Dev schrieb:
We are also facing the same problem mentioned in the post (we are using
the DisMaxRequestHandler):
When we search for q=prdTitle_s:ladybird&qt=dismax, we are
getting 2 results -- unique key ID=1000 and unique key ID=1001
(1) Append debugQuery=true to your
On Jun 18, 2009, at 10:54 AM, Vicky_Dev wrote:
is there any way to boost fields in standard query parser itself?
You can boost terms using field:term^2.0 syntax
See http://wiki.apache.org/solr/SolrQuerySyntax and down into http://lucene.apache.org/java/2_4_0/queryparsersyntax.html
for more
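As a small sketch of the field:term^boost syntax Erik mentions, here is a hypothetical helper that assembles such clauses as plain strings (the field names and boost values are made up for illustration, not taken from the thread):

```java
// Sketch: assembling standard-query-parser clauses with per-term boosts.
// Field names and boost factors are illustrative only.
public class BoostedQuery {

    // Builds a clause like "title:ipod^2.0"
    static String boost(String field, String term, double factor) {
        return field + ":" + term + "^" + factor;
    }

    public static void main(String[] args) {
        // Prefer title matches over body matches for the same term
        String q = boost("title", "ipod", 2.0) + " OR " + boost("body", "ipod", 1.0);
        System.out.println(q); // title:ipod^2.0 OR body:ipod^1.0
    }
}
```

With dismax, the same preference is usually expressed once through the qf parameter (e.g. qf=title^2.0 body) rather than boosting every term.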
On Thu, Jun 18, 2009 at 10:59 AM, Manepalli,
Kalyankalyan.manepa...@orbitz.com wrote:
I am faceting on the single values only.
You may have only added a single value to each field, but is the field
defined to be single valued or multi valued?
Also, what version of Solr are you using?
-Yonik
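For reference, whether a field is single- or multi-valued is declared per field in schema.xml; a hypothetical example (field name and type are assumptions):

```xml
<!-- Illustrative field definition -->
<field name="category" type="string" indexed="true" stored="true" multiValued="false"/>
```

The distinction matters because Solr picks its faceting method partly based on this attribute.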
Hey,
So... I'm assuming your problem is that you're having trouble deploying
Solr in Jetty? Or is your problem that it's deploying just fine but your
code throws an exception when you try to run it?
I am running Solr in Jetty, and I just copied the war into the webapps
directory and it
I'm having some trouble getting the PlainTextEntityProcessor to populate a
field in an index. I'm using the TemplateTransformer to fill 2 fields, and
have a timestamp field in schema.xml, and these fields make it into the
index. Only the plainText data is missing. Here is my configuration:
Hi,
I'm currently using facet.query to do my numerical range faceting. I
basically use a fixed price range of €0 to €10,000 in steps of €500, which
means 20 facet.queries plus an extra facet.query for anything above
€10,000. I use the inclusive/exclusive query as per my question two days
ago so
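The bucketing described above can be sketched as a small generator for the facet.query parameter values. The field name "price" and the exclusive-upper-bound bracket syntax are assumptions based on the description, not details from the mail:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: 20 fixed-width price buckets plus one open-ended bucket,
// lower bound inclusive, upper bound exclusive.
public class PriceFacets {

    static List<String> buildRangeQueries(String field, int max, int step) {
        List<String> queries = new ArrayList<>();
        for (int lower = 0; lower < max; lower += step) {
            // [lower TO upper} : include the lower bound, exclude the upper
            queries.add(field + ":[" + lower + " TO " + (lower + step) + "}");
        }
        // Catch-all bucket for anything at or above max
        queries.add(field + ":[" + max + " TO *]");
        return queries;
    }

    public static void main(String[] args) {
        for (String q : buildRangeQueries("price", 10000, 500)) {
            System.out.println("facet.query=" + q);
        }
    }
}
```

Each returned string becomes one facet.query parameter on the request.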
Can I transport the index from Solr 1.2 to Solr 1.3 without
resubmitting/reloading again from the database?
Francis
The fields are defined as single-valued and they are non-tokenized.
I am using Solr 1.3, waiting for the release of Solr 1.4.
Thanks,
Kalyan Manepalli
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Thursday, June 18, 2009 10:15 AM
To:
On Thu, Jun 18, 2009 at 12:19 PM, Manepalli,
Kalyankalyan.manepa...@orbitz.com wrote:
The fields are defined as single-valued and they are non-tokenized.
I am using Solr 1.3, waiting for the release of Solr 1.4.
Then the filterCache won't be used for faceting, just for filters.
You should be
Can you just log it and see what is contained in the plainText field
(using LogTransformer)?
On Thu, Jun 18, 2009 at 8:54 PM, Jay Hilljayallenh...@gmail.com wrote:
I'm having some trouble getting the PlainTextEntityProcessor to populate a
field in an index. I'm using the TemplateTransformer to
Maybe he is not using the FieldCache method?
Yonik Seeley wrote:
On Thu, Jun 18, 2009 at 12:19 PM, Manepalli,
Kalyankalyan.manepa...@orbitz.com wrote:
The fields are defined as single-valued and they are non-tokenized.
I am using Solr 1.3, waiting for the release of Solr 1.4.
Then the
Why do you have:
query.set("hl.maxAnalyzedChars", -1);
Have you tried using the default? Unless -1 is an undoc'd feature, this
means you wouldn't get anything back! This should normally be a fairly
hefty value and defaults to 51200, according to the wiki.
And why:
query.set("hl.fragsize", 1);
I've tried with the default values and it didn't work either.
On Thu, Jun 18, 2009 at 2:31 PM, Mark Miller markrmil...@gmail.com wrote:
Why do you have:
query.set("hl.maxAnalyzedChars", -1);
Have you tried using the default? Unless -1 is an undoc'd feature, this
means you wouldn't get anything back!
On Thu, Jun 18, 2009 at 1:22 PM, Mark Millermarkrmil...@gmail.com wrote:
Maybe he is not using the FieldCache method?
It occurs to me that this might be nice info to add to debugging info
(the exact method used + perhaps some other info).
-Yonik
http://www.lucidimagination.com
A couple of things I forgot to mention:
Solr version: 1.3
Environment: WebSphere
On Thu, Jun 18, 2009 at 2:34 PM, Bruno brun...@gmail.com wrote:
I've tried with the default values and it didn't work either.
On Thu, Jun 18, 2009 at 2:31 PM, Mark Miller markrmil...@gmail.comwrote:
Why do you have:
Nothing off the top of my head ...
I can play around with some of the solrj unit tests a bit later and
perhaps see if I can dig anything up.
Note:
if you expect wildcard/prefix/etc queries to highlight, they will not
with Solr 1.3.
query.set("hl.highlightMultiTerm", true);
The above only
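For orientation, a request exercising the parameters discussed in this thread might carry something like the following (field names and values are illustrative, and hl.highlightMultiTerm only takes effect in Solr 1.4):

```
q=log:ipod&hl=true&hl.fl=log&hl.fragsize=100&hl.maxAnalyzedChars=51200&hl.highlightMultiTerm=true
```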
Note that highlighting is NOT part of the document list returned.
It's in an additional NamedList section of the response (with
name=highlighting)
Erik
On Jun 18, 2009, at 1:22 PM, Bruno wrote:
Hi guys.
I'm new to using highlighting, so probably I'm making some stupid
mistake.
Here is the query, searching for the term ipod in the log field:
I've checked the NamedList you told me about, but it contains only one
highlighted doc, when I have more docs that should be highlighted.
On Thu, Jun 18, 2009 at 3:03 PM, Erik Hatcher e...@ehatchersolutions.comwrote:
Note that highlighting is NOT part of the document list returned. It's
Mark,
Where do we specify the method? fieldCache or otherwise
Thanks,
Kalyan Manepalli
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, June 18, 2009 12:22 PM
To: solr-user@lucene.apache.org
Subject: Re: FilterCache issue
Maybe he is not using
Just figured out what happened... It's necessary for the schema to have a
uniqueKey set; otherwise, highlighting will have one or fewer entries, as the
map's key is the doc uniqueKey. So on debugging I figured out that the
QueryResponse tries to put all highlighted results in a map with a null key...
And unfortunately, that isn't the best approach for highlighting to
take - a uniqueKey shouldn't be required for highlighting. I've yet
to see a real-world deployment of Solr that did not have a uniqueKey
field, but there's no reason Solr should make that assumption.
Erik
On Jun
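For reference, the uniqueKey that Bruno's schema was missing is declared in schema.xml roughly like this (the field name id is the conventional choice, not mandated):

```xml
<!-- In the fields section -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<!-- After the fields section -->
<uniqueKey>id</uniqueKey>
```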
Its the facet.method param:
http://wiki.apache.org/solr/SimpleFacetParameters#head-7574cb658563f6de3ad54cd99a793cd73d593caa
--
- Mark
http://www.lucidimagination.com
Manepalli, Kalyan wrote:
Mark,
Where do we specify the method? fieldCache or otherwise
Thanks,
Kalyan Manepalli
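Concretely, facet.method is a plain request parameter (a Solr 1.4 addition; the field name here is illustrative):

```
facet=true&facet.field=category&facet.method=fc
```

fc selects the FieldCache method; enum selects the term-enumeration method.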
Got that. Since I am still using Solr 1.3, the defaults should work fine:
FieldCache for single-valued and enum for multi-valued fields.
Thanks,
Kalyan Manepalli
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, June 18, 2009 3:01 PM
To:
A question for anyone familiar with the details of the time-based
autocommit mechanism in Solr:
if I am running several cores on the same server and send updates to
each core at the same time, what happens? If all the cores have
their autocommit time run out at the same time, will every core try
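For context, time-based autocommit is configured per core in each core's solrconfig.xml, so the timers run independently; a sketch (values illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>  <!-- commit at most 60s after the first pending update -->
    <maxDocs>10000</maxDocs>  <!-- or after 10k pending docs, whichever comes first -->
  </autoCommit>
</updateHandler>
```

Giving each core a different maxTime is one way to keep the commits from coinciding.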
Hi,
I've hit a bit of a problem with destemming and could use some advice.
Right now there is a word in the index called Stylesight and another
word Stylesightings, which was just added. When users search for
Stylesightings, the client really only wants them to get results
that match
Are you using Porter Stemming? If so I think you can just specify your
word in the protwords.txt file (or whatever you've called it).
Check out http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
and the example config for the Porter Stemmer:
<fieldtype name="myfieldtype"
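Brendan's snippet is cut off above; a sketch of what such a field type might look like in Solr 1.3 (the analyzer chain details are assumptions): the protected attribute points at protwords.txt, and listing a word there exempts it from stemming, so e.g. Stylesight and Stylesightings can stay distinct terms:

```xml
<fieldtype name="myfieldtype" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Words listed in protwords.txt are left unstemmed -->
    <filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
  </analyzer>
</fieldtype>
```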
Yes, that's exactly what I needed. I don't know how I missed that.
Thank you!
--
Steve
On Jun 18, 2009, at 4:49 PM, Brendan Grainger wrote:
Are you using Porter Stemming? If so I think you can just specify
your word in the protwords.txt file (or whatever you've called it).
Check out
Michael Ludwig-4 wrote:
MilkDud schrieb:
What do you expect the user to enter?
* "dream theater innocence faded" - certainly wrong
* "dream theater" "innocence faded" - much better
Most likely they would just enter dream theater innocence faded, no
quotes. Without any quotes around any
Hello Daryl,
thank you very much for sharing your experience with me :-)
My software architect reported some exceptions thrown when accessing some
admin JSPs using Solr 1.4, JBoss 4.0.1 SP3 and Tomcat, Java JDK 1.5.0_06.
I will forward the info you gave me.
Thank you very much.
Giovanni
On
building a temporary index would certainly work, but it's a question of
how efficient it would be (ie: how many users do you have, how often do
they log in, how long does it take to build a typical index, how many
concurrent users will you have, etc...)
one solution i've seen to a problem
i'm a bit confused. hoping someone can help.
solr is awesome on my macbook for development.
i've been fighting with getting solr-jetty running on my ubuntu box
all day.
after countless searching, it seems that there is no .war file in the
distro
should this be the case?
the actual
On Thu, Jun 18, 2009 at 4:27 PM, Peter Wolaninpeter.wola...@acquia.com wrote:
I think I understand
that all the pending changes are on disk already, so the commit that
happens when the time is up is really just opening new searchers that
include the added documents.
Only some of the pending
My problem is that my project doesn't compile and I have no way of knowing
if I'm on the right track code-wise. There just isn't any comprehensive
guide out there for building a Solr/Jetty app.
Development Team wrote:
Hey,
So... I'm assuming your problem is that you're having trouble
On Thu, Jun 18, 2009 at 4:00 PM, Jonathan Vanascojvana...@2xlp.com wrote:
can anyone give me a suggestion ? i haven't touched java / jetty / tomcat /
whatever in at least a good 8 years and am lost.
I spent a lot of time trying to get this working too. My conclusion
was simply that the .deb
So for now would it make sense to spread out the autocommit times for
the different cores?
Thanks.
-Peter
On Thu, Jun 18, 2009 at 7:07 PM, Yonik Seeleyyo...@lucidimagination.com wrote:
On Thu, Jun 18, 2009 at 4:27 PM, Peter Wolaninpeter.wola...@acquia.com
wrote:
I think I understand
that
On Thu, Jun 18, 2009 at 8:30 PM, Peter Wolaninpeter.wola...@acquia.com wrote:
So for now would it make sense to spread out the autocommit times for
the different cores?
Sure.
You might also consider using commitWithin (solr 1.4) when updating
the index - then you could either send the updates
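For illustration, commitWithin rides along on the update message itself, e.g. in the XML update format (document contents are made up):

```xml
<!-- Solr commits these docs within 10 seconds; no explicit commit message needed -->
<add commitWithin="10000">
  <doc>
    <field name="id">board-1</field>
    <field name="title">example</field>
  </doc>
</add>
```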
: Date: Fri, 08 May 2009 08:27:58 -0400
: From: Mark Miller
: Subject: Re: Solr spring application context error
:
: I've run into this in the past as well. It's fairly annoying. Anyone know why
: the limitation? Why aren't we passing the ClassLoader that's loading Solr
: classes as the parent to
Chris Hostetter wrote:
: Date: Fri, 08 May 2009 08:27:58 -0400
: From: Mark Miller
: Subject: Re: Solr spring application context error
:
: I've run into this in the past as well. It's fairly annoying. Anyone know why
: the limitation? Why aren't we passing the ClassLoader that's loading Solr
:
: Yeah, I actually looked at the code and saw that later. I was forgetting the
: issue that bugged me (and confusing it with the trouble this guy was having) -
: which is that plugins in the solr/lib folder cannot load from other jars in
: that folder. I think that was the actual issue.
WTF?!?
Marc: I know it's been a while since you asked this question, but I didn't
see any reply ... in general the problem is that a low boost is still a
boost: it can only improve the score of documents that match.
one way to fake a negative boost is to give a high boost to everything
that does
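Hossman's truncated suggestion is the standard workaround: since a boost below 1 still only raises matching docs, you boost the complement instead. A hypothetical sketch (field name and value are made up), e.g. as a dismax boost query:

```
bq=(*:* -category:refurbished)^10
```

Everything except the "bad" documents gets the boost, which effectively demotes them.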
Chris Hostetter wrote:
: Yeah, I actually looked at the code and saw that later. I was forgetting the
: issue that bugged me (and confusing it with the trouble this guy was having) -
: which is that plugins in the solr/lib folder cannot load from other jars in
: that folder. I think that was the
Development Team wrote:
To specify the
solr-home I use a Java system property (instead of the JNDI way) since I
already have other necessary system properties for my apps.
Could you please give me a concrete example of how you did this? There is no
example code or command-line example to