I have an application where I am calling DirectUpdateHandler2 directly with:
update.addDoc(cmd);
This will sometimes hit:
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.UnicodeUtil.UTF16toUTF8(UnicodeUtil.java:248)
at
I'm looking for a way to quickly flag/unflag documents.
This could be one at a time or by query (even *:*)
I have hacked together something based on ExternalFileField that is
essentially a FST holding all the ids (solr not lucene). Like the
FieldCache, it holds a
Hi-
I am trying to add a setting that will boost results based on
existence in different buckets. Using edismax, I added the bq
parameter:
location:A^5 location:B^3
I want this to put everything in location A above everything in
location B. This mostly works, BUT depending on the number of
thanks!
On Fri, Oct 26, 2012 at 4:20 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: How about a boost function, bf or boost?
:
: bf=if(exists(query(location:A)),5,if(exists(query(location:B)),3,0))
Right ... assuming you only want to ignore tf/idf on these fields in this
specific
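The bf approach above can be sketched as a request built with only the JDK. This is a hedged sketch: the core path and field names are hypothetical, and the `{!v='...'}` quoting of the subqueries is an assumption about how nested queries need to be passed to the function parser (the bare `query(location:A)` form from the thread may or may not parse depending on version).

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BucketBoost {
    // Builds an edismax request carrying the bucket boost function from
    // the thread. Nothing is sent; this only assembles the URL.
    static String buildQuery(String userQuery) {
        String bf = "if(exists(query({!v='location:A'})),5,"
                  + "if(exists(query({!v='location:B'})),3,0))";
        return "/solr/select?defType=edismax"
             + "&q=" + URLEncoder.encode(userQuery, StandardCharsets.UTF_8)
             + "&bf=" + URLEncoder.encode(bf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("ipod"));
    }
}
```

Because bf is additive, the 5/3/0 values dominate only if they are large relative to the base scores, which is why the thread says it "mostly works".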
If you optimize the index, are the results the same?
maybe it is showing counts for deleted docs (i think it does... and
this is expected)
ryan
On Sat, Aug 25, 2012 at 9:57 AM, Fuad Efendi f...@efendi.ca wrote:
This is a bug in Solr 4.0.0-Beta Schema Browser: Load Term Info shows 9682
News,
for the ExtractingRequestHandler, you can put anything into the
request contentType.
try:
addFile( file, "application/octet-stream" )
but anything should work
ryan
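What that amounts to on the wire can be sketched with only the JDK (no solrj). The URL, params, and id are assumptions; the point is just that the Content-Type header can be a generic value, since Tika sniffs the stream itself. Nothing is actually sent, so this runs without a server.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ExtractPost {
    // Prepares (but does not send) a POST to the ExtractingRequestHandler
    // with a deliberately generic Content-Type.
    static HttpURLConnection prepare(String solrUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(solrUrl + "/update/extract?literal.id=doc1&commit=true")
                            .openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            return conn;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        HttpURLConnection conn = prepare("http://localhost:8983/solr");
        System.out.println(conn.getRequestMethod() + " " + conn.getURL());
        // With Solr running you would now stream the file bytes to
        // conn.getOutputStream() and read the response.
    }
}
```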
On Thu, Jun 7, 2012 at 2:32 PM, Koorosh Vakhshoori
kvakhsho...@gmail.com wrote:
In latest 4.0 release, the addFile() method has
In 4.0, solr no longer uses JSP, so it is not enabled in the example setup.
You can enable JSP in your servlet container using whatever method
they provide. For Jetty, using start.jar, you need to add the command
line: java -jar start.jar -OPTIONS=jsp
ryan
On Mon, May 14, 2012 at 2:34 PM,
On 5/15/12 10:56 AM, Ryan McKinley ryan...@gmail.com wrote:
In 4.0, solr no longer uses JSP, so it is not enabled in the example
setup.
You can enable JSP in your servlet container using whatever method
they provide. For Jetty, using start.jar, you need to add the command
line: java -jar
thanks!
On Wed, May 2, 2012 at 4:43 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: How do I search for things that have no value or a specified value?
Things with no value...
(*:* -fieldName:[* TO *])
Things with a specific value...
fieldName:A
Things with no value
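The patterns combine naturally. A small sketch of helpers building them, where the field and value names are placeholders supplied by the caller (the "or" case is my own composition, not from the thread):

```java
public class FieldPresenceQueries {
    // The query patterns from the thread as reusable strings; these go
    // into the q (or fq) parameter of a request.
    static String noValue(String field)  { return "(*:* -" + field + ":[* TO *])"; }
    static String hasValue(String field) { return field + ":[* TO *]"; }

    // "a specified value, or no value at all" -- union of the two cases
    static String valueOrMissing(String field, String value) {
        return "(" + field + ":" + value + " " + noValue(field) + ")";
    }

    public static void main(String[] args) {
        System.out.println(valueOrMissing("fieldName", "A"));
    }
}
```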
check a release since r1332752
If things still look problematic, post a comment on:
https://issues.apache.org/jira/browse/SOLR-3426
this should now have a less verbose message with an older SLF4j and with Log4j
On Tue, May 1, 2012 at 10:14 AM, Gopal Patwa gopalpa...@gmail.com wrote:
I have
If your json value is & the proper xml value is &amp;
What is the value you are setting on the stored field? is it & or &amp;?
On Mon, Apr 30, 2012 at 12:57 PM, William Bell billnb...@gmail.com wrote:
One idea was to wrap the field with CDATA. Or base64 encode it.
On Fri, Apr 27, 2012
In general -- i would not suggest mixing EmbeddedSolrServer with a
different style (unless the other instances are read only). If you
have multiple instances writing to the same files on disk you are
asking for problems.
Have you tried just using StreamingUpdateSolrServer for daily update?
I
I would suggest debugging with browser requests -- then switching to
Solrj after you are at 1st base.
In particular, try adding the debugQuery=true parameter to the
request and see what solr thinks is happening.
The value that will work for the 'qt' parameter depends on what is
configured in
zookeeper.jsp was removed (along with all JSP stuff) in trunk
Take a look at the cloud tab in the UI, or check the /zookeeper
servlet for the JSON raw output
ryan
On Mon, Apr 9, 2012 at 6:42 AM, Benson Margulies bimargul...@gmail.com wrote:
Starting the leader with:
java
There have been a bunch of changes getting the zookeeper info and UI
looking good. The info moved from being on the core to using a
servlet at the root level.
Note, it is not a request handler anymore, so the wt=XXX has no
effect. It is always JSON
ryan
On Fri, Apr 6, 2012 at 7:01 AM, Jamie
On Wed, Mar 7, 2012 at 7:25 AM, Matt Mitchell goodie...@gmail.com wrote:
Hi,
I'm researching options for handling a better geospatial solution. I'm
currently using Solr 3.5 for a read-only database, and the
point/radius searches work great. But I'd like to start doing point in
polygon
Hi Matthias-
I'm trying to understand how you have your data indexed so we can give
reasonable direction.
What field type are you using for your locations? Is it using the
solr spatial field types? What do you see when you look at the debug
information from debugQuery=true?
From my
I have an application where I need to return all results that are not
in a Set&lt;String&gt; (the Set is managed from hazelcast... but that is
not relevant)
As a first approach, I have a SearchComponent that injects a BooleanQuery:
BooleanQuery bq = new BooleanQuery(true);
for( String id :
Ah, thanks Hoss - I had meant to respond to the original email, but
then I lost track of it.
Via pseudo-fields, we actually already have the ability to retrieve
values via FieldCache.
fl=id:{!func}id
But using CSF would probably be better here - no memory overhead for
the FieldCache
patches are always welcome!
On Tue, Jul 5, 2011 at 3:04 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Mon, Jul 4, 2011 at 11:54 AM, Per Newgro per.new...@gmx.ch wrote:
i've tried to add the params for group=true and group.field=myfield by using
the SolrQuery.
But the result is null.
On Fri, Jul 1, 2011 at 9:06 AM, Yonik Seeley yo...@lucidimagination.com wrote:
On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley ryan...@gmail.com wrote:
Hello-
I'm looking for a way to find all the links from a set of results. Consider:
&lt;doc&gt;
id:1
type:X
link:a
link:b
&lt;/doc&gt;
&lt;doc&gt;
id:2
Hello-
I'm looking for a way to find all the links from a set of results. Consider:
&lt;doc&gt;
id:1
type:X
link:a
link:b
&lt;/doc&gt;
&lt;doc&gt;
id:2
type:X
link:a
link:c
&lt;/doc&gt;
&lt;doc&gt;
id:3
type:Y
link:a
&lt;/doc&gt;
Is there a way to search for all the links from stuff of type X -- in
this case (a,b,c)
If I'm
You can store binary data using a binary field type -- then you need
to send the data base64 encoded.
I would strongly recommend against storing large binary files in solr
-- unless you really don't care about performance -- the file system
is a good option that springs to mind.
ryan
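For the base64 step the JDK encoder is enough. A sketch of what a binary field's payload looks like inside an update message; the field name "data" is hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BinaryFieldEncode {
    // A binary field type expects its payload base64-encoded in the
    // update message; this builds one field element of that message.
    static String toFieldXml(String name, byte[] raw) {
        return "<field name=\"" + name + "\">"
             + Base64.getEncoder().encodeToString(raw)
             + "</field>";
    }

    public static void main(String[] args) {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(toFieldXml("data", raw));  // aGVsbG8= in the middle
    }
}
```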
Does anyone know of a patch or even when this functionality might be included
in to Solr4.0? I need to query for polygons ;-)
check:
http://code.google.com/p/lucene-spatial-playground/
This is my sketch / soon-to-be-proposal for what I think lucene
spatial should look like. It includes a
You may have noticed the ResponseWriter code is pretty hairy! Things
are package protected so that the API can change between minor releases
without concern for back compatibility.
In 4.0 (/trunk) I hope to rework the whole ResponseWriter framework so
that it is more clean and hopefully stable
Not crazy -- but be aware of a few *key* caveats.
1. Do good testing on a stable snapshot.
2. Don't get surprised if you have to rebuild the index from scratch
to upgrade in the future. The official releases will upgrade smoothly
-- but within dev builds, anything may happen.
On Sat, Feb 19,
, Feb 11, 2011 at 4:31 PM, Ryan McKinley ryan...@gmail.com wrote:
I have an odd need, and want to make sure I am not reinventing a wheel...
Similar to the QueryElevationComponent, I need to be able to move
documents to the top of a list that match a given query.
If there were no sort
I have an odd need, and want to make sure I am not reinventing a wheel...
Similar to the QueryElevationComponent, I need to be able to move
documents to the top of a list that match a given query.
If there were no sort, then this could be implemented easily with
BooleanQuery (i think) but with
I am using the edismax query parser -- its awesome! works well for
standard dismax type queries, and allows explicit fields when
necessary.
I have hit a snag when people enter something that looks like a windows path:
&lt;lst name="params"&gt;
&lt;str name="q"&gt;F:\path\to\a\file&lt;/str&gt;
&lt;/lst&gt;
this gets parsed as:
str
ah -- that makes sense.
Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: extending edismax. Perhaps when F: does not match a given
foo_s:foo\-bar
is a valid lucene query (with only a dash between the foo and the
bar), and presumably it should be treated the same in edismax.
Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
bar) might cause more problems than it's worth?
I don't think we should
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
also try debugQuery=true and see why each result matched
On Thu, Dec 30, 2010 at 4:10 PM, mrw mikerobertsw...@gmail.com wrote:
Basically, just what you've suggested. I did the field/query analysis piece
with verbose output. Not entirely sure how to interpret the results, of
course.
On Mon, Oct 18, 2010 at 10:12 AM, Tharindu Mathew mcclou...@gmail.com wrote:
Thanks Peter. That helps a lot. It's weird that this is not documented anywhere.
:(
Feel free to edit the wiki :)
Do you already have the files as solr XML? If so, I don't think you need solrj
If you need to build SolrInputDocuments from your existing structure,
solrj is a good choice. If you are indexing lots of stuff, check the
StreamingUpdateSolrServer:
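If you do go the raw-XML route, the add message is simple enough to assemble by hand. A sketch of the shape solrj would otherwise build for you; field names and values are hypothetical, and real documents would also need XML-escaping of the values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AddXmlBuilder {
    // Assembles a minimal <add><doc>...</doc></add> update message from a
    // map of field name -> value.
    static String toAddXml(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("<add><doc>");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            sb.append("<field name=\"").append(e.getKey()).append("\">")
              .append(e.getValue()).append("</field>");
        }
        return sb.append("</doc></add>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> f = new LinkedHashMap<>();
        f.put("id", "1");
        f.put("title", "hello");
        System.out.println(toAddXml(f));
    }
}
```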
I have an indexing pipeline that occasionally needs to check if a
document is already in the index (even if not commited yet).
Any suggestions on how to do this without calling commit/ before each check?
I have a list of document ids and need to know which ones are in the
index (actually I need
Multiple threads work well.
If you are using solrj, check the StreamingSolrServer for an
implementation that will keep X number of threads busy.
Your mileage will vary, but in general I find a reasonable thread
count is ~ (number of cores)+1
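That rule of thumb, as code (a starting point to benchmark against, not a guarantee):

```java
public class IndexerThreads {
    // (number of cores) + 1 indexing threads, per the suggestion above.
    static int suggestedThreads() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    public static void main(String[] args) {
        System.out.println("suggested indexing threads: " + suggestedThreads());
    }
}
```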
On Wed, Sep 22, 2010 at 5:52 AM, Andy
&lt;delete&gt;&lt;query&gt;*:*&lt;/query&gt;&lt;/delete&gt;
will leave you a fresh index
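A small helper that builds the delete-by-query message; post it to /update and follow with a commit. Any query works, *:* matches everything:

```java
public class DeleteAll {
    // Builds the delete-by-query update message.
    static String deleteByQuery(String query) {
        return "<delete><query>" + query + "</query></delete>";
    }

    public static void main(String[] args) {
        System.out.println(deleteByQuery("*:*"));  // <delete><query>*:*</query></delete>
    }
}
```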
On Thu, Sep 23, 2010 at 12:50 AM, xu cheng xcheng@gmail.com wrote:
&lt;delete&gt;&lt;query&gt;the query that fetches the data you wanna
delete&lt;/query&gt;&lt;/delete&gt;
I did like this to delete my data
best regards
2010/9/23 Igor Chudov ichu...@gmail.com
I suppose an index 'remaker' might be something like a DIH reader for
a Solr index - streams everything out of the existing index, writing
it into the new one?
This works fine if all fields are stored (and copy field does not go
to a stored field), otherwise you would need/want to start with
Check:
http://lucene.apache.org/java/3_0_2/fileformats.html
On Tue, Sep 7, 2010 at 3:16 AM, rajini maski rajinima...@gmail.com wrote:
All,
While we post data to Solr... the data gets stored in the //data/index path
in multiple files with different file extensions...
Not worrying about
I have a function that works well in 3.x, but when I tried to
re-implement in 4.x it runs very very slow (~20ms vs 45s on an index w
~100K items).
Big picture, I am trying to calculate a bounding box for items that
match the query. To calculate this, I have two fields bboxNS, and
bboxEW that get
Note that the 'setRequestWriter' is not part of the SolrServer API, it
is on the CommonsHttpSolrServer:
http://lucene.apache.org/solr/api/org/apache/solr/client/solrj/impl/CommonsHttpSolrServer.html#setRequestWriter%28org.apache.solr.client.solrj.request.RequestWriter%29
If you are using
Any pointers on how to sort by reverse index order?
http://search.lucidimagination.com/search/document/4a59ded3966271ca/sort_by_index_order_desc
it seems like it should be easy to do with the function query stuff,
but i'm not sure what to sort by (unless I add a new field for indexed
time)
Any
Looks like you can sort by _docid_ to get things in index order or
reverse index order.
?sort=_docid_ asc
thank you solr!
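A hedged sketch of building that sort parameter with only the JDK (the handler path in the usage line is hypothetical); _docid_ asc is index order, desc is reverse index order:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DocidSort {
    // _docid_ sorts by internal Lucene doc id; this just URL-encodes the
    // sort parameter for a request.
    static String sortParam(boolean reverse) {
        return "sort=" + URLEncoder.encode("_docid_ " + (reverse ? "desc" : "asc"),
                                           StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println("/solr/select?q=*:*&" + sortParam(true));
    }
}
```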
On Fri, Jul 23, 2010 at 2:23 PM, Ryan McKinley ryan...@gmail.com wrote:
Any pointers on how to sort by reverse index order?
http://search.lucidimagination.com/search
If there is a real desire/need to make things restful in the
official sense, it is worth looking at using a REST framework as the
controller rather than the current solution. perhaps:
http://www.restlet.org/
https://jersey.dev.java.net/
These would be cool since they encapsulate lots of the
Interesting -- I don't think there is anything that does this.
Though it seems like something the XML Query syntax should be able to
do, but we would still need to add the ability to send the xml style
query to solr.
On Fri, May 28, 2010 at 12:23 PM, Phillip Rhodes
rhodebumpl...@gmail.com
The two approaches solve different needs. In 'multicore' you have a
single webapp with multiple indexes. This means they are all running
in the same JVM. This may be an advantage or a disadvantage depending
on what you are doing.
ryan
On Thu, May 27, 2010 at 10:44 AM, Antonello Mangone
Check:
http://wiki.apache.org/solr/CoreAdmin
Unless I'm missing something, I think you should be able to sort what you need
On Fri, May 21, 2010 at 7:55 PM, Ken Krugler
kkrugler_li...@transpac.com wrote:
I've got a situation where my data directory (a) needs to live elsewhere
besides inside
Any other commonly compelling reasons to use SolrJ?
The most compelling reason (I think) is that if you program against
the Solrj API, you can switch between embedded/http/streaming
implementations without changing anything.
This is great for our app that is either run as a small local
On Wed, May 19, 2010 at 6:38 AM, Peter Karich peat...@yahoo.de wrote:
Hi all,
while asking a question on stackoverflow [1] some other questions appear:
Is SolrJ a recommended way to access Solr or should I prefer the HTTP
interface?
solrj vs HTTP interface? That will just be a matter of
On Fri, Apr 2, 2010 at 7:07 AM, Na_D nabam...@zaloni.com wrote:
hi,
I need to monitor the index for the following information:
1. Size of the index
2 Last time the index was updated.
If by 'size of the index' you mean document count, then check the Luke
Request Handler
The 'abortOnConfigurationError' option was added a long time ago...
at the time, there were many errors that would just be written to the
logs but startup would continue normally.
I felt (and still do) that if there is a configuration error
everything should fail loudly. The option in
On Jan 13, 2010, at 5:34 PM, Minutello, Nick wrote:
Agreed, commit every second.
Do you need the index to be updated this often? Are you reading from
it every second? and need results that are that fresh
If not, i imagine increasing the auto-commit time to 1min or even 10
secs would
On Jan 7, 2010, at 10:50 AM, MitchK wrote:
Eric,
you mean, everything is okay, but I do not see it?
Internally for searching the analysis takes place and writes to the
index in an inverted fashion, but the stored stuff is left alone.
if I use an analyzer, Solr stores its output two
On Jan 7, 2010, at 12:11 PM, MitchK wrote:
Thank you, Ryan. I will have a look on lucene's material and luke.
I think I got it. :)
Sometimes there will be a need to return both the stored value and
the indexed version of the value.
How can I fulfill such
On Jan 7, 2010, at 1:05 PM, Jon Poulton wrote:
I've also just noticed that QueryParsing is not in the SolrJ API.
It's in one of the other Solr jar dependencies.
I'm beginning to think that maybe the best approach it to write a
query string generator which can generate strings of the form:
what version of solr are you running?
On Jan 7, 2010, at 3:08 PM, Jake Brownell wrote:
Hi all,
Our application uses solrj to communicate with our solr servers. We
started a fresh index yesterday after upping the maxFieldLength
setting in solrconfig. Our task indexes content in batches
On Jan 6, 2010, at 3:48 PM, MitchK wrote:
I have tested a lot and all the time I thought I set wrong options
for my
custom analyzer.
Well, I have noticed that Solr isn't using ANY analyzer, filter or
stemmer.
It seems like it only stores the original input.
The stored value is always
Ya, structured data gets a little funny.
For starters, the order of multi-valued fields should be maintained,
so if you have:
&lt;doc&gt;
&lt;field name="url"&gt;http://aaa&lt;/field&gt;
&lt;field name="url_rank"&gt;5&lt;/field&gt;
&lt;field name="url"&gt;http://bbb&lt;/field&gt;
&lt;field name="url_rank"&gt;4&lt;/field&gt;
&lt;/doc&gt;
the response will return result in
If you need to search via the Hibernate API, then use hibernate search.
If you need a scalable HTTP (REST) interface then solr may be the way to go.
Also, i don't think hibernate has anything like the faceting / complex
query stuff etc.
On Dec 29, 2009, at 3:25 PM, Márcio Paulino wrote:
Hey
check:
http://wiki.apache.org/solr/SolrLogging
if you are using 1.4 you want to drop in the slf4j-log4j jar file and
then it should read your log4j configs
On Nov 19, 2009, at 2:15 PM, Harsch, Timothy J. (ARC-TI)[PEROT
SYSTEMS] wrote:
Hi all,
I have an J2EE application using embedded
Solr includes slf4j-jdk14-1.5.5.jar, if you want to use the nop (or
log4j, or loopback) impl you will need to include that in your own
project.
Solr uses slf4j so that each user can decide their logging
implementation, it includes the jdk version so that something works
off-the-shelf,
It looks like solr+spatial will get some attention in 1.5, check:
https://issues.apache.org/jira/browse/SOLR-1561
Depending on your needs, that may be enough. More robust/scalable
solutions will hopefully work their way into 1.5 (any help is always
appreciated!)
On Nov 13, 2009, at
Also:
https://issues.apache.org/jira/browse/SOLR-1302
On Nov 13, 2009, at 11:12 AM, Bertie Shen wrote:
Hey,
I am interested in using LocalSolr to go Local/Geo/Spatial/Distance
search. But the wiki of LocalSolr(http://wiki.apache.org/solr/LocalSolr
)
points to pretty old documentation. Is
The HTMLStripCharFilter will strip the html for the *indexed* terms,
it does not effect the *stored* field.
If you don't want html in the stored field, can you just strip it out
before passing to solr?
On Nov 11, 2009, at 8:07 PM, aseem cheema wrote:
Hey Guys,
How do I add HTML/XML
On Nov 2, 2009, at 8:29 AM, Grant Ingersoll wrote:
On Nov 2, 2009, at 12:12 AM, Licinio Fernández Maurelo wrote:
Hi folks,
as we are using a snapshot dependency on solr1.4, today we are
getting
problems when maven tries to download lucene 2.9.1 (there isn't any
2.9.1
there).
Which
I'm sure it is possible to configure JDK logging (java.util.logging)
programmatically... but I have never had much luck with it.
It is very easy to configure log4j programmatically, and this works
great with solr.
To use log4j rather than JDK logging, simply add slf4j-log4j12-1.5.8.jar
I wonder why the common classes are in the solrj JAR?
Is the solrj JAR not just for the clients?
the solr server uses solrj for distributed search. This makes solrj
the general way to talk to solr (even from within solr)
Hello-
I have an application that can run in the background on a user Desktop
-- it will go through phases of being used and not being used. I want
to be able to free as many system resources when not in use as possible.
Currently I have a timer that waits for 10 mins of inactivity and
do you have anything custom going on?
The fact that the lock is in java2d seems suspicious...
On Sep 23, 2009, at 7:01 PM, pof wrote:
I had the same problem again yesterday except the process halted
after about
20mins this time.
pof wrote:
Hello, I was running a batch index the other
Should be fixed in trunk. Try updating and see if it works for you
See:
https://issues.apache.org/jira/browse/SOLR-1424
On Sep 9, 2009, at 8:12 PM, Allahbaksh Asadullah wrote:
Hi ,
I am building Solr from source. During building it from source I am
getting
following error.
can you just add a new field that has the real or ave price?
Just populate that field at index time... make it indexed but not
stored
If you want the real or average price to be treated the same in
faceting, you are really going to want them in the same field.
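The index-time computation is just a fallback from real price to average price. A sketch, with field and method names that are hypothetical:

```java
public class PriceField {
    // Use the real price when present, otherwise the average, so a single
    // indexed-but-not-stored field serves faceting uniformly.
    static double facetPrice(Double realPrice, double averagePrice) {
        return realPrice != null ? realPrice : averagePrice;
    }

    public static void main(String[] args) {
        System.out.println(facetPrice(null, 9.99));   // falls back to the average
        System.out.println(facetPrice(12.5, 9.99));   // real price wins
    }
}
```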
On Aug 28, 2009, at 1:16
On Aug 27, 2009, at 10:35 PM, Paul Tomblin wrote:
Yesterday or the day before, I asked specifically if I would need to
restart the Solr server if somebody else loaded data into the Solr
index using the EmbeddedServer, and I was told confidently that no,
the Solr server would see the new data
On Aug 26, 2009, at 3:33 PM, djain101 wrote:
I have one quick question...
If in solrconfig.xml, if it says ...
&lt;abortOnConfigurationError&gt;${solr.abortOnConfigurationError:false}&lt;/abortOnConfigurationError&gt;
does it mean abortOnConfigurationError defaults to false if it is
not set
as
On Aug 19, 2009, at 6:45 AM, johan.sjob...@findwise.se wrote:
Hi,
we're glancing at the GEO search module known from the jira issue 773
(http://issues.apache.org/jira/browse/SOLR-773).
It seems to us that the issue is still open and not yet included in
the
nightly builds.
correct
check:
https://issues.apache.org/jira/browse/SOLR-945
this will not likely make it into 1.4
On Jul 30, 2009, at 1:41 PM, Jérôme Etévé wrote:
Hi,
Nope, I'm not using solrj (my client code is in Perl), and I'm with
solr 1.3.
J.
2009/7/30 Shalin Shekhar Mangar shalinman...@gmail.com:
On
ya... 'expected', but perhaps not ideal. As is, LocalSolr munges the
document on its way out the door to add the distance.
When LocalSolr makes it into the source, it will likely use a method
like:
https://issues.apache.org/jira/browse/SOLR-705
to augment each document with the
On Jul 20, 2009, at 8:47 AM, Edward Capriolo wrote:
Hey all,
We have several deployments of Solr across our enterprise. Our largest
one is several GB and when enough documents are added an OOM
exception is occurring.
To debug this problem I have enable JMX. My goal is to write some
cacti
not sure what you mean... yes, i guess...
you send a bunch of requests with add( doc/collection ) and they are
not visible until you send commit()
On Jul 20, 2009, at 9:07 AM, Gérard Dupont wrote:
my mistake, a problem with the buffer I added. But it raises a question: does
solr
(using
Hi-
I'm trying to use the LukeRequestHandler with an index of ~9 million
docs. I know that counting the top / distinct terms for each field is
expensive and can take a LONG time to return.
Is there a faster way to check the number of documents for each field?
Currently this gets the doc count
On Jun 16, 2009, at 5:21 PM, Grant Ingersoll wrote:
On Jun 16, 2009, at 1:57 PM, Ryan McKinley wrote:
Is there a faster way to check the number of documents for each
field?
Currently this gets the doc count for each term:
In the past, I've created a field that contains the names
I am working with an index of ~10 million documents. The index
does not change often.
I need to preform some external search criteria that will return some
number of results -- this search could take up to 5 mins and return
anywhere from 0-10M docs.
I would like to use the output of
two key things to try (for anyone ever wondering why a query matches documents)
1. add debugQuery=true and look at the explain text below --
anything that contributed to the score is listed there
2. check /admin/analysis.jsp -- this will let you see how analyzers
break text up into tokens.
Not
careful what you ask for... what if you have a million docs? will
you get an OOM?
Maybe a better solution is to run a loop where you grab a bunch of
docs and then increase the start value.
but you can always use:
query.setRows( Integer.MAX_VALUE )
ryan
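The grab-a-bunch-and-increase-start loop, sketched with a stand-in for the actual Solr call (the Fetcher interface is my own placeholder, not a Solr API):

```java
import java.util.ArrayList;
import java.util.List;

public class PagedFetch {
    // Grab a page of docs, bump start, repeat -- instead of
    // rows=Integer.MAX_VALUE, which risks an OOM on a big result set.
    interface Fetcher { List<String> fetchPage(int start, int rows); }

    static List<String> fetchAll(Fetcher solr, int pageSize) {
        List<String> all = new ArrayList<>();
        int start = 0;
        while (true) {
            List<String> page = solr.fetchPage(start, pageSize);
            all.addAll(page);
            if (page.size() < pageSize) break;  // short page: nothing left
            start += pageSize;
        }
        return all;
    }

    public static void main(String[] args) {
        // Fake index of 7 documents, fetched 3 at a time.
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 7; i++) docs.add("doc" + i);
        Fetcher fake = (start, rows) ->
                docs.subList(Math.min(start, docs.size()),
                             Math.min(start + rows, docs.size()));
        System.out.println(fetchAll(fake, 3).size());  // 7
    }
}
```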
On May 21, 2009, at 8:37 PM,
I cringe to suggest this but you can use the deprecated call:
SolrCore.getSolrCore().getCoreContainer()
On May 19, 2009, at 11:21 AM, Giovanni De Stefano wrote:
Hello all,
I have a quick question but I cannot find a quick answer :-)
I have a Java client running on the same JVM where
since there is so little overlap, I would look at a core for each
user...
However, to manage 20K cores, you will not want to use the off the
shelf core management implementation to maintain these cores.
Consider overriding SolrDispatchFilter to initialize a CoreContainer
that you
how much overlap is there with the 20k user documents?
if you create a separate index for each of them will you be indexing
90% of the documents 20K times? How many total documents could an
individual user typically see? How many total distinct documents are
you talking about? Is the
right -- which one you pick will depend more on your runtime
environment than anything else.
If you need to hit a server (on a different machine)
CommonsHttpSolrServer is your only option.
If you are running an embedded application -- where your custom code
lives in the same JVM as solr
use this constructor:
public CommonsHttpSolrServer(String solrServerUrl, HttpClient
httpClient, ResponseParser parser) throws MalformedURLException {
this(new URL(solrServerUrl), httpClient, parser, false);
}
and give it the XMLResponseParser
Is this just helpful for
The point of using solrj is that you don't have to do any parsing
yourself -- you get access to the results in object form.
If you need to do parsing, just grab the xml directly:
http://host/solr/select?q=*:*wt=xml
On May 4, 2009, at 9:36 AM, ahmed baseet wrote:
As I know when we query
I would suggest looking at Apache commons VFS and using the solrj API:
http://commons.apache.org/vfs/
With SVN, you may be able to use the webdav provider.
ryan
On Apr 26, 2009, at 4:08 AM, Ashish P wrote:
Is there any way to index contents of SVN rep in Solr ??
--
View this message in
Right, you will have to build a new war with your own subclass of
SolrDispatchFilter *rather* than using the packaged one.
On Apr 23, 2009, at 12:34 PM, Noble Paul നോബിള്
नोब्ळ् wrote:
nope.
you must edit the web.xml and register the filter there
On Thu, Apr 23, 2009 at 3:45 PM,
I have not looked at this in a while, but I think the biggest thing it
is missing right now is a champion -- someone to get the patches (and
bug fixes) to a state where it can easily be committed. Minor bug
fixes are road blocks to getting things integrated.
ryan
On Apr 20, 2009, at
as long as you make sure there are never two applications writing to
the same index, you *should* be ok.
But tread carefully...
On Apr 19, 2009, at 3:28 PM, vivek sar wrote:
Both Solr instances will be writing to separate indexes, but can they
share the same solr.home? So, here is what I
When you say Test ... Are you suggesting there is a test suite I
should run, or do just do my own testing?
your own testing...
If you use a 'nightly' the unit tests all pass.
BUT if you are not running from a standard release, there may be
things that are not totally flushed out, or
The work being done is addressing the deletes, AIUI, but of course
there are other things happening during shutdown, too.
There are no deletes to do. It was a clean index to begin with
and there were no duplicates.
I have not followed this thread, so forgive me if this has already
been
what about:
fieldA:value1 AND fieldB:value2
this can also be written as:
+fieldA:value1 +fieldB:value2
On Apr 13, 2009, at 9:53 PM, Johnny X wrote:
I'll start a new thread to make things easier, because I've only
really got
one problem now.
I've configured my Solr to search on all
On Apr 10, 2009, at 7:48 AM, Nicolas Pastorino wrote:
Hello !
Browsing the mailing-list's archives did not help me find the
answer, hence the question asked directly here.
Some context first :
Integrating Solr with a CMS ( eZ Publish ), we chose to support
Elevation. The idea is to be
If you use the off the shelf .war, it *should* be the same. (if not,
we need to fix it)
If you are building your own .war, how SLF4 behaves depends on what
implementation is in the runtime path. If you want to use log4j
logging, put in the slf4j-log4j.jar in your classpath and you should
On Mar 29, 2009, at 8:42 AM, Shalin Shekhar Mangar wrote:
On Sun, Mar 29, 2009 at 4:57 PM, aerox7 amyne.berr...@me.com wrote:
I want to get results ordered by keyword matching (score) and
popularity.
When I tried something like this: q=hp&sort=popularity desc, score
desc
I get Hp