Hi list,
I have fixed an issue and created a patch (SOLR-2726), but how do I
change the Status and Resolution in JIRA?
And how does this get committed? Any ideas?
Regards,
Bernd
On 31/08/2011 18:22, samuele.mattiuzzo wrote:
SEVERE: org.apache.solr.common.SolrException: Error Instantiating
UpdateRequestProcessorFactory, ToTheGoCustom is not a
org.apache.solr.update.processor.UpdateRequestProcessorFactory
btw you can't load classes in the default package from
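That SolrException usually means the plugin class does not actually extend UpdateRequestProcessorFactory (for example, it was compiled against a different Solr version, or sits in the default package, which Solr's plugin loader can't resolve). A minimal sketch against the Solr 3.x API; the package name is made up, and this factory just passes documents through unchanged:

```java
package com.example.solr; // any named package; the default package won't load

import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class ToTheGoCustom extends UpdateRequestProcessorFactory {
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                                            SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    // no-op: hand each document straight to the next processor in the chain
    return next;
  }
}
```

The jar containing this class then goes on Solr's lib path and the factory is referenced from an updateRequestProcessorChain in solrconfig.xml.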
Hi,
when reloading a core, it seems that the execution of firstSearcher and
newSearcher events will happen after the new core takes over from the old. This
will effectively stall querying until the caches on the new core are warmed
(which can take quite a long time on large installations). Is
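For reference, the searcher events in question are wired up in solrconfig.xml; a minimal sketch of warming-query listeners (the queries and sort field are placeholders, not a recommendation):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- run against the new searcher to warm caches before it serves traffic -->
    <lst><str name="q">some popular query</str><str name="sort">price asc</str></lst>
  </arr>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">static warming query</str></lst>
  </arr>
</listener>
```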
<requestHandler name="MYREQUESTHANDLER" class="solr.SearchHandler">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="facet.method">enum</str>
    <str name="facet.mincount">1</str>
    <str name="facet.limit">10</str>
    <str
Hi there,
I'm really new to Solr and have a question about Solr replication.
We want to use Solr in two data centers (dedicated fibre channel lane, like
intranet) behind a load balancer. Is the following infrastructure possible?
- one repeater and one slave per data center
- the repeaters
Yay, I did it! I wasn't that far away from the correct implementation, it
was just a bit tricky to understand how to...
Now I've got a problem with my singleton class:
I have DBConnectionManager.jar placed inside a folder (<lib
dir="../../../dist/custom/" regex=".*\.jar" /> from solrconfig.xml), but at
A brief question: is using an IoC framework like Spring an option for you?
If so, maybe this could help:
http://lucene.472066.n3.nabble.com/dependency-injection-in-solr-td3292685.html#a3295939
OK, solved it by changing <lib dir="../../../dist/custom/" regex=".*\.jar"
/> to
<lib path="../../../dist/custom/DBConnectionManager.jar" />
Thanks guys for all your help, now off to debug some Java errors.
Thanks again, for real!
Hi,
Would anyone be able to tell me roughly when version 3.4 will be
released please?
I'm working on a project that needs the grouping functionality, and as
far as I can tell 3.4 is the version that will include the support for
this into SolrJ:
https://issues.apache.org/jira/browse/SOLR-2637
Hi,
I am trying to index some documents through ExtractingRequestHandler.
Everything works fine with the Jetty server, but when I configure it with
Tomcat my documents are not getting indexed; only the ids are getting indexed.
That is, the text field is blank but the id field has values, though there
Hello,
I'm hitting an issue with Solr and copyFields, and probably other things. Here's a
description of the issue. If anyone could help, it would be greatly
appreciated, as I've searched everywhere and am not able to figure out what's
happening.
My Solr is configured as follows:
DIH request:
SELECT
you need to define the search field as multiValued, since you're copying
multiple sources into it
http://wiki.apache.org/solr/FAQ#How_do_I_use_copyField_with_wildcards.3F
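In schema.xml terms, that looks like the following (field and type names here are placeholders):

```xml
<field name="search" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="title" dest="search"/>
<copyField source="body" dest="search"/>
<!-- or, with a wildcard as in the FAQ entry -->
<copyField source="*_t" dest="search"/>
```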
--
View this message in context:
http://lucene.472066.n3.nabble.com/Issue-with-Solr-and-copyFields-tp3300763p3300794.html
Hi
I have indexed some files from a directory and I can see them in the results at
http://localhost:8080/solr/browse
I have also added a field, Location, which displays the file location as a link.
Here are the changes I made for the link handling:
1. Added <field column="fileAbsolutePath" name="links"/> in
Hi
To open files from the local filesystem, use the file protocol (file:/// followed by
the absolute path).
If you are running the application in JBoss instead of Jetty, you can deploy
the input files as a separate WAR which is exploded, and then avoid using
the file protocol.
The file protocol is error-prone. It is not
Solr allows you to load custom code to perform a variety of tasks within Solr
-- from custom Request Handlers to process your searches, to custom
Analyzers and Token Filters for your text field, even custom Field Types.
Hi Balaji
Thanks for your reply. I have tried the file:///absolute-path approach as well; it
still fails to open the file in both Mozilla and IE.
My files are not placed in the Solr home; can that be the issue? Please suggest.
Thanks
Jagdish
Thanks, but this was not the point of the topic :) I'm much further along than
this :) Please avoid random replies :)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-custom-plugins-is-it-possible-to-have-them-persistent-tp3292781p3301057.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hmm, I'm guessing a bit here, but using an invalid query
doesn't sound very safe, though I suppose it *might* be OK.
What does 'invalid' mean? A syntax error? Not safe.
A search that returns 0 results? I don't know, but I'd guess
that filling your caches, which is the point of warming
queries, might be
You probably want to write your own ResponseWriter,
since you're working in JSON, maybe
JSONResponseWriter is the place to start.
Best
Erick
On Wed, Aug 31, 2011 at 4:58 AM, malic benbenfoxw...@yahoo.com.hk wrote:
Hello, I have a very specific question about the Solr response passed to
remote
See below:
On Wed, Aug 31, 2011 at 2:16 PM, Mike Austin mike.aus...@juggle.com wrote:
I've set up a master slave configuration and it's working great! I know
this is the better setup but if I had just one index due to requirements,
I'd like to know more about the performance hit of the
If your files are under Solr's conf/ directory, you can get them served up
using the ShowFileRequestHandler (see how the /browse serves up CSS and the
autocomplete JQuery library). But otherwise, Solritas doesn't give you any
capability to serve up files and the browsers aren't too happy about
Wow.. thanks for the great answers Erick! This answered my concerns
perfectly.
Mike
On Thu, Sep 1, 2011 at 7:54 AM, Erick Erickson erickerick...@gmail.com wrote:
See below:
On Wed, Aug 31, 2011 at 2:16 PM, Mike Austin mike.aus...@juggle.com
wrote:
I've set up a master slave configuration
Given a document (id:n), show me those other documents with similar values in the
'Name' field:
http://devsolr03:8983/solr/primary/select?q=id:182652&fl=id,Name,score&mlt=true&mlt.fl=Name
My assumption is the above query will generate the desired outcome. It does;
however, given a different
On 31 August 2011 20:27, Jaeger, Jay - DOT jay.jae...@dot.wi.gov wrote:
Well, if it is for creating a *new* core, Solr doesn't know it is pointing
to your shared conf directory until after you create it, does it?
JRJ
Indeed, but the conf directory is not a problem for me. The thing is, I
I've begun tinkering with MLT using the standard request handler. The Wiki
also suggests using the MoreLikeThis handler directly, but apparently, this is
not in the default configuration (as I recall, I haven't removed anything from
solrconfig.xml as shipped). For example:
I have an application where I need to return all results that are not
in a Set<String> (the Set is managed by Hazelcast... but that is
not relevant).
As a first approach, I have a SearchComponent that injects a BooleanQuery:
BooleanQuery bq = new BooleanQuery(true);
for (String id :
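Picking up where the snippet cuts off: one alternative to injecting a BooleanQuery from a custom SearchComponent is to turn the Set into a negative filter query. This is an illustrative plain-Java sketch (the id field name and the helper are made up, and real ids would need query escaping):

```java
import java.util.LinkedHashSet;
import java.util.Set;

class ExcludeIdsFilter {
    /**
     * Build a Solr fq string that excludes every id in the set,
     * e.g. {"3", "7"} -> "-id:(3 OR 7)". An empty set matches everything.
     */
    static String buildExcludeFq(Set<String> ids) {
        if (ids.isEmpty()) {
            return "*:*";
        }
        StringBuilder sb = new StringBuilder("-id:(");
        String sep = "";
        for (String id : ids) {
            sb.append(sep).append(id);
            sep = " OR ";
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        Set<String> ids = new LinkedHashSet<String>();
        ids.add("3");
        ids.add("7");
        System.out.println(buildExcludeFq(ids)); // prints: -id:(3 OR 7)
    }
}
```

One caveat: a filter query like this is cached independently of q, which may or may not be desirable if the Set changes on every request.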
(11/09/01 23:24), Herman Kiefus wrote:
<requestHandler name="/mlt"
  class="org.apache.solr.handler.component.MoreLikeThisComponent">
  <arr name="components">
    <str>mlt</str>
  </arr>
</requestHandler>
but ends up returning a 500 error on a core reload. What is an appropriate
configuration
Ok, so I feel like I'm 90% of the way there. For standard queries
things work fine, but for distributed queries I'm running into a bit
of an issue. Right now the queries run fine but when doing
distributed queries (using SolrCloud) the numFound is always getting
set to the number of requested
Hi all,
I've read numerous guides on how to set up autocomplete on solr and it works
great the way I have it now. However, my only complaint is that it only
matches the beginning of the word. For example, if I try to autocomplete
'dober', I would only get 'Doberman', 'Doberman Pinscher', but not
If you are indexing data, rather than documents, another possibility is to
use database triggers to fire off updates.
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Wednesday, August 31, 2011 9:13 AM
To: solr-user@lucene.apache.org
Subject: Re: is it
I found that if I change
<filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"
/>
to
<filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="25" />
I can do autocomplete in the middle of a term.
Thanks!
Brian Lamb
On Thu, Sep 1, 2011 at 11:27 AM, Brian Lamb
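For reference, a schema.xml sketch of that substring-matching setup; the type name is made up, and applying the NGram filter only at index time is what lets short query strings match inside longer terms:

```xml
<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```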
OK, I think I got it. Basically the issue was that I can't modify the
offset and start params when the search is a distributed one,
otherwise the correct offset and max are lost; a simple check in
prepare() fixed this.
On Thu, Sep 1, 2011 at 11:10 AM, Jamie Johnson jej2...@gmail.com wrote:
Ok, so I
: I want to deduplicate documents from search results. What should be the
: parameters on which I should decide an efficient SignatureClass? Also, what
: are the SignatureClasses available?
the signature classes available are the ones mentioned on the wiki...
: pricing. I have written a functionquery to get the pricing, which works
: fine as part of the search query, but doesn't seem to be doing anything when
: I try to use it in a filter query. I wrote my pricing function query based
how are you trying to use it in a filter query?
function
Hello,
The keywords field type is text_en_splitting
My query is as follows: q=keywords:(symantec AND corporation)
Result: Documents are returned as normal
My wildcard query is as follows: q=keywords:(symante* AND corporation)
Result: Wildcard functions correctly, and documents are
It seems to work correctly once I remove the brackets like this:
q=keywords:symante* AND corporatio*
But I don't understand why...
On Thu, Sep 1, 2011 at 2:26 PM, Aaron Bains aaronba...@gmail.com wrote:
Hello,
The keywords field type is text_en_splitting
My query is as follows:
:
http://localhost:8983/solr/core0/select?q={!join%20from=matchset_id_ss%20to=id}*:*&fq=status_s:completed
:
: I get filtered results of documents that are completed. The issue I am now
: trying to face is how do I filter the initial search of documents based on
: multiple conditions and then
Thank you very much.
<requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
  <lst name="defaults">
    <str name="mlt.fl">Name</str>
  </lst>
  <arr name="components">
    <str>mlt</str>
  </arr>
</requestHandler>
-Original Message-
From: Koji Sekiguchi
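With the /mlt handler registered as quoted above, a request along these lines should return similar documents (host, port, and id are placeholders; mlt.fl comes from the handler defaults):

```
http://localhost:8983/solr/mlt?q=id:12345&mlt.mintf=1&mlt.mindf=1
```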
I solved the problem by setting multiValued=false
On Thu, Sep 1, 2011 at 2:37 PM, Aaron wrote:
It seems to work correctly once I remove the brackets like this:
q=keywords:symante* AND corporatio*
But I don't understand why...
On Thu, Sep 1, 2011 at 2:26 PM, Aaron wrote:
Hello,
The
Hello,
I have tried to implement a spellchecker based on the index in Nutch-Solr by adding
a spell field to schema.xml and making it a copy of the content field. However,
this doubled the size of the data folder, and the spell field, as a copy of the
content field, appears in the XML feed, which is not necessary. Is it
The changes to DirectSpellChecker are included in SOLR-2585 patch, which I
sync'ed to the current Trunk today. So all you have to do is apply the patch,
build and then add the 1-2 new parameters to your query:
- spellcheck.alternativeTermCount - the # of suggestions you want to generate
on
Regarding :
http://wiki.apache.org/solr/FunctionQuery#Date_Boosting
Specifically: recip(ms(NOW/HOUR,mydatefield),3.16e-11,1,1).
I am using dismax, and I am very unsure where to put this or how to call the
function... for example, in the fq= param? In the q= param?
Sample query :
Is there a way to add arbitrary values into the response header? I
have a need to insert a boolean into the header and doing something
like
SolrQueryResponse rsp = rb.rsp;
rsp.getResponseHeader().add(testValue, Boolean.TRUE);
Works so long as the query is not distributed. When the query is
Hi Everyone,
Sorry if the subject was too vague. What I am trying to do is this:
<field name="A"/>
<field name="B"/>
<field name="C" multiValued="true"/>
<field name="D"/>
<field name="E" multiValued="true"/>
<copyField source="A" dest="C"/>
<copyField source="B" dest="C"/>
<copyField source="D" dest="E"/>
<copyField source="C" dest="E"/>
This won't work, according to http://wiki.apache.org/solr/SchemaXml#Copy_Fields
This is provided as a convenient way to ensure that data is put into
several fields, without needing to include the data in the update
command multiple times. The copy is done at the stream source level
and no copy
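Since copyField reads from the incoming document rather than from other copyField output, the chained C-to-E copy has to be replaced by copying the original sources directly; a sketch using the field names from the question:

```xml
<copyField source="A" dest="C"/>
<copyField source="B" dest="C"/>
<copyField source="D" dest="E"/>
<!-- instead of copying C into E, copy C's sources into E directly -->
<copyField source="A" dest="E"/>
<copyField source="B" dest="E"/>
```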
We put here
<requestHandler name="whatever" class="solr.StandardRequestHandler"
  default="true">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">...</str>
    <str name="pf">...</str>
    <str name="bf">recip(ms(NOW,sear_dataupdate),3.16e-11,1,1)</str>
    ...
2011/9/1 Craig Stadler cstadle...@hotmail.com
Regarding :
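To spell out where the function goes with dismax: in the bf (boost function) parameter, not in q or fq. It can live in the handler defaults as quoted above, or be passed per request, e.g. (the parameter value would need URL-encoding in practice):

```
...&defType=dismax&q=something&bf=recip(ms(NOW/HOUR,mydatefield),3.16e-11,1,1)
```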
Thanks Jamie. I missed that one in the wiki.
Thanks,
Karthik
On Thu, Sep 1, 2011 at 3:38 PM, Jamie Johnson jej2...@gmail.com wrote:
This won't work, according to
http://wiki.apache.org/solr/SchemaXml#Copy_Fields
This is provided as a convenient way to ensure that data is put into
instanceDir=.
Does that fit your needs?
Ludovic.
-
Jouve
France.
--
View this message in context:
http://lucene.472066.n3.nabble.com/core-creation-and-instanceDir-parameter-tp3287124p3302496.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Everyone,
I am trying to configure date faceting on Solr 3.1. I browsed through the
wiki and understood how to enable and configure it.
To explain this better, let's take an example:
my index has docs with dates ranging from 01/01/1995 until NOW (i.e., today)
as of now. To configure date
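In Solr 3.1, date faceting is driven by the facet.date parameters; a sketch for a range like the one described (field name is a placeholder, and the + in the gap must be URL-encoded as %2B):

```
&facet=true&facet.date=mydatefield
&facet.date.start=1995-01-01T00:00:00Z
&facet.date.end=NOW
&facet.date.gap=%2B1YEAR
```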
Hi Bill,
As far as I know, you can pass a completely different set of parameters to
each of the functions/filters. For example:
http://localhost:8983/solr/select?q={!func}add(geodist(field1, 10,
-10),geodist(field2, 20, -20))&fq={!geofilt sfield=field3 pt=30,-30
No, field data is copied verbatim. Copy the field and strip what you don't
need.
Hello,
Is it possible to create a copy field from another field by applying a regex or a
function to the source?
Thanks.
Alex.
(11/09/02 4:00), Herman Kiefus wrote:
Thank you very much.
<requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
  <lst name="defaults">
    <str name="mlt.fl">Name</str>
  </lst>
  <arr name="components">
    <str>mlt</str>
  </arr>
</requestHandler>
This is not
: I've got a consistent test failure on Solr source code checked out from svn.
: The same thing happens with 3.3 and branch_3x. I have information saved from
Shawn: sorry for the late reply.
I can't reproduce your specific problem, but the test in question is
suspiciously hinky enough that I
: However, I tested this against a slower SQL Server and I saw
: dramatically worse results. Instead of re-using their database, each of
: the sub-entities is recreating a connection each time the query runs.
are you seeing any specific errors logged before these new connections are
created?
: sort=map(map(myNumField,0,10,0),20,100,0) desc, score desc
: sort=map(map(myNumField,0,10,100),20,100,100) asc, score desc
...
: By doing the second one, I expected to get the same results, ordered like
: 13, 17,18, 20. But, what I got were other values as results, that are not in
: the
: so coming back to the issue... even if I am sorting by _docid_ I need to do
: paging (2 million docs in the result)
: how is it doing it internally?
: when sorted by docid, don't we have the deep paging issue? (getting all the
: previous pages into memory to get the next page)
: so what's the main
: <entity name="f" processor="FileListEntityProcessor"
: baseDir="/sites/" fileName="promotions.xml"
:
: how do i set base dir to be solr.data.dir? on each server solr.data.dir is
: different. I use multi core solr instance
: we created a Solr instance which is connected to two databases, and we created a
: jQuery autocomplete. In the two databases we have keywords, and that is the
: default search. So beside the search button we are creating more
: drop-down lists named after the two databases; when the user clicks one
: database
: For example, a document with the field numberOfParticipant at 10, i would
: like to have some similar documents with numberOfParticipant between 5 and
: 15.
:
: Does this option exist ?
No ... MLT works purely on the basis of terms, so if you tried have
MLT use a numeric field it would just
: I was not referring to Lucene's doc ids but the doc numbers (unique key)
Uh... OK. This is why I asked you what you were planning on doing with
the OpenBitSet -- it's just bits, indicating the offsets of the documents
in the total index. Having access to a copy of that on the client side
: *What I want:* to change the output by embedding the highlighting properties
: into the response properties, such that the response part looks like:
Work along the lines of making this a generally available feature is
already in progress on the trunk as part of the pseudo-fields work
: Trunk builds and tests fine but 3.3 fails the test below
...
: NOTE: reproduce with: ant test -Dtestcase=ContentStreamTest
: -Dtestmethod=testURLStream
: -Dtests.seed=743785413891938113:-7792321629547565878
...
: java.net.ConnectException: Connection timed out: connect
Hi All
Does anybody know when version 4.0 will be released?
From https://issues.apache.org/jira/browse/SOLR-1873, this feature will be
fixed in version 4.0. It's very important to me because I'm using SolrCloud in
a real project.
Hi Chris
I understood how to handle this now.
I tried and I am getting what I wanted.
Thanks for a very detailed explanation. I reversed the asc/desc part and
was wondering why it was not working as I wanted. After seeing the latest
mail, I figured out my mistake.
Thanks once again!
On Fri, Sep