hi
I tried to apply the patch with the following command on the console:
patch -p0 < SOLR-236-trunk.patch
patching file
solr/src/java/org/apache/solr/handler/component/CollapseComponent.java
patching file
solr/src/test/test-files/solr/conf/solrconfig-fieldcollapse.xml
patching file
I tried to apply the patch from a different src folder, but it asks for the file name to
patch..
and I think some dependent patch may also be required..
Please guide me in applying SOLR-236.patch (with the dependent patch, if any).
If required, I will download the src code from trunk again..
thanks
On Wed, May 12,
Hi,
Thanks Eric..
The search parameters are too long to send via GET, so I am thinking of
opting for POST. Is it possible to do a POST request to Solr? Are any
configuration changes or code changes required for the same? I have many
parameters but only one is supposed to be very lengthy.
Any
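Solr's search handlers do accept parameters in the body of a form-encoded HTTP POST, which sidesteps GET URL-length limits. A minimal hedged sketch in Python; the host/port, core path, and parameter values below are illustrative assumptions, not from this thread:

```python
# Hedged sketch: send search parameters in a POST body instead of the URL.
# The URL and parameter values are made up for illustration.
from urllib.parse import urlencode

params = {
    "q": "title:(" + " OR ".join("term%d" % i for i in range(3)) + ")",
    "rows": "10",
    "wt": "xml",
}
body = urlencode(params)  # form-encoded body; can grow far beyond GET limits

# To actually send it (requires a running Solr instance):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8983/solr/select",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/x-www-form-urlencoded"},
# )
# print(urllib.request.urlopen(req).read())
print(body)
```

Only the one lengthy parameter needs to move into the body, but sending them all this way is simplest.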
Hello Solr community,
We are considering Solr for searching on content from various partners
with wildly different content.
Is it possible or practical to work with multi-valued associated fields like
this?
Make:Audi, Model:A4, Color:Blue, Year:1998, KM:20, Extras:GPS
Type:Flat, Rooms:2,
Hi,
You can use localsolr (http://www.gissearch.com/localsolr) that supports
sharding if you need this feature.
Marco Martínez Bautista
http://www.paradigmatecnologico.com
Avenida de Europa, 26. Ática 5. 3ª Planta
28224 Pozuelo de Alarcón
Tel.: 91 352 59 42
2010/5/11 Jean-Sebastien Vachon
Hello Eric,
Certainly it is possible. I would strongly advise having a field which
differentiates the record type (RECORD_TYPE:CAR / PROPERTY).
In general, I was also wondering how Solr developers implement websites
that
use tag filters. For example, a user clicks on Hard drives and then gets tags
Hi,
2nd solution:
Don't use multiValued fields; instead use two single-valued fields. In your
example this would be:
doc1:
dept: student1
city: city1
principalFlag:T
doc2:
dept: student2
city: city2
principalFlag:F
So, if you search without specifying any city or dept, you should put
principalFlag:T so you don't get
hi Marco,
Thanks for the quick reply..
I have another doubt about the 2nd solution: how do I set the flag for a
duplicate value? I am not sure about the number of duplicate rows (it could be
a random number..),
so how can I set the flag..
thank
On Wed, May 12, 2010 at 12:59 PM, Marco Martinez
Hi Eric
I caught the NPE in the NonAdjacentDocumentCollapser class, and now it does
return the data field-collapsed.
However, I cannot promise how accurate or correct this fix is because I have not
had a lot of time to study all the code.
It would be best if some of the experts could give us a
You should do a preprocessing step (multiply your document into as many
documents as there are values in your multivalued field, with principalFlag:T
in your first document) before you index the data with that logic.
Marco Martínez Bautista
http://www.paradigmatecnologico.com
Avenida de Europa, 26.
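The preprocessing step Marco describes can be sketched like this (a hedged illustration; the field names come from the thread's example, the helper function itself is made up):

```python
# Hedged sketch: expand one document with parallel multivalued fields into
# one single-valued document per value pair, marking only the first copy
# with principalFlag:T so searches without a dept/city filter can add
# fq=principalFlag:T and avoid duplicates.
def expand(doc, multi_fields, flag_field="principalFlag"):
    out = []
    for i, values in enumerate(zip(*(doc[f] for f in multi_fields))):
        copy = {k: v for k, v in doc.items() if k not in multi_fields}
        copy.update(dict(zip(multi_fields, values)))
        copy[flag_field] = "T" if i == 0 else "F"
        out.append(copy)
    return out

docs = expand({"dept": ["student1", "student2"],
               "city": ["city1", "city2"]},
              ["dept", "city"])
# docs[0] -> {'dept': 'student1', 'city': 'city1', 'principalFlag': 'T'}
```

This answers the "random number of duplicates" concern above: the loop handles any number of values, and the flag is set purely by position.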
Hi Aditya,
Thanks for your response.
Yes, a category type would be needed.
One thing I am not clear about,
If you have multi-values like toshiba, tecra, LCD
it is then clear that you can run solr queries like:
fq=mymultivaluefield:LCD
but for associated fields like:
make=toshiba,
Hello..
How can I boost docs where the field is_highlight = 1 in my DIH?
Can I put an if-block into the DIH?
I have not found much about this.. =(
th
--
View this message in context:
http://lucene.472066.n3.nabble.com/Boost-at-IndexTime-for-special-docs-tp812277p812277.html
Sent from the
Thanks Erick for your response. Here is the debugQuery output:
?xml version=1.0 encoding=UTF-8 ?
response
lst name=responseHeader
int name=status0/int
int name=QTime15/int
lst name=params
str name=debugQuerytrue/str
str name=indenton/str
str
Sorry the previous copy paste of the output seems to be messed up. Hope this
one is better:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">15</int>
<lst name="params">
<str name="debugQuery">true</str>
<str name="indent">on</str>
<str name="start">0</str>
Hi,
On 07.05.2010 22:47, Chris Hostetter wrote:
So it's the full request time, and would be inclusive of any postCommit
event handlers -- that's important to know. The logs will help clear up
whether the underlying commit is really taking up a large amount of time
or if it's some postCommit
Sorry, the problem was sitting in front of the monitor!
It is not an error or anything; I forgot that some documents didn't have all
fields filled,
so it's absolutely normal that not all fields were in the result.
markus
-Original Message-
From:
Hello,
I was doing some more testing but I could not find a definitive reason for
this behavior. The following is my transformer:
public Map<String, Object> transformRow(Map<String, Object> row, Context context) {
    List<Map<String, String>> fields = context.getAllEntityFields();
Mike,
This only happens when I attempt to do a delta-import without first deleting
the index dir before doing a full-index.
For example these will work correctly.
1) Delete /home/corename/data
2) Full-Import
3) Delta-Import
However, if I attempt to do the following, it will result in an error
Hmmm, nothing looks odd about that, except perhaps the casing. If you use
the admin
console to look at the raw terms, is productbean mixed case or all lower
case? If the
latter, that would explain things
Be a bit cautious because if you look at the *stored* data it will be in
mixed case, but
Are you reusing the context object? It may help if you can paste the relevant
part of your code
On 10 May 2010 19:03, ahammad ahmed.ham...@gmail.com wrote:
I have a Solr core that retrieves data from an Oracle DB. The DB table has a
few columns, one of which is a Blob that represents a PDF document.
I'm not sure I understand how your results are truncated. They both find 21502
documents. The fact that you are sorting on '_erstelldatum' ascending and not
seeing any results for that field on the first page leads me to think that you
have 'sortMissingLast=false' on that field's fieldType. In
Sorry Erick, can you tell me how to find the raw *indexed* terms from the admin
console? I am not familiar with the admin console.
Thanks,
On May 12, 2010, at 10:18 AM, Erick Erickson wrote:
Hmmm, nothing looks odd about that, except perhaps the casing. If you use
the admin
console to look
Is the cleanup of indexes using Solr 1.4 Replication documented
somewhere? I can't find any information regarding this at:
http://wiki.apache.org/solr/SolrReplication
Too many snapshot indexes are being left around, and so they need to
be cleaned up.
Hello,
I am not reusing the context object. The remaining part of the code takes in
a Blob object, converts it to a FileInputStream, and reads the contents
using PDFBox. It does not deal with anything related to Solr.
The Transformer doesn't even execute the remaining part of the code. It
Not til this evening, don't have a handy SOLR implementation to ping...
But another option is to get a copy of Luke and look at the index, but the
same caution
about seeing terms not stored data holds.
Or you could just try your troublesome search with all lower case for your
term (not field)
Hi
I ran into a replication issue yesterday and I have no explanation for it. I
see the following in my logs:
SEVERE: Unable to move index file from:
/my/dir/Solr/data/property/index.20100511050029/_3zj.fdt to:
/my/dir/Solr/data/property/index.20100511042539/_3zj.fdt
I restarted the subscriber
Hi, fellows!
I use field collapsing to collapse near-duplicate documents based on
document fuzzy signature calculated at index time.
The problem is that, when field collapsing is enabled, in query
response numFound is equal to the number of rows requested.
For instance, with solr example schema
We're running into an out of memory problem when sending a large file to our
SOLR server using the ContentStreamUpdateRequest. It appears that this
happens because when the request method of CommonsHttpSolrServer is called
(this is called even when using a StreamingUpdateSolrServer instance
I don't know if it's the best solution, but I have a field I facet on
called type (it's either 0 or 1). Combined with collapse.facet=before, I just
sum all the values of the facet field to get the total number found.
If you don't have such a field you can always add a field with a single value.
--joe
On Wed,
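Joe's summing trick can be sketched as follows (hedged: the dict below is a hand-made stand-in for a parsed Solr response, not real output; real responses flatten facets under facet_counts/facet_fields as value/count lists):

```python
# Hedged sketch: with collapse.facet=before, facet counts are computed
# before collapsing, so summing the counts of a field present on every
# document (here "type" with values 0/1) recovers the pre-collapse total.
response = {
    "facet_counts": {
        "facet_fields": {
            # Solr flattens facets as [value, count, value, count, ...]
            "type": ["0", 120, "1", 80],
        }
    }
}

counts = response["facet_counts"]["facet_fields"]["type"][1::2]
total_found = sum(counts)  # 200
```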
Erick,
I tried the lowercase search (productType:productbean) and I did not get any
results either. Luke shows ProductBean for the field; I'm not sure whether
it's the indexed term or the stored term. Does this mean this is not a case issue?
Here is the field definition in the schema:
field
Hi Erick,
Thank you for your thoughts.
I had exactly the same idea as your screenLCD suggestion (but with a
semicolon)
For example:
range1 range2 range3 range_flagsproperties
890 2001 range1;km, range2;kw, range3;year group;auto,
make;audi, model;a4,
The core name is set in solr.xml.
Start with the example/multicore directory in the solr distribution.
This shows how to set up multiple cores.
Also, spaces in URLs are translated as + signs, and maybe translated
back. People generally use alphanumeric and underscore names for
cores; these work
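The +/space translation Lance mentions is easy to see with Python's stdlib (the core name here is made up):

```python
# Spaces in URL query components are encoded as '+' and decoded back to
# spaces, which is why core names containing spaces invite trouble and
# alphanumeric/underscore names are the safe choice.
from urllib.parse import quote_plus, unquote_plus

encoded = quote_plus("my core")   # 'my+core'
decoded = unquote_plus(encoded)   # 'my core'
```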
Because leading negative clauses don't work. The (*:* AND x) syntax
means select everything AND also select x.
You could also do
(+category:xyz +price:[100 TO *]) -category:xyz
On Tue, May 11, 2010 at 12:36 PM, Satish Kumar
satish.kumar.just.d...@gmail.com wrote:
thanks Ahmet.
(+category:xyz
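The *:* workaround can also be applied client-side before sending the query; a hedged sketch (the whitespace clause-splitting below is a naive illustration, not a real Lucene query parser):

```python
# Hedged sketch: Lucene cannot evaluate a query consisting only of
# negative clauses, so prefix the match-all query *:* when every
# top-level clause is negated.
def fix_pure_negative(query):
    clauses = query.split()  # naive: assumes whitespace-separated clauses
    if clauses and all(c.startswith("-") for c in clauses):
        return "*:* " + query
    return query

fix_pure_negative("-category:xyz")      # '*:* -category:xyz'
fix_pure_negative("+price:[100 TO *]")  # unchanged
```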
I had the same problem as you last year, i.e. indexing stuff from different
sources with different characteristics. The way I approached it is by
setting up a multi-core environment, with each core representing one type of
data. Within each core, I had a data type sort of field that would define
: this test fails for requests built from a SimpleRequestParser or
: StandardRequestParser where the parameter key was given, but empty ( e.g.
: localhost:8393/select/?key=&para1=val1&parm2=val2 ).
:
: The reason is that oas.request.ServletSolrParams returns null for values with
: length() == 0,
:
: Is there some way to override the data directory in the Tomcat context file?
i don't believe so.
work was done a long time ago to support system property substitution when
the solrconfig.xml file is loaded, but i don't think that was ever
generalized to support JNDI values as well (which is
Hi Ahmed,
Interesting, I did not think of a multi-core approach.
I am not sure, but we might have up to 10 different kinds of data to contend
with like property, pets, farming, electronics, travel, auto, jobs, sport
etc that might complicate things.
Also, one practical limitation we have, is that
: Does anyone know if there is any way to create a new Core with specified
: properties or to alter and reload Core Properties for a Core without
: restarting the service?
:
: I tried to do this in three steps:
:
: 1) Create a new core;
:
: 2) Edit solr.xml directly to add
: However, I'd like to hear a comment on the approach of doing the parsing
: using Lucene and then constructing a SolrQuery from a Lucene Query:
I believe you are asking about doing this in the client code? using the
Lucene QueryParser to parse a string using an analyzer, then toString'ing
Hi Lance,
On Wed, May 12, 2010 at 11:48 AM, Lance Norskog goks...@gmail.com wrote:
The core name is set in solr.xml.
Ah. Ok. I'll look into that.
Start with the example/multicore directory in the solr distribution.
This shows how to set up multiple cores.
Do I need to set up multiple
: I got a fundamental understanding question that Mike's posting did not
: answer:
: You say q=apple iPhone qf=title^5 manufacturer mm=100% is correct.
: That means:
: title: iphone - matches iphone but not apple
: manufacturer: apple - matches apple but not iphone
: According to the query, at
: On the wiki, I've read something about MediaWiki indexation:
: http://wiki.apache.org/solr/DataImportHandler#Example:_Indexing_wikipedia
...
: I do not know MW database very well, but should the schema be really as
: simple as the one given on the above page?
that example demonstrates
In our deployment, we thought that complications might arise when attempting
to hit the Solr server with addresses of too many cores. For instance, we
have 15+ cores running at the moment. At the worst case, we will have to use
all 15+ addresses of all the cores to search all our data. What we
: pf has the same format as qf.
:
: <pf>titlePhrase^10.0</pf>
...uh, i'm not sure what that xml syntax is suppose to convey, but if you
are putting it in a solrconfig file as a default it would be...
<str name="qf">titlePhrase^10.0</str>
Note also that Dickens qf is also malformed...
: str
: <result name="response" numFound="0" start="0"/>
: <lst name="debug">
: <lst name="queryBoosting">
: <str name="q">productType:ProductBean</str>
: <null name="match"/>
: </lst>
...can you please disable the QueryElevationComponent and see if that
changes things?
: str
If you do go this route, I'd use something besides a colon, it's too easily
confused with a field delimiter. That's just *asking* for trouble...
Any non-alpha character you use will cause some grief if you choose an
incompatible Analyzer. Even upper/lower case can be split in ways
you wouldn't
Hi,
We are trying to use SOLR for searching our catalog online, and during QA we
came across an interesting case where SOLR is not returning results that it
should.
Specifically, we have indexed things like Title and Description, and some of the
words in the Title happen to be 'Prepaid' and 'Postpaid'.
Click the schema browser link on the admin page.
On the next page click
the fields link, then the field in question.
But first I'd do whatever Chris suggested.
BTW, the field definition you pasted isn't the one that
really counts here, fieldtype is the one that does, but in this case
the
Hmmm, there's not much information to go on here.
You might review this page:
http://wiki.apache.org/solr/UsingMailingLists
and post with more information. At minimum,
the field definitions, the query output (include
debugQuery=on), perhaps what comes out
of the analysis admin page for both
Hi,
Thanks for your response. Attached are the Schema.xml and sample docs
that were indexed. The query and response are as below. The attachment
Prodsku4270257.xml has a field paymenttype whose value is 'prepaid'.
query:
Thanks Hoss. Please see the query results as follows:
: <result name="response" numFound="0" start="0"/>
: <lst name="debug">
: <lst name="queryBoosting">
: <str name="q">productType:ProductBean</str>
: <null name="match"/>
: </lst>
...can you please disable the QueryElevationComponent and see if that
Anyone know of any way to accomplish (or at least simulate) this?
Thanks again
--
View this message in context:
http://lucene.472066.n3.nabble.com/MLT-Boost-Function-tp811227p813982.html
Sent from the Solr - User mailing list archive at Nabble.com.
Sorry, please discard my query results here, because I was playing with the
field type, changed it from string to text, and forgot to change it back.
I will change it back to string and post the query results shortly.
I apologize for the careless mistake.
Thanks,
Alex
On May 12, 2010, at
Found the problem! It's because all the values in the productType field have
trailing spaces, like this: 'ProductBean '. Thanks Hoss for your
suggestion of using a Luke query, which exposed the problem.
You guys are awesome!
Thanks,
Alex
On May 12, 2010, at 10:12 PM, Alex Wang wrote:
Sorry
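A cheap safeguard against this class of bug is to normalize values before they reach the index; a hedged sketch (the field names are from the thread, the helper itself is made up):

```python
# Hedged sketch: strip leading/trailing whitespace from string fields
# before indexing so values like "ProductBean " cannot silently break
# exact-match queries on string fields.
def trim_fields(doc):
    return {k: v.strip() if isinstance(v, str) else v for k, v in doc.items()}

doc = trim_fields({"productType": "ProductBean ", "id": "42"})
# doc["productType"] -> 'ProductBean'
```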
You are absolutely right. The fields have trailing spaces in them. Thanks Erick
for your time. Really appreciated!
Thanks,
Alex
On May 12, 2010, at 8:29 PM, Erick Erickson wrote:
Click the schema browser link on the admin page.
On the next page click
the fields link, then the field in
I have indexed person names in solr using synonym expansion and am getting a
match when I explicitly use that field in my query (name:query). However,
when I copy that field into another field using copyfield and search on that
field, I don't get a match. Below are excerpts from schema.txt. I am
Hi Rama,
What field types are these Title and Description?
You may go to the SOLR admin console and try Analysis: select the field type
that you have used for Title and Description, provide the words Prepaid
and Postpaid to the indexing analyzer, and see how it is storing the information.
Hi all,
I am forming a query to boost certain ids; the list of ids can go up to
2000. I sometimes get the 'too many clauses' error for the
boolean query, and otherwise I get a null page. Can you suggest any
config changes regarding this?
I am using solr 1.3.
Regards,
Pooja
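One common cause of the 'too many clauses' error is Lucene's maxBooleanClauses limit (1024 by default, raisable in solrconfig.xml). An alternative hedged sketch is to split the id list into batches below that limit and build one boost-query string per batch (the field name "id" and the ^2.0 boost factor are illustrative assumptions, and splitting changes how results combine, so this is only a sketch):

```python
# Hedged sketch: build boost-query strings in batches that stay under the
# maxBooleanClauses limit (default 1024 in solrconfig.xml).
def boost_batches(ids, limit=1000):
    for i in range(0, len(ids), limit):
        batch = ids[i:i + limit]
        yield " OR ".join("id:%s^2.0" % x for x in batch)

queries = list(boost_batches([str(n) for n in range(2000)]))
# len(queries) -> 2; each query boosts up to 1000 ids
```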
No, I am actually bothered about the query time (QTime, which shows in the Solr
log).
It is taking around 4-5 secs for each query, which returns 18 lakh (1.8 million)
records, even after I added the fl parameter to fetch the required fields.
I want to reduce this QTime further.
I am using Solr 1.3 with multicore (