I don’t want to add separate fields since I have many dates to index. How can I
index it as a timestamp and use a function query? Any example or documentation?
Regards,
Albert
From: Emir Arnautović
Sent: Wednesday, March 14, 2018 5:38 PM
To: solr-user@lucene.apache.org
Subject: Re: solr query
Hi
I can use NOW/MONTH and NOW/YEAR to get the start of month/year, but how can I
get the current month regardless of year? Like the use case: people whose
birthdate is this month?
Regards,
Albert
From: Emir Arnautović
Sent: Wednesday, March 14, 2018 5:26 PM
To: solr-user@lucene.apache.org
Subject: Re: solr query
Hi Albert,
It does - you can use NOW/MONTH and NOW/YEAR to get the start of month/year.
Here is reference to date math:
https://lucene.apache.org/solr/guide/6_6/working-with-dates.html#WorkingwithDates-DateMathSyntax
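Emir's date-math suggestion can be sketched as a query; the field name `birthdate` here is hypothetical. Note that date math alone matches this calendar month of the current year; matching a birthday month across all years generally needs the month indexed separately, which is what the original poster hoped to avoid.

```python
from urllib.parse import urlencode

# "birthdate" is a hypothetical field name. NOW/MONTH rounds the current
# time down to the start of the month, so the range covers this month only.
params = {
    "q": "*:*",
    "fq": "birthdate:[NOW/MONTH TO NOW/MONTH+1MONTH]",
}
query_string = urlencode(params)
print(query_string)
```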
Dear Solr,
I want to know whether Solr supports querying by this year or this month.
If so, how do I do that?
Thanks.
Regards,
Albert
Hi Ivan,
You might be able to use complexphrase query parser to get what you need, you
can test something like this:
{!complexphrase df=my_field}"Leonardo -(da Vinci)"
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
Markus Jelsma-2 wrote
> You can abuse phrase query for that, q=leonardo AND -"leonardo da vinci"
> (assuming you have a proper default field set).
>
> Markus
This way I'm losing results where I have both "Leonardo" and "Leonardo da
vinci" in the same field; see example number 3, "Leonardo foo bar Leonardo da vinci".
Subject: Search for a word NOT followed by another on a Solr query
What I'm trying to do is to only get results for "Leonardo" when it is not
followed by "da vinci".
If I have "Leonardo da vinci" in my results, that's fine as long as I have
another "Leonardo" without "da vinci".
Examples:
"Leonardo foo bar" OK
"Leonardo da vinci foo bar" KO
"Leonardo foo bar Leonardo da vinci" OK
Hi all,
We would like to perform a benchmark of
https://issues.apache.org/jira/browse/SOLR-11831
The patch improves the performance of grouped queries asking only for one
result per group (aka. group.limit=1).
I remember seeing a page showing a benchmark of the query performance on
Wikipedia,
On 1/25/2018 1:04 AM, Mahesh Gupta wrote:
can any one help me on writing equalent cql solr query of below "http api
based json solr query"
CQL is a query language for Cassandra, not Solr. You'll need to talk to
that project, not this one.
CQL will not be involved with a Solr query.
Hi Team,
Can anyone help me write the equivalent CQL for the below HTTP-API-based
JSON Solr query?
My Running json solr query:
http://hostname:8983/solr/tablename/select?q=*%3A*&fq=pre_upgrade%3A%22yes%22&wt=json&group=true&group.field=product_family
Expecting: the equivalent CQL
I have never been a big fan of " getting N results from Solr and then filter
them client side" .
I get your point about the document modelling, so I will assume you properly
tested it and having the small documents at Solr side is really not
sustainable.
I also appreciate the fact you want to
To: solr-user@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results
And you want to show to the users only the Lucene documents that matched the
original query sent to Solr? (what if a lucene document matches only part of
the query?)
From: solr-user@lucene.apache.org At: 01/23/18 13:55:46To: Diego
@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results
Hi Diego,
Basically, each Solr document has a text field , which contains large amount of
text separated by some delimiters. I split this text into parts and then assign
each part to a separate lucene Document object.
The field
Subject: RE: Using lucene to post-process Solr query results
Rahul, can you provide more details on how you decide that the smaller lucene
objects are part of the same solr document?
From: solr-user@lucene.apache.org At: 01/23/18 09:59:17To:
solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query
I will look at streaming expressions, looks interesting.
Regards,
Rahul Chhiber
-Original Message-
From: Atita Arora [mailto:atitaar...@gmail.com]
Sent: Tuesday, January 23, 2018 3:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results
Hi Rahul,
Looks like
Hi Rahul,
Looks like streaming expressions can probably help you.
Is there something else you have tried for this?
Atita
On Jan 23, 2018 3:24 PM, "Rahul Chhiber"
wrote:
Hi All,
For our business requirement, once our Solr client (Java) gets the results
Hi All,
For our business requirement, once our Solr client (Java) gets the results of a
search query from the Solr server, we need to further search across and also
within the content of the returned documents. To accomplish this, I am
attempting to create on the client-side an in-memory Lucene index.
On 10/17/2017 5:53 PM, Phillip Wu wrote:
> I've indexed a lot of documents (*.docx & *.vsd).
>
> When I run a query from the website it returns only a small proportion of the
> data in the index:
> {
> "responseHeader":{
> "status":0,
> "QTime":66,
> "params":{
>"q":"NS Finance 9.2",
>
bq: Is this expected behavior where it returns only a subset of the
documents it has found?
No. But there is _so_ much you're leaving out here that it's totally
impossible to say much.
bq: I've indexed a lot of documents (*.docx & *.vsd).
how? Tika? ExtractingRequestHandler? Some custom code?
Hi,
I've indexed a lot of documents (*.docx & *.vsd).
When I run a query from the website it returns only a small proportion of the
data in the index:
{
"responseHeader":{
"status":0,
"QTime":66,
"params":{
"q":"NS Finance 9.2",
"fl":"id,date",
"start":"0",
"_":"1508193512223"}},
You can add a ~3 to the query to allow the order to be reversed, but you
will get extra hits. Maybe it is a ~4, i can never remember on phrases and
reversals. I usually just try it.
Alternatively, you can create a custom query field for what you need from
dates. For example, if you want to
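Erick's slop suggestion could be sketched like this; the field name `title` is hypothetical, and as he says, the right slop value (3 vs 4) is something to experiment with:

```python
from urllib.parse import urlencode

# "title" is a hypothetical field. The number after ~ is the phrase slop,
# i.e. how many position moves are allowed, which permits reversed order.
def phrase_query(phrase: str, slop: int) -> str:
    return urlencode({"q": f'title:"{phrase}"~{slop}'})

print(phrase_query("da vinci", 3))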
What field types are you using for your dates?
Have a look at:
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates
On Thu, Aug 17, 2017 at 10:08 AM, Nawab Zada Asad Iqbal
wrote:
> Hi Krishna
>
> I haven't used date range queries myself. But if Solr only
Hi Krishna
I haven't used date range queries myself. But if Solr only supports a
particular date format, you can write a thin client for queries, which will
convert the date to solr's format and query solr.
Nawab
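Nawab's "thin client" idea, sketched: convert an input like 2017/03/15 into Solr's canonical ISO-8601 form before building the query.

```python
from datetime import datetime, timezone

# Parse the slash format used in the documents and emit Solr's
# canonical UTC date-time string.
def to_solr_date(text: str) -> str:
    dt = datetime.strptime(text, "%Y/%m/%d").replace(tzinfo=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_solr_date("2017/03/15"))  # 2017-03-15T00:00:00Z
```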
On Thu, Aug 17, 2017 at 7:36 AM, chiru s wrote:
> Hello
Hello guys
I am working on Apache solr and I am stuck with a use case.
The input data will be in the documents like 2017/03/15 in 1st document,
2017/04/15 in 2nd doc,
2017/05/15 in 3rd doc,
2017/06/15 in 4th doc so on
But while fetching the data it should fetch like 03/15/2017 for the first
with
core1.empid.
Below is my solr query to join core2.sid = core3.sid, but do not know how to
write the query to join
core1.empid = core2.empid and core2.pid = core3.pid.
http://localhost:8983/solr/core2/select?q={!join from=sid to=sid
fromIndex=core3 v='*:*'}
Currently I am not able to write the Solr
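One possible shape for the two-hop join, and it is an untested assumption rather than something from this thread, is to nest one {!join} inside another via parameter dereferencing:

```python
from urllib.parse import urlencode

# Untested sketch: the inner join maps core3.pid -> core2.pid, and the outer
# join maps core2.empid -> core1.empid; $inner dereferences the inner query.
inner = "{!join from=pid to=pid fromIndex=core3 v='*:*'}"
outer = "{!join from=empid to=empid fromIndex=core2 v=$inner}"
params = {"q": outer, "inner": inner}
query_string = urlencode(params)
print(query_string)
```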
: I could have sworn I was paraphrasing _your_ presentation Hoss. I
: guess I did not learn my lesson well enough.
:
: Thank you for the correction.
Trust but verify! ... we're both wrong.
Boolean functions (like lt(), gt(), etc...) behave just like sum() -- they
"exist" for a document if and
Bother,
I could have sworn I was paraphrasing _your_ presentation Hoss. I
guess I did not learn my lesson well enough.
Thank you for the correction.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 1 June 2017 at 15:26, Chris Hostetter
: Because the value of the function will be treated as a relevance value
: and relevance value of 0 (and less?) will cause the record to be
: filtered out.
I don't believe that's true? ... IIRC 'fq' doesn't care what the scores
are as long as the query is a "match" and a 'func' query will match
Function queries:
https://cwiki.apache.org/confluence/display/solr/Function+Queries
The function would be sub(value,cost).
Then you want its result mapped to an fq. It could probably be as simple
as fq={!func}sub(value,cost).
Because the value of the function will be treated as a relevance value
and relevance
Hi,
I have 2 fields, "cost" and "value", in my records. I want to get all
documents that have "value" greater than "cost". Something like
q=value:[cost TO *]
Please advise.
Thanks
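The {!func} approach discussed above can also be written with the frange parser, which makes the "strictly greater" requirement explicit; this variant is my sketch, not from the thread:

```python
from urllib.parse import urlencode

# frange keeps documents whose function value lies in a range; l=0 with
# incl=false requires value - cost > 0 strictly.
params = {
    "q": "*:*",
    "fq": "{!frange l=0 incl=false}sub(value,cost)",
}
query_string = urlencode(params)
print(query_string)
```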
r
of the option where you can add your additional data to your
search results which is not present in Solr and return accordingly...
On Wed, May 31, 2017 at 7:58 AM, mganeshs <mgane...@live.in> wrote:
> Hi,
>
> In my use case, we need to validate the solr query which is getting fi
Hi,
In my use case, we need to validate the solr query which is getting fired to
SOLR in the solr layer.
Validation like: we want certain fields to always be passed in the query, and
we don't want certain other fields to be passed in the query.
Where is the right place to do this in Solr? Currently we
On 5/10/2017 12:33 AM, Adnan Shaikh wrote:
> Thanks Alexandre for the update.
>
> Please help me to understand the other part of the query as well , if there
> is any limit to how many values we can pass for a key.
The limit is not the number of values, but the size of the request in bytes.
A
know that Solr can accept the query parameters in
> the POST body as well.
>
> Regards,
> Alex.
>
> On 9 May 2017 at 10:19, Adnan Shaikh <adnanj.sha...@gmail.com> wrote:
Hello Team,
Have a query pertaining to how many values are we able to pass in a Solr query.
Can we please find out if:
1. There is a limit to the number of characters that we can pass in a
Solr query field?
2. Is there a limit to how many values we can pass for the one key?
Thanks,
Mohammad
Thanks everyone for taking time to respond to my email. I think you are
correct in that the query results might be coming from main memory as I
only had around 7k queries.
However it is still not clear to me, given that everything was being
served from main memory, why it is that I am not able to
On 4/28/2017 12:43 PM, Toke Eskildsen wrote:
> Shawn Heisey wrote:
>> Adding more shards as Toke suggested *might* help,[...]
> I seem to have phrased my suggestion poorly. What I meant to suggest
> was a switch to a single shard (with 4 replicas) setup, instead of the
>
Beautiful, thank you.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use the JMeter plugins. They’ve been reorganized recently, so
> Davis, Daniel (NIH/NLM) [C] <daniel.da...@nih.gov> wrote:
Walter,
If you can share a pointer to that JMeter add-on, I'd love it.
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Friday, April 28, 2017 2:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Query Performance benchmarking
I use production logs
I use production logs to get a mix of common and long-tail queries. It is very
hard to get a realistic distribution with synthetic queries.
A benchmark run goes like this, with a big shell script driving it.
1. Reload the collection to clear caches.
2. Split the log into a cache warming set
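Steps 1-2 of Walter's run could be sketched like this (Python here for illustration; he drives it from a shell script, and the 20% warm-up split is an arbitrary assumption):

```python
# Split a production query log into a cache-warming set and a measured set.
# The 0.2 warm-up fraction is an arbitrary example value.
def split_log(lines, warmup_fraction=0.2):
    cut = int(len(lines) * warmup_fraction)
    return lines[:cut], lines[cut:]

queries = [f"q=term{i}" for i in range(10)]
warmup, benchmark = split_log(queries)
print(len(warmup), len(benchmark))  # 2 8
```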
Shawn Heisey wrote:
> Adding more shards as Toke suggested *might* help,[...]
I seem to have phrased my suggestion poorly. What I meant to suggest was a
switch to a single shard (with 4 replicas) setup, instead of the current 2
shards (with 2 replicas).
- Toke
Well, the best way to get no cache hits is to set the cache sizes to
zero ;). That provides worst-case scenarios and tells you exactly how
much you're relying on caches. I'm not talking the lower-level Lucene
caches here.
One thing I've done is use the TermsComponent to generate a list of
terms
(aside: Using Gatling or Jmeter?)
Question: How can you easily randomize something in the query so you get no
cache hits? I think there are several levels of caching.
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
re: the q vs. fq question. My claim (not verified) is that the fastest
of all would be q=*:*&fq={!cache=false}... That would bypass the scoring
that putting it in the "q" clause would entail, as well as bypass the
filter cache.
But I have to agree with Walter, this is very suspicious IMO. Here's
what
More “unrealistic” than “amazing”. I bet the set of test queries is smaller
than the query result cache size.
Results from cache are about 2 ms, but network communication to the shards
would add enough overhead to reach 40 ms.
wunder
Walter Underwood
wun...@wunderwood.org
On 4/27/2017 5:20 PM, Suresh Pendap wrote:
> Max throughput that I get: 12000 to 12500 reqs/sec
> 95 percentile query latency: 30 to 40 msec
These numbers are *amazing* ... far better than I would have expected to
see on a 27GB index, even in a situation where it fits entirely into
available
On Thu, 2017-04-27 at 23:20 +, Suresh Pendap wrote:
> Number of Solr Nodes: 4
> Number of shards: 2
> replication-factor: 2
> Index size: 55 GB
> Shard/Core size: 27.7 GB
> maxConnsPerHost: 1000
The overhead of sharding is not trivial. Your overall index size is
fairly small, relative to
Hi,
I am trying to perform Solr query performance benchmarking, measuring the
maximum throughput and latency that I can get from a given Solr cluster.
Following are my configurations:
Following are my configurations
Number of Solr Nodes: 4
Number of shards: 2
replication-factor: 2
Index size: 55 GB
Shard/Core size
Hi Emir,
Grouping is exactly what I wanted to achieve. Thanks!!
Thank you,
Vrinda Davda
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Query-Suggestion-tp4323180p4323743.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
>>>>> get the top 10 items first, before I run the JSON Facet to get the
>>>>> total amount and average amount for those 10 items.
>>>>>
>>>>> Regards,
>>>> You want to fetch the top 10 results for your query, and allow the user
>>>> to navigate only those 10 results through facets?
>>>>
>>>> Which facets are you interested in?
Why you would need an additional query?
Cheers
-----
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
>> shouldn't be that problematic.
>> Are we missing something? Why you would need an additional query?
>>
>> Cheers
, you can't determine whether this will make it into
the top x elements or not, since more results will come.
-Michael
Am 17.02.2017 um 05:00 schrieb Zheng Lin Edwin Yeo:
> Hi,
>
> Would like to check, is it possible to do a select of say TOP 10 items from
> Solr query, and use the list
Hi,
Would like to check, is it possible to do a select of say TOP 10 items from
Solr query, and use the list of the items to do another query (Eg: JSON
Facet)?
Currently, I'm using a normal facet to retrieve the list of the TOP 10 item
from the normal faceting.
After which, I have to list out
On 2/2/2017 6:16 AM, deepak.gha...@mediawide.com wrote:
> I am writting query for getting response from specific index content first.
> eg.
> http://192.168.200.14:8983/solr/mypgmee/select?q=*blood*&fq=id:(*/939/* OR
> **)&fl=id&wt=json&indent=true
>
> In above query I am getting response, Means suppose I Get
Hello Sir,
I am writing a query to get responses from specific index content first.
E.g.:
http://192.168.200.14:8983/solr/mypgmee/select?q=*blood*&fq=id:(*/939/* OR
**)&fl=id&wt=json&indent=true
In the above query I am getting a response. Suppose I get 4 results for course
"939" out of 10. It works fine by
free plugin:
Solr Query Debugger
https://chrome.google.com/webstore/detail/solr-query-debugger/gmpkeiamnmccifccnbfljffkcnacmmdl
Solr Query Debugger aims to help Solr developers and users with queries.
You can modify Solr queries, execute, debug and, very important, see the
explain in a clear
st is 2MB, since about version 4.1. Before that version, it was
>> controlled by the container config, not Solr. This can be adjusted with
>> the formdataUploadLimitInKB setting in solrconfig.xml. The default value
>> for this is 2048, resulting in the
> https://cwiki.apache.org/confluence/display/solr/RequestDispatcher+in+SolrConfig
>
> Thanks,
> Shawn
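The setting Shawn mentions lives in the requestDispatcher section of solrconfig.xml. A sketch raising the limit (8192 KB is an arbitrary example value, not from the thread):

```xml
<!-- solrconfig.xml: raise the POST form data limit from the 2048 KB default -->
<requestDispatcher>
  <requestParsers formdataUploadLimitInKB="8192" />
</requestDispatcher>
```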
On 1/12/2017 9:36 AM, 武井宜行 wrote:
> My Application throws too large query to solr server with solrj
> client.(Http Method is Post)
>
> I have two questions.
>
> At first,I would like to know the limit of clauses of Boolean Query.I Know
> the number is restricted to 1024 by default, and I can
That doesn't seem like an efficient use of a search engine. Maybe what you
want to do is use streaming expressions to process some data:
https://cwiki.apache.org/confluence/display/solr/Streaming+Expressions
k/r,
Scott
On Thu, Jan 12, 2017 at 11:36 AM, 武井宜行 wrote:
> Hi,all
>
Hi all,
My application sends a very large query to the Solr server with the SolrJ
client (HTTP method is POST).
I have two questions.
First, I would like to know the limit on the number of clauses in a Boolean
query. I know the number is restricted to 1024 by default, and I can
increase the limit by setting
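The 1024-clause default mentioned above is configured in the query section of solrconfig.xml; a sketch raising it (4096 is an arbitrary example value):

```xml
<!-- solrconfig.xml: raise the default 1024-clause Boolean query cap -->
<query>
  <maxBooleanClauses>4096</maxBooleanClauses>
</query>
```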
Hmmm, I have to check something.
It seems that it's not an error.
There are some zip files which are indexed, and on the admin page
all fields are fetched, including the contents ... and the zip
document has a really big content :O
Quoting sn0...@ulysses-erp.com:
Hello - an hour ago,
Hello - an hour ago, Solr worked fine; I had about 2 documents in
the index.
I had run an update/extract process from the batch, and saw that one
document had blocked the batch.
I waited for about 2 minutes, then I killed the update batch process.
After a restart of the server, I started
Hi everyone, hope you all had a great Christmas!
I'm having trouble converting an example MySQL script into a Solr query.
Here's my preliminary query:
select vendorItem, min(unitPrice), max(unitPrice), -(min(unitPrice) -
> max(unitPrice)) as `diff`
> from transactions
> where orgId
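The per-vendorItem min/max aggregation above maps naturally onto a JSON terms facet with min/max sub-facets; the field names come from the MySQL query, but the mapping itself is my sketch, not from the thread:

```python
import json

# Field names come from the MySQL query; the terms facet groups by
# vendorItem, and min/max sub-facets aggregate unitPrice per group.
json_facet = {
    "vendors": {
        "type": "terms",
        "field": "vendorItem",
        "facet": {
            "minPrice": "min(unitPrice)",
            "maxPrice": "max(unitPrice)",
        },
    }
}
print(json.dumps(json_facet))
```

The resulting JSON string would be sent as the json.facet request parameter.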
Hi I am encountering the following exception with Solr 6.2.1
occasionally when querying the index. The exception causes the request
to fail.
java.lang.NegativeArraySizeException
at
org.apache.lucene.search.FieldComparator$LongComparator.<init>(FieldComparator.java:406)
at
- that
should not slow down query time.
The more indexed fields you have, the more likely you are to create a more
complex query, and a more complex query means a slower query time.
Assuming you are interested in end-user query time, not just Solr query
time: the more stored fields, the more likely that more fields are returned.
More
Hi,
I have 120 fields in a single document and I am indexing all of them,
i.e. indexed=true and stored=true in my schema.
I need to understand how that might be affecting my query time overall.
What is the relation between query time and indexing all fields in the
schema?
Regards,
Thanks Shawn for your insight!
On Fri, Jul 22, 2016 at 6:32 PM, Shawn Heisey wrote:
> On 7/22/2016 12:41 AM, Shyam R wrote:
> > I see that SOLR returns status value as 0 for successful searches
> > org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> >
On 7/22/2016 12:41 AM, Shyam R wrote:
> I see that SOLR returns status value as 0 for successful searches
> org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> webapp=/solr path=/user/ping params={} status=0 QTime=0 I do see that
> the status come's back as 400 whenever the search is
All,
I see that SOLR returns status value as 0 for successful searches
org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1] webapp=/solr
path=/user/ping params={} status=0 QTime=0
I do see that the status comes back as 400 whenever the search is invalid
(invoking a query with
Perfect - got it.
Now I know where I stand when I'm writing the bq, bf, etc...
Thank you very much.
On Thu, Jun 23, 2016 at 1:35 PM, Erick Erickson
wrote:
> bq:
>
> My big question however is whether I can
> trust that the Boost Functions and
> Boost Query are running
bq:
My big question however is whether I can
trust that the Boost Functions and
Boost Query are running against the entire index
In a word "no". The bf's are intended to alter
the score of the documents found by the primary query
by multiplying the function results by the raw score.
If a
Oh - gotcha... Thanks for taking the time to reply. My use of the phrase
"sub query" is probably misleading...
Here's the XML (below). I'm calling the Boost Query and Boost Function
statements "sub queries"...
The thing I was referencing was this -- where I create an "alias" for the
query
John:
I'm not objecting to the XML, but to the very presence of "more than
one query in a request handler". Request handlers don't have, AFAIK,
"query chains". They have a list of defaults for the _single_ query
being sent at a time to that handler. So having
blah blah
is something I've never
Hi Erick -
I was trying to simplify and not waste anyone's time parsing my
requestHandler... That is, as you imply, bogus xml.
The basic question is: If I have two "sub queries" in a single
requestHandler, do they both run independently against the entire index?
Alternatively, is there some
Where are you seeing that this does anything? It wouldn't be the first time
new functionality happened that I totally missed, but I've never seen that
config.
You might get some mileage out of ReRankingQParserPlugin though, that runs
the top N queries from one query through another.
Best,
Erick
Hi all,
I have a question about whether sub-queries in Solr requestHandlers go
against the total index or against the results of the previous query.
Here's a simple example:
{!edismax qf=blah, blah}
{!edismax qf=blah, blah}
My question is:
What does Query2 run
Thanks for the suggestion. At this time I wont be able to change any code
in the API ...my options are limited to changing things at the solr level.
Any suggestions regarding solr settings in config or schema changes are
something in my control.
On Fri, May 27, 2016 at 7:03 AM, Ahmet Arslan
Hi Jay,
Please separate the clauses. Feed one of them to the main q parameter with
the constant score operator ^=, since you are sorting on a structured field (e.g. date):
q=fieldB:(123 OR 456)^=1.0
fq=dt1:[date1 TO *]
fq=dt2:[* TO NOW/DAY+1]
fq=fieldA:abc
sort=dt1 asc,field2 asc,fieldC desc
Play with the
I updated almost 1/3 of the data and ran my queries with new columns as
mentioned earlier. The query returns data in almost half the time as
compared to before.
I am thinking that if I update all the columns there would not be much
difference in query response time.
Are there any
Hi,
Thanks for the feedback. The queries I run are very basic filter queries
with some sorting:
q=*:*&fq=(dt1:[date1 TO *] && dt2:[* TO NOW/DAY+1]) && fieldA:abc &&
fieldB:(123 OR 456)&sort=dt1 asc,field2 asc,fieldC desc
I noticed that the date fields (dt1, dt2) are using date instead of tdate
fields &
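Ahmet's earlier advice to separate the combined boolean into individual fq parameters (so Solr can cache each filter independently) could look like this; field names are taken from the message above, and "date1" is left as the original placeholder:

```python
from urllib.parse import urlencode

# Splitting one combined fq into several lets Solr cache each filter
# independently; repeated "fq" keys become repeated request parameters.
params = [
    ("q", "*:*"),
    ("fq", "dt1:[date1 TO *]"),
    ("fq", "dt2:[* TO NOW/DAY+1]"),
    ("fq", "fieldA:abc"),
    ("fq", "fieldB:(123 OR 456)"),
    ("sort", "dt1 asc,field2 asc,fieldC desc"),
]
query_string = urlencode(params)
print(query_string)
```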