Thanks Erick,
>>"You have to cache (or something) somewhere to make this work."
Actually, they are not interested in using a caching mechanism. They don't
need paging; they want only 10 records from a search over 1 million IDs,
with the rest handled in the background.
As of now I have implemented the terms query parser, but the result is
still too slow. I am still stuck on how to solve my problem: searching
millions of IDs with minimum response time.
@Upayavira, please elaborate.
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/how-to-search-miilions-of-record-in-solr-query-tp4248360p4248597.html
Sent from the Solr - User mailing list archive at Nabble.com.
@Ere Maijala
>>question is: WHY do you need to search for millions of IDs?
Let me explain:
I have a list of 1 million IDs. I search in Solr with something like
IP:8083/select?q=ID:(1,4,7,...up to 1 million)&rows=10&start=0, which
displays 10 results; for pagination, the next search moves start forward.
So still use Ere's suggestion. There's no reason at all to
search all million every time. If start=0, just search the first
N (say 1,000). Keep doing that until you don't get enough docs,
then add more IDs.
Or fire off the first query and then, when you know there is
going to be pagination, fire off the rest in the background.
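Erick's chunking idea could be sketched roughly like this (a hypothetical helper, not from the thread; the field name `id` and the chunk size are assumptions):

```python
def terms_query_chunks(ids, chunk_size=1000, field="id"):
    """Split a huge ID list into {!terms} queries of at most chunk_size
    IDs each, so each request stays small and fast."""
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        yield "{!terms f=%s}%s" % (field, ",".join(str(i) for i in chunk))

# 2,500 IDs split into chunks of 1,000 gives three queries; only the
# first is run up front, the rest only if the user keeps paging.
queries = list(terms_query_chunks(range(1, 2501), chunk_size=1000))
```

Each yielded string can be sent as the `q` (or an `fq`) of a normal select request, and the next chunk is only fetched when the earlier ones stop producing enough documents.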
Well, if you already know that you need to display only the first 20
records, why not search for only those? Or, if you don't know whether they
even exist, search for, say, a hundred, then a thousand, and so on until
you have enough.
Nevertheless, what's really needed for a good answer or ideas is a
description of the actual use case.
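Ere's grow-until-enough approach might look something like this (a sketch only; `search` stands in for whatever function runs the real Solr query):

```python
def fetch_enough(search, ids, needed=20, initial=100, factor=10):
    """Query a growing prefix of the ID list (100, then 1,000, ...) until
    `needed` documents are found or the whole list has been tried."""
    size = initial
    while True:
        docs = search(ids[:size])
        if len(docs) >= needed or size >= len(ids):
            return docs[:needed]
        size *= factor

# Toy stand-in for the real query: only every 50th ID exists in the index.
fake_search = lambda subset: [i for i in subset if i % 50 == 0]
docs = fetch_enough(fake_search, list(range(1, 100001)), needed=20)
```

In this toy run the first 100 IDs yield only 2 hits, so the loop widens to 1,000 IDs and stops there with 20 documents, never touching the remaining 99,000 IDs.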
Thanks for your reply @Ere Maijala,
One of my eCommerce clients has a requirement to search some records
based on IDs, like
IP:8083/select?q=ID:(1,4,7,...up to 1 million), displaying only 10 to 20
records.
If I use the above procedure, it takes too much time.
@Erick Erickson thanks for the reply,
Actually, they gave me only this task: search 1 million IDs with good
performance; results should appear within 50-100 ms.
Yeah, I will fire off the full query (up to millions of IDs) in the
background, but what is the efficient way of doing that in terms of
response time?
Well, you're serving the first set of results very quickly because you're
only looking for, say, the first 1,000. Thereafter you assemble the rest
of the result set in the background (and I'd use the export function) to
have your app have the next N ready for immediate response to the
user.
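Erick's serve-now/assemble-later pattern could be sketched like this (illustrative only; `search` is a hypothetical callable wrapping the real Solr request, and in practice the background pass would go through the export handler):

```python
from concurrent.futures import ThreadPoolExecutor

def paged_results(search, ids, page_size=10, first_chunk=1000):
    """Serve page 1 from a small synchronous query, while a background
    thread assembles the full result set so later pages are ready."""
    first = search(ids[:first_chunk])
    rest = ThreadPoolExecutor(max_workers=1).submit(search, ids[first_chunk:])
    yield first[:page_size]               # page 1, returned quickly
    full = first + rest.result()          # later pages block here at worst
    for start in range(page_size, len(full), page_size):
        yield full[start:start + page_size]

# Toy stand-in: the "search" just echoes the matching IDs back.
pages = list(paged_results(lambda s: list(s), list(range(30)),
                           page_size=10, first_chunk=15))
```

The first page only waits for the small synchronous query; the `Future` hides the cost of the big background fetch behind the time the user spends reading page 1.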
You might get better answers if you'd describe your use-case. If, for
instance, you know all the IDs and you just need to be able to display a
hundred records among those millions quickly, it would make sense to
search for only a chunk of 100 IDs at a time. If you need to support
more search functionality than that, the answer may be different.
Hi,
I have a requirement to search the ID field for values like id:(2,3,6,7
... up to millions of IDs),
and the results should be displayed within 50 ms.
Please suggest which query parser I should use for the above search.
This is not a use-case to which Lucene lends itself. However, if you
must, I would try the terms query parser, which I believe is used like
this:
{!terms f=id}2,3,6,7
Upayavira
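With millions of IDs the query string is far too long for a GET URL, so a terms query of that size would have to be POSTed. A rough sketch of building such a request body (the host, core name, and the `requests.post` usage in the comment are assumptions, not from the thread):

```python
from urllib.parse import urlencode

def build_terms_request(ids, field="id", rows=10, start=0):
    """Build a form-encoded body for POSTing a {!terms} query to
    /solr/<core>/select, avoiding URL-length limits."""
    q = "{!terms f=%s}%s" % (field, ",".join(map(str, ids)))
    return urlencode({"q": q, "rows": rows, "start": start})

body = build_terms_request([2, 3, 6, 7])
# Then e.g.: requests.post("http://IP:8083/solr/core/select", data=body,
#     headers={"Content-Type": "application/x-www-form-urlencoded"})
```

Form-encoding the whole query as the POST body keeps the request valid no matter how many IDs are listed.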
On Mon, Jan 4, 2016, at 10:41 AM, Mugeesh Husain wrote:
> hi,
>
> I have a requirement to search ID field values like
Yes, because only a small portion of that 250ms is spent in the query
parser. Most of it, I would suggest, is spent retrieving and merging
postings lists.
In an inverted index (which Lucene is), you store the list of documents
matching a term against that term - that is your postings list.
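A toy illustration of the point (pure Python, not Lucene's actual implementation): each query term contributes one postings list, and an OR over N terms must union N lists, so the merge, not the parsing, dominates when N is in the millions.

```python
def postings_union(index, terms):
    """OR-query over a toy inverted index: union the postings list of
    every query term and return the matching doc IDs in order."""
    result = set()
    for term in terms:
        result.update(index.get(term, []))
    return sorted(result)

index = {"2": [10], "3": [11], "6": [10, 12]}
matches = postings_union(index, ["2", "3", "6"])  # → [10, 11, 12]
```

Parsing `["2", "3", "6"]` is trivial; the work is in touching every postings list, which is why adding more query terms grows the cost even when the final page of results stays small.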
>>This is not a use-case to which Lucene lends itself. However, if you
>>must, I would try the terms query parser, which I believe is used like
>>this: {!terms f=id}2,3,6,7
I did try the terms query parser like above, but the problem is
performance: I am getting results in 250ms, but I am looking for 50 ms.
Best of luck with that ;). 250ms isn't bad at all for "searching
millions of IDs".
Frankly, I'm not at all sure where I'd even start. With millions of search
terms, I'd have to profile the application to see where it was spending the
time before even starting.
Best,
Erick
On Mon, Jan 4, 2016 at