A Rowe sar...@syr.edu wrote:
Why Jason, I declare, whatever do you mean?
-----Original Message-----
From: Jason Rutherglen [mailto:jason.rutherg...@gmail.com]
Sent: Wednesday, January 18, 2012 8:29 PM
To: solr-user@lucene.apache.org
Subject: Re: How to accelerate your Solr-Lucene application
*Laugh*
I stand by what Mark said:
Right - in most NRT cases (very frequent soft commits), the cache should
probably be disabled.
On Mon, Jan 2, 2012 at 7:45 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Mon, Jan 2, 2012 at 9:58 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote
multi-select faceting
Yikes. I'd love to see a test showing that un-inverted field cache
(which is for ALL segments as a single unit) can be used efficiently
with NRT / soft commit.
On Tue, Jan 3, 2012 at 1:50 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Tue, Jan 3, 2012 at 4:36 PM,
The main point is that Solr, unlike for example ElasticSearch and other
Lucene-based systems, does NOT cache filters or facets per-segment.
This is a fundamental design flaw.
On Tue, Jan 3, 2012 at 1:50 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Tue, Jan 3, 2012 at 4:36 PM, Erik Hatcher
Seeley yo...@lucidimagination.com wrote:
On Tue, Jan 3, 2012 at 5:03 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Yikes. I'd love to see a test showing that un-inverted field cache
(which is for ALL segments as a single unit) can be used efficiently
with NRT / soft commit.
Please
It still normally makes sense to have the caches enabled (esp filter and
document caches).
In the NRT case that statement is completely incorrect
On Mon, Jan 2, 2012 at 5:37 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Mon, Jan 2, 2012 at 1:28 PM, Mark Miller markrmil...@gmail.com
Wow the shameless plugging of product (footer) has hit a new low Otis.
On Fri, Dec 16, 2011 at 7:32 AM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Hi Yury,
Not sure if this was already covered in this thread, but with N smaller cores
on a single N-CPU-core box you could run N queries
that
will help you solve your problem. That is responsive to the OP and it is
clear that it is a commercial deal.
On Fri, Dec 16, 2011 at 10:02 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Wow the shameless plugging of product (footer) has hit a new low Otis.
On Fri, Dec 16, 2011 at 7
Ted,
The list would be unreadable if everyone spammed the bottom of their
emails like Otis does. It's just bad form.
Jason
On Fri, Dec 16, 2011 at 12:00 PM, Ted Dunning ted.dunn...@gmail.com wrote:
Sounds like we disagree.
On Fri, Dec 16, 2011 at 11:56 AM, Jason Rutherglen
jason.rutherg
export CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=utf-8"
export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC"
==
Thanks
Jason
--
View this message in context:
http://lucene.472066.n3.nabble.com/server-down-caused-by-complex-query
I'm thinking about modifying my index process to use json because all my
docs are originally in json anyway. Are there any performance issues if I
insert json docs instead of xml docs? A colleague recommended that I stay
with xml because solr is highly optimized for xml.
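For comparison, here is a minimal sketch of the two payload shapes being discussed; the document and its field names are hypothetical, not taken from the poster's schema, and the handler paths shown in the comments are the standard Solr update endpoints:

```python
import json

# Hypothetical document; field names are illustrative only.
doc = {"id": "42", "title_text": "hello world"}

# JSON update payload, as accepted by Solr's JSON update handler
# (POST to /solr/update/json with Content-Type: application/json).
json_payload = json.dumps([doc])

# Equivalent payload for the classic XML update handler (/solr/update).
xml_fields = "".join(
    '<field name="%s">%s</field>' % (k, v) for k, v in doc.items()
)
xml_payload = "<add><doc>%s</doc></add>" % xml_fields

print(json_payload)
print(xml_payload)
```

Both forms carry the same fields; the choice mostly affects client-side serialization cost rather than Solr's indexing speed.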
Hi, all
I'm using surround query parser.
The request A B returns ParseException.
But A OR B returns correct results.
I think this is a problem with the default query operator.
Does anyone know how to set it?
Thanks,
Jason
Oh. That's bad news for me.
Thanks anyway.
I've been reading the solr source code and made modifications by
implementing a custom Similarity class.
I want to apply a weight to the score by multiplying in a number
based on whether the current doc contains a certain term.
So if the query was q=data_text:foo
then the Similarity class would
that contains all user
documents that are active. This docset is used as filter during the
execution of the main query (q param),
so it only returns posts that contain the text hello for active users.
Martijn
On 28 October 2011 01:57, Jason Toy jason...@gmail.com wrote:
Does anyone have
Hello,
I'm using solr 1.4 version.
I want to use some plugin in trunk version.
But I got an IndexFormatTooOldException when trunk ran the old-version index.
Is there a way to use a 1.4 index with 4.0 trunk?
Thanks,
Jason
Hi all
Is it possible to use SurroundQParserPlugin in Solr 1.4.0?
If so, how should I do it?
Thanks in advance
Jason
.)
After that, the server effectively goes down.
We also have the old version's k2 engine.
But k2 does not go down for the same query.
k2 uses more I/O than memory.
Could we control solr memory usage?
Or is there any other solution?
(we are using solr1.4)
Thanks in advance.
Jason
I've written a script that does bulk insertion from my database, it
grabs chunks of 500 docs (out of 100 million) and inserts them into
solr over http. I have 5 threads that are inserting from a queue.
After each insert I issue a commit.
Every 20 or so inserts I get this error message:
Error:
Jason
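A common fix for the commit-per-batch pattern described above is to batch the documents and let Solr coalesce commits via the commitWithin request parameter, issuing one explicit commit only at the end. The sketch below is illustrative (the docs are stand-ins, and the URL is only shown in a comment):

```python
def chunks(docs, size=500):
    """Yield successive batches of at most `size` docs."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

# Stand-in for rows pulled from the database.
docs = [{"id": str(n)} for n in range(1200)]

batches = list(chunks(docs))

# Rather than an explicit commit after every insert, each POST to
# /solr/update can carry commitWithin (milliseconds), letting Solr
# coalesce commits; a single explicit commit after the final batch
# then makes everything visible.
update_url = "/solr/update?commitWithin=60000"

print(len(batches), len(batches[-1]))
```

With five concurrent inserter threads, dropping the per-insert commit also removes a major source of contention on the index writer.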
It should be supported in SolrJ, I'm surprised it's been lopped out.
Bulk indexing is extremely common.
On Fri, Nov 4, 2011 at 1:16 PM, Ken Krugler kkrugler_li...@transpac.com wrote:
Hi list,
I'm working on improving the performance of the Solr scheme for Cascading.
This supports generating
Thanks Robert.
We optimize less frequently than we used to. Down to twice a month from once
a day.
Without optimizing the search speed stays the same, however the index size
increases to 70+ GB.
Perhaps there is a different way to restrict disk usage.
Thanks,
Jason
Robert Stewart bstewart
Thanks Erick,
Will take a look at this article.
Cheers,
Jason
-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, November 01, 2011 8:05 AM
To: solr-user@lucene.apache.org
Subject: Re: Replicating Large Indexes
Yes, that's expected behavior. When
:8080/solr/update?optimize=true&maxSegments=1&waitFlush=true&expungeDeletes=true'
Willing to share our experiences with Solr.
Thanks,
Jason
We should maybe try to fix this in 3.x too?
+1 I suggested it should be backported a while back. Or that Lucene
4.x should be released. I'm not sure what is holding up Lucene 4.x at
this point; bulk postings is only needed for PFOR.
On Fri, Oct 28, 2011 at 3:27 PM, Simon Willnauer
it will be 6 months before 4.0 ships, that's too long.
Also it is an amusing contradiction that your argument flies in the
face of Lucid shipping 4.x today without said functionality.
On Fri, Oct 28, 2011 at 5:09 PM, Robert Muir rcm...@gmail.com wrote:
On Fri, Oct 28, 2011 at 5:03 PM, Jason Rutherglen
abstract away the encoding of the index
Robert, this is what you wrote. "Abstract away the encoding of the
index" means pluggable; otherwise it's not abstract and/or it's a
flawed design. Sounds like it's the latter.
I have a similar problem except I need to filter scores that are too high.
Robert Stewart bstewart...@gmail.com wrote on Oct 27, 2011, 7:04 AM:
BTW, this would be good standard feature for SOLR, as I've run into this
requirement more than once.
On Oct 27, 2011, at 9:49 AM,
Does anyone have any idea on this issue?
On Tue, Oct 25, 2011 at 11:40 AM, Jason Toy jason...@gmail.com wrote:
Hi Yonik,
Without a Join I would normally query user docs with:
q=data_text:test&fq=is_active_boolean:true
With joining users with posts, I get no results:
q={!join from
, the exact
matches are not filtered to the top.
This should be a simple use case; can anyone suggest what goes wrong?
Thanks,
Jason
, but with the ability to join
with the Posts docs.
On Tue, Oct 25, 2011 at 11:30 AM, Yonik Seeley
yo...@lucidimagination.comwrote:
Can you give an example of the request (URL) you are sending to Solr?
-Yonik
http://www.lucidimagination.com
On Mon, Oct 24, 2011 at 3:31 PM, Jason Toy jason...@gmail.com
I have 2 types of docs, users and posts.
I want to view all the docs that belong to certain users by joining posts
and users together. I have to filter the users with a filter query of
is_active_boolean:true so that the score is not affected, but since I do a
join, I have to move the filter query
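One pattern sometimes suggested for this situation is to keep the restriction inside an fq, with the join applied within the filter query itself so scoring is untouched. The field names below (data_text, self_id_i, user_id_i, is_active_boolean) follow the thread, but the join direction is an assumption to check against the actual schema:

```python
from urllib.parse import urlencode

params = {
    "q": "data_text:hello",
    # Keeping the restriction in an fq means it filters without
    # contributing to the score; the join maps the active users
    # onto their posts.
    "fq": "{!join from=self_id_i to=user_id_i}is_active_boolean:true",
}
query_string = urlencode(params)
print(query_string)
```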
Sweet + Very cool!
On Fri, Oct 21, 2011 at 7:50 AM, Simon Willnauer
simon.willna...@googlemail.com wrote:
In trunk we have a feature called IndexDocValues which basically
creates the uninverted structure at index time. You can then simply
suck that into memory or even access it on disk
Which gives better performance: setting inOrder=false in solrconfig.xml,
or querying with "A B"~1 AND "B A"~1? Are there performance differences?
Thank you for your kind reply.
Is your second suggestion possible only with defType=lucene?
I'm using ComplexPhraseQueryParser.
So my defType is complexphrase.
Thanks a ton iorixxx.
Jason.
"analyze term"~2
"term analyze"~2
In my case, the two queries return different result sets.
Isn't that in your case?
Hi, all
I have a near query like "analyze term"~2.
That is matched in that order.
But I want to search regardless of order.
So far, I just queried "analyze term"~2 OR "term analyze"~2.
Is there a better way than what I did?
Thanks in advance.
Jason.
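The OR-of-both-orders workaround from the thread can be generated mechanically; note that for n terms it produces n! clauses, so it is only practical for two or three terms. A small sketch of that construction (the helper name is invented here):

```python
from itertools import permutations

def unordered_proximity(terms, slop=2):
    """Build an OR of sloppy phrase queries covering every term order.

    Mirrors the workaround described in the thread: one sloppy phrase
    clause per permutation of the terms, joined with OR.
    """
    clauses = ['"%s"~%d' % (" ".join(p), slop) for p in permutations(terms)]
    return " OR ".join(clauses)

print(unordered_proximity(["analyze", "term"], slop=2))
```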
search or WordDelimiterFilterFactory options just set
catenateAll=1, no problems.
But if the WordDelimiterFilterFactory options are set as in 'My Schema.xml'
below, an error occurred.
How can I solve this problem?
Any ideas are welcome.
Thanks in advance.
Jason
[Error Message
Hi, Ludovic
That's just what I'm looking for.
You've been a big help.
Thank you so much.
Jason.
I have several different document types that I store. I use a serialized
integer that is unique to the document type. If I use id as the uniqueKey,
then there is a possibility to have colliding docs on the id, what would be
the best way to have a unique id given I am storing my unique identifier
I'm testing out the join functionality on the svn revision 1175424.
I've found that when I add a single filter query to a join it works fine, but
when I do more than one filter query, the query does not return results.
This single function query with a join returns results:
Can dismax understand that query in a translated form?
On Sep 29, 2011, at 10:01 PM, yingshou guo guoyings...@gmail.com wrote:
You can't use this kind of query syntax against the dismax query parser.
Your query can be understood by the standard query parser or the edismax
query parser. The qt request parameter is
Hi all, I am testing various versions of solr from trunk, I am finding that
often times the example doesn't build and I can't test out the version. Is
there a resource that shows which versions build correctly so that we can
test it out?
?
Jason
time or other times,
hence why I was thinking that maybe solr is doing something different. My
script notifies me of the memory exception and then restarts the jvm.
Running the script manually works fine. I'll try to do some more testing to
see what exactly is going on.
Jason
On Wed, Sep 21
I had a join query that was originally written as :
{!join from=self_id_i to=user_id_i}data_text:hello
and that works fine. I later added an fq filter:
{!frange l=0.05 }div(termfreq(data_text,'hello'),max_i)
and the query doesn't work anymore. If I do the fq by itself without the
join the query
I have solr issues where I keep running out of memory. I am working on
solving the memory issues (this will take a long time), but in the meantime,
I'm trying to be notified when the error occurs. I saw with the jvm I can
pass the -XX:OnOutOfMemoryError= flag and pass a script to run. Every time
Anyone know the query I would do to get the join to work? I'm unable to get
it to work.
On Wed, Sep 14, 2011 at 10:49 AM, Jason Toy jason...@gmail.com wrote:
I've been reading the information on the new join feature and am not quite
sure how I would use it given my schema structure. I have
is the
description</field><field name="title_text">this is a cool
title</field></doc></add>
<?xml version="1.0" encoding="UTF-8"?><commit/>
Is it possible to do this with the join functionality? If not, how would I
do this?
I'd appreciate any pointers or help on this.
Jason
I had queries breaking on me when there were spaces in the text I was
searching for. Originally I had :
fq=state_s:New York
and that would break, I found a work around by using:
fq={!raw f=state_s}New York
My problem now is doing this with an OR query, this is what I have now, but
it doesn't
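Besides the {!raw} workaround above, quoting the value as a phrase usually handles embedded spaces for a string field. A small helper sketch (the function name is invented for illustration):

```python
from urllib.parse import quote

def phrase_fq(field, value):
    """Quote a filter-query value so embedded spaces survive parsing."""
    # Escape embedded double quotes, then wrap the value in quotes so the
    # query parser treats it as one phrase rather than two clauses.
    escaped = value.replace('"', '\\"')
    return '%s:"%s"' % (field, escaped)

fq = phrase_fq("state_s", "New York")
print(fq)         # state_s:"New York"
print(quote(fq))  # URL-encoded form for the request
```

The quoted form also composes naturally inside OR queries, which the {!raw} local-params form does not.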
I wrote the title wrong; it's a filter query, not a function query. Thanks
for the correction.
The field is a string, I had tried fq=stats_s:New York before and that
did not work; I'm puzzled as to why this didn't work.
I tried out your b suggestion and that worked, thanks!
On Tue, Sep 13, 2011 at
I'd love to see the progress on this.
On Tue, Sep 13, 2011 at 10:34 AM, Roman Chyla roman.ch...@gmail.com wrote:
Hi,
The standard lucene/solr parsing is nice but not really flexible. I
saw questions and discussion about ANTLR, but unfortunately never a
working grammar, so... maybe you find
I'm trying to limit my data to only docs that have the word 'foo' appear at
least once.
I am trying to use:
fq=termfreq(data,'foo'):[1+TO+*]
but I get the syntax error:
Caused by: org.apache.lucene.queryparser.classic.ParseException: Encountered
: : at line 1, column 33.
Was expecting one of:
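The parse error above is expected: the classic query parser does not accept range syntax applied to a function. The usual route is the frange query parser, which evaluates a function per document and keeps those whose value lies within l..u. A sketch of the corrected request parameters, using the thread's termfreq example (the exact field name is an assumption):

```python
from urllib.parse import urlencode

params = {
    "q": "*:*",
    # frange keeps docs where the function value is >= l (here, at least
    # one occurrence of 'foo'), replacing the unsupported [1 TO *] range.
    "fq": "{!frange l=1}termfreq(data_text,'foo')",
}
query_string = urlencode(params)
print(query_string)
```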
After running a combination of different queries, my solr server eventually
is unable to complete certain requests because it runs out of memory, which
means I need to restart the server as it's basically useless with some
queries working and not others. I am moving to distributed setting soon,
I have a large ec2 instance(7.5 gb ram), it dies every few hours with out of
heap memory issues. I started upping the min memory required, currently I
use -Xms3072M .
I insert about 50k docs an hour and I currently have about 65 million docs
with about 10 fields each. Is this already too much
.
-Original Message-
From: Jason Toy [mailto:jason...@gmail.com]
Sent: Wednesday, August 17, 2011 5:15 PM
To: solr-user@lucene.apache.org
Subject: solr keeps dying every few hours.
I have a large ec2 instance(7.5 gb ram), it dies every few hours with out
of heap memory issues. I
I've only set minimum memory and have not set maximum memory. I'm doing
more investigation and I see that I have 100+ dynamic fields for my
documents, not the 10 fields I quoted earlier. I also sort against those
dynamic fields often, I'm reading that this potentially uses a lot of
memory.
What can I do temporarily in this situation? It seems like I must eventually
move to a distributed setup. I am sorting on dynamic float fields.
On Wed, Aug 17, 2011 at 3:01 PM, Yonik Seeley yo...@lucidimagination.comwrote:
On Wed, Aug 17, 2011 at 5:56 PM, Jason Toy jason...@gmail.com wrote
I am trying to list some data based on a function I run,
specifically termfreq(post_text,'indie music'), and I am unable to do it
without passing in data to the q parameter. Is it possible to get a sorted
list without searching for any terms?
2011/8/8 Jason Toy jason...@gmail.com
I am trying to list some data based on a function I run,
specifically termfreq(post_text,'indie music'), and I am unable to do it
without passing in data to the q parameter. Is it possible to get a
sorted
list without searching for any terms
your index size
and
number of unique terms.
On Mon, Aug 8, 2011 at 1:08 PM, Alexei Martchenko
ale...@superdownloads.com.br wrote:
You can use the standard query parser and pass q=*:*
2011/8/8 Jason Toy jason...@gmail.com
I am trying to list some data based
choices affect what is possible at query time. Lucene In Action
is a pretty good book.
On 8/8/2011 5:02 PM, Jason Toy wrote:
Aren't Dismax queries able to search for phrases using the default
index (which is what I am using)? If I can already do phrase searches,
I
don't understand
How can I run a query to get the result count only? I only need the count
and so I don't need solr to send me all the results back.
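Setting rows=0 does exactly this: Solr returns no documents, but the response still carries numFound, the total hit count for the query. A minimal sketch of the request parameters (the field name is illustrative):

```python
from urllib.parse import urlencode

# rows=0 asks Solr for zero documents back; the response still includes
# numFound, the total hit count, so only the count is transferred.
params = {"q": "data_text:foo", "rows": 0}
count_only = urlencode(params)
print(count_only)  # q=data_text%3Afoo&rows=0
```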
As I'm using solr more and more, I'm finding that I need to do searches and
then order by new criteria. So I am constantly adding new fields into solr
and then reindexing everything.
I want to know if adding in all this data into solr is the normal way to
deal with sorting. I'm finding that I
How does one search for words with characters like # and +. I have tried
searching solr with #test and \#test but all my results always come up
with test and not #test. Is this some kind of configuration option I
need to set in solr?
--
- sent from my mobile
6176064373
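Two separate things can go wrong with terms like #test and google+: the classic Lucene query parser treats some characters (like +) as operators, and the field's tokenizer may strip punctuation at index time, in which case escaping alone cannot help and the field type itself has to preserve the symbol. A sketch of the standard escaping step (the helper name is invented here):

```python
import re

# Characters the classic Lucene query parser treats specially; escaping
# them keeps a literal like "google+" from being parsed as an operator.
# Note: '#' is not an operator, so the "#test" case is more likely an
# analysis problem (e.g. StandardTokenizer dropping the '#') than a
# parsing one.
_SPECIAL = re.compile(r'([+\-!(){}\[\]^"~*?:\\/]|&&|\|\|)')

def escape_query(term):
    """Backslash-escape Lucene query-parser metacharacters."""
    return _SPECIAL.sub(r"\\\1", term)

print(escape_query("google+"))  # google\+
print(escape_query("#test"))    # unchanged: '#' is not an operator
```

If the symbol must survive analysis, a field using something like a whitespace tokenizer (rather than the standard one) is the usual schema-side fix.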
In Solr 1.3.1 I am able to store timestamps in my docs so that I query them.
In trunk when I try to store a doc with a timestamp I get a server error; is
there a different way I should store this data or is this a bug?
Jul 22, 2011 7:20:14 PM org.apache.solr.update.processor.LogUpdateProcessor
I haven't modified my schema in the older solr or trunk solr; is it required
to modify my schema to support timestamps?
On Fri, Jul 22, 2011 at 4:45 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: In Solr 1.3.1 I am able to store timestamps in my docs so that I query
them.
:
: In trunk
class="solr.DateField" sortMissingLast="true"
omitNorms="true"/>
On Fri, Jul 22, 2011 at 5:00 PM, Jason Toy jason...@gmail.com wrote:
I haven't modified my schema in the older solr or trunk solr; is it required
to modify my schema to support timestamps?
On Fri, Jul 22, 2011 at 4:45 PM, Chris Hostetter
Hi Chris, you were correct, the field was getting set as a double. Thanks
for the help.
On Fri, Jul 22, 2011 at 7:03 PM, Jason Toy jason...@gmail.com wrote:
This is the document I am posting:
<?xml version="1.0" encoding="UTF-8"?><add><doc><field name="id">Post
75004824785129473</field><field name="type">Post
According to that bug list, there are other characters that break the
sorting function. Is there a list of safe characters I can use as a
delimiter?
On Mon, Jul 18, 2011 at 1:31 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: When I try to sort by a column with a colon in it like
:
Hi all, I found a bug that exists in the 3.1 and in trunk, but not in 1.4.1
When I try to sort by a column with a colon in it like
scores:rails_f, solr has cut off the column name from the colon
forward, so scores:rails_f becomes scores
To test, I inserted this doc:
In 1.4.1 I was able to
actually prohibited, but that could
be your problem.
Nick
On 7/18/2011 8:10 AM, Jason Toy wrote:
Hi all, I found a bug that exists in the 3.1 and in trunk, but not in
1.4.1
When I try to sort by a column with a colon in it like
scores:rails_f, solr has cut off the column name from
How does one search for the term google+ with solr? I noticed on twitter I
can search for google+: http://search.twitter.com/search?q=google%2B (which
uses lucene, not sure about solr) but searching on my copy of solr, I can't
search for google+
--
- sent from my mobile
6176064373
Hi All
I have complex phrase queries including wildcard.
(ex. q="conn* pho*"~2 OR "inter* pho*"~2 OR ...)
That takes long query result time.
I tried reindexing after changing termIndexInterval to 8 to reduce the query
time by loading more of the term index info.
I thought if I do so query result
Hi All
I have 5 shards. (sh01 ~ sh05)
I was debugging using solrJ.
When I queried each shard individually, the results were right.
But when I queried all shards, elementData of SolrDocumentList is null.
But numFound of SolrDocumentList is right.
How can I get the SolrDocumentList in shards?
Thanks in Advance
I am trying to use sorting by the termfreq function using the trunk code
since termfreq was added in the 4.0 code base.
I run this query:
http://127.0.0.1:8983/solr/select/?q=librarian&sort=termfreq(all_lists_text,librarian)%20desc
but I get:
HTTP ERROR 500
Problem accessing /solr/select/.
Hi, Mark
I think the FileNotFoundException can be worked around by raising the ulimit.
I just want to know why more segments are created than mergeFactor allows.
While googling, I found content concerning mergeFactor:
http://web.archiveorange.com/archive/v/bH0vUQzfYcdtZoocG2C9
Yonik wrote:
mergeFactor
I'm trying to run the example app from the svn source, but it doesn't seem
to work. I am able to run :
java -jar start.jar
and Jetty starts with:
INFO::Started SocketConnector@0.0.0.0:8983
But then when I go to my browser and go to this address:
http://localhost:8983/solr/
I get a 404 error.
Hi, All
I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
But solr raises java.io.FileNotFoundException (Too many open files).
mergeFactor is just 5. How can this happen?
Below are the segments of one shard. There are far more segments than mergeFactor.
What's wrong and how should I set the
Hi, All
I want to get the search result which is not sorted by anything.
Sorting by score takes more time.
So, I want to disable sorting by score.
How can I do this?
Thanks, Jason.
I tried to check performance using _docid_ asc.
But _docid_ didn't work in distributed search.
So I asked in order to find out what other method there is.
Best
Jason
I am trying to use sorting by function on solr 3.2 and it doesn't work
with termfreq. I do this query:
/solr/select?q=test&qf=all_lists_text&defType=dismax&sort=termfreq%28all_lists_text%2Ctest%29+desc&rows=50
I get this error:
Can't determine Sort Order: 'termfreq(description_text,'test')
Ahmet, that doesn't return the idf data in my results, unless I am
doing something wrong. When you run any function, do you get the results
of the function back?
Can you show me an example query you run ?
//http://wiki.apache.org/solr/FunctionQuery#idf
On Thu, Jun 9, 2011 at 9:23 AM, Jason Toy
Markus,
Thanks for this info, I'll use debugQuery to test for now. It seems strange
that I can't have arbitrary function results returned with my data. Is
this an obstacle on the lucene or solr side?
Jason
On Fri, Jun 10, 2011 at 5:59 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Ah
I want to be able to run a query like idf(text, 'term') and have that data
returned with my search results. I've searched the docs,but I'm unable to
find how to do it. Is this possible and how can I do that ?
. For that reason I believe the bug is in solr and not in
lucene.
Jason Toy
socmetrics
http://socmetrics.com
@jtoy
Thanks Shashi, this is oddly coincidental with another issue being put
into Solr (SOLR-2193) to help solve some of the NRT issues, the timing
is impeccable.
At a base however Solr uses Lucene, as does ES. I think the main
advantage of ES is the auto-sharding etc. I think it uses a gossip
-0700, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Mark,
Nice email address. I personally have no idea, maybe ask Shay Banon
to post an answer? I think it's possible to make Solr more elastic,
eg, it's currently difficult to make it move cores between servers
without a lot of manual
And some way to delete the core when it has been transferred.
Right, I manually added that to CoreAdminHandler. I opened an issue
to try to solve this problem: SOLR-2569
On Wed, Jun 1, 2011 at 8:26 AM, Upayavira u...@odoko.co.uk wrote:
On Wed, 01 Jun 2011 07:52 -0700, Jason Rutherglen
Jonathan,
This is all true, however it ends up being hacky (this is from
experience) and the core on the source needs to be deleted. Feel free
to post to the issue.
Jason
On Wed, Jun 1, 2011 at 8:44 AM, Jonathan Rochkind rochk...@jhu.edu wrote:
On 6/1/2011 10:52 AM, Jason Rutherglen wrote
Mark,
Nice email address. I personally have no idea, maybe ask Shay Banon
to post an answer? I think it's possible to make Solr more elastic,
eg, it's currently difficult to make it move cores between servers
without a lot of manual labor.
Jason
On Tue, May 31, 2011 at 7:33 PM, Mark
million full text index.
That is running 10 shards on 1 tomcat.
Thanks,
Jason
/6/2011 7:34 PM
To: solr-user@lucene.apache.org
Subject: Re: *:* query with dismax
it does seem a little weird, but q.alt will get what you want:
http://wiki.apache.org/solr/DisMaxQParserPlugin#q.alt
hth,
rc
On Fri, May 6, 2011 at 7:41 PM, Jason Chaffee jchaf...@ebates.com wrote:
Can you
have any clues?
Thanks,
Jason
to use *:* when the query
is
empty, so that you can still get back a full result set if you need it,
say
for faceting.
HTH
Mark
On May 7, 2011 9:22 AM, Jason Chaffee jchaf...@ebates.com wrote:
I am using dismax and trying to use q=*:* to return all indexed
documents. However, it is always returning 0
Good question, you could be correct about that. It's possible that
part hasn't been built yet? If not then you could create a patch?
On Thu, Apr 28, 2011 at 10:13 PM, Andy angelf...@yahoo.com wrote:
--- On Fri, 4/29/11, Jason Rutherglen jason.rutherg...@gmail.com wrote:
It's answered
It's answered on the wiki site:
TSTLookup - ternary tree based representation, capable of immediate
data structure updates
Although the EdgeNGram technique is probably more widely adopted, eg,
it's closer to what Google has implemented.
Renaud,
Can you provide a brief synopsis of how your system works?
Jason
On Wed, Apr 27, 2011 at 11:17 AM, Renaud Delbru renaud.del...@deri.org wrote:
Hi,
you might want to look at the SIREn plugin [1,2], which allows you to index
and query 1:N relationships such as yours, in a tabular data
You can index and optimize at the same time. The current limitation
or pause is when the ram buffer is flushing to disk, however that's
changing with the DocumentsWriterPerThread implementation, eg,
LUCENE-2324.
On Tue, Apr 12, 2011 at 8:34 AM, Shawn Heisey s...@elyograg.org wrote:
On 4/12/2011