I don't understand why this sometimes takes two minutes between the
start commit/update and sometimes takes 20 minutes. One of our caches
has about 40,000 items, but I can't imagine it taking 20 minutes to
autowarm a searcher.
What do your cache configs look like?
How big is the
hitratio : 0.99
inserts : 44551
evictions : 0
size : 44417
cumulative_lookups : 8720372
cumulative_hits : 8676170
cumulative_hitratio : 0.99
cumulative_inserts : 44551
cumulative_evictions : 0
best,
cloude
On Wed, Mar 25, 2009 at 8:38 AM, Ryan McKinley ryan...@gmail.com
wrote:
I don't understand
I implemented OAI-PMH for solr a few years back for the Massachusetts
library system... it appears not to be running right now, but
check... http://www.digitalcommonwealth.org/
It would be great to get that code revived and live open source
somewhere. As is, it uses a pre 1.3 release
My question is: from a design and query-speed point of view, should I
add a new core to handle the additional data, or should I add the data
to the existing core?
Do you ever need to get results from both sets of data in the same
query? If so, putting them in the same index will be faster.
do you know if your Java file is encoded as UTF-8?
Sometimes it will be encoded as something different, and that can cause
funny problems...
On Mar 18, 2009, at 7:46 AM, Walid ABDELKABIR wrote:
when executing this code, the field in my index includes this
value : ?
Also consider droids:
http://incubator.apache.org/droids/
On Mar 5, 2009, at 6:32 PM, Tony Wang wrote:
Hi,
I wonder if there's any open source crawler product that could be
integrated with Solr. What crawler do you guys use? Or did you code one
yourself? I have been trying to find out
No, but you can always fire three requests. Writing your own handler
which prints data in a custom format means that you can no longer use
existing Solr clients for Java/Ruby/Python etc.
That's not a fair characterization of at least the Ruby client. The
NamedList (err, Hash in Ruby) is
The jetty vs tomcat vs resin vs whatever question pretty much comes
down to what you are comfortable running/managing.
Solr tries its best to stay container agnostic.
On Mar 5, 2009, at 1:55 PM, Jonathan Haddad wrote:
Is there any compelling reason to use tomcat instead of jetty if all
Are there any easily foreseeable problems with implementing an r-tree box
indexing/searching extension to Solr, in the spirit of localsolr? If
anyone
has any pointers I'm all ears.
I have implemented an R-Tree based integration for solr. It is pretty
ugly and memory intensive, but
On Feb 28, 2009, at 5:56 PM, Stephen Weiss wrote:
Yeah honestly I don't know how it ever worked either.
my guess is that the XPP parser did not validate anything -- when we
switched to StAX it validates something...
ryan
i hit that one too!
try: ant clean
On Feb 24, 2009, at 12:08 PM, Brian Whitman wrote:
Seeing this in the logs of an otherwise working solr instance.
Commits are
done automatically I believe every 10m or 1 docs. This is solr
trunk
(last updated last night) Any ideas?
INFO: []
But I have some problems setting this up. As long as I try the
multicore
sample everything works but when I copy my schema.xml into the
multicore/core0/conf dir I only get 404 error messages when I enter
the
admin url.
what is the url you are hitting?
Do you see links from the index
Is Solr 1.4 (and its nice SLF4J logging) in a state ready for
intensive
production usage?
While it is not officially recommended, trunk is quite stable.
Of course back up and make sure to test well before deploying anything
real.
ryan
yes. This works fine.
But make sure only one SolrServer is writing to the index at a time.
Also note that if you use the EmbeddedSolrServer to index and another
one to read, you will need to call commit/ on the 'read only' server
to refresh the index view (the word commit is a bit
Keep in mind that the way lucene/solr work is that the results are
constant from when you open the searcher. If new documents are added
(without re-opening the searcher) they will not be seen.
commit/ tells solr to re-open the index and see the changes.
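A minimal sketch of what that commit/ call looks like on the wire (the host, port, and helper function are made up for illustration):

```python
from urllib.parse import urlencode

def build_commit_request(base_url, wait_searcher=True):
    """Return (url, body) for the XML commit message that tells Solr to
    re-open its searcher so newly added documents become visible."""
    params = urlencode({"waitSearcher": str(wait_searcher).lower()})
    return f"{base_url}/update?{params}", "<commit/>"

url, body = build_commit_request("http://localhost:8983/solr")
print(url)   # http://localhost:8983/solr/update?waitSearcher=true
print(body)  # <commit/>
```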
1. Does this mean that
On Feb 9, 2009, at 10:40 AM, Michael Lackhoff wrote:
On 09.02.2009 15:40 Ryan McKinley wrote:
But I have some problems setting this up. As long as I try the
multicore
sample everything works but when I copy my schema.xml into the
multicore/core0/conf dir I only get 404 error messages when I
It may not be as fine-grained as you want, but also check the
QueryElevationComponent. This takes a preconfigured list of what the
top results should be for a given query and makes those documents the
top results.
Presumably, you could use click logs to determine what the top result
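For reference, the configuration file the QueryElevationComponent reads looks roughly like this (the query text and document ids here are invented):

```xml
<elevate>
  <query text="best shoes">
    <doc id="A"/>
    <doc id="B"/>
  </query>
</elevate>
```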
I am building a system that indexes a bunch of data and then will let
users manually put the data in lists. I have seen http://wiki.apache.org/solr/UserTagDesign
The behavior I would like is identical to 'tagging' each document with
the list-id/user/order and then using standard faceting to
<doc id="A" boost="5"/> <doc id="B" boost="4"/>
</query>
And I could write a script that looks at click data once a day to
fill out this file.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com - 702-943-7833
On Jan 30, 2009, at 6:37 AM, Ryan McKinley wrote:
It may not be as fine
check:
http://wiki.apache.org/solr/SolrLogging
You configure whatever flavor of logger to write errors to a separate log
On Jan 30, 2009, at 4:36 PM, James Brady wrote:
Hi all,What's the best way for me to split Solr/Lucene error message
off to
a separate log?
Thanks
James
if you use this constructor:
public CommonsHttpSolrServer(URL baseURL, HttpClient client)
then solrj never touches the HttpClient configuration.
I normally reuse a single CommonsHttpSolrServer as well.
On Jan 27, 2009, at 9:52 AM, Walter Underwood wrote:
Making requests in parallel,
I don't know of any standard export/import tool -- i think luke has
something, but it will be faster if you write your own.
Rather than id:[* TO *], just try *:* -- this should match all
documents without using a range query.
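As a sketch, here are the two queries as encoded request parameters (stdlib only, no server involved); *:* is a constant match-all query, while the range form has to walk every term in the id field:

```python
from urllib.parse import urlencode

slow = urlencode({"q": "id:[* TO *]"})  # open-ended range over every id term
fast = urlencode({"q": "*:*"})          # the match-all query

print(fast)  # q=%2A%3A%2A
```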
On Jan 25, 2009, at 3:16 PM, Ian Connor wrote:
Hi,
Given
On Jan 25, 2009, at 6:06 PM, James Brady wrote:
Hi, I have a number of indices that are supposed to maintain
windows of
indexed content - the last month's worth of data, for example.
At the moment, I'm cleaning out old documents with a simple cron job
making
requests like:
On Jan 9, 2009, at 8:12 PM, qp19 wrote:
Please bear with me. I am new to Solr. I have searched all the
existing posts
about this and could not find an answer. I wanted to know how do I
go about
creating a
SolrServer using EmbeddedSolrServer. I tried to initialize this
several ways
there are plans for a regular release (1.4) later this month. No
plans for a bug-fix release.
If there are critical bugs there would be a bug-fix release, but not
for minor ones.
On Jan 7, 2009, at 11:06 AM, Jerome L Quinn wrote:
Hi, all. Are there any plans for putting together a
be in the release..
ryan
On Jan 7, 2009, at 12:14 PM, William Pierce wrote:
That is fantastic! Will the Java replication support be included in
this release?
Thanks,
- Bill
--
From: Ryan McKinley ryan...@gmail.com
Sent: Wednesday, January 07, 2009
We want to write a single query where the query returns doc1_1,
doc2_2 and
so on...that is for documents that have the same id, we want the
query to
return the document with highest versionId or the latest timestamp.
Any thoughts how this can be done?
not exactly what you are asking
the url you type has some * in it, make sure they are removed:
*wt=php*hl
also, try adding echoParams=EXPLICIT and make sure the params you are
passing get parsed ok.
ryan
On Dec 26, 2008, at 8:00 PM, Tony Wang wrote:
Otis,
Thanks.
So I can do the search like this:
i'm not sure what the proper behavior should be...
At the very least it should have an error that says no documents --
alternatively it could just do nothing, but I'm not sure what the
return value should be in that case.
On Dec 21, 2008, at 11:54 AM, Gunnar Wagenknecht wrote:
Ryan
are you sure the Collection is not empty?
what version are you running?
what do the server logs say when you get this error on the client?
On Dec 18, 2008, at 6:42 AM, Gunnar Wagenknecht wrote:
Hi,
I'm using SolrJ to index a couple of documents. I do this in batches
of
50 docs to save some
lots of options out there
Rather than doing a slow query like a prefix query, I think it's best
to index the ngrams so the autocomplete is a fast query.
http://www.mail-archive.com/solr-user@lucene.apache.org/msg06776.html
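To illustrate the index-time idea (this mirrors what Solr's EdgeNGramFilterFactory produces during analysis; the helper itself is just illustrative Python):

```python
def edge_ngrams(term, min_len=1, max_len=None):
    """Front-anchored n-grams of a term, as indexed for fast autocomplete."""
    term = term.lower()
    limit = min(max_len or len(term), len(term))
    return [term[:n] for n in range(min_len, limit + 1)]

# A lookup for the prefix the user has typed so far is then an exact
# term match against these grams, not a prefix scan over the dictionary.
print(edge_ngrams("solr"))  # ['s', 'so', 'sol', 'solr']
```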
On Dec 18, 2008, at 11:56 AM, Kashyap, Raghu wrote:
Hi,
One of
at the root of the issue is that logging uses a static logger:
static Logger log = LoggerFactory.getLogger(SolrCore.class);
I don't know of any minimally invasive way to get around this...
On Dec 17, 2008, at 10:22 AM, Marc Sturlese wrote:
I am thinking in doing a hack to specify the
? This would give you core logging
granularity just by config, rather than scraping.
Yes?
Erik
On Dec 17, 2008, at 9:47 AM, Ryan McKinley wrote:
As is, the log classes are statically bound to the class, so they
are configured for the entire VM context.
Off hand i can't think
groovy snippet I wrote:
final MDC_KEY = 'OraSeqId'
MDC.put(MDC_KEY, seq.id as String) // must be removed; see finally
// in finally:
MDC.remove(MDC_KEY)
~ David Smiley
On 12/17/08 2:17 PM, Erik Hatcher e...@ehatchersolutions.com
wrote:
On Dec 17, 2008, at 12:24 PM, Ryan McKinley wrote:
I'm not sure
sint sorts in numeric order, int does not.
check the sortMissingLast params in the example config
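The difference is easy to see outside Solr: a plain integer field's terms sort lexicographically as strings, while sint encodes values so that term order matches numeric order. A quick illustration:

```python
values = ["2", "10", "1"]

print(sorted(values))           # ['1', '10', '2'] -- plain string/term order
print(sorted(values, key=int))  # ['1', '2', '10'] -- what sint gives you
```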
On Dec 16, 2008, at 12:24 PM, Marc Sturlese wrote:
Hey there,
I am using sort at searching time.
I would like to know the advantages of using sint field type instead
of
integer field type.
perhaps CoreAdminRequest... it does not give you the property, but
you can see where things are...
http://wiki.apache.org/solr/CoreAdmin
On Dec 16, 2008, at 12:53 PM, Kay Kay wrote:
I am reading the wiki here at - http://wiki.apache.org/solr/Solrj .
Is there a requestHandler ( may be -
What do you see in the admin schema browser?
/admin/schema.jsp
When you select the field names, do you see the property
Multivalued?
ryan
On Dec 15, 2008, at 10:55 AM, Schilperoort, René wrote:
Sorry,
Forgot the most important detail.
The document I am adding contains multiple names
solr 1.3 uses java logging. Most app containers (tomcat, resin, etc)
give you a way to configure that. Also check:
http://java.sun.com/j2se/1.4.2/docs/guide/util/logging/overview.html#1.8
You can make runtime changes from the /admin/ logging tab. However,
these changes are not persisted
I'm indexing some mail archives and within the various formats/
encodings etc, some messages have invalid control characters.
doc.setField( "body", content.toString() );
In the solr logs, I get:
[java] SEVERE: java.io.IOException: Illegal character ((CTRL-
CHAR, code 22))
[java] at
For a similar idea, check:
https://issues.apache.org/jira/browse/SOLR-906
This opens a single stream and writes all documents to that. It could
easily be extended to have multiple threads draining the same Queue
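The multi-threaded extension could be sketched like this: several workers pull documents from one shared queue, and each worker would own its own open stream to Solr. The send step here is a stand-in that just records ids (names and worker count are made up):

```python
import queue
import threading

docs = queue.Queue()
for i in range(100):
    docs.put({"id": i})

sent = []
lock = threading.Lock()

def drain():
    # Each worker pulls from the shared queue until it is empty.
    while True:
        try:
            doc = docs.get_nowait()
        except queue.Empty:
            return
        with lock:  # stand-in for writing the doc to this worker's stream
            sent.append(doc["id"])

workers = [threading.Thread(target=drain) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(sent))  # 100
```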
On Dec 9, 2008, at 4:02 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
I guess this
it depends!
yes there is overhead to each core -- how much it matters will depend
entirely on your setup and typical usage pattern.
sorry this is not a particularly useful answer.
I think the choice of how many cores will come down to your domain
logic needs more than hardware. If you
I have not looked at Field Collapsing in a long time. If someone made
an effort to bring it up-to-date, i'll review it.
It would be great to get Field Collapsing in 1.4
ryan
On Dec 9, 2008, at 12:46 PM, Otis Gospodnetic wrote:
Tracy,
I think Iván de Prado's patch is the latest. Porting
I think your best option is to edit the jsp and remove that syntax...
So you are not running 1.5? how does anything else work?!
On Dec 8, 2008, at 10:12 AM, Sorbo wrote:
Yes everything works except the JSPs. The errors are with the Java
1.5 syntax
viz with the index.jsp
it doesn't like
what about just calling:
http://doom:8983/solr/content_item_representations_20081201/select
That should give you a 404 if it does not exist.
the admin stuff will behave funny if the core does not exist (perhaps
you can file a JIRA issue for that)
ryan
On Dec 4, 2008, at 3:38 PM, Dean
On Dec 4, 2008, at 3:57 PM, Dean Thompson wrote:
Thanks for the quick response, Ryan!
Actually, my admin/ping call gives me a 404 if the core doesn't
exist, which seemed reasonable. I get the 500 if the core *did*
exist.
aaah -- check what ping query you have configured and make sure
yes:
http://localhost:8983/solr/admin/cores?action=STATUS
will give you a list of running cores. However that is not easy to
check with a simple status != 404
see:
http://wiki.apache.org/solr/CoreAdmin
On Dec 4, 2008, at 11:46 PM, Chris Hostetter wrote:
: Subject: Is there a clean way
sure:
LukeRequest luke = new LukeRequest();
luke.setShowSchema( false );
LukeResponse rsp = luke.process( server );
On Dec 1, 2008, at 11:42 AM, Matt Mitchell wrote:
Is it possible to send a request to admin/luke using the
EmbeddedSolrServer?
Check the results from the poll:
http://people.apache.org/~ryan/solr-logo-results.html
The obvious winner is:
https://issues.apache.org/jira/secure/attachment/12394282/solr2_maho_impression.png
But since things are never simple, given the similarity of this
logo to the Solaris logo:
Just a reminder that if you have not yet voted for your favorite
logos, the vote closes tonight at midnight.
Happy Thanksgiving!
ryan
On Sun, Nov 23, 2008 at 11:59 AM, Ryan McKinley [EMAIL PROTECTED] wrote:
Please submit your preferences for the solr logo.
For full voting details, see:
http://wiki.apache.org/solr/LogoContest#Voting
The eligible logos are:
http://people.apache.org/~ryan/solr
On Nov 25, 2008, at 11:40 AM, Brian Whitman wrote:
This is probably severe user error, but I am curious about how to
index docs
to make this query work:
happy birthday
to return the doc with n_name:"Happy Birthday" before the doc with
n_name:"Happy Birthday, Happy Birthday". As it is now, the
lots of approaches out there...
the easiest off the shelf method would be to use the
MoreLikeThisHandler and get the top interesting terms;
http://wiki.apache.org/solr/MoreLikeThisHandler
ryan
On Nov 25, 2008, at 2:09 PM, Plaatje, Patrick wrote:
Hi all,
Struggling with a question I
Please submit your preferences for the solr logo.
For full voting details, see:
http://wiki.apache.org/solr/LogoContest#Voting
The eligible logos are:
http://people.apache.org/~ryan/solr-logo-options.html
Any and all members of the Solr community are encouraged to reply to
this thread
hmm -- that *should* not be the case. The id field in
QueryElevationComponent uses the globally defined field:
SchemaField sf = core.getSchema().getUniqueKeyField();
...
idField = sf.getName().intern();
The only thing that may be weird is that if your id field is named
myid,
if only I could magic all these damn pdfs I have into some code :)
+1
I want some of that magic too!
On Nov 20, 2008, at 11:57 AM, Erik Holstad wrote:
Thanks for the help Ryan!
Using the start.jar with 1.3 and added the slf4j jar to the
classpath. When
with 1.3 -- the logging is java.util.logging --
The slf4j advice only applies to 1.4-dev
ryan
I'm also hitting some threading issues with autocommit -- JConsole
does not show deadlock, but it shows some threads 'BLOCKED' on
scheduleCommitWithin
Perhaps this has something to do with the changes we made for: SOLR-793
I am able to fix this (at least I don't see the blocking with the
the trunk (solr-1.4-dev) is now using SLF4J
If you are using the packaged .war, the behavior should be identical
to 1.3 -- that is, it uses the java.util.logging implementation.
However, if you are using solr.jar, you select what logging framework
you actually want to use by including that
schema fields should be case sensitive... so DOCTYPE != doctype
is the behavior different for you in 1.3 with the same file/schema?
On Nov 19, 2008, at 6:26 PM, Jon Baer wrote:
Hi,
I wanted to try the TermVectorComponent w/ current schema setup and
I did a build off trunk but it's giving
waitFlush I'm not sure...
waitSearcher=true means it will wait until a new searcher is opened
after your commit; that way the client is guaranteed to have the results
that were just sent in the index. If waitSearcher=false, a query could
hit a searcher that does not have the new documents in
it
to false? If I don't wait, how will I ever know when the new
searcher is ready?
On Nov 18, 2008, at 10:27 PM, Ryan McKinley wrote:
waitFlush I'm not sure...
waitSearcher=true it will wait until a new searcher is opened
after your commit, that way the client is guaranteed to have
nope... solr does not have a DTD.
On Nov 18, 2008, at 1:44 PM, Simon Hu wrote:
Hi,
I assume there is a schema definition or DTD for XML response but
could not
find it anywhere.
Is there one?
thanks
-Simon
Are all the documents in the same search space? That is, for a given
query, could any of the 10MM docs be returned?
If so, I don't think you need to worry about multicore. You may
however need to put part of the index on various machines:
http://wiki.apache.org/solr/DistributedSearch
Say you do filtering by user - how would you enforce that the client
(if it's a browser) only send in the proper filter?
Ryan already mentioned his technique... and here's how I'd do it
similarly...
Write a custom servlet Filter that grokked roles/authentication
(this piece you'd need
On Nov 17, 2008, at 12:06 PM, Matthias Epheser wrote:
Ryan McKinley schrieb:
however I have found that in any site where
stability/load and uptime are a serious concern, this is better
handled in a tier in front of java -- typically the loadbalancer /
haproxy / whatever -- and managed
On Nov 17, 2008, at 1:35 PM, Erik Hatcher wrote:
Can you elaborate on the use case for why you need the raw response
like that?
I vaguely get it, but want to really understand the need here.
I'm wary of the EmbeddedSolrServer usage in there, as I want to
distill the VrW stuff to be able
On Nov 17, 2008, at 2:59 PM, Erik Hatcher wrote:
On Nov 17, 2008, at 2:11 PM, Matthias Epheser wrote:
After we add the SolrQueryResponse to the templates first, we
realized that some convenience methods for iterating the result
docs, accessing facets etc. would be fine.
The idea was to
On Nov 17, 2008, at 4:20 PM, Erik Hatcher wrote:
trouble is, you can also GET /solr/update, even all on the URL, no
request body...
http://localhost:8983/solr/update?stream.body=%3Cadd%3E%3Cdoc%3E%3Cfield%20name=%22id%22%3ESTREAMED%3C/field%3E%3C/doc%3E%3C/add%3Ecommit=true
Solr is a
I'm not totally sure what you are suggesting. Is there a general way
people deal with security and search?
I'm assuming we already have good ways (better ways) to make sure
people are authorized/logged in etc. What do you imagine solr
security would add?
FYI, I used to have a custom
magic 'security' tier).
Erik
On Nov 16, 2008, at 5:54 PM, Ryan McKinley wrote:
I'm not totally sure what you are suggesting. Is there a general
way people deal with security and search?
I'm assuming we already have good ways (better ways) to make sure
people are authorized/logged
I'd be parsing out wildcards, boosts, and fuzzy searches (or at
least thinking about the effects).
I mean jakarta apache~1000 or roam~0.1 aren't as efficient as a
regular query.
Even if you leave the solr instance public, you can still limit
grossly inefficient params by forcing things
not sure if it is something we can do better or part of HttpClient...
From:
http://www.nabble.com/CLOSE_WAIT-td19959428.html
it seems to suggest you may want to call:
con.closeIdleConnections(0L);
But if you are creating a new MultiThreadedHttpConnectionManager for
each request, it seems odd
and field matching from the searchcomponent
version
of morelikethis?
On 11/11/08 6:28 PM, Ryan McKinley [EMAIL PROTECTED] wrote:
did you try debugQuery=true?
(I don't know what it does offhand... perhaps nothing, but it may
add
something for MLT)
On Nov 11, 2008, at 6:56 PM, Jeff
if performance is a problem, you can try adding the synonyms at index
time... this should give you similar results without the runtime
cost.
The obvious disadvantage is that you need to have the synonyms at
index time...
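What index-time expansion amounts to, sketched in plain Python (Solr's SynonymFilterFactory does this in the analysis chain; the mapping here is invented):

```python
SYNONYMS = {"tv": ["television"], "usa": ["united states"]}

def expand(tokens):
    """Emit each token plus its synonyms, as an index-time filter would."""
    out = []
    for tok in tokens:
        out.append(tok)
        out.extend(SYNONYMS.get(tok, []))
    return out

print(expand(["tv", "show"]))  # ['tv', 'television', 'show']
```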
On Nov 11, 2008, at 2:37 PM, Manepalli, Kalyan wrote:
Hi
On Nov 11, 2008, at 8:03 PM, Yonik Seeley wrote:
On Tue, Nov 11, 2008 at 6:59 PM, Matthew Runo [EMAIL PROTECTED]
wrote:
What happens when we use another uniqueKey in this case? I was
under the
assumption that if we say <uniqueKey>styleId</uniqueKey> then our
doc IDs
will be our styleIds.
Is
In the application I'm applying URL encoding on the search string,
thus the
entire search string gets converted into:
http://localhost:8080/apache-solr-1.3.0/core51043/select/?
q=Sigma+Survey+for+Police+Officers%26field%3DIndex_Type_s
I tried removing the plusses I am inserting, but now it shows too many
results
fq=+i_subjects:Film+i_subjects:+media+i_subjects:+mass+communication
fq is a multi-valued parameter; try calling it like:
fq=i_subjects:Film&fq=i_subjects:mass communication&fq=...
ryan
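Sketched with stdlib URL encoding (the field values are from the thread, everything else is illustrative): each filter goes in its own repeated fq parameter rather than being glued into one value.

```python
from urllib.parse import urlencode

params = urlencode(
    {"q": "*:*", "fq": ["i_subjects:Film", 'i_subjects:"mass communication"']},
    doseq=True,  # emit one fq= pair per list element
)
print(params)
# q=%2A%3A%2A&fq=i_subjects%3AFilm&fq=i_subjects%3A%22mass+communication%22
```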
also check the timing in debugQuery=true...
I suspect most of the time should be spent in:
process:
<lst name="org.apache.solr.handler.component.QueryComponent">
On Nov 11, 2008, at 12:33 PM, Manepalli, Kalyan wrote:
Hi Otis,
I tested by taking out the newly added synonyms data and the query
Data :
367380 documents
nGeographicLocations : 39298 distinct values
nPersonNames : 325142 distinct values
nOrganizationNames : 130681 distinct values
nCategories : 929 distinct values
nSimpleConcepts : 110198 distinct values
nComplexConcepts : 1508141 distinct values
Each of those
On Nov 5, 2008, at 7:30 AM, Muhammed Sameer wrote:
Salaam,
When I run post.jar or start.jar it throws a lot of information on
the screen, I even tried redirecting the info but that does not seem
to help, I have configured a cron to run post.jar to run every 2mins
to keep the index
have you tried yet?
solr supports UTF-8... so I don't see why there would be a problem...
you should even be able to put a synonym mapping § = section (or the
other way around)
Check the utf8-example.xml to see some examples of working with utf8
chars.
ryan
On Nov 4, 2008, at 5:06 PM,
the only 'limit' is the effect on your query times... you could have
1000+ facets if you are OK with the response time.
Sorry to give the "it depends" answer, but it totally depends on your
data and your needs.
On Oct 30, 2008, at 7:28 AM, Jeryl Cook wrote:
is there a limit on the
isn't this just: fl=f1,f3,f4 etc
or am I missing something?
On Oct 24, 2008, at 12:26 PM, Manepalli, Kalyan wrote:
Hi,
In my usecase, I query a set of fields. Then based on the
results, I want to output a customized set of fields. Can I do this
without using a search component?
E.g.
will always be (f1 ... f6)
Thanks,
Kalyan Manepalli
-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Friday, October 24, 2008 1:25 PM
To: solr-user@lucene.apache.org
Subject: Re: customizing results in StandardQueryHandler
isn't this just: fl=f1,f3,f4 etc
or am I
are you running the packaged .war directly? or something custom? Did
it ever work?
Is anyone else running successfully on weblogic?
On Oct 24, 2008, at 5:10 PM, Dadasheva, Olga wrote:
Hi,
I run Solr 1.3 in Weblogic 10.3 Java 6;
I have a single core application deployed to the same
This is not something solr does currently...
It sounds like something that should be added to Mahout:
http://lucene.apache.org/mahout/
On Oct 24, 2008, at 4:18 PM, Charlie Jackson wrote:
During a recent sales pitch to my company by FAST, they mentioned
entity
extraction. I'd never heard of
On Oct 22, 2008, at 4:17 PM, Otis Gospodnetic wrote:
Hello,
It looks like we might have lost SolrSharp:
http://wiki.apache.org/solr/SolrSharp
It looks like its home is http://www.codeplex.com/solrsharp , but
the site is no longer available.
Does anyone know its status?
looks like it is
do you have handleSelect set to true in solrconfig?
<requestDispatcher handleSelect="true">
...
if not, it would use a Servlet that is now deprecated
On Oct 20, 2008, at 4:52 PM, Feak, Todd wrote:
I found out what's going on.
My test queries from existing Solr (not 1.3.0) that I am
when do you get this error?
Is it on startup or when you are posting xml to /update?
Are there any characters in your xml files before the first ?
ryan
On Oct 17, 2008, at 6:14 AM, sunnyfr wrote:
INFO: Adding 'file:/data/solr/lib/wstx-asl-3.2.7.jar' to Solr
classloader
Oct 17, 2008
that will depend on your servlet container. (jetty, resin, tomcat,
etc...)
If you are running jetty from the example, you can change the port by
adding -Djetty.port=1234 to the command line. The port is configured
in example/etc/jetty.xml
the relevant line is:
Set
try:
field:[* TO *]
On Oct 15, 2008, at 9:44 AM, John E. McBride wrote:
Hello All,
I need to run a query which asks:
field = NOT NULL
should this perhaps be done with a filter?
I can't find out how to do NOT NULL from the documentation, would
appreciate any advice.
Thanks,
John
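A sketch of both directions as encoded parameters (the field name price is made up); the open-ended range is indeed best sent as a filter query, since the filter is cached and reused across queries:

```python
from urllib.parse import urlencode

not_null = urlencode({"q": "*:*", "fq": "price:[* TO *]"})   # field has a value
is_null  = urlencode({"q": "*:*", "fq": "-price:[* TO *]"})  # field is missing

print(not_null)  # q=%2A%3A%2A&fq=price%3A%5B%2A+TO+%2A%5D
```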
It may be possible; however, you would need to use 1.2 with the
Lucene libraries from 1.3. The index format has changed, so a newer
index cannot be read by an older Lucene.
ryan
On Oct 13, 2008, at 3:25 PM, Lucas F. A. Teixeira wrote:
Is this possible? Thinking in a two-phase
check:
https://issues.apache.org/jira/browse/LUCENE-1387
My progress has stumbled since I could not get the tests to work... I
am currently not using this in my own projects, so i'm not yet
comfortable pushing to finish it. If you get it up and running with
success, that could get the
what is your actual query?
Are you doing faceting / highlighting / or anything else?
On Oct 8, 2008, at 2:17 PM, Rajiv2 wrote:
Hi, thanks for responding so quickly,
6-12 seconds seems really long and 15 million docs is nothing on a
machine like this. Are you sure the issue is in Solr?
On Oct 8, 2008, at 4:03 PM, Rajiv2 wrote:
and query times without faceting are... ?
solr's built-in faceting is simple and has its limits. 15M is
higher than I've seen good faceting performance out of, particularly
with multivalued fields.
Erik
Hi, My facet fields are multi valued
<lst name="process">
  <double name="time">6727.0</double>
  <lst name="org.apache.solr.handler.component.QueryComponent">
    <double name="time">6457.0</double>
  </lst>
  <lst name="org.apache.solr.handler.component.FacetComponent">
    <double name="time">0.0</double>
  </lst>
So I take it, this is with faceting turned off...
what
On Oct 8, 2008, at 6:11 PM, Rajiv2 wrote:
w/ faceting qtime is around +200ms.
if your target time is 250ms, this will need some work... but let's
ignore that for now...
qtime for a standard query on the default search field is less than
100ms.
Usually around 60ms.
qtime for id:
do you mean, writing an appender such that when you call:
log.info( "blah blah blah..." );
that gets posted to a solr index?
- - -
I have not heard of anything, but should be relatively easy using
solrj...
On Oct 7, 2008, at 4:35 PM, Moazam Raja wrote:
Hi all, has anyone tried communicating to Solr
On Oct 6, 2008, at 5:58 PM, Chris Hostetter wrote:
: The only filesystem dependency that I want is the index itself.
should we assume you're baking your solrconfig.xml and schema.xml
directly into a jar?
: The current implementation of the SolrResource seems to suggest
that i need
: a
I'm not totally on top of how distributed components work, but check:
http://wiki.apache.org/solr/WritingDistributedSearchComponents
and:
https://issues.apache.org/jira/browse/SOLR-680
Do you want each of the shards to append values? or just the final
result? If appending the values is not