Hi all,
I read about multifaceting [1] and tried it for myself. With
multifaceting I would like to preserve the number of documents for the
'un-faceted' case. This works nicely with normal fields, but I get an
exception [2] if I apply it to a multivalued field.
Is this a bug, or expected behavior :-) ?
2010/5/18 Peter Karich peat...@yahoo.de
Hi all,
I read about multifaceting [1] and tried it for myself. With
multifaceting I would like
Sorry. Wasn't intended as a hijacking :-(
: Subject: Moving from Lucene to Solr?
: References: aanlktimxy1wscs_bjzkkkdy7dlrw1iober5kzszrf...@mail.gmail.com
: In-Reply-To: aanlktimxy1wscs_bjzkkkdy7dlrw1iober5kzszrf...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Hi,
just as a side note as I did not read the link in your conversation:
http://wiki.apache.org/lucene-java/NearRealtimeSearch (I just stumbled
over this as I am interested in this feature too)
Regards,
Peter.
Thanks for the new information. It's really great to see so many options for
these kinds of things.
Thom
On 2010-05-23, at 7:36 AM, Peter Karich wrote:
Hi,
just as a side note as I did not read the link in your conversation:
http://wiki.apache.org/lucene-java/NearRealtimeSearch (I just stumbled
over this as I am interested in this feature too)
Regards,
Peter
Hi,
where can I find more information about a failure of a Java replication
in Solr 1.4?
(Dashboard does not seem to be the best place!?)
Regards,
Peter.
I had success with a previous version (~ 12/2009). Try to ask directly
in the comments of the patch.
I got help there immediately.
Regards,
Peter.
Hello !
I am trying to apply the solr-236 patch to the sources I got from svn. I
downloaded the sources from
Hi,
Now we are getting the following exception [1] under
admin/replication/index.jsp and I have no clue what the cause could be
and couldn't find further info about it...
And how can I configure the indices to log into different log files
under the multi-index setup for Tomcat [2]?
Regards,
Hi Jonty,
what is your specific problem?
You could use a cron job or the Java library Quartz to automate this task.
Or did you mean replication?
Regards,
Peter.
Hi All,
I am very new to Solr, and to Java too.
I need to use SolrJ for indexing, and also need to index automatically once
Hoss,
thanks a lot! (We are using tomcat so the logging properties file is fine.)
Do you know what the reason for the mentioned exception could be?
It seems to me that if this exception occurs, even the replication
for that index does not work.
If I then remove the data directory + reload +
Hi Raakhi,
I am not sure if I understand your use case correctly,
but if you need this custom location to test against an
existing schema/config file I found this snippet [1].
Otherwise the solr home can be set with
-Dsolr.solr.home=/opt/solr/example
more information is available here [2]
We have been using this in production for several months.
So, try the patch and see if it works for you as expected; if not,
improve it :-)
Regards,
Peter.
Do we know when it will be added? Are there any alternatives to Solr
that do this?
Thanks,
Moazzam
On Wed, Jun 9, 2010 at 10:29 PM, Lance
So the 'enable.master' property works but 'solr.core.schemaName' does not?
Maybe solr.core is reserved? Try another name.
If you want to externalize the properties, then another solution could be
to import the whole XML snippet (requestHandler
.../requestHandler) via XML include:
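A sketch of that XInclude approach, assuming the container's XML parser supports XInclude (the file name shared-requestHandler.xml is made up):

```xml
<!-- solrconfig.xml: declare the XInclude namespace on the root element and
     pull in a shared snippet; "shared-requestHandler.xml" is a hypothetical
     file containing the <requestHandler>...</requestHandler> block -->
<config xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- ... other config ... -->
  <xi:include href="shared-requestHandler.xml"/>
</config>
```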
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Thursday, June 10, 2010 3:09 PM
To: solr-user@lucene.apache.org
Subject: Re: Schema not replicating when using multicore property parameter
So the 'enable.master' property works and the 'solr.core.schemaName
I asked this some weeks ago on the list here too, no ideas found so far :-/
Regards,
Peter.
Hi,
was someone already successful in separating the complete logging of each
Solr core? We plan to add more Solr cores to our current setup (Solr 1.4,
running in Tomcat 6.0.2) and it would be
Hi,
it seems to me that the MoreLikeThis component doesn't work for dynamic
fields. Is that correct?
And it also doesn't work for fields which are indexed but not stored,
right? E.g. 'text', which dynamic fields could be copied to.
Or did I create an incorrect example?
Regards,
Peter.
You'll have to reconfigure the 'text'
copyField target to have term vectors.
On Fri, Jun 11, 2010 at 1:06 PM, Peter Karich peat...@yahoo.de wrote:
Hi,
it seems to me that the MoreLikeThis component doesn't work for dynamic
fields. Is that correct?
And it also doesn't work for fields
Hi Alex,
as I understand the thread, you will have to change the Solr source then,
right? The logPath is not available, or did I misunderstand something?
If you are okay with touching Solr, I would rather suggest repackaging
the solr.war with a different logging configuration (so that the cores
do
Hi Alex!
Am I missing something? Anything more to test?
Are you using solrj too? If so, beware of:
https://issues.apache.org/jira/browse/SOLR-1950
Regards,
Peter.
I tried luke via
ssh -X ...
with success ;-)
Hi,
I don't think there is a GUI for this, other than the Web browser.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
From: abhay
Did you kill the process or does a reload help afterwards?
Did you look into the logs? Are there errors mentioning a write lock?
Peter.
Hi,
I have a multi-core Solr setup. All cores finished indexing in reasonable time
but one. I looked at the dataimport info for the one that's hanging. The
Do you need to search (or facet, filter) through the dates or costs?
Maybe you can store only the max and min price and the
available date range in Solr,
and then get detailed information from an additional database query?
Peter.
I want to be able to store property information in Solr,
Hi Raakhi,
First, field collapsing works pretty well in our system. And, as Martin
has said on 17.06.2010 in the other thread Field Collapsing SOLR-236:
I've added a new patch to the issue, so building the trunk (rev
955615) with the latest patch should not be a problem. Due to recent
changes in
Hi Alex,
finally found a solution via Stack Overflow:
http://stackoverflow.com/questions/3025464/how-to-deploy-the-same-webapp-with-different-logging-tomcat-solr
which points to:
http://www.lucidimagination.com/search/document/CDRG_ch09_9.4.2.1
Excerpt from this link:
To change logging settings for Solr
E.g. take a look at:
http://www.craftyfella.com/2010/01/faceting-and-multifaceting-syntax-in.html
Peter.
Huh? I read through the wiki (see http://wiki.apache.org/solr/LocalParams) but I
still don't understand its utility.
Can someone explain to me why this would even be used? Any examples to
On Tue, Jun 22, 2010 at 12:59 AM, Peter Karich peat...@yahoo.de wrote:
Hi Raakhi,
First, field collapsing works pretty well in our system. And, as Martin
has said on 17.06.2010 in the other thread Field Collapsing SOLR-236:
I've added a new patch to the issue, so building the trunk (rev
As always: it depends.
Take a look at Hibernate Search too, which is Lucene-powered.
Peter.
I have a complex data model with bidirectional relations. I use Hibernate
as the ORM provider, so I have several model objects representing the data
model. Altogether my model objects number 75 to 100, and
Hi!
How can I improve the performance of a fuzzy search like mihchael~0.7
on a relatively large index (~1 million docs)?
It takes over 15 seconds at the moment if we perform it on the
normal text search field.
I searched the web and JIRA and couldn't find anything related to that.
Hi Mark!
Solr trunk should have much improved fuzzy speeds (due to some very
cool work that was done in Lucene) - you using 1.4?
yes.
So, you mean I should try it out here:
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/
or some 'more stable' branch?
Thanks, Robert and Otis!
will try it out now.
Peter.
Btw. here you can see Robert's presentation on what he did to speed up fuzzy
queries: http://www.slideshare.net/otisg
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
So, you mean I should try it out here:
Wow! Indeed a lot faster (~an order of magnitude). Hopefully we do not
encounter a bug with the trunk :-)
So, thanks and congrats for that awesome piece of software!
On Wed, Jun 23, 2010 at 3:34 PM, Peter Karich peat...@yahoo.de wrote:
So, you mean I should try it out here:
http
Hi Li Li,
If the changes are not that frequent, just copy the data folder:
http://wiki.apache.org/solr/SolrOperationsTools
Or see this question + answer:
http://stackoverflow.com/questions/3083314/solr-incremental-backup-on-real-time-system-with-heavy-index
where those direct links could help:
Hi,
is it possible to use the stored terms of a field for a faceted search?
I mean, I don't want to get the term frequency per document as it is
shown here:
http://wiki.apache.org/solr/TermVectorComponentExampleOptions
I want to get the frequency of the term of my special search and show
only
Dear Hoss,
I will try to clarify what I want to achieve :-)
Assume I have the following three docs:
id:1
description: bmx bike 123
id:2
description: bmx bike 321
id:3
description: a mountain bike
If I query against *:* I want to get the facets and their document counts, e.g.:
bike: 3
bmx: 2
I
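Assuming 'description' is an indexed, tokenized field, plain field faceting over it would produce exactly those per-term document counts (other terms like '123' and 'mountain' would show up with count 1 as well); a sketch of the request:

```
/solr/select?q=*:*&rows=0&facet=true&facet.field=description
```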
What do your queries look like? Do you use faceting, highlighting, ... ?
Did you try to customize the cache?
Setting the HashDocSet to 0.005 of all documents improves our search speed a
lot.
Did you optimize the index?
500ms seems to be slow for an 'average' search. I am not an expert, but
we are using 1.4.0 without any major problems so far. (So, I would use
1.4.1 for the next app, just to have the latest version.)
The trunk is also nice, to get the fuzzy search performance boosts.
Peter.
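For reference, that HashDocSet hint lives in solrconfig.xml; the value below is only an example worked out for a ~1M-document index (0.005 × 1,000,000 = 5000), not a general recommendation:

```xml
<!-- solrconfig.xml (Solr 1.4): upper bound on the doc-set size for which the
     hash-based implementation is used; 5000 = 0.005 of a 1,000,000-doc index -->
<HashDocSet maxSize="5000" loadFactor="0.75"/>
```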
Hi all,
I'm going to develop a Solr-based search architecture and I wonder if you
could
?
Thanks.
Regards.
Scott
On 2010-07-15 17:19:57, Peter Karich peat...@yahoo.de wrote:
What do your queries look like? Do you use faceting, highlighting, ... ?
Did you try to customize the cache?
Setting the HashDocSet to 0.005 of all documents improves our
search speed a lot.
Did you optimize
satya,
just a side question: did you use the dismax handler?
dismax won't handle q=*:*; for dismax it should be an empty q=
to get all docs.
First, look at the SOLR admin page and see if there's anything in your
index.
Second, examine the SOLR log files, see what comes out when you try this.
You
satya,
sorry for being a bit harsh, but did you read Erick's answer in the
'problem with storing??' thread at all?
Just asking the same question again (and not answering old questions) might
be a bit disappointing for people who want to help you.
just my side-note ...
Regards,
Peter.
Hi
Hi,
Why do you need the weight for the tags?
you could index it this way:
{
id: 123
tag:'tag1'
weight: 0.01
uniqueKey: combine(id, tag)
}
{
id: 123
tag:'tag2'
weight: 0.3
uniqueKey: combine(id, tag)
}
and specify the query-time boost with the help of the weight.
I didn't look at payloads as mentioned by Jonathan, but another
solution could be (similar to Dennis'):
create a field 'tags' and then add tag1 to it several times,
depending on the weight.
E.g. add it 10 times if the weight is 1.0,
but add it only 2 times if the weight is 0.2, etc.
Of
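The repeat-by-weight mapping above can be sketched like this (the factor 10 and the class name are just illustrations taken from the example in the mail, not a fixed rule):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TagWeighting {
    /** Repeat a tag according to its weight, e.g. weight 1.0 -> 10 copies,
     *  weight 0.2 -> 2 copies, so the index-time term frequency of the
     *  multivalued 'tags' field reflects the weight. */
    public static List<String> repeatByWeight(String tag, double weight) {
        int copies = Math.max(1, (int) Math.round(weight * 10));
        return new ArrayList<>(Collections.nCopies(copies, tag));
    }

    public static void main(String[] args) {
        System.out.println(repeatByWeight("tag1", 1.0).size()); // 10
        System.out.println(repeatByWeight("tag2", 0.2).size()); // 2
    }
}
```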
Each Solr (Jetty) instance consumes 40M-60M of memory.
java -Xmx1024M -jar start.jar
That's a good suggestion!
Please double-check that you are using the -server version of the JVM
and the latest, 1.6.0_20 or so.
Additionally you can start jvisualvm (shipped with the JDK) and hook
into
Hi Andrew,
I didn't quite understand what you are trying to do with 'copying'.
Do you want to use one core as a template, or use it to replicate data?
You can reload only one application via:
http://localhost/manager/html/reload?path=/yourapp
(if you do this often you need to increase the PermGen
) doesn't reload and the other cores aren't then working.
I don't need replication just yet but I will be looking into that
eventually.
Regards
Andrew
On 20 July 2010 10:32, Peter Karich peat...@yahoo.de wrote:
Hi Andrew,
I didn't correctly understand what you are trying to do
Hi!
Are there any known issues that may cause the index sync between the
master/slave to be abnormal?
What do you mean here? Corrupt indices? Please, describe your problems
in more detail.
And is there any API to call to force-sync the index between the
master and slave, or to force deletion of the old
Hi James,
triggering an optimize (on the slave) helped us to shrink the disk usage
of the slaves.
But I think the slaves will clean up automatically on the next
replication (if you don't mind the double-size index).
Regards,
Peter.
Hi Peter,
Thanks for your response. I will check the
Maybe it's too simple, but did you try rows=20 or something greater, as
Lance suggested? I.e.:
select?rows=20&qt=dismax
Regards,
Peter.
Yes, I've got data... maybe my query is wrong?
select?q=moto&qt=dismax&q=city:Paris
Field city is not showing?
-Original Message-
From:
Another possibility could be the well known 'field collapse' ;-)
http://wiki.apache.org/solr/FieldCollapsing
Regards,
Peter.
Thanks.
If I set uniqueKey on the field, can I then still save duplicates?
I need to remove duplicates only from search results. The ability to save
duplicates are should
to delete duplicates (I don't need a count
of duplicates or to select a certain duplicate).
2010/7/23 Peter Karich peat...@yahoo.de
Another possibility could be the well known 'field collapse' ;-)
http://wiki.apache.org/solr/FieldCollapsing
Regards,
Peter.
Thanks.
If I set uniqueKey
ids):
1, 9
or
0, 8
2010/7/23 Peter Karich peat...@yahoo.de
Hi Pavel!
The patch can be applied to 1.4.
The performance is ok, but for some situations it could be worse than
without the patch.
For us it works well, but others reported some exceptions
(see the patch site: https
Gora,
just out of interest:
does Apache Bench send different queries (e.g. from the logs), or always
the same query?
If it were always the same query, the Solr cache would kick in and
make the response time super small.
I would like to find a tool or script with which I can send my logfile to Solr
Hi Girish,
I am not aware of such a thing.
But you could use a middleware to prevent certain fields from being
retrieved, via the 'fl' parameter:
http://wiki.apache.org/solr/CommonQueryParameters#fl
E.g. for your customers the query looks like q=hellofl=title and for
your admin the query looks like
did you try an optimize on the slave too?
Yes I always run an optimize whenever I index on master. In fact I just ran
an optimize command an hour ago, but it didn't make any difference.
We have three dedicated servers for solr, two for slaves and one for master,
all with linux/debian packages installed.
I understand that replication always copies over the index in the exact
form as in the master index directory (or it is supposed to do that at least),
and if the master
Hi,
I am indexing a Solr 1.4.0 core and committing gets slower and slower,
starting from 3-5 seconds for ~200 documents and ending with over 60
seconds after 800 commits. Then, if I reload the index, it is as fast
as before! And today I read a similar thread [1] and indeed: if I
set
Hi Muneeb,
I fear you'll have no chance: replicating an index will use more disk
space on the slave nodes.
Of course, you could minimize disk usage AFTER the replication via the
'optimize hack'.
But are you sure the reason the slave node dies is due to disk
limitations?
Try to observe the
before the warmup queries
from the previous commit have done their magic, you might be getting
into a death spiral.
HTH
Erick
On Thu, Jul 29, 2010 at 7:02 AM, Peter Karich peat...@yahoo.de wrote:
Hi,
I am indexing a Solr 1.4.0 core and committing gets slower and slower,
starting from 3-5
Both approaches are OK, I think (although I don't know the Python API).
BTW: if you query q=*:*, then add rows=0 to avoid some traffic.
Regards,
Peter.
I want to programmatically retrieve the number of indexed documents. I.e.,
get the value of numDocs.
The only two ways I've come up with are
Hi Peter :-),
did you already try other values for
hl.maxAnalyzedChars=2147483647
? Also regular expression highlighting is more expensive, I think.
What does the 'fuzzy' variable mean? If you use this to query via
~someTerm instead of someTerm,
then you should try the trunk of Solr, which is a lot
is pretty frequent for Solr.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
From: Peter Karich peat...@yahoo.de
To: solr-user@lucene.apache.org
Sent: Fri, July 30, 2010 4:06:48 PM
to be reopened, and this happens on commit.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
From: Peter Karich peat...@yahoo.de
To: solr-user@lucene.apache.org
Sent: Fri, July 30, 2010 6:19
Ophir,
this sounds a bit strange:
CommonsHttpSolrServer.java, line 416 takes about 95% of the application's
total search time
Is this only under heavy load?
Some other things:
* with lucene you accessed the indices with MultiSearcher in a LAN, right?
* did you look into the logs of the
The default Solr solution is client-side load balancing.
Is there a solution providing server-side load balancing?
No. Most of us stick an HTTP load balancer in front of multiple Solr servers.
E.g. mod_jk is a very easy solution (maybe too simple/stupid?) for a
load balancer,
but it
Hi,
I have 5 million small documents/tweets (= ~3GB) and the slave index
replicates itself from the master every 10-15 minutes, so the index is
optimized before querying. We are using Solr 1.4.1 (patched with
SOLR-1624) via SolrJ.
Now the search speed is slow: ~2s for common terms which hit more than
)
Tom Burton-West
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Tuesday, August 10, 2010 9:54 AM
To: solr-user@lucene.apache.org
Subject: Improve Query Time For Large Index
Hi,
I have 5 Million small documents/tweets (= ~3GB) and the slave index
replicates
Hi Robert!
Since the example given was http being slow, it's worth mentioning that if
queries are one-word URLs [for example http://lucene.apache.org], these
will actually form slow phrase queries by default.
do you mean that http://lucene.apache.org will be split up into http
lucene
<filter class="solr.CommonGramsQueryFilterFactory" words="new400common.txt"/>
</analyzer>
</fieldType>
Tom
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Tuesday, August 10, 2010 3:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Improve Query Time For Large Index
words list. (Details on CommonGrams
here:
http://www.hathitrust.org/blogs/large-scale-search/slow-queries-and-common-words-part-2)
Tom Burton-West
-Original Message-
From: Peter Karich [mailto:peat...@yahoo.de]
Sent: Tuesday, August 10, 2010 9:54 AM
To: solr-user
I wonder, too, that there isn't a special tool which analyzes Solr
logfiles (e.g. parses QTime and the parameters q, fq, ...),
because there are some other open-source log analyzers out there:
http://yaala.org/ http://www.mrunix.net/webalizer/
Another free tool is newrelic.com (you will
Is there a way to verify that I have added it correctly?
On Linux you can do
ps -elf | grep Boot
and see if the java command has the parameters added.
@all: why and when do you get those OOMs? while querying? which queries
in detail?
Regards,
Peter.
Hi Wenca,
I am not sure whether my information here is really helpful for you,
sorry if not ;-)
I want only hotels that have room with 2 beds and the room has a
package with all inclusive boarding and price lower than 400.
You should tell us what you want to search and filter. Do you want only
is just
5-6 GB yet that particular error is seldom observed... (SEVERE ERROR : JAVA
HEAP SPACE , OUT OF MEMORY ERROR )
I could see one lock file generated in the data/index path just after this
error.
On Tue, Aug 17, 2010 at 4:49 PM, Peter Karich peat...@yahoo.de wrote
I am new to Solr so excuse me if I don't use the right terminology
yet, but I hope that my description of the use case is quite clear
now. ;-)
Thanks
Wenca
Dne 17.8.2010 13:46, Peter Karich napsal(a):
Hi Wenca,
I am not sure whether my information here is really helpful for you,
sorry
Hi all,
my queryResultCache has no hits. But if I remove one line from the
bf section in my dismax handler, all is fine. Here is the line:
recip(ms(NOW,date),3.16e-11,1,1)
According to
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents
this should be
Thanks a lot Yonik! Rounding makes sense.
Is there a date math for the 'LAST_COMMIT'?
Peter.
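A sketch of the rounded variant of that boost function; the NOW/DAY granularity is an assumption, pick whatever staleness your relevancy can tolerate:

```xml
<!-- dismax bf: round NOW so the function string is identical across
     requests and the queryResultCache can actually hit -->
<str name="bf">recip(ms(NOW/DAY,date),3.16e-11,1,1)</str>
```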
On Tue, Aug 17, 2010 at 6:29 PM, Peter Karich peat...@yahoo.de wrote:
my queryResultCache has no hits. But if I am removing one line from the
bf section in my dismax handler all is fine. Here
Hi Yonik,
would you point me to the Java classes where solr handles a commit or an
optimize and then the date math definitions?
Regards,
Peter.
On Wed, Aug 18, 2010 at 4:34 PM, Peter Karich peat...@yahoo.de wrote:
Thanks a lot Yonik! Rounding makes sense.
Is there a date math
Forgot to say: thanks again! Now the cache gets hits!
Regards,
Peter.
On Wed, Aug 18, 2010 at 4:34 PM, Peter Karich peat...@yahoo.de wrote:
Thanks a lot Yonik! Rounding makes sense.
Is there a date math for the 'LAST_COMMIT'?
No - but it's an interesting idea!
-Yonik
http
Hi Ankita,
first: thanks for trying Apache Solr.
Does all the data to be indexed have to be in the exampledocs folder?
No. And there are several ways to push data into Solr: via indexing, the
DataImportHandler, SolrJ, ...
I know that getting comfortable with a new project is a bit complicated
at
Hi!
What do you mean? You want a quickstart?
Then see
http://lucene.apache.org/solr/tutorial.html
(But I thought you already got solr working (from previous threads)!?)
Or do you want to know if solr is running? Then try the admin view:
http://localhost:8080/solr/admin/
Regards,
Peter.
Hi
Hi,
that issue is not really related to Solr. See this:
http://stackoverflow.com/questions/88235/how-to-deal-with-java-lang-outofmemoryerror-permgen-space-error
Increasing MaxPermSize (-XX:MaxPermSize=128m) does not really solve this
issue, but you will see fewer errors :-)
I have written a mini
Ah, okay.
So SolrDocument is never used in a normal search? It's only for other
Solr plugins?
SolrDocument is under org.apache.solr.common, which is for
solr-solrj.jar and not available to solr-core.jar;
see e.g.:
Hi,
Solr is only able to handle Unicode (UTF-8).
Make really sure that you push data into the index in the correct encoding.
See my (accepted ;-)) answer:
http://stackoverflow.com/questions/3086367/how-to-view-the-xml-documents-sent-to-solr/3088515#3088515
Regards,
Peter.
I have an index that
Hi there,
I don't know if my idea is perfect but it seems to work ok in my
twitter-search prototype:
http://www.jetwick.com
(keep in mind it is a vhost and only one fat index, no sharding, etc...
so performance isn't perfect ;-))
That said, type in 'so' and you will get 'soldier', 'solar', ...
Peter,
thanks a lot for your in-depth explanations!
Your findings will be definitely helpful for my next performance
improvement tests :-)
Two questions:
1. How would I do that:
or a local read-only instance that reads the same core as the indexing
instance (for the latter, you'll need
, as the RO instance is simply another shard in the pack.
On Sun, Sep 12, 2010 at 8:46 PM, Peter Karich peat...@yahoo.de wrote:
Peter,
thanks a lot for your in-depth explanations!
Your findings will be definitely helpful for my next performance
improvement tests :-)
Two questions:
1. How
Hi,
if you index your doc with text='operating system' and an additional
keyword field='linux'
(of type string, can be multivalued), then Solr faceting should be what
you want:
solr/select?q=*:*&facet=true&facet.field=keyword&rows=10 or rows=0,
depending on your needs.
Does this help?
Regards,
see
http://stackoverflow.com/questions/88235/how-to-deal-with-java-lang-outofmemoryerror-permgen-space-error
and the links there. There seems to be no good solution :-/
The only reliable solution is a restart before you run out of
PermGen space (use jvisualvm to monitor).
And try to increase
Jonathan,
this field described here by Chantal:
2.) create an additional field that uses the
String type with the same content (use copyField to fill either)
can be multivalued. Or what did you mean?
BTW: The nice thing about facet.prefix is that you can add an arbitrary
(filter)
How long does it take to get 1000 docs?
Why not ensure this while indexing?
I think besides your suggestion or the suggestion of Luke there is no
other way...
Regards,
Peter.
Hello,
What would be the best way to check the Solr index against the original
system (database) to make sure the index is up to
Hi solr community!
Is it recommended to replace the data directory of a heavily used Solr
instance?
(I am aware of the HTTP queries, but that will be too slow.)
I need a fast way to push development data to production servers.
I tried the following with success, even under load of the index:
mv
Hi,
any hints or suggestions?
Does anyone do the updating this way?
Regards,
Peter.
Hi solr community!
Is it recommended to replace the data directory of a heavily used Solr
instance?
(I am aware of the HTTP queries, but that will be too slow.)
I need a fast way to push development data to
Hi
Oops, sorry. I didn't notice the answer because it was in the bulk
folder.
I thought this procedure would be a lot faster and have less overhead.
Just two lines of shell script.
What do you think?
Regards,
Peter.
This should work on Linux. The rsync based replication scripts used
also take a look at:
http://wiki.apache.org/solr/HierarchicalFaceting
+ SOLR-64, SOLR-792
+ http://markmail.org/message/jxbw2m5a6zq5jhlp
Regards,
Peter.
Take a look at Mastering the Power of Faceted Search with Chris
Hostetter
(http://www.lucidimagination.com/solutions/webcasts/faceting). I
Hi,
there is a solution without the patch. Here it should be explained:
http://www.lucidimagination.com/blog/2010/08/11/stumped-with-solr-chris-hostetter-of-lucene-pmc-at-lucene-revolution/
If not, I will do so on 9.10.2010 ;-)
Regards,
Peter.
I've a similar problem with a project I'm working on
Hi,
there are two relatively similar solutions for this problem.
I will describe one of them:
* create a multivalued string field called 'category'
* you have a category tree, so make sure a document gets not only the
leaf category, but all categories (name or id) up to the root
* now facet
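A sketch of such a document under this scheme (the field names follow the mail; the id and category values are made up). Faceting on 'category' then gives counts at every level, and a filter query such as fq=category:electronics drills down:

```xml
<add>
  <doc>
    <field name="id">42</field>
    <!-- leaf category plus every ancestor up to the root -->
    <field name="category">electronics</field>
    <field name="category">electronics/cameras</field>
    <field name="category">electronics/cameras/slr</field>
  </doc>
</add>
```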
Hi,
I need a feature which is well explained by Mr Goll at this site **
So, it would then be nice to do something like:
facet.stats=sum(fieldX)&facet.stats.sort=fieldX
And the output (sorted against the sum output) could look something like this:
lst name=facet_counts
lst name=facet_fields
lst
I'm not sure... I was just reading it yesterday night...
but isn't the unapplied patch from Harish
https://issues.apache.org/jira/secure/attachment/12400054/SOLR-680.patch
what you want?
Regards,
Peter.
Running 1.4.1.
I'm able to execute stats queries against multi-valued fields, but when
given
Hi Olivier,
maybe the slave replicates after startup? check replication status here:
http://localhost/solr/admin/replication/index.jsp
What is your poll frequency (could you paste the replication config)?
Regards,
Peter.
Hello,
I setup a server for the replication of Solr. I used 2 cores and
Hi Olivier,
the index size is relatively big and you enabled replication after startup:
<str name="replicateAfter">startup</str>
This could explain why the slave is replicating from the very beginning.
Are the index versions/generations the same? (via command or
admin/replication)
If not, the slaves