Hi All,
I made a change to schema to add new fields in a
collection, this was uploaded to Zookeeper via the
below command:
For the schema:
solr zk cp file:E:\SolrCloud\server\solr\configsets\COLLECTION\conf\schema.xml zk:/configs/COLLECTION/schema.xml -z SERVERNAME1.uleaf.site
For the
would more than satisfy high availability.
Regards
Ian
-Original Message-
From: Jörn Franke
If you have a properly secured cluster, e.g. with Kerberos, then you should not
update files in ZK directly. Use the corresponding Solr REST interfaces; then
you are also less likely to mess something up.
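Following that advice, a schema change like the one above can go through the Schema API instead of copying files into ZooKeeper. A minimal sketch in Python - the host, collection, and field names are made up, and as far as I know the Schema API only works when the managed-schema factory is enabled, not with a classic hand-edited schema.xml:

```python
import json

# Hypothetical host, collection, and field name - adjust for your cluster
SOLR_SCHEMA_URL = "http://localhost:8983/solr/COLLECTION/schema"

def add_field_command(name, field_type="string", stored=True):
    """Build a Schema API 'add-field' command body."""
    return json.dumps({"add-field":
                       {"name": name, "type": field_type, "stored": stored}})

body = add_field_command("new_field_s")
# POST `body` to SOLR_SCHEMA_URL with Content-Type: application/json
```

Going through the API means Solr validates the change and reloads the schema itself, instead of you racing a hand-uploaded file in ZK.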
Hi,
I am relatively new to Solr, especially Solr Cloud, and have been using it for
a few days now. I think I have set up Solr Cloud correctly, but would like
some guidance to ensure I am doing it right. Ideally I want to be able
to process 40 million documents in production via Solr Cloud.
Agreed, but yes it skips them even when explicitly referenced by name. The
line I linked to (530) will skip any file whose name begins with a dot. If
there's a better workaround than what I've proposed then I'm certainly open
to it.
Best,
Ian
On Fri, Jun 1, 2018 at 1:25 PM, Alexandre Rafalovitch
t sure which git branch would be best in this
case. Any guidance on best practices is much appreciated!
Best,
--
Ian Goldsmith-Rooney
changed grouping to be two-level, so
lots of the change is in grouping code)
In the 5.5.3 code base we changed the method constructRequest(ResponseBuilder
rb) in TopGroupsShardRequestFactory to always call createRequestForAllShards(rb)
Ian
NLA
-Original Message-
From: Diego Ceccarelli (BLOOMBERG
n this node)? That would help
immensely when debugging.
Thanks!
- Ian
is there any way
to achieve something similar, as we are stuck on v4?
Any help / advice / pointers would be great! - thanks in advance
Ian
Done!
https://issues.apache.org/jira/browse/SOLR-7481
On Tue, Apr 28, 2015 at 11:09 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
This is a bug. Can you please open a Jira issue?
On Tue, Apr 28, 2015 at 8:35 PM, Ian Rose ianr...@fullstory.com wrote:
Is it possible to run
,
Ian
On Tue, Apr 28, 2015 at 12:47 PM, Anshum Gupta ans...@anshumgupta.net
wrote:
Hi Ian,
DELETESHARD doesn't support ASYNC calls officially. We could certainly do
with a better response but I believe with most of the Collections API calls
at this time in Solr, you could send random params
:06 PM, Anshum Gupta ans...@anshumgupta.net
wrote:
Hi Ian,
What do you mean by *my testing shows*? Can you elaborate on the steps,
and how you confirmed that the call was indeed *async*?
I may be wrong but I think what you're seeing is a normal DELETEREPLICA
call succeeding behind
with "Did not
find taskid [12-foo-4] in any tasks queue".
Synchronous deletes are causing problems for me in production as they are
timing out in some cases.
Thanks,
Ian
p.s. I'm on version 5.0.0
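For reference, the async pattern under discussion looks like this. A Python sketch with invented collection, shard, replica, and request-id names (DELETEREPLICA documents async support; DELETESHARD, as Anshum notes, does not officially):

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed host

def collections_api(action, **params):
    """Build a Collections API URL; an 'async' param requests background execution."""
    return f"{SOLR}/admin/collections?" + urlencode({"action": action, **params})

# Fire-and-forget delete, then poll its status by request id.
# 'async' is a Python keyword, hence the **{...} spelling.
delete_url = collections_api("DELETEREPLICA", collection="testdrive",
                             shard="shard1", replica="core_node2",
                             **{"async": "del-1"})
status_url = collections_api("REQUESTSTATUS", requestid="del-1")
```

The REQUESTSTATUS call is what returns the "Did not find taskid" message once an id has been purged or was never registered.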
at the same time (should I be?). How does the SolrJ
client handle this?
Thanks!
- Ian
it's good to know the route that SolrJ has chosen...
cheers,
Ian
On Tue, Apr 14, 2015 at 3:56 PM, Hrishikesh Gadre gadre.s...@gmail.com
wrote:
Hi Ian,
As per my understanding, Solrj does not use Zookeeper watches but instead
caches the information (along with a TTL). You can find more
Wups - sorry folks, I sent this prematurely. After typing this out I think
I have it figured out - although SPLITSHARD ignores maxShardsPerNode,
ADDREPLICA does not. So ADDREPLICA fails because I already have too many
shards on a single node.
On Wed, Apr 8, 2015 at 11:18 PM, Ian Rose ianr
Thanks, I figured that might be the case (hand-editing clusterstate.json).
- Ian
On Wed, Apr 8, 2015 at 11:46 PM, ralph tice ralph.t...@gmail.com wrote:
It looks like there's a patch available:
https://issues.apache.org/jira/browse/SOLR-5132
Currently the only way without that patch
On my local machine I have the following test setup:
* 2 nodes (JVMs)
* 1 collection named testdrive, that was originally created with
numShards=1 and maxShardsPerNode=1.
* After a series of SPLITSHARD commands, I now have 4 shards, as follows:
testdrive_shard1_0_0_replica1 (L) Active 115
I previously created several collections with maxShardsPerNode=1 but I
would now like to change that (to unlimited if that is an option). Is
changing this value possible?
Cheers,
- Ian
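For later readers: the patch tracked in SOLR-5132 eventually surfaced as the Collections API MODIFYCOLLECTION action (in later 5.x releases, if I remember right), which can change maxShardsPerNode without hand-editing clusterstate.json. A sketch with an assumed host and collection name:

```python
from urllib.parse import urlencode

# Assumed host and collection; MODIFYCOLLECTION exists only in releases
# that picked up SOLR-5132
url = ("http://localhost:8983/solr/admin/collections?"
       + urlencode({"action": "MODIFYCOLLECTION",
                    "collection": "testdrive",
                    "maxShardsPerNode": 4}))
```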
is based on Apache Solr 4.4.0, but I expect/hope it
did not get worse in newer releases.
Just to give you some idea of what can at least be achieved - in the
high-end of #replica and #docs, I guess
Regards, Per Steffensen
On 24/03/15 14:02, Ian Rose wrote:
Hi all -
I'm sure this topic has
Hi Erik -
Sorry, I totally missed your reply. To the best of my knowledge, we are
not using any surround queries (have to admit I had never heard of them
until now). We use solr.SearchHandler for all of our queries.
Does that answer the question?
Cheers,
Ian
On Fri, Mar 13, 2015 at 10:08 AM
machine, even if you throw a lot of hardware at it.
Thanks!
- Ian
, Senior Solutions Architect
http://www.lucidworks.com http://www.lucidworks.com/
On Mar 24, 2015, at 8:55 AM, Ian Rose ianr...@fullstory.com wrote:
Hi Erik -
Sorry, I totally missed your reply. To the best of my knowledge, we are
not using any surround queries (have to admit I had
not be
economical (or efficient). Or perhaps I am not understanding what your
definition of tenant is?
Cheers,
Ian
On Tue, Mar 24, 2015 at 4:51 PM, Toke Eskildsen t...@statsbiblioteket.dk
wrote:
Jack Krupansky [jack.krupan...@gmail.com] wrote:
I'm sure that I am quite unqualified to describe his
that everything
will still work fine if I add 10 nearly-idle cores to that machine? What
about 100? 1000? I figure the overhead of each core is probably fairly
low but at some point starts to matter.
Does that make sense?
- Ian
On Tue, Mar 24, 2015 at 11:12 AM, Jack Krupansky jack.krupan
of concurrent queries running on the server
is too high?
Also, is this a builtin limit or something set in a config file?
Thanks!
- Ian
I don't think zookeeper has a REST api. You'll need to use a Zookeeper
client library in your language (or roll one yourself).
On Wed, Nov 19, 2014 at 9:48 AM, nabil Kouici koui...@yahoo.fr wrote:
Hi All,
I'm connecting to solr using REST API (No library like SolJ). As my solr
configuration
. Plus Go makes little proxies like this
super easy to do.
Hope all that is useful to someone. Thanks again to the posters above for
providing suggestions!
- Ian
On Sat, Nov 1, 2014 at 7:13 PM, Erick Erickson erickerick...@gmail.com
wrote:
bq: but it should be more or less a constant factor
that one could be the
leader, is the replica deletion done in a safe manner such that no
documents will be lost (e.g. ones that were recently received by the leader
and not yet synced over to the slave replica before the leader is deleted)?
Thanks as always,
Ian
between
DELETEREPLICA and unloading the core directly.
Michael
On 11/7/14 10:24, Ian Rose wrote:
Howdy -
What is the current best practice for migrating shards to another
machine?
I have heard suggestions that it is add replica on new machine, wait
for
it to catch up
Awesome, thanks. That's what I was hoping.
Cheers,
Ian
On Wed, Nov 5, 2014 at 10:33 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
There's no difference between the two. Even if you send updates to a shard
url, it will still be forwarded to the right shard leader according
in
the client to see what kind of additional performance gain that gets us.
Cheers,
Ian
On Fri, Oct 31, 2014 at 3:43 PM, Peter Keegan peterlkee...@gmail.com
wrote:
Yes, I was inadvertently sending them to a replica. When I sent them to the
leader, the leader reported (1000 adds) and the replica
to have that working today - will report
back on my findings.
Cheers,
- Ian
p.s. To clarify why we are rolling our own smart router code, we use Go
over here rather than Java. Although if we still get bad performance with
our custom Go router I may try a pure Java load client using
)
* Network bandwidth is a few MB/s, well under the gigabit capacity of our
network
* Disk bandwidth (< 2 MB/s) and iops (< 20/s) are low.
Any ideas? Thanks very much!
- Ian
p.s. Here is my raw data broken out by number of nodes and number of
simulated users:
Num NodesNum
not issuing any
queries, only writes (document inserts). In the case of writes, increasing
the number of shards should increase my throughput (in ops/sec) more or
less linearly, right?
On Thu, Oct 30, 2014 at 4:50 PM, Shawn Heisey apa...@elyograg.org wrote:
On 10/30/2014 2:23 PM, Ian Rose wrote
,
- Ian
On Thu, Oct 30, 2014 at 8:01 PM, Erick Erickson erickerick...@gmail.com
wrote:
Your indexing client, if written in SolrJ, should use CloudSolrServer
which is, in Matt's terms, "leader aware". It divides up the
documents to be indexed into packets that where each doc in
the packet belongs
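The "leader aware" batching Erick describes can be illustrated with a toy router. Solr's real compositeId router maps a MurmurHash3 of the document id onto each shard's hash range; the crc32-mod-n below is only a stand-in for the idea of grouping documents by destination before sending:

```python
import zlib
from collections import defaultdict

def shard_for(doc_id: str, num_shards: int) -> int:
    # Toy stand-in; Solr actually uses MurmurHash3 over shard hash ranges
    return zlib.crc32(doc_id.encode()) % num_shards

def batch_by_shard(doc_ids, num_shards):
    """Group docs into one packet per shard, as CloudSolrServer does per leader."""
    batches = defaultdict(list)
    for doc_id in doc_ids:
        batches[shard_for(doc_id, num_shards)].append(doc_id)
    return batches

batches = batch_by_shard([f"doc{i}" for i in range(100)], 4)
```

Sending each packet straight to its leader is what avoids the extra forwarding hop that a dumb round-robin client pays.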
Very interested in what you find out with your benchmarking, and whether it
bears out what I've experienced.
Does anyone know when 4.10 is likely to be released?
I'm benchmarking this right now so I'll share some numbers soon.
help.
Timothy Potter wrote
Hi Ian,
What's the CPU doing on the leader? Have you tried attaching a
profiler to the leader while running and then seeing if there are any
hotspots showing. Not sure if this is related but we recently fixed an
issue in the area of leader forwarding to replica
when the
replica is enabled, compared to when it is disabled:
http://lucene.472066.n3.nabble.com/file/n4147645/solr-cpu-usage.jpg
In the above chart, the dip in CPU usage in the middle was while the replica
(which lives on a different VM) was disabled.
Thanks
Ian
Timothy Potter wrote
Hi Ian
it? Is there a forum where I should
raise that?
Thanks again for your help
Ian
Shalin Shekhar Mangar wrote
You can use CloudSolrServer (if you're using Java) which will route
documents correctly to the leader of the appropriate shard.
On Tue, Jul 15, 2014 at 3:04 PM, ian lt
as Solr, and working out in advance which shard my inserts should be
sent to. Do you know whether that's an approach that others have used?
Thanks again
Ian
--
View this message in context:
http://lucene.472066.n3.nabble.com/Slow-inserts-when-using-Solr-Cloud-tp4146087p4147183.html
Sent from
of an overhead for shard routing and replicas, or
might this indicate a problem in my configuration?
Many thanks
Ian
---
Thanks Marc.
On May 4, 2012, at 8:52 PM, Marc Sturlese wrote:
http://lucene.472066.n3.nabble.com/Multiple-Facet-Dates-td495480.html
--
View this message in context:
http://lucene.472066.n3.nabble.com/Faceting-on-a-date-field-multiple-times-tp3961282p3961865.html
Sent from the Solr - User
to do this without hitting solr 3 times?
thanks
Ian
errors.
I can see *nothing* in *any* log (believe me I've looked!) and clearly my
configuration is correct because it works most of the time. Anyone any
ideas how I can find more information on this problem?
Thanks!
--
Ian
i...@isfluent.com a...@endissolutions.com
+44 (0)1223 257903
Right.
This is REALLY weird - I've now started from scratch on another
machine (this time Windows 7), and got _exactly_ the same problem !?
On Mon, Nov 28, 2011 at 7:37 AM, Husain, Yavar yhus...@firstam.com wrote:
Hi Ian
I am having exactly the same problem what you are having on Win 7
! Weird!
On Mon, Nov 28, 2011 at 11:59 AM, Husain, Yavar yhus...@firstam.com wrote:
Hi Ian
I downloaded and build latest Solr (3.4) from sources and finally hit
following line of code in Solr (where I put my debug statement) :
if (url != null) {
LOG.info("Yavar: getting handle
Aha! That sounds like it might be it!
On Mon, Nov 28, 2011 at 4:16 PM, Husain, Yavar yhus...@firstam.com wrote:
Thanks Kai for sharing this. Ian encountered the same problem so marking him
in the mail too.
From: Kai Gülzau [kguel...@novomind.com
in the tomcat catalina log is:
org.apache.solr.handler.dataimport.JdbcDataSource$1 call
INFO: Creating a connection for entity data with URL:
jdbc:sqlserver://localhost;databaseName=CATLive
--
Ian
i...@isfluent.com
+44 (0)1223 257903
advice on how to diagnose would be appreciated!
On Fri, Nov 25, 2011 at 12:29 PM, Ian Grainger i...@isfluent.com wrote:
Hi I have copied my Solr config from a working Windows server to a new
one, and it can't seem to run an import.
They're both using win server 2008 and SQL 2008R2. This is the data
results.
- I have also asked this question on StackOverflow, here:
http://stackoverflow.com/questions/7905756/solr-3-4-group-truncate-does-not-work-with-facet-queries
Thanks!
--
Ian
i...@isfluent.com a...@endissolutions.com
+44 (0)1223 257903
-not-work-with-facet-queries
I'll
accept your answer.
On Fri, Oct 28, 2011 at 12:14 PM, Martijn v Groningen
martijn.v.gronin...@gmail.com wrote:
Hi Ian,
I think this is a bug. After looking into the code the facet.query
feature doesn't take into account the group.truncate option.
This needs
This turned out to be a missing SolrDeletionPolicy in the configuration.
Once the slaves had a SolrDeletionPolicy, they stopped growing out of
control.
Ian.
On Wed, Aug 17, 2011 at 8:46 AM, Ian Connor ian.con...@gmail.com wrote:
Hi,
We have noticed that many index.* directories
this to see if we can find out how to
reproduce it or at least the conditions that tend to reproduce it.
--
Regards,
Ian Connor
1 Leighton St #723
Cambridge, MA 02141
Call Center Phone: +1 (714) 239 3875 (24 hrs)
Fax: +1(770) 818 5697
Skype: ian.connor
14, 2011, at 11:34 , Ian Connor wrote:
It is nothing special - just like this:
conn = Solr::Connection.new("http://#{LOCAL_SHARD}",
{:timeout => 1000, :autocommit => :on})
options[:shards] = HA_SHARDS
response = conn.query(query, options)
Where LOCAL_SHARD points
is an
array of 18 shards (via haproxy).
Ian.
On Mon, Aug 8, 2011 at 12:50 PM, Erik Hatcher erik.hatc...@gmail.comwrote:
Ian -
What does your solr-ruby using code look like?
Solr::Connection is light-weight, so you could just construct a new one of
those for each request. Are you keeping
a new
one inside of the connection or is something more serious going on?
ubuntu 10.04
passenger 3.0.8
rails 2.3.11
--
Regards,
Ian Connor
?
If I can't store it as a multi-value, I could create a schema where I put each
document into a unique field, but I'm not sure how to create the query to
search all the fields.
Regards
Ian
between a multi-valued field and storing all the data in a
single field
as far as relevance calculations are concerned.
so.. it will suck regardless.. I thought we had per-field relevance in the
current trunk. :-(
Best
Erick
On Tue, May 31, 2011 at 11:02 AM, Ian Holsman had...@holsman.net
, May 31, 2011 at 12:16 PM, Ian Holsman had...@holsman.net wrote:
On May 31, 2011, at 12:11 PM, Erick Erickson wrote:
Can you explain the use-case a bit more here? Especially the post-query
processing and how you expect the multiple documents to help here.
we have a collection of related
I have a bunch of documents representing points of interest indexed in Solr.
I'm trying to boost the score of documents based on distance from an origin
point, and having some difficulty.
I'm currently using the standard query parser and sending in this query:
(name:sushi OR tags:sushi OR
Has there been any progress on this or tools people might use to capture the
average or 90% time for the last hour?
That would allow us to better match up slowness with other metrics like
CPU/IO/Memory to find bottlenecks in the system.
Thanks,
Ian.
On Wed, Mar 31, 2010 at 9:13 PM, Chris
to the search engine as opposed to a human user)
Thanks, Ian
, but you can get
smaller sets.
Actually, I do heavy analysis of the entire wikipedia, plus 1m top webpages
from Alexa, and all of dmoz url's, in order to build the semantic engine in
the first place. However, an outside corpus is required to test its
quality outside of this space.
Cheers, Ian
in 09, and it includes a number of hard drives which
are shipped to you. Any crawl that would be available as an Amazon Public
Dataset would be totally perfect.
Ian
of the top 10M or top 100M page-ranked URL's in the world.
Short of using Nutch to crawl the entire web and build this page-rank, is
there any other ways? What other ways or resources might be available for
me to get this (smaller) corpus of top webpages?
Thanks, Ian
to catch it up).
Ian.
On Wed, Jul 14, 2010 at 9:22 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: I have found that this search crashes:
:
: /solr/select?q=*%3A*&fq=&start=0&rows=1&fl=id
Ouch .. that exception is kind of hairy. it suggests that your index may
have been corrupted
)
at
org.apache.solr.search.SolrIndexReader.document(SolrIndexReader.java:259)
but this one works:
/solr/select?q=*%3A*&fq=&start=1&rows=1&fl=id
It looks like just that first document is bad. I am happy to delete it - but
not sure how to get to it. Does anyone know how to find it?
- Ian
Been testing nutch to crawl for solr and I was wondering if anyone had
already worked on a system for getting the urls out of solr and generating
an XML sitemap for Google.
the tokenizer kicked in, unfortunately Solr just refused to allow sorting on
anything tokenized with characters other than whitespace.
Cheers, Ian.
-Original Message-
From: MitchK [mailto:mitc...@web.de]
Sent: 07 March 2010 22:44
To: solr-user@lucene.apache.org
Subject: Re: Handling and sorting
Forgive what might seem like a newbie question but am struggling desperately
with this.
We have a dynamic field that holds email address and we'd like to be able to
sort by it, obviously when trying to do this we get an error as it thinks
the email address is a tokenized field. We've tried a
I just saw this on twitter, and thought you guys would be interested.. I
haven't tried it, but it looks interesting.
http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin
Thanks for the RT Shalin!
On 2/24/10 8:42 AM, Grant Ingersoll wrote:
What would it be?
most of this will be coming in 1.5,
but for me it's
- sharding.. it still seems a bit clunky
secondly.. this one isn't in 1.5.
I'd like to be able to find interesting terms that appear in my result
set that don't appear in the
Just wanted to give an update on my efforts.
I installed the Feb. 26 update this morning. Was able to access /solr/admin.
Copied over the nutch schema.xml. restarted solr and was able to access
/solr/admin
Edited solrconfig.xml to add the nutch requesthandler snippet from
lucidimagination.
Hi everyone,
Last night I was able to get solr up and running. Ran and was able to
access:
http://localhost:8983/solr/admin
This morning, I started on the nutch crawling instructions over at:
http://www.lucidimagination.com/blog/2009/03/09/nutch-solr/
After adding the following to
problem while
still allowing failover for the shard requests.
Even after 1.5, I would then still advocate haproxy between ruby (or your
http stack) and solr. It is just when Solr is sharding the request it can
keep its connections open and save some resources here.
Ian.
On Thu, Feb 11, 2010 at 11:49
) | 200 OK [
http://localhost:3000/search?q=nik+gene+cluster&view=2]
Has anyone done such a plug-in or extension already?
--
Regards,
Ian Connor
This seems to allow you to log each query - which is a good start.
I was thinking of something that would add all the ms together and report it
in the completed at line so you can get a higher level view of which
requests take the time and where.
Ian.
On Thu, Feb 11, 2010 at 1:13 PM, Mat Brown
...
On Thu, Feb 11, 2010 at 13:22, Ian Connor ian.con...@gmail.com wrote:
This seems to allow you to log each query - which is a good start.
I was thinking of something that would add all the ms together and report
it
in the completed at line so you can get a higher level view of which
requests
Thanks,
I bypassed haproxy as a test and it did reduce the number of connections -
but it did not seem as though these connections were hurting anything.
Ian.
On Tue, Feb 9, 2010 at 11:01 PM, Lance Norskog goks...@gmail.com wrote:
This goes through the Apache Commons HTTP client library:
http
...
Digging a little into the haproxy documentation, it seems that they do not
support persistent connections.
Does solr normally persist the connections between shards (would this
problem happen even without haproxy)?
Ian.
a shard goes down, an error is
returned and the search fails, is there a way to avoid the error and
return the results from the shards that are still up?
thx much
--joe
--
Regards,
Ian Connor
Can anyone think of a reason why these locks would hang around for more than
2 hours?
I have been monitoring them and they look like they are very short lived.
On Tue, Jan 26, 2010 at 10:15 AM, Ian Connor ian.con...@gmail.com wrote:
We traced one of the lock files, and it had been around for 3
We traced one of the lock files, and it had been around for 3 hours. A
restart removed it - but is 3 hours normal for one of these locks?
Ian.
On Mon, Jan 25, 2010 at 4:14 PM, mike anderson saidthero...@gmail.comwrote:
I am getting this exception as well, but disk space is not my problem. What
/index/
to actually *fix* the index, add the -fix argument
java -cp lucene-core-2.9-dev.jar org.apache.lucene.index.CheckIndex -fix
/path/to/solr/data/index/
hope that helps,
-Ian
On 1/8/10 2:09 PM, Giovanni Fernandez-Kincade wrote:
I've seen many mentions of the Lucene CheckIndex tool, but where
On 1/5/10 12:46 AM, Shalin Shekhar Mangar wrote:
sitename:XYZ OR sitename:"All Sites") AND (localeid:1237400589415) AND
((assettype:Gallery)) AND (rbcategory:"ABC XYZ") AND (startdate:[* TO
2009-12-07T23:59:00Z] AND enddate:[2009-12-07T00:00:00Z TO
*])&rows=9&start=63&sort=date
would need to use a external field and a custom scoring function to
do something like this.
regards
Ian
Thanks,
On Thu, Dec 17, 2009 at 7:50 PM, Paul Libbrechtp...@activemath.org wrote:
What can it mean to adapt to user clicks ? Quite many things in my head.
Do you have maybe a citation
OK thanks for the reply, fortunately we have now found an approach which
avoids storing the field. It would be nice to be able to search for
dynamic fields in a way which is consistent with their definition,
although I suppose there probably isn't demand for this.
Regards,
Ian.
-Original
, or a fl=-FIELDNAME query parameter to remove the fixed field.
Is such a feature planned, or is there a workaround that I have missed?
Regards,
Ian.
we don't seem to be talking about the
same thing . . . but thanks anyway,
Ian.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 30 November 2009 23:05
To: solr-user@lucene.apache.org
Subject: RE: schema-based Index-time field boosting
: I am talking
I believe you need to use the fq parameter with dismax (not to be confused
with qf) to add a filter query in addition to the q parameter.
So your text search value goes in q parameter (which searches on the fields
you configure) and the rest of the query goes in the fq.
Would that work?
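To make that concrete, a dismax request with the text search in q and the structured restriction in fq might be assembled like this (the field names are invented):

```python
from urllib.parse import urlencode

params = {
    "defType": "dismax",
    "q": "sushi",                 # free-text part, searched over the qf fields
    "fq": "category:restaurant",  # structured filter, cached independently of q
    "qf": "name tags",
}
query_string = urlencode(params)  # append to /solr/select?
```

Keeping the structured part in fq also means it hits the filter cache instead of being re-scored on every query.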
On Thu,
. Regards,
Ian.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 23 November 2009 18:34
To: solr-user@lucene.apache.org
Subject: RE: schema-based Index-time field boosting
: Yeah, like I said, I was mistaken about setting field boost in
: schema.xml - doesn't
, make sure you have the tdouble types declared with
<fieldType name="tdouble" class="solr.TrieDoubleField"
precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
in your types section.
HTH
Ian.
2009/11/21 Bertie Shen bertie.s...@gmail.com:
Hey everyone,
I used localsolr and locallucene to do
this might not work in practice?
Regards,
Ian.
-Original Message-
From: Smiley, David W. [mailto:dsmi...@mitre.org]
Sent: 19 November 2009 19:29
To: solr-user@lucene.apache.org
Subject: Re: Index-time field boosting not working?
Hi Ian. Thanks for buying my book.
The boost attribute goes
the
encoding is not stripped away, so it is still present in search
responses.
Is there a way to pass literal values containing non-URL safe characters
to Solr Cell?
Regards,
Ian.
Web design and intelligent Content Management. www.twitter.com/gossinteractive
Registered Office: c/o Bishop Fleming, Cobourg
Sorry guys, the bad request seemed to be caused elsewhere, no need to
URL encode now.
Ian.
-Original Message-
From: Ian Smith [mailto:ian.sm...@gossinteractive.com]
Sent: 20 November 2009 15:26
To: solr-user@lucene.apache.org
Subject: Solr Cell text extraction
Hi Guys,
I am trying
:(
If you or anyone else here has any historical perspective on this, I'd
be interested to hear about it.
Regards,
Ian,
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: 18 November 2009 22:55
To: solr-user@lucene.apache.org
Subject: Re: Index-time field
seen no debug output during startup which would indicate
that field boosting is being configured - should there be any?
I have found no usage examples of this in the Solr 1.4 book, except a
vague discouragement - is this a deprecated feature?
TIA,
Ian
Web design and intelligent Content Management
to the final release when it's available.
Just a thought, cheers for all your hard work.
Ian.
2009/11/2 Ryan McKinley ryan...@gmail.com
On Nov 2, 2009, at 8:29 AM, Grant Ingersoll wrote:
On Nov 2, 2009, at 12:12 AM, Licinio Fernández Maurelo wrote:
Hi folks,
as we are using a snapshot
this I'd really
appreciate any help.
Cheers all, hope this is of use to someone out there, if anyone has
corrections/comments I'd really appreciate any info.
Best,
Ian.
: User -> HTTP Load Balancer -> Mongrel Cluster ->
Haproxy -> N x Solr Shards
and it looks like that is the standard setup for performance from what you
suggest here and most of the performance tweaks I thought of are already in
use.
Ian.
On Fri, Sep 18, 2009 at 3:09 AM, Erik Hatcher erik.hatc