Hi,
I have a requirement in which I have to add some fields to the schema at run
time, and after that I need to add copy fields for some of the schema
fields.
To add the fields to the schema I used the following REST API, which gives a
success response in the output, as shown below:
*Post URL:
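The message above is truncated, but for reference, a minimal sketch of run-time Schema API calls of this kind (the host, collection, field names, and copy-field destination below are made-up placeholders, not from the original post):

```python
import json
from urllib import request

# Assumed host and collection name; substitute your own.
SOLR_SCHEMA_URL = "http://localhost:8983/solr/mycollection/schema"

# One command per payload: first add the field, then the copy field.
add_field = {
    "add-field": {
        "name": "title_txt",       # hypothetical field name
        "type": "text_general",
        "indexed": True,
        "stored": True,
    }
}
add_copy_field = {
    "add-copy-field": {"source": "title_txt", "dest": "all_text"}  # hypothetical dest
}

def post_schema(payload):
    """POST one Schema API command and return the parsed JSON response."""
    req = request.Request(
        SOLR_SCHEMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# post_schema(add_field)
# post_schema(add_copy_field)
```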
Yes, that is what we are seeing. Thanks for pointing me to the right issues
to track.
Where can I find out when 4.10 final is going to be released?
Thanks,
Matthias
On Sat, Aug 30, 2014 at 9:26 PM, Erick Erickson erickerick...@gmail.com
wrote:
There have been some recent improvements in that
The release vote has passed, the release packages are spreading out to the
mirrors, and the announcement should appear in the next 12-24 hours.
Steve
www.lucidworks.com
On Sep 2, 2014, at 11:56 PM, Matthias Broecheler m...@matthiasb.com wrote:
Yes, that is what we are seeing. Thanks for
Hello,
We are looking for a solr consultant to help us with our devs using solr.
We've been working on this for a little while, and we feel we need an
expert point of view on what we're doing, who could give us insights about
our solr conf, performance issues, error handling issues (big thing).
I will be out of the office starting 03/09/2014 and will not return until
04/09/2014.
Please email itsta...@actionimages.com for any urgent queries.
Note: This is an automated response to your message "How can I set shard
members?" sent on 9/3/2014 5:00:04.
This is the only notification you
I have Solr installed on Debian, and every time a delta import takes place a
file gets created in my root directory. The files that get created look
like this:
dataimport?command=delta-import.1
dataimport?command=delta-import.2
.
.
.
dataimport?command=delta-import.30
Every time
Hi, all:
I create a collection per day dynamically in my program, like this:
http://lucene.472066.n3.nabble.com/file/n4156601/create1.png
But when I searched with collection=myCollection-20140903, it showed
"Collection not found: myCollection-20140903".
I checked the clusterState
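The screenshot isn't reproduced here, but a per-day CREATE call of the kind described would be built roughly like this (host, shard/replica counts, and the config set name are assumptions, not taken from the original post):

```python
from datetime import date
from urllib.parse import urlencode

# Build the per-day collection name and the Collections API CREATE call.
day = date(2014, 9, 3)
params = {
    "action": "CREATE",
    "name": "myCollection-" + day.strftime("%Y%m%d"),
    "numShards": 1,              # assumed topology
    "replicationFactor": 1,
    "collection.configName": "myconf",  # assumed config set name
}
create_url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
```

One thing worth checking in a setup like this is whether the query is sent before the newly created collection is fully registered in the cluster state.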
Once I upgraded to 4.9.0, the solr.ssl.checkPeerName option was used, and I
was able to create a collection.
I'm still wondering if there is a good way to remove references to any
collections whose creation didn't complete, and to block a collection from
being made with the same name.
Thanks!
-- Chris
On
Don't forget to check out the Solr Support wiki where consultants advertise
their services:
http://wiki.apache.org/solr/Support
And any Solr or Lucene consultants on this mailing list should be sure that
they are registered on that support wiki. Hey, it's free! And be sure to
keep your
On 9/3/2014 3:19 AM, madhav bahuguna wrote:
I have solr installed on Debian and every time delta import takes place a
file gets created in my root directory. The files that get created look
like this
I figure there's one of two possibilities:
1) You've got a misconfiguration in the
Is 'dataimport?command=delta-import.1' actually a file name? If this is
the case, are you running the trigger from a cron job or similar? If I
am still on the right track, check your cron job/script and see if you
have a misplaced newline, quote (e.g. an MS Word quote instead of a normal
one) or some other
Thanks Erick and Diego. Yes, I noticed in my last message I'm not
actually using defaults, not sure why I chose non-defaults originally.
I still need to find time to make a smaller isolation/reproduction case,
I'm getting confusing results that suggest some other part of my field
def may be
: I have solr installed on Debian and every time delta import takes place a
: file gets created in my root directory. The files that get created look
: like this
:
:
: dataimport?command=delta-import.1
that is exactly the output you would expect to see if you have a cron
somewhere, running
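A plausible explanation (a guess, not confirmed by the thread): a cron line using wget without an output flag saves each response to a file named after the URL, producing exactly those `dataimport?command=delta-import.N` files. Either add `-q -O /dev/null` to the wget call, or trigger the import from a small script so nothing is written to disk. A sketch, with an assumed host and core name:

```python
from urllib import request

# Assumed host and core name; substitute your own DIH endpoint.
DELTA_URL = "http://localhost:8983/solr/mycore/dataimport?command=delta-import"

def trigger_delta_import():
    """Fire the delta import; the response is read into memory, not a file."""
    with request.urlopen(DELTA_URL) as resp:
        return resp.read()
```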
Hi,
I use the below highlight search component in one of my request handlers.
I am trying to figure out a way to change the value of the highlight search
component dynamically from the query. Is it possible to modify the
parameters dynamically using the query (without creating another
IIRC, Lucene in Action describes http://rdelbru.github.io/SIREn/ in one
of the appendixes. I know that they spoke at LuceneRevolution recently.
That's all I know.
On Wed, Sep 3, 2014 at 2:40 PM, Pragati Meena pme...@bostonanalytics.com
wrote:
Hi,
I want to index an RDF/XML document into
Thanks a lot for your answers.
Best regards,
Elisabeth
2014-09-03 17:18 GMT+02:00 Jack Krupansky j...@basetechnology.com:
Don't forget to check out the Solr Support wiki where consultants
advertise their services:
http://wiki.apache.org/solr/Support
And any Solr or Lucene consultants on
Hi,
I need to change the components (inside a request handler) dynamically using
query parameters instead of creating multiple request handlers. Is it
possible to do this on the fly from the query?
For Ex:
change the highlight search component to use a different search component
based on a query
We have a SolrCloud instance with 2 Solr nodes and a 3-node ZK ensemble. One
of the Solr nodes goes down as soon as we send search traffic to it, but
updates work fine.
When I analyzed the thread dump I saw a lot of blocked threads with the
following error message. This explains why it couldn't create any native
Forgot to add the source thread that's blocking every other thread:
http-bio-52158-exec-61 - Thread t@591
java.lang.Thread.State: RUNNABLE
at
org.apache.lucene.search.FieldCacheImpl$Uninvert.uninvert(FieldCacheImpl.java:312)
at
Hi,
You can skip certain components. Every component has a name; if you set its
name to false, it is skipped. Example: facet=false or query=false.
But you cannot change their order; you need a custom RequestHandler for that.
Ahmet
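Ahmet's suggestion would look something like this on the request (the core name, handler, and field values are placeholders for illustration):

```python
from urllib.parse import urlencode

# Highlighting stays on while faceting is skipped for this one request.
params = {"q": "*:*", "hl": "true", "hl.fl": "title", "facet": "false"}
select_url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
```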
On Wednesday, September 3, 2014 10:12 PM, bbarani
Jonathan:
If at all possible, delete your collection/data directory (the whole
directory, including data) between runs after you've changed
your schema (at least any of your analysis that pertains to indexing).
Mixing old and new schema definitions can add to the confusion!
Good luck!
Erick
On
Depends on which ones. Any parameter in the defaults section
can be overridden dynamically, e.g.
&hl.bs.language=fr
Best,
Erick
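As an illustration of that override (the core name and query field below are made up), appending the parameter to the request URL takes precedence over the handler's defaults:

```python
from urllib.parse import urlencode

# Per-request override of a value from the handler's defaults section.
override = {"q": "text:solr", "hl": "true", "hl.bs.language": "fr"}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(override)
```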
On Wed, Sep 3, 2014 at 10:38 AM, bbarani bbar...@gmail.com wrote:
Hi,
I use the below highlight search component in one of my request handler.
I am trying
Do you have indexing traffic going to it? B/c this _looks_
like the node is just starting up or a searcher is
being opened and you're loading your
index for the first time. This happens when you index data and
when you start up your nodes. Adding some autowarming
(firstSearcher in this case) might load up
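A hedged solrconfig.xml sketch of the firstSearcher warming Erick mentions (the warming query and sort field here are invented examples; use queries that exercise your own sorts and facets):

```xml
<!-- Warm the first searcher so initial queries don't pay the
     FieldCache un-inversion cost up front. -->
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="sort">price_f asc</str> <!-- hypothetical sort field -->
    </lst>
  </arr>
</listener>
```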
3 September 2014, Apache Lucene™ 4.10.0 available
The Lucene PMC is pleased to announce the release of Apache Lucene 4.10.0.
Apache Lucene is a high-performance, full-featured text search engine
library written entirely in Java. It is a technology suitable for nearly
any application that requires
3 September 2014, Apache Solr™ 4.10.0 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.10.0.
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting,
I'm confused, wondering if it's a mismatch between the docs and the
intent or just a bug or whether I'm just not understanding the point:
The DELETEREPLICA docs say:
Delete a replica from a given collection and shard. If the
corresponding core is up and running the core is unloaded and the
entry
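The quote is cut off above, but for orientation, the call those docs describe is shaped like this (the collection, shard, and replica names are placeholders):

```python
from urllib.parse import urlencode

# Collections API DELETEREPLICA call; all names below are made up.
params = {
    "action": "DELETEREPLICA",
    "collection": "mycollection",
    "shard": "shard1",
    "replica": "core_node2",
}
delete_url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
```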
I have a Solr server that indexes 2500 documents (up to 50MB each, avg 3MB).
When running Solr 4.0 I managed to finish indexing in 3 hours.
However, after we upgraded to Solr 4.9, the indexing needs 3 days to finish.
I've done some profiling; the numbers I get are:
size figure of document,
Erick,
It is just one shard. Indexing traffic is going to the other node and then
synced with this one (both are part of the cloud). We kept that setting
running for 5 days, as the defective node would just go down with search
traffic. So both were in sync when search was turned on. Soft commit is
Hmmm, I'm puzzled then. I'm guessing that the node
that keeps going down is the follower, which means
it should have _less_ work to do than the node that
stays up. Not a lot less, but less still.
I'd try lengthening out my commit interval. I realize you've
set it to 2 seconds for a reason, this