Hi users
I get a very weird problem with Solr 4.6.
I just want to reload a core:
http://10.7.23.125:8080/solr/admin/cores?action=RELOAD&core=reportCore_201210_r1
However it gives an exception [1]. According to the exception, the SolrCore
'collection1' does not exist. I created a default core not with
*something* is still referring to collection1. Have you tried searching
through your SOLR_HOME dir for any references to collection1?
Upayavira
On Mon, Dec 23, 2013, at 08:44 AM, YouPeng Yang wrote:
Hi users
I get a very weird problem with Solr 4.6.
I just want to reload a core:
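For reference, the same RELOAD can also be issued from SolrJ rather than over plain HTTP. A minimal sketch (the host and core name are taken from the post above; error handling omitted):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class ReloadCore {
    public static void main(String[] args) throws Exception {
        // Point at the Solr webapp root, not at an individual core
        HttpSolrServer server = new HttpSolrServer("http://10.7.23.125:8080/solr");
        // Equivalent to /solr/admin/cores?action=RELOAD&core=reportCore_201210_r1
        CoreAdminResponse rsp = CoreAdminRequest.reloadCore("reportCore_201210_r1", server);
        System.out.println("Reload status: " + rsp.getStatus());
        server.shutdown();
    }
}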
On 12/22/2013 09:48 PM, Shawn Heisey wrote:
On 12/22/2013 2:10 PM, David Santamauro wrote:
My goal is to have a redundant copy of all 8 currently running, but
non-redundant shards. This setup (8 nodes with no replicas) was a test
and it has proven quite functional from a performance
Thank you all, guys, for the responses.
Hi Guys,
Following the article
http://yonik.com/posts/advanced-filter-caching-in-solr/ about advanced
filter caching, and a few examples of how to implement a custom PostFilter in Solr, I
implemented my own class that extends ExtendedQueryBase and implements
PostFilter.
All filtering functionality
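A bare-bones skeleton of that kind of PostFilter (the class and method names here are placeholders, not the poster's actual code) might look roughly like this:

import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.ExtendedQueryBase;
import org.apache.solr.search.PostFilter;

public class MyPostFilter extends ExtendedQueryBase implements PostFilter {

    @Override
    public boolean getCache() {
        return false;                            // post filters are not cached
    }

    @Override
    public int getCost() {
        return Math.max(super.getCost(), 100);   // cost >= 100 runs after the main query
    }

    @Override
    public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
        return new DelegatingCollector() {
            @Override
            public void collect(int doc) throws IOException {
                if (passesMyCheck(doc)) {        // custom per-document test goes here
                    super.collect(doc);          // only matching docs are passed down the chain
                }
            }
        };
    }

    private boolean passesMyCheck(int doc) {
        return true;                             // placeholder for the real filtering logic
    }
}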
Hi
There are several ways to do it. One way is to create two entities at the
same level and use the entity name to call it.
Request: command=full-import&entity=messages_test
<document>
<entity name="messages_test" query="select * from BLOB_TEST">
...
</entity>
<entity name="messages_test1"
hello
suppose I have this synonym
abxpower => abx power
and suppose you are indexing 'abxpower pipp'
From the analyzer I see that abxpower is split into two words, but the
second word 'power' overlaps the next one
text raw_bytes keyword position start end type positionLength
abxpower [61 62 78 70
Here's the Tomcat 6 SSL HOWTO:
http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html
Generally, Tomcat expects a keystore password of changeit and a key
password that matches the keystore, unless you configure it otherwise. You
can use a different keystore password, but the key and keystore
Hello.
The behaviour we observed was that a zookeeper election took about 2s plus 1.5s
for reading the zoo_data snapshot. During this time solr tried to establish
connections to any zookeeper in the ensemble but only succeeded once a leader
was elected *and* that leader was done reading the
Interesting stuff! This is expected but not really something I have thought
about yet.
Can you file a JIRA issue? I think we want to try and tackle this with code.
We currently reject updates when we lose our connection to ZooKeeper. We
are pretty strict about this. I think you could reasonably
Sure. https://issues.apache.org/jira/i#browse/SOLR-5577 filed. Thanks.
- Original Message -
From: solr-user@lucene.apache.org
To: Christine Poerschke (BLOOMBERG/ LONDON), solr-user@lucene.apache.org
At: Dec 23 2013 18:12:50
Interesting stuff! This is expected but not really something I
Shawn,
I managed to create 8 new cores and the Solr Admin cloud page showed
them wonderfully as active replicas.
The only issue I have is what goes into solr.xml (I'm using Tomcat)?
Putting
<core name="..." />
for each of the new cores I created seemed like a reasonable approach
but when
Hi,
I have a scenario that I think is not unusual: Solr will get a user-entered
query string like 'apple pear france'.
I need to do this: if any of the terms is a country, then change the query
params to move that term to a fq, i.e:
q=apple pear france
to
q=apple pear&fq=country:france
What do
I would suggest handling this in the client. You could also write custom Solr
code, but it would be more complicated because you'd be working with
Solr's APIs.
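A minimal SolrJ sketch of that client-side approach (the country list, field name, and Solr URL here are just assumptions for illustration):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CountryAwareSearch {
    // Hypothetical lookup table; in practice this might come from a gazetteer
    private static final Set<String> COUNTRIES =
            new HashSet<String>(Arrays.asList("france", "germany", "spain"));

    public static void main(String[] args) throws SolrServerException {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        String userInput = "apple pear france";
        StringBuilder q = new StringBuilder();
        SolrQuery query = new SolrQuery();

        // Move any term that is a known country into an fq; keep the rest in q
        for (String term : userInput.trim().split("\\s+")) {
            if (COUNTRIES.contains(term.toLowerCase())) {
                query.addFilterQuery("country:" + term);
            } else {
                if (q.length() > 0) q.append(' ');
                q.append(term);
            }
        }
        query.setQuery(q.length() > 0 ? q.toString() : "*:*");

        QueryResponse rsp = server.query(query);
        System.out.println("Found " + rsp.getResults().getNumFound() + " docs");
    }
}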
Joel Bernstein
Search Engineer at Heliosearch
On Mon, Dec 23, 2013 at 2:36 PM, jmlucjav jmluc...@gmail.com wrote:
Hi,
I have this
Hello,
I'm loading up our SolrCloud with data (from a SolrJ client) and
running into a weird memory issue. I can reliably reproduce the
problem.
- Using Solr Cloud 4.4.0 (also replicated with 4.6.0)
- 24 solr nodes (one shard each), spread across 3 physical hosts, each
host has 256G of memory
Hi Greg,
I have a suspicion that the problem might be related to, or exacerbated by,
overly large tlogs. Can you adjust your autoCommit to 15 seconds? Leave
openSearcher = false. I would remove the maxDocs as well. If you try
rerunning under those commit settings it's possible the OOM errors will stop
Hi Joel,
Could you clarify what would be in the key/value Map added to the
SearchRequest context? It seems that all the docId/score tuples need to be
there, including the ones not in the 'top N ScoreDocs' PriorityQueue
(score=0). If so, would the Map be something like:
Hi Joel,
Thanks for the suggestion. I could see how decreasing autoCommit time
would reduce tlog size, and how that could possibly be related to the
original OOM error. I'm not seeing how that would make any difference
once a tlog exists, though?
I have a saved off copy of my data dir that has
Yes, I'm well aware of the performance implications, many of which are
mitigated by 2TB of SSD and 512GB RAM
I've got a very similar setup in production. 2TB SSD, 256G RAM (128G
heaps), and 1 - 1.5 TB of index per node. We're in the process of
splitting that to multiple JVMs per host. GC
Greg,
There is a memory component to the tlog, which supports realtime gets. This
memory component grows until there is a commit, so it will appear like a
leak. I suspect that replaying a tlog that was big enough to possibly cause
OOM is also problematic.
One thing you might want to try is going
On 12/23/2013 05:03 PM, Greg Preston wrote:
Yes, I'm well aware of the performance implications, many of which are
mitigated by 2TB of SSD and 512GB RAM
I've got a very similar setup in production. 2TB SSD, 256G RAM (128G
heaps), and 1 - 1.5 TB of index per node. We're in the process of
Interesting. In my original post, the memory growth (during restart)
occurs after the tlog is done replaying, but during the merge.
-Greg
On Mon, Dec 23, 2013 at 2:06 PM, Joel Bernstein joels...@gmail.com wrote:
Greg,
There is a memory component to the tlog, which supports realtime gets.
I believe you can just define multiple cores:
<core default="true" instanceDir="shard1/"
      name="collectionName_shard1" shard="shard1"/>
<core default="true" instanceDir="shard2/"
      name="collectionName_shard2" shard="shard2"/>
...
(this is the old style solr.xml. I don't know how to do it in the newer style)
Also,
Peter,
You actually only need the current score being collected to be in the
request context. So you don't need a map, you just need an object wrapper
around a mutable float.
If you have a page size of X, only the top X scores need to be held onto,
because all the other scores wouldn't have made
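To make that concrete, here is a sketch of the kind of wrapper being described, used from inside a PostFilter's DelegatingCollector (the class names and the "scoreHolder" context key are invented for this example):

import java.io.IOException;
import java.util.Map;

import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestInfo;
import org.apache.solr.search.DelegatingCollector;

// Minimal mutable wrapper around a float, as described above.
class ScoreHolder {
    public float score;
}

// A DelegatingCollector that publishes the score of the doc currently being collected.
class ScorePublishingCollector extends DelegatingCollector {
    private final ScoreHolder holder = new ScoreHolder();

    ScorePublishingCollector() {
        // Expose the holder to later components through the request context.
        SolrQueryRequest req = SolrRequestInfo.getRequestInfo().getReq();
        Map<Object, Object> context = req.getContext();
        context.put("scoreHolder", holder);
    }

    @Override
    public void collect(int doc) throws IOException {
        holder.score = scorer.score();  // the current score being collected
        super.collect(doc);             // pass the doc on to the delegate
    }
}

This would be the collector returned from getFilterCollector(); anything that runs later in the same request can read the current score via the "scoreHolder" context entry.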
Yeah, sounds like a leak might be there. Having the huge tlog might have
just magnified its importance.
Joel Bernstein
Search Engineer at Heliosearch
On Mon, Dec 23, 2013 at 5:17 PM, Greg Preston gpres...@marinsoftware.comwrote:
Interesting. In my original post, the memory growth (during
On 12/23/2013 12:23 PM, David Santamauro wrote:
I managed to create 8 new cores and the Solr Admin cloud page showed
them wonderfully as active replicas.
The only issue I have is what goes into solr.xml (I'm using tomcat)?
Putting
<core name="..." />
for each of the new cores I created
Hi users
Solr supports writing and reading its index and transaction log files
to the HDFS distributed filesystem.
I am curious whether there are any other further improvements planned for
the integration with HDFS.
As for Solr native replication, it will make multiple copies of the