Hi,
I created a ticket and tried to describe it there:
https://issues.apache.org/jira/browse/SOLR-4471
Search speed, RAM and memory usage on Solr 4.x compared with 3.6 actually
look good; only the network is saturated by the slave's full index copy.
André
On 16.02.13 03:25, "Mark Miller" wrote:
>For
Hi All
I have a use case where I have a list of words on which I don't want to
perform spellcheck, much like stemming ignores the words listed in the
protwords.txt file.
Any idea how this can be solved?
Thanks
Hemant
--
View this message in context:
http://lucene.472066.n3.nabble.com/SpellCheck-Ignore
Hi
I have some questions about tlog files and how they are managed.
I'm using DIH to do incremental data loading; once a day I do a full
refresh.
these are the request parameters
/dataimport?command=full-import&commit=true
/dataimport?command=delta-import&commit=true&optimize=false
I was expect
Hi,
By default SolrCloud partitions records by the hash of the uniqueKey field, but
we want to run some tests and partition the records by a signed integer field
while keeping the current uniqueKey unique. I've scanned through several issues
concerning distributed indexing, custom hashing, shard policies
Hi!
Although more than a year has passed, could I ask you, Parvin, what your
final approach was?
I have to deal with a similar problem
(http://lucene.472066.n3.nabble.com/Combining-Solr-score-with-customized-user-ratings-for-a-document-td4040200.html),
maybe a bit more difficult because it's a by
Chris, Mihhail,
I'd like to avoid issuing a query and spare the cycles. In SOLR-4280 I only
look for the smallest DocSet by iterating over them. I would tend to think it's
cheaper than getDocSet() and perhaps cacheDocSet().
In case i would add non-usercaches to the cacheMap and create a separa
Hi,
I was able to implement custom hashing using a "_shard_" field. It
contains the name of the shard a document should go to. Works fine. Maybe
there's some other method to do the same with the use of solrconfig.xml,
but I have not found any docs about it so far.
Regards.
On 18 February 20
Hi,
I've got a problem. The problem is I have this JSON file:
[
  { "id": "5", "is_good": { "add": "1" } },
  { "id": "1", "is_good": { "add": "1" } },
  { "id": "2", "is_good": { "add": "1" } },
  { "id": "3", "is_good": { "add": "1" } }
]
now due to stopping o
On 2/18/2013 4:57 AM, giovanni.bricc...@banzai.it wrote:
I have some questions about tlog files and how are managed.
I'm using dih to do incremental data loading, once a day I do a full
refresh.
these are the request parameters
/dataimport?command=full-import&commit=true
/dataimport?command=d
I am seeing the following error in my Admin console and the core/ cloud status
is taking forever to load.
SEVERE: RecoveryStrategy: Recovery failed - trying again... (9)
What causes this and how can I recover from this mode?
Regards,
Rohit
The 4.x-based spellcheck process just looks in the index and enumerates the
terms; there's no special "sidecar" index. So you'd probably have to create
a different field that contained only the words you wanted to be returned
as possibilities
Best
Erick
On Mon, Feb 18, 2013 at 5:06 AM, Heman
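A sketch of what Erick describes, with made-up field, type, and file names: populate a dedicated spellcheck field via copyField, and drop the unwanted words with a stopword-style filter so they are never enumerated as suggestions.

```xml
<!-- Hypothetical schema.xml sketch: a dedicated spellcheck source field.
     Words listed in spell_exclusions.txt never enter the field, so the
     index-based spellchecker can never suggest them. -->
<fieldType name="text_spell" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="spell_exclusions.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>

<field name="spell" type="text_spell" indexed="true" stored="false"/>
<copyField source="text" dest="spell"/>
```

Then point the spellcheck component's field at "spell" instead of the original field.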
Hi,
I'm running SolrCloud (Solr4) with 1 core, 8 shards and zookeeper
My index is being updated every minute, so I'm running optimization once a
day.
Every time during the optimization there is an error:
SEVERE: shard update error StdNode: http://host:port/solr/core_name/
SEVERE: shard update erro
We need to see more of your logs to determine why - there should be some
exceptions logged.
- Mark
On Feb 18, 2013, at 9:47 AM, Cool Techi wrote:
> I am seeing the following error in my Admin console and the core/ cloud
> status is taking forever to load.
>
> SEVERERecoveryStrategyRe
Yeah, I think we are missing some docs on this…
I think the info is in here: https://issues.apache.org/jira/browse/SOLR-2592
But it's not so easy to pick out - I'd been considering going through and
writing up some wiki doc for that feature (unless I'm somehow missing it), but
just been too bus
Not sure - any other errors? An optimize once a day is a very heavy operation
by the way! Be sure the gains are worth the pain you pay.
- Mark
On Feb 18, 2013, at 10:04 AM, adm1n wrote:
> Hi,
>
> I'm running SolrCloud (Solr4) with 1 core, 8 shards and zookeeper
> My index is being updated eve
Look at HTMLStripCharFilter, which accepts HTML as its source text: it
preserves all the HTML tags in the stored value, but strips off the
HTML tags for tokenization into terms. So, you can search for the actual
text terms, but the HTML will still be in the returned field value for
high
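A minimal field type along these lines might look like the following (a sketch; the type name is made up):

```xml
<!-- Sketch: HTML is stripped before tokenization, but the stored
     value keeps the original markup for display/highlighting. -->
<fieldType name="text_html" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```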
1. Create a copy of the field and add the exception list to it.
2. Or, add a second spell checker to your spellcheck search component that
is a FileBasedSpellChecker with the exceptions in a simple text file. Then
reference both spellcheckers with spellcheck.dictionary, with the
FileBasedSpell
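Option 2 could be sketched in solrconfig.xml roughly as follows (dictionary names, field name, and file location are hypothetical):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <!-- index-based checker -->
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
  </lst>
  <!-- file-based checker holding the exception words, one per line -->
  <lst name="spellchecker">
    <str name="name">file</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="sourceLocation">spellings.txt</str>
    <str name="characterEncoding">UTF-8</str>
  </lst>
</searchComponent>
```

At query time you would then pass spellcheck.dictionary=default and spellcheck.dictionary=file to consult both.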
I think it's best to tweak merge parameters instead and amortize the cost of
keeping down the number of segments. Deletes will be naturally expunged as
documents come in and segments are merged. For 90% of use cases, this is the
best way to go IMO. Even if you just want to get rid of deletes, lo
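The merge tuning mentioned above lives under indexConfig in solrconfig.xml; a sketch with illustrative values only:

```xml
<!-- Illustrative values only: fewer segments per tier means more
     aggressive merging, which also expunges deletes sooner. -->
<indexConfig>
  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <int name="maxMergeAtOnce">10</int>
    <int name="segmentsPerTier">10</int>
  </mergePolicy>
</indexConfig>
```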
Use "set" instead of "add".
See:
http://wiki.apache.org/solr/UpdateJSON#Atomic_Updates
-- Jack Krupansky
-Original Message-
From: anurag.jain
Sent: Monday, February 18, 2013 6:09 AM
To: solr-user@lucene.apache.org
Subject: Re: Updating data
Hi,
i got a problem.
problem is i have js
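The difference can be sketched in a few lines of Python (nothing Solr-specific here, just rewriting the JSON from the earlier message so each "add" becomes a "set"):

```python
import json

# Documents as posted in the original message: each uses "add",
# which appends to a multivalued field instead of replacing.
docs = [{"id": i, "is_good": {"add": "1"}} for i in ("5", "1", "2", "3")]

# An atomic update with "set" replaces the stored value outright.
fixed = [{"id": d["id"], "is_good": {"set": d["is_good"]["add"]}} for d in docs]

payload = json.dumps(fixed)
```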
I am replying to this post because I am also facing a very similar issue.
I am indexing the documents stored in a blob field of a MySQL database. I
have described the whole setup in the following blog post:
http://tuxdna.wordpress.com/2013/02/04/indexing-the-documents-stored-in-a-database-using-a
When trying to use SolrEntityProcessor to do a data import from another Solr
index (Solr 4.1),
I added the following in solrconfig.xml:
data-config.xml
and created a new file data-config.xml with
http://wolf:1Xnbdoq@myserver:8995/solr/"; query="*:*"
fl="id,md5_text,title,text
Thanks Eric,
is this what you are pointing me to?
http://.../solr/select?q=if(exist(title.3),(title.3:"xyz"),(title.0:"xyz"))
I believe I should be able to use boost along with proximity too.
Found it myself. It's here:
http://mirrors.ibiblio.org/maven2/org/apache/solr/solr-dataimporthandler/4.1.0/
Download and move the jar file to solr-webapp/webapp/WEB-INF/lib directory,
and the errors are all gone.
Ming
On Mon, Feb 18, 2013 at 11:52 AM, Mingfeng Yang wrote:
> When trying to u
I hope my question is somewhat relevant to the discussion.
I'm relatively new to zk/SolrCloud, and I have a new environment configured
with a ZK ensemble (3 nodes) running with SolrCloud. Things are running,
yet I'm puzzled since I can't find the Solr config data on the zookeeper nodes.
What is the
@Marcin - Maybe I misunderstood your process but I don't think you
need to reload the collection on each node if you use the expanded
collections admin API, i.e. the following will propagate the reload
across your cluster for you:
http://localhost:8983/solr/admin/collections?action=RELOAD&name=my
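For reference, the full request shape can be built like this (host and collection name are hypothetical; the Collections API propagates the reload to every node hosting the collection):

```python
from urllib.parse import urlencode

# Hypothetical host and collection name.
base = "http://localhost:8983/solr/admin/collections"
url = base + "?" + urlencode({"action": "RELOAD", "name": "mycollection"})
```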
Hey all,
I feel having to unload the leader core to force an election is "hacky", and as
far as I know it would still leave which node becomes the Leader to chance, i.e. I
cannot guarantee "NodeX" becomes Leader 100% of the time.
Also, this imposes additional load temporarily.
Is there a way to fo
In Solr 3.6.1, using a text_ja field generated a huge number of results, which
degraded performance significantly. Queries that were taking 15ms
have gone up to 400ms. The other issue is that it is not honoring the rows
parameter: the output results are not capped by the number of documents
requested
I'm relatively new to zk/SolrCloud, and I have a new environment configured
with a ZK ensemble (3 nodes) running with SolrCloud. Things are running,
yet I'm puzzled since I can't find the Solr config data on the zookeeper nodes.
What is the default location?
Thank you in advance!
/michael
/configs/collectionName
You should be able to see this from the Solr admin console as well:
Cloud > Tree > configs > collectionName
Cheers,
Tim
On Mon, Feb 18, 2013 at 4:23 PM, mshirman wrote:
>
> I'm relatively new to zk/SolrCloud, and I have new environment configured
> with an ZK ensemble (
Maybe you need to turn on autoGeneratePhraseQueries=true on your field type.
And turn on &debugQuery=true on your query to see what actually gets
generated.
Show us a typical query - the &rows parameter should always work, unless
it's written wrong.
-- Jack Krupansky
-Original Message--
There is no error I can see in the logs. My shards are divided over three
machines; the cloud runs fine when I don't bring up one of the nodes, but the
moment I start that particular node, the cloud stops responding.
Feb 19, 2013 5:22:22 AM
org.apache.solr.handler.component.SpellCheckComponent$Spe
Hi all,
I have a JSON file in which there is a field named last_login, and the value of
that field is a timestamp.
I want to store that value as a timestamp; I do not want to change the field type.
Now the question is how to store the timestamp so that when I need output in
datetime format it gives datetime format and wh
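If the stored value is a Unix epoch timestamp, converting it to and from Solr's datetime format in client code could look like this (a sketch; the sample value is made up):

```python
from datetime import datetime, timezone

# Hypothetical last_login value: seconds since the Unix epoch.
last_login = 1361145600

# Solr date fields use ISO 8601 in UTC with a trailing 'Z'.
solr_date = datetime.fromtimestamp(last_login, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# And back again, from the Solr string to an epoch timestamp.
epoch = int(
    datetime.strptime(solr_date, "%Y-%m-%dT%H:%M:%SZ")
    .replace(tzinfo=timezone.utc)
    .timestamp()
)
```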
Thank you for replying, sir!
I have two queries related to this:
1) In this case, which request handler do I have to use? 'ExtractingRequestHandler'
by default strips the HTML content, and the
default handler 'UpdateRequestHandler' does not accept HTML contents.
2) How can I 'Ext
Use the standard update handler and pass the entire HTML page as literal
text in a Solr XML document for the field that has the HTML strip filter,
but be sure to escape the HTML (angle brackets, ampersands, etc.) syntax.
You'll have to process meta information yourself.
-- Jack Krupansky
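Escaping the page before embedding it in the Solr XML update document might look like this (the field name and sample page are hypothetical):

```python
from html import escape

html_page = '<html><body><b>Hello</b> &amp; welcome</body></html>'

# Angle brackets and ampersands must be escaped so the page can be
# carried as literal text inside a Solr XML update document.
doc = (
    '<add><doc>'
    f'<field name="content_html">{escape(html_page)}</field>'
    '</doc></add>'
)
```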
Should I log a defect in Jira for this?
Ari Maniatis
On 14/02/13 6:50pm, Aristedes Maniatis wrote:
I'm trying to monitor the state of a master-slave Solr4.1 cluster. I can easily
get the generation number of the slaves using JMX like this:
solr/{corename}/org.apache.solr.handler.Replica