Hi,
there has been a JIRA issue[0] open for a long time that contains some patches
for multiple releases of Solr implementing this functionality. Whether those
patches still work in recent versions is a different question, and the
issue has been resolved as Won't Fix.
Personally, I think starting
Hi GW,
It would be great if you could elaborate on, or provide pointers to, scenarios
that can cause this issue with respect to the commit problem.
Regards,
Prateek Jain
-----Original Message-----
From: GW [mailto:thegeofo...@gmail.com]
Sent: 21 November 2016 01:56 PM
To: solr-user@lucene.apache.org
Subject:
I agree that dispatching multiple queries is better.
With multiple queries, we need to deal with multiple result codes, multiple
timeouts, and so on. Then write tests for all that stuff.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 21,
Help,
(Solr 6.3)
Trying to do a "sub-facet" using the new json faceting API, but can't
seem to figure out how to get the "max" date in the subfacet?
I've tried a couple of different ways:
== query ==
json.facet={
code_s:{
limit:-1,
type:terms,field:code_s,facet:{
I think I figured out a very hacky "work around" - does this look like
it will work consistently?
json.facet={
code_s:{
limit:-1,
type:terms,
field:code_s,
facet:{
issuedate_tdt:{
type:terms,
field:issuedate_tdt,
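(For reference: rather than a terms facet on the date field, the JSON Facet API has stats aggregations, and the non-hacky way to get the max is a `max` aggregation inside the subfacet. A sketch, assuming `max()` accepts your date field type in your Solr version; field and facet names are taken from the thread:

```
json.facet={
  code_s:{
    type:terms,
    field:code_s,
    limit:-1,
    facet:{
      max_issuedate:"max(issuedate_tdt)"
    }
  }
}
```

If `max()` rejects the date field on your version, the terms-facet workaround above is the fallback.)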
Have a "store only" text field that contains a serialized (JSON?) copy of the
master object, for deserialization as part of results parsing, if you
want to save a DB lookup.
I would still store everything in a DB, though, to have a "master" copy of
everything.
On 11/18/2016 04:45 AM,
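(The pattern above can be sketched like this; a minimal, hypothetical example of the serialize/deserialize round trip, with made-up field names, not Solr client code:

```python
import json

# Hypothetical master record, as it would come from the database
master = {"id": "doc1", "title": "Widget", "price": 9.99}

# Index-time: keep searchable fields separate, and stash the full
# serialized object in a store-only field
solr_doc = {
    "id": master["id"],
    "title_t": master["title"],           # indexed for search
    "master_json_s": json.dumps(master),  # stored only, never searched
}

# Query-time: deserialize the stored copy instead of hitting the DB
restored = json.loads(solr_doc["master_json_s"])
assert restored == master
```

)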
Argh! I'm trying to run some test queries using the web UI, but it keeps
aborting the connection at 10 seconds. Is there any way to easily change
this?
(We currently have heavy indexing going on and the cache keeps getting
"un-warmed").
Hello,
I am doing a Solr 6 pilot to try out the new features. We have Cross Data
Center Replication (CDCR) set up as follows:
Source cluster - 3 zk nodes, 3 solr instances
Target cluster - 3 zk nodes, 3 solr instances
*Below are source and target solrconfig.xml files.*
solrconfig.xml
On Mon, Nov 21, 2016 at 3:42 PM, Michael Joyner wrote:
> Help,
>
> (Solr 6.3)
>
> Trying to do a "sub-facet" using the new json faceting API, but can't seem
> to figure out how to get the "max" date in the subfacet?
>
> I've tried a couple of different ways:
>
> == query ==
>
HTTP/2 and Google's new protocol (QUIC) both support pipelining
over the same connection (HTTP 1.1 does too, but not as well).
So, I feel, the right approach would instead be to check whether
SolrJ/Jetty can handle those, and not worry about it within Solr
itself.
Regards,
Alex.
Thank you, Erick, for pointing that out. I missed it!
Below are the commit settings in solrconfig.xml in both source and target.
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
Is it recommended to issue a commit on the target when indexing the
document, as replication
Hi,
I am having a strange issue with a Solr 6.1 cloud setup on ZooKeeper 3.4.8.
Intermittently, after I run indexing, the replicas end up with different record
counts.
And even though there is this mismatch, they are still marked healthy and are
being used for queries.
So,
Thanks Erick and Shawn.
I have reduced number of rows per page from 500K to 100K.
I also increased the zkClientTimeout to 30 seconds so that I don't run into
ZK timeout issues. The ZK cluster has been deployed on hosts other
than the SolrCloud hosts.
However, I was trying to increase the
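(For reference, that timeout can be set in the <solrcloud> section of solr.xml; a minimal sketch, with the value also overridable via a system property:

```
<solr>
  <solrcloud>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```

)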
You could do:
*) LinkedIn
*) Wiki
*) Write it up, give it to me and I'll stick it as a guest post on my
blog (with attribution of your choice)
*) Write it up, give it to Lucidworks and they may (not sure about
rules) stick it on their blog
Regards,
Alex.
http://www.solr-start.com/ -
The very first thing I'd do is issue a commit on the target cluster.
The only thing I see there is:
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
which will not make a doc visible since openSearcher=false and
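(An explicit commit on the target can be issued through the update handler; a sketch, where the host and collection names are placeholders:

```shell
curl "http://target-host:8983/solr/<collection>/update?commit=true"
```

This opens a new searcher, so documents already replicated become visible immediately.)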
Hi,
I am trying to implement SolrCloud with version 6.2.1, but I have a problem: I
can't compile the Solr source code to write a custom plugin for Solr. I use this
command for remote debug: "java -Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8080 -jar " but it
doesn't work. Help
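(To debug a plugin running inside Solr, the usual approach is not to launch a jar directly but to pass the debug agent to Solr's own start script via -a, which forwards additional JVM arguments; a sketch, assuming Solr 6.x's bin/solr, and the port number is just an example:

```shell
bin/solr start -c -f -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983"
```

Then attach your IDE's remote debugger to port 18983.)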
Thanks Erick
Very helpful indeed.
Your guesses on data size are about right. There might only be 50,000 items in
the whole index. And typically we'd fetch a batch of 10. Disk is cheap and this
really isn't taking much room anyway. For such a tiny data set, it seems like
this approach will
A blog article about what you learned would be very welcome. These
edge cases are something other people could certainly learn from.
Share the knowledge forward etc.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 21 November 2016 at 23:57,
What's the specific error message for 2)? And did it happen only once,
or once in a while?
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 22 November 2016 at 00:39, Prateek Jain J
wrote:
>
> 1. Commits are issued every 30 seconds,
Hi All,
We are observing that Solr is able to query documents but is failing to write
documents (create indexes). This is happening only for one core; other cores
are working fine. Can you think of possible reasons that could lead to this?
The disk has enough space to write/index and has correct
1) Are you definitely issuing commits?
2) Do you have anything in the logs? You should if that's an
exceptional situation.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 21 November 2016 at 23:18, Prateek Jain J
Hi Mikhail,
Thanks for your advice; it went a long way towards helping me get the right
documents in the first place, especially parameterising the block join with an
explicit v, as otherwise it was a nightmare of parser errors. Not to mention
I'm still figuring out the nuances of where I need
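(For anyone following along, parameterising a block join with an explicit v looks something like this; field names and the parameter name are hypothetical:

```
q={!parent which="doc_type:parent" v=$childQuery}
childQuery=child_field:foo
```

Dereferencing $childQuery this way avoids having to escape the child query inside the local-params syntax, which is where the parser errors tend to come from.)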
1. Commits are issued every 30 seconds, not on every write operation.
2. The logs only have error entries saying it failed to write documents.
Regards,
Prateek Jain
-----Original Message-----
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: 21 November 2016 12:34 PM
To: solr-user
Check out Prateek's answer first and use commit wisely.
99% chance it's a commit issue.
On 21 November 2016 at 08:42, Alexandre Rafalovitch
wrote:
> What's the specific error message for 2). And did it only happen once
> or once in a while?
>
>
Did you turn on/off docValues on an already existing field?
On Nov 16, 2016 11:51 AM, "Jaco de Vroed" wrote:
> Hi,
>
> I made a typo. The Solr version number in which this error occurs is 5.5.3.
> I also checked 6.3.0, same problem.
>
> Thanks, bye,
>
> Jaco.
>
> On 16
In a word, "no". Resending the same document will
1> delete the old version (based on ID)
2> index the document just sent.
When a document comes in, Solr can't assume that
"nothing's changed". What if you changed your schema?
So I'd expect the second run to take at least as long as the first.
Hi All,
I am observing following error in logs, any clues about this:
2016-11-06T23:15:53.066069+00:00@solr@@ org.apache.solr.core.SolrCore:1650 -
[my_custom_core] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
A quick web search suggests that it could be a case of too-frequent commits. I
Sure thing Alex. I don't actually do any personal blogging, but if there's a
suitable place - the Solr Wiki perhaps - you'd suggest I can write something up
I'd be more than happy to. What goes around comes around!
-----Original Message-----
From: Alexandre Rafalovitch
After 7-8 years of Solr usage, I'm familiar enough with how it performs as a
full-text search index, including spatial coordinates and much more. But for the
most part, we've been returning database IDs from Solr rather than a full
record ready to display. We then grab the data and related records
Searching isn't really going to be impacted much, if at all. You're
essentially talking about setting some field with store="true" and
stuffing the HTML into that, right? It will probably have indexed="false"
and docValues="false".
So.. what that means is that very early in the indexing process,
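(A field definition matching that description might look like the following; the field name is just an example:

```
<field name="html_content" type="string" indexed="false" stored="true" docValues="false"/>
```

)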
_when_ are you seeing this? I see this on startup upon occasion, and I _think_
there's a JIRA about opening more than one searcher on startup.
If it _is_ on startup, you can simply ignore it.
If it's after the system is up and running, then you're probably committing too
frequently. "Too
Hi Folks,
I have DIH cores that are being indexed by my Lucee application. That
works, but I'd like to make some improvements:
- Make a standalone scheduler that's not part of a larger application.
(FYI, I want to Dockerize the import-triggering service.)
- Prevent import requests from
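(For the standalone-scheduler idea: a scheduler outside the application can trigger DIH through its HTTP endpoint, which is what the handler's admin UI calls anyway. A sketch, where the core name is a placeholder:

```shell
# kick off an incremental import without wiping the index
curl "http://localhost:8983/solr/<core>/dataimport?command=full-import&clean=false"
# poll for completion before scheduling the next run
curl "http://localhost:8983/solr/<core>/dataimport?command=status"
```

Polling status first is one way to prevent overlapping import requests.)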
Thanks Erick
Regards,
Prateek Jain
-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 21 November 2016 04:32 PM
To: solr-user
Subject: Re: solr | performance warning
_when_ are you seeing this? I see this on startup upon
Hi Team,
I have indexed data with 143 rows (docs) into Solr. It takes around 3 hours to
index. I used csvUpdateHandler and indexed the CSV file by remote streaming.
Now, when I re-index the same CSV data, it is still taking 3+ hours.
Ideally, since there are no changes in _id values, it should have