Hi,
As per
https://lucene.apache.org/solr/guide/7_2/schema-factory-definition-in-solrconfig.html#SchemaFactoryDefinitioninSolrConfig-Classicschema.xml,
the only difference between schema.xml and managed-schema is that the
latter accepts schema changes through an API while the former does not.
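For reference, the mode is selected via the schemaFactory element in solrconfig.xml; a minimal sketch of the two options from that page:

```xml
<!-- Classic mode: schema.xml is read-only and edited by hand -->
<schemaFactory class="ClassicIndexSchemaFactory"/>

<!-- Managed mode (the default): the schema file is named "managed-schema"
     and can be modified at runtime through the Schema API -->
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
```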
However
We were initially having an issue with delete-by-query (DBQ) and heavy batch
updates, which used to result in many missing updates.
After reading many mails on the mailing list mentioning that DBQ and batch
updates do not work well together, we switched to delete-by-id (DBI). But we
are seeing the issue mentioned in this JIRA
Most likely you don't have your autocommit settings set up correctly
and, at a guess, your indexing process fires a commit at the end.
If I'm right, autoCommit has "openSearcher" set to "false" and
autoSoftCommit is either commented out or set to -1.
More than you might want to know:
https://luc
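The settings described above would look roughly like this in solrconfig.xml (the 60-second interval is illustrative, not a recommendation):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes and fsyncs segments to disk, but with
       openSearcher=false the changes are not yet visible to searches -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: makes documents visible to searches. A maxTime of -1
       (or commenting the whole element out) disables it, which matches
       the symptom described above -->
  <autoSoftCommit>
    <maxTime>-1</maxTime>
  </autoSoftCommit>
</updateHandler>
```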
"When authentication is enabled ALL requests must carry valid
credentials." I believe this behavior depends on the value you set for
the *blockUnknown* authentication parameter.
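For illustration, blockUnknown sits on the authentication plugin definition in security.json; a minimal sketch with the BasicAuthPlugin (the hash/salt is a placeholder, not real credentials):

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": false,
    "credentials": {
      "solr": "<base64 sha256 hash> <base64 salt>"
    }
  }
}
```

With blockUnknown set to false, requests without credentials are let through; set to true, every request must carry valid credentials.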
On 06/15/2018 06:25 AM, Jan Høydahl wrote:
> When authentication is enabled ALL requests must carry valid credentials.
If I start with a collection X on two nodes, with one shard and two replicas
(for redundancy, in case a node goes down): the node on host1 has
X_shard1_replica1 and the node on host2 has X_shard1_replica2. When I try
SPLITSHARD, I generally get X_shard1_0_replica1, X_shard1_1_replica1 and
X_shard1_0
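For context, a split like the one above is issued through the Collections API, something like (host and collection names illustrative):

```
http://host1:8983/solr/admin/collections?action=SPLITSHARD&collection=X&shard=shard1
```

After a successful split, shard1 becomes inactive and the two new shards shard1_0 and shard1_1 each cover half of its hash range, which is where the _0/_1 naming comes from.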
Hello,
We are migrating from Solr 4.7 to 7.3. It takes about an hour to perform a
complete re-index against our development database. During testing of this
upgrade (to 7.3), I typically wait for the re-index to complete before running
sample queries from our application. However, I got a bit impatient
My first guess is that you're indexing to the slave nodes.
Second guess is that you're re-indexing your entire corpus on the master node.
Third guess is that you're optimizing on the master node (don't do this).
What does the slave's log say is the reason? If all the segments on
the master have c
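In a classic master/slave setup, slaves should only replicate, never index; a sketch of the usual ReplicationHandler configuration in solrconfig.xml (URL and poll interval illustrative):

```xml
<!-- On the master -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On each slave: pulls index changes from the master, no local indexing -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```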
Hi All,
Currently I am using Solr 5.2.1 on a Linux machine. I have a cluster of 5 nodes
with a master and slave configuration, which gives 5 master nodes and 5 slave
nodes. We have enabled only hard commits on the master nodes and both soft &
hard commits on the slave nodes, since the search will happen on the slave
> On 15.06.2018, at 01:23, Joel Bernstein wrote:
>
> We have to check the behavior of the innerJoin. I suspect that it's closing
> the second stream when the first stream is finished. This would cause a
> broken pipe with the second stream. The export handler has specific code
> that eats the b
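For anyone following along, the kind of expression under discussion joins two sorted streams on a common key; an illustrative sketch (collection and field names hypothetical):

```
innerJoin(
  search(people, q="*:*", fl="personId,name",    sort="personId asc", qt="/export"),
  search(pets,   q="*:*", fl="personId,petName", sort="personId asc", qt="/export"),
  on="personId"
)
```

Both underlying streams use the export handler, which is where a prematurely closed second stream could surface as a broken pipe.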
Hi,
Sorry to reply to this so late. Hopefully you've long since figured out
the issue. But if not...
1. Just to clarify, are you seeing the error message above when Solr tries
to talk to ZooKeeper? Or does that error message appear in your ZK logs,
or from a ZK-client you're using to test conn
Thanks Shawn,
As mentioned previously, we are hard committing every 60 seconds, which we
have been doing for years, and have had no issues until enabling CDCR. We
have never seen large tlog sizes before, and even manually issuing a hard
commit to the collection does not reduce the size of the tlog
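One thing worth checking (an assumption on my part, not a confirmed diagnosis): with CDCR, transaction logs are retained while the update buffer is enabled, and the buffer is usually disabled on the source once the target is in sync, via the CDCR API, e.g.:

```
http://source-host:8983/solr/<collection>/cdcr?action=DISABLEBUFFER
```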
Hi,
I came across the reduce function in the docs, but
I'm having a hard time getting it to work; I haven't found any documentation on it
or its parameters, and the source code of the GroupOperation doesn't explain it
either ...
For example, what is the "n" parameter about?
I constructed a sourc
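The shape of expression in question, as I understand it from the ref guide examples (field names illustrative): reduce wraps a stream sorted by the "by" field, and group() appears to cap the tuples kept per group with n.

```
reduce(
  search(collection1, q="*:*", fl="a_s,a_f", sort="a_s asc, a_f desc", qt="/export"),
  by="a_s",
  group(sort="a_f desc", n="4")
)
```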
When authentication is enabled ALL requests must carry valid credentials.
Are you asking for a feature where a request is authenticated based on IP
address of the client, not username/password?
Jan
Sent from my iPhone
> On 14 Jun 2018, at 22:24, Dinesh Sundaram wrote:
>
> Hi,
>
> I have conf