Nope, I got this on the dovecot side. But I think the mailbox got
indexed anyway - I did a search and it returned results from that
mailbox after a few seconds. This is just amazing - have no idea why it
took me so long to get it.
Best,
Francis
On 2020-08-27 22:25, Alexandre Rafalovitch wrote:
Is this a Solr-side message? Looks like dovecot doing proactive
trimming of some crazy long header.
You can look up the record by UID in the Admin UI (UID=153535 instead
of *:*) to check what is being indexed. Check that dovecot does not do
any prefixing of field names (any record from first
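The Admin UI lookup above maps onto a plain HTTP request. A minimal sketch, assuming the core is named "dovecot" and the field is "uid" (as in dovecot's sample Solr schema; both names are assumptions here), with Solr on the default port. The snippet only builds and prints the URL, which you can then paste into curl or a browser:

```python
from urllib.parse import urlencode

# Assumed values: core "dovecot", field "uid", default Solr port.
base = "http://localhost:8983/solr/dovecot/select"
params = {"q": "uid:153535", "wt": "json"}
print(base + "?" + urlencode(params))
```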
It works now! You were right - the files were in a different place. It
seems to be working now.
One last question:
I got this error:
doveadm(fran...@francisaugusto.com): Warning:
fts-solr(fran...@francisaugusto.com): Mailbox All Mail UID=153535 header
size is huge, truncating
If you are indexing from Drupal into Solr, that's the question for
Drupal's solr module. If you are doing it some other way, which way
are you doing it? bin/post command?
Most likely this is not the Solr question, but whatever you have
feeding data into Solr.
Regards,
Alex.
On Thu, 27 Aug
Ok, you may want to step back and do a basic Solr example (download the
matching version tgz file, decompress it; "bin/solr -e techproducts" is a
good one; you may need to shut down the other Solr or give it a different
port with the -p flag). Just so you know what you are looking at before dovecot starts
to introduce extra
Can you, and if so how do you, exclude a specific folder/directory from
indexing in Solr version 7.x or 8.x? Also, our CMS is Drupal 8.
Thanks,
Phil Staley
DCF Webmaster
608 422-6569
phil.sta...@wisconsin.gov
I also noticed that I get this - maybe a permission error after all?
org.apache.solr.common.SolrException: Server error writing document id
1/a73b6e1ab8e4475f2bae1100c3fdd3da/fran...@med-lo.eu to the index
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:243)
Hi Alex and Erick,
Thanks for helping out.
True, restarting Solr recreated the directory, but I still get 500
Internal Server errors when reindexing from Dovecot.
Just to be clear: I deleted the "data" directory inside the
solr/data/dovecot directory.
All the directories are owned by solr:solr, so it
Uhm, right. I may have forgotten to mention that you do need to reload
the core, or maybe restart the Solr server as well. If you literally just
deleted the index, Solr is probably freaking out about the suddenly missing
files. It needs to redo the path of "is this the first time or do I
reopen the indexes".
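The reload mentioned above can be done through the CoreAdmin API. A sketch assuming the core is named "dovecot" (as in this thread) and Solr on the default port; the snippet only prints the request URL, which you can paste into curl or a browser:

```python
from urllib.parse import urlencode

# Assumed core name "dovecot"; RELOAD makes Solr reopen the index files.
url = ("http://localhost:8983/solr/admin/cores?"
       + urlencode({"action": "RELOAD", "core": "dovecot"}))
print(url)
```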
“write.lock” is used by Lucene to ensure that no two cores open the same index,
because if they do, Bad Things Happen.
The “NoSuchFileException” may be a bit misleading; is there any chance that any
other core is looking at the same directory?
And your assertion: I then deleted the "data" under
Thanks Alex.
Well, I just deleted the whole data directory and configured it again, and get
these errors from dovecot when indexing:
doveadm(fran...@francisaugusto.com): Error: fts_solr: Indexing failed:
500 Server Error
doveadm(fran...@francisaugusto.com): Error: Mailbox UOL: Mail search
failed:
Have you tried blowing the index directory away (usually the 'data'
directory next to 'conf')? Because:
cannot change field "box" from index
options=DOCS_AND_FREQS_AND_POSITIONS to inconsistent index
options=DOCS
This implies that your field "box" had different definitions: you
updated it, but the index
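For context, that error usually means the postings format on disk no longer matches what the current schema asks for. A hypothetical before/after that would trigger exactly this mismatch (the type name here is illustrative, not from this thread):

```xml
<!-- Before: full postings (docs, freqs, positions) -->
<field name="box" type="text_general" indexed="true" stored="true"/>

<!-- After: docs-only postings; incompatible with the old segments -->
<field name="box" type="text_general" indexed="true" stored="true"
       omitTermFreqAndPositions="true"/>
```

Deleting the index data and reindexing from scratch, as suggested, is the usual way out after such a change.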
Hi,
I have - for a long time now - hoped to use an fts engine with dovecot.
My dovecot version is 2.3.7.2 under Ubuntu 20.04.
I installed Solr 7.7.3 and then 8.6.0 to see if this was a
version-related error. I copied the schema from 7.7.0 as many people
said this was fine.
I get the
Did you replace any node recently in the cluster? That downed replica may be
related to the "gone/replaced" node, and Solr autoscaling added the new
replicas on the replacing node. It shows that downed replica if the
replaced node hasn't been cleaned up with the DELETENODE API command.
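The DELETENODE call has the following shape; a sketch that only prints the Collections API URL (the node name is a placeholder - copy the exact value from the live_nodes list in CLUSTERSTATUS or the Admin UI):

```python
from urllib.parse import urlencode

# Placeholder node name; use the exact entry shown under live_nodes.
node = "oldhost:8983_solr"
url = ("http://localhost:8983/solr/admin/collections?"
       + urlencode({"action": "DELETENODE", "node": node}))
print(url)
```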
Hello,
We want to receive Solr logs in DataDog. We configured it and all is good, but
the logs are ugly: not parsed and not really useful.
Does anyone know a way to send the logs from Solr in JSON format?
Thank you.
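Not something I run in production, but since Solr 7+ logs through Log4j 2 (configured in server/resources/log4j2.xml), one option worth trying is swapping the file appender's PatternLayout for Log4j 2's JsonLayout (it needs Jackson on the classpath, which Solr ships). A fragment, under those assumptions:

```xml
<!-- In server/resources/log4j2.xml, inside the solr.log appender,
     replace the PatternLayout element with: -->
<JsonLayout compact="true" eventEol="true"/>
```

DataDog's pipeline should then see one JSON object per line instead of the default multi-line pattern output.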
I was a little annoyed by the default "SolrRocks" password, so I wrote a
little utility to generate Solr passwords for the Basic Authentication
plugin and made it available online.
The password encoder is written in plain JavaScript; there is no
need to install or download anything. The
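For anyone who prefers to generate the hash offline: Solr's Sha256AuthenticationProvider stores base64(sha256(sha256(salt + password))) followed by base64(salt), space-separated. A small stdlib-only Python sketch of the same encoding:

```python
import base64
import hashlib
import os

def solr_hash(password, salt=None):
    """Encode a password for Solr's Basic Authentication plugin:
    base64(sha256(sha256(salt + password))) + " " + base64(salt)."""
    if salt is None:
        salt = os.urandom(32)  # random 32-byte salt, as Solr generates
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    digest = hashlib.sha256(digest).digest()
    return base64.b64encode(digest).decode() + " " + base64.b64encode(salt).decode()

print(solr_hash("SolrRocks"))
```

Paste the output into the "credentials" section of security.json for the chosen user.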
sorry, there is a typo in my earlier message.. I didn't make any changes in
autoscaling policies/triggers
On Thu, Aug 27, 2020 at 12:12 PM yaswanth kumar
wrote:
> Thanks for looking into this.
>
> I did use replication factor 3 with autoAddReplica=true, and did make
> any changes in
Thanks for looking into this.
I did use replication factor 3 with autoAddReplica=true, and did make
any changes in autoscaling policies/triggers; everything is the defaults that
come with Solr.
On Thu, Aug 27, 2020 at 11:50 AM Howard Gonzalez <
howard.gonza...@careerbuilder.com> wrote:
> Hi,
Hi,
You can also connect to the ZooKeeper ensemble and use the zkCli.sh tool:
http://www.mtitek.com/tutorials/zookeeper/zkCli.php
Regards
Dominique
On Thu, Aug 27, 2020 at 17:28, Webster Homer <
webster.ho...@milliporesigma.com> wrote:
> I am using solr 7.7.2 solr cloud
>
> We version our collection and
Hello,
I can't find anything in the docs to understand how Solr sorts suggest results
when the weight is the same (0 in my case).
Here is my suggester config:
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">autocomplete</str>
    <str name="payloadField">payload</str>
  </lst>
</searchComponent>
Hi, could you share the replication factor that you're using for those
collections (in case they are NRT replicas)? Did you make any changes in
autoscaling policies/triggers?
From: yaswanth kumar
Sent: Thursday, August 27, 2020 11:37 AM
To:
Never mind I figured out my problem.
-Original Message-
From: Webster Homer
Sent: Thursday, August 27, 2020 10:29 AM
To: solr-user@lucene.apache.org
Subject: Odd Solr zkcli script behavior
I am using solr 7.7.2 solr cloud
We version our collection and config set names with dates. I
Can someone help me understand why the below is happening?
Solr: 8.2; ZooKeeper: 3.5
One ZooKeeper + 3 Solr nodes
Initially we created multiple collections with 3 replicas and indexed data;
everything looked great.
We then restarted all 3 Solr nodes, and we started the ZooKeeper and Solr
services,
I am using solr 7.7.2 solr cloud
We version our collection and config set names with dates. I have two
collections sial-catalog-product-20200711 and sial-catalog-product-20200808. A
developer uploaded a configuration file to the 20200711 version that was not
checked into our source control,
Hi,
There were a few discussions about similar issues these days. A JIRA issue
was created:
https://issues.apache.org/jira/browse/SOLR-14768
Regards
Dominique
On Thu, Aug 27, 2020 at 15:00, Divino I. Ribeiro Jr. <
divinoirj.ib...@gmail.com> wrote:
> Hello everyone!
> When I run a query to
Hi,
Which Solr version?
Restart which node? Solr? ZK? Only one node?
Collections are missing in the Solr console (lost in ZooKeeper) but cores are
still present?
Why put the ZK data and datalog in a "temp" directory
(dataDir=/applis/24374-iplsp-00/IPLS/apache-zookeeper-3.5.5-bin/temp)?
This
Any logfiles after restart?
Which Solr version?
I would activate autopurge in ZooKeeper
> On 27.08.2020 at 10:49, "antonio.di...@bnpparibasfortis.com"
> wrote:
> Good morning,
> I would like to get some help if possible.
> We have a 3 node Solr cluster (ensemble) with
Good morning,
I would like to get some help if possible.
We have a 3 node Solr cluster (ensemble) with apache-zookeeper 3.5.5.
It works fine until we need to restart one of the nodes. Then all the content
of the collection gets deleted.
This is a production environment, and every time
Hi,
I'm trying to add distributed tracing to Solr following this document:
https://lucene.apache.org/solr/guide/8_6/solr-tracing.html
I downloaded the latest version, 8.6.1, and edited the
./server/solr/solr.xml file with the tracerConfig as listed in the
document. I have a Jaeger backend
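For comparison, the relevant solr.xml fragment from that guide looks like the following, as I read the 8.6 docs (the Jaeger agent location comes from the standard JAEGER_* environment variables, and the jaegertracer-configurator contrib jars need to be on the classpath):

```xml
<solr>
  <tracerConfig name="tracerConfig"
                class="org.apache.solr.jaegertracer.JaegerTracerConfigurator"/>
</solr>
```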