I just located the QueryAutoStopWordAnalyzer in Lucene.
Has anyone managed to use it with Solr?
I could imagine having a language-independent search clean-up
for the text_all field.
Can it be used in Solr right out of the box, or do I have to
write a wrapper or factory?
Regards
Bernd
missing?
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On 23 October 2014 10:31, Bernd Fehling bernd.fehl...@uni
While starting now with SolrCloud I tried to understand the point
of an external ZooKeeper.
Let's assume I want to split 1 huge collection across 4 servers.
My straightforward idea is to set up a cloud with 4 shards (one
on each server) and also have a replica of each shard on another
server.
offline if
you have to restart Solr for some reason.
Also, you want 3 or 5 ZooKeepers, not 4 or 8.
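The odd sizes follow from ZooKeeper's majority quorum: a 4-node ensemble still tolerates only one failure, the same as 3 nodes, so the extra node buys nothing. A minimal 3-node zoo.cfg sketch (hostnames and paths are placeholders, not from this thread):

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

Each host additionally needs a myid file under dataDir containing its server number (1, 2, or 3).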
On 10/27/14 10:35, Bernd Fehling wrote:
While starting now with SolrCloud I tried to understand the point
of an external ZooKeeper.
Let's assume I want to split 1 huge collection across 4
to be connected to
ZK, you can't have the master instances in ZK, and the replicas not
connected (that's more of the old Master-Slave replication system which is
still available but orthogonal to Cloud).
On 28 October 2014 07:01, Bernd Fehling bernd.fehl...@uni-bielefeld.de
wrote:
Yes, garbage
Is the new faceted search module the cause why I don't have
any lucene-facet-hs_0.08.jar in the binary distribution?
And what about lucene-classification and lucene-replicator?
How can I build from source, with solr/hs.xml?
Regards
Bernd
Am 27.10.2014 um 17:25 schrieb Yonik Seeley:
Hi list,
with my first cloud (Solr 4.10.2) up and running (4 shards, 1 replica)
I can't find any info in the Solr Admin UI about my collection, like the
total number of docs or total index size of my collection.
Any idea where to find it?
Regards,
Bernd
I'm getting the following error with 4.10.4
WARN org.apache.solr.handler.dataimport.SolrWriter – Error creating document :
SolrInputDocument(fields: [dcautoclasscode=310, dclang=unknown,
..., dcdocid=dd05ad427a58b49150a4ca36148187028562257a77643062382a1366250112ac])
. Or not?
Bernd
Am 11.05.2015 um 14:13 schrieb Bernd Fehling:
I'm getting the following error with 4.10.4
WARN org.apache.solr.handler.dataimport.SolrWriter – Error creating
document :
SolrInputDocument(fields: [dcautoclasscode=310, dclang=unknown,
...,
dcdocid
value to be too big. Also check whether you
have some copyField that should not be there.
Thanks,
Emir
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
On 11.05.2015 14:13, Bernd Fehling wrote:
I'm getting
Hi Shawn,
that means if I set a length limit on dcdescription, or make dcdescription
multivalued,
then the problem is solved because f_dcperson is already multivalued?
Regards
Bernd
Am 11.05.2015 um 15:17 schrieb Shawn Heisey:
On 5/11/2015 6:13 AM, Bernd Fehling wrote:
Caused
11.05.2015 um 15:35 schrieb Shawn Heisey:
On 5/11/2015 7:19 AM, Bernd Fehling wrote:
After reading https://issues.apache.org/jira/browse/LUCENE-5472
one question still remains.
Why is it complaining about f_dcperson, which is a copyField, when the
original problem field is dcdescription, which definitely
,
Emir
On 11.05.2015 15:22, Bernd Fehling wrote:
Hi Emir,
the dcdescription field is definitely too big.
But why is it complaining about f_dcperson
. But that's just a guess.
Best,
Erick.
On Mon, May 11, 2015 at 8:43 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
It turned out that I didn't recognize that dcdescription is not indexed,
only stored. So the next in the chain is f_dcperson, where dccreator and
dcdescription
How about:
http://lmgtfy.com/?q=unsubscribe+solr+user+list
Am 09.06.2015 um 10:47 schrieb Abdul Hameed:
How do I unsubscribe from this list please?
-Original Message-
From: Carl Roberts [mailto:carl.roberts.zap...@gmail.com]
Sent: 08 June 2015 18:54
To:
Hi Chris,
another question pops up: is cursorMark cloud-aware?
And if so, who is handling the cursorMark, and what happens if a shard goes down
and comes up again?
Regards
Bernd
Am 30.06.2015 um 08:43 schrieb Bernd Fehling:
Thanks for your explanation.
Off the top of your head, are there any other
Hi list,
while just trying cursorMark I got the following search response:
error: {
msg: Can not search using both cursorMark and timeAllowed,
code: 400
}
Yes, I'm using timeAllowed, which is set in my requestHandler as an
invariant to 6 (60 seconds), as a limit for killer searches.
Thanks for your explanation.
Off the top of your head, are there any other options which prevent
getting a cursorMark?
Yes, that was also my idea: to set up a separate request handler
for harvesting without timeAllowed.
As Shawn suggested, a short note about this should go into the documentation.
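The separate-handler idea can be sketched in solrconfig.xml roughly like this (handler names and the 60000 ms value are illustrative assumptions, not Bernd's actual config):

```xml
<!-- normal search: timeAllowed as an invariant caps killer queries -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="invariants">
    <int name="timeAllowed">60000</int>
  </lst>
</requestHandler>

<!-- harvesting: no timeAllowed, so cursorMark is usable;
     cursorMark also requires a sort that includes the uniqueKey field -->
<requestHandler name="/harvest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="sort">id asc</str>
  </lst>
</requestHandler>
```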
Am 06.08.2015 um 17:48 schrieb Mikhail Khludnev:
On Thu, Aug 6, 2015 at 3:56 PM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
Am 06.08.2015 um 14:33 schrieb Upayavira:
Typically such performance issues with faceting have to do with the time
spent uninverting the index before
Single Index Solr 4.10.4, optimized Index, 76M docs, 235GB index size.
I was analysing my Solr logs and it turned out that I have some queries
which are above 30 seconds qtime while normally the qtime is below 1 second.
Looking closer at the queries it turned out that this is for
Eskildsen:
On Thu, 2015-08-06 at 13:00 +0200, Bernd Fehling wrote:
Single Index Solr 4.10.4, optimized Index, 76M docs, 235GB index size.
I was analysing my solr logs and it turned out that I have some queries
which are above 30 seconds qtime while normally the qtime is below 1 second.
Looking closer
for searches.
The q=*:* with sorting and faceting is always the first query I'm doing
at static warming, and it helped until switching to docValues :-(
Bernd
Upayavira
On Thu, Aug 6, 2015, at 12:38 PM, Toke Eskildsen wrote:
On Thu, 2015-08-06 at 13:00 +0200, Bernd Fehling wrote:
Single
around 35 seconds to 3.5 seconds for faceting.
After 1 hour under load there is still somewhere around a 15 percent performance
increase for faceting.
This patch is a must have!
Regards
Bernd
Am 07.08.2015 um 08:45 schrieb Bernd Fehling:
Am 06.08.2015 um 17:48 schrieb Mikhail Khludnev:
On Thu, Aug
I'm doing some testing on long-running huge indexes.
Therefore I need a clean state after some days of running.
My idea was to open a new searcher with commit command:
INFO - org.apache.solr.update.DirectUpdateHandler2;
start
...if the state didn't change why do you want
another (identical) view?
On 15 Jul 2015 02:30, Bernd Fehling bernd.fehl...@uni-bielefeld.de
wrote:
I'm doing some testing on long-running huge indexes.
Therefore I need a clean state after some days of running.
My idea was to open a new searcher
Best,
Andrea
On 15 Jul 2015 02:51, Andrea Gazzarini a.gazzar...@gmail.com wrote:
What do you mean by "clean state"? A searcher is a view over a given
index (let's say) state... if the state didn't change, why do you want
another (identical) view?
On 15 Jul 2015 02:30, Bernd Fehling bernd.fehl
Am 15.07.2015 um 14:47 schrieb Alessandro Benedetti:
...
Whatever you want to call the problem, I just wanted to open a new searcher
after several days of heavy load/searching on one of my slaves
to do some testing with empty field-/document-/filter-caches.
Aren't you warming your caches on commits
Dear solr users,
while setting up some new servers (virtual machines) using XEN I was
thinking about an alternative like KVM. My last tests with KVM were
a while ago, and XEN performed much better in the area of I/O and
CPU usage.
This led me to the idea of starting a poll about virtualization
ing virtualisation?
>
> If it is just code separation, consider using containers and Docker
> rather than fully fledged VMs.
>
> CPU is shared, but each container sees its own view of its file system.
>
> Upayavira
>
> On Thu, Oct 1, 2015, at 07:47 AM, Bernd Fehling wrote:
>&
mentioned we have the indexer (master) on one physical machine
and two searchers (slaves) on other physical machines, all together with
other little VMs which are not I/O and CPU heavy.
Regards
Bernd
Am 30.09.2015 um 18:48 schrieb Shawn Heisey:
> On 9/30/2015 3:12 AM, Bernd Fehling wrote:
>> whil
better
and should be preferred.
Regards
Bernd
Am 01.10.2015 um 09:44 schrieb Toke Eskildsen:
> Bernd Fehling <bernd.fehl...@uni-bielefeld.de> wrote:
>> unfortunately we have to run VMs, otherwise we would waste hardware.
>> I thought other solr users are in the sam
It is always interesting to see what other search engines are doing.
So I just wanted to have a look at Heliosearch (http://heliosearch.org/),
but nothing showed up.
Is Heliosearch discontinued, or is this just an internet hiccup?
Bernd
l is that they
>>> rely on querying Solr directly, while our searches go through multiple
>>> levels of an application which includes a lot of additional logic in
>> terms
>>> of what the data that gets sent to Solr are, so they just aren't going to
>>> be much u
Does anyone have experience with searching in two indices?
E.g. having one index with nearly static data (like personal data)
and a second index with articles which changes pretty much.
A search would then start for articles and from the list of results
(e.g. first page, 10 articles) start a sub
te w their first result set.
>
> Worth noting: our hand was a bit forced as some search results would need to
> be in the thousands and as such the secondary lookup would be incredibly slow
> and painful, so YMMV
>
>
> On May 30, 2016, 6:21 AM -0400, Bernd
> Fehlin
ally want to make it effective and convenient
> until it's released.
>
>
> On Mon, May 30, 2016 at 1:20 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> Does anyone have experience with searching in two indices?
>>
>> E.g. having one
SolrCloud has some disadvantages and can't beat the ease and simplicity of
Master-Slave replication. So I can only encourage keeping Master-Slave replication
in future versions.
Bernd
Am 13.01.2016 um 21:57 schrieb Jack Krupansky:
> The "Legacy Scaling and Distribution" section of the Solr Reference
Am 17.06.2016 um 09:06 schrieb Ere Maijala:
> 16.6.2016, 1.41, Shawn Heisey kirjoitti:
>> If you want to continue avoiding G1, you should definitely be using
>> CMS. My recommendation right now would be to try the G1 settings on my
>> wiki page under the heading "Current experiments" or the CMS
Now this is strange with Solr 4.10.4:
I have a multivalued string field for creator,
and a multivalued string field for f_person, prepared for faceting with
docValues.
To fill f_person I use copyField.
The input to creator is 43470 bytes long with names, split at ";" for each
subfield.
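A schema.xml sketch of the setup described above (field names from the thread; the exact attribute values are assumptions):

```xml
<field name="creator"  type="string" indexed="true" stored="true"  multiValued="true"/>
<field name="f_person" type="string" indexed="true" stored="false" docValues="true" multiValued="true"/>
<copyField source="creator" dest="f_person"/>
```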
Hi Shawn,
the DIH is doing the splitting:
...
...
Bernd
Am 18.02.2016 um 14:42 schrieb Shawn Heisey:
> On 2/18/2016 3:45 AM, Bernd Fehling wrote:
>> Now this is strange with Solr 4.10.4,
>> I have a multivalued string field for creator.
>> > multiValued="tr
Just in case someone uses ScriptTransformer in DIH extensively
and is thinking about going from Java 7 to Java 8: some behavior
has changed due to the change from Mozilla Rhino (Java 7) to
Oracle Nashorn (Java 8).
It took me a while to figure out why my DIH crashed after changing to Java 8.
A good help is
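One concrete Rhino-to-Nashorn difference that bites DIH scripts: Rhino's importClass()/importPackage() only work in Nashorn after loading its compatibility script. A hedged data-config.xml sketch (the transformer body is illustrative):

```xml
<script><![CDATA[
  // Nashorn (Java 8) needs this line before Rhino-style importClass() works:
  load("nashorn:mozilla_compat.js");
  importClass(java.util.HashMap);

  // Example only: trim a hypothetical 'title' column
  function transformRow(row) {
    var t = row.get("title");
    if (t != null) row.put("title", t.trim());
    return row;
  }
]]></script>
```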
After enhancing the server with SSDs I'm trying to speed up indexing.
The server has 16 CPUs and more than 100G RAM.
JAVA (1.8.0_92) has 24G.
SOLR is 4.10.4.
Plain XML data to load is 218G with about 96M records.
This will result in a single index of 299G.
I tried with 4, 8, 12 and 16 concurrent
; servers (especially if you're also using Tika) in addition
> to all indexing etc.
>
> Here's a sample:
> https://lucidworks.com/blog/2012/02/14/indexing-with-solrj/
>
> Dodging the question I know, but DIH sometimes isn't
> the best solution.
>
> Best,
> Erick
>
> O
(assuming SolrCloud) has efficiencies in terms of
> routing...
>
> Best
> Erick
>
> On Jul 27, 2016 7:24 AM, "Bernd Fehling" <bernd.fehl...@uni-bielefeld.de>
> wrote:
>
>> So writing some SolrJ doing the same job as the DIH script
>> and using
why do you have so many deletes? Is it expected?
> When you run DIHs concurrently, do you shard input data by uniqueKey?
>
> On Wed, Jul 27, 2016 at 6:20 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> If there is a problem in single index then it mig
wrote:
>
>> Bernd,
>> But why do you have so many deletes? Is it expected?
>> When you run DIHs concurrently, do you shard input data by uniqueKey?
>>
>> On Wed, Jul 27, 2016 at 6:20 PM, Bernd Fehling <
>> bernd.fehl...@uni-bielefeld.de> wrote:
/LUCENE-6161. Can you upgrade to 5.1
> or newer?
>
> On Wed, Jul 27, 2016 at 7:29 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> After enhancing the server with SSDs I'm trying to speed up indexing.
>>
>> The server has 16 CPUs and more
2016 um 16:16 schrieb Mikhail Khludnev:
> These deletes seem really puzzling to me. Can you experiment with
> commenting out uniqueKey in schema.xml? My expectation is that the deletes
> should go away after that.
>
> On Tue, Aug 2, 2016 at 4:50 PM, Bernd Fehling <
> bernd.fehl...@un
Hi list,
while going from SOLR 4.10.4 to 5.5.3 I noticed a change in query parsing.
4.10.4
text:star text:trek
text:star text:trek
(+((text:star text:trek)~2))/no_coord
+((text:star text:trek)~2)
5.5.3
text:star text:trek
text:star text:trek
(+(+text:star +text:trek))/no_coord
Hi,
>
> The tilde in the former looks interesting.
> I think it related to proximity search.
> What query parser is this?
>
> Ahmet
>
>
>
> On Wednesday, September 7, 2016 10:52 AM, Bernd Fehling
> <bernd.fehl...@uni-bielefeld.de> wrote:
> Hi list,
>
&
I suppose
>>>+((text:star text:trek)~2)
>>> and
>>> +(+text:star +text:trek)
>>> are equal. mm=2 is equal to +foo +bar
>>>
>>> On Wed, Sep 7, 2016 at 10:52 AM, Bernd Fehling <
>>> bernd.fehl...@uni-bielefeld.de> wrote:
>&g
case, you're setting 'mm' anyway,
> so it shouldn't be relevant.
>
> Ta,
> Greg
>
> On 9 September 2016 at 16:44, Bernd Fehling <bernd.fehl...@uni-bielefeld.de>
> wrote:
>
>> Hi Greg,
>>
>> thanks a lot, that's it.
>> After setting
"parsedquery" is different to version 4.10.4.
And why is parsedquery different to parsedquery_toString in version 5.5.3?
Where is my second boost in "parsedquery" of 5.5.3?
Bernd
Am 09.09.2016 um 08:44 schrieb Bernd Fehling:
> Hi Greg,
>
> thanks a lot, that's it.
> A
n <erickerick...@gmail.com>
> wrote:
>
>> Perhaps https://issues.apache.org/jira/browse/SOLR-8812 and related?
>>
>> Best,
>> Erick
>>
>> On Tue, Sep 13, 2016 at 11:37 PM, Bernd Fehling
>> <bernd.fehl...@uni-bielefeld.de> wrote:
>>>
Am 12.12.2016 um 04:00 schrieb Brian Narsi:
> We are using Solr 5.1.0 and DIH to build the index.
>
> We are using DIH with clean=true and commit=true and optimize=true.
> Currently retrieving about 10.5 million records in about an hour.
>
> I would like to learn from other members' experiences as to
Hi list,
I don't want to write up another Solr vs ES comparison. Every user should make up
their own mind by installing and testing both. This is more about questions
to the developers about which direction they "think" the future of Solr will go.
After installing the most recent version of ES I was shocked
I'm testing some function queries and have some questions.
original queries:
1. q=collection:ftmuenster=*
--> numFound="6029"
2. q=collection:ftmuenster+AND+-description:*=*
--> numFound="1877"
3. q=collection:ftmuenster+AND+description:*=*
--> numFound="4152"
This looks good.
But now with
chrieb Mikhail Khludnev:
> Hello,
> A function query matches all docs. Use {!frange} if you want to select docs
> with some particular values.
>
> On Thu, Mar 16, 2017 at 6:08 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> I'm t
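A hedged example of the {!frange} filter Mikhail mentions, selecting docs whose function value falls in a range (the popularity field is a made-up example):

```text
fq={!frange l=1 u=10}field(popularity)
```

This keeps only documents where field(popularity) evaluates to a value between 1 and 10 inclusive.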
Tried today to have a look at Solr 6.5.0:
- downloaded solr-6.5.0-src.tgz from apache.org and extracted it to my workspace
- ant eclipse
- imported into Eclipse Neon as a new project
- from Eclipse, in the lucene subdir, clicked on build.xml and selected
"Run As" --> "Ant Build..."
- selected "package" and "Run"
minimum 2.3.
>
> Try removing all pre-2.3 ivy-*.jar files from ~/.ant/lib/, then running “ant
> ivy-bootstrap”.
>
> --
> Steve
> www.lucidworks.com
>
>> On Apr 19, 2017, at 10:55 AM, Bernd Fehling <bernd.fehl...@uni-bielefeld.de>
>> wrote:
>>
>>
nf/solrconfig.xml:38:
> 6.4.2
> server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml:38:
> 6.4.2
>
>
> Maybe you downloaded the 6.4.1 version by mistake?
> Thanks,
> Ishan
>
>
> On Thu, Mar 9, 2017 at 10:19 AM, Shawn Heisey <apa...@elyograg.or
Hi list,
Graph TokenStream processing is enabled by default in
org.apache.solr.search.SolrQueryParser via org.apache.lucene.util.QueryBuilder.
How can I disable this default in the config files?
Regards
Bernd
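One possible knob, as far as I can tell from the 6.x schema attributes: fieldType accepts an enableGraphQueries attribute, which can be set to false per field type (a sketch; the type name and analyzer chain are assumptions):

```xml
<fieldType name="text_all" class="solr.TextField"
           positionIncrementGap="100" enableGraphQueries="false">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```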
extensive mirroring network
> for distributing releases. It is possible that the mirror you are using may
> not have replicated the release yet. If that is the case, please try
> another mirror. This also applies to Maven access.
>
shCode? And this is
> confusing query cache in solr?
> Can you disable the query cache, to test it?
> By the way, which query parser are you using? I believe SynonymQuery is
> produced by BM25 similarity, right?
>
> Ahmet
>
>
> On Friday, August 11, 2017, 2:48:07
times to
TermQuery with term "textth:rss" instead of being a SynonymQuery.
This is strange!!!
What is ReInit doing right before try, is that a cache lookup?
Or is the problem in TopLevelQuery?
Regards
Bernd
Am 16.08.2017 um 09:06 schrieb Bernd Fehling:
> Hi Ahmet,
>
> thank
We just noticed a very strange problem with Solr 6.4.2 QueryParser.
The QueryParser changes the query by itself from time to time.
This happens when reloading a search request several times at a higher rate.
Good example:
...
textth:waffenhandel
...
textth:waffenhandel
textth:waffenhandel
With SolrCloud 6.4.2 (5 shards on 5 servers) and a wildcard query
I get different results between the same query.
I assume this is altogether due to the distributed search and
the response time of each server and the constant score of 1.0???
Is there any config where I can set the shard order
Bernd
>
> On Tue, Jul 25, 2017 at 3:39 PM, Bernd Fehling <
> bernd.fehl...@uni-bielefeld.de> wrote:
>
>> Any wildcard query will do it, e.g. .../select?q=ant*=json&...
>>
>> A couple of "shift + reload" (to bypass cache) in the browser and you
&g
y 12 million docs.
Regards
Bernd
Am 25.07.2017 um 14:20 schrieb Susheel Kumar:
> What is the query you are executing, if you can share? Do you think
> the difference could be due to updates/ingestion happening at the same time?
>
> Thanks,
> Susheel
>
> On Tue, Jul 25, 2017 at 7:47 AM,
Hi,
bq: What amazes me is that in 2017 we don't see a lot more SolrCloud users!
Really? SolrCloud is much more complex. All of a sudden you have to
deal with ZooKeeper, which brings a new level of complexity into play,
when you only want to have some data stored and searchable.
The easiness of
While looking into SolrCloud I noticed that my logs
get moved to the archived dir when starting a new node.
E.g.:
bin/solr start -cloud -p 8983
-> server/logs/ has solr-8983-console.log
bin/solr start -cloud -p 7574
-> solr-8983-console.log is moved to server/logs/archived/
-> server/logs/ has
hoose to locate your virtual machine in US-west-Oregon or
> US-east-i-forget or a few other locations, but that is a very coarse
> division. Can you choose physical machine?
>
> With Google, it might be dynamic?
> cheers -- Rick
>
>
> On 2017-05-09 03:44 AM, Bernd Fehling
l.
>
> Best,
> Erick
>
> On Mon, May 8, 2017 at 7:44 AM, Shawn Heisey <apa...@elyograg.org> wrote:
>> On 5/8/2017 5:38 AM, Bernd Fehling wrote:
>>> boss -- shard1 - server2:7574
>>>| |-- server2:8983 (leader)
>>
>> T
a nodeSet when you
> create the nodes, and in particular the special value EMPTY. That'll
> create a collection with no replicas and you can ADDREPLICA to
> precisely place each one if you require that level of control.
>
> Best,
> Erick
>
> On Mon, May 8, 2017 at 7:44 AM, Shawn Heis
n:
> If you want all the replicas for shard1 on the same port then I think the
> rule is: 'shard:shard1,replica:port:8983'
>
> On 22 May 2017 at 18:47, Bernd Fehling <bernd.fehl...@uni-bielefeld.de>
> wrote:
>
>> I tried many settings with "Rule-based Replica
ticket for this.
Regards
Bernd
Am 23.05.2017 um 14:09 schrieb Noble Paul:
> did you try the rule
> shard:shard1,port:8983
>
> this ensures that all replicas of shard1 is allocated in the node w/ port
> 8983.
>
> if it doesn't, it's a bug. Please open a ticket
>
> On T
I tried many settings with "Rule-based Replica Placement" on Solr 6.5.1
and came to the conclusion that it is not working at all.
My test setup is 6 nodes on 3 servers (port 8983 and 7574 on each server).
The call to create a new collection is
ther.
E.g. server1:8983, server2:7574, server1:7574,...
What do you think will happen if comparing server1:8983 against server2:7574
(and so on)???
It will _NEVER_ match!!!
Regards
Bernd
Am 23.05.2017 um 08:54 schrieb Bernd Fehling:
> No, that is way off, because:
> 1. you have
shard1,port:8983
>
> this ensures that all replicas of shard1 is allocated in the node w/ port
> 8983.
>
> if it doesn't, it's a bug. Please open a ticket
>
> On Tue, May 23, 2017 at 7:10 PM, Bernd Fehling
> <bernd.fehl...@uni-bielefeld.de> wrote:
>> After so
ird. Does the 7574 console log really get archived or
>>> is the 8983 console log archived twice? If 7574 doesn't get moved to
>>> the archive, this sounds like a JIRA, I'd go ahead and raise it.
>>>
>>> Actually either way I think it needs a JIRA. Either the w
ahead and raise it.
>>
>> Actually either way I think it needs a JIRA. Either the wrong log is
>> getting moved or the message needs to be fixed.
>>
>> Best,
>> Erick
>>
>> On Wed, May 3, 2017 at 5:29 AM, Bernd Fehling
>> <bernd.fehl...@uni-bielefeld.de&
Hi list,
next problem with SolrCloud.
Situation:
- 5 x ZooKeeper, fresh and clean, on 5 servers
- 5 x Solr 6.5.1, fresh and clean, on 5 servers
- start of Zookeepers
- upload of configset with Solr to Zookeepers
- start of only one Solr instance port 8983 on each server
- With Solr Admin GUI check that all
ogs are
> moved into the archived folder by default. This can be disabled by
> setting SOLR_LOG_PRESTART_ROTATION=false as a environment variable
> (search for its usage in bin/solr) but it will also disable all log
> rotation.
>
> On Wed, May 3, 2017 at 5:59 PM, Bernd Fehling
My assumption was that the strength of SolrCloud is the distribution
of leader and replica within the cloud, making the cloud somewhat failsafe.
But after setting up SolrCloud with a collection I have both the leader and
the replica of a shard on the same server. And this should be failsafe?
Hi Ere,
https://issues.apache.org/jira/browse/SOLR-9120
Regards,
Bernd
Am 03.05.2017 um 08:22 schrieb Ere Maijala:
> I'm running a three-node SolrCloud (tested with versions 6.4.2 and 6.5.0)
> with 1 shard and 3 replicas, and I'm having trouble getting the
> collection backup API to actually
tor; Retrying request to
{}->http://solrmn01.ub.de:8983
ERROR: Connection refused (Connection refused)
Regards
Bernd
Am 04.05.2017 um 10:13 schrieb Bernd Fehling:
> Hi list,
> next problem with SolrCloud.
> Situation:
> - 5 x Zookeeper fresh, clean on 5 server
> - 5 x Solr
|
> --- shard5 - server3:7574 (leader)
> |-- server5:8983
>
> Hope this helps.
>
> Amrit Sarkar
> Search Engineer
> Lucidworks, Inc.
> 415-589-9269
> www.lucidworks.com
> Twitter http://twitter.com/lucidworks
> LinkedI
A simple question about SolrJ (Solr 6.4.2):
how do I update documents with expungeDeletes true/false?
In org.apache.solr.client.solrj.SolrClient there are many add,
commit, delete, optimize, ... methods, but no "update".
What is the best way to "update"?
- just "add" the same docid with new content as
> When it comes to expungeDeletes - that is a flag that can be set when
> committing.
>
> HTH,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
>> On 4 Oct 201
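Following Emir's point that expungeDeletes is a commit-time flag, an "update" then boils down to re-adding the document with the same uniqueKey and committing with the flag set. A sketch as an XML update message (field names illustrative):

```xml
<update>
  <add>
    <doc>
      <field name="id">doc-1</field>
      <field name="title">new content</field>
    </doc>
  </add>
  <!-- commit-time flag: merges away segments with many deleted docs -->
  <commit expungeDeletes="true"/>
</update>
```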
Thanks a lot Alessandro and Emir.
Am 09.10.2017 um 13:40 schrieb alessandro.benedetti:
> In addition to what Emir mentioned, when Solr opens a new Transaction Log
> file it will delete the older ones up to some conditions :
> keep at least N number of records [1] and max K number of files[2].
> N
Questions coming to my mind:
Is there a "Resiliency Status" page for SolrCloud somewhere?
How would SolrCloud behave in a Jepsen test?
Regards
Bernd
Am 10.10.2017 um 09:22 schrieb Toke Eskildsen:
> On Mon, 2017-10-09 at 20:50 -0700, Tech Id wrote:
>> Being a long term Solr user, I tried to do
Just an idea, how about taking a dump with jmap and using
MemoryAnalyzerTool to see what is going on?
Regards
Bernd
Am 24.08.2017 um 11:49 schrieb Markus Jelsma:
> Hello Shalin,
>
> Yes, the main search index has DocValues on just a few fields, they are used
> for faceting and function
tuff?)
>
> -Yonik
>
>
> On Wed, Aug 16, 2017 at 7:32 AM, Bernd Fehling
> <bernd.fehl...@uni-bielefeld.de> wrote:
>> My class SynonymQParser which calls SolrQueryParserBase.parse :
>>
>> class SynonymQParser extends QParser {
>> prot
Can someone fix https://cwiki.apache.org/confluence/ ?
Seems to have problems with styles.
Tons of #66solid and #66nonesolid in the text.
E.g. :
https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud
Thanks, Bernd
I'm trying to figure out when transaction logs are closed.
Unfortunately the docs and guides are not very clear about this.
I tried every combination of commits with waitSearcher true/false,
expungeDeletes true/false, openSearcher true/false,
and also optimize with maxSegments=1.
The stats of my
e guide is freely editable. Well, actually they're
> just files in asciidoc format. It'd be great if you wanted to edit
> them. I use Atom to edit them as it's free. If you do feel moved to do
> this, just raise a JIRA and add the diff as a patch. There's also an
> IntelliJ plugin tha
Thanks,
but when I try to access the issues mentioned in
https://lucene.apache.org/solr/6_6_2/changes/Changes.html
https://issues.apache.org/jira/browse/SOLR-11477
https://issues.apache.org/jira/browse/SOLR-11482
I get something like "permissionViolation=true", even after login!!!
Is SOLR going to
Hi Walter,
you can check whether the JVM OOM hook is acknowledged and set up
by the JVM. The options are "-XX:+PrintFlagsFinal -version".
You can modify your bin/solr script and tweak the function "launch_solr"
at the end of the script. Replace "-jar start.jar" with "-XX:+PrintFlagsFinal
Hi list,
actually a simple question, but somehow I can't figure out how to get
the total number of terms in a field in the index, example:
record_1: fruit: apple, banana, cherry
record_2: fruit: apple, pineapple, cherry
record_3: fruit: kiwi, pineapple
record_4: fruit:
- a search for fruit:*
ords which should
>> be 8?
>>
>> I'm talking about 100 Million records, the 4 above are just an example.
>> This is not a general use case, more for statistical purposes.
>>
>> Regards
>> Bernd
>
>
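One hedged way to get at per-field term statistics is the TermsComponent, assuming a /terms handler is configured (core name and URL are placeholders). With terms.limit=-1 all terms of the field are returned; counting the entries gives the number of distinct terms, which is not the same as the total number of term occurrences:

```text
http://localhost:8983/solr/core1/terms?terms.fl=fruit&terms.limit=-1&wt=json
```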
--