Hello, Raj
I've just checked my Schema page for the external file field.
Solr version 8.3.1 shows only these parameters for externalFileField:
Field: fff
Field-Type: org.apache.solr.schema.ExternalFileField
Flags:       UnInvertible | Omit Term Frequencies & Positions
Properties:  √            | √
Are you sure
Hi
That seems to be the reason Solr is not starting:
cannot open
'/home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud/node1/solr/../logs/solr.log'
for reading: No such file or directory
> -Original Message-
> From: Prabhat Awasthi [mailto:pawasthi.i...@gmail.com]
> Sent:
ear.
>
> Thank you for your help.
>
> From: Hongxu Ma
> Sent: Tuesday, February 18, 2020 10:22
> To: Vadim Ivanov ; solr-
> u...@lucene.apache.org
> Subject: Re: A question about solr filter cache
>
> Thank you @Vadim Ivanov<mailto:vadim.i
You can easily check the amount of RAM used by a core's filterCache in the Admin UI:
Choose core - Plugins/Stats - Cache - filterCache
It shows useful information on configuration, statistics, and current RAM
usage by the filter cache,
as well as some examples of current filterCache entries in RAM.
A core, for example, with 10
> Hi,
>
> If you want to sort on your field and also want a count restriction, then
> you have to use mincount. That seems to be the best approach for your
> problem.
>
> Thanks
> Saurabh
>
> On Fri, Feb 14, 2020, 6:24 PM Vadim Ivanov < vadim.iva...@spb.ntk-
>
"doclist":{"numFound":1,"start":2,"docs":[]
}},
{
"groupValue":"164355:20200708:22:251",
"doclist":{"numFound":1,"start":2,"docs":[]
> -Original
Probably the solution is here:
https://stackoverflow.com/questions/51416042/solr-error-stream-body-is-disab
led/51420987
> -Original Message-
> From: Nitish Kumar [mailto:nnitishku...@firstam.com]
> Sent: Friday, February 14, 2020 10:28 AM
> To: solr-user@lucene.apache.org
> Subject: Deleting
Hello guys!
I need some advice. My task is to delete some documents in a collection.
The deletion algorithm is as follows:
Group docs by field1 with a sort by field2, and delete the third and all following
occurrences in every group.
Unfortunately I didn't find an easy way to do so.
The closest approach was to use
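The grouping-and-deletion described above can be sketched in Python: group the documents by field1, sort each group by field2, and collect the ids of the third and all following occurrences for a delete-by-id request. The field and id names here are illustrative assumptions, not taken from the actual schema.

```python
from collections import defaultdict

def ids_to_delete(docs, group_field="field1", sort_field="field2", keep=2):
    """Return ids of every occurrence beyond `keep` in each group."""
    groups = defaultdict(list)
    for doc in docs:
        groups[doc[group_field]].append(doc)
    doomed = []
    for group in groups.values():
        # Sort each group by the secondary field, keep the first `keep` docs,
        # and mark the rest for deletion.
        group.sort(key=lambda d: d[sort_field])
        doomed.extend(d["id"] for d in group[keep:])
    return doomed
```

The resulting id list could then be posted to Solr as a delete-by-id update ({"delete": [...]} against the collection's update handler).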
Hi,
I'm facing the same problem with SolrCloud 7.x - 8.x.
I have TLOG type replicas, and when I delete the leader, the log is always full
of this:
2019-12-28 14:46:56.239 ERROR (indexFetcher-45942-thread-1) [ ]
o.a.s.h.IndexFetcher No files to download for index generation: 7166
2019-12-28 14:48:03.157
Hi,
The latest JDBC driver 7.4.1 seems to support JRE 8, 11, and 12:
https://www.microsoft.com/en-us/download/details.aspx?id=58505
You have to delete all previous versions of the SQL Server JDBC driver from the Solr
installation (/solr/server/lib/ in my case).
--
Vadim
> -Original Message-
> From:
records from DIH. Am I wrong?
--
Vadim
> -Original Message-
> From: Mikhail Khludnev [mailto:m...@apache.org]
> Sent: Monday, September 02, 2019 12:23 PM
> To: Vadim Ivanov; solr-user
> Subject: Re: Idle Timeout while DIH indexing and implicit sharding in 7.4
>
> It
Maybe consider having one collection with implicit sharding?
This way you can have all the advantages of SolrCloud and can control the content of
each core "manually", as well as query the cores independently (=false)
... or some of them using =core1,core2, as was proposed before.
Quote from doc
" If you
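The shard-selection idea above can be sketched as a plain query URL (the collection and shard names are illustrative assumptions):

```
http://localhost:8983/solr/mycollection/select?q=*:*&shards=shard1,shard2
```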
kerick...@gmail.com]
> Sent: Monday, July 29, 2019 3:19 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr join query
>
> Vadim:
>
> Are you using streaming or the special “cross collection” join that requires
> colocated collection?
>
> > On Jul 29, 2019, at 4:2
I'm using a join of a multivalued field to the id field of a dictionary (another
collection).
It's working pretty well.
--
Vadim
> -Original Message-
> From: Rajdeep Sahoo [mailto:rajdeepsahoo2...@gmail.com]
> Sent: Monday, July 22, 2019 9:19 PM
> To: solr-user@lucene.apache.org
> Subject:
... and =false if you want to index just new records and keep the old ones.
--
Vadim
> -Original Message-
> From: Jan Høydahl [mailto:jan@cominvent.com]
> Sent: Tuesday, June 25, 2019 10:48 AM
> To: solr-user
> Subject: Re: Solr 8.0.0 Customized Indexing
>
> Adjust your SQL (located
updates. The
> > only solution I see now is to use manual replication and trigger it on
> > every node after the leader optimized the index, and this configuration was
> > available in master-slave legacy...
> >
> > On Tue, Apr 16, 2019 at 6:30 PM Vadim Ivanov <
> > vadim.iva..
Hi, Dmitri
There was a discussion here a while ago:
http://lucene.472066.n3.nabble.com/Soft-commit-and-new-replica-types-td4417253.html
Maybe it helps you somehow.
--
Vadim
> -Original Message-
> From: Dmitry Vorotilin [mailto:d.voroti...@gmail.com]
> Sent: Tuesday, April 16, 2019
Hi!
If clean=true then the index will be completely replaced by the new import; that is
how it is supposed to work.
If you don't want to preemptively delete your index, set =false, and set
=true instead of =true.
Are you sure about optimize? Do you really need it? It's usually very costly.
So, I'd try:
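A full-import call combining those parameters might look like this (the core name is an illustrative assumption; clean, commit, and optimize are standard DIH request parameters):

```
http://localhost:8983/solr/mycore/dataimport?command=full-import&clean=false&commit=true&optimize=false
```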
Hi!
I had the same issue and found that the actual problem was with the file limit (in
spite of the error message).
To increase file limit:
On Linux, you can increase the limits by running the following command as root:
sysctl -w vm.max_map_count=262144
To set this value permanently, update the
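For reference, a hedged sketch of the immediate and the persistent forms of that setting (persisting via /etc/sysctl.conf is the conventional approach; the exact file may vary by distribution):

```
# Apply immediately (as root):
sysctl -w vm.max_map_count=262144
# Persist across reboots:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```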
You can try to tweak solr.xml
coreLoadThreads
Specifies the number of threads that will be assigned to load cores in parallel.
https://lucene.apache.org/solr/guide/7_6/format-of-solr-xml.html
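A solr.xml fragment along those lines might look like this (the value 16 is an illustrative assumption, not a recommendation):

```xml
<solr>
  <int name="coreLoadThreads">16</int>
</solr>
```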
>
> > -Original Message-
> > From: Hendrik Haddorp [mailto:hendrik.hadd...@gmx.net]
> >
You can try to tweak solr.xml
> -Original Message-
> From: Hendrik Haddorp [mailto:hendrik.hadd...@gmx.net]
> Sent: Friday, January 25, 2019 11:39 AM
> To: solr-user@lucene.apache.org
> Subject: SolrCloud recovery
>
> Hi,
>
> I have a SolrCloud with many collections. When I restart an
ction1 just won't hit the
> filter cache, and will be cached as a new entry; later the old entry will
> be evicted.
>
> On Tue, Jan 15, 2019 at 5:30 PM Vadim Ivanov <
> vadim.iva...@spb.ntk-intourist.ru> wrote:
>
> > Thanx, Mikhail for reply
> > > collection1 has n
=none from=id fromIndex=collection2 to=field1}*:*
> On Tue, Jan 15, 2019 at 1:18 PM Vadim Ivanov <
> vadim.iva...@spb.ntk-intourist.ru> wrote:
>
> > Sorry, I've sent an unfinished message
> > So, query on collection1
> > q=*:*{!join score=none from=id fromIndex=collecti
to use caches on
collection1 and ...
Does a new searcher start on collection1 as well?
> -Original Message-
> From: Vadim Ivanov [mailto:vadim.iva...@spb.ntk-intourist.ru]
> Sent: Tuesday, January 15, 2019 1:00 PM
> To: solr-user@lucene.apache.org
> Subject: join query a
Solr 6.3
I have a query like this:
q=*:*{!join score=none from=id fromIndex=hss_4 to=rpk_hdquotes v=$qq}*:*
--
Vadim
When using CDCR with the new replica types, be aware of
https://issues.apache.org/jira/browse/SOLR-12057?focusedComm
Parallel indexing to both clusters might be an option as well.
--
Vadim
> -Original Message-
> From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
> Sent: Monday, January
Hi!
(Solr 7.6 , Tlog replicas)
I have an issue while reloading a collection with 100 shards and 3 replicas per
shard residing on 5 nodes.
The configuration of that collection is pretty complex (90 external file fields).
When a node starts, the cores always load successfully.
When I reload the collection with
In order to see newly added fields you have to reindex.
If there were any mistakes while reindexing, they should appear in the log
file.
No clues in the log?
--
From: Surender Reddy [mailto:suren...@swooptalent.com]
Sent: Tuesday, December 25, 2018 8:15 AM
To:
+
> (with some of the last improvements in 7.5).
>
> Best,
> Erick
>
> On Sun, Dec 23, 2018 at 1:43 AM Vadim Ivanov
> wrote:
> >
> > Hi!
> > After restart of nodes I have situation when no leader on shard can be
> > elected
> > Shar
Hi!
After a restart of the nodes I have a situation where no leader can be elected
on the shard.
Shard rpk51_222_306 resides on 3 nodes (solr00, solr06, solr09) with
corresponding replica names
(rpk51_222_306_00, rpk51_222_306_06, rpk51_222_306_09)
Logs looks like this
PeerSync: core=rpk51_222_306_00
helpful to keep big clusters in order...
I wonder why there isn't any Jira about this case (or maybe I missed it)?
Anyone who cares, please help to create a Jira and improve this feature in the
nearest release.
--
Vadim
> -Original Message-
> From: Vadim Ivanov [mailto:vadim.iva...@s
on the leader every 300 sec,
> > > during
> > > > > indexing. Polling interval on other replicas 150 sec, but not every
> > > poll
> > > > > attempt they fetch new segment from the leader, afaiu. Erick, do you
> > > mean
> > > > >
> that'll soon be merged away and the merged segment re-pulled.
>
>Apparently, though, nobody's seen this "in the wild", so it's
>theoretical at this point.
>On Sun, Dec 9, 2018 at 1:48 AM Vadim Ivanov
< vadim.iva...@spb.ntk-intourist.ru> wrote:
>
> Thanks, Edward,
018 12:42 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Soft commit and new replica types
>
> Some insights in the new replica types below:
>
> On Sat, December 8, 2018 08:42, Vadim Ivanov <
> vadim.iva...@spb.ntk-intourist.ru wrote:
>
> >
> > Fr
Before 7.x, all replicas in SolrCloud were of the NRT type,
and the following rules were applicable:
https://stackoverflow.com/questions/45998804/when-should-we-apply-hard-commit-and-soft-commit-in-solr
and
https://lucene.apache.org/solr/guide/7_5/updatehandlers-in-solrconfig.html#commit-and-softcommit
But
e.apache.org
> Subject: Re: REBALANCELEADERS is not reliable
>
> Thanks for looking this up.
> It could be a hint where to jump into the code.
> I wonder why they rejected a jira ticket about this problem?
>
> Regards, Bernd
>
> Am 06.12.18 um 16:31 schrieb Vadim Ivan
tly stored in zookeeper or is overseer the problem?
> > >>
> > >> I was also reading something about a "leader queue" where possible
> > >> leaders have to be requeued or something similar.
> > >>
> > >> May be I should try to get a si
Hi, Bernd
I have tried REBALANCELEADERS with Solr 6.3 and 7.5
I had very similar results and the notion that it's not reliable :(
--
Br, Vadim
> -Original Message-
> From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
> Sent: Tuesday, November 27, 2018 5:13 PM
> To:
Hi!
Have you tried naming the entity in the full-import HTTP call,
as in
/dataimport/?command=full-import=Document1=true=true
Is there anything sane in the log file after that command?
--
Vadim
> -Original Message-
> From: Santosh Kumar S [mailto:santoshkumar.saripa...@infinite.com]
>
the collection, it's possible that the startup
> progress kicked off a replication
>and if there's really a bug reloading just masked it.
>
> Best,
> Erick
> On Sun, Nov 11, 2018 at 2:34 AM Vadim Ivanov
> wrote:
> >
> > Reload collection helps !
> >
Reload collection helps !
After reloading the collection, the generation and indexversion returned by the
ReplicationHandler catch up with the leader.
> -Original Message-
> From: Vadim Ivanov [mailto:vadim.iva...@spb.ntk-intourist.ru]
> Sent: Sunday, November 11, 2018 1:09 PM
> T
dequate information of
indexversion
and generation returned by mbeans, just curious of that strange behavior.
> -Original Message-
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Saturday, November 10, 2018 6:46 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Replicatio
Hello!
I have SolrCloud 7.5 with TLOG replicas.
I have noticed that the information about the replication state of replicas differs
depending on whether it is received from
...core/admin/mbeans?stats=true=replication=true=REPLICATION
or from
...core/replication?command=indexversion
It seems the latter returns some wrong information.
the same happen with PULL replicas with Solr 7.5. Solr was
> > showing that they all had correct index version, but the changes were
> > not showing. Unfortunately the solr.log size was too small to catch any
> > issues, so I've now increased and waiting for it to happen again.
> &g
Hi, Chris
I had the same messages in the Solr log while testing 7.4 and 7.5.
The only remedy I've found is increasing the header size in
/opt/solr/server/etc/jetty.xml
After a Solr restart, no more annoying messages.
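For reference, the request header size in that jetty.xml is controlled by a fragment along these lines (65536 is an illustrative value, not a recommendation):

```xml
<Set name="requestHeaderSize">
  <Property name="solr.jetty.request.header.size" default="65536"/>
</Set>
```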
> -Original Message-
> From: Chris Ulicny [mailto:culicny@iq.media]
> Sent:
.
>
> If we prove out that this is really happening as you think, then a
> JIRA (with steps to reproduce) is _definitely_ in order.
>
> Best,
> Erick
> On Wed, Oct 24, 2018 at 2:07 AM Vadim Ivanov
> wrote:
> >
> > Hi All !
> >
> > I'm testing
Hi All !
I'm testing Solr 7.5 with TLOG replicas on SolrCloud with 5 nodes.
My collection has shards and every shard has 3 TLOG replicas on different
nodes.
I've noticed that some replicas stop receiving updates from the leader
without any visible signs in the cluster status.
(all replicas
Hi,
You CAN join across collections with runtime "join".
The only limitation is that the FROM collection must not be sharded and the joined
data must reside on one node.
Solr cannot join across nodes (a distributed join is not supported).
Though using streaming expressions it's possible to do
...but using Streaming Expressions it's possible to achieve the goal, AFAIK:
https://lucene.apache.org/solr/guide/7_5/stream-decorators.html#innerjoin
Though it probably won't be as fast as a search-time join.
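A minimal sketch of such an innerJoin streaming expression (collection and field names are illustrative assumptions; per the reference guide, both underlying streams must be sorted on the join key):

```
innerJoin(
  search(collection1, q="*:*", fl="id,personId", sort="personId asc"),
  search(collection2, q="*:*", fl="personId,name", sort="personId asc"),
  on="personId=personId"
)
```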
--
Vadim
-Original Message-
From: Joel Bernstein [mailto:joels...@gmail.com]
Sent:
Hi
You cannot join on two fields in Solr as you would in SQL.
Facing the same situation, I add a new string field to the collections
(concatenating the Type and Id fields) and fill it at index time.
Then I join the two collections on that field at query time.
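The index-time / query-time pattern above might look like this (field, value, and collection names are illustrative assumptions):

```
At index time:  type_id_s = Type + "_" + Id      (e.g. "invoice_42")
At query time:  q={!join from=type_id_s fromIndex=collection2 to=type_id_s}*:*
```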
--
Vadim
-Original Message-
From:
Hi,
AFAIK Solr can only join local indexes, no matter whether you join the same
collection or two different ones.
So, in your case, shard1 will be joined to shard1 and shard2 to shard2.
Unfortunately it's hard to say from your data which document resides in which
shard, but you can test using
ode. The probable measures
are: try to shuffle updates to load other shards for a while and let the
parallel merge pack that shard, and just wait a little by increasing the
timeout in jetty.
Let us know what you will encounter.
On Thu, Sep 13, 2018 at 3:54 PM Vadim Ivanov <
vadim.iva...@spb.ntk-intour
with uneven shard distribution of documents.
Any suggestions on how to mitigate the issue?
--
BR
Vadim Ivanov
-Original Message-
From: Вадим Иванов [mailto:vadim.iva...@spb.ntk-intourist.ru]
Sent: Wednesday, September 12, 2018 4:29 PM
To: solr-user@lucene.apache.org
Subject: Idle Timeout while DIH
Hi, Steve
If you are using solr1:8983 to access Solr and solr1 is down, IMHO nothing
will help you reach a dead IP.
You should switch to any other live node in the cluster, or, as I'd propose,
put nginx as a frontend to access
SolrCloud.
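A hedged sketch of such an nginx frontend (host names and ports are illustrative assumptions):

```
upstream solrcloud {
    server solr1:8983;
    server solr2:8983;
    server solr3:8983;
}
server {
    listen 80;
    location /solr/ {
        proxy_pass http://solrcloud;
    }
}
```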
--
BR, Vadim
-Original Message-
From: Gu, Steve