at
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
On 11/09/15 23:32, Shawn Heisey wrote:
> On 9/11/2015 3:12 PM, Hendrik Haddorp wrote:
>> I'm using Solr 5.3.0 and noticed that the following code does not work
>> with Solr Cloud:
>> CollectionAdminRequest.R
Heisey <apa...@elyograg.org> wrote:
>
>> On 9/11/2015 3:12 PM, Hendrik Haddorp wrote:
>>> I'm using Solr 5.3.0 and noticed that the following code does not work
>>> with Solr Cloud:
>>> CollectionAdminRequest.Reload reloadReq = new
>>> CollectionAdmi
Hi,
I'm using Solr 5.3.0 and noticed that the following code does not work
with Solr Cloud:
CollectionAdminRequest.Reload reloadReq = new
CollectionAdminRequest.Reload();
reloadReq.process(client, collection);
It complains that the name parameter is required. When adding
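A sketch of a workaround (assuming the SolrJ 5.x API, with `client` and `collection` defined elsewhere): set the collection name on the request itself rather than only passing it to process:

```java
// Sketch, SolrJ 5.x-era API: the required "name" parameter comes from the
// request object, so it has to be set there explicitly.
CollectionAdminRequest.Reload reloadReq = new CollectionAdminRequest.Reload();
reloadReq.setCollectionName(collection); // satisfies the required "name" parameter
reloadReq.process(client);
```

This needs a running SolrCloud cluster, so it is only a sketch of the call sequence, not a tested snippet.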
Hi,
I have a simple Solr 5.3 cloud setup with two nodes using a managed
schema. I'm creating a collection using a schema that initially only
contains the id field. When documents get added I'm dynamically adding
the required fields. Currently this fails quite consistently as in bug
SOLR-7536 but
Hi,
the JavaDoc of SolrInputDocument.addField [1] states:
Add a field with implied null value for boost. The class type of value
and the name parameter should match schema.xml. schema.xml can be found
in conf directory under the solr home by default.
This sounds as if the value would need to be
Hi,
is it possible to instruct Solr to publish a different host/port into
ZooKeeper than it is actually running on? This is required if the Solr
node is not directly reachable on its port from outside due to a NAT
setup or when running Solr as a Docker container with a mapped port.
For what its
in a way to support
> also setting them via System properties:
>
> ${host:}
> ${jetty.port:8983}
>
> Tomás
>
> On Wed, Mar 23, 2016 at 11:26 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
> wrote:
>
>> Hi,
>>
>> is it possible to instruct Sol
Hi,
I have a Solr Cloud 6 setup with a managed schema. It seems like when I
create multiple collections from the same config set that they still
share the same schema. That was rather unexpected, as in the REST and
SolrJ API I do specify a collection when doing the schema change.
Looking into
Looks like I ran into the same as was discussed here:
http://grokbase.com/t/lucene/solr-user/15c4nr1j48/solrcloud-1-server-1-configset-multiple-collections-multiple-schemas
It would be nice if that were changed in the future, as it would make
these setups much easier.
On 29/04/16 20:07, Hendrik
Evans wrote:
> On Mon, May 23, 2016 at 10:37 AM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>> Hi,
>>
>> I have a SolrCloud 6.0 setup and created my collection with a
>> replication factor of 1. Now I want to increase the replication factor
>> but
2016 at 10:37 AM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>> Hi,
>>
>> I have a SolrCloud 6.0 setup and created my collection with a
>> replication factor of 1. Now I want to increase the replication factor
>> but would like the replicas for the same s
Hi,
I'm using SolrCloud 6.0 with a managed schema. When I add fields using
SolrJ and immediately afterwards try to index data I sometimes get an
error telling me that a field that I just added does not exist. If I do
an explicit collection reload after the schema modification things seem
to work.
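A sketch of that workaround in SolrJ (the field definition and the `reloadCollection` factory method are assumptions for a 6.x client; `client` and `collection` defined elsewhere):

```java
// Sketch: add the field via the Schema API, then force a collection reload
// so all replicas pick up the new schema before indexing starts.
Map<String, Object> field = new LinkedHashMap<>();
field.put("name", "myField");   // hypothetical field name
field.put("type", "string");
field.put("stored", true);
new SchemaRequest.AddField(field).process(client, collection);

// Explicit reload after the schema modification, as described above.
CollectionAdminRequest.reloadCollection(collection).process(client);
```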
tps://issues.apache.org/jira/browse/SOLR-8662 addresses
> this very issue I believe, but it's in 6.1
>
> Best,
> Erick
>
> On Mon, Apr 18, 2016 at 1:34 PM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>> Hi,
>>
>> I'm using SolrCloud 6.0 with a manage
Hi,
are you using HDFS? According to the documentation the feature should
only be available if you are using HDFS. For me it did however also fail on
that. See the thread "Solr on HDFS: AutoAddReplica does not add a
replica" from about two weeks ago.
regards,
Hendrik
On 02.02.2017 07:21,
Might be that your overseer queue is overloaded. Similar to what is
described here:
https://support.lucidworks.com/hc/en-us/articles/203959903-Bringing-up-downed-Solr-servers-that-don-t-want-to-come-up
If the overseer queue gets too long you get hit by this:
replica, possibly using very old data.
FWIW,
Erick
On Tue, Feb 21, 2017 at 1:12 PM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi,
I had opened SOLR-10092 (https://issues.apache.org/jira/browse/SOLR-10092)
for this a while ago. I was now able to get this feature working with a very
smal
of date.
Erick
On Tue, Feb 21, 2017 at 10:30 PM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi Erick,
in the non-HDFS case that sounds logical but in the HDFS case all the index
data is in the shared HDFS file system. Even the transaction logs should be
in there. So the node tha
. Not really sure why one
replica needs to be up though. I added the patch based on Solr 6.3 to
the bug report. Would be great if it could be merged soon.
regards,
Hendrik
On 19.01.2017 17:08, Hendrik Haddorp wrote:
HDFS is like a shared filesystem so every Solr Cloud instance can
access the data
Hi,
I'm seeing the same issue on Solr 6.3 using HDFS and a replication
factor of 3, even though I believe a replication factor of 1 should work
the same. When I stop a Solr instance this is detected and Solr actually
wants to create a replica on a different instance. The command for that
does
Hi,
I have a 6.2.1 solr cloud setup with 5 nodes containing close to 3000
collections having one shard and three replicas each. It looks like when
nodes crash the overseer queue can grow out of control until ZooKeeper
is no longer working correctly. This looks pretty much like SOLR-5961
he Overseer just has to move
the ownership of the replica, which seems like what the code is trying
to do. There just seems to be a bug in the code so that the core does
not get created on the target node.
Each data directory also contains a lock file. The documentation states
that one should us
know whether a
node is truly functioning, being absent from live_nodes trumps
whatever state is in state.json.
Best,
Erick
On Sat, Oct 22, 2016 at 1:00 AM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Thanks, that was what I was hoping for; I just didn't see any indication for
th
wrote:
On 10/21/2016 6:56 AM, Hendrik Haddorp wrote:
I'm running solrcloud in foreground mode (-f). Does it make a
difference for Solr if I stop it by pressing ctrl-c, sending it a
SIGTERM or using "solr stop"?
All of those should produce the same result in the end -- Solr's
shu
Hi,
I'm running solrcloud in foreground mode (-f). Does it make a difference
for Solr if I stop it by pressing ctrl-c, sending it a SIGTERM or using
"solr stop"?
regards,
Hendrik
Hi,
I have a SolrCloud 6.2.1 setup with 5 nodes. I do an occasional restart
of my nodes in which I restart one node at a time. I have quite a few
collections. Let's say 2000 with a replication factor of 3. When the node
comes up again it looks like I get the same issue as described in
Hi,
I have a SolrCloud setup with 5 nodes and am creating collections with a
replication factor of 3. If I kill and restart nodes at the "right" time
during the creation process the creation seems to get stuck. Collection
data is left in the clusterstate.json file in ZooKeeper and no
Hi,
in
solr-6.3.0/solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
there is the following code starting line 107:
//TODO default to 2; but need to debug why BasicDistributedZk2Test fails
early on
String znode = message.getInt(DocCollection.STATE_FORMAT, 1) == 1
Hi,
is the HDFS support in Solr 6.3 considered production ready?
Any idea how many setups might be using this?
thanks,
Hendrik
Hi,
playing around with Solr 6.3 and HDFS I noticed that after deleting a
collection the directories for the Solr cores are left in HDFS. There is
no data left in them but still this doesn't look clean to me.
regards,
Hendrik
Heisey wrote:
On 1/3/2017 2:59 AM, Hendrik Haddorp wrote:
I have a SolrCloud setup with 5 nodes and am creating collections with
a replication factor of 3. If I kill and restart nodes at the "right"
time during the creation process the creation seems to get stuck.
Collection data is left i
The UI warning was quite easy to resolve. I'm currently testing Solr
with HDFS but for some reason the core ended up on the local storage of
the node. After a delete and restart the problem was gone.
On 05.01.2017 12:42, Hendrik Haddorp wrote:
Right, I had to do that multiple times already
eason than it's confusing.
Times past the node needed to be there even if empty. Although I just
tried removing it completely on 6x and I was able to start Solr, part
of the startup process recreates it as an empty node, just a pair of
braces.
Best,
Erick
On Wed, Jan 4, 2017 at 1:22 PM, Hendr
teMutator is executed.
>
> On Wed, Jan 4, 2017 at 6:16 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
> wrote:
>> Hi,
>>
>> in
>> solr-6.3.0/solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
>> there is the following code starting line
ifying legacyCloud=false clusterprop
>
> Kind of a shot in the dark...
>
> Erick
>
> On Wed, Jan 4, 2017 at 11:12 AM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>> You are right, the code looks like it. But why did I then see collection
>> data in the clusters
Hi,
the SolrJ API has this method: SolrClient.commit(String collection,
boolean waitFlush, boolean waitSearcher, boolean softCommit).
My assumption so far was that when you set waitSearcher to true the
method call only returns once a search would find the new data, which
sounds what you
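For reference, a sketch of the call in question (`client` and `collection` assumed to be defined elsewhere):

```java
// Sketch: with waitSearcher=true the call should not return until a new
// searcher that can see the committed data has been registered.
client.commit(collection,
    /* waitFlush    */ true,
    /* waitSearcher */ true,
    /* softCommit   */ false);
```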
Hi,
I would like to dynamically modify a query, for example by replacing a
field name with a different one. Given how complex the query parsing is
it does look error prone to duplicate that so I would like to work on
the Lucene Query object model instead. The subclasses of Query look
Hi,
the Solr 6.6 ref guide states that to "finds all documents without a
value for field" you can use:
-field:[* TO *]
While this is true I'm wondering why it is recommended to use a range
query instead of simply:
-field:*
regards,
Hendrik
forgot the link with the statement:
https://lucene.apache.org/solr/guide/6_6/the-standard-query-parser.html
On 20.07.2017 15:20, Hendrik Haddorp wrote:
Hi,
the Solr 6.6 ref guide states that to "finds all documents without a
value for field" you can use:
-field:[* TO *]
While th
witter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
On Fri, Jul 21, 2017 at 9:50 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:
Hi,
when I try to use an atomic update in conjunction with optimistic
concurrency Solr sometimes complains that the version I passed i
;
updateRequest = new UpdateRequest();
updateRequest.add(docs);
client.request(updateRequest, collection);
updateRequest = new UpdateRequest();
updateRequest.commit(client, collection);
}
Maybe you can let us know more details how the update been made?
Amrit Sarkar
Searc
If the range query is so much better shouldn't the Solr query parser
create a range query for a token query that only contains the wildcard?
For the *:* case it does already contain a special path.
On 20.07.2017 21:00, Shawn Heisey wrote:
On 7/20/2017 7:20 AM, Hendrik Haddorp wrote:
the Solr
Hi,
when I try to use an atomic update in conjunction with optimistic
concurrency Solr sometimes complains that the version I passed in does
not match. The version in my request however matches what is stored, and
the version that the exception reports as the actual one does not exist in the
Courtade wrote:
I set jute.maxbuffer on the ZooKeeper hosts; should this be done on solr as well?
Mine is happening in a severely memory constrained environment as well.
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:53 AM, "Hendrik Haddorp" <hendrik.hadd...@gmx.net> wrote:
We have Sol
17 at 1:14 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:
- stop all solr nodes
- start zk with the new jute.maxbuffer setting
- start a zk client, like zkCli, with the changed jute.maxbuffer setting
and check that you can read out the overseer queue
- clear the queue
- restart zk
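The steps above can be sketched roughly as follows (paths, the buffer value, and the ensemble address are assumptions; jute.maxbuffer must match on server and client):

```shell
# Sketch only: stop all Solr nodes first, as described above.

# ZooKeeper server side: raise jute.maxbuffer, then restart.
export JVMFLAGS="-Djute.maxbuffer=10000000"
bin/zkServer.sh restart

# Client side: same setting, then inspect and clear the overseer queue.
CLIENT_JVMFLAGS="-Djute.maxbuffer=10000000" bin/zkCli.sh -server zk1:2181 <<'EOF'
ls /overseer/queue
rmr /overseer/queue
EOF

# Finally restart ZooKeeper (and then the Solr nodes).
bin/zkServer.sh restart
```

`rmr` is the recursive delete of the 3.4-era zkCli; newer ZooKeeper versions call it `deleteall`.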
You should be able to just use
price_owner_float:[100 TO 200] OR price_customer_float:[100 TO 200]
If the document doesn't have the field the condition is false.
On 12.09.2017 23:14, xdzgor1 wrote:
Rick Leir-2 wrote
Peter
The common setup is to use copyfield from all your fields into a 'grab
e a couple of options:
1> agitate for a 6.6.2 with this included
2> apply the patch yourself and compile it locally
Best,
Erick
On Sun, Sep 10, 2017 at 6:04 AM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi,
looks like SOLR-10506 didn't make it into 6.6.1. I do however
Hi,
looks like SOLR-10506 didn't make it into 6.6.1. I do however also not
see it listed in the current release notes for 6.7 nor 7.0:
https://issues.apache.org/jira/projects/SOLR/versions/12340568
https://issues.apache.org/jira/projects/SOLR/versions/12335718
Is there any rough
Hi,
I'm trying to use the streaming API via SolrJ but have some trouble with
the documentation and samples. In the reference guide I found the below
example in
http://lucene.apache.org/solr/guide/6_6/streaming-expressions.html.
Problem is that "withStreamFunction" does not seem to exist.
t;select(search(gettingstarted,\n" +
"q=*:* NOT personal_email_s:*,\n" +
"fl=\"id,business_email_s\",\n" +
"sort=\"business_email_s asc\"),\n" +
"id,\n" +
"b
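A sketch of what seems to work instead (Solr 6.x streaming API; the zkHost and the function registrations are assumptions): StreamFactory exposes `withFunctionName`, not the `withStreamFunction` shown in the guide:

```java
// Sketch: register the stream functions by name, then build the stream
// from the expression string.
StreamFactory factory = new StreamFactory()
    .withCollectionZkHost("gettingstarted", "localhost:9983")
    .withFunctionName("search", CloudSolrStream.class)
    .withFunctionName("select", SelectStream.class);

TupleStream stream = factory.constructStream(
    "select(search(gettingstarted, q=\"*:*\", fl=\"id\", sort=\"id asc\"), id)");
```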
17 8:01 AM, "Hendrik Haddorp" <hendrik.hadd...@gmx.net> wrote:
Hi Jeff,
we ran into that a few times already. We have lots of collections and when
nodes get started too fast the overseer queue grows faster than Solr can
process it. At some point Solr tries to redo things like leader
are the zookeeper
servers residing on solr nodes? Are the solr nodes underpowered on ram and/or
cpu?
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:30 AM, "Hendrik Haddorp" <hendrik.hadd...@gmx.net> wrote:
I'm always using a small Java program to delete the nodes directly. I
ass
/overseer/queue
Or do i need to delete individual entries?
Will
rmr /overseer/queue/*
work?
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:20 AM, "Hendrik Haddorp" <hendrik.hadd...@gmx.net> wrote:
When Solr is stopped it did not cause a problem so far.
I cleared the queue
Hi Jeff,
we ran into that a few times already. We have lots of collections and
when nodes get started too fast the overseer queue grows faster than
Solr can process it. At some point Solr tries to redo things like
leaders votes and adds new tasks to the list, which then gets longer and
Hi,
we noticed that triggering collection reloads on many collections has a
good chance to result in an OOM-Error. To investigate that further I did
a simple test:
- Start solr with a 2GB heap and 1GB Metaspace
- create a trivial collection with a few documents (I used only 2
fields
Did you get an answer? Would really be nice to have that in the next
release.
On 28.08.2017 18:31, Erick Erickson wrote:
Varun Thacker is the RM for Solr 6.6.1, I've pinged him about including it.
On Mon, Aug 28, 2017 at 8:52 AM, Walter Underwood wrote:
That would be
this hanging around I'd guess.
Best,
Erick
On Thu, Oct 19, 2017 at 11:55 PM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi Erick,
that is actually the call I'm using :-)
If you invoke
http://solr_target_machine:port/solr/core/replication?command=details after
that you can see the repli
Hi,
I want to transfer a Solr collection from one SolrCloud to another one.
For that I create a collection in the target cloud using the same config
set as on the source cloud but with a replication factor of one. After
that I'm using the Solr core API with a "replication?command=fetchindex"
.getData(SolrZkClient.java:354)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1021)
... 9 more
Can I modify zookeeper to force a leader? Is there any other way to
recover from this? Thanks very much!
-Joe
On 11/21/2017 3:24 PM, Hendrik Haddorp wrote:
W
We did some testing and the performance was strangely even better with
HDFS than with the local file system. But this seems to greatly
depend on what your setup looks like and what actions you perform. We now
had a pattern with lots of small updates and commits and that seems to be
quite a
know what are the factors influence
and what considerations are to be taken in relation to this?
Thanks
On Wed, 22 Nov 2017 at 14:16 Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:
We did some testing and the performance was strangely even better with
HDFS than with the local fil
Hi,
currently we use a lot of small collections that all basically have the
same schema. This does not scale too well. So we are looking into
combining multiple collections into one. We would however like some
analyzers to behave slightly differently depending on the logical
collection. We
Hi, how do multiple analyzers help?
On 18.12.2017 10:25, Markus Jelsma wrote:
Hi - That is impossible. But you can construct many analyzers instead.
-Original message-
From:Hendrik Haddorp
Sent: Monday 18th December 2017 8:35
To: solr-user
Hi,
the write.lock issue I see as well when Solr has not been stopped
gracefully. The write.lock files are then left in HDFS as they do
not get removed automatically when the client disconnects, like an
ephemeral node in ZooKeeper would be. Unfortunately Solr also does not realize
that it should be
don't want to issue the manual commit.
Best,
Erick
On Tue, Nov 21, 2017 at 10:34 AM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi,
the write.lock issue I see as well when Solr has not been stopped
gracefully.
The write.lock files are then left in the HDFS as they do not get
removed
aut
eader assigned:
http://lovehorsepower.com/SolrClusterErrors.jpg
-Joe
On 11/21/2017 1:34 PM, Hendrik Haddorp wrote:
Hi,
the write.lock issue I see as well when Solr has not been stopped
gracefully. The write.lock files are then left in the HDFS as they do
not get removed automatically when the client disc
!
-joe
On 11/21/2017 2:35 PM, Hendrik Haddorp wrote:
We actually also have some performance issue with HDFS at the moment.
We are doing lots of soft commits for NRT search. Those seem to be
slower than with local storage. The investigation is however not
really far yet.
We have a setup
Hi,
the SolrJ DocCollection object seems to contain all information from the
cluster status except the name of the config set.
Is that a bug or on purpose?
The reason might be that everything in the DocCollection object
originates from the state.json while the config set name is stored in
An option is actually to do an explicit
ClusterStatus.getClusterStatus().process(solr, collectionName) request
and then get the config set name out of the result. This is a bit
cumbersome but works.
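A sketch of that workaround (the response structure is an assumption based on the CLUSTERSTATUS JSON layout; `client` and `collection` defined elsewhere):

```java
// Sketch: pull the config set name out of a CLUSTERSTATUS response, since
// DocCollection does not expose it.
CollectionAdminResponse rsp = CollectionAdminRequest.getClusterStatus()
    .setCollectionName(collection)
    .process(client);
NamedList<Object> cluster = (NamedList<Object>) rsp.getResponse().get("cluster");
NamedList<Object> collections = (NamedList<Object>) cluster.get("collections");
Map<String, Object> coll = (Map<String, Object>) collections.get(collection);
String configName = (String) coll.get("configName");
```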
On 12.11.2017 19:54, Hendrik Haddorp wrote:
Hi,
the SolrJ DocCollection object seems
Hi,
for the HDFS case wouldn't it be nice if there was a mode in which the
replicas just read the same index files as the leader? I mean after all
the data is already on a shared readable file system so why would one
even need to replicate the transaction log files?
regards,
Hendrik
On
the
NoLockFactory you could specify. That would allow you to share a
common index, woe be unto you if you start updating the index though.
Best,
Erick
On Sat, Dec 9, 2017 at 4:46 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net> wrote:
Hi,
for the HDFS case wouldn't it be nice if there was a mode in
Hi,
we are looking into using managed resources for synonyms via the
ManagedSynonymGraphFilterFactory. It seems like there is no SolrJ API
for that. I would be especially interested in one via the
CloudSolrClient. I found
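Lacking a SolrJ wrapper, a sketch of driving the managed synonyms endpoint over REST (the collection name and the resource name "english" are assumptions):

```shell
# Sketch: add a synonym mapping to a managed resource.
curl -X PUT -H 'Content-type: application/json' \
  --data-binary '{"mad": ["angry", "upset"]}' \
  "http://localhost:8983/solr/myCollection/schema/analysis/synonyms/english"

# The collection must be reloaded before the new synonyms take effect.
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=myCollection"
```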
, Jun 6, 2018 at 8:33 PM, Hendrik Haddorp
wrote:
Hi,
I'm trying to read and modify the autoscaling config. The API on
https://lucene.apache.org/solr/guide/7_3/solrcloud-autoscaling-api.html
does only mention the REST API. The read part does however also work via
SolrJ
The only option should be to configure Solr to just have a replication
factor of 1 or HDFS to have no replication. I would go for the middle
and configure both to use a factor of 2. This way a single failure in
HDFS and Solr is not a problem, while with the 1/3 or 3/1 option a single
server error
Hi,
I'm trying to read and modify the autoscaling config. The API on
https://lucene.apache.org/solr/guide/7_3/solrcloud-autoscaling-api.html
does only mention the REST API. The read part does however also work via
SolrJ:
cloudSolrClient.getZkStateReader().getAutoScalingConfig()
Just
shut down the target cluster and just copy the entire data dir
from each source replica to each target replica then start all the
target Solr instances up you'll be fine.
Best,
Erick
On Thu, Oct 19, 2017 at 1:33 PM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:
Hi,
I want to transfer
There is a difference between asynchronous and non-blocking. If I use 100 threads to perform
calls to Solr using the standard Java HTTP client or SolrJ I block 100
threads even if I don't block my program logic threads by using async
calls. However if I perform those HTTP calls using a non-blocking HTTP
client,
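The distinction can be sketched with a non-blocking client; this uses java.net.http, which is Java 11+ and therefore newer than this thread, and the URL is an assumption:

```java
// Sketch: 100 in-flight requests without tying up 100 threads. The client's
// event loop handles the I/O and completes the futures as responses arrive.
HttpClient http = HttpClient.newHttpClient();
List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    HttpRequest req = HttpRequest.newBuilder(
        URI.create("http://localhost:8983/solr/admin/info/system")).build();
    inFlight.add(http.sendAsync(req, HttpResponse.BodyHandlers.ofString()));
}
// Program logic is free to run here; join() only when the results are needed.
CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
```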
Hi,
when I configure my HDFS setup to use a specific replication factor,
like 1, this only affects the index files that Solr writes. The
write.lock files and backups are being created with a different
replication factor. The reason for this should be that HdfsFileWriter is
loading the
Hi,
I'm unable to find how I can do a MODIFYCOLLECTION via Solrj. I would
like to change the replication factor of a collection but can't find it
in the Solrj API. Is that not supported?
regards,
Hendrik
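A possible workaround sketch: send the parameters through a generic request, since there appears to be no typed MODIFYCOLLECTION request in this SolrJ version (the collection name and factor are placeholders):

```java
// Sketch: drive the Collections API MODIFYCOLLECTION action via a generic
// SolrJ request instead of a dedicated request class.
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("action", "MODIFYCOLLECTION");
params.set("collection", "myCollection");   // placeholder name
params.set("replicationFactor", "3");
GenericSolrRequest request = new GenericSolrRequest(
    SolrRequest.METHOD.GET, "/admin/collections", params);
client.request(request);
```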
Hi,
we are seeing the following NPE sometimes when we delete a collection
right after we modify the schema:
08:47:46.407 [zkCallback-5-thread-4] INFO
org.apache.solr.rest.ManagedResource 209 processStoredData - Loaded
initArgs {ignoreCase=true} for /schema/analysis/stopwords/text_ar
Hi,
the Solr documentation lists deprecated field types at:
https://lucene.apache.org/solr/guide/7_4/field-types-included-with-solr.html
Below the table the following is stated:
/All Trie* numeric and date field types have been deprecated in favor of
*Point field types. Point field types are
Hi,
are your collections using stateFormat 1 or 2? In version 1 all state
was stored in one file while in version 2 each collection has its own
state.json. I assume that in the old version it could happen that the
common file still contains state for a collection that was deleted. So I
would
?
On 14.03.2018 11:14, Shawn Heisey wrote:
On 3/14/2018 3:04 AM, Hendrik Haddorp wrote:
we have a SolrCloud 6.3 with HDFS setup and plan to upgrade to 7.2.1.
The cluster upgrade instructions on
https://lucene.apache.org/solr/guide/7_2/upgrading-a-solr-cluster.html
does not contain any
Hi,
we have a SolrCloud 6.3 with HDFS setup and plan to upgrade to 7.2.1.
The cluster upgrade instructions on
https://lucene.apache.org/solr/guide/7_2/upgrading-a-solr-cluster.html
does not contain any information on changing the luceneMatchVersion. If
we change the luceneMatchVersion
Thanks for the detailed description!
On 14.03.2018 16:11, Shawn Heisey wrote:
On 3/14/2018 5:56 AM, Hendrik Haddorp wrote:
So you are saying that we do not need to run the IndexUpgrader tool
if we move from 6 to 7. Will the index then be updated automatically
or will we get a problem once we
Hi,
we are running Solr on Marathon/Mesos, which should basically be the
same as DC/OS. Solr and ZooKeeper are running in docker containers. I
wrote my own Mesos framework that handles the assignment to the agents.
There is a public sample that does the same for ElasticSearch. I'm not
aware
Hi,
in
http://lucene.472066.n3.nabble.com/In-Place-Updates-not-working-as-expected-tp4375621p4380035.html
some restrictions on the supported fields are given. I could however not
find if in-place updates are supported for all field types or if they
only work for say numeric fields.
thanks,
1 Apr 2018, at 07:34, Hendrik Haddorp <hendrik.hadd...@gmx.net> wrote:
Hi,
in
http://lucene.472066.n3.nabble.com/In-Place-Updates-not-working-as-expected-tp4375621p4380035.html
some restrictions on the supported fields are given. I could however not find
if in-place updates are supported for
Hi,
with Solr 7.3 it is possible to set arbitrary collection properties
using
https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop
But how do I read out the properties again? So far I could not find a
REST call that would return the properties. I do see my property in
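Until a GET API exists, a sketch of reading the properties straight from ZooKeeper (the znode path is an assumption; `cloudSolrClient` and `collection` defined elsewhere):

```java
// Sketch: collection properties appear to be stored per collection in a
// collectionprops.json znode, which can be read and parsed directly.
byte[] data = cloudSolrClient.getZkStateReader().getZkClient().getData(
    "/collections/" + collection + "/collectionprops.json",
    null /* watcher */, null /* stat */, true /* retryOnConnLoss */);
Map<String, Object> props = (Map<String, Object>) Utils.fromJSON(data);
```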
I opened SOLR-12224 for this:
https://issues.apache.org/jira/browse/SOLR-12224
On 14.04.2018 01:49, Shawn Heisey wrote:
On 4/13/2018 5:07 PM, Tomás Fernández Löbbe wrote:
Yes... Unfortunately there is no GET API :S Can you open a Jira? Patch
should be trivial
My suggestion would be to return
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
On 24.03.2018 03:52, Shawn Heisey wrote:
On 3/23/2018 4:08 AM, Hendrik Haddorp wrote:
I did not define a /get request handler but I also don't see one
being default
that my nodes can restart before the replicas get moved.
Maybe that does then also resolve this type of problem. Issue SOLR-12114
does make changing the config a bit trickier, but I got it updated.
thanks,
Hendrik
On 24.03.2018 18:31, Shawn Heisey wrote:
On 3/24/2018 11:22 AM, Hendrik
Hi,
I did a simple test on a three node cluster using Solr 7.2.1. The JVMs
(Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_162
25.162-b12) have about 6.5GB heap and 1.5GB metaspace. In my test I have
1000 collections with only 1000 simple documents each. I'm then
triggering
. Shouldn't all collections be loaded during the startup?
On 18.03.2018 17:22, Hendrik Haddorp wrote:
Hi,
I did a simple test on a three node cluster using Solr 7.2.1. The JVMs
(Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 1.8.0_162
25.162-b12) have about 6.5GB heap and 1.5GB metaspace
Hi,
I have a Solr Cloud 7.2.1 setup and used SolrJ (7.2.1) to create 1000
collections with a few documents. During that I got multiple times in
the Solr logs exceptions because an access of the /get handler of a
collection failed. The call stack looks like this:
at
.
On 12.10.2018 19:59, Christopher Schultz wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Hendrik,
On 10/12/18 02:36, Hendrik Haddorp wrote:
Those constraints can be easily set if you are using Docker. The
problem is however that at least up to Oracle Java 8, and I believe
quite a bit further
Those constraints can be easily set if you are using Docker. The problem
is however that at least up to Oracle Java 8, and I believe quite a bit
further, the JVM is not at all aware of those limits. That's why when
running Solr in Docker you really need to make sure that you set the
memory
Beside the heap the JVM has other memory areas, like the metaspace:
https://docs.oracle.com/javase/9/tools/java.htm
-> MaxMetaspaceSize
search for "size" in that document and you'll find tons of further
settings. I have not tried out Oracle Java 9 yet.
regards,
Hendrik
On 11.10.2018 06:08,
How are you indexing the documents? Are you using SolrJ or the plain
REST API?
Are you sending the documents one by one or all in one request? The
performance is far better if you send the 100 documents in one request.
If you send them individually, are you doing any commits between them?
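A sketch of the batched variant (`client` and `collection` assumed; field names are placeholders):

```java
// Sketch: collect all documents and send them in one request, with a
// single commit at the end instead of one per document.
List<SolrInputDocument> docs = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-" + i);
    docs.add(doc);
}
client.add(collection, docs);   // one round trip for all 100 documents
client.commit(collection);      // one commit at the end
```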
Theoretically you should be able to use the HDFS backend, which you can
configure to use s3. Last time I tried that it did however not work for
some reason. Here is an example for that, which also seems to have
ultimately failed:
Hi Anchal,
the IBM JVM behaves differently in the TLS setup than the Oracle JVM. If
you search for IBM Java TLS 1.2 you find tons of reports of problems
with that. In most cases you can get around that using the system
property "com.ibm.jsse2.overrideDefaultTLS" as documented here: