IndexSchema is not mutable error Solr Cloud 7.7.1

2020-07-23 Thread Porritt, Ian
Hi All,

 

I made a change to schema to add new fields in a
collection, this was uploaded to Zookeeper via the
below command:

 

For the Schema

solr zk cp file:E:\SolrCloud\server\solr\configsets\COLLECTION\conf\schema.xml zk:/configs/COLLECTION/schema.xml -z SERVERNAME1.uleaf.site

 

For the Solrconfig

solr zk cp file:E:\SolrCloud\server\solr\configsets\COLLECTION\conf\solrconfig.xml zk:/configs/COLLECTION/solrconfig.xml -z SERVERNAME1.uleaf.site

Note: the solrconfig has  defined.
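One easy-to-miss follow-up step (an assumption about the workflow, not stated in the original mail; the port is also assumed): copying files into ZooKeeper does not make a running collection re-read them, so the collection also needs a RELOAD through the Collections API, e.g. an HTTP GET to:

```
http://SERVERNAME1.uleaf.site:8983/solr/admin/collections?action=RELOAD&name=COLLECTION
```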

 

 

When I then go to update a record with the new
field in it, I get the following error:

 

org.apache.solr.common.SolrException: This IndexSchema is not mutable.
    at org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:376)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
    at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:110)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:327)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280)
    at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333)
    at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235)
    at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298)
    at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
    at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126)
    at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123)
    at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:70)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2551)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
    at ...

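The top frame points at AddSchemaFieldsUpdateProcessorFactory, the processor behind Solr's "field guessing", which can only add fields when the schema is managed (mutable). With a hand-edited schema.xml, as uploaded above, the schema is classic and immutable, so the add-unknown-fields chain must be switched off. A sketch using the element names from the Solr 7.x _default configset, which may differ from the actual solrconfig.xml in this thread:

```xml
<!-- solrconfig.xml sketch (names from the Solr 7.x _default configset;
     they may not match the poster's file). A hand-maintained schema.xml
     implies the classic factory, and field guessing must then be off. -->
<schemaFactory class="ClassicIndexSchemaFactory"/>

<updateRequestProcessorChain name="add-unknown-fields-to-the-schema"
    default="${update.autoCreateFields:false}"
    processor="uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date,add-schema-fields">
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```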
RE: Query regarding Solr Cloud Setup

2019-09-06 Thread Porritt, Ian
Hi Jörn/Erick/Shawn, thanks for your responses.

@Jörn - much appreciated for the heads-up on Kerberos authentication; it's 
something we haven't really considered at the moment, though for production it 
may well be the case. With regards to the Solr nodes, 3 is what we are looking 
at as a minimum. When adding a new Solr node to the cluster, will 
settings/configuration be applied by Zookeeper on the new node, or is there 
manual intervention?
@Erick - With regards to core.properties: on standalone Solr the 
update.autoCreateFields=false is within the core.properties file, however for 
Cloud I have it added within the solrconfig.xml which gets uploaded to Zookeeper. 
I appreciate standalone and cloud may work entirely differently; I just wanted 
to ensure it's the correct way of doing it.
@Shawn - Will try the creation of the lib directory in Solr home to see if it 
gets picked up, and having 5 Zookeepers would more than satisfy high availability.


Regards
Ian 

-Original Message-
From: Jörn Franke  

If you have a properly secured cluster, e.g. with Kerberos, then you should not 
update files in ZK directly. Use the corresponding Solr REST interfaces; then 
you are also less likely to mess something up. 
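For reference, a sketch of the REST route Jörn mentions: instead of copying schema.xml into ZK, a field can be added with a POST to the Schema API at /solr/<collection>/schema (the field name and type below are made-up examples):

```json
{
  "add-field": {
    "name": "new_field_s",
    "type": "string",
    "stored": true
  }
}
```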

If you want to have HA you should have at least 3 Solr nodes and replicate the 
collection to all three of them (more is not needed from a HA point of view). 
This would also allow you upgrades to the cluster without downtime.

-Original Message-
From: erickerick...@gmail.com
Having custom core.properties files is “fraught”. First of all, that file can 
be re-written. Second, the collections ADDREPLICA command will create a new 
core.properties file. Third, any mistakes you make when hand-editing the file 
can have grave consequences.

What change exactly do you want to make to core.properties and why?

Trying to reproduce “what a colleague has done on standalone” is not something 
I’d recommend, SolrCloud is a different beast. Reproducing the _behavior_ is 
another thing, so what is the behavior you want in SolrCloud that causes you to 
want to customize core.properties?

Best,
Erick  

-Original Message-
From: Shawn Heisey 

I cannot tell what you are asking here.  The core.properties file lives 
on the disk, not in ZK.

I was under the impression that .jar files could not be loaded into ZK 
and used in a core config.  Documentation saying otherwise was recently 
pointed out to me on the list, but I remain skeptical that this actually 
works, and I have not tried to implement it myself.

The best way to handle custom jar loading is to create a "lib" directory 
under the solr home, and place all jars there.  Solr will automatically 
load them all before any cores are started, and no config commands of 
any kind will be needed to make it happen.

> Also from a high availability aspect, if I effectively lost 2 of the Solr 
> Servers due to an outage will the system still work as expected? Would I 
> expect any data loss?

If all three Solr servers have a complete copy of all your indexes, then 
you should remain fully operational if two of those Solr servers go down.

Note that if you have three ZK servers and you lose two, that means that 
you have lost zookeeper quorum, and in that situation, SolrCloud will 
transition to read only -- you will not be able to change any index in 
the cloud.  This is how ZK is designed and it cannot be changed.  If you 
want a ZK deployment to survive the loss of two servers, you must have 
at least five total ZK servers, so more than 50 percent of the total 
survives.
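The arithmetic behind Shawn's sizing advice can be checked directly: an ensemble of n ZooKeeper servers keeps quorum only while floor(n/2) + 1 members are alive.

```shell
# Quorum needs a strict majority: floor(n/2) + 1 servers up.
for n in 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "$n ZK servers: quorum=$quorum, survives $tolerated failure(s)"
done
# → 3 ZK servers: quorum=2, survives 1 failure(s)
# → 5 ZK servers: quorum=3, survives 2 failure(s)
```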

Thanks,
Shawn



Query regarding Solr Cloud Setup

2019-09-03 Thread Porritt, Ian
Hi,

 

I am relatively new to Solr, especially Solr Cloud, and have been using it for
a few days now. I think I have set up Solr Cloud correctly, however I would like
some guidance to ensure I am doing it correctly. I ideally want to be able
to process 40 million documents on production via Solr Cloud. The number of
fields is undefined as the documents may differ, but could be around 20+. 

 

The current setup I have at present is as follows: (note this is all on 1
machine for now). A 3 Zookeeper Ensemble (all running on different ports)
and works as expected. 

 

3 Solr nodes started on separate ports (note: directory path →
D:\solr-7.7.1\example\cloud\Node (1/2/3)). 

 



 

Setup of Solr would be similar to the above except it's on my local machine; the
below is the Graph status in Solr Cloud.

 



 

I have a few questions which I cannot seem to find the answer for on the
web. 

 

We have a schema which I have managed to upload to Zookeeper along with the
solrconfig; how do I get the system to recognise both a lib/.jar extension
and a custom core.properties file? I bypassed the core.properties issue by
amending update.autoCreateFields in the solrconfig.xml to false, however I
would like to include it the way a colleague has done on Solr
Standalone.
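A possible alternative to a custom core.properties in SolrCloud (a sketch, assuming the _default-style configset that reads this property): POST the following to /solr/<collection>/config via the Config API, which stores the user property in ZK for the whole collection:

```json
{
  "set-user-property": {
    "update.autoCreateFields": "false"
  }
}
```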

 

Also from a high availability aspect, if I effectively lost 2 of the Solr
servers due to an outage, will the system still work as expected? Would I
expect any data loss? 

 

 


