keeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
> ~[zookeeper-3.6.2.jar:3.6.2]
>at
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275)
> ~[zookeeper-3.6.2.jar:3.6.2]
>
>
> I tried it also with the older SOLR version 8.6.0 (
mes I get the error "org.apache.solr.common.cloud.ConnectionManager;
zkClient has disconnected"
Is this some kind of version incompatibility between Solr and ZooKeeper? Or a
wrong configuration that leaves the ZooKeepers unable to communicate with each
other?
Any help appreciated
Bernhard
Cloudera's default configuration for the HDFSDirectoryFactory
<https://github.com/cloudera/lucene-solr/blob/cdh6.3.3/cloudera/solrconfig.xml#L118>
is very similar to yours in solrconfig.xml. The solr.hdfs.home property is
provided as a java property during Solr startup and we haven
om> wrote:
>
>> Your exception didn't come across - can you paste it in?
>>
>> -Joe
>>
>> On 8/19/2020 10:50 AM, Prashant Jyoti wrote:
>> > You're right Andrew. Even I read about that. But there's a use case for
>> > which we want to
running into?
>
> Thanks in advance!
>
>
> On Wed, 19 Aug, 2020, 5:24 pm Andrew MacKay,
mailto:andrew.mac...@superna.net>>
> wrote:
>
>> I believe HDFS support is being deprecated in Solr. Not sure
you want to
>> con
this is consistent with the data disappearing from Zookeeper due
to misconfiguration and/or some external process removing it when
you reboot.
So here’s what I’d do next:
Go ahead and reboot. You do _not_ need to start Solr to run bin/solr
scripts, and among them are
bin/solr zk ls -r / -z pat
Hi Erick,
Here is the latest error I captured, which seems to be what is actually
deleting the cores (I did notice that the core folders under the path
../solr/server/solr were deleted one by one when the server came back from
reboot)
2020-08-24 04:41:27.424 ERROR
(coreContainerWorkExecutor-2-
Autopurge shouldn’t matter, that’s just cleaning up old snapshots. That is, it
should be configured, but having it enabled or not should have no bearing on
your data disappearing.
Also, are you absolutely certain that you are using your external ZK? Check the
port on the admin screen. 9983 is t
Thanks Eric for looking into this..
But as I said before, I confirmed that the paths in ZooKeeper were changed to a
local path rather than the /tmp default that comes with the package. Does zoo.cfg
need to have autopurge settings? I don't have those in my config.
Also I did make sure that zoo.cfg inside
Sounds like you didn’t change Zookeeper data dir. Zookeeper defaults to putting
its data in /tmp/zookeeper, see the zookeeper config file. And, of course, when
you reboot it goes away.
I’ve always disliked this, but the Zookeeper folks did it that way. So if you
just copy zoo_sample.cfg to zoo.
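The fix described above is a one-line change in zoo.cfg. A minimal sketch, assuming a typical Linux layout (the dataDir path is only an example; any persistent location works):

```properties
# zoo.cfg - point dataDir somewhere that survives a reboot
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# Optional snapshot cleanup (unrelated to the /tmp data-loss problem)
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
```

After changing dataDir, restart ZooKeeper before restarting Solr so the ensemble reloads its data from the new location.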
Can someone help me with the below issue?
I have configured Solr 8.2 with one ZooKeeper 3.4 and 3 Solr nodes.
All the configs were pushed initially, and I also indexed all the data into
multiple collections with 3 replicas on each collection.
Now, as part of server maintenance, these Solr nodes were rest
HDFS will still be there, just NOT on the core package, but as a plug-in or
contrib.
> On Aug 20, 2020, at 11:07 AM, Aroop Ganguly wrote:
>
> HDFS will still be there, just on the core package, but as a plug-in or
> contrib.
u also aware of what feature we are moving towards instead of HDFS?
> Will you be able to help me with the error that I'm running into?
>
> Thanks in advance!
>
>
> On Wed, 19 Aug, 2020, 5:24 pm Andrew MacKay,
> wrote:
>
>> I believe HDFS support is being depr
>
> > hdfs://
>
> >
>
> >
> hn1-pjhado.tvbhpqtgh3judk1e5ihrx2k21d.tx.internal.cloudapp.net:8020/user/solr-data/newcollsolr2/core_node3/data\
>
> >
oo. Thanks!
>
> On Wed, Aug 19, 2020 at 9:37 PM Joe Obernberger <
> joseph.obernber...@gmail.com> wrote:
>
> > Your exception didn't come across - can you paste it in?
> >
> > -Joe
> >
> > On 8/19/2020 10:50 AM, Prashant Jyoti wrote:
> > &g
Hi
I would not go down this road. What is the use case? Is this really the
solution?
Go read all the relevant docs and configuration provided by
Cloudera/HortonWorks and everything else related to SOLR and HDFS.
I am not inclined to help you down a road you do not want to travel. There be
me with the error that I'm running into?
> >
> > Thanks in advance!
> >
> >
> > On Wed, 19 Aug, 2020, 5:24 pm Andrew MacKay,
> > wrote:
> >
> >> I believe HDFS support is being deprecated in Solr. Not sure you want
> to
> >> continu
instead of HDFS?
Will you be able to help me with the error that I'm running into?
Thanks in advance!
On Wed, 19 Aug, 2020, 5:24 pm Andrew MacKay,
wrote:
I believe HDFS support is being deprecated in Solr. Not sure you want to
continue configuration if support will disappear.
On Wed,
d, 19 Aug, 2020, 5:24 pm Andrew MacKay,
wrote:
> I believe HDFS support is being deprecated in Solr. Not sure you want to
> continue configuration if support will disappear.
>
> On Wed, Aug 19, 2020 at 7:52 AM Prashant Jyoti
> wrote:
>
> > Hi all,
> > Hope you are
I believe HDFS support is being deprecated in Solr. Not sure you want to
continue configuration if support will disappear.
On Wed, Aug 19, 2020 at 7:52 AM Prashant Jyoti wrote:
> Hi all,
> Hope you are healthy and safe.
>
> Need some help with HDFS configuration.
>
> Could an
Hi all,
Hope you are healthy and safe.
Need some help with HDFS configuration.
Could anybody of you share an example of the configuration with which you
are running Solr with HDFS in any of your production/dev environments?
I am interested in the parts of SolrConfig.xml / Solr.in.cmd/sh which
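For reference, a minimal HdfsDirectoryFactory sketch for solrconfig.xml based on the documented parameters; the namenode host and paths are placeholders, not a tested production config:

```xml
<!-- Sketch only: host and paths are placeholders -->
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://namenode:8020/user/solr</str>
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
</directoryFactory>
<!-- and inside <indexConfig>: -->
<lockType>hdfs</lockType>
```

In deployments like Cloudera's, solr.hdfs.home is often passed as a JVM system property at startup rather than hardcoded here.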
iginal Message-
> From: Jan Høydahl
> Sent: Thursday, June 25, 2020 6:50 PM
> To: solr-user@lucene.apache.org
> Subject: Re: SOLR CDCR fails with JWT authorization configuration
>
> EXTERNAL SENDER: Exercise caution with links and attachments.
>
> I’m mostly trying
R CDCR fails with JWT authorization configuration
I’m mostly trying to identify whether what you are trying to do is a supported
option at all, or if perhaps CDCR is only tested without authentication in
place.
You would also be inte
> good to get sample security.json
>
> Thanks
>
> -Original Message-
> From: Jan Høydahl
> Sent: Thursday, June 25, 2020 5:25 PM
> To: solr-user@lucene.apache.org
> Subject: Re: SOLR CDCR fails with JWT authorization configuration
>
e me considering me as a newbie :). And it would also be good
to get a sample security.json
Thanks
-Original Message-
From: Jan Høydahl
Sent: Thursday, June 25, 2020 5:25 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR CDCR fails with JWT authorization configuration
"q":
> "dp":
> "dq":
> "qi":
> "alg":"RS256",
> "kid":
>
"dq":
"qi":
"alg":"RS256",
"kid":
"use":
}
}
ut I’m not sure that
is enough here. There may be code changes needed in CDCR components.
Jan
> 24. jun. 2020 kl. 19:42 skrev Phatkar, Swapnil (Contractor)
> :
>
> Hi Team ,
>
> I am trying to configure CDCR for SOLR 8.4.1 .
> With the provided configuration I can abl
Hi Team ,
I am trying to configure CDCR for Solr 8.4.1.
With the provided configuration I am able to replicate the indexes from the source
server to the target server. This setup even works with SSL configuration using
the HTTPS protocol.
But the moment I introduced JWT authorization by enforcing
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
All,
Ping. Any options for no-downtime TLS reconfiguration?
- -chris
On 4/23/20 11:35, Christopher Schultz wrote:
> All,
>
> Does anyone know if it is possible to reconfigure Solr's TLS
> configuration (specifically, t
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
All,
Does anyone know if it is possible to reconfigure Solr's TLS
configuration (specifically, the server key and certificate) without a
restart?
I'm looking for a zero-downtime situation with a single-server and an
updated TLS c
Karl, what would you do if that custom implementation stalls in GC, or knocks
Solr over?
On Thu, Feb 6, 2020 at 1:04 PM Karl Stoney
wrote:
> Spoke too soon, looks like it memory leaks. After about 1.3m the old gc
> times went through the roof and solr was almost unresponsive, had to
> abort. We'
Egor, would you mind sharing some best practices regarding cursorMark in
SolrEntityProcessor?
On Thu, Feb 6, 2020 at 1:04 PM Karl Stoney
wrote:
> Spoke too soon, looks like it memory leaks. After about 1.3m the old gc
> times went through the roof and solr was almost unresponsive, had to
> abo
Spoke too soon, looks like it memory leaks. After about 1.3m docs the old GC times
went through the roof and Solr was almost unresponsive; we had to abort. We're
going to write our own implementation to copy data from one core to another
that runs outside of Solr.
On 06/02/2020, 09:57, "Karl Stoney"
I cannot believe how much of a difference cursorMark and the sort order made.
Previously it died at about 800k docs; now we're at 1.2m without any slowdown.
Thank you so much
On 06/02/2020, 08:14, "Mikhail Khludnev" wrote:
Hello, Karl.
Please check these:
https://eur03.safelinks.pro
Hello, Karl.
Please check these:
https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html#constraints-when-using-cursors
https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-with-the-data-import-handler.html#solrentityprocessor
cursorMark="true"
Good luck.
On
Hey All,
I'm trying to implement a simplistic reindex strategy to copy all of the data
out of one collection, into another, on a single node (no distributed queries).
It's approx 4 million documents, with an index size of 26gig. Based on your
experience, I'm wondering what people feel sensible
On 10/25/2019 5:44 AM, Danilo Tomasoni wrote:
Another question, is softCommit sufficient to ensure visibility or
should I call a commit to ensure a new searcher will be opened?
softCommit automatically opens a new searcher?
There would be little point to doing a soft commit with openSearcher
Thank you all for your suggestions.
Now I changed my import strategy to ensure that the same document will
eventually be updated by different "batches";
this way I need only a single programmatic softCommit at the end of each
batch.
Configuration-side, I enabled autoCommit with op
"For every document processed, a soft commit is performed to make the update
visible to other concurrent update processes.”
Please do not do this! First, Real Time Get will always return the current doc,
whether you’ve opened a new reader or not. Second, this is an anti-pattern. I
agree with Pa
On 10/24/2019 1:52 AM, Danilo Tomasoni wrote:
For every document processed, a soft commit is performed to make the
update visible to other concurrent update processes.
This is not the way to do things. Doing a commit after every document
means that Solr will spend more time doing commits than
egment size, maxBufferedDocs, Merge Policies). We, at
Auto-Suggest, also do atomic updates daily and specifically changing merge
factor gave us a boost of ~4x during indexing. At current configuration,
our core atomically updates ~423 documents per second. I also do a few core
optimizations in between the full in
Hello all,
we have a Solr 7.3.1 instance with around 40 million documents in it.
After the initial one-shot import, we found an issue in the import
software; we updated it and re-ran the import, which will atomically
update (with set)
the existing documents.
The import is divided into processes,
HS256 is symmetric, so signing with the shared key should be all you need. I
don't see what you did wrong. Perhaps a space has sneaked in somewhere in the
key, or it was not copied as pure ASCII? I'd encourage you to try evaluating
the token you build in Java or Javascript using the online tool
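To sanity-check the key outside any tool, an HS256 JWT can be signed with nothing but the standard library; the claims and key below are made-up examples, not the real secret:

```python
# Minimal HS256 JWT signing using only the standard library, to verify a
# shared key outside Solr. Key and claims are made-up examples.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_hs256(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

token = sign_hs256({"sub": "solr-user"}, b"secret-key")
print(token.count("."))  # a JWT is three dot-separated segments -> prints 2
```

Comparing a token built this way against one from jwt.io with the same key and claims quickly shows whether a stray space or encoding issue crept into the key.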
Of course the secret key is just for my local development solr instance 🙄
> On Sep 19, 2019, at 10:35 PM, Dave wrote:
>
> I know this has nothing to do with the issue at hand but if you have a public
> facing solr instance you have much bigger issues.
>
>> On Sep 19, 2019, at 10:16 PM, Ty
I know this has nothing to do with the issue at hand but if you have a public
facing solr instance you have much bigger issues.
> On Sep 19, 2019, at 10:16 PM, Tyrone Tse wrote:
>
> I finally got JWT Authentication working on Solr 8.1.1.
> This is my security.json file contents
> {
> "authe
I finally got JWT Authentication working on Solr 8.1.1.
This is my security.json file contents
{
"authentication":{
"class":"solr.JWTAuthPlugin",
"jwk":{
"kty":"oct",
"use":"sig",
"kid":"k1",
"k":"xbQNocUhLJKSmGi0Qp_4hAVfls9CWH5WoTrw543WTXi5H6G-AXFlHRaTK
See answer in other thread. JWT works for 8.1 or later, don’t attempt it in 7.x.
You could try to turn on debug logging for org.apache.solr.security to get more
logging.
Jan Høydahl
> 13. sep. 2019 kl. 00:24 skrev Tyrone Tse :
>
> Jan
>
> I tried using the JWT Plugin https://github.com/cominve
Jan
I tried using the JWT Plugin https://github.com/cominvent/solr-auth-jwt
If my security.json file is
{
"authentication": {
"class":"com.cominvent.solr.JWTAuthPlugin",
"jwk" : {
"kty": "oct",
"use": "sig",
"kid": "solr",
All I could see in the solr.log was ( could it be the java version ?)
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(8.2.0):C1:[diagnostics={java.vendor=Oracle
Corporation, os=Mac OS X, java.version=1.8.0_60, java.vm.version=25.60-b23,
lucene.version=8.2.0, os.arch=x86_64,
Please check the error message in solr.log on the server side and paste that
here. Could be a bug 🕷
Jan Høydahl
> 10. sep. 2019 kl. 18:51 skrev Tyrone Tse :
>
> Jan using https://mkjwk.org/
> I generated the following JWK
>
> {
>
> "kty": "oct",
> "use": "sig",
> "kid": "solr",
> "k":
>
Jan using https://mkjwk.org/
I generated the following JWK
{
"kty": "oct",
"use": "sig",
"kid": "solr",
"k":
"pIpVnjhuAj9DBg8e2lwya7o_uZMM3Wqo2eK0uchOza0vBS-orZNYTkLcHTLXF9JaCBR08tWfFEWVPENF6sXKuaj8Mn65Kc3QUmS-csblVvjj69dXk2Mi-Zs2iDDM3QyyvdiyRpfxE-xKwwjhU47xs7M0Dq69I1UE5nrFkczLf9qe3b47ha
I think you are confusing JWK with the JWT token. JWK is only for defining the
key, see https://mkjwk.org for an online JWK generator, you can choose HS256 as
algorithm. Put the generated JWK in Solr's config and also use the generated
key to sign your JWT. Then Solr should be able to validate t
Jan
Can my jwk object be something like
{alg": "HS256", "typ": "JWT",
"sub": "1234567890", "name": "John Doe", "iat": 1516239022,
“k" : "secret-key"}
Where k is the JWT secret key?
Sent from my iPhone
> On Sep 9, 2019, at 1:48 AM, Jan Høydahl wrote:
>
> In your security.json, add a JWK ma
In your security.json, add a JWK matching your signing algorithm, using the
“jwk” JSON key.
Example:
“jwk” : { "kty" : "oct", "kid" : "0afee142-a0af-4410-abcc-9f2d44ff45b5", "alg"
: "HS256", "k" : "FdFYFzERwC2uCBB46pZQi4GG85LujR8obt-KWRBICVQ" }
Of course you need to find a way to encode your pa
configure JWT
authentication
https://lucene.apache.org/solr/guide/8_1/jwt-authentication-plugin.html#editing-jwt-authentication-plugin-configuration
Which says that the security.json file will have the the following JSON object
{ "authentication": { "class":"solr.JW
ResourceLoader worked brilliantly - my brain, on the other hand, not so much
I'm working on a custom tokenizer (Solr 7.3.0) whose Factory needs to read a
configuration file.
I have been able to run it successfully in my local reading from a local
directory.
I would like to be able to have the configuration read from zookeeper
(similarly to how SynonymGraphFilterFa
As someone else wrote, there are a lot of uncertainties, and I recommend testing
yourself to find the optimal configuration. Some food for thought:
How many clients do you have and what is their concurrency? What operations
will they do? Do they access Solr directly? You can use JMeter to simulate
d further?
For the sake of maintainability and ease of configuration, we wouldn't want
to go beyond 4 SolrClouds. So at this point I would say no. But open to
ideas if you think it would be greatly advantageous.
So if we go with the 3rd configuration option we would be roughly indexing
1 bil
Rahul Goswami wrote:
> We are running Solr 7.2.1 and planning for a deployment which will grow to
> 4 billion documents over time. We have 16 nodes at our disposal. I am deciding
> between 3 configurations:
>
> 1 cluster - 16 nodes
> vs
> 2 clusters - 8 nodes each
> vs
> 4 clusters - 4 nodes each
You
Hello,
We are running Solr 7.2.1 and planning for a deployment which will grow to
4 billion documents over time. We have 16 nodes at our disposal. I am deciding
between 3 configurations:
1 cluster - 16 nodes
vs
2 clusters - 8 nodes each
vs
4 clusters - 4 nodes each
Irrespective of the configuration
a little bit larger than your largest
> file).
> If possible you can try to avoid storing the NLP / ML models in Solr but
> provide them on a share or similar where all Solr nodes have access to.
>
> > Am 11.06.2019 um 00:32 schrieb Pratik Patel :
> >
> > Hi,
> >
but
provide them on a share or similar where all Solr nodes have access to.
> Am 11.06.2019 um 00:32 schrieb Pratik Patel :
>
> Hi,
>
> I am trying to upload a configuration to "MiniSolrCloudCluster" in my unit
> test. This configuration has some binary files for
Hi,
I am trying to upload a configuration to "MiniSolrCloudCluster" in my unit
test. This configuration has some binary files for NLP related
functionality. Some of these binary files are bigger than 5 MB. If I try to
upload configuration with these files then it doesn't work. I
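One knob worth checking (an assumption, not something confirmed in this thread): ZooKeeper rejects znodes larger than jute.maxbuffer, roughly 1 MB by default, and the limit must be raised on both the ZooKeeper and Solr sides. A sketch with example values:

```shell
# Example values only (10 MB); both sides must agree on the limit
# solr.in.sh
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
# zookeeper-env.sh (or the ZK server's JVM flags)
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=10485760"
```

In a MiniSolrCloudCluster unit test the equivalent would be setting the system property before the embedded cluster starts. As the replies note, keeping large models out of ZooKeeper entirely is usually the better design.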
Hope that helps
On Mon, 10 Jun 2019 at 13:01, Paul wrote:
> Hi,
>
> I am not sure if Basic Authentication is possible in SOLR standalone
> configuration (version 7.6). I have a working SOLR installation using SSL.
> When following the docs I add options into solr.in.cm
Hi,
I am not sure if Basic Authentication is possible in SOLR standalone
configuration (version 7.6). I have a working SOLR installation using SSL.
When following the docs I add options into solr.in.cmd, as in:
SOLR_AUTH_TYPE="basic"
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:Sol
Thanks guys, I found that the issue I had was because of some binary files
(NLP models) in my configuration. Once I fixed that, I was able to set up a
cluster. These exceptions are still logged but they are logged as INFO and
were not the real issue.
Thanks Again
Pratik
On Tue, Jun 4, 2019 at 4
This pretty frequently happens when you bounce solr instances around. I usually
use
ps aux | grep solr
to find the pid to kill.
I can’t help you at all about why lsof isn’t working, that has nothing to do
with Solr.
Best,
Erick
> On Jun 5, 2019, at 5:23 AM, Mary Eji wrote:
>
> Hi,
> I’ve b
Hi,
I’ve been using Solr for a while now. I’m having trouble using the software
currently: I’m getting this error "Oops! Looks like port 8983 is already being
used by another process. Please choose a different port”.
I’m using Mac OS. When this happened some time ago, in the Fall or so, I used
ls
As far as I know, the configuration files need to already be in the test/resources
directory before running. I copy them to the directory using the
maven-antrun-plugin in the generate-test-sources phase. And the framework can
"create a collection” without the config files, but it will obvi
On the surface, this znode already exists:
/solr/configs/collection2
So it looks like somehow you're
> On Jun 4, 2019, at 12:29 PM, Pratik Patel wrote:
>
> /solr/configs/collection2
Hello Everyone,
I am trying to run a simple unit test using solr test framework. At this
point, all I am trying to achieve is to be able to upload some
configuration and create a collection using solr test framework.
Following is the simple code which I am trying to run.
private static final
-
From: Shawn Heisey
Sent: Friday, April 26, 2019 9:45 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud configuration
On 4/26/2019 6:14 AM, Sadiki Latty wrote:
> What you're saying makes sense but is it achievable without downtime? i.e: Is
> it achievable to change the
On 4/26/2019 6:14 AM, Sadiki Latty wrote:
What you're saying makes sense, but is it achievable without downtime? i.e.: Is
it achievable to change the replication factor to 2 as you suggest, and have Solr
put the sharded documents back together and then replicate?
Just changing the replicationFactor par
-Original Message-
From: Jörn Franke
Sent: Thursday, April 25, 2019 5:15 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud configuration
Are the replicas on the other node?
Am 25.04.2019 um 22:44 schrieb Sadiki Latty :
Hi,
I am doing some testing with Solr Cloud and I have a c
Sent: Thursday, April 25, 2019 7:44 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud configuration
On 4/25/2019 2:44 PM, Sadiki Latty wrote:
> - replica 1
>
> If I need to upgrade Solr, the recommended method is to update one at a time.
> However, when I bring down on
I don't have any replicas the 2 Solr instances are both leaders and are split
between the two.
-Original Message-
From: Jörn Franke
Sent: Thursday, April 25, 2019 5:15 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud configuration
Are the replicas on the other node?
de that is up. After some research
I am wondering if this is happening because of the shard configuration. Is there any way
for me to do an upgrade without any/significant downtime?
If your replicationFactor is 1, you have zero redundancy. As soon as
the Solr instance hosting that single c
longer
> work and I get the error "no servers hosting shard" from the node that is up.
> After some research I am wondering if this is happening because of the shard
> configuration. Is there any way for me to do an upgrade without
> any/significant downtime?
>
> For e
the shard
configuration. Is there any way for me to do an upgrade without
any/significant downtime?
For example: Is it possible to put everything on 1 node, bring the node down,
update the downed node bring it back up, wait for it to mirror then do the same
to the other node?
Thanks,
Sid
n
>> I do it ?
>>
>> I've created the following question in StackOverflow
>>
>>
>> https://stackoverflow.com/questions/55530142/solr-7-6-0-ignores-configuration-file-bin-solr-in-sh?noredirect=1#comment97766221_55530142
>>
>> Br, Jaana Miettinen
>>
>>
reated the following question in StackOverflow
>
>
> https://stackoverflow.com/questions/55530142/solr-7-6-0-ignores-configuration-file-bin-solr-in-sh?noredirect=1#comment97766221_55530142
>
> Br, Jaana Miettinen
>
>
Dear recipients,
Can you help me with the following issue:
I need to present my timestamps in Solr in UTC+2 instead of UTC. How can
I do it?
I've created the following question in StackOverflow
https://stackoverflow.com/questions/55530142/solr-7-6-0-ignores-configuration-file-bin-so
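Solr itself always stores and returns dates in UTC; the usual answer is to convert in the client when displaying. A minimal sketch (function name is hypothetical):

```python
# Solr always stores and returns dates in UTC; convert on the client
# when displaying. Standard library only.
from datetime import datetime, timedelta, timezone

def to_utc_plus_2(solr_date: str) -> str:
    """Convert a Solr UTC date string to UTC+2 for display."""
    dt = datetime.strptime(solr_date, "%Y-%m-%dT%H:%M:%SZ")
    dt = dt.replace(tzinfo=timezone.utc).astimezone(timezone(timedelta(hours=2)))
    return dt.isoformat()

print(to_utc_plus_2("2019-04-05T10:00:00Z"))  # -> 2019-04-05T12:00:00+02:00
```

Keeping the stored values in UTC and shifting only at render time avoids ambiguity when queries or facets span daylight-saving boundaries.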
Hi Dominique
Were you able to resolve this ?
I am also stuck with understanding a minimal permission-set to give to a
readonly user to read from the /select endpoint.
Regards
Aroop
> On Jan 1, 2019, at 11:23 PM, Dominique Bejean
> wrote:
>
> Hi,
>
> I created a Jira issue
> https://issues.
Sure. In both cases define a fieldType with those attributes set however you
want. Any field that is defined with that fieldType will have the defaults you
specify unless overridden on the field definition itself.
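A sketch of what this describes, with hypothetical names:

```xml
<!-- Hypothetical names; the defaults live on the fieldType -->
<fieldType name="string_noindex" class="solr.StrField"
           indexed="false" multiValued="true"/>
<!-- inherits indexed="false" and multiValued="true" -->
<field name="notes" type="string_noindex" stored="true"/>
<!-- overrides the fieldType default -->
<field name="tag" type="string_noindex" stored="true" indexed="true"/>
```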
Best,
Erick
> On Feb 25, 2019, at 9:08 AM, Dionte Smith wrote:
>
> Hi,
>
> I h
Hi,
I have two questions about the field default values for multivalued and indexed.
1. Is it possible to make new fields have the indexed attribute set to false
by default for a schema? I understand this wouldn't normally be the case, but
we have a use case where it would be preferable as
Hi,
I'm running solr v6.6.5 using a pretty generic log4j properties file:
# Default Solr log4j config
# rootLogger log level may be programmatically overridden by -Dsolr.log.level
solr.log=${solr.log.dir}
log4j.rootLogger=INFO, file, CONSOLE
# Console appender will be programmatically disabled w
Hi,
I created a Jira issue
https://issues.apache.org/jira/browse/SOLR-13097
Regards.
Dominique
Le lun. 31 déc. 2018 à 11:26, Dominique Bejean
a écrit :
> Hi,
>
> In debugging mode, I discovered that only in SolrCloud mode is the collection
> name extracted from the request path in the init() m
Hi,
In debugging mode, I discovered that only in SolrCloud mode is the collection
name extracted from the request path, in the init() method of
HttpSolrCall.java
if (cores.isZooKeeperAware()) {
// init collectionList (usually one name but not when there are
aliases)
...
}
So in S
Hi,
After reading the log file more carefully, here is my understanding.
The request
http://2:xx@localhost:8983/solr/biblio/select?indent=on&q=*:*&wt=json
report this in log
2018-12-30 12:24:52.102 INFO (qtp1731656333-20) [ x:biblio]
o.a.s.s.HttpSolrCall USER_REQUIRED auth header Basic
Hi,
I am trying to configure the security.json file, in order to define the
following users and permissions :
- user "admin" with all permissions on all collections
- user "read" with read permissions on all collections
- user "1" with only read permissions on biblio collection
- user
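A hedged sketch of such a security.json using the RuleBasedAuthorizationPlugin; password hashes are elided, role names are invented, and permission order matters because the first matching rule wins (so the collection-specific rule comes first):

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "admin": "<hash>",
      "read": "<hash>",
      "1": "<hash>"
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {"name": "read", "collection": "biblio",
       "role": ["admins", "readers", "biblio-readers"]},
      {"name": "read", "role": ["admins", "readers"]},
      {"name": "all", "role": "admins"}
    ],
    "user-role": {
      "admin": "admins",
      "read": "readers",
      "1": "biblio-readers"
    }
  }
}
```

With this layout user "1" can read only biblio (the first rule), "read" can read everywhere, and "admin" passes every rule via the admins role.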
>> One benefit of the way recommended in the reference guide is that it lets
>> you use zookeeper upconfig/downconfig as deployment tools on a set of text
>> files, which in turn allows you to manage your Solr configuration like any
>> other bit of source code, e.g. with version co
Best regards,
Edward
Em ter, 20 de nov de 2018 18:21, Adam Constabaris David,
>
> One benefit of the way recommended in the reference guide is that it lets
> you use zookeeper upconfig/downconfig as deployment tools on a set of text
> files, which in turn allows you to manage your Solr c
David,
One benefit of the way recommended in the reference guide is that it lets
you use zookeeper upconfig/downconfig as deployment tools on a set of text
files, which in turn allows you to manage your Solr configuration like any
other bit of source code, e.g. with version control and, if your
seems extremely annoying. wouldn't
> > it just be easier if you could edit the config files raw from the admin
> > UI?
> >
> > On Tue, Nov 20, 2018 at 1:41 PM Pure Host - Wolfgang Freudenberger <
> > w.freudenber...@pure-host.de> wrote:
> >
> >
berger <
> w.freudenber...@pure-host.de> wrote:
>
> > Hi David,
> >
> >
> > You can upload configuration to the zookeeper - it is nearly the same as
> > the standaloneconfig.
> >
> > You can also edit the schema.xml in this file. At least I d
> You can upload configuration to the zookeeper - it is nearly the same as
> the standaloneconfig.
>
> You can also edit the schema.xml in this file. At least I do it like this.
>
> Mit freundlichem Gruß / kind regards
>
> Wolfgang Freudenberger
> Pure Host IT-Services
> Müns
Hi David,
You can upload configuration to the zookeeper - it is nearly the same as
the standalone config.
You can also edit the schema.xml this way. At least I do it like this.
Mit freundlichem Gruß / kind regards
Wolfgang Freudenberger
Pure Host IT-Services
Münsterstr. 14
48341
I can't seem to find the documentation on how to actually edit the schema
file myself; everything seems to lead me to using an API to add fields and
stop words etc. This is more or less obnoxious, and the admin API for
adding fields/field types is not exactly functional. Is there a guide or
someth