Thank you, Chris,
Per the SANs, we had attempted that, but had already generated some
certificates. I will see if we can back out of that with a fresh install using
SANs. I will give the first option a try, and I appreciate the assistance.
Regards,
Kelly
Kelly,
On 6/1/18 5:41 PM, Kelly Rusk wrote:
> I can directly connect to either node without issue, it is only
> when the Load Balancer routes to either solr1 or solr2 that I get
> the security error (ex. https://solrlb.com:8983/solr). The Load
>
Shawn,
On 6/1/18 5:25 PM, Shawn Heisey wrote:
> On 6/1/2018 2:01 PM, Kelly Rusk wrote:
>> We have solr1.com and solr2.com self-signed certs that correspond
>> to the two servers. We also have a load balancer with an address
>> named solrlb.com.
Thank you Shawn,
I can directly connect to either node without issue, it is only when the Load
Balancer routes to either solr1 or solr2 that I get the security error (ex.
https://solrlb.com:8983/solr). The Load Balancer is not managing HTTPS; it is
just acting as a pure TCP proxy, nothing more.
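Since the balancer is a pure TCP proxy, TLS terminates on whichever node answers, so that node's certificate has to list the load-balancer name as well. A fresh self-signed cert whose SANs cover all three hostnames from the thread might be generated like this (a sketch with openssl; `-addext` assumes OpenSSL 1.1.1+, and importing the result into Solr's keystore is a separate step):

```shell
# One cert covering both nodes and the LB name (hostnames from the thread)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout solr.key -out solr.crt -days 365 \
  -subj "/CN=solrlb.com" \
  -addext "subjectAltName=DNS:solr1.com,DNS:solr2.com,DNS:solrlb.com"

# Verify the SANs made it into the cert
openssl x509 -in solr.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The same key/cert pair can then be imported into each node's keystore so that both the direct hostnames and the load-balanced one verify.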
On 6/1/2018 2:01 PM, Kelly Rusk wrote:
> We have solr1.com and solr2.com self-signed certs that correspond to the two
> servers. We also have a load balancer with an address named solrlb.com. When
> we hit the load balancer it gives us an SSL error, as it is passing us back
> to either
Hello all,
We are using self-signed certificates for our two servers in an HTTPS
Master/Slave configuration running on Windows (please no discussions about the
merits of Linux vs. Windows for Solr; it's a requirement).
We have solr1.com and solr2.com self-signed certs that correspond to the
Could probably send it as pure data, but yes it can get annoying.
Either way, the JIRA issue and the patch are probably the best way
forward. That makes it easier for others on different branch levels to
test it out, as well as to discuss.
In terms of branches for the patch, master is the next big
Agreed, but yes it skips them even when explicitly referenced by name. The
line I linked to (530) will skip any file whose name begins with a dot. If
there's a better workaround than what I've proposed then I'm certainly open
to it.
Best,
Ian
On Fri, Jun 1, 2018 at 1:25 PM, Alexandre Rafalovitch
Does it still skip them if they are provided directly by name? It is rather
a narrow use case.
Regards,
Alex
On Fri, Jun 1, 2018, 1:01 PM Ian Goldsmith-Roooney, <
iangoldsmithroo...@gmail.com> wrote:
> Hello,
>
> I was hoping to make a small change to allow the simple POST tool to accept
>
Hello,
I was hoping to make a small change to allow the simple POST tool to accept
a command line arg (-Dhidden=yes) so that it will not ignore hidden files.
Currently there is no toggle; it always ignores hidden files.
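The proposed change amounts to making the dot-file filter conditional. A rough shell illustration of the behavior (the `-Dhidden=yes` flag name comes from the proposal; the `docs` directory and the shell form are just for demonstration):

```shell
# Sketch of the proposed toggle for the simple POST tool's dot-file filter.
# "docs" is a hypothetical input directory.
HIDDEN=${HIDDEN:-no}
if [ "$HIDDEN" = "yes" ]; then
  find docs -type f                  # proposed: include dot-files on request
else
  find docs -type f ! -name '.*'     # current behavior: dot-files always skipped
fi
```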
You are sending email to the wrong address. Send email to
solr-user-unsubscr...@lucene.apache.org.
Thanks,
Imad
On Friday, June 1, 2018, 11:11:06 AM CDT, wrote:
THIS IS MY 2ND REQUEST - PLEASE UNSUBSCRIBE ME
http://lmgtfy.com/?q=solr-user+unsubscribe
wrote on Fri., 1 June 2018, 18:11:
>
> THIS IS MY 2ND REQUEST - PLEASE UNSUBSCRIBE ME
>
>
THIS IS MY 2ND REQUEST - PLEASE UNSUBSCRIBE ME
It simply auto-mounts NFS mount points under /net/$host/$export, so it is no
different from having manually mounted NFS mount points for the purposes
of backing up.
Just be sure that your NFS host is set to export the appropriate file
system location with the needed net mask so that your various
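The export on the NFS host might look roughly like this `/etc/exports` line (the path and netmask are illustrative, not from the thread):

```
# /etc/exports on the NFS host -- illustrative path and netmask
/srv/solr-backups  192.168.1.0/255.255.255.0(rw,sync,no_subtree_check)
```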
Hello,
I am running Solr Cloud 6.6.4 in Docker containers on three RHEL7
virtual machines. The containers are networked in bridge mode.
I can access the Solr Admin web interface on the nodes and I can see all
nodes in the /live_nodes tree in ZooKeeper.
ZooKeeper is installed directly on the
Hi,
We are running SolrCloud with version 5.5.4. As I understand, Solr
Collection Backup and Restore API are only supported from version 6
onwards. So I am wondering what the best mechanism is to get our collections
backed up on this older Solr version.
When I ran backup command on a particular node (curl
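On 5.x, backup is per-core via the replication handler rather than the Collections API, so a command along these lines would be run against each core on each node (host, core name, and path here are illustrative, and this assumes a running Solr):

```shell
# Hypothetical host/core/paths; run once per core, per node
curl "http://solr1.com:8983/solr/mycore/replication?command=backup&location=/net/nfshost/backups&name=mycore-snap"

# Poll the handler for snapshot completion
curl "http://solr1.com:8983/solr/mycore/replication?command=details"
```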
Thank you for a quick response,
I have moved the
/ /
from // to // section and it
is working fine.
Once again
Thank you
Damian
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
On 5/31/2018 7:19 AM, servus01 wrote:
What I've got: an XML file with date/description fields which are not part of
the index:
(start-date-time="2018-04-01T18:00:00.000+02:00" code-name="MD 28")
(start-date-time="2018-04-07T15:00:00.000+02:00" code-name="MD 29")
On 5/31/2018 11:38 PM, Zheng Lin Edwin Yeo wrote:
I am planning to set up Solr with replica on different machine. How should
I go about configuring the setup? For example, should the replica node
be started on the host machine, or on the replica machine?
I will be setting this in Solr
Sorry Francois, but I still don’t get it. Is your problem building a query or
retrieving results from Solr? If it is building a query, then it seems like your
custom logic. You might be able to index more things in Solr and move that
logic into Solr, but I really don’t get what is in your Test
Hey Emir,
yes of course. My problem is more related to dynamic data in the search
string.
I have an xml which is not part of my solr index and has matchday data e.g.:
> (start-date-time="2018-04-01T18:00:00.000+02:00" code-name="MD 28")
> (start-date-time="2018-04-07T15:00:00.000+02:00"
understood. Thank you!
I thought of the following solution for my problem: Atomic Update + Replicas.
I can set up my *UpdateProcessorsChain* in the following order:
.. .
MergerUpdateProcessor will use the getUpdatedDocument function of
DistributedUpdateProcessor
There is no quick answer, it really depends on a lot of factors...
*TL;DR* : Updating a single document field will likely take more time in a
bigger collection.
*Partial Document Update*
First of all, which field are you updating?
Depending on the type and attributes you may end up in different
Hi Francois,
I am not sure I follow your case: “which are NOT part of the index”. You have
to convert your XML doc to Solr input doc and index it in order to be able to
search. Alternatively, you can treat the entire doc as a single field and search
it using phrase search, but that solution is
Hi Rushikesh,
There is no update of documents in Solr - it is always indexing a new document
to a new segment. That means that indexing operation is equally heavy on any
collection. But that does not mean that updates will take equal time. There are
other activities that are heavier on larger
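For context, a partial (atomic) update still makes Solr re-index the whole stored document internally; only the request itself is small. A request of that shape (core, document id, and field name are hypothetical, and this assumes a running Solr):

```shell
# Atomic update: sets one field, but Solr rewrites the full document
curl 'http://localhost:8983/solr/mycore/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"doc1","price_i":{"set":42}}]'
```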
Thanks a lot Shawn. I had tried with the documented approach, but since
I use SolrClient.add to add documents to the index, I could not "port"
the documented approach to my case (probably I am missing something).
The custom HttpClient suggestion worked as expected!
On 31/05/2018 06:16 PM, Shawn