Markus,
I may be stating the obvious here, but I didn't notice garbage
collection mentioned in any of the previous messages, so here goes. In
our experience, almost all of the ZooKeeper timeouts etc. have been
caused by overly long garbage collection pauses. I've summed up my
observations here:
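One quick way to confirm this kind of problem is to scan the JVM GC log for stop-the-world pauses longer than the ZooKeeper session timeout. A minimal sketch (the log-line format and function names here are my own illustration, not from this thread):

```python
import re

# Matches the classic JVM safepoint log line, e.g.
# "Total time for which application threads were stopped: 17.5000000 seconds"
PAUSE_RE = re.compile(
    r"Total time for which application threads were stopped: ([\d.]+) seconds")

def long_pauses(log_lines, threshold_secs):
    """Return the stop-the-world pause durations (seconds) that exceed
    threshold_secs, e.g. the ZooKeeper session timeout."""
    pauses = []
    for line in log_lines:
        m = PAUSE_RE.search(line)
        if m:
            secs = float(m.group(1))
            if secs > threshold_secs:
                pauses.append(secs)
    return pauses
```

If this returns anything for a threshold near your ZooKeeper session timeout, GC tuning is the first place to look.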
I have configured the Solr managed-schema as follows.
The configuration below is for full-text search:
The following is the configuration for the spell-check type of field.
Below i
Done https://issues.apache.org/jira/browse/SOLR-11938.
Thanks!
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
I think SCP will be fine. Shawn's comment is probably the issue.
Best,
Erick
On Thu, Feb 1, 2018 at 4:34 PM, Shawn Heisey wrote:
> On 2/1/2018 4:32 PM, Jeff Dyke wrote:
>> I just created a tar file, actually a tar.gz file and scp'd to a server, at
>> first i was worried that the gzip caused issu
On 2/1/2018 4:32 PM, Jeff Dyke wrote:
> I just created a tar file, actually a tar.gz file and scp'd to a server, at
> first i was worried that the gzip caused issues, but as i mentioned no
> errors on start up, and i thought i would see some. @Erick, how would you
> recommend. This is going to be
On 9/25/2017 7:25 AM, Vikas Mehra wrote:
> Cluster has 1 zookeeper node and 3 solr nodes. There is only one collection
> with 3 shards. Data is continuously indexed using SolrJ API. System is
> running on AWS and I am taking backup on EFS (Elastic File System).
>
> Observed behavior:
> If indexing
I just created a tar file, actually a tar.gz file, and scp'd it to a server. At
first I was worried that the gzip caused issues, but as I mentioned there were no
errors on startup, and I thought I would see some. @Erick, how would you
recommend doing it? This is going to be less of an issue b/c I need to build the
inde
Have you considered updateable docValues?
Best,
Erick
On Thu, Feb 1, 2018 at 10:55 AM, Brian Yee wrote:
> Hello,
>
> I want to use external file field to store frequently changing inventory and
> price data. I got a proof of concept working with a mock text file and this
> will suit my needs.
One note, be _very_ sure you copy in binary mode..
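One way to be sure the copy survived intact, whatever the transfer mode, is to compare checksums on both ends. A minimal sketch (the function name is illustrative):

```python
import hashlib

def file_digest(path, algo="sha256", chunk_size=1 << 20):
    """Hash a file in binary mode, reading in chunks to bound memory use."""
    h = hashlib.new(algo)
    # "rb" matters: text mode could mangle the index's binary segment files.
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Run it against the tar.gz on the source and destination machines; if the hex digests differ, the transfer corrupted the file.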
On Thu, Feb 1, 2018 at 1:33 PM, Shawn Heisey wrote:
> On 2/1/2018 12:56 PM, Jeff Dyke wrote:
>> That's exactly what i thought as well. The only difference and i can try
>> to downgrade OSX is 7.2, and i grabbed 7.2.1 for install on Ubuntu.
I am having the same problem when I try to restore a backup index.
There's no wildcard for the CLUSTERSTATUS command. You can request
status for specific _collections_ by using the "collection" parameter
though.
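For illustration, a sketch of building such a request (the base URL and collection name are placeholders):

```python
from urllib.parse import urlencode

def clusterstatus_url(base_url, collection=None):
    """Build a Collections API CLUSTERSTATUS request URL. There is no
    wildcard, but a single collection can be named via the 'collection'
    parameter."""
    params = {"action": "CLUSTERSTATUS", "wt": "json"}
    if collection is not None:
        params["collection"] = collection
    return "%s/admin/collections?%s" % (base_url.rstrip("/"), urlencode(params))
```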
Best,
Erick
On Thu, Feb 1, 2018 at 10:15 AM, Atita Arora wrote:
> Hi Erick,
>
> Just as you mentioned about clusterstatus, I am using the same for almo
On 2/1/2018 12:56 PM, Jeff Dyke wrote:
> That's exactly what i thought as well. The only difference and i can try
> to downgrade OSX is 7.2, and i grabbed 7.2.1 for install on Ubuntu. I
> didn't think a point minor point release would matter.
>
> solr@stagingsolr01:~/data/issuers/data$ ls -1
> 98
Hi everyone.
Today I replaced in my code all the IntPoint insertions and range queries
with the NumericDocValuesField ones, since I did not find a way to update
the value of an IntPoint. After such replacements, the previous test cases
still work and the overall performance seems to be the same, s
Hey David,
Thanks for your suggestions! I think I’ve got the right behaviour now; I’ve
done fq={!parent which=is_parent:true score=total v='+is_parent:false
+{!func}density'} desc instead of sort=…
Side note: the grid cells can be POLYGON or MULTIPOLYGON, so BBoxField didn’t
work when I tried
That's exactly what I thought as well. The only difference, and I can try
to downgrade, is OSX is 7.2 and I grabbed 7.2.1 for install on Ubuntu. I
didn't think a minor point release would matter.
solr@stagingsolr01:~/data/issuers/data$ ls -1
981552
index
_mg8.dii
_mg8.dim
_mg8.fdt
_mg8.fdx
_mg
Hello,
I want to use external file field to store frequently changing inventory and
price data. I got a proof of concept working with a mock text file and this
will suit my needs.
What is the best way to keep this file updated in a fast way. Ideally I would
like to read changes from a Kafka qu
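For reference, the external file itself is just "docKey=value" lines. A minimal sketch of regenerating it from an in-memory map (names are illustrative; sorting by key is optional):

```python
def write_external_field_file(path, values):
    """Write an ExternalFileField data file: one 'docKey=value' line per
    document, keyed by the field configured as keyField in the schema."""
    with open(path, "w") as f:
        for key in sorted(values):  # sorted output keeps the file diffable
            f.write("%s=%s\n" % (key, values[key]))
```

A consumer reading price changes from a queue could periodically dump its current map with this and then issue a commit (or reloadCache, depending on configuration) so Solr picks up the new file.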
On 2/1/2018 11:14 AM, Jeff Dyke wrote:
I've been developing locally on OSX and am now going through the process of
automating the installation on AWS Ubuntu. I have created a core, added my
fields and then untarred the data directory on my Ubuntu instance,
restarted solr (to hopefully reindex),
Hi Erick,
Just as you mentioned about clusterstatus, I am using the same for almost
the same use case. The only issue I run into is that I need some way to
use a prefix with the collection param; is there some way to do that, so
that I can query the specific collections of interest?
Note : My
I've been developing locally on OSX and am now going through the process of
automating the installation on AWS Ubuntu. I have created a core, added my
fields, and then untarred the data directory on my Ubuntu instance and
restarted Solr (to hopefully reindex), but no documents are seen.
Nor are any er
Thanks to both.
Finally I've found a way to do it with HAProxy: I do what wunder said,
sending the command for every collection and seeing if it's able to answer.
It even answers while recovering, so it looks like it takes the data from
other nodes or uses the data it has.
Greetings!!
On 1 Feb 2018 17:57,
Also, “recovering” is a status for a particular core in a collection. A Solr
process might have some cores that are healthy and some that are not.
Even if you only have one collection, you can still have multiple cores (with
different status) from the same collection on one node.
Personally, I
ok, good to know that 7.x shows good performance for you too.
1) Regarding the ZooKeeper problem, do you know for sure that it does not
occur in 6.x?
I would suggest writing a small load test that can send a similar
kind of load to 6.x and 7.x clusters and see which one breaks.
I know
The Collections API CLUSTERSTATUS essentially gives you back the ZK
state.json for individual collections (or your cluster, see the
params). One note: Just because the state.json reports a replica as
"active" isn't definitive. If the node died unexpectedly its replicas
can't set the state when shut
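To illustrate the kind of check a load-balancer script could layer on top of this, here is a sketch that walks a parsed CLUSTERSTATUS response and flags a node whose hosted replicas are not all "active" (the key paths follow the usual state.json layout but treat them as assumptions, and keep in mind the caveat that "active" is not definitive if the node died unexpectedly):

```python
def node_is_healthy(cluster_status, node_name):
    """Given parsed CLUSTERSTATUS JSON, return True if every replica hosted
    on node_name reports state 'active'."""
    collections = cluster_status.get("cluster", {}).get("collections", {})
    for coll in collections.values():
        for shard in coll.get("shards", {}).values():
            for replica in shard.get("replicas", {}).values():
                if (replica.get("node_name") == node_name
                        and replica.get("state") != "active"):
                    return False
    return True
```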
And the coupon has no expiration date on it (LOL). Thank you again, Emir!
Best Regards,
Wendy
quote: "The problem is that this includes children that DON’T touch the
search area in the sum. How can I only include the shapes from the first
query above in my sort?"
Unless I'm misunderstanding your intent, I think this is a simple matter of
adding the spatial filter to the parent join query y
Hi Wendy,
You are welcome! I’ll put your lunch coupon in my wallet, just in case I get
hungry around NJ ;)
Regards,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 1 Feb 2018, at 16:26, Wendy2 wrot
This seems pretty serious. Please create a Jira issue
Sent from my iPhone
> On Feb 1, 2018, at 12:15 AM, dennis nalog
> wrote:
>
> Hi,
> We are using Solr 7.1 and our Solr setup is master-slave replication.
> We encounter this issue that when we disable the replication in master via UI
> or U
Hi,
We are using Solr 7.1 and our Solr setup is master-slave replication.
We encounter an issue where, when we disable replication on the master via the UI
or URL (server/solr/solr_core/replication?command=disablereplication), the data
in our slave servers suddenly becomes 0.
Just wanna know if this
Excellent!!! Thank you so much for all your help, Emir!
Both worked now and I got 997 result counts back as the expected number :-)
/rcsb/search?q=method:"x-ray*" "Solution NMR"&mm=1
/rcsb/search?q=+method:"x-ray*" +"Solution NMR"&mm=1
I will keep this in mind regarding queries with multiple p
On 2/1/2018 5:40 AM, solr2020 wrote:
We have installed solr which is running in jetty 9x version. We are trying
to change the default solr url to required URL as given below.
Default url: http://localhost:8983/solr
Required URL :http://test.com/solr
To achieve this we are trying to configure v
It looks like CDCR is entirely broken in 7.2.0.
We have been using CDCR to replicate data from our on-prem systems to
SolrClouds hosted in Google Cloud.
We used the Lucene index upgrader to do an in-place upgrade of the indexes
in all our systems.
In at least one case we deleted all the rows from a co
Hi, I'm using the scale function and facing an issue: when my result set
contains only one result, or multiple results with the same value, scale
returns data at the min level instead of the high value. Any idea how I can
get the high value when only one result is present or multiple
results wit
Hi Wendy,
The query now looks as expected, but you are not getting the results you
expect. The reason is that edismax's mm parameter is what matters: you are
setting it to 7 and you have two parts to match, so it is effectively always
AND, and you don't have such documents. You can set it to 1 and it will be OR.
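The mm semantics being described can be sketched as follows (a simplified illustration, not Solr's actual implementation):

```python
def min_should_match_satisfied(clause_matches, mm):
    """edismax mm semantics, simplified: a document matches when at least
    mm of its optional clauses match. With two clauses, mm=2 behaves like
    AND and mm=1 behaves like OR."""
    return sum(1 for matched in clause_matches if matched) >= mm
```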
Good morning, Emir,
Here are the debug outputs for case 1f-a (q=method:"x-ray*" "Solution NMR") and
1f-b (q=+method:"x-ray*" +"Solution NMR"); both returned zero counts. It
looks like the query strings are the same. Thanks for following up on my
post and your help! -- Wendy
*=De
Hello,
given the following document structure (books as parent, libraries having these
books as children):
book
1000
Mr. Mercedes
Stephen King
library
1000/100
20160810
We have added a virtualHosts block in the solr-jetty-context.xml file under
/opt/solr/server/contexts and then restarted Solr (Jetty). After this, while
trying to access Solr using the URL http://www.host.com:8983/solr,
it says the site can't be reached.
http://www.eclipse.org/jetty/configure_9_0.dtd">
Hello guys,
I want to add an option to search documents by size. For example, find the
top categories with the biggest documents. I thought about creating a new
update processor which will count the bytes of all fields in the document,
but I think it won't work well, because some fields are stored
On 01/02/2018 12:40, solr2020 wrote:
Hi,
We have installed solr which is running in jetty 9x version. We are trying
to change the default solr url to required URL as given below.
Default url: http://localhost:8983/solr
Required URL :http://test.com/solr
To achieve this we are trying to config
Hi,
We have installed Solr, which runs on Jetty 9.x. We are trying
to change the default Solr URL to the required URL as given below.
Default URL: http://localhost:8983/solr
Required URL: http://test.com/solr
To achieve this we are trying to configure a virtual host in Jetty
(solr-jetty-c
Hello,
I'm trying to create a load balancer using HAProxy to detect nodes that are
down or recovering, but I'm not able to find the way to detect if the node
is healthy (the only commands i've seen check the entire cluster).
Is there any way to check the node status using http responses and get on
Thanks, I think we'll go for extractOnly, because using a recent version of
Tika causes too many dependency issues.
On Thu, Feb 1, 2018 at 12:25 PM, Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Joris,
> I doubt that you can do that. That would require extracting request
> handler to sup
On 31.01.18 at 16:30, David Frese wrote:
On 29.01.18 at 18:05, Erick Erickson wrote:
Try searching with lowercase the word and. Somehow you have to allow
the parser to distinguish the two.
Oh yeah, the biggest unsolved problem in the ~80 years history of
programming languages... NOT ;-)
Hi Joris,
I doubt that you can do that. That would require the extracting request handler
to support incremental updating, and I don't think it does. In order to update an
existing doc, you would have to extract the content and send it as an incremental
update request.
You can still use the extracting handler to ex
You are welcome!
Emir
> On 1 Feb 2018, at 11:04, keenkiller wrote:
>
> Oh, I got it. Thanks a lot!
Oh, thanks for your help. I got it. I misunderstood the meaning of `offset`.
Oh, I got it. Thanks a lot!
Hi
I'd like to update a single field of an existing document with the content
of a file.
My current setup looks like this:
final File file = new File("path to file");
ContentStreamUpdateRequest req = new
ContentStreamUpdateRequest("/update/extract");
req.addContentStream(new ContentStrea
Hi Luigi, I don't know that part of Lucene well; I would check blog posts and
the code to understand whether you can use NumericDocValues (my gut says yes).
Also, I don't know if it is important, but please note that if you index all
the documents at the beginning your scores will be different - si
Hi,
I did not check it in the code, but based on earlier comments on the ML, it
seems that in-place updates are not what they sound like - they rewrite the doc
values for the segment that is updated. If you really want to avoid index
changes, you can maybe use an external field:
https://lucene.apache.org/solr/guid
Reading from the wiki [1]:
" An atomic update operation is performed using this approach only when the
fields to be updated meet these three conditions:
are non-indexed (indexed="false"), non-stored (stored="false"), single
valued (multiValued="false") numeric docValues (docValues="true") fields
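Given those conditions, the update request itself is an ordinary atomic "set". A minimal sketch of building the JSON body (the id and field names are placeholders):

```python
import json

def inplace_set_update(doc_id, field, value, id_field="id"):
    """Build the JSON body for an atomic 'set' update. If the target field
    is non-indexed, non-stored, single-valued numeric docValues, Solr can
    apply it in place per the conditions quoted above."""
    return json.dumps([{id_field: doc_id, field: {"set": value}}])
```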
" Looks like when using the json facet api,
SimpleFacets is not used, replaced by FacetFieldPorcessorByArrayUIF "
That is expected, I remember Yonik to stress the fact that it is a
completely different approach to faceting ( and different components and
classes are involved).
But your first case
Hi,
When you set facet.offset=1, it applies to all facets - it offsets both
partnerId and partnerName by 1. Since you have only one name, it is empty.
What you should do is set the offset only for the partnerId facet by setting
f.partnerId.facet.offset=1.
If this is your only use case, you might consider
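A sketch of assembling such per-field facet parameters (field names are taken from the example above; the helper itself is illustrative):

```python
def pivot_facet_params(q, pivot_fields, per_field_offset=None):
    """Build facet request params. A plain facet.offset applies to every
    field in the pivot, so per-field offsets use the f.<field>.facet.offset
    override instead."""
    params = {
        "q": q,
        "rows": "0",
        "facet": "true",
        "facet.pivot": ",".join(pivot_fields),
    }
    for field, offset in (per_field_offset or {}).items():
        params["f.%s.facet.offset" % field] = str(offset)
    return params
```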
Hi Edwin,
Unfortunately, I was not able to find a regex that would work in your case.
Emir
> On 1 Feb 2018, at 05:42, Zheng Lin Edwin Yeo wrote:
>
> Hi,
>
> Have y
Hi,
Thank you for your response. I managed to find the solution: on the client side
I had to set the -Dzookeeper.sasl.client=false system property to disable SASL
authentication.
On Wed, Jan 31, 2018 at 6:15 PM, Shawn Heisey wrote:
> On 1/31/2018 9:07 AM, Tamás Barta wrote:
>
>> I'm using Solr 6.6.2 a
Sorry for the late reply.
Just like the original post, if I set facet.offset=0, everything is OK. The
request is like:
GET
http://172.16.51.98:8983/solr/channel/select?q=channelType:1&rows=0&facet=true&facet.limit=1&facet.offset=0&facet.pivot=partnerId,partnerName&wt=json
And the response is:
{