Hi Shawn,
Thanks a lot for your response!
I'll reference this mail thread while tracking the issue in JIRA:
https://issues.apache.org/jira/browse/SOLR-9828 .
Hi Monti,
As pointed out, there is a huge gap of missing information. There are two
primary possibilities. One is that some resource is depleted; as Shawn has
pointed out, watch your resources as you start up. Two, Solr is somehow
locked or waiting on something. Since there is no information
That already happens. The ZK client itself will reconnect when it can and
trigger everything to be set up as when the cluster first starts up,
including a live node and leader election, etc.
You may have hit a bug or something else missing from this conversation,
but reconnecting after losing
Thanks for all the replies.
We will probably have to go for a long or currency fieldType. Int is still
32-bit, and there will be an indexing error if the amount is larger than
2,147,483,647. Since we are storing amounts as cents, we would hit the limit
at just $21.4 million.
Regards,
Edwin
On 7
If you want to remove all the data in the field, then use "null" in set:
curl . . . -d '[{"id":"docId","someField":{"set":null}}]'
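A sketch of building that payload programmatically ("docId" and "someField" are the placeholders from the example, and the Solr URL stays elided as in the thread):

```python
import json

# Atomic update that clears someField by setting it to null.
payload = [{"id": "docId", "someField": {"set": None}}]
print(json.dumps(payload))  # [{"id": "docId", "someField": {"set": null}}]
```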
-Karthik
On Wed, Dec 7, 2016 at 1:31 PM, Richard Bergmann
wrote:
> Hello,
>
> I am new to this and have found no examples or guidance on how to use
>
It's possible you have autosuggest configured and it's rebuilding on startup.
See "buildOnStartup" here:
https://cwiki.apache.org/confluence/display/solr/Suggester
Depending on the suggester, this will re-read _all_ documents
from the index to build the internal autosuggest structures which
can
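For illustration, a minimal suggester configuration in solrconfig.xml with rebuild-on-startup disabled (the component, suggester, and field names here are made up; "buildOnStartup" itself is the documented setting):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="field">title</str>
    <!-- avoid re-reading the whole index on every startup -->
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
```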
Hello,
I am new to this and have found no examples or guidance on how to use
"removeregex" to remove (in my case) all entries in a multi-valued field.
The following curl commands work just fine:
curl . . . -d '[{"id":"docId","someField":{"add":["val1","val2"]}}]'
and
curl . . . -d
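On the original question: "removeregex" takes the same payload shape as "add" and "set", with a regular expression in place of a value list. A sketch, under the assumption (not verified here) that a pattern matching every whole value clears the multi-valued field:

```python
import json

# Atomic update using removeregex; ".*" is intended to match each value
# in the multi-valued field in full, removing all of them.
payload = [{"id": "docId", "someField": {"removeregex": ".*"}}]
print(json.dumps(payload))
```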
Nicole -
Since this is probably off-topic for the solr-user list, let’s take this
offline and over to your Lucidworks support. But while we’re here, here’s an
example of using the Fusion API to create a collection and then the Solr API to
configure the schema. In this example, it’s not
On 12/7/2016 3:24 AM, Monti Chandra wrote:
> I am working on Solr version 6.2.1. It was working nicely for the first
> 20 days, and now the server restarts very slowly (15-20 min).
> Please get the hardware specs of my system below:
> Linux version 3.10.0-327.el7.x86_64
I have been testing and setting up CDCR replication between Solrcloud
instances.
We are currently using Solr 6.2
We have a lot of collections and a number of environments for testing and
deployment. It seemed that using properties in the cdcrRequestHandler would
help a lot. Since we have a
Thanks very much every one.
They will probably pursue custom code to see if they can get this data and
log it.
J
--
Thanks,
Jeff Courtade
M: 240.507.6116
On Tue, Dec 6, 2016 at 7:07 PM, John Bickerstaff
wrote:
> You know - if I had to build this, I would consider
Cool, that makes sense!
Esther Quansah
> On Dec 7, 2016, at 9:13 AM, Dorian Hoxha wrote:
>
> Yeah, you always *100 when you store, query, and facet, and you always
> /100 when displaying.
>
> On Wed, Dec 7, 2016 at 3:07 PM, wrote:
>
>> I think
Yeah, you always *100 when you store, query, and facet, and you always /100
when displaying.
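That convention can be captured in two small helpers (a sketch; using Decimal for the parse is my choice, not something from the thread):

```python
from decimal import Decimal

def to_cents(amount: str) -> int:
    """Convert a decimal dollar string to integer cents for indexing."""
    return int(Decimal(amount) * 100)

def from_cents(cents: int) -> str:
    """Format integer cents back into dollars for display."""
    return f"{cents / 100:.2f}"

print(to_cents("1234.56"))  # 123456
print(from_cents(123456))   # 1234.56
```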
On Wed, Dec 7, 2016 at 3:07 PM, wrote:
> I think Edwin might be concerned that in storing it as a long type, there
> will be no distinguishing between, for example, $1234.56 and
I think Edwin might be concerned that in storing it as a long type, there
will be no distinguishing between, for example, $1234.56 and $123456.
But correct me if I'm wrong - the latter would be stored as 12345600.
When sending in a search for all values less than $100,000 on a long field,
will
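On the range-query question: with cents in a long field, the dollar bound scales by 100 as well. A sketch (the field name is illustrative; the closing curly brace makes the upper bound exclusive in Lucene range syntax):

```python
# "less than $100,000" on a cents-denominated long field.
limit_cents = 100_000 * 100

# Lucene/Solr range syntax: '[' is an inclusive bound, '}' an exclusive one.
query = f"price_cents:[* TO {limit_cents}}}"
print(query)  # price_cents:[* TO 10000000}
```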
Good suggestion, but unfortunately it does not address this issue as we are
not using the time-based partitioning in this project.
It would be useful to know in which case that configuration is created in
Solr, and what scenario leads to it, so we can investigate further. Any other
Looks best to file that as a Lucidworks support ticket.
But are you using the time-based sharding feature of Fusion? If that's the
case, that might explain it, since that feature creates collections for each
time partition.
Erik
> On Dec 7, 2016, at 00:31, Nicole Bilić
Come on dude, just use the int/long.
Source: double is still a float.
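"Double is still a float" is the key point: a 64-bit double carries only 53 bits of integer precision, so exact integer cent values silently degrade past 2**53, while a long stays exact across its full range. A quick demonstration:

```python
# Doubles represent integers exactly only up to 2**53.
limit = 2**53

assert float(limit) == limit             # still exact at the boundary
assert float(limit + 1) == float(limit)  # one past: rounded back down
assert limit + 1 != limit                # as ints (a long field), distinct
print("doubles lose integer precision past 2**53")
```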
On Wed, Dec 7, 2016 at 1:17 PM, Zheng Lin Edwin Yeo
wrote:
> Thanks for the reply.
>
> How about using the double fieldType?
> I tried it and it works, as it is 64-bit, compared to 32-bit for float.
>
Thanks for the reply.
How about using the double fieldType?
I tried it and it works, as it is 64-bit, compared to 32-bit for float.
But will it hit the same issue again if the amount exceeds 64-bit?
Regards,
Edwin
On 7 December 2016 at 15:28, Dorian Hoxha wrote:
>
Hello team,
I am working on Solr version 6.2.1. It was working nicely for the first
20 days, and now the server restarts very slowly (15-20 min).
Please get the hardware specs of my system below:
Linux version 3.10.0-327.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc
version 4.8.3 20140911
Hi all,
We are using Lucidworks Fusion on top of Solr and recently we’ve
encountered an unexpected behavior. We’ve created bash scripts which we use
to create collections in Solr using the Fusion API and upload the
collection configuration (with bash $ZKCLIENT -cmd upconfig -confdir $path
What do you mean by JVM level? Run Solr on different ports on the same
machine? If you have a 32-core box, would you run 2, 3, or 4 JVMs?
On Sun, Dec 4, 2016 at 8:46 PM, Jeff Wartes wrote:
>
> Here’s an earlier post where I mentioned some GC investigation tools:
>