Thanks Shawn.
I can't complain since it has actually worked well for me so far, but...
I still do not understand: if Solr already paginates the results from the
full import, why not do the same for the delta? It is almost the same query:
`select id from t where t.lastmod > ${solrTime}`
`select * from t`
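For reference, this is how DIH pairs those two queries in data-config.xml — a minimal sketch using the table from the example above, where DIH's built-in `${dataimporter.last_index_time}` variable stands in for the `${solrTime}` placeholder:

```xml
<entity name="t" pk="id"
        query="select * from t"
        deltaQuery="select id from t where t.lastmod &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="select * from t where id = '${dih.delta.id}'"/>
```

The deltaQuery fetches only the changed ids; DIH then runs deltaImportQuery once per id, which is the per-row pattern the question is about.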
Hi,
The README found in '/contrib/analysis-extras' mentions that OpenNLP relies on
lucene-libs/lucene-analyzers-opennlp-X.Y.jar and lib/opennlp-*.jar.
Make sure you are loading these, as well as the
'solr-analysis-extras-7.3.0.jar' file in the /dist directory.
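For example, those jars can be pulled in with `<lib>` directives in solrconfig.xml (paths assume the standard Solr install layout; adjust for yours):

```xml
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/dist" regex="solr-analysis-extras-.*\.jar"/>
```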
Regards,
Ryan
On Mon, 16 Apr 2018 at
On 3/6/2018 6:31 AM, sol...@seznam.cz wrote:
> I would like to use the Analytics component. I configured it following
> https://lucene.apache.org/solr/guide/7_2/analytics.html.
> When I try to send a query to Solr, an exception is thrown.
>
> Reason: Server ErrorCaused by:java.lang.
> IllegalAccessError: tried
All,
I have documents that need to appear to have different attributes
depending upon which user is trying to search them. One of the fields
I currently have in the document is called "latest_submission" and
it's a multi-valued text field that
I deleted my collection and rebuilt it to check whether there were any issues
with indexing. I didn't see any errors during indexing. My collection is
sharded and we use implicit routing... but even after rebuilding my collection
I am getting errors on group faceting. This is not happening all the
time
Thanks Amrit, Peter. I'll go with option #2, but something else I am seeing is
that after bootstrap, the target has not been synced further (even though we
have continuous indexing happening in the source), which I believe is because the
leaders on the source cluster show updateLogSynchronizer stopped while the
replicas'
I have from time-to-time posted questions to this list (and received
very prompt and helpful responses). But it seems that many of you are
operating in a very different space from me. The problems (and
lessons-learned) which I encounter are often very different from those
that are reflected in
There are two ways I've gotten around this issue:
1. Add replicas in the target data center after CDCR bootstrapping has
completed.
-or-
2. After the bootstrapping has completed, restart the replica nodes one at a time
in the target data center (restart, wait for the replica to catch up, then
Hi Susheel,
Pretty sure you are talking about this:
https://issues.apache.org/jira/browse/SOLR-11724
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
Medium:
Does anybody know about a known issue where CDCR bootstrap sync leaves the
replicas on the target cluster untouched/out of sync?
After I stopped and restarted CDCR, it built my target leaders' index, but the
replicas on the target cluster still show an old index / are not modified.
Thnx
Hi,
I am testing Solr 6.6.3 and have been running into intermittent group
faceting errors. I did some bulk indexing to initially set up the
collection. I have multiple facet fields, and it only throws an error on one of
the fields. The issue goes away when I reindex the data.
I am upgrading from solr
Lee,
It's worth sending a stack trace for this kind of inquiry.
I guess it comes from QueryComponent.mergeIds() or so. The shard response should
contain the field from schema.xml.
I encounter something like this while troubleshooting
I've created a custom response writer which extends RawResponseWriter. The
basic operation is to output a single field value from the top matching doc
as the entire response. This works when shards = 1 but fails when shards
are greater than 1.
I throw an error if the field in question is missing
Have you tried starting Solr with the "-v" option? That will show you
exactly (in the solr.log file) where every jar is loaded from. Make
sure you are
1> loading the jar at all
2> loading the jar you think you are (i.e. not loading an old version
from somewhere on your classpath).
Copying jar
Hi,
I'm trying to write a function query that assigns a score in [0..1] to
each doc in the search results, based on some logic applied to a
multi-valued field in the document.
This is an excerpt of the schema:
And this is how it looks in a generic document on the index:
"sku_store": [
Hi, can you post the final solution, i.e. how you solved this problem?
I have copied solr-analysis-extras-7.3.0.jar to the /contrib/analysis-extras/lib
folder and loaded it like this:
/ /
but the error has not gone away...
I still see:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
BTW, the effort you put into creating a process to totally replace
your index won't be wasted. I pretty much guarantee that you'll have
to re-index everything upon occasion if for no other reason than the
requirements will change. "Oh, you know that field we said never would
be used for phrase
Be a little careful with NOW in filter queries if you intend to have
the cache be effective, see:
https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
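The short version of that article: a bare NOW makes every filter query textually unique, so it never gets a filter-cache hit. Rounding with date math keeps the fq stable for a whole day (the field name here is illustrative):

```
fq=timestamp:[NOW/DAY-7DAYS TO NOW/DAY+1DAY]
```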
Best,
Erick
On Mon, Apr 16, 2018 at 2:15 AM, Emir Arnautović
wrote:
> Hi,
> You can probably do
bq: Sounds like wasting a little space with 00:00:00 timestamps is
probably the way to go
What space? Under the covers it's just a long. The doc is slightly bigger
of course.
And, you could use the ParseDateFieldUpdateProcessorFactory; see the
reference guide. Its job is to take various inputs
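A sketch of such a chain in solrconfig.xml (the chain name and format list are illustrative; the reference guide documents the full option set):

```xml
<updateRequestProcessorChain name="parse-dates">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <str>yyyy-MM-dd</str>
      <str>yyyy-MM-dd HH:mm:ss</str>
    </arr>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```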
The metrics code added in 6.4 caused a big performance problem. I believe that
was fixed by 6.5.1, but there might be some lingering issues.
See: https://issues.apache.org/jira/browse/SOLR-10130
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr
Yes, the Lucene RegExp javadoc seems a bit complicated, and even the tests do not
cover all syntax variants. But the whole point is: the parser doesn't mangle
any characters, and backslashes are used only to distinguish syntax symbols
from raw characters.
The example might not be the best possible, but I think
Shawn,
On 4/15/18 4:49 PM, Shawn Heisey wrote:
> On 4/15/2018 2:31 PM, Christopher Schultz wrote:
>> I'd usually call this a "date", but Solr's documentation says
>> that a "date" is what I would call a timestamp (including time
>> zone).
>
> That
Shawn,
On 4/15/18 4:33 PM, Shawn Heisey wrote:
> On 4/15/2018 2:24 PM, Christopher Schultz wrote:
>> No, it wouldn't have. It doesn't read any configuration files
>> and guesses its way through everything. Simply adding HTTPS
>> support required me
Do you see a performance improvement in your 'Query Times' (6.2.1
vis-a-vis 6.5.1)?
Deepak
"Please stop cruelty to Animals, help by becoming a Vegan"
+91 73500 12833
deic...@gmail.com
Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool
"Plant a Tree, Go Green"
Hi All,
The term vector component allows you to store and retrieve quite a bit of very
useful information; e.g., for a term one can get:
"verlag",[
"tf",6,
"positions",["position",5, "position",42, "position",64,
"position",85, "position",106, "position",127],
"offsets",[
"start",35, "end",41,
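Output like the (truncated) snippet above is requested through the TermVectorComponent parameters, for example (the field name is illustrative, and the field must be indexed with termVectors, termPositions, and termOffsets enabled in the schema):

```
q=text:verlag&tv=true&tv.tf=true&tv.positions=true&tv.offsets=true
```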
Hi,
Is Solr affected by these two vulnerabilities, CVE-2017-15718 and
CVE-2017-15713?
Thanks,
Mikhail
I figured out that after restarting the nodes, the source cluster leaders were
switched, causing the above warning and CDCR replication to stop. After
stopping the CDCR process and then restarting it again, the above warning
disappeared and bootstrap sync stepped in.
On Sun, Apr 15, 2018 at 7:54 PM, Susheel Kumar
It would help if you can trace it down to a version change.
Do you have a test system? Start with 6.3.0, the next version above 6.2.1,
to see which version change is causing you trouble.
You can then try 6.4.0 and 6.5.0 next, and after that go into point releases.
Regards, Bernd
Am 16.04.2018 um
Hi,
You can probably do something like
fq=latestEventDate:[$startDate TO $endDate] OR (!latestEventDate:[* TO *]
AND dateCreated:[$startDate TO $endDate])
But it is probably better to merge those two dates at index time to some
searchDate field and use that one for searching. You can do
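One way to do that merge at index time is an update processor chain; a sketch, assuming a searchDate field exists in the schema (chain and field names are illustrative). latestEventDate is cloned first, so FirstFieldValueUpdateProcessorFactory keeps it when present and falls back to dateCreated otherwise:

```xml
<updateRequestProcessorChain name="merge-dates">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">latestEventDate</str>
    <str name="dest">searchDate</str>
  </processor>
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">dateCreated</str>
    <str name="dest">searchDate</str>
  </processor>
  <processor class="solr.FirstFieldValueUpdateProcessorFactory">
    <str name="fieldName">searchDate</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```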
Hi folks, I started in the Solr world a short time ago. I've been working on
some filter query changes; most of the questions I've had I could answer by
reading this forum, thanks a lot.
I have a situation: in my schema there are 2 fields that I have to check:
latestEventDate and dateCreated.
By
Hi Bernd,
We didn't change any default settings.
Both 6.2.1 and 6.5.1 are running with the same settings, the same volume of
data, and the same code, which means the indexing rate is also the same.
In the case of 6.2.1, CPU is around 60 to 70%. But in 6.5.1 it's always around
95%. The CPU % in 6.5.1 is alarming for us and