Srinivas, this is possible by adding a unique field update processor to the
update processor chain you are using to perform your updates (/update,
/update/json, /update/json/docs, .../a_custom_one).
The Javadocs explain its use nicely.
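For reference, Solr ships a stock processor for exactly this (UUIDUpdateProcessorFactory). A minimal solrconfig.xml sketch, with an illustrative chain name and assuming your uniqueKey field is id:

```xml
<updateRequestProcessorChain name="generate-uuid">
  <!-- Populate the id field with a fresh UUID when the document lacks one -->
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Clients then select the chain per request with update.chain=generate-uuid, or you can set it as a default on the update handler.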
Goutham, I suggest you read Hossman's excellent article on deep paging and why
returning rows=(some large number) is a bad idea. It provides a thorough
overview of the concept and will explain it better than I ever could.
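A toy sketch of the cursor idea (plain Python with made-up data, not Solr code): with start/rows the server must collect and discard `start` documents on every deep page, whereas a cursor resumes from the last sort key, so each page costs the same:

```python
# Stand-in for an index sorted by a unique key.
docs = sorted(range(1, 101))

def fetch_page(after, rows):
    """Return up to `rows` docs whose sort key is greater than `after`,
    plus the cursor to resume from (the last key returned)."""
    page = [d for d in docs if d > after][:rows]
    next_cursor = page[-1] if page else after
    return page, next_cursor

cursor, collected = 0, []
while True:
    page, cursor = fetch_page(cursor, 20)
    if not page:
        break
    collected.extend(page)

# Every document is visited exactly once, in order, 20 at a time.
assert collected == docs
```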
Just adding some assistance to the Solr-LDAP integration options. A colleague
of mine wrote a plugin that adopts a similar approach to the one Jan suggested
of "plugging in" an LDAP provider.
He provides the following notes on its design and use:
1. It authenticates with LDAP on every
Thank you very much Erick, Emir, and Bram, this is extremely useful advice and I
sincerely appreciate everyone’s input!
Before I received your responses I ran a controlled DBQ test in our DR
environment and exactly what you said occurred. It was like reading a step-by-step
playbook of events with
Hey Solr users,
I'd really appreciate some community advice if somebody can spare some time to
assist me. My question relates to initially deleting a large amount of
unwanted data from a Solr Cloud collection, and then advice on best patterns
for managing delete operations on a regular
Hey Richard,
I noticed this issue with the exporter in the 7.x branch. If you look through
the release notes for Solr since then, there have been quite a few improvements
to the exporter, particularly around thread safety and concurrency (and the
number of nodes it can monitor). The version of
Hey Solr community,
I’m wondering if anyone has ever managed a ZooKeeper migration while running
SolrCloud, or has any advice on the process (not a ZooKeeper upgrade but a
migration to new physical instances)? I could not seem to find any endpoints
in the Collections or CoreAdmin APIs
Apologies all, I just realised I replied to the wrong thread. This is in
response to "Solr cloud on Docker?" not "SolrCloud location for solr.xml".
Apologies for the confusion.
Thanks
Dwane
____
From: Dwane Hall
Sent: Monday, 2 March 202
Hey Jan,
Thanks for the info re swap. There are some interesting observations you’ve
mentioned below, particularly the container swap by default. There was a
note on the Docker forum describing a similar situation to the one you mention.
Did you attempt these settings with the same result?
Hey Kaya,
How are you adding documents to your index? Do you control this yourself, or do
you have multiple clients (curl, SolrJ, calls directly to /update*) updating
data in your index? I suspect (based on your hard and soft commit settings)
that a client may be causing your soft commits
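For anyone following along, the relevant solrconfig.xml settings look like this (times illustrative); note that clients can still force commits regardless of these values, e.g. by sending commit=true or a commitWithin on their update requests:

```xml
<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit every 60s -->
  <openSearcher>false</openSearcher> <!-- flush to disk without opening a new searcher -->
</autoCommit>
<autoSoftCommit>
  <maxTime>30000</maxTime>           <!-- soft commit every 30s: makes updates visible -->
</autoSoftCommit>
```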
l memory used by all dockerised Solr in
> order to keep free memory on host for MMAPDirectory ?
>
> In short can you explain the memory management ?
>
> Regards
>
> Dominique
>
>
>
>
> Le lun. 23 déc. 2019 à 00:17, Dwane Hall a écrit :
>
> > Hey Walter,
> >
Hey Walter,
I recently migrated our Solr cluster to Docker and am very pleased I did so. We
run relatively large servers with multiple Solr instances per physical host,
and having managed Solr upgrades on bare metal installs since Solr 5,
containerisation has been a blessing (currently Solr
@lucene.apache.org
Subject: Re: Cursor mark page duplicates
On 11/28/2019 1:30 AM, Dwane Hall wrote:
> I asked a question on the forum a couple of weeks ago regarding cursorMark
> duplicates. I initially thought it may be due to HDFSCaching because I was
> unable to replicate the issue
9021/solr/my_collection_shard4_replica_n14/"}]
}}
As you can see, both documents have the same version number but different
maxScores and Solr_Update_Date values. My understanding is the cursorMark should
only be generated from the id field, so I can't see why I would get a different
doc
Thanks Erick/Hossman,
I appreciate your input; it's always an interesting read seeing Solr legends
like yourselves work through a problem! I certainly learn a lot from following
your responses in this user group.
As you recommended I ran the distrib=false query on each shard and the results
Hey Solr community,
I'm using Solr's cursor mark feature and noticing duplicates when paging
through results. The duplicates happen intermittently and appear at the end of
one page and the beginning of the next (but not on all pages through the
results). So if rows=20, the duplicate
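For anyone hitting this thread later: one common cause of cursor duplicates is a sort that does not end with the uniqueKey field as a tiebreaker, which Solr requires for cursorMark to be deterministic. A small hypothetical helper (the function and its names are mine, not a Solr API) that enforces it when building request params:

```python
def cursor_params(q, sort, rows, cursor_mark="*", unique_key="id"):
    """Build params for a cursorMark request, appending the uniqueKey
    tiebreaker to the sort if the caller forgot it."""
    if not any(clause.split()[0] == unique_key for clause in sort.split(",")):
        sort = f"{sort},{unique_key} asc"  # append the required tiebreaker
    return {"q": q, "sort": sort, "rows": rows, "cursorMark": cursor_mark}

params = cursor_params("*:*", "score desc", rows=20)
assert params["sort"] == "score desc,id asc"
```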
Although I don't use the pdf version I highly recommend watching Cassandra's
talk from Activate last year ( https://m.youtube.com/watch?v=DixlnxAk08s). In
this talk she addresses the challenges of the Solr ref guide including the
'title search' mentioned below and presents a number of options
Thanks Shawn I'll raise a question on the GitHub page. Cheers,
Dwane
From: Shawn Heisey
Sent: Friday, 12 July 2019 10:05 PM
To: solr-user@lucene.apache.org
Subject: Re: Spark-Solr connector
On 7/11/2019 8:50 PM, Dwane Hall wrote:
> I’ve just started look
Hey guys,
I’ve just started looking at the excellent spark-solr project (thanks Tim
Potter, Kiran Chitturi, Kevin Risden and Jason Gerlowski for their efforts with
this project it looks really neat!!).
I’m only at the initial stages of my exploration but I’m running into a class
not found
Hi guys,
Did anyone get an opportunity to confirm this behaviour? If not, is the
community happy for me to raise a JIRA ticket for this issue?
Thanks,
Dwane
From: Dwane Hall
Sent: Wednesday, 3 April 2019 7:15 PM
To: solr-user@lucene.apache.org
Subject: Basic
Hi guys,
I’m just following up from an earlier question I raised on the forum regarding
inconsistencies in edismax query behaviour and I think I may have discovered
the cause of the problem. From testing I've noticed that edismax query
behaviour seems to change depending on the field types
Hey Solr community.
I’ve been following a couple of open JIRA tickets relating to use of the basic
auth plugin in a Solr cluster (https://issues.apache.org/jira/browse/SOLR-12584
, https://issues.apache.org/jira/browse/SOLR-12860) and recently I’ve noticed
similar behaviour when adding tlog
Good afternoon Solr community. I'm having an issue debugging an edismax query
which appears to behave differently across two separate collections with what I
believe to be the same default query parameters. Each collection contains
different data but is configured using similar default query
g.
Using curl, my command was:
curl -XPOST -H 'Content-type: application/json'
http://localhost:8983/solr/testCollection/update -d '{ "delete":
"123!12345" }'
Are you doing anything differently from that?
Thanks,
Matt
On 11/02/2019 23:24, Dwane Hall wrote:
> Hey Solr comm
Hey Solr community,
I’m having an issue deleting documents from my Solr index and am seeking some
community advice when somebody gets a spare minute. It seems like a really
simple problem …a requirement to delete a document by its id.
Here’s how my documents are mapped in solr
DOC_ID
ed
and you can point it at your production ZooKeeper ensemble. Do you
still have the same problem? If not, I'd guess that your production
system has somehow mixed-and-matched...
Best,
Erick
On Wed, Jan 23, 2019 at 4:36 PM Dwane Hall wrote:
>
> Hi user community,
>
>
> I recently up
Hi user community,
I recently upgraded a single-node SolrCloud environment from 7.3.1 to 7.6.
While going through the release notes for Solr 7.5 to identify any important
changes to consider for the upgrade, I noticed two excellent additions to the
Admin UI that took effect in Solr 7.5
gt; fetch the stored fields.
So using docvalues rather than stored for "1000s" of rows will avoid that cycle.
You can use the cursorMark to page efficiently, your middleware would
have to be in charge of that.
Best,
Erick
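The docValues-plus-cursorMark approach Erick describes can be sketched as middleware like this (hypothetical code; the `fetch` callback stands in for your real Solr HTTP client and is faked here so the control flow runs):

```python
def export_all(fetch, q, fl, sort="id asc", rows=1000):
    """Stream every matching doc by walking cursorMark pages.
    Paging stops when Solr returns the same cursor twice."""
    cursor = "*"
    while True:
        resp = fetch({"q": q, "fl": fl, "sort": sort,
                      "rows": rows, "cursorMark": cursor})
        yield from resp["docs"]
        if resp["nextCursorMark"] == cursor:  # no more pages
            break
        cursor = resp["nextCursorMark"]

# Fake fetch: two pages of results, then an empty page with a repeated cursor.
pages = {"*": {"docs": [{"id": 1}, {"id": 2}], "nextCursorMark": "A"},
         "A": {"docs": [{"id": 3}], "nextCursorMark": "B"},
         "B": {"docs": [], "nextCursorMark": "B"}}
exported = list(export_all(lambda p: pages[p["cursorMark"]], "*:*", "id"))
assert [d["id"] for d in exported] == [1, 2, 3]
```

With real Solr, `fetch` would issue the HTTP request and the `fl` fields should be docValues fields to avoid the stored-field fetch cycle mentioned above.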
On Wed, Nov 14, 2018 at 6:35 PM Dwane Hall wrote:
>
> Good
Good afternoon Solr community,
I have a situation where I require the following solr features.
1. Highlighting must be available for the matched search results
2. After a user performs a regular solr search (/select, rows=10) I
require a drill down which has the potential to export
"start":0,
"allResults":"false",
"fl":"FIELD_1,FIELD_2,SUMMARY_FIELD",
"fq":"{!switch default=\"{!collapse field=SUMMARY_FIELD}\"
case.true=*:* v=${allResults}}",
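For readers unfamiliar with the switch parser, the fq above behaves like this tiny Python mimic (hypothetical code, just the decision logic): allResults=true selects the case.true branch (*:*), and any other value falls through to the collapse default:

```python
def resolve_switch(all_results):
    """Mimic of {!switch case.true=*:* default=... v=${allResults}}."""
    cases = {"true": "*:*"}                    # case.true branch
    default = "{!collapse field=SUMMARY_FIELD}"  # default branch
    return cases.get(all_results, default)

assert resolve_switch("true") == "*:*"
assert resolve_switch("false") == "{!collapse field=SUMMARY_FIELD}"
```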
Somewhat odd.
Thanks again to Eri
consume huge amounts of memory.
And assuming you could possible return 1M rows, say, what would the
user do with it? Displaying in a browser is problematic for instance.
Best,
Erick
On Wed, Sep 12, 2018 at 5:54 AM Shawn Heisey wrote:
>
> On 9/12/2018 5:47 AM, Dwane Hall wrote:
> > Good
Good afternoon Solr brains trust, I'm seeking some community advice if somebody
can spare a minute from their busy schedules.
I'm attempting to use the switch query parser to influence client search
behaviour based on a client specified request parameter.
Essentially I want the following to
ts
you'd expect in that range, although the after and between numbers
total the numFound.
What kind of a field is Value? Given the number of docs missing, I'd
guess you could get the number of docs down really small and post
them. Something like
values 1, 2, 3, 4, 5,
and your range query so we
"400.0",80,
"500.0",0,
"600.0",0,
"700.0",69,
"800.0",0,
"900.0",0,
"1000.0",0,
"1100.0",0,
"1200.0",0,
"13
Good morning Solr community. I'm having a few facet range issues for which I'd
appreciate some advice when somebody gets a spare couple of minutes.
Environment
Solr Cloud (7.3.1)
Single Shard Index, No replicas
Facet Configuration (I'm using the request params API and useParams at runtime)
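For context, a request-params set of this general shape (all values here are illustrative, not my actual configuration) is registered via the params API and then selected with useParams:

```json
{
  "set": {
    "myFacetParams": {
      "facet": "true",
      "facet.range": "Value",
      "facet.range.start": 0.0,
      "facet.range.end": 1300.0,
      "facet.range.gap": 100.0,
      "facet.range.other": "all"
    }
  }
}
```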
Hi Sushant,
I had the same issue and unfortunately the exporter does not appear to support
a secure cluster. I raised a JIRA feature request, so please upvote it, as this
will increase the chances of it being included in a future release.
Good afternoon knowledgeable solr community. I’m experiencing problems using a
document transformer across a multiple shard collection and am wondering if
anyone would please be able to assist or provide some guidance?
The document transformer query below works nicely until I split the
Has anyone had any luck using the Solr 7.3+ exporter for metrics collection on
a Solr instance with the basic auth plugin enabled? The exporter starts without
issue but I have had no luck specifying the credentials when the exporter tries
to call the metrics API. The documentation does not
Good evening Solr community. I have not had a lot of luck on another community
forum seeking advice on using the unified highlighter, so I thought I'd try my
luck with the Solr experts. Any recommendations would be appreciated when you
get time.
Apache Solr 6.4 saw the release of the