You're welcome, Nikhilesh!
Best wishes,
Radu
--
Elasticsearch/OpenSearch & Solr Consulting, Production Support & Training
Sematext Cloud - Full Stack Observability
http://sematext.com/
On Wed, Jun 29, 2022 at 10:49 AM Nikhilesh Jannu <
nikhil...@predictspring.com> wrote:
> Thank you Radu for
That is correct.
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jun 29, 2022 at 3:54 PM Kojo wrote:
> Joel,
> follow below solrconfig.xml.
>
>
>
>
> explicit
> 10
> 18
>
> 1
> 0
> 15000
> 1
> AND
>
> edismax
> text
>
Hi Satya,
I think it's a bug with using compositeId. We had the same issue and had
to use deleteByQuery instead, but as you said, it's much slower. We're
using Solr 8.11.
On Tue, Jun 28, 2022 at 4:59 AM Satya Nand
wrote:
> Thanks, Peter,
> I am checking that, also UpdateRequest class seems to
Joel,
follow below solrconfig.xml.
explicit
10
18
1
0
15000
1
AND
edismax
text
spellcheck
I understood that if I remove the edismax line, it will fall back to the
default Lucene query parser, which is now
Nice to see someone actually using FRBR.
Personally, I would flatten work and expression into the main record. Keep all
of the relationships in a main database and update the record when any parent
changes. Queries would be simpler and faster. How often does work-level or
expression-level data
Hi,
Is it possible to do partial/atomic or in-place updates with the update()
streaming expression decorator? The following simply overwrites.
update(collection1,
select(
search(collection1,
q=*:*,
qt="/export",
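For comparison, here is what a standard atomic update looks like when POSTed to the regular /update endpoint (not via a streaming expression); the field names "popularity" and "tags" are just illustrations, and whether update() can be made to emit this modifier shape is exactly the open question:

```json
[
  { "id": "doc1",
    "popularity": { "inc": 1 },
    "tags":       { "add": "archived" } }
]
```

With a plain document (no {"set": ...}/{"add": ...}/{"inc": ...} modifier objects), Solr treats the update as a full replacement, which matches the overwriting behavior described above.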
Interestingly, I found that
[child childFilter=$pidfilter limit=-1]=+instance.agency:94
also worked - I get the pid that is present at that library. It's when I
restrict at both the pid and instance levels that it does not seem to work.
--
Noah Torp-Smith (n...@dbc.dk)
Thank you Radu for the quick response. I have updated the values as
you suggested.
Nikhilesh Jannu Principal Software Engineer 405.609.4259
On Wed, Jun 29, 2022 at 12:03 AM Radu Gheorghe
wrote:
> Hi Nikhilesh,
>
> Try hard-committing more often. This way you'll
(apologies if this is not the correct way to respond to a comment to my
original post)
Hello Mikhail - thanks for responding so quickly.
Your suggestion does not seem to work for me. Using
[child childFilter=$pidfilter limit=-1]=+pid.material_type:bog
works, but
[child childFilter=$pidfilter
Hi Nikhilesh,
Try hard-committing more often. This way you'll have smaller tlog files and
there will be less data to recover. My suggestion is to add a maxSize
constraint to autoCommit. 100MB is a good rule of thumb; it makes sure you
don't replay more than 100MB worth of data (even if you have an
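In solrconfig.xml, that suggestion would look roughly like the sketch below (assuming a Solr version recent enough to support maxSize in autoCommit; the maxTime value is a placeholder, not something prescribed in this thread):

```xml
<!-- sketch: cap tlog replay by size, not just time -->
<autoCommit>
  <maxTime>60000</maxTime>       <!-- placeholder: hard-commit at least every 60s -->
  <maxSize>100m</maxSize>        <!-- hard-commit once uncommitted data exceeds ~100MB -->
  <openSearcher>false</openSearcher>
</autoCommit>
```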
Hello, Noah.
Could it be something like
[child childFilter=$pidfilter limit=-1]=+pid.material_type:bog +
instance.agency:94 +instance.status:onShelf
?
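Spelled out as request parameters (before URL encoding), the suggestion might look like the sketch below; the field names are taken from the thread, while everything else about the request is an assumption:

```
fl=*,[child childFilter=$pidfilter limit=-1]
&pidfilter=+pid.material_type:bog +instance.agency:94 +instance.status:onShelf
```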
On Wed, Jun 29, 2022 at 8:57 AM Noah Torp-Smith wrote:
> To explain my question, first some domain background. We have a search
> engine
Thank you very much, Michael, for your answer.
Below is the extra information you asked for, and a sample result.
QUERY INFORMATION
query=covid
back query = *:*
fore query = mitochondria
sample gene id ="57506" / "54205"
facet code:
"json.facet": "{'titles_gene': {'type': 'terms', 'field':
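For context, the back/fore queries above suggest the relatedness() aggregation from the JSON Facet API; a complete facet of that shape might look like the sketch below, where the field name "gene_id" and the limit are assumptions (only "titles_gene" and the $fore/$back parameters come from the thread):

```json
{
  "titles_gene": {
    "type": "terms",
    "field": "gene_id",
    "limit": 10,
    "facet": {
      "relatedness": "relatedness($fore,$back)"
    }
  }
}
```

Here $fore and $back would be passed as separate request parameters, e.g. fore=mitochondria and back=*:* as shown in the query information above.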
Dear Users,
We are using a Solr TRA (time-routed alias) collection for capturing logs. We
write the logs to Solr using the REST API in batches of 100, with a
soft commit interval of 15000 and a hard commit interval of 6.
Solr version: 8.11.1.
When we restart the Solr node in the