Hi,
After your suggestion, I changed the code:
String SOLR_URL = "http://localhost:7991/solr/actionscomments";
SolrClient solrClient = new HttpSolrClient.Builder(SOLR_URL).build();
SolrInputDocument document = new SolrInputDocument();
document.addField("id", "ACTC6401895");
solrClient.add(document);
solrClient.commit(); // documents are not searchable until a commit happens
Can a parent-child relationship be used in this scenario?
Anyone?
I see it needs an update handler:
https://lucene.apache.org/solr/guide/6_6/transforming-and-indexing-custom-json.html#transforming-and-indexing-custom-json
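For reference, the approach on that page posts to the /update/json/docs endpoint, where split tells Solr where to break the incoming JSON into documents and each f= maps a JSON path to a Solr field. A rough sketch only; the collection name, paths, and field names below are placeholders, not taken from this thread:

```text
POST http://localhost:8983/solr/mycollection/update/json/docs?split=/items&f=id:/id&f=name:/items/name
Content-type: application/json

{ "id": "doc1",
  "items": [ { "name": "first" }, { "name": "second" } ] }
```

With split=/items, each element of the items array becomes its own Solr document, and fields outside the split path (like id) are copied onto each of them.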
curl
Exactly. Solr is a search index, not a data store; you need to flatten
your relationships. Right tool for the job, etc.
On Tue, Apr 9, 2019 at 4:28 PM Shawn Heisey wrote:
> On 4/9/2019 2:04 PM, Abhijit Pawar wrote:
> > Hello Guys,
> >
> > I am trying to index a JSON array in one of my
On 4/9/2019 2:04 PM, Abhijit Pawar wrote:
Hello Guys,
I am trying to index a JSON array from one of my MongoDB collections into
Solr 6.5.0; however, it is not getting indexed.
I am using a DataImportHandler for this.
*Here's how the data looks in mongoDB:*
{
"idStr" :
Hello Guys,
I am trying to index a JSON array from one of my MongoDB collections into
Solr 6.5.0; however, it is not getting indexed.
I am using a DataImportHandler for this.
*Here's how the data looks in mongoDB:*
{
"idStr" : "5ca38e407b154dac08913a96",
"sampleAttr" : "sampleAttrVal",
*
Hmm. I am doing the same thing, but somehow in my browser, after I select the
core, it does not stay selected to view the stats/cache.
I am attaching a gif of what happens when I try it.
Anyway, that is a different issue on my side. Thanks for your input.
-Lewin
-Original Message-
From: Shawn
On 4/9/2019 12:38 PM, Lewin Joy (TMNA) wrote:
I just tried to go to the location you specified. I could not see a "CACHE",
but I can see the "Statistics" section.
I am using Solr 7.2 on solrcloud mode.
If you are trying to select a *collection* from a dropdown, you will not
see this. It
Hi Shawn,
We are facing an issue where the caches got corrupted.
We are doing a json.facet and pivoting through 3 levels. We are taking
allBuckets from the different levels.
In json.facet query, while doing the inner facets, we are keeping a limit. We
notice that as we change the limit, we
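For context, a three-level json.facet with allBuckets at each level looks roughly like this (the field names and limits below are placeholders, since the actual query was not included in the thread):

```json
{
  "level1": {
    "type": "terms", "field": "fieldA", "limit": 10, "allBuckets": true,
    "facet": {
      "level2": {
        "type": "terms", "field": "fieldB", "limit": 10, "allBuckets": true,
        "facet": {
          "level3": { "type": "terms", "field": "fieldC", "limit": 10, "allBuckets": true }
        }
      }
    }
  }
}
```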
On 4/9/2019 11:51 AM, Lewin Joy (TMNA) wrote:
Hmm. I only tried reloading the collection as a whole. Not the core reload.
Where do I see the cache sizes after reload?
If you do not know how to see the cache sizes, then what information are
you looking at which has led you to the conclusion
I’d like to know this, too. We run benchmarks with log replay, starting with
warming queries, then a measurement run. It is a pain to do a rolling restart
of the whole cluster before each benchmark run.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Thank you for the email, Alex.
I have autowarmCount set to 0,
so this shouldn't prepopulate the caches with old data.
-Lewin
-Original Message-
From: Alexandre Rafalovitch
Sent: Monday, April 8, 2019 6:45 PM
To: solr-user
Subject: Re: Solr Cache clear
You may have warming queries to
Hmm. I only tried reloading the collection as a whole. Not the core reload.
Where do I see the cache sizes after reload?
-Lewin
-Original Message-
From: Shawn Heisey
Sent: Monday, April 8, 2019 5:10 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cache clear
On 4/8/2019 2:14 PM,
Hello,
I recently ran in to the following scenario:
Solr, version 7.5, in a Docker container, running as cloud, with an
external ZooKeeper ensemble of 3 zookeepers. Instructions were followed to
create a root (chroot) first; this was set correctly, as could be seen from the
Solr logs outputting the connect
On 4/9/2019 7:03 AM, Erie Data Systems wrote:
Solr 8.0.0, I have a HASHTAG string field I am trying to facet on to get
the most popular hashtags (top 100) across many sources. (SITE field is
string)
/select?facet.field=hashtag&facet=on&rows=0&q=%2Bhashtag:*%20%2BDT:[" .
date('Y-m-d') . "T00:00:00Z+TO+" .
Glad to hear it. Now, if you want to be really bold (and I haven’t verified it,
but it _should_ work), rather than copying the index, try this:
1> spin up a one-replica empty collection
2> use the REPLICATION API to copy the index from the re-indexed source.
3> ADDREPLICAs as before.
<2> looks
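For step 2>, the replication handler's fetchindex command can pull an index from another core; an untested sketch, where the host and core names are placeholders:

```text
http://targethost:8983/solr/targetcore/replication?command=fetchindex&masterUrl=http://sourcehost:8983/solr/sourcecore
```

Step 3> would then be the Collections API ADDREPLICA action against the new collection.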
Another way to make queries faster is, if you can, identify a subset of
documents that are in general relevant for the users (most recent ones,
most browsed etc etc), index those documents into a separate collection and
then query the small collection and back out to the full one if the small
one
maybe something like q=
({!edismax v=$q1} OR {!edismax v=$q2} OR {!edismax ... v=$q3})
and setting q1, q2, q3 as needed (or all to the same maybe with different qf’s
and such)
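Written out as request parameters, that suggestion would look something like this (the q1/q2/q3 values and qf fields are placeholders for illustration):

```text
q=({!edismax qf='title^2 body' v=$q1} OR {!edismax qf='tags' v=$q2} OR {!edismax qf='body' v=$q3})
q1=first user query
q2=second user query
q3=third user query
```

Each inner {!edismax} clause can carry its own local params (qf, pf, mm, and so on) while referencing its query text via v=$qN.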
Erik
> On Apr 9, 2019, at 09:12, sidharth228 wrote:
>
> I did infact use "bf" parameter for
I did in fact use the "bf" parameter for individual edismax queries.
However, the reason I can't condense these edismax queries into a single
edismax query is because each of them uses different fields in "qf".
Basically what I'm trying to do is this: each of these edismax queries (q1,
q2, q3) has
Solr 8.0.0, I have a HASHTAG string field I am trying to facet on to get
the most popular hashtags (top 100) across many sources. (SITE field is
string)
/select?facet.field=hashtag&facet=on&rows=0&q=%2Bhashtag:*%20%2BDT:[" .
date('Y-m-d') . "T00:00:00Z+TO+" . date('Y-m-d') .
"T23:59:59Z]&facet.limit=100&facet.mincount=1&facet.method=fc
It works
Function queries in ‘q’ score EVERY DOCUMENT. Use ‘bf’ or ‘boost’ for the
function part, so it’s only computed on docs matching the main query.
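As a sketch of what that looks like with the boost query parser (the field names in the b function are placeholders, not from this thread):

```text
q={!boost b=$b v=$qq}
qq={!edismax qf='title body'}user search terms
b=sum(popularity,log(views))
```

Here the edismax query in qq selects and scores the matching documents, and the function in b is evaluated only for those matches, multiplying their scores, instead of being computed across the whole index.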
Erik
> On Apr 9, 2019, at 03:29, Sidharth Negi wrote:
>
> Hi,
>
> I'm working with "edismax" and "function-query" parsers in Solr and have
>
Hi guys,
I’m just following up on an earlier question I raised on the forum regarding
inconsistencies in edismax query behaviour, and I think I may have discovered
the cause of the problem. From testing, I've noticed that edismax query
behaviour seems to change depending on the field types
On 4/8/2019 11:00 PM, vishal patel wrote:
Sorry, my mistake, there is no such class.
I have added the data using the code below.
CloudSolrServer cloudServer = new CloudSolrServer(zkHost);
cloudServer.setDefaultCollection("actionscomments");
cloudServer.setParallelUpdates(true);
List docs = new
On 4/8/2019 11:47 PM, Srinivas Kashyap wrote:
I'm using DIH to index the data, and the structure of the DIH is like below for
the Solr core:
16 child entities
During indexing, since the number of requests being made to the database was
high (17 queries to process one document) and was utilizing most
Hi,
I'm working with "edismax" and "function-query" parsers in Solr and have
difficulty in understanding whether the query time taken by
"function-query" makes sense. The query I'm trying to optimize looks as
follows:
q={!func sum($q1,$q2,$q3)} where q1,q2,q3 are edismax queries.
The QTime
Thanks so much - your approaches worked a treat!
Best,
Kevin.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi all,
We are trying to emulate in Solr 8.0 the behaviour of Solr 3.6 and are
facing a problem that we cannot solve.
When we have duplicated tokens:
- Solr 8.0 scores the token only once, but it applies a huge boost
- Solr 3.6 scores each token individually, and the final score is lower
We
Hello,
I'm using DIH to index the data and the structure of the DIH is like below for
solr core:
16 child entities
During indexing, since the number of requests being made to the database was
high (17 queries to process one document) and was utilizing most of the
connections of the database, thereby
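One common way to cut the per-document query count with DIH is to cache the child entities, so each child query runs once and lookups happen in memory instead of once per parent row. A sketch only; the entity, column, and query names below are placeholders:

```xml
<entity name="child1"
        processor="SqlEntityProcessor"
        query="SELECT * FROM child1"
        cacheKey="PARENT_ID"
        cacheLookup="parent.ID"
        cacheImpl="SortedMapBackedCache"/>
```

With cacheImpl set, DIH executes the child query a single time, builds a map keyed on cacheKey, and joins rows to each parent via cacheLookup, instead of issuing one SQL query per parent document.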