I was asking about the field definitions from the schema.
It would also be helpful to see the debug info from the query. Just add
debug=true to see how the query and params were executed by solr and how
the calculation was done for each result.
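For example, a sketch of such a request built in Python (the host, core name, and query are placeholders, not from the thread):

```python
from urllib.parse import urlencode

# debug=true asks Solr to return the parsed query, per-document score
# explanations, and component timings alongside the results.
params = {
    "q": "name:test",  # placeholder query
    "debug": "true",
}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(url)
```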
On Thu, Oct 26, 2017 at 1:33 PM ruby
What's the analysis configuration for the object_name field and fieldType?
Perhaps the query is matching your catch-all field, but not the object_name
field, and therefore the pf boost never happens.
On Thu, Oct 26, 2017 at 8:55 AM ruby wrote:
> I'm noticing in my
Can you provide the fieldType definition for text_fr?
Also, when you use the Analysis page in the admin UI, what tokens are
generated during indexing for FRaoo using the text_fr fieldType?
On Tue, Sep 19, 2017 at 12:01 PM Sascha Tuschinski
wrote:
> Hello Community,
The closest thing to an execution plan that I know of is debug=true. That'll
show timings of some of the components.
I also find it useful to add echoParams=all when troubleshooting. That'll
show every param solr is using for the request, including params set in
solrconfig.xml and not passed in with the request.
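A sketch combining both flags, assuming a local core named mycore (all values are placeholders):

```python
from urllib.parse import urlencode

# echoParams=all reports every parameter in effect for the request,
# including defaults and invariants set in solrconfig.xml.
params = {"q": "*:*", "debug": "true", "echoParams": "all"}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
```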
> On Thu, Aug 31, 2017 at 1:47 PM, Josh Lincoln <josh.linc...@gmail.com>
> wrote:
Suresh,
Two things I noticed.
1) If your intent is to only match records where there's something,
anything, in abstract_or_primary_product_id, you should use fieldname:[* TO
*] but that will exclude records where that field is empty/missing. If you
want to match records even if that field is
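A sketch of the existence query (host and core are placeholders; the field name is from the thread):

```python
from urllib.parse import urlencode

# fieldname:[* TO *] matches only documents that have some value in the
# field; negating the clause against a *:* base query matches documents
# where the field is empty/missing instead.
exists_q = "abstract_or_primary_product_id:[* TO *]"
missing_q = "*:* -abstract_or_primary_product_id:[* TO *]"
url = "http://localhost:8983/solr/mycore/select?" + urlencode({"q": exists_q})
```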
of the query and just pass:
* q="title-123123123-end"
* set qf=title
On Tue, Aug 29, 2017 at 10:25 AM Josh Lincoln <josh.linc...@gmail.com>
wrote:
Darko,
Can you use edismax instead?
When using dismax, solr is parsing the title field as if it's a query term.
E.g. the query seems to be interpreted as
title "title-123123123-end"
(note the lack of a colon)...which results in querying all your qf fields
for both "title" and
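A sketch of the suggested edismax request (host and core are placeholders; q and qf follow the thread):

```python
from urllib.parse import urlencode

# With defType=edismax the whole string is searched against the qf
# fields; dismax would instead split the fielded syntax into separate
# terms, as described above.
params = {
    "q": "title-123123123-end",
    "defType": "edismax",
    "qf": "title",
}
query_string = urlencode(params)
```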
I suspect Erik's right that clean=true is the problem. That's the default
in the DIH interface.
I find that when using DIH, it's best to set preImportDeleteQuery for every
entity. This safely scopes the clean variable to just that entity.
It doesn't look like the docs have examples of using
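A rough sketch of what that could look like (the entity name, SQL, and field values are invented for illustration):

```xml
<!-- With preImportDeleteQuery set, clean=true deletes only documents
     matching type:product rather than the entire index. -->
<entity name="products"
        query="SELECT id, name, 'product' AS type FROM products"
        preImportDeleteQuery="type:product"/>
```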
I had the same issue as Vrinda and found a hacky way to limit the number of
times deltaImportQuery was executed.
As designed, solr executes *deltaQuery* to get a list of ids that need to
be indexed. For each of those it executes *deltaImportQuery*, which is
typically very similar to the full-import query.
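For reference, the usual delta pattern looks roughly like this (table and column names are invented):

```xml
<!-- deltaQuery returns only the ids of changed rows; deltaImportQuery
     is then run once per returned id. -->
<entity name="item"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item WHERE id = '${dih.delta.id}'"/>
```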
Sebastian,
You may want to try adding autoGeneratePhraseQueries="true" to the
fieldtype.
With that setting, a query for 978-3-8052-5094-8 will behave just like "978
3 8052 5094 8" (with the quotes)
A few notes about autoGeneratePhraseQueries
a) it used to be set to true by default, but that was changed; it now defaults to false.
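The attribute goes on the fieldType element, e.g. (a minimal sketch; your analyzer chain will differ):

```xml
<fieldType name="text_general" class="solr.TextField"
           autoGeneratePhraseQueries="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```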
Ravi, for the hyphen issue, try setting autoGeneratePhraseQueries=true for
that fieldType (no re-index needed). As of 1.4, this defaults to false. One
word of caution, autoGeneratePhraseQueries may not work as expected for
languages that aren't whitespace delimited. As Erick mentioned, the
what if you add your country field to qf with a strong boost? the search
experience would be slightly different than if you filter on country, but
maybe still good enough for your users and certainly simpler to implement
and maintain. You'd likely only want exact matches. Assuming you are using
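Sketching the idea with edismax (the field names and boost factor are placeholders, not from the thread):

```python
from urllib.parse import urlencode

# Putting country in qf with a strong boost ranks country matches high
# without excluding other results the way an fq filter would.
params = {
    "q": "lamps germany",
    "defType": "edismax",
    "qf": "text country^10",  # ^10 is an arbitrary illustrative boost
}
query_string = urlencode(params)
```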
Have you tried adding autoGeneratePhraseQueries=true to the fieldType
without changing the index analysis behavior?
This works at querytime only, and will convert 12-34 to 12 34, as if the
user entered the query as a phrase. This gives the expected behavior as
long as the tokenization is the same
Can you provide this data in CSV format? There is a CSV reader in the DIH.
The SEP was not intended to read from files, since there are already
better tools that do that.
Lance
On 10/14/2013 04:44 PM, Josh Lincoln wrote:
Shawn, I'm able to read in a 4mb file using SEP, so I think that rules out
Shawn Heisey wrote:
On 10/13/2013 10:16 AM, Josh Lincoln wrote:
I have a large solr response in xml format and would like to import it into
a new solr collection. I'm able to use DIH with solrEntityProcessor, but
only if I first truncate the file to a small subset of the records. I was
hoping to set stream=true to handle the full file, but I still get an out of
memory error.
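For context, a SolrEntityProcessor entity typically looks something like this (the URL and collection are placeholders); note it is designed to page through a live Solr endpoint rather than a response file on disk:

```xml
<entity name="sep"
        processor="SolrEntityProcessor"
        url="http://source-host:8983/solr/source_collection"
        query="*:*"
        rows="500"/>
```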
Hello Wiki Admins,
I have been using Solr for a few years now and I would like to
contribute back by making minor changes and clarifications to the wiki
documentation.
Wiki User Name : JoshLincoln
Thanks