Re: NPE Issue with atomic update to nested document or child document through SolrJ

2020-09-18 Thread Frank Vos
Good morning all,

Your mail came into my mailbox; please check with your IT support why your mail
is coming into my mailbox!
If I notice that this is not being done, I will have to take far-reaching
measures.

Met vriendelijke groet / Kind regards,

Frank Vos
Servicedesk medewerker

T 036 760 07 11
M 06 83 33 57 95
E frank@reddata.nl



From: Alexandre Rafalovitch
Date: Thursday, 17 September 2020 at 21:07
To: solr-user
Subject: Re: NPE Issue with atomic update to nested document or child
document through SolrJ
The missing underscore is a documentation bug: it was not
escaped the second time, and asciidoc chewed it up as a
bold/italic indicator. The declaration and references should match.

I am not sure about the code. I hope somebody else will step in on that part.

Regards,
   Alex.

On Thu, 17 Sep 2020 at 14:48, Pratik Patel  wrote:
>
> I am running this in a unit test which deletes the collection after the
> test is over. So every new test run gets a fresh collection.
>
> It is a very simple test where I am first indexing a couple of parent
> documents with a few children and then testing an atomic update on one parent,
> as I posted in my previous message (using UpdateRequest).
>
> I am not sure if I am triggering the atomic update correctly; do you see
> any potential issue in that code?
>
> I noticed something in the documentation here.
> https://lucene.apache.org/solr/guide/8_5/indexing-nested-documents.html#indexing-nested-documents
>
>   <field name="_nest_path_" type="*nest_path*" />
>
> field_type is declared with name *"_nest_path_"* whereas field is declared
> with type *"nest_path"*.
>
> Is this intentional, or should it be as follows?
>
>   <field name="_nest_path_" type="*_nest_path_*" />
>
> Also, should we explicitly set index=true and store=true on _nest_path_
> and _nest_parent_ fields?
>
>
>
> On Thu, Sep 17, 2020 at 1:17 PM Alexandre Rafalovitch 
> wrote:
>
> > Did you reindex the original document after you added a new field? If
> > not, then the previously indexed content is missing it and your code
> > paths will get out of sync.
> >
> > Regards,
> >Alex.
> > P.s. I haven't done what you are doing before, so there may be
> > something I am missing myself.
> >
> >
> > On Thu, 17 Sep 2020 at 12:46, Pratik Patel  wrote:
> > >
> > > Thanks for your reply Alexandre.
> > >
> > > I have "_root_" and "_nest_path_" fields in my schema but not
> > > "_nest_parent_".
> > >
> > >
> > > <field name="_root_" type="string" indexed="true" stored="false"
> > > docValues="false" />
> > > <fieldType name="_nest_path_" class="solr.NestPathField" />
> > >
> > > I ran my test after adding the "_nest_parent_" field and I am not getting
> > > NPE any more which is good. Thanks!
> > >
> > > But looking at the documents in the index, I see that after the atomic
> > > update, there are now two child documents with the same id. One
> > document
> > > has old values and another one has new values. Shouldn't they be merged
> > > based on the "id"? Do we need to specify anything else in the request to
> > > ensure that documents are merged/updated and not duplicated?
> > >
> > > For your reference, below is the test I am running now.
> > >
> > > // update field of one child doc
> > > SolrInputDocument sdoc = new SolrInputDocument(  );
> > > sdoc.addField( "id", testChildPOJO.id() );
> > > sdoc.addField( "conceptid", testChildPOJO.conceptid() );
> > > sdoc.addField( "storeid", "foo" );
> > > sdoc.setField( "fieldName",
> > > java.util.Collections.singletonMap("set", Collections.list("bar" ) ));
> > >
> > > final UpdateRequest req = new UpdateRequest();
> > > req.withRoute( pojo1.id() );// parent id
> > > req.add(sdoc);
> > >
> > > collection.client.request( req,
> > collection.getCollectionName()
> > > );
> > > collection.client.commit();
> > >
> > >
> > > Resulting documents :
> > >
> > > {id=c1_child1, conceptid=c1, storeid=s1,
> > fieldName=c1_child1_field_value1,
> > > startTime=Mon Sep 07 12:40:37 EDT 2020, integerField_iDF=10,
> > > booleanField_bDF=true, _root_=abcd, _version_=1678099970090074112}
> > > {id=c1_child1, conceptid=c1, storeid=foo, fieldName=bar, startTime=Mon
> > Sep
> > > 07 12:40:37 EDT 2020, integerField_iDF=10, booleanField_bDF=true,
> > > _root_=abcd, _version_=1678099970405695488}
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Thu, Sep 17, 2020 at 12:01 PM Alexandre Rafalovitch <
> > arafa...@gmail.com>
> > > wrote:
> > >
> > > > Can you double-check your schema to see if you have all the fields
> > > > required to support nested documents. You are supposed to get away
> > > > with just _root_, but really you should also include _nest_path_ and
> > > > _nest_parent_. Your particular exception seems to be triggering
> > > > something (maybe a bug) related to - possibly - missing _nest_path_
> > > > field.
> > > >
> > > > See:
> > > > https://lucene.apache.org/solr/guide/8_5/indexing-nested-documents.html#indexing-nested-documents
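Stepping back from the thread: the atomic-update body being discussed reduces to a map from field name to an operation map. A plain-Java sketch with no SolrJ dependency (the field name and value are illustrative, taken from the code quoted above):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AtomicUpdatePayload {
    // An atomic "set" maps the field name to a one-entry map {"set": newValues},
    // which the update request then serializes alongside the document id.
    static Map<String, Object> setField(String field, List<String> values) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put(field, Collections.singletonMap("set", values));
        return doc;
    }

    public static void main(String[] args) {
        System.out.println(setField("fieldName", List.of("bar")));
        // prints {fieldName={set=[bar]}}
    }
}
```

This is the same shape `sdoc.setField("fieldName", Collections.singletonMap("set", ...))` builds in the quoted test.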


Re: Pinging Solr

2020-09-18 Thread Alexandre Rafalovitch
Your builder parameter should go only up to the collection, so just
"http://testserver-dtv:8984/solr/cpsearch".
Then, on your Query object, you set
query.setRequestHandler("/select_cpsearch") as per
https://lucene.apache.org/solr/8_6_2/solr-solrj/org/apache/solr/client/solrj/SolrQuery.html#setRequestHandler-java.lang.String-
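A small sketch of why the handler must not be baked into the base URL: SolrJ appends the request handler path (default "/select") to the builder URL, so a base URL that already ends in the handler gets it doubled. Plain Java, using the URLs from this thread:

```java
public class SolrUrlCheck {
    // SolrJ resolves the final endpoint as baseUrl + handlerPath.
    static String endpoint(String baseUrl, String handlerPath) {
        return baseUrl + handlerPath;
    }

    public static void main(String[] args) {
        // Handler baked into the base URL: the default "/select" is appended anyway -> 404
        System.out.println(endpoint(
            "http://testserver-dtv:8984/solr/cpsearch/select_cpsearch", "/select"));
        // Base URL up to the collection, handler chosen via setRequestHandler -> correct
        System.out.println(endpoint(
            "http://testserver-dtv:8984/solr/cpsearch", "/select_cpsearch"));
    }
}
```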

I am not sure what is happening with your ping, but I also believe
that there is a definition by default in the latest Solr. You could
see all the definitions (including defaults) by using the Config API
(see https://lucene.apache.org/solr/guide/8_6/config-api.html).

Regards,
   Alex.

On Fri, 18 Sep 2020 at 15:18, Steven White  wrote:
>
> Hi Erick,
>
> I'm on Solr 8.6.1.  I did further debugging and just noticed that
> my search is not working now either (this is after I changed the request
> handler name from "select" to "select_cpsearch").  I have this very basic
> test code which I think reveals the issue:
>
> try
> {
> SolrClient solrClient = new HttpSolrClient.Builder(
> "http://testserver-dtv:8984/solr/cpsearch/select_cpsearch").build();
> SolrQuery query = new SolrQuery();
> query.set("q", "*");
> QueryResponse response = solrClient.query(query);
> }
> catch (Exception ex)
> {
> ex.printStackTrace();  // has this:
> "URI:/solr/cpsearch/select_cpsearch/select"
> }
>
> In the stack, there is this message (I'm showing the relevant part only):
>
> Error 404 Not Found
> HTTP ERROR 404 Not Found
> URI: /solr/cpsearch/select_cpsearch/select
>
> As you can see "select" got added to the URI.  I think this is the root
> cause for the ping issue too that I'm having, but even if it is not, I have
> to fix this search issue too but I don't know how to tell SolrJ to use my
> named search request handler.  Any ideas?
>
> Thanks.
>
> Steven
>
>
> On Fri, Sep 18, 2020 at 2:24 PM Erick Erickson 
> wrote:
>
> > This looks kind of confused. I’m assuming what you’re after is a way to get
> > to your select_cpsearch request handler to test if Solr is alive and
> > calling that
> > “ping”.
> >
> > The ping request handler is just that, a separate request handler that you
> > hit by going to
> > http://server:port/solr/admin/ping.
> >
> > It has nothing to do at all with your custom search handler and in recent
> > versions of
> > Solr is implicitly defined so it should just be there.
> >
> > Your custom handler is defined as
> > 
> >
> >


Re: Pinging Solr

2020-09-18 Thread Erick Erickson
Well, this doesn’t look right at all:

/solr/cpsearch/select_cpsearch/select

It should just be:
/solr/cpsearch/select_cpsearch

Best,
Erick

> On Sep 18, 2020, at 3:18 PM, Steven White  wrote:
> 
> /solr/cpsearch/select_cpsearch/select



Re: Pinging Solr

2020-09-18 Thread Steven White
Hi Erick,

I'm on Solr 8.6.1.  I did further debugging and just noticed that
my search is not working now either (this is after I changed the request
handler name from "select" to "select_cpsearch").  I have this very basic
test code which I think reveals the issue:

try
{
SolrClient solrClient = new HttpSolrClient.Builder(
"http://testserver-dtv:8984/solr/cpsearch/select_cpsearch").build();
SolrQuery query = new SolrQuery();
query.set("q", "*");
QueryResponse response = solrClient.query(query);
}
catch (Exception ex)
{
ex.printStackTrace();  // has this:
"URI:/solr/cpsearch/select_cpsearch/select"
}

In the stack, there is this message (I'm showing the relevant part only):

Error 404 Not Found
HTTP ERROR 404 Not Found
URI: /solr/cpsearch/select_cpsearch/select

As you can see "select" got added to the URI.  I think this is the root
cause for the ping issue too that I'm having, but even if it is not, I have
to fix this search issue too but I don't know how to tell SolrJ to use my
named search request handler.  Any ideas?

Thanks.

Steven


On Fri, Sep 18, 2020 at 2:24 PM Erick Erickson 
wrote:

> This looks kind of confused. I’m assuming what you’re after is a way to get
> to your select_cpsearch request handler to test if Solr is alive and
> calling that
> “ping”.
>
> The ping request handler is just that, a separate request handler that you
> hit by going to
> http://server:port/solr/admin/ping.
>
> It has nothing to do at all with your custom search handler and in recent
> versions of
> Solr is implicitly defined so it should just be there.
>
> Your custom handler is defined as
> 
>
>


Re: Pinging Solr

2020-09-18 Thread Erick Erickson
This looks kind of confused. I’m assuming what you’re after is a way to get
to your select_cpsearch request handler to test if Solr is alive and calling 
that
“ping”.

The ping request handler is just that, a separate request handler that you hit 
by going to 
http://server:port/solr/admin/ping.

It has nothing to do at all with your custom search handler and in recent 
versions of
Solr is implicitly defined so it should just be there.

Your custom handler is defined as 




Pinging Solr

2020-09-18 Thread Steven White
Hi everyone,

I'm using SolrJ to ping Solr.  This used to work just fine until I had
to change the name of my search request handler from the default "select"
to "select_cpsearch", i.e. I now have this:



I looked this up and the solution suggested on the internet is that I need to
add a ping request handler (which was missing), so I added it like so (and I
tried variations of it):

<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
  <lst name="invariants">
    <str name="echoParams">all</str>
    <str name="q">solrpingquery</str>
  </lst>
</requestHandler>

But no matter how I change this ping handler, I keep getting this error:

  http://garoush-dtv:8984/solr/cpsearch: Unknown RequestHandler (qt): null

When I add 'qt' to the ping handler like so:

<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
  <lst name="invariants">
    <str name="echoParams">all</str>
    <str name="q">solrpingquery</str>
    <str name="qt">CP_UNIQUE_FIELD</str>
  </lst>
</requestHandler>

I now get this error: http://garoush-dtv:8984/solr/cpsearch: Unknown
RequestHandler (qt): CP_UNIQUE_FIELD

Yes, "CP_UNIQUE_FIELD" is in my schema.

What am I missing here?  I cannot go back to the "select" request handler
and I need to be able to ping my collection.

Thanks,

Steven


Stop an async job submitted?

2020-09-18 Thread Jae Joo
HI,

Is there any way to stop a job running in async mode?

Thanks,


Fetching and Adding Issue Solr 8.6.2

2020-09-18 Thread Anuj Bhargava
When importing data from the *Solr Admin UI*:

When *Start, Rows* are not defined, it fetches but does not add -
Last Update: 17:11:54
Requests: 1 , Fetched: 1,750,700 , Skipped: 0 , Processed: 1,750,700

However, when *Start, Rows* are defined, it *fetches* and also *adds* -
Last Update: 17:19:04
Indexing completed. Added/Updated: 1750700 documents. Deleted 0 documents.
(Duration: 5m)
Requests: 1 , Fetched: 1,750,700 5,836/s, Skipped: 0 , Processed: 1,750,700
5,836/s

Attaching the Screenshots


SOLR 8.6.1 - Request Log - timeZone & Response Time

2020-09-18 Thread Doss
Hi,

We have upgraded our Solr instances to 8.6.1. We are trying to change the
timeZone and enable response time logging. Below is our configuration, but the
change is not reflecting, and adding this parameter set to true throws an
exception.

File: server/etc/jetty-requestlog.xml

[Jetty XML configuration; the element tags were lost in the archive. The
recoverable settings are: request log filename ending "_mm_dd.request.log",
filenameDateFormat "_MM_dd" (presumably "yyyy_MM_dd"), retainDays "90",
append "true", and time zone "IST".]

Sample Logs
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:53:11 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:53:22 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:53:33 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:53:44 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:53:55 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:54:06 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:54:17 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
0:0:0:0:0:0:0:1 - - [18/Sep/2020:07:54:28 +] "GET
/solr/admin/info/logging?_=1600414902762=0=json HTTP/1.1" 200 802
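For reference, Solr 8.x ships Jetty 9.4, whose stock jetty-requestlog.xml uses CustomRequestLog; there the log timestamp's time zone is chosen inside the %t directive of the format string rather than via a setter. A hedged sketch of that one line (the format codes follow Jetty's CustomRequestLog documentation; the Asia/Kolkata zone ID is an assumption for IST, and this is not the poster's verified fix):

```xml
<!-- CustomRequestLog format argument: %{format|timeZone|locale}t controls the timestamp -->
<Arg>%{client}a - %u [%{dd/MMM/yyyy:HH:mm:ss ZZZ|Asia/Kolkata|en}t] "%r" %s %O</Arg>
```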

Thanks.
Doss.


Re: Fetched but not Added Solr 8.6.2

2020-09-18 Thread Anuj Bhargava
Thanks Shawn, it worked.

However, I am getting the message - *The solrconfig.xml file for this index
does not have an operational DataImportHandler defined!*

The following lines are inserted in the solrconfig.xml:

<requestHandler name="/dataimport"
    class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
  </lst>
</requestHandler>

On Fri, 18 Sep 2020 at 15:20, Shawn Heisey  wrote:

> On 9/18/2020 1:27 AM, Anuj Bhargava wrote:
> > In managed schema, I have <field name="id" ...
> > stored="true" required="false" multiValued="false" />
> >
> > Still getting the following error-
> >
> > org.apache.solr.common.SolrException: Document is missing mandatory
> > uniqueKey field: id
>
> The problem is that the document that has been fetched with DIH does NOT
> have a field named id.  Because your schema has named the id field as
> uniqueKey, that field is required -- it *must* exist in any document
> that is indexed.
>
> Your DIH config suggests that the database has a field named posting_id
> ... perhaps your Solr schema should use that field as the uniqueKey
> instead?
>
> Thanks,
> Shawn
>


Autoscaling Rule for replica distribution across zones

2020-09-18 Thread Dominique Bejean
Hi,

I have a 4-node SolrCloud cluster. 2 nodes (solr1 and solr3) are started
with the parameter -Dzone=dc1 and the 2 other nodes (solr2 and solr4)
are started with the parameter -Dzone=dc2.

I want to create an Autoscaling placement rule in order to equally distribute
replicas of a shard across zones (never 2 replicas of a shard in the same
zone). According to the documentation, I created this rule:

{ "set-policy": { "policyzone": [ {"replica": "#EQUAL", "shard": "#EACH",
"sysprop.zone": ["dc1", "dc2"]} ] } }

I created a collection with 2 shards and 2 replicas, and the 4 cores were
created on the solr2 and solr4 nodes, so only in zone dc2.

What is wrong with my rule?

Regards.

Dominique Béjean


Re: Fetched but not Added Solr 8.6.2

2020-09-18 Thread Shawn Heisey

On 9/18/2020 1:27 AM, Anuj Bhargava wrote:

In managed schema, I have <field name="id" ... stored="true" required="false" multiValued="false" />

Still getting the following error-

org.apache.solr.common.SolrException: Document is missing mandatory
uniqueKey field: id


The problem is that the document that has been fetched with DIH does NOT 
have a field named id.  Because your schema has named the id field as 
uniqueKey, that field is required -- it *must* exist in any document 
that is indexed.


Your DIH config suggests that the database has a field named posting_id 
... perhaps your Solr schema should use that field as the uniqueKey instead?


Thanks,
Shawn
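A hedged sketch of the schema change Shawn suggests, making posting_id the uniqueKey (the string field type is an assumption; only the field name comes from the thread):

```xml
<field name="posting_id" type="string" indexed="true" stored="true"
       required="true" multiValued="false"/>
<uniqueKey>posting_id</uniqueKey>
```

After a change like this, every document fetched by DIH would satisfy the uniqueKey requirement without needing a separate id column.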


Re: How to remove duplicate tokens from solr

2020-09-18 Thread Rajdeep Sahoo
Hi all,
 I have found the details below on Stack Overflow but I am not sure how to
include the JAR. Can anyone help with this?


I've created a new filter class derived from "FilteringTokenFilter". The task
is pretty simple: I check for duplicates before adding a token to the list.

I have created a simple plugin: Eliminate Duplicate Words


To load the plugin, place the JAR files (along with EliminateDuplicate-*.jar,
which can be created by executing the mvn package command, or taken from
https://github.com/volkan/lucene-solr-filter-eliminateduplicate/tree/master/solr/lib)
in a lib directory in the Solr home directory. The location for the lib
directory is next to the solr.xml file.

On Fri, 18 Sep, 2020, 1:04 am Rajdeep Sahoo, 
wrote:

> But I am not sure why this type of search string is causing high CPU
> utilization.
>
> On Fri, 18 Sep, 2020, 12:49 am Rahul Goswami, 
> wrote:
>
>> Is this for a phrase search? If yes, then the position of the token would
>> matter too, and it is not clear which token you would want to remove, e.g.
>> "tshirt hat tshirt".
>> Also, are you looking to save space and want this at index time? Or just
>> want to remove duplicates from the search string?
>>
>> If this is at search time AND is not a phrase search, there are a couple
>> approaches I could think of :
>>
>> 1) You could either handle this in the application layer to only pass the
>> deduplicated string before it hits solr
>> 2) You can write a custom search component and configure it in the
>> first-components list to process the search string and remove duplicates
>> before it hits the default search components. See here (
>>
>> https://lucene.apache.org/solr/guide/7_7/requesthandlers-and-searchcomponents-in-solrconfig.html#first-components-and-last-components
>> ).
>>
>> However if for search, I would still evaluate if writing those extra lines
>> of code is worth the investment. I say so since my assumption is that for
>> duplicated tokens in search string, lucene would have the intelligence to
>> not fetch the doc ids again, so you should not be worried about spending
>> computation resources to reevaluate the same tokens (Someone correct me if
>> I am wrong!)
>>
>> -Rahul
>>
>> On Thu, Sep 17, 2020 at 2:56 PM Rajdeep Sahoo > >
>> wrote:
>>
>> > If someone is searching with " tshirt tshirt tshirt tshirt tshirt
>> tshirt"
>> > we need to remove the duplicates and search with tshirt.
>> >
>> >
>> > On Fri, 18 Sep, 2020, 12:19 am Alexandre Rafalovitch, <
>> arafa...@gmail.com>
>> > wrote:
>> >
>> > > This is not quite enough information.
>> > > There is
>> > >
>> >
>> https://lucene.apache.org/solr/guide/8_6/filter-descriptions.html#remove-duplicates-token-filter
>> > > but it has specific limitations.
>> > >
>> > > What is the problem that you are trying to solve that you feel is due
>> > > to duplicate tokens? Why are they duplicates? Is it about storage or
>> > > relevancy?
>> > >
>> > > Regards,
>> > >Alex.
>> > >
>> > > On Thu, 17 Sep 2020 at 14:35, Rajdeep Sahoo <
>> rajdeepsahoo2...@gmail.com>
>> > > wrote:
>> > > >
>> > > > Hi team,
>> > > >  Is there any way to remove duplicate tokens from solr. Is there any
>> > > filter
>> > > > for this.
>> > >
>> >
>>
>
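Rahul's first option, deduplicating the query string in the application layer before it reaches Solr, can be sketched in plain Java (whitespace tokenization is an assumption; this ignores phrase syntax and other query operators):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DedupQuery {
    // Split on whitespace and keep only the first occurrence of each token,
    // preserving the original token order.
    static String dedup(String query) {
        Set<String> seen = new LinkedHashSet<>(Arrays.asList(query.trim().split("\\s+")));
        return String.join(" ", seen);
    }

    public static void main(String[] args) {
        System.out.println(dedup("tshirt tshirt tshirt tshirt tshirt tshirt"));
        // prints tshirt
        System.out.println(dedup("tshirt hat tshirt"));
        // prints tshirt hat
    }
}
```

The LinkedHashSet keeps insertion order, so the rewritten query stays in the order the user typed it.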


Re: Fetched but not Added Solr 8.6.2

2020-09-18 Thread Anuj Bhargava
In managed schema, I have 

Still getting the following error-

org.apache.solr.common.SolrException: Document is missing mandatory
uniqueKey field: id
at
org.apache.solr.update.AddUpdateCommand.getIndexedId(AddUpdateCommand.java:124)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:279)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:225)
at
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:106)
at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:80)
at
org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:272)
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:531)
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:419)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:334)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:234)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:427)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:486)
at
org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:469)
at java.lang.Thread.run(Thread.java:748)

On Thu, 17 Sep 2020 at 15:53, Jörn Franke  wrote:

> Log file will tell you the issue.
>
> > Am 17.09.2020 um 10:54 schrieb Anuj Bhargava :
> >
> > We just installed Solr 8.6.2
> > It is fetching the data but not adding
> >
> > Indexing completed. *Added/Updated: 0 *documents. Deleted 0 documents.
> > (Duration: 06s)
> > Requests: 1 ,* Fetched: 100* 17/s, Skipped: 0 , Processed: 0
> >
> > The *data-config.xml*
> >
> > <dataConfig>
> >   <dataSource driver="com.mysql.jdbc.Driver"
> >       batchSize="-1"
> >       autoReconnect="true"
> >       socketTimeout="0"
> >       connectTimeout="0"
> >       encoding="UTF-8"
> >       url="jdbc:mysql://...zeroDateTimeBehavior=convertToNull"
> >       user="xxx"
> >       password="xxx"/>
> >   <document>
> >     <entity name="..." query="..."
> >         deltaQuery="select posting_id from countries where
> > last_modified > '${dataimporter.last_index_time}'">
> >       ...
> >     </entity>
> >   </document>
> > </dataConfig>
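For comparison, a minimal complete data-config sketch (column names other than posting_id are illustrative) in which the database key is explicitly mapped onto the schema's uniqueKey field, the mismatch Shawn diagnoses elsewhere in this digest:

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost/dbname"
              user="xxx" password="xxx"/>
  <document>
    <entity name="countries"
            query="select posting_id, country_name from countries">
      <!-- map the DB primary key onto the schema's uniqueKey field -->
      <field column="posting_id" name="id"/>
      <field column="country_name" name="country_name"/>
    </entity>
  </document>
</dataConfig>
```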
>