Re: NPE when using timeAllowed in the /export handler

2017-01-21 Thread radha krishnan
Can you give some estimate of when timeAllowed and the /export handler will
be compatible? Without this, we cannot use timeAllowed with the map_reduce
mode of the /sql handler, right?
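
For context, the /sql handler's map_reduce mode shuffles sorted tuples
through /export under the covers, which is why the /export limitation
matters here. The kind of request we want to run looks roughly like this
(host, collection, and statement are made up, and the stmt value would be
URL-encoded in practice):

http://localhost:8983/solr/large_core/sql?aggregationMode=map_reduce&stmt=select logtype from large_core order by logtype asc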

Thanks,
Radhakrishnan D



Re: NPE when using timeAllowed in the /export handler

2017-01-21 Thread Joel Bernstein
I'm pretty sure that timeAllowed and the /export handler are not currently
compatible.

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jan 20, 2017 at 8:57 PM, radha krishnan 
wrote:

> Hi,
>
> I am trying to query a core with 60 million docs, specifying timeAllowed as
> 100 ms just to test the timeAllowed feature.
>
> This is the query
>
> http://10.20.132.162:8983/solr/large_core/export?indent=on&q=*:*&distrib=false&fl=logtype&timeAllowed=100&sort=logtype+asc&wt=json&version=2.2
>
> When I ran it in the browser, I got the NPE below.
>
> The /export query had 17 million hits, but the NPE was thrown after
> /export was called.
>
> Can you tell me if anything is wrong with the query, or whether there is a
> known bug for this NPE?
>
> HTTP ERROR 500
>
> Problem accessing /solr/logs_core_new/export. Reason:
>
> {trace=java.lang.NullPointerException
> at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
> at org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:176)
> at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:728)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> ,code=500}
>
> RELATED SERVER LOGS
>
> 2017-01-21 01:55:12.257 WARN  (qtp1397616978-21) [   x:logs_core_new]
> org.apache.solr.search.SolrIndexSearcher Query: []; Elapsed time: 120.
> Exceeded allowed search time: 100 ms.
>
> 2017-01-21 01:55:12.258 INFO  (qtp1397616978-21) [   x:logs_core_new]
> org.apache.solr.core.SolrCore.Request [logs_core_new]  webapp=/solr
> path=/export params={q=*:*&distrib=false&indent=on&fl=vmw_vr_ops_logtype&timeAllowed=100&sort=vmw_vr_ops_logtype+asc&wt=json&version=2.2}
> hits=17579862 status=0 QTime=120
>
> 2017-01-21 01:55:12.718 ERROR (qtp1397616978-21) [   x:logs_core_new]
> org.apache.solr.servlet.HttpSolrCall
> null:java.lang.NullPointerException
>
> at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
>
> at org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:176)
>
> at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:728)
>
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)

Re: Streams return default values for fields that don't exist in the document

2017-01-21 Thread Joel Bernstein
Also take a look at:
http://joelsolr.blogspot.com/2016/10/solr-63-batch-jobs-parallel-etl-and.html

It describes a very flexible approach to batch re-indexing jobs.
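
As a rough sketch, the kind of expression the post describes looks like
this (collection and field names are made up):

update(collection2, batchSize=250,
       search(collection1,
              q="*:*",
              fl="id,field_a,field_b",
              sort="id asc",
              qt="/export"))

Sent to the /stream handler, it reads the full sorted /export stream from
collection1 and indexes the tuples into collection2 in batches.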

Joel Bernstein
http://joelsolr.blogspot.com/



Re: Streams return default values for fields that don't exist in the document

2017-01-21 Thread Yago Riveiro
6.3.0

I will try again with 6.4.0

Thanks, Erick

--

/Yago Riveiro



Re: Streams return default values for fields that don't exist in the document

2017-01-21 Thread Erick Erickson
What version of Solr? See: https://issues.apache.org/jira/browse/SOLR-9166

Best,
Erick



Streams return default values for fields that don't exist in the document

2017-01-21 Thread Yago Riveiro
I'm trying to use the streaming API to reindex data from one collection to
another.

I have a lot of dynamic fields on my documents, and not every document has
the same fields, so to get the list of fields that exist in the
collection I need to run a Luke query to fetch all of them.
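
The Luke request I run for that looks roughly like this (host and
collection are placeholders):

http://localhost:8983/solr/collection1/admin/luke?numTerms=0&wt=json

numTerms=0 skips the top-terms computation, and the fields section of the
response lists every field actually present in the index, including the
dynamic ones.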

I ran the stream with fl set to all the fields returned by the Luke query,
and all the documents returned had the same fields; so far so good.

The main problem is that a field that doesn't exist in a document comes
back filled with the default value of the field type, which is not OK.

If the field doesn't exist in the document, the returned value shouldn't be
the default value; it should be something we can identify as "this field
doesn't exist in this document".

Right now I have docs with two integer fields with value 0: one indeed
belongs to the document and was indexed with 0 as the correct value; the
other doesn't exist in the source document.

Why not return the value as null? When indexing, a field with a null value
is ignored; the reverse operation should behave the same way.



-
Best regards

/Yago
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Streams-return-default-values-for-fields-that-doesn-t-exist-in-the-document-tp4315229.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: A collection gone missing: uninteresting collection

2017-01-21 Thread Chetas Joshi
Is this visible in the logs? I mean, how do I find out that a "DELETE
collection" API call was made?

Does the following indicate that the API call was made?

2017-01-20 20:42:39,822 INFO org.apache.solr.cloud.ShardLeaderElectionContextBase: Removing leader registration node on cancel: /collections/3044_01_17/leaders/shard4/leader 9

2017-01-20 20:42:39,832 INFO org.apache.solr.cloud.ElectionContext: Canceling election /collections/3044_01_17/leader_elect/shard4/election/241183598302995297-core_node3-n_08

2017-01-20 20:42:39,833 INFO org.apache.solr.common.cloud.ZkStateReader: Removing watch for uninteresting collection [3044_01_17]

  "core":"3044_01_17_shard4_replica1",

  "collection":"3044_01_17",


I am confused because the logs only mention shard4, not all the shards of
the collection.


Re: A collection gone missing: uninteresting collection

2017-01-21 Thread Erick Erickson
Looks like someone issued a Collections API DELETE command.
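
For reference, such a delete goes through the Collections API, e.g. (host
is made up):

http://localhost:8983/solr/admin/collections?action=DELETE&name=3044_01_17

The overseer fans that out as per-core UNLOAD calls with
deleteInstanceDir=true and deleteDataDir=true, which is exactly what shows
up near the end of the logs below. So, assuming default log locations,
grepping every node's logs should confirm it:

grep "action=DELETE" /var/solr/logs/solr.log*
grep "action=UNLOAD" /var/solr/logs/solr.log*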

On Fri, Jan 20, 2017 at 9:56 PM, Chetas Joshi  wrote:
> Hello,
>
> I have been running Solr (5.5.0) on HDFS.
>
> Recently a collection just went missing, with all the instanceDirs and
> dataDirs getting deleted. The following logs are from the SolrCloud overseer.
>
> 2017-01-20 20:42:39,515 INFO org.apache.solr.core.SolrCore:
> [3044_01_17_shard4_replica1]  CLOSING SolrCore
> org.apache.solr.core.SolrCore@2e2e0a23
>
> 2017-01-20 20:42:39,665 INFO org.apache.solr.core.SolrCore:
> [3044_01_17_shard4_replica1] Closing main searcher on request.
>
> 2017-01-20 20:42:39,690 INFO org.apache.solr.core.CachingDirectoryFactory:
> looking to close hdfs://Ingest/solr53/3044_01_17/core_node3/data
> [CachedDir<>]
>
> 2017-01-20 20:42:39,690 INFO org.apache.solr.core.CachingDirectoryFactory:
> looking to close hdfs://Ingest/solr53/3044_01_17/core_node3/data/index
> [CachedDir<>,
> CachedDir<>]
>
> 2017-01-20 20:42:39,690 INFO org.apache.solr.core.CachingDirectoryFactory:
> Closing directory: hdfs://Ingest/solr53/3044_01_17/core_node3/data/index
>
> 2017-01-20 20:42:39,712 INFO org.apache.solr.store.hdfs.HdfsDirectory:
> Closing hdfs directory hdfs://Ingest/solr53/3044_01_17/core_node3/data/index
>
> 2017-01-20 20:42:39,713 INFO org.apache.solr.core.CachingDirectoryFactory:
> Closing directory: hdfs://Ingest/solr53/3044_01_17/core_node3/data
>
> 2017-01-20 20:42:39,713 INFO org.apache.solr.store.hdfs.HdfsDirectory:
> Closing hdfs directory hdfs://Ingest/solr53/3044_01_17/core_node3/data
>
> 2017-01-20 20:42:39,713 INFO org.apache.solr.core.CachingDirectoryFactory:
> looking to close hdfs://Ingest/solr53/3044_01_17/core_node3/data []
>
> 2017-01-20 20:42:39,713 INFO org.apache.solr.core.CachingDirectoryFactory:
> Removing directory after core close:
> hdfs://Ingest/solr53/3044_01_17/core_node3/data
>
>   "core":"3044_01_17_shard13_replica1",
>
>   "collection":"3044_01_17",
>
> 2017-01-20 20:42:39,780 INFO org.apache.solr.cloud.overseer.ZkStateWriter:
> going to update_collection /collections/3044_01_17/state.json version: 2164
>
> 2017-01-20 20:42:39,782 INFO org.apache.solr.common.cloud.ZkStateReader: A
> cluster state change: [WatchedEvent state:SyncConnected
> type:NodeDataChanged path:/collections/3044_01_17/state.json] for
> collection [3044_01_17] has occurred - updating... (live nodes size: [85])
>
> 2017-01-20 20:42:39,783 INFO org.apache.solr.common.cloud.ZkStateReader:
> Updating data for [3044_01_17] from [2164] to [2165]
>
>   "core":"3044_01_17_shard32_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard22_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard27_replica1",
>
>   "collection":"3044_01_17",
>
> 2017-01-20 20:42:39,822 INFO
> org.apache.solr.cloud.ShardLeaderElectionContextBase: Removing leader
> registration node on cancel: /collections/3044_01_17/leaders/shard4/leader 9
>
> 2017-01-20 20:42:39,832 INFO org.apache.solr.cloud.ElectionContext:
> Canceling election
> /collections/3044_01_17/leader_elect/shard4/election/241183598302995297-core_node3-n_08
>
> 2017-01-20 20:42:39,833 INFO org.apache.solr.common.cloud.ZkStateReader:
> Removing watch for uninteresting collection [3044_01_17]
>
>   "core":"3044_01_17_shard4_replica1",
>
>   "collection":"3044_01_17",
>
> 2017-01-20 20:42:39,852 INFO org.apache.solr.servlet.HttpSolrCall: [admin]
> webapp=null path=/admin/cores
> params={deleteInstanceDir=true&action=UNLOAD&core=3044_01_17_shard4_replica1&wt=javabin&qt=/admin/cores&deleteDataDir=true&version=2}
> status=0 QTime=341
>
> 2017-01-20 20:42:39,938 INFO org.apache.solr.cloud.overseer.ZkStateWriter:
> going to update_collection /collections/3044_01_17/state.json version: 2165
>
> 2017-01-20 20:42:39,940 INFO org.apache.solr.common.cloud.ZkStateReader:
> Uninteresting collection [3044_01_17]
>
>   "core":"3044_01_17_shard14_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard35_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard11_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard3_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard1_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard2_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard5_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard42_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard28_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard45_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard8_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard49_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard9_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard34_replica1",
>
>   "collection":"3044_01_17",
>
>   "core":"3044_01_17_shard25_replica1",
>
>   "c

Re: DIH does not work. Child entity cannot refer to parent's id.

2017-01-21 Thread Keiichi MORITA
Hi Shawn,

Thank you for the helpful information and suggestions.

> Are you using the Oracle JVM?  This is recommended.  Version 1.8.x (Java
> 8) 
> is required for Solr 6.3.0.

I'm using Oracle Java 8 (1.8.0_111).

In response to your advice, I've changed the logging level for
JdbcDataSource to DEBUG.
It looks like the root entity's query was run.
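
For reference, the line I added to the stock log4j.properties was roughly:

log4j.logger.org.apache.solr.handler.dataimport.JdbcDataSource=DEBUG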

> Executing SQL: select book_id, title from books where deleted = 0

(excerpt)
2017-01-21 13:39:07.782 INFO  (Thread-16) [   x:bookstore]
o.a.s.h.d.SimplePropertiesWriter Read dataimport.properties
2017-01-21 13:39:07.803 INFO  (Thread-16) [   x:bookstore]
o.a.s.s.SolrIndexSearcher Opening [Searcher@32c6746f[bookstore] realtime]
2017-01-21 13:39:07.807 INFO  (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Creating a connection for entity books with URL:
jdbc:oracle:thin:@
2017-01-21 13:39:09.980 INFO  (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Time taken for getConnection(): 2164
2017-01-21 13:39:09.980 DEBUG (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Executing SQL: select book_id, title from books
where deleted = 0
2017-01-21 13:39:10.041 INFO  (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Creating a connection for entity contents with URL:
jdbc:oracle:thin:@
2017-01-21 13:39:10.640 INFO  (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Time taken for getConnection(): 598 
2017-01-21 13:39:10.640 DEBUG (Thread-16) [   x:bookstore]
o.a.s.h.d.JdbcDataSource Executing SQL: select book_id, content_id, content
from contents where book_id =  and deleted = 0 
2017-01-21 13:39:10.750 ERROR (Thread-16) [   x:bookstore]
o.a.s.h.d.DocBuilder Exception while processing: books document :
SolrInputDocument(fields: [book_id=1,
title=First]):org.apache.solr.handler.dataimport.DataImportHandlerException:
Unable to execute query: select book_id, content_id, content from contents
where book_id =  and deleted = 0 Processing Document # 1
(/excerpt)


When I enclosed the property ${books.book_id} in single quotes, the
condition "where book_id = '' and deleted = 0" was applied, so the query
could not hit the child entity.
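
Reconstructed from the queries in the log, the relevant part of my DIH
config looks roughly like this (data source details omitted):

<document>
  <entity name="books"
          query="select book_id, title from books where deleted = 0">
    <entity name="contents"
            query="select book_id, content_id, content from contents where book_id = ${books.book_id} and deleted = 0"/>
  </entity>
</document>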

> It is possible that your DIH config contains invisible characters in the 
> section where the property is referenced, that is causing the property 
> to not be correctly inserted.  I don't know how likely this is, but it 
> is something that could happen.  This can usually be seen in a hex editor. 

I see. I checked that the config files contain no invisible characters.
I've now created a fresh (vanilla) core and applied the settings again, so
the chance of stray characters should be low.

I then compared my environment with others.

1. Not only my virtual server but also Amazon RDS for Oracle gave the same
results.
2. Using MySQL, the same config and queries work (changing only the data
types of the DB columns).


If possible, I'd like to test other versions of Solr (6.2, 6.1, ...).

Kind regards,
Keiichi




--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-do-not-work-Child-entity-cannot-refer-parent-s-id-tp4315023p4315192.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solrj similarity setting

2017-01-21 Thread Markus Jelsma
No, this is not possible. A similarity is an index-time setting because it
can have index-time properties. There is no way to control this at query time.
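
A custom similarity is instead wired up in the schema, e.g. (the class
name is hypothetical):

<similarity class="com.example.MyCustomSimilarity"/>

That can be set globally in schema.xml, or per field type inside a
<fieldType> element (the per-field-type form requires
solr.SchemaSimilarityFactory as the global similarity).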

M

 
 


Solrj similarity setting

2017-01-21 Thread Farida Sabry
Is there a way to set the similarity in a SolrJ query search like we do
for a Lucene IndexSearcher, e.g. searcher.setSimilarity(sim)?
I need to define a custom similarity and at the same time get the fields
provided by SolrInputDocument in the returned results via
SolrDocumentList results = response.getResults() (response being the QueryResponse).

Any clues how to do that?