Automatic conversion to Range Query

2017-05-04 Thread Aman Deep Singh
Hi,
I'm facing an issue when querying Solr.
My query is "xiomi Mi 5 -white [64GB/ 3GB]",
while my search field definition is:

[field type definition stripped by the mailing-list archive]
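(For context, a typical text fieldType with synonym filtering, which would
produce Synonym(...) clauses like the ones in the parsed query below, looks
roughly like this; a generic sketch, not the poster's actual definition:)

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>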

My generated query is


+(((Synonym(nameSearch:xiaomi nameSearch:xiomi)) (nameSearch:mi)
(nameSearch:5) -(Synonym(nameSearch:putih
nameSearch:white))*(nameSearch:[64gb/ TO 3gb])*)~4)


Now, due to the automatic conversion of the bracketed term into a range
query, I'm not able to find the expected results.
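
One common workaround (not from this thread; just a sketch, assuming the
query string is built client-side with SolrJ) is to escape the fragment that
must stay literal before it reaches the parser:

import org.apache.solr.client.solrj.util.ClientUtils;

public class EscapeBrackets {
    public static void main(String[] args) {
        // Escape the spec fragment so edismax treats '[', ']' and '/'
        // as literal text rather than range-query syntax.
        String literal = ClientUtils.escapeQueryChars("[64GB/ 3GB]");
        // Keep real operators (like -white) outside the escaped fragment,
        // since escapeQueryChars also escapes '-' and whitespace.
        String q = "xiomi Mi 5 -white " + literal;
        System.out.println(q); // xiomi Mi 5 -white \[64GB\/\ 3GB\]
    }
}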


Solr Version-6.4.2

Parser- edismax

Thanks,

Aman Deep Singh


Re: Export endpoint broken in solr 6.5.1?

2017-05-04 Thread Joel Bernstein
Ok, I suspect the changes in the config happened with this ticket:

https://issues.apache.org/jira/browse/SOLR-9721

So I think you just need to take the new ImplicitPlugins.json to get the
latest configs. Also check to make sure the /export handler is not
referenced in the solrconfig.

SOLR-9721 allows you to specify wt=javabin in the search expression, for a
20% performance improvement when shuffling.
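For example, the expression quoted below would become something like this
(a sketch; wt is the parameter SOLR-9721 added, and "data" is the collection
from the thread):

search(data, q="*:*", fl="id", sort="id asc", qt="/export", wt="javabin")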



Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 4, 2017 at 5:00 PM, Yago Riveiro  wrote:

> An older build that was upgraded from 6.3.0 to 6.5.1.
>
> The configs used in 6.3.0 are the same as those used in 6.5.1, without changes.
>
> Should I update my configs?
>
> --
>
> /Yago Riveiro
>
> On 4 May 2017, 21:45 +0100, Joel Bernstein , wrote:
> > Did this error come from a standard 6.5.1 build, or from a build that was
> > upgraded to 6.5.1 with older config files?
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Thu, May 4, 2017 at 1:57 PM, Yago Riveiro 
> wrote:
> >
> > > I'm trying to run this streaming expression
> > >
> > > search(data,qt="/export",q="*:*",fl="id",sort="id asc")
> > >
> > > and I'm hitting this exception:
> > >
> > > 2017-05-04 17:24:05.156 ERROR (qtp1937348256-378) [c:data s:shard7
> > > r:core_node38 x:data_shard7_replica1] o.a.s.c.s.i.s.ExceptionStream
> > > java.io.IOException: java.util.concurrent.ExecutionException:
> > > java.io.IOException: --> http://solr-node-1:8983/solr/
> > > data_shard2_replica1/:
> > > An exception has occurred on the server, refer to server log for
> details.
> > > at
> > > org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > > openStreams(CloudSolrStream.java:451)
> > > at
> > > org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > > open(CloudSolrStream.java:308)
> > > at
> > > org.apache.solr.client.solrj.io.stream.ExceptionStream.
> > > open(ExceptionStream.java:51)
> > > at
> > > org.apache.solr.handler.StreamHandler$TimerStream.
> > > open(StreamHandler.java:490)
> > > at
> > > org.apache.solr.client.solrj.io.stream.TupleStream.
> > > writeMap(TupleStream.java:78)
> > > at
> > > org.apache.solr.response.JSONWriter.writeMap(
> JSONResponseWriter.java:547)
> > > at
> > > org.apache.solr.response.TextResponseWriter.writeVal(
> > > TextResponseWriter.java:193)
> > > at
> > > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> > > JSONResponseWriter.java:209)
> > > at
> > > org.apache.solr.response.JSONWriter.writeNamedList(
> > > JSONResponseWriter.java:325)
> > > at
> > > org.apache.solr.response.JSONWriter.writeResponse(
> > > JSONResponseWriter.java:120)
> > > at
> > > org.apache.solr.response.JSONResponseWriter.write(
> > > JSONResponseWriter.java:71)
> > > at
> > > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> > > QueryResponseWriterUtil.java:65)
> > > at
> > > org.apache.solr.servlet.HttpSolrCall.writeResponse(
> HttpSolrCall.java:809)
> > > at org.apache.solr.servlet.HttpSolrCall.call(
> > > HttpSolrCall.java:538)
> > > at
> > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > > SolrDispatchFilter.java:347)
> > > at
> > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > > SolrDispatchFilter.java:298)
> > > at
> > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > > doFilter(ServletHandler.java:1691)
> > > at
> > > org.eclipse.jetty.servlet.ServletHandler.doHandle(
> ServletHandler.java:582)
> > > at
> > > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > > ScopedHandler.java:143)
> > > at
> > > org.eclipse.jetty.security.SecurityHandler.handle(
> > > SecurityHandler.java:548)
> > > at
> > > org.eclipse.jetty.server.session.SessionHandler.
> > > doHandle(SessionHandler.java:226)
> > > at
> > > org.eclipse.jetty.server.handler.ContextHandler.
> > > doHandle(ContextHandler.java:1180)
> > > at
> > > org.eclipse.jetty.servlet.ServletHandler.doScope(
> ServletHandler.java:512)
> > > at
> > > org.eclipse.jetty.server.session.SessionHandler.
> > > doScope(SessionHandler.java:185)
> > > at
> > > org.eclipse.jetty.server.handler.ContextHandler.
> > > doScope(ContextHandler.java:1112)
> > > at
> > > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > > ScopedHandler.java:141)
> > > at
> > > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> > > ContextHandlerCollection.java:213)
> > > at
> > > org.eclipse.jetty.server.handler.HandlerCollection.
> > > handle(HandlerCollection.java:119)
> > > at
> > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > > HandlerWrapper.java:134)
> > > at
> > > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(
> > > RewriteHandler.java:335)
> > > at
> > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > > HandlerWrapper.java:134)
> > > at org.eclipse.jetty.server.Server.handle(Server.java:534)
> > > at org.eclipse.jetty.server.HttpChannel.handle(
> > > HttpChannel.java:320)
> > > at
> > > 

Re: Joining more than 2 collections

2017-05-04 Thread Zheng Lin Edwin Yeo
Hi Joel,

Yes, the /export works after I remove the /export handler from
solrconfig.xml. Thanks for the advice.

For *:*, there will be results returned when using /export.
But if one of the queries is *:*, does this mean the entire result set will
contain all the records from the query which has *:*?

Regards,
Edwin


On 5 May 2017 at 01:46, Joel Bernstein  wrote:

> No, *:* will simply return all the results from one of the queries. It
> should still join properly. If you are using the /select handler, joins will
> not work properly.
>
>
> This example worked properly for me:
>
> hashJoin(
>   parallel(collection2,
>            workers=3,
>            sort="id asc",
>            innerJoin(
>              search(collection2, q="*:*", fl="id", sort="id asc",
>                     qt="/export", partitionKeys="id"),
>              search(collection2, q="year_i:42", fl="id, year_i", sort="id asc",
>                     qt="/export", partitionKeys="id"),
>              on="id")),
>   hashed=search(collection2, q="day_i:7", fl="id, day_i", sort="id asc",
>                 qt="/export"),
>   on="id")
>
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, May 4, 2017 at 12:28 PM, Zheng Lin Edwin Yeo  >
> wrote:
>
> > Hi Joel,
> >
> > For the join queries, is it true that if we use q=*:* for the query for one
> > of the joins, there will not be any results returned?
> >
> > Currently I find this is the case if I just put q=*:*.
> >
> > Regards,
> > Edwin
> >
> >
> > On 4 May 2017 at 23:38, Zheng Lin Edwin Yeo 
> wrote:
> >
> > > Hi Joel,
> > >
> > > I think that might be one of the reasons.
> > > This is what I have for the /export handler in my solrconfig.xml:
> > >
> > > <requestHandler name="/export" class="solr.SearchHandler">
> > >   <lst name="invariants">
> > >     <str name="rq">{!xport}</str>
> > >     <str name="wt">xsort</str>
> > >     <str name="distrib">false</str>
> > >   </lst>
> > >   <arr name="components">
> > >     <str>query</str>
> > >   </arr>
> > > </requestHandler>
> > >
> > > This is the error message that I get when I use the /export handler.
> > >
> > > java.io.IOException: java.util.concurrent.ExecutionException:
> > > java.io.IOException: --> http://localhost:8983/solr/
> > > collection1_shard1_replica1/: An exception has occurred on the server,
> > > refer to server log for details.
> > > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > > openStreams(CloudSolrStream.java:451)
> > > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > > open(CloudSolrStream.java:308)
> > > at org.apache.solr.client.solrj.io.stream.PushBackStream.open(
> > > PushBackStream.java:70)
> > > at org.apache.solr.client.solrj.io.stream.JoinStream.open(
> > > JoinStream.java:147)
> > > at org.apache.solr.client.solrj.io.stream.ExceptionStream.
> > > open(ExceptionStream.java:51)
> > > at org.apache.solr.handler.StreamHandler$TimerStream.
> > > open(StreamHandler.java:457)
> > > at org.apache.solr.client.solrj.io.stream.TupleStream.
> > > writeMap(TupleStream.java:63)
> > > at org.apache.solr.response.JSONWriter.writeMap(
> > > JSONResponseWriter.java:547)
> > > at org.apache.solr.response.TextResponseWriter.writeVal(
> > > TextResponseWriter.java:193)
> > > at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> > > JSONResponseWriter.java:209)
> > > at org.apache.solr.response.JSONWriter.writeNamedList(
> > > JSONResponseWriter.java:325)
> > > at org.apache.solr.response.JSONWriter.writeResponse(
> > > JSONResponseWriter.java:120)
> > > at org.apache.solr.response.JSONResponseWriter.write(
> > > JSONResponseWriter.java:71)
> > > at org.apache.solr.response.QueryResponseWriterUtil.
> writeQueryResponse(
> > > QueryResponseWriterUtil.java:65)
> > > at org.apache.solr.servlet.HttpSolrCall.writeResponse(
> > > HttpSolrCall.java:732)
> > > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
> > > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > > SolrDispatchFilter.java:345)
> > > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > > SolrDispatchFilter.java:296)
> > > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > > doFilter(ServletHandler.java:1691)
> > > at org.eclipse.jetty.servlet.ServletHandler.doHandle(
> > > ServletHandler.java:582)
> > > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > > ScopedHandler.java:143)
> > > at org.eclipse.jetty.security.SecurityHandler.handle(
> > > SecurityHandler.java:548)
> > > at org.eclipse.jetty.server.session.SessionHandler.
> > > doHandle(SessionHandler.java:226)
> > > at org.eclipse.jetty.server.handler.ContextHandler.
> > > doHandle(ContextHandler.java:1180)
> > > at org.eclipse.jetty.servlet.ServletHandler.doScope(
> > > ServletHandler.java:512)
> > > at org.eclipse.jetty.server.session.SessionHandler.
> > > doScope(SessionHandler.java:185)
> > > at org.eclipse.jetty.server.handler.ContextHandler.
> > > doScope(ContextHandler.java:1112)
> > > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > > ScopedHandler.java:141)
> > > at 

solr authentication error

2017-05-04 Thread Satya Marivada
Hi,


Can someone please tell me what I am missing in this case? I have Solr
6.3.0 with HTTP authentication enabled, and the configuration has been
uploaded to ZooKeeper. But I do see the error below in the logs sometimes.
Are the nodes not able to communicate because of this error? I am not
seeing any functionality loss.

Authentication for the admin screen works great.


In solr.in.sh, should I set SOLR_AUTHENTICATION_OPTS=""?
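
For reference, a minimal security.json for the BasicAuthPlugin looks like
this (a sketch of the stock documentation example; the hash shown is for the
well-known solr:SolrRocks credentials, not real ones):

{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": { "solr": "admin" },
    "permissions": [ { "name": "security-edit", "role": "admin" } ]
  }
}

As far as I know, once an authentication plugin is active, inter-node
requests are signed by the PKIAuthenticationPlugin rather than by anything
in SOLR_AUTHENTICATION_OPTS, so a 401 between nodes usually points at the
security.json setup rather than at solr.in.sh.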



There was a problem making a request to the
leader:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at https://:15111/solr: Expected mime type
application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/admin/cores. Reason:
require authentication



at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:561)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1647)
at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:471)
at org.apache.solr.cloud.ZkController.access$500(ZkController.java:119)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:335)
at 
org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:168)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:57)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:142)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)


Re: Export endpoint broken in solr 6.5.1?

2017-05-04 Thread Yago Riveiro
An older build that was upgraded from 6.3.0 to 6.5.1.

The configs used in 6.3.0 are the same as those used in 6.5.1, without changes.

Should I update my configs?

--

/Yago Riveiro

On 4 May 2017, 21:45 +0100, Joel Bernstein , wrote:
> Did this error come from a standard 6.5.1 build, or from a build that was
> upgraded to 6.5.1 with older config files?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, May 4, 2017 at 1:57 PM, Yago Riveiro  wrote:
>
> > I'm trying to run this streaming expression
> >
> > search(data,qt="/export",q="*:*",fl="id",sort="id asc")
> >
> > and I'm hitting this exception:
> >
> > 2017-05-04 17:24:05.156 ERROR (qtp1937348256-378) [c:data s:shard7
> > r:core_node38 x:data_shard7_replica1] o.a.s.c.s.i.s.ExceptionStream
> > java.io.IOException: java.util.concurrent.ExecutionException:
> > java.io.IOException: --> http://solr-node-1:8983/solr/
> > data_shard2_replica1/:
> > An exception has occurred on the server, refer to server log for details.
> > at
> > org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > openStreams(CloudSolrStream.java:451)
> > at
> > org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > open(CloudSolrStream.java:308)
> > at
> > org.apache.solr.client.solrj.io.stream.ExceptionStream.
> > open(ExceptionStream.java:51)
> > at
> > org.apache.solr.handler.StreamHandler$TimerStream.
> > open(StreamHandler.java:490)
> > at
> > org.apache.solr.client.solrj.io.stream.TupleStream.
> > writeMap(TupleStream.java:78)
> > at
> > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
> > at
> > org.apache.solr.response.TextResponseWriter.writeVal(
> > TextResponseWriter.java:193)
> > at
> > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> > JSONResponseWriter.java:209)
> > at
> > org.apache.solr.response.JSONWriter.writeNamedList(
> > JSONResponseWriter.java:325)
> > at
> > org.apache.solr.response.JSONWriter.writeResponse(
> > JSONResponseWriter.java:120)
> > at
> > org.apache.solr.response.JSONResponseWriter.write(
> > JSONResponseWriter.java:71)
> > at
> > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> > QueryResponseWriterUtil.java:65)
> > at
> > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
> > at org.apache.solr.servlet.HttpSolrCall.call(
> > HttpSolrCall.java:538)
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:347)
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:298)
> > at
> > org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > doFilter(ServletHandler.java:1691)
> > at
> > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> > at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:143)
> > at
> > org.eclipse.jetty.security.SecurityHandler.handle(
> > SecurityHandler.java:548)
> > at
> > org.eclipse.jetty.server.session.SessionHandler.
> > doHandle(SessionHandler.java:226)
> > at
> > org.eclipse.jetty.server.handler.ContextHandler.
> > doHandle(ContextHandler.java:1180)
> > at
> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> > at
> > org.eclipse.jetty.server.session.SessionHandler.
> > doScope(SessionHandler.java:185)
> > at
> > org.eclipse.jetty.server.handler.ContextHandler.
> > doScope(ContextHandler.java:1112)
> > at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:141)
> > at
> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> > ContextHandlerCollection.java:213)
> > at
> > org.eclipse.jetty.server.handler.HandlerCollection.
> > handle(HandlerCollection.java:119)
> > at
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > HandlerWrapper.java:134)
> > at
> > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(
> > RewriteHandler.java:335)
> > at
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > HandlerWrapper.java:134)
> > at org.eclipse.jetty.server.Server.handle(Server.java:534)
> > at org.eclipse.jetty.server.HttpChannel.handle(
> > HttpChannel.java:320)
> > at
> > org.eclipse.jetty.server.HttpConnection.onFillable(
> > HttpConnection.java:251)
> > at
> > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> > AbstractConnection.java:273)
> > at org.eclipse.jetty.io.FillInterest.fillable(
> > FillInterest.java:95)
> > at
> > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
> > SelectChannelEndPoint.java:93)
> > at
> > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> > executeProduceConsume(ExecuteProduceConsume.java:303)
> > at
> > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> > produceConsume(ExecuteProduceConsume.java:148)
> > at
> > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
> > ExecuteProduceConsume.java:136)
> > at
> > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> > 

Re: Export endpoint broken in solr 6.5.1?

2017-05-04 Thread Joel Bernstein
Did this error come from a standard 6.5.1 build, or from a build that was
upgraded to 6.5.1 with older config files?

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 4, 2017 at 1:57 PM, Yago Riveiro  wrote:

> I'm trying to run this streaming expression
>
> search(data,qt="/export",q="*:*",fl="id",sort="id asc")
>
> and I'm hitting this exception:
>
> 2017-05-04 17:24:05.156 ERROR (qtp1937348256-378) [c:data s:shard7
> r:core_node38 x:data_shard7_replica1] o.a.s.c.s.i.s.ExceptionStream
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: --> http://solr-node-1:8983/solr/
> data_shard2_replica1/:
> An exception has occurred on the server, refer to server log for details.
> at
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> openStreams(CloudSolrStream.java:451)
> at
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:308)
> at
> org.apache.solr.client.solrj.io.stream.ExceptionStream.
> open(ExceptionStream.java:51)
> at
> org.apache.solr.handler.StreamHandler$TimerStream.
> open(StreamHandler.java:490)
> at
> org.apache.solr.client.solrj.io.stream.TupleStream.
> writeMap(TupleStream.java:78)
> at
> org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
> at
> org.apache.solr.response.TextResponseWriter.writeVal(
> TextResponseWriter.java:193)
> at
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> JSONResponseWriter.java:209)
> at
> org.apache.solr.response.JSONWriter.writeNamedList(
> JSONResponseWriter.java:325)
> at
> org.apache.solr.response.JSONWriter.writeResponse(
> JSONResponseWriter.java:120)
> at
> org.apache.solr.response.JSONResponseWriter.write(
> JSONResponseWriter.java:71)
> at
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> QueryResponseWriterUtil.java:65)
> at
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
> at org.apache.solr.servlet.HttpSolrCall.call(
> HttpSolrCall.java:538)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:347)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:298)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> doFilter(ServletHandler.java:1691)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(
> SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.
> doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
> doHandle(ContextHandler.java:1180)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at
> org.eclipse.jetty.server.session.SessionHandler.
> doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
> doScope(ContextHandler.java:1112)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.
> handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:134)
> at
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(
> RewriteHandler.java:335)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(
> HttpChannel.java:320)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(
> HttpConnection.java:251)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(
> FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
> SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> executeProduceConsume(ExecuteProduceConsume.java:303)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> produceConsume(ExecuteProduceConsume.java:148)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
> ExecuteProduceConsume.java:136)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> QueuedThreadPool.java:671)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(
> QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 

Re: Underlying file changed by an external force

2017-05-04 Thread Erick Erickson
You need to look at all of your core.properties files and see if any
of them point to the same data directory.
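
A quick way to check (a sketch; adjust /var/solr to your actual Solr home,
and note that dataDir only appears in core.properties when it has been
explicitly overridden, otherwise the core defaults to its own data/ dir):

find /var/solr -name core.properties -print -exec grep -H dataDir {} \;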

Second: if you issue a "kill -9" you can leave write locks lingering.

Best,
Erick

On Thu, May 4, 2017 at 11:00 AM, Oakley, Craig (NIH/NLM/NCBI) [C]
 wrote:
> We have been having problems with different collections on different
> SolrCloud clusters, all seemingly related to the write.lock file, with
> stack traces similar to the following. Are there any suggestions as to what
> might be the cause and what the solution might be? Thanks
>
>
> org.apache.lucene.store.AlreadyClosedException: Underlying file changed by an 
> external force at 2017-04-13T20:43:08.630152Z, 
> (lock=NativeFSLock(path=/data/solr/biosample/dba_test_shard1_replica1/data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive valid],ctime=2017-04-13T20:43:08.630152Z))
>
>at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:179)
>
>at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)
>
>at 
> org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:732)
>
>at 
> org.apache.lucene.index.IndexFileDeleter.deletePendingFiles(IndexFileDeleter.java:503)
>
>at 
> org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:448)
>
>at 
> org.apache.lucene.index.IndexWriter.rollbackInternalNoCommit(IndexWriter.java:2099)
>
>at 
> org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2041)
>
>at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1083)
>
>at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1125)
>
>at 
> org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:131)
>
>at 
> org.apache.solr.update.DefaultSolrCoreState.changeWriter(DefaultSolrCoreState.java:183)
>
>at 
> org.apache.solr.update.DefaultSolrCoreState.newIndexWriter(DefaultSolrCoreState.java:207)
>
>at org.apache.solr.core.SolrCore.reload(SolrCore.java:472)
>
>at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:849)
>
>at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:768)
>
>at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:230)
>
>at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:184)
>
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>
>at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)
>
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)
>
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>
>at org.eclipse.jetty.server.Server.handle(Server.java:499)
>
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>
>at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>
>at java.lang.Thread.run(Thread.java:745)
>


Underlying file changed by an external force

2017-05-04 Thread Oakley, Craig (NIH/NLM/NCBI) [C]
We have been having problems with different collections on different SolrCloud
clusters, all seemingly related to the write.lock file, with stack traces
similar to the following. Are there any suggestions as to what might be the
cause and what the solution might be? Thanks


org.apache.lucene.store.AlreadyClosedException: Underlying file changed by an 
external force at 2017-04-13T20:43:08.630152Z, 
(lock=NativeFSLock(path=/data/solr/biosample/dba_test_shard1_replica1/data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
 exclusive valid],ctime=2017-04-13T20:43:08.630152Z))

   at 
org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:179)

   at 
org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:37)

   at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:732)

   at 
org.apache.lucene.index.IndexFileDeleter.deletePendingFiles(IndexFileDeleter.java:503)

   at 
org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:448)

   at 
org.apache.lucene.index.IndexWriter.rollbackInternalNoCommit(IndexWriter.java:2099)

   at 
org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2041)

   at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1083)

   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1125)

   at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:131)

   at 
org.apache.solr.update.DefaultSolrCoreState.changeWriter(DefaultSolrCoreState.java:183)

   at 
org.apache.solr.update.DefaultSolrCoreState.newIndexWriter(DefaultSolrCoreState.java:207)

   at org.apache.solr.core.SolrCore.reload(SolrCore.java:472)

   at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:849)

   at 
org.apache.solr.handler.admin.CoreAdminHandler.handleReloadAction(CoreAdminHandler.java:768)

   at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:230)

   at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:184)

   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)

   at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)

   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:438)

   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)

   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)

   at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)

   at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)

   at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)

   at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)

   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)

   at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)

   at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)

   at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)

   at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)

   at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)

   at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)

   at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)

   at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)

   at org.eclipse.jetty.server.Server.handle(Server.java:499)

   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)

   at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)

   at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)

   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

   at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)

   at java.lang.Thread.run(Thread.java:745)



Export endpoint broken in solr 6.5.1?

2017-05-04 Thread Yago Riveiro
I'm trying to run this streaming expression

search(data,qt="/export",q="*:*",fl="id",sort="id asc")

and I'm hitting this exception:

2017-05-04 17:24:05.156 ERROR (qtp1937348256-378) [c:data s:shard7
r:core_node38 x:data_shard7_replica1] o.a.s.c.s.i.s.ExceptionStream
java.io.IOException: java.util.concurrent.ExecutionException:
java.io.IOException: --> http://solr-node-1:8983/solr/data_shard2_replica1/:
An exception has occurred on the server, refer to server log for details.
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:451)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:308)
at
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
at
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:490)
at
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
at
org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
at
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
at
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
at
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
at
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
at
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: -->
http://solr-node-1:8983/solr/data_shard2_replica1/: An exception has
occurred on the server, refer to server log for details.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:445)
... 42 more
Caused by: java.io.IOException: -->

Re: Joining more than 2 collections

2017-05-04 Thread Joel Bernstein
No, *:* will simply return all the results from one of the queries. It
should still join properly. If you are using the /select handler, joins will
not work properly.


This example worked properly for me:

hashJoin(
  parallel(collection2,
           workers=3,
           sort="id asc",
           innerJoin(
             search(collection2, q="*:*", fl="id", sort="id asc",
                    qt="/export", partitionKeys="id"),
             search(collection2, q="year_i:42", fl="id, year_i", sort="id asc",
                    qt="/export", partitionKeys="id"),
             on="id")),
  hashed=search(collection2, q="day_i:7", fl="id, day_i", sort="id asc",
                qt="/export"),
  on="id")




Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 4, 2017 at 12:28 PM, Zheng Lin Edwin Yeo 
wrote:

> Hi Joel,
>
> For the join queries, is it true that if we use q=*:* for the query for one
> of the joins, there will not be any results returned?
>
> Currently I find this is the case if I just put q=*:*.
>
> Regards,
> Edwin
>
>
> On 4 May 2017 at 23:38, Zheng Lin Edwin Yeo  wrote:
>
> > Hi Joel,
> >
> > I think that might be one of the reasons.
> > This is what I have for the /export handler in my solrconfig.xml:
> >
> > <requestHandler name="/export" class="solr.SearchHandler">
> >   <lst name="invariants">
> >     <str name="rq">{!xport}</str>
> >     <str name="wt">xsort</str>
> >     <str name="distrib">false</str>
> >   </lst>
> >   <arr name="components">
> >     <str>query</str>
> >   </arr>
> > </requestHandler>
> >
> > This is the error message that I get when I use the /export handler.
> >
> > java.io.IOException: java.util.concurrent.ExecutionException:
> > java.io.IOException: --> http://localhost:8983/solr/
> > collection1_shard1_replica1/: An exception has occurred on the server,
> > refer to server log for details.
> > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > openStreams(CloudSolrStream.java:451)
> > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > open(CloudSolrStream.java:308)
> > at org.apache.solr.client.solrj.io.stream.PushBackStream.open(
> > PushBackStream.java:70)
> > at org.apache.solr.client.solrj.io.stream.JoinStream.open(
> > JoinStream.java:147)
> > at org.apache.solr.client.solrj.io.stream.ExceptionStream.
> > open(ExceptionStream.java:51)
> > at org.apache.solr.handler.StreamHandler$TimerStream.
> > open(StreamHandler.java:457)
> > at org.apache.solr.client.solrj.io.stream.TupleStream.
> > writeMap(TupleStream.java:63)
> > at org.apache.solr.response.JSONWriter.writeMap(
> > JSONResponseWriter.java:547)
> > at org.apache.solr.response.TextResponseWriter.writeVal(
> > TextResponseWriter.java:193)
> > at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> > JSONResponseWriter.java:209)
> > at org.apache.solr.response.JSONWriter.writeNamedList(
> > JSONResponseWriter.java:325)
> > at org.apache.solr.response.JSONWriter.writeResponse(
> > JSONResponseWriter.java:120)
> > at org.apache.solr.response.JSONResponseWriter.write(
> > JSONResponseWriter.java:71)
> > at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> > QueryResponseWriterUtil.java:65)
> > at org.apache.solr.servlet.HttpSolrCall.writeResponse(
> > HttpSolrCall.java:732)
> > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
> > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:345)
> > at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> > SolrDispatchFilter.java:296)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> > doFilter(ServletHandler.java:1691)
> > at org.eclipse.jetty.servlet.ServletHandler.doHandle(
> > ServletHandler.java:582)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:143)
> > at org.eclipse.jetty.security.SecurityHandler.handle(
> > SecurityHandler.java:548)
> > at org.eclipse.jetty.server.session.SessionHandler.
> > doHandle(SessionHandler.java:226)
> > at org.eclipse.jetty.server.handler.ContextHandler.
> > doHandle(ContextHandler.java:1180)
> > at org.eclipse.jetty.servlet.ServletHandler.doScope(
> > ServletHandler.java:512)
> > at org.eclipse.jetty.server.session.SessionHandler.
> > doScope(SessionHandler.java:185)
> > at org.eclipse.jetty.server.handler.ContextHandler.
> > doScope(ContextHandler.java:1112)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> > ScopedHandler.java:141)
> > at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> > ContextHandlerCollection.java:213)
> > at org.eclipse.jetty.server.handler.HandlerCollection.
> > handle(HandlerCollection.java:119)
> > at org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> > HandlerWrapper.java:134)
> > at org.eclipse.jetty.server.Server.handle(Server.java:534)
> > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> > at org.eclipse.jetty.server.HttpConnection.onFillable(
> > HttpConnection.java:251)
> > at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> > AbstractConnection.java:273)
> > at 

Re: Joining more than 2 collections

2017-05-04 Thread Zheng Lin Edwin Yeo
Hi Joel,

For the join queries, is it true that if we use q=*:* for the query for one
of the joins, there will not be any results returned?

Currently I find this is the case if I just put q=*:*.

Regards,
Edwin


On 4 May 2017 at 23:38, Zheng Lin Edwin Yeo  wrote:

> Hi Joel,
>
> I think that might be one of the reasons.
> This is what I have for the /export handler in my solrconfig.xml:
>
> <requestHandler name="/export" class="solr.SearchHandler">
>   <lst name="invariants">
>     <str name="rq">{!xport}</str>
>     <str name="wt">xsort</str>
>     <str name="distrib">false</str>
>   </lst>
>   <arr name="components">
>     <str>query</str>
>   </arr>
> </requestHandler>
>
> This is the error message that I get when I use the /export handler.
>
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: --> http://localhost:8983/solr/
> collection1_shard1_replica1/: An exception has occurred on the server,
> refer to server log for details.
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> openStreams(CloudSolrStream.java:451)
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:308)
> at org.apache.solr.client.solrj.io.stream.PushBackStream.open(
> PushBackStream.java:70)
> at org.apache.solr.client.solrj.io.stream.JoinStream.open(
> JoinStream.java:147)
> at org.apache.solr.client.solrj.io.stream.ExceptionStream.
> open(ExceptionStream.java:51)
> at org.apache.solr.handler.StreamHandler$TimerStream.
> open(StreamHandler.java:457)
> at org.apache.solr.client.solrj.io.stream.TupleStream.
> writeMap(TupleStream.java:63)
> at org.apache.solr.response.JSONWriter.writeMap(
> JSONResponseWriter.java:547)
> at org.apache.solr.response.TextResponseWriter.writeVal(
> TextResponseWriter.java:193)
> at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> JSONResponseWriter.java:209)
> at org.apache.solr.response.JSONWriter.writeNamedList(
> JSONResponseWriter.java:325)
> at org.apache.solr.response.JSONWriter.writeResponse(
> JSONResponseWriter.java:120)
> at org.apache.solr.response.JSONResponseWriter.write(
> JSONResponseWriter.java:71)
> at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(
> HttpSolrCall.java:732)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:345)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:296)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> doFilter(ServletHandler.java:1691)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(
> ServletHandler.java:582)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(
> SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.
> doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.
> doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(
> ServletHandler.java:512)
> at org.eclipse.jetty.server.session.SessionHandler.
> doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.
> doScope(ContextHandler.java:1112)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> ContextHandlerCollection.java:213)
> at org.eclipse.jetty.server.handler.HandlerCollection.
> handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(
> HttpConnection.java:251)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
> SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> executeProduceConsume(ExecuteProduceConsume.java:303)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> produceConsume(ExecuteProduceConsume.java:148)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
> ExecuteProduceConsume.java:136)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> QueuedThreadPool.java:671)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(
> QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> --> http://localhost:8983/solr/collection1_shard1_replica1/: An exception
> has occurred on the server, refer to server log for details.
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 

Re: Indexing I/O errors and CorruptIndex messages

2017-05-04 Thread Rick Leir
Simon,
After hearing about the weird time issue in EC2, I am going to ask if you
have a real server handy for testing. No, I have no hard facts; this is just
a suggestion.

And I have no beef with AWS; they have served me really well for other servers.
Cheers -- Rick
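
For anyone following the thread: Lucene's CheckIndex tool (which Simon
mentions running further down) can be pointed straight at an index
directory, along these lines (a sketch; the jar and index paths are
placeholders for your install, and it is safest to run it against a
snapshot or an offline core):

java -cp server/solr-webapp/webapp/WEB-INF/lib/lucene-core-6.3.0.jar \
  org.apache.lucene.index.CheckIndex /indexes/solrindexes/build0324/index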

On May 4, 2017 10:49:25 AM EDT, simon  wrote:
>I've pretty much ruled out system/hardware issues - the AWS instance has
>been rebooted, and indexing to a core on a new and empty disk/file system
>fails in the same way with a CorruptIndexException.
>I can generally get indexing to complete by significantly dialing down the
>number of indexer scripts running concurrently, but the duration goes up
>proportionately.
>
>-Simon
>
>
>On Thu, Apr 27, 2017 at 9:26 AM, simon  wrote:
>
>> Nope ... huge file system (600gb) only 50% full, and a complete index
>> would be 80gb max.
>>
>> On Wed, Apr 26, 2017 at 4:04 PM, Erick Erickson
>
>> wrote:
>>
>>> Disk space issue? Lucene requires at least as much free disk space
>as
>>> your index size. Note that the disk full issue will be transient,
>IOW
>>> if you look now and have free space it still may have been all used
>up
>>> but had some space reclaimed.
>>>
>>> Best,
>>> Erick
>>>
>>> On Wed, Apr 26, 2017 at 12:02 PM, simon  wrote:
>>> > reposting this as the problem described is happening again and
>there
>>> were
>>> > no responses to the original email. Anyone ?
>>> > 
>>> > I'm seeing an odd error during indexing for which I can't find any
>>> reason.
>>> >
>>> > The relevant solr log entry:
>>> >
>>> > 2017-03-24 19:09:35.363 ERROR (commitScheduler-30-thread-1) [
>>> > x:build0324] o.a.s.u.CommitTracker auto commit
>>> > error...:java.io.EOFException: read past EOF:
>>> MMapIndexInput(path="/
>>> > indexes/solrindexes/build0324/index/_4ku.fdx")
>>> >  at org.apache.lucene.store.ByteBufferIndexInput.readByte(
>>> > ByteBufferIndexInput.java:75)
>>> > ...
>>> > Suppressed: org.apache.lucene.index.CorruptIndexException:
>checksum
>>> > status indeterminate: remaining=0, please run checkindex for more
>>> details
>>> > (resource= BufferedChecksumIndexInput(MM
>>> apIndexInput(path="/indexes/
>>> > solrindexes/build0324/index/_4ku.fdx")))
>>> >  at org.apache.lucene.codecs.CodecUtil.checkFooter(
>>> > CodecUtil.java:451)
>>> >  at org.apache.lucene.codecs.compressing.
>>> > CompressingStoredFieldsReader.(CompressingStoredFields
>>> Reader.java:140)
>>> >  followed within a few seconds by
>>> >
>>> >  2017-03-24 19:09:56.402 ERROR (commitScheduler-31-thread-1) [
>>> > x:build0324] o.a.s.u.CommitTracker auto commit
>>> > error...:org.apache.solr.common.SolrException:
>>> > Error opening new searcher
>>> > at
>org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:
>>> 1820)
>>> > at
>org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1931)
>>> > ...
>>> > Caused by: java.io.EOFException: read past EOF:
>>> >
>MMapIndexInput(path="/indexes/solrindexes/build0324/index/_4ku.fdx")
>>> > at org.apache.lucene.store.ByteBufferIndexInput.readByte(
>>> > ByteBufferIndexInput.java:75)
>>> >
>>> > This error is repeated a few times as the indexing continued and
>further
>>> > autocommits were triggered.
>>> >
>>> > I stopped the indexing process, made a backup snapshot of the
>index,
>>> >  restarted indexing at a checkpoint, and everything then completed
>>> without
>>> > further incidents
>>> >
>>> > I ran checkIndex on the saved snapshot and it reported no errors
>>> > whatsoever. Operations on the complete index (inclcuing an
>optimize and
>>> > several query scripts) have all been error-free.
>>> >
>>> > Some background:
>>> >  Solr information from the beginning of the checkindex output:
>>> >  ---
>>> >  Opening index @ /indexes/solrindexes/build0324.bad/index
>>> >
>>> > Segments file=segments_9s numSegments=105 version=6.3.0
>>> > id=7m1ldieoje0m6sljp7xocbz9l
>userData={commitTimeMSec=1490400514324}
>>> >   1 of 105: name=_be maxDoc=1227144
>>> > version=6.3.0
>>> > id=7m1ldieoje0m6sljp7xocburb
>>> > codec=Lucene62
>>> > compound=false
>>> > numFiles=14
>>> > size (MB)=4,926.186
>>> > diagnostics = {os=Linux, java.vendor=Oracle Corporation,
>>> > java.version=1.8.0_45, java.vm.version=25.45-b02,
>lucene.version=6.3.0,
>>> > mergeMaxNumSegments=-1, os.arch=amd64,
>java.runtime.version=1.8.0_45-
>>> b13,
>>> > source=merge, mergeFactor=19,
>os.version=3.10.0-229.1.2.el7.x86_64,
>>> > timestamp=1490380905920}
>>> > no deletions
>>> > test: open reader.OK [took 0.176 sec]
>>> > test: check integrity.OK [took 37.399 sec]
>>> > test: check live docs.OK [took 0.000 sec]
>>> > test: field infos.OK [49 fields] [took 0.000 sec]
>>> > test: field norms.OK [17 fields] [took 0.030 sec]
>>> > test: terms, freq, prox...OK [14568108 terms; 612537186
>terms/docs

Re: Joining more than 2 collections

2017-05-04 Thread Joel Bernstein
Yeah, the newest configurations are in ImplicitPlugins.json. So in the
standard release now there is nothing about the /export handler in the
solrconfig.
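
If you want to double-check which definition a node is actually using, the
Config API can show the effective handler, implicit ones included (a sketch;
the collection name is a placeholder):

curl "http://localhost:8983/solr/collection1/config/requestHandler?componentName=/export&expandParams=true"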

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 4, 2017 at 11:38 AM, Zheng Lin Edwin Yeo 
wrote:

> Hi Joel,
>
> I think that might be one of the reasons.
> This is what I have for the /export handler in my solrconfig.xml:
>
> <requestHandler name="/export" class="solr.SearchHandler">
>   <lst name="invariants">
>     <str name="rq">{!xport}</str>
>     <str name="wt">xsort</str>
>     <str name="distrib">false</str>
>   </lst>
>   <arr name="components">
>     <str>query</str>
>   </arr>
> </requestHandler>
>
> This is the error message that I get when I use the /export handler.
>
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: -->
> http://localhost:8983/solr/collection1_shard1_replica1/: An exception has
> occurred on the server, refer to server log for details.
> at
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> openStreams(CloudSolrStream.java:451)
> at
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:308)
> at
> org.apache.solr.client.solrj.io.stream.PushBackStream.open(
> PushBackStream.java:70)
> at
> org.apache.solr.client.solrj.io.stream.JoinStream.open(
> JoinStream.java:147)
> at
> org.apache.solr.client.solrj.io.stream.ExceptionStream.
> open(ExceptionStream.java:51)
> at
> org.apache.solr.handler.StreamHandler$TimerStream.
> open(StreamHandler.java:457)
> at
> org.apache.solr.client.solrj.io.stream.TupleStream.
> writeMap(TupleStream.java:63)
> at org.apache.solr.response.JSONWriter.writeMap(
> JSONResponseWriter.java:547)
> at
> org.apache.solr.response.TextResponseWriter.writeVal(
> TextResponseWriter.java:193)
> at
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(
> JSONResponseWriter.java:209)
> at
> org.apache.solr.response.JSONWriter.writeNamedList(
> JSONResponseWriter.java:325)
> at
> org.apache.solr.response.JSONWriter.writeResponse(
> JSONResponseWriter.java:120)
> at
> org.apache.solr.response.JSONResponseWriter.write(
> JSONResponseWriter.java:71)
> at
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(
> QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(
> HttpSolrCall.java:732)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:345)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(
> SolrDispatchFilter.java:296)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.
> doFilter(ServletHandler.java:1691)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(
> SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.
> doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
> doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(
> ServletHandler.java:512)
> at
> org.eclipse.jetty.server.session.SessionHandler.
> doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.
> doScope(ContextHandler.java:1112)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
> ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.
> handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(
> HttpConnection.java:251)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
> AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
> SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> executeProduceConsume(ExecuteProduceConsume.java:303)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> produceConsume(ExecuteProduceConsume.java:148)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
> ExecuteProduceConsume.java:136)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
> QueuedThreadPool.java:671)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(
> QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> --> http://localhost:8983/solr/collection1_shard1_replica1/: An exception
> has occurred on the server, refer to server log for details.
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at 

Re: Joining more than 2 collections

2017-05-04 Thread Zheng Lin Edwin Yeo
Hi Joel,

I think that might be one of the reasons.
This is what I have for the /export handler in my solrconfig.xml:

<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="rq">{!xport}</str>
    <str name="wt">xsort</str>
    <str name="distrib">false</str>
  </lst>
  <arr name="components">
    <str>query</str>
  </arr>
</requestHandler>

This is the error message that I get when I use the /export handler.

java.io.IOException: java.util.concurrent.ExecutionException:
java.io.IOException: -->
http://localhost:8983/solr/collection1_shard1_replica1/: An exception has
occurred on the server, refer to server log for details.
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:451)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:308)
at
org.apache.solr.client.solrj.io.stream.PushBackStream.open(PushBackStream.java:70)
at
org.apache.solr.client.solrj.io.stream.JoinStream.open(JoinStream.java:147)
at
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
at
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:457)
at
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:63)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
at
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
at
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
at
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
at
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
at
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:732)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
--> http://localhost:8983/solr/collection1_shard1_replica1/: An exception
has occurred on the server, refer to server log for details.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:445)
... 42 more
Caused by: java.io.IOException: -->
http://localhost:8983/solr/collection1_shard1_replica1/: An exception has
occurred on the server, refer to server log for details.
at
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:238)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream$TupleWrapper.next(CloudSolrStream.java:541)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream$StreamOpener.call(CloudSolrStream.java:564)
at

Re: in-place atomic updates for numeric docValue field

2017-05-04 Thread Dan .
Hi Emir,

Yes, I thought of using -1 to represent null, but this makes the index
unnecessarily large, particularly if we have to default all docs to this
value.

Cheers,
Dan

On 4 May 2017 at 15:16, Emir Arnautovic 
wrote:

> Hi Dan,
>
> Remove does not make sense when it comes to in-place updates of docValues
> - it has to have some value, so the only thing that you can do is introduce
> some int value as null.
>
> HTH,
> Emir
>
>
>
> On 04.05.2017 15:40, Dan . wrote:
>
>> Hi,
>>
>> I have a field like this:
>>
>> <field name="popularity" type="int" indexed="false" stored="false"
>> docValues="true" multiValued="false"/>
>>
>> so I can do fast in-place atomic updates
>>
>> However if I do e.g.
>>
>> curl -H 'Content-Type: application/json'
>> 'http://localhost:8983/solr/collection/update?commit=true'
>> --data-binary '
>> [{
>>   "id":"my_id",
>>   "popularity":{"set":null}
>> }]'
>>
>> then I'd expect the popularity field to be removed, however it's not.
>>
>> Is this a bug? Or is there a known workaround for this for in-place atomic
>> updates?
>>
>> Cheers,
>> Dan
>>
>>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>


Re: in-place atomic updates for numeric docValue field

2017-05-04 Thread Dan .
Hi Shawn,

Thanks for the suggestion.

I gave that a try but unfortunately it didn't work.

A delete option would still be really useful; it seems wasteful to have e.g. -1
representing null.

Cheers,
Dan

On 4 May 2017 at 15:30, Shawn Heisey  wrote:

> On 5/4/2017 7:40 AM, Dan . wrote:
> > I have a field like this:
> >
> > <field name="popularity" type="int" indexed="false" stored="false"
> > docValues="true" multiValued="false"/>
> >
> > so I can do fast in-place atomic updates
> >
> > However if I do e.g.
> >
> > curl -H 'Content-Type: application/json'
> > 'http://localhost:8983/solr/collection/update?commit=true'
> > --data-binary '
> > [{
> >  "id":"my_id",
> >  "popularity":{"set":null}
> > }]'
> >
> > then I'd expect the popularity field to be removed, however it's not.
>
> I'm not really sure how that "null" value will be interpreted.  It's
> entirely possible that this won't actually delete the field.
>
> I think we need a "delete" action for Atomic Updates, to entirely remove
> the field regardless of what it currently contains.  There is "remove"
> and "removeRegex", which MIGHT be enough, but I think delete would be
> useful syntactic sugar.
>
> Dan, can you give the following update JSON a try instead?  I am not
> guaranteeing that this will do the job, but given the current
> functionality, I think this is the option most likely to work:
>
> {
>  "id":"my_id",
>  "popularity":{"removeRegex":".*"}
> }
>
> Thanks,
> Shawn
>
>


Re: Joining more than 2 collections

2017-05-04 Thread Joel Bernstein
I suspect that there is something not quite right about how the /export
handler is configured. Straight out of the box in Solr 6.4.2, /export is
configured automatically. Are you using a Solr instance that has been
upgraded in the past and doesn't have the standard 6.4.2 configs?
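
For reference, a registered /export handler can be exercised directly with a
plain HTTP request; a minimal sketch with a hypothetical host and collection
(the fl/sort fields must have docValues):

curl "http://localhost:8983/solr/collection1/export?q=*:*&sort=id+asc&fl=id"

If that request itself fails, the problem is in the handler config rather
than in the join expressions.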

To really do joins properly you'll have to use the /export handler, because
/select will not stream entire result sets (unless they are pretty small).
So your results may be missing data.
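
The difference is visible in the expression itself; a sketch with
hypothetical collection and field names, where the first form streams the
entire sorted result set and the second stops after the requested rows:

search(collection1, q="*:*", fl="id", sort="id asc", qt="/export")

search(collection1, q="*:*", fl="id", sort="id asc", qt="/select", rows=1000)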

I would take a close look at the logs and see what all the exceptions are
when you run a search using qt=/export. If you can post all the stack
traces that get generated when you run the search, we'll probably be able to
spot the issue.

About the field ordering. There is support for field ordering in the
Streaming classes but only a few places actually enforce the order. The 6.5
SQL interface does keep the fields in order as does the new Tuple
expression in Solr 6.6. But the expressions you are working with currently
don't enforce field ordering.
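
For comparison, a sketch of the 6.5 SQL route mentioned above, which does
keep the projected columns in order (hypothetical collection and field
names):

curl --data-urlencode 'stmt=SELECT id, field_a, field_b FROM collection1 ORDER BY id asc LIMIT 10' http://localhost:8983/solr/collection1/sql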




Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 4, 2017 at 2:41 AM, Zheng Lin Edwin Yeo 
wrote:

> Hi Joel,
>
> I have managed to get the Join to work, but so far it is only working when
> I use qt="/select". It is not working when I use qt="/export".
>
> For the display of the fields, is there a way to list them in
> the order that I want?
> Currently, the display is quite random, and I can get a field in
> collection1, followed by a field in collection3, then collection1 again,
> and then collection2.
>
> It would be good if we could arrange the fields to display in the order
> that we want.
>
> Regards,
> Edwin
>
>
>
> On 4 May 2017 at 09:56, Zheng Lin Edwin Yeo  wrote:
>
> > Hi Joel,
> >
> > It works when I started off with just one expression.
> >
> > Could it be that the data size is too big for export after the join,
> > which causes the error?
> >
> > Regards,
> > Edwin
> >
> > On 4 May 2017 at 02:53, Joel Bernstein  wrote:
> >
> >> I was just testing with the query below and it worked for me. Some of the
> >> error messages I was getting with the syntax were not what I was expecting
> >> though, so I'll look into the error handling. But the joins do work when
> >> the syntax is correct. The query below is joining to the same collection
> >> three times, but the mechanics are exactly the same joining three
> >> different tables. In this example each join narrows down the result set.
> >>
> >> hashJoin(parallel(collection2,
> >>                   workers=3,
> >>                   sort="id asc",
> >>                   innerJoin(search(collection2, q="*:*", fl="id", sort="id asc", qt="/export", partitionKeys="id"),
> >>                             search(collection2, q="year_i:42", fl="id, year_i", sort="id asc", qt="/export", partitionKeys="id"),
> >>                             on="id")),
> >>          hashed=search(collection2, q="day_i:7", fl="id, day_i", sort="id asc", qt="/export"),
> >>          on="id")
> >>
> >> Joel Bernstein
> >> http://joelsolr.blogspot.com/
> >>
> >> On Wed, May 3, 2017 at 1:29 PM, Joel Bernstein 
> >> wrote:
> >>
> >> > Start off with just this expression:
> >> >
> >> > search(collection2,
> >> > q=*:*,
> >> > fl="a_s,b_s,c_s,d_s,e_s",
> >> > sort="a_s asc",
> >> > qt="/export")
> >> >
> >> > And then check the logs for exceptions.
> >> >
> >> > Joel Bernstein
> >> > http://joelsolr.blogspot.com/
> >> >
> >> > On Wed, May 3, 2017 at 12:35 PM, Zheng Lin Edwin Yeo <
> >> edwinye...@gmail.com
> >> > > wrote:
> >> >
> >> >> Hi Joel,
> >> >>
> >> >> I am getting this error after I changed to add qt=/export and removed
> >> >> the rows param. Do you know what could be the reason?
> >> >>
> >> >> {
> >> >>   "error":{
> >> >>     "metadata":[
> >> >>       "error-class","org.apache.solr.common.SolrException",
> >> >>       "root-error-class","org.apache.http.MalformedChunkCodingException"],
> >> >>     "msg":"org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk",
> >> >>     "trace":"org.apache.solr.common.SolrException: org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk\r\n\tat org.apache.solr.client.solrj.io.stream.TupleStream.lambda$writeMap$0(TupleStream.java:79)\r\n\tat org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)\r\n\tat org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:175)\r\n\tat org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)\r\n\tat org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:64)\r\n\tat
> >> 

Re: Indexing I/O errors and CorruptIndex messages

2017-05-04 Thread simon
I've pretty much ruled out system/hardware issues - the AWS instance has
been rebooted, and indexing to a core on a new and empty disk/file system
fails in the same way with a CorruptIndexException.
I can generally get indexing to complete by significantly dialing down the
number of indexer scripts running concurrently, but the duration goes up
proportionately.

-Simon


On Thu, Apr 27, 2017 at 9:26 AM, simon  wrote:

> Nope ... huge file system (600gb) only 50% full, and a complete index
> would be 80gb max.
>
> On Wed, Apr 26, 2017 at 4:04 PM, Erick Erickson 
> wrote:
>
>> Disk space issue? Lucene requires at least as much free disk space as
>> your index size. Note that the disk full issue will be transient, IOW
>> if you look now and have free space it still may have been all used up
>> but had some space reclaimed.
>>
>> Best,
>> Erick
>>
>> On Wed, Apr 26, 2017 at 12:02 PM, simon  wrote:
>> > reposting this as the problem described is happening again and there were
>> > no responses to the original email. Anyone?
>> > 
>> > I'm seeing an odd error during indexing for which I can't find any reason.
>> >
>> > The relevant solr log entry:
>> >
>> > 2017-03-24 19:09:35.363 ERROR (commitScheduler-30-thread-1) [ x:build0324] o.a.s.u.CommitTracker auto commit error...:java.io.EOFException: read past EOF: MMapIndexInput(path="/indexes/solrindexes/build0324/index/_4ku.fdx")
>> >  at org.apache.lucene.store.ByteBufferIndexInput.readByte(ByteBufferIndexInput.java:75)
>> > ...
>> > Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/indexes/solrindexes/build0324/index/_4ku.fdx")))
>> >  at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:451)
>> >  at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:140)
>> >
>> > followed within a few seconds by
>> >
>> > 2017-03-24 19:09:56.402 ERROR (commitScheduler-31-thread-1) [ x:build0324] o.a.s.u.CommitTracker auto commit error...:org.apache.solr.common.SolrException: Error opening new searcher
>> > at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1820)
>> > at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1931)
>> > ...
>> > Caused by: java.io.EOFException: read past EOF: MMapIndexInput(path="/indexes/solrindexes/build0324/index/_4ku.fdx")
>> > at org.apache.lucene.store.ByteBufferIndexInput.readByte(ByteBufferIndexInput.java:75)
>> >
>> > This error is repeated a few times as the indexing continued and further
>> > autocommits were triggered.
>> >
>> > I stopped the indexing process, made a backup snapshot of the index,
>> > restarted indexing at a checkpoint, and everything then completed
>> > without further incidents.
>> >
>> > I ran checkIndex on the saved snapshot and it reported no errors
>> > whatsoever. Operations on the complete index (including an optimize and
>> > several query scripts) have all been error-free.
>> >
>> > Some background:
>> >  Solr information from the beginning of the checkindex output:
>> >  ---
>> >  Opening index @ /indexes/solrindexes/build0324.bad/index
>> >
>> > Segments file=segments_9s numSegments=105 version=6.3.0
>> > id=7m1ldieoje0m6sljp7xocbz9l userData={commitTimeMSec=1490400514324}
>> >   1 of 105: name=_be maxDoc=1227144
>> > version=6.3.0
>> > id=7m1ldieoje0m6sljp7xocburb
>> > codec=Lucene62
>> > compound=false
>> > numFiles=14
>> > size (MB)=4,926.186
>> > diagnostics = {os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_45, java.vm.version=25.45-b02, lucene.version=6.3.0, mergeMaxNumSegments=-1, os.arch=amd64, java.runtime.version=1.8.0_45-b13, source=merge, mergeFactor=19, os.version=3.10.0-229.1.2.el7.x86_64, timestamp=1490380905920}
>> > no deletions
>> > test: open reader.OK [took 0.176 sec]
>> > test: check integrity.OK [took 37.399 sec]
>> > test: check live docs.OK [took 0.000 sec]
>> > test: field infos.OK [49 fields] [took 0.000 sec]
>> > test: field norms.OK [17 fields] [took 0.030 sec]
>> > test: terms, freq, prox...OK [14568108 terms; 612537186 terms/docs pairs; 801208966 tokens] [took 30.005 sec]
>> > test: stored fields...OK [150164874 total field count; avg 122.4 fields per doc] [took 35.321 sec]
>> > test: term vectors...OK [4804967 total term vector count; avg 3.9 term/freq vector fields per doc] [took 55.857 sec]
>> > test: docvalues...OK [4 docvalues fields; 0 BINARY; 1 NUMERIC; 2 SORTED; 0 SORTED_NUMERIC; 1 SORTED_SET] [took 0.954 sec]
>> > test: 

RE: Term no longer matches if PositionLengthAttr is set to two

2017-05-04 Thread Markus Jelsma
OK, we decided not to implement PositionLengthAttribute for now: either it is
being applied badly (though how could one even misapply that attribute?), or
Solr's QueryBuilder has a weird way of dealing with it... well.

Thanks, 
Markus
 
-Original message-
> From:Markus Jelsma 
> Sent: Monday 1st May 2017 12:33
> To: java-u...@lucene.apache.org; solr-user 
> Subject: RE: Term no longer matches if PositionLengthAttr is set to two
> 
> Hello again, apologies for cross-posting and having to get back to this 
> unsolved problem.
> 
> Initially I thought this was a problem I have with, or in, Lucene. Maybe not,
> so is this a problem in Solr? Has anyone here seen this problem before?
> 
> Many thanks,
> Markus
> 
> -Original message-
> > From:Markus Jelsma 
> > Sent: Tuesday 25th April 2017 13:40
> > To: java-u...@lucene.apache.org
> > Subject: Term no longer matches if PositionLengthAttr is set to two
> > 
> > Hello,
> > 
> > We have a decompounder and recently implemented the PositionLengthAttribute
> > in it and set it to 2 for a two-word compound such as drinkwater (drinking
> > water in Dutch). The decompounder runs both at index- and query-time on
> > Solr 6.5.0.
> > 
> > The problem is, q=content_nl:drinkwater no longer returns documents 
> > containing drinkwater when posLenAtt = 2 at query time.
> > 
> > This is Solr's debug output for drinkwater with posLenAtt = 2:
> > 
> > content_nl:drinkwater
> > content_nl:drinkwater
> > SynonymQuery(Synonym())
> > Synonym()
> > 
> > This is the output where i reverted the decompounder, thus a posLenAtt = 1:
> > 
> > content_nl:drinkwater
> > content_nl:drinkwater
> > SynonymQuery(Synonym(content_nl:drink 
> > content_nl:drinkwater)) content_nl:water
> > Synonym(content_nl:drink 
> > content_nl:drinkwater) content_nl:water
> > 
> > The indexed terms still have posLenAtt = 2, but having a posLenAtt = 2 at 
> > query time seems to be a problem.
> > 
> > Any thoughts on this issue? Is it a bug? Do I not understand
> > PositionLengthAttribute? Why does it affect term/document matching at
> > query time but not at index time?
> > 
> > Many thanks,
> > Markus
> > 
> > -
> > To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: java-user-h...@lucene.apache.org
> > 
> > 
> 
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
> 
> 


Re: in-place atomic updates for numeric docValue field

2017-05-04 Thread Shawn Heisey
On 5/4/2017 7:40 AM, Dan . wrote:
> I have a field like this:
>
> <field name="popularity" type="int" indexed="false" stored="false"
> docValues="true" multiValued="false"/>
>
> so I can do fast in-place atomic updates
>
> However if I do e.g.
>
> curl -H 'Content-Type: application/json'
> 'http://localhost:8983/solr/collection/update?commit=true'
> --data-binary '
> [{
>  "id":"my_id",
>  "popularity":{"set":null}
> }]'
>
> then I'd expect the popularity field to be removed, however it's not.

I'm not really sure how that "null" value will be interpreted.  It's
entirely possible that this won't actually delete the field.

I think we need a "delete" action for Atomic Updates, to entirely remove
the field regardless of what it currently contains.  There is "remove"
and "removeRegex", which MIGHT be enough, but I think delete would be
useful syntactic sugar.

Dan, can you give the following update JSON a try instead?  I am not
guaranteeing that this will do the job, but given the current
functionality, I think this is the option most likely to work:

{
 "id":"my_id",
 "popularity":{"removeRegex":".*"}
}
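
Wrapped into a complete request against Dan's endpoint, the attempt would be
the following (a sketch; as this thread shows, it is not guaranteed to clear
the docValue):

curl -H 'Content-Type: application/json' 'http://localhost:8983/solr/collection/update?commit=true' --data-binary '[{"id":"my_id","popularity":{"removeRegex":".*"}}]'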

Thanks,
Shawn



Re: in-place atomic updates for numeric docValue field

2017-05-04 Thread Emir Arnautovic

Hi Dan,

Remove does not make sense when it comes to in-place updates of
docValues - it has to have some value, so the only thing that you can do is
introduce some int value as null.
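
A minimal sketch of that sentinel approach, reusing Dan's request and
assuming -1 is reserved to mean "no value":

curl -H 'Content-Type: application/json' 'http://localhost:8983/solr/collection/update?commit=true' --data-binary '[{"id":"my_id","popularity":{"set":-1}}]'

Queries then have to treat the sentinel as unset explicitly, e.g. with
fq=popularity:[0 TO *].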


HTH,
Emir


On 04.05.2017 15:40, Dan . wrote:

Hi,

I have a field like this:

<field name="popularity" type="int" indexed="false" stored="false"
docValues="true" multiValued="false"/>

so I can do fast in-place atomic updates

However if I do e.g.

curl -H 'Content-Type: application/json'
'http://localhost:8983/solr/collection/update?commit=true'
--data-binary '
[{
  "id":"my_id",
  "popularity":{"set":null}
}]'

then I'd expect the popularity field to be removed, however it's not.

Is this a bug? Or is there a known workaround for this for in-place atomic
updates?

Cheers,
Dan



--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



in-place atomic updates for numeric docValue field

2017-05-04 Thread Dan .
Hi,

I have a field like this:

<field name="popularity" type="int" indexed="false" stored="false"
docValues="true" multiValued="false"/>

so I can do fast in-place atomic updates

However if I do e.g.

curl -H 'Content-Type: application/json'
'http://localhost:8983/solr/collection/update?commit=true'
--data-binary '
[{
 "id":"my_id",
 "popularity":{"set":null}
}]'

then I'd expect the popularity field to be removed, however it's not.

Is this a bug? Or is there a known workaround for this for in-place atomic
updates?

Cheers,
Dan


Re: SolrCloud - Connection to Solr lost

2017-05-04 Thread Bernd Fehling
After many, many tests it is "time to say goodbye" to SolrCloud, and
I believe it is not working and not useful at all. :-(


I reduced to only 3 servers (Solr and Zookeeper) and tried to
_only_ create a simple single collection, but even this fails.

bin/solr create -c base -d /home/solr/solr/solr/server/solr/configsets/base_configs/


Connecting to ZooKeeper at solrmn01:2181,solrmn02:2181,solrmn03:2181 ...
INFO  - 2017-05-04 13:56:12.889; 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at
solrmn01:2181,solrmn02:2181,solrmn03:2181 ready
Uploading /home/solr/solr/solr/server/solr/configsets/base_configs/conf for 
config base to ZooKeeper at solrmn01:2181,solrmn02:2181,solrmn03:2181

Creating new collection 'base' using command:
http://solrmn01.ub.de:8983/solr/admin/collections?action=CREATE&name=base&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=base

INFO  - 2017-05-04 13:57:01.341; 
org.apache.http.impl.client.DefaultRequestDirector; I/O exception 
(org.apache.http.NoHttpResponseException)
caught when processing request to {}->http://solrmn01.ub.de:8983: The target 
server failed to respond
INFO  - 2017-05-04 13:57:01.343; 
org.apache.http.impl.client.DefaultRequestDirector; Retrying request to 
{}->http://solrmn01.ub.de:8983

ERROR: Connection refused (Connection refused)


Regards
Bernd


Am 04.05.2017 um 10:13 schrieb Bernd Fehling:
> Hi list,
> next problem with SolrCloud.
> Situation:
> - 5 x ZooKeeper, fresh and clean, on 5 servers
> - 5 x Solr 6.5.1, fresh and clean, on 5 servers
> - start of Zookeepers
> - upload of configset with Solr to Zookeepers
> - start of only one Solr instance port 8983 on each server
> - With Solr Admin GUI check that all Solr instances are up and in Zookeeper
> - click on Collection -> Add Collection
> - fill in "name", "config set" (the uploaded config), numShards 5,
>   replicationFactor 1
> - click on "Add Collection"
> 
> The response is a red banner with "Connection to Solr lost" and
> "Please check the Solr instance".
> 
> "bin/solr status" says that _all_ Solr instances on _all_ servers are gone.
> 
> What am I doing wrong?
> 
> I just want to set up 5 ZooKeepers on 5 servers, have 5 shards on 5 servers,
> and create a new collection with the Admin GUI.
> Is this at all possible?
> 
> Regards
> Bernd
> 


Re: solr 6.3.0 monitoring

2017-05-04 Thread Emir Arnautovic

Hi Satya,

In order to have a more complete picture of your production (host, JVM,
ZK, Solr metrics), I would suggest using one of the monitoring solutions.
One such solution is Sematext's SPM: http://sematext.com/spm/.


It is much easier if you are up for a SaaS setup, but we also provide
on-premise installation.


HTH,
Emir


On 03.05.2017 21:36, Satya Marivada wrote:

Hi,

We stood up Solr 6.3.0 with external ZooKeeper 3.4.9. We are moving to
production and setting up monitoring for Solr, to check that all cores of a
collection are up. Similarly, any pointers towards monitoring the entire
collection, or any other suggestions, would be useful.

For ZooKeeper, we are planning to use the MNTR command to check on its status.

Thanks,
Satya



--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
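
For reference, the MNTR check Satya mentions is a standard ZooKeeper
four-letter command; a minimal sketch against a hypothetical host and port:

echo mntr | nc localhost 2181

It prints one metric per line (zk_server_state, zk_avg_latency, ...), which
is easy to scrape from a monitoring script.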



Re: logging in SolrCloud

2017-05-04 Thread Shalin Shekhar Mangar
I'm not a fan of auto-archiving myself and we definitely shouldn't be
doing it before checking if an instance is running. Can you please
open an issue?

On Thu, May 4, 2017 at 12:52 PM, Bernd Fehling
 wrote:
> Hi Shalin,
>
> sounds like all or nothing method :-)
>
> How about a short check if an instance is still running
> and using that log file before moving it to archived?
>
> Regards
> Bernd
>
> Am 04.05.2017 um 09:07 schrieb Shalin Shekhar Mangar:
>> Yes this is expected. On startup old console logs and gc logs are
>> moved into the archived folder by default. This can be disabled by
>> setting SOLR_LOG_PRESTART_ROTATION=false as a environment variable
>> (search for its usage in bin/solr) but it will also disable all log
>> rotation.
>>
>> On Wed, May 3, 2017 at 5:59 PM, Bernd Fehling
>>  wrote:
>>> While looking into SolrCloud I noticed that my logging
>>> gets moved to archived dir by starting a new node.
>>>
>>> E.g.:
>>> bin/solr start -cloud -p 8983
>>> -> server/logs/ has solr-8983-console.log
>>>
>>> bin/solr start -cloud -p 7574
>>> -> solr-8983-console.log is moved to server/logs/archived/
>>> -> server/logs/ has solr-7574-console.log
>>>
>>> Is this how it should be or do I have a misconfig?
>>>
>>> Regards
>>> Bernd
>>
>>
>>



-- 
Regards,
Shalin Shekhar Mangar.


SolrCloud - Connection to Solr lost

2017-05-04 Thread Bernd Fehling
Hi list,
next problem with SolrCloud.
Situation:
- 5 x ZooKeeper, fresh and clean, on 5 servers
- 5 x Solr 6.5.1, fresh and clean, on 5 servers
- start of Zookeepers
- upload of configset with Solr to Zookeepers
- start of only one Solr instance port 8983 on each server
- With Solr Admin GUI check that all Solr instances are up and in Zookeeper
- click on Collection -> Add Collection
- fill in "name", "config set" (the uploaded config), numShards 5,
  replicationFactor 1
- click on "Add Collection"

The response is a red banner with "Connection to Solr lost" and
"Please check the Solr instance".

"bin/solr status" says that _all_ Solr instances on _all_ servers are gone.

What am I doing wrong?

I just want to set up 5 ZooKeepers on 5 servers, have 5 shards on 5 servers,
and create a new collection with the Admin GUI.
Is this at all possible?

Regards
Bernd
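
As a cross-check, the same collection can be created without the Admin GUI
through the Collections API; a sketch using the numbers above (hypothetical
host and names):

curl "http://solrmn01:8983/solr/admin/collections?action=CREATE&name=testcoll&numShards=5&replicationFactor=1&maxShardsPerNode=1&collection.configName=myconf"

If this also takes down the Solr instances, the problem is in the cluster
itself rather than in the GUI.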


Re: logging in SolrCloud

2017-05-04 Thread Bernd Fehling
Hi Shalin,

sounds like all or nothing method :-)

How about a short check if an instance is still running
and using that log file before moving it to archived?

Regards
Bernd

Am 04.05.2017 um 09:07 schrieb Shalin Shekhar Mangar:
> Yes this is expected. On startup old console logs and gc logs are
> moved into the archived folder by default. This can be disabled by
> setting SOLR_LOG_PRESTART_ROTATION=false as a environment variable
> (search for its usage in bin/solr) but it will also disable all log
> rotation.
> 
> On Wed, May 3, 2017 at 5:59 PM, Bernd Fehling
>  wrote:
>> While looking into SolrCloud I noticed that my logging
>> gets moved to archived dir by starting a new node.
>>
>> E.g.:
>> bin/solr start -cloud -p 8983
>> -> server/logs/ has solr-8983-console.log
>>
>> bin/solr start -cloud -p 7574
>> -> solr-8983-console.log is moved to server/logs/archived/
>> -> server/logs/ has solr-7574-console.log
>>
>> Is this how it should be or do I have a misconfig?
>>
>> Regards
>> Bernd
> 
> 
> 


Re: logging in SolrCloud

2017-05-04 Thread Bernd Fehling
Hi Erik,

about 1>
I have no core.properties at all, just a clean new installation.
- 5 x Zookeeper on 5 different server
- 5 x Solr 6.5.1 on 5 different server
- uploaded a configset with "bin/solr zk upconfig ..."
- started first Solr node with port 8983 of first server
- started second Solr node with port 7574 of first server
No core, no cluster, no collection, no nodes.

about 2>
Sysvars: just JAVA_HOME in .bashrc, and in solr.in.sh: SOLR_JAVA_HOME,
SOLR_STOP_WAIT, SOLR_HEAP, ZK_HOST, SOLR_HOST.


I tried "bin/solr -e cloud" and have in example/cloud/node1/ and node2/
separate logs directories for each node. So I thought of a misconfig
at my fresh clean Solr installation.
But if there is no core or node jet it makes sense to write logs into
a general single directory, but don't move existing logs to archived.

Regards,
Bernd

Am 04.05.2017 um 04:54 schrieb Erick Erickson:
> Bernd:
> 
> Do check two things:
> 
> 1> your core.properties files. Do you have properties set in the
> core.properties files that could possibly confuse things?
> 
> 2> when you start your Solr instances, do you define any sysvars that
> could confuse the archive directories?
> 
> These are wild shots in the dark mind you...
> 
> Best,
> Erick
> 
> On Wed, May 3, 2017 at 7:35 PM, Zheng Lin Edwin Yeo
>  wrote:
>> Which version of Solr are you using?
>>
>> I am using Solr 6.4.2; it seems that both nodes are trying to write to the
>> same archived file.
>>
>>
>> Exception in thread "main" java.nio.file.FileSystemException:
>> C:\edwin\solr\server\logs\solr_gc.log.0.current ->
>> C:\edwin\solr\server\logs\archived\solr_gc.log.0.current: The process
>> cannot access the file because it is being used by another process.
>>
>>
>> Regards,
>> Edwin
>>
>>
>> On 3 May 2017 at 23:42, Erick Erickson  wrote:
>>
>>> That does look weird. Does the 7574 console log really get archived or
>>> is the 8983 console log archived twice? If 7574 doesn't get moved to
>>> the archive, this sounds like a JIRA, I'd go ahead and raise it.
>>>
>>> Actually either way I think it needs a JIRA. Either the wrong log is
>>> getting moved or the message needs to be fixed.
>>>
>>> Best,
>>> Erick
>>>
>>> On Wed, May 3, 2017 at 5:29 AM, Bernd Fehling
>>>  wrote:
 While looking into SolrCloud I noticed that my logging
 gets moved to archived dir by starting a new node.

 E.g.:
 bin/solr start -cloud -p 8983
 -> server/logs/ has solr-8983-console.log

 bin/solr start -cloud -p 7574
 -> solr-8983-console.log is moved to server/logs/archived/
 -> server/logs/ has solr-7574-console.log

 Is this how it should be or do I have a misconfig?

 Regards
 Bernd
>>>

-- 
*
Bernd FehlingBielefeld University Library
Dipl.-Inform. (FH)LibTec - Library Technology
Universitätsstr. 25  and Knowledge Management
33615 Bielefeld
Tel. +49 521 106-4060   bernd.fehling(at)uni-bielefeld.de

BASE - Bielefeld Academic Search Engine - www.base-search.net
*


Re: logging in SolrCloud

2017-05-04 Thread Shalin Shekhar Mangar
Yes this is expected. On startup old console logs and gc logs are
moved into the archived folder by default. This can be disabled by
setting SOLR_LOG_PRESTART_ROTATION=false as a environment variable
(search for its usage in bin/solr) but it will also disable all log
rotation.
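
For reference, a sketch of the override as a solr.in.sh entry (the variable
is read by bin/solr):

# keep pre-existing console/GC logs in place on startup;
# note this also disables bin/solr's log rotation entirely
SOLR_LOG_PRESTART_ROTATION=false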

On Wed, May 3, 2017 at 5:59 PM, Bernd Fehling
 wrote:
> While looking into SolrCloud I noticed that my logging
> gets moved to archived dir by starting a new node.
>
> E.g.:
> bin/solr start -cloud -p 8983
> -> server/logs/ has solr-8983-console.log
>
> bin/solr start -cloud -p 7574
> -> solr-8983-console.log is moved to server/logs/archived/
> -> server/logs/ has solr-7574-console.log
>
> Is this how it should be or do I have a misconfig?
>
> Regards
> Bernd



-- 
Regards,
Shalin Shekhar Mangar.


Re: Joining more than 2 collections

2017-05-04 Thread Zheng Lin Edwin Yeo
Hi Joel,

I have managed to get the Join to work, but so far it is only working when
I use qt="/select". It is not working when I use qt="/export".

For the display of the fields, is there a way to list them in
the order that I want?
Currently, the display is quite random, and I can get a field in
collection1, followed by a field in collection3, then collection1 again,
and then collection2.

It would be good if we could arrange the fields to display in the order
that we want.

Regards,
Edwin



On 4 May 2017 at 09:56, Zheng Lin Edwin Yeo  wrote:

> Hi Joel,
>
> It works when I started off with just one expression.
>
> Could it be that the data size is too big for export after the join, which
> causes the error?
>
> Regards,
> Edwin
>
> On 4 May 2017 at 02:53, Joel Bernstein  wrote:
>
>> I was just testing with the query below and it worked for me. Some of the
>> error messages I was getting with the syntax were not what I was expecting
>> though, so I'll look into the error handling. But the joins do work when
>> the syntax is correct. The query below is joining to the same collection
>> three times, but the mechanics are exactly the same joining three different
>> tables. In this example each join narrows down the result set.
>>
>> hashJoin(parallel(collection2,
>>                   workers=3,
>>                   sort="id asc",
>>                   innerJoin(search(collection2, q="*:*", fl="id", sort="id asc", qt="/export", partitionKeys="id"),
>>                             search(collection2, q="year_i:42", fl="id, year_i", sort="id asc", qt="/export", partitionKeys="id"),
>>                             on="id")),
>>          hashed=search(collection2, q="day_i:7", fl="id, day_i", sort="id asc", qt="/export"),
>>          on="id")
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Wed, May 3, 2017 at 1:29 PM, Joel Bernstein 
>> wrote:
>>
>> > Start off with just this expression:
>> >
>> > search(collection2,
>> > q=*:*,
>> > fl="a_s,b_s,c_s,d_s,e_s",
>> > sort="a_s asc",
>> > qt="/export")
>> >
>> > And then check the logs for exceptions.
>> >
>> > Joel Bernstein
>> > http://joelsolr.blogspot.com/
>> >
>> > On Wed, May 3, 2017 at 12:35 PM, Zheng Lin Edwin Yeo <
>> edwinye...@gmail.com
>> > > wrote:
>> >
>> >> Hi Joel,
>> >>
>> >> I am getting this error after I changed to add qt=/export and removed
>> >> the rows param. Do you know what could be the reason?
>> >>
>> >> {
>> >>   "error":{
>> >>     "metadata":[
>> >>       "error-class","org.apache.solr.common.SolrException",
>> >>       "root-error-class","org.apache.http.MalformedChunkCodingException"],
>> >>     "msg":"org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk",
>> >>     "trace":"org.apache.solr.common.SolrException: org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk\r\n\tat org.apache.solr.client.solrj.io.stream.TupleStream.lambda$writeMap$0(TupleStream.java:79)\r\n\tat org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)\r\n\tat org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:175)\r\n\tat org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)\r\n\tat org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:64)\r\n\tat org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)\r\n\tat org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)\r\n\tat org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\r\n\tat org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\r\n\tat org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\r\n\tat org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\r\n\tat org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\r\n\tat org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:732)\r\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)\r\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\r\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)\r\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\r\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(Scoped
>> >> 

Re: logging in SolrCloud

2017-05-04 Thread Bernd Fehling
Hi Edwin,

I'm using Solr 6.5.1


Am 04.05.2017 um 04:35 schrieb Zheng Lin Edwin Yeo:
> Which version of Solr are you using?
> 
> I am using Solr 6.4.2; it seems that both nodes are trying to write to the
> same archived file.
> 
> 
> Exception in thread "main" java.nio.file.FileSystemException:
> C:\edwin\solr\server\logs\solr_gc.log.0.current ->
> C:\edwin\solr\server\logs\archived\solr_gc.log.0.current: The process
> cannot access the file because it is being used by another process.
> 
> 
> Regards,
> Edwin
> 
> 
> On 3 May 2017 at 23:42, Erick Erickson  wrote:
> 
>> That does look weird. Does the 7574 console log really get archived or
>> is the 8983 console log archived twice? If 7574 doesn't get moved to
>> the archive, this sounds like a JIRA, I'd go ahead and raise it.
>>
>> Actually either way I think it needs a JIRA. Either the wrong log is
>> getting moved or the message needs to be fixed.
>>
>> Best,
>> Erick
>>
>> On Wed, May 3, 2017 at 5:29 AM, Bernd Fehling
>>  wrote:
>>> While looking into SolrCloud I noticed that my logging
>>> gets moved to archived dir by starting a new node.
>>>
>>> E.g.:
>>> bin/solr start -cloud -p 8983
>>> -> server/logs/ has solr-8983-console.log
>>>
>>> bin/solr start -cloud -p 7574
>>> -> solr-8983-console.log is moved to server/logs/archived/
>>> -> server/logs/ has solr-7574-console.log
>>>
>>> Is this how it should be or do I have a misconfig?
>>>
>>> Regards
>>> Bernd
>>
> 

-- 
*
Bernd FehlingBielefeld University Library
Dipl.-Inform. (FH)LibTec - Library Technology
Universitätsstr. 25  and Knowledge Management
33615 Bielefeld
Tel. +49 521 106-4060   bernd.fehling(at)uni-bielefeld.de

BASE - Bielefeld Academic Search Engine - www.base-search.net
*