Solr logging in local time

2015-11-16 Thread tedsolr
Is it possible to define a timezone for Solr so that logging occurs in local
time? My logs appear to be in UTC. Due to daylight savings, I don't think
defining a GMT offset in the log4j.properties files will work.

thanks! Ted
v. 5.2.1



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-logging-in-local-time-tp4240369.html
Sent from the Solr - User mailing list archive at Nabble.com.
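For what it's worth, one approach to test (unverified against the log4j bundled with Solr 5.2.1): log4j 1.2.16+ includes EnhancedPatternLayout, whose %d conversion accepts a timezone argument, and a named zone such as America/New_York tracks daylight savings automatically, unlike a fixed GMT offset. A sketch for log4j.properties, where the appender name "file" is an assumption:

```properties
# Illustrative sketch only -- the appender name and pattern are assumptions.
log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601}{America/New_York} %-5p %c - %m%n
```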


Expand Component Fields Response

2015-11-16 Thread Sanders, Marshall (AT - Atlanta)
Is it possible to specify a separate set of fields to return from the expand 
component which is different from the standard fl parameter?  Something like 
this:

fl=fielda=fieldb

Our current use case means we actually only care about the numFound from the 
expand component and not any of the actual fields.  We could also use a facet 
for the field we're collapsing on, but this means mapping from the field we 
collapsed on to the different facets and isn't very elegant, and we also have 
to ask for a large facet.limit to make sure that we get the appropriate counts 
back.  This is pretty poor for high-cardinality fields.  The alternative is the 
current approach, where we ask for the expand component and get tons of 
information back that we don't care about.

Thanks for any help!

Marshall Sanders
Technical Lead - Software Engineer
Autotrader.com
404-568-7130
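If it helps, the expand component does have an expand.rows parameter controlling how many documents come back per expanded group; whether setting it to 0 leaves only the per-group numFound on 5.x is worth testing. A sketch of such a request built with plain Java — the field and collection names are made up:

```java
// Sketch only: "fielda"/"group_s"/"collection1" are made-up names, and whether
// expand.rows=0 is honored on your Solr version needs testing.
public class ExpandQuery {
    static String buildQuery(String base, String collapseField) {
        return base + "/select"
                + "?q=*:*"
                + "&fq=%7B!collapse%20field%3D" + collapseField + "%7D" // {!collapse field=...}
                + "&expand=true"
                + "&expand.rows=0"   // ask for zero docs per expanded group
                + "&fl=fielda";      // keep the main fl small too
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("http://localhost:8983/solr/collection1", "group_s"));
    }
}
```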



Re: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Erick Erickson
Having 6 warming searchers is an anti-pattern. What it means is that commits
are happening faster than your searchers can be opened. There is _no_ good
reason that I know of for changing it from 2; having changed it in
solrconfig.xml to 6 almost always indicates an improper configuration.

Places to look:
1> that commits are happening too often, especially if the commits are
being sent from a client. If commits aren't being sent by a client, then
look at autoCommit and softAutoCommit in solrconfig.xml (if you can).

2> excessive autowarm settings, again in solrconfig.xml.

If, as you say, all of Solr is a black box, then talk to the Sitecore
folks; on the surface, Solr is just poorly configured.

Best,
Erick
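As a sketch of what those solrconfig.xml settings look like (the values are illustrative only, not recommendations for any particular install):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit at most every 60s -->
    <openSearcher>false</openSearcher> <!-- hard commits don't open searchers -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>30000</maxTime>           <!-- new searcher at most every 30s -->
  </autoSoftCommit>
</updateHandler>
```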

On Mon, Nov 16, 2015 at 4:33 AM, Adrian Liew  wrote:
> Hi Emir,
>
> I am working with a third party platform, Sitecore. The product is a black 
> box that encapsulates the internal workings of Solr queries and so on. If 
> there are any questions you have with regard to the below, let me know. It 
> will be useful for me to communicate what could cause the issues below.
>
> Regards,
> Adrian
>
> -Original Message-
> From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com]
> Sent: Monday, November 16, 2015 4:47 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>
> Hi Adrian,
> Can you give us a bit more detail about the warmup queries you use and the 
> test that you are running when the error occurs.
>
> Thanks,
> Emir
>
> On 16.11.2015 08:40, Adrian Liew wrote:
>> Hi there,
>>
>> Would like to get some opinions on the errors encountered below. I have 
>> currently set up a SolrCloud cluster of 3 servers (each server hosting a Solr 
>> instance and a Zookeeper instance).
>>
>> I am encountering the errors below in the logs:
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
>> opening new searcher. exceeded limit of maxWarmingSearchers=6, try again 
>> later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
>> opening new searcher. exceeded limit of maxWarmingSearchers=6, try again 
>> later.
>> Monday, November 16, 2015 3:22:54 PM WARN null
>> DistributedUpdateProcessor Error sending update to
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM
>> WARN null DistributedUpdateProcessor Error sending update to
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM
>> WARN null DistributedUpdateProcessor Error sending update to
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM
>> WARN null DistributedUpdateProcessor Error sending update to
>> http://172.18.111.112:8983/solr
>>
>> 11/16/2015, 3:17:09 PM
>>
>> WARN
>>
>> null
>>
>> DistributedUpdateProcessor
>>
>> Error sending update to http://172.18.111.112:8983/solr
>>
>> 11/16/2015, 3:17:09 PM
>>
>> WARN
>>
>> null
>>
>> DistributedUpdateProcessor
>>
>> Error sending update to http://172.18.111.112:8983/solr
>>
>> 11/16/2015, 3:22:26 PM
>>
>> ERROR
>>
>> null
>>
>> SolrCmdDistributor
>>
>> org.apache.solr.client.solrj.SolrServerException: Timeout occured
>> while waiting response from server at:
>> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1
>>
>>
>>
>> The main errors are Timeout occurred exceptions and maxWarmingSearchers 
>> exceeded. Is anyone able to advise, or has anyone experienced something 
>> similar in their SolrCloud setup?
>>
>> Regards,
>> Adrian
>>
>>
>>
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management Solr & 
> Elasticsearch Support * http://sematext.com/
>


Re: StringIndexOutOfBoundsException using spellcheck and synonyms

2015-11-16 Thread Scott Stults
Hi Derek,

Could you please add what version of Solr you see this in? I didn't see a
related Jira, so this might warrant a new one.


k/r,
Scott

On Sun, Nov 15, 2015 at 11:01 PM, Derek Poh  wrote:

> Hi
> I am using spellcheck and synonyms. I am getting
> "java.lang.StringIndexOutOfBoundsException: String index out of range: -1"
> for some keywords.
>
> I think I managed to narrow it down to the likely cause.
> I have this line of entry in the synonyms.txt file,
>
> body spray,cologne,parfum,parfume,perfume,purfume,toilette
>
> When I search for 'cologne' it will hit the exception.
> If I remove the 'body spray' from the line, I will not hit the exception.
>
> cologne,parfum,parfume,perfume,purfume,toilette
>
> It seems like it could be due to multi-term entries in the synonyms file, but
> there are some keywords with multiple terms in synonyms that do not have the
> issue.
> This line has a multi-term entry "paint ball" in it; when I search for paintball
> or paintballs it does not hit the exception.
>
> paintball,paintballs,paint ball
>
>
> Any advice how can I resolve this issue?
>
>
> The field used for spellcheck:
>
> <field name="..." type="..." indexed="true" stored="true" multiValued="true"/>
>
> <fieldType name="..." class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="..."/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     ...
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="..."/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     ...
>   </analyzer>
> </fieldType>
>
>
> Exception stacktrace:
> 2015-11-16T07:06:43,055 - ERROR [qtp744979286-193443:SolrException@142] -
> null:java.lang.StringIndexOutOfBoundsException: String index out of range:
> -1
> at
> java.lang.AbstractStringBuilder.replace(AbstractStringBuilder.java:789)
> at java.lang.StringBuilder.replace(StringBuilder.java:266)
> at
> org.apache.solr.spelling.SpellCheckCollator.getCollation(SpellCheckCollator.java:235)
> at
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:92)
> at
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
> at
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
> at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at org.eclipse.jetty.io
> .AbstractConnection$2.run(AbstractConnection.java:540)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:722)
>
> Derek
>

Re: Solr 5.3 spellcheck always return lower case?

2015-11-16 Thread Alessandro Benedetti
Hi QuestionNews,
Can you send us the schema.xml and the field involved in the problem?
I agree with Erick: are you sure your field type doesn't do any
lowercasing?

Cheers

On 13 November 2015 at 13:39, QuestionNews .  wrote:

> The data displayed when doing a query is correct case. The fieldType
> doesn't do any case manipulation and the requestHandler/searchComponent
> don't have any settings declared that I can see.
>
> Why is my spellcheck returning results that are all lower case?
>
> Is there a way for me to stop this from happening or have spellcheck return
> an additional field.
>
> Thanks for your help and pardon me if I am not using this mailing list
> properly.  It is my first time utilizing it.
>



-- 
--

Benedetti Alessandro
Visiting card : http://about.me/alessandro_benedetti

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England


Re: Boost query at search time according set of roles with least performance impact

2015-11-16 Thread Alessandro Benedetti
Is it the readability that is stopping you from using the bq parameter with
all your roles?
A custom function is of course a way, but why do you think it is going to
satisfy your requirement better than bq?

Cheers

On 13 November 2015 at 18:41, Andrea Open Source <
andrearoggerone.o...@gmail.com> wrote:

> Hi Alessandro,
> Thanks for answering. Unfortunately bq is not enough as I have several
> roles that I need to score in different ways. I was thinking of building a
> custom function that reads the weights of the roles from solr config and
> applies them at runtime. I am a bit concerned about performance though and
> that's the reason behind my question. What's your thought about such
> solution?
>
> Kind Regards,
> Andrea Roggerone
>
> > On 09/nov/2015, at 12:29, Alessandro Benedetti 
> wrote:
> >
> > ehehe your request is kinda delicate :
> > 1)  I can't store the
> > payload at index time
> > 2) Passing all the weights at query time is not an option
> >
> > So you seem to exclude all the possible solutions ...
> > Anyway, just thinking loud, have you tried the edismax query parser and
> the
> > boost query feature?
> >
> > 1) the first strategy is the one you would prefer to avoid :
> > you define the AuthorRole, then you use the Boost Query parameter to
> boost
> > differently your roles :
> > AuthorRole:"ADMIN"^100 AuthorRole:"ARCHITECT"^50 ect ...
> > If you have 20 roles , the query could be not readable.
> >
> > 2) you index the "weight" for the role in the original document.
> > Then you use a Boost Function according to your requirement (using the
> > "weight" field).
> >
> > Hope this helps,
> >
> > Cheers
> >
> > e.g. from the Solr wiki
> > The bq (Boost Query) Parameter
> >
> > The bq parameter specifies an additional, optional, query clause that
> will
> > be added to the user's main query to influence the score. For example, if
> > you wanted to add a relevancy boost for recent documents:
> > q=cheese
> > bq=date:[NOW/DAY-1YEAR TO NOW/DAY]
> >
> > You can specify multiple bq parameters. If you want your query to be
> parsed
> > as separate clauses with separate boosts, use multiple bq parameters.
> > The bf (Boost Functions) Parameter
> >
> > The bf parameter specifies functions (with optional boosts) that will be
> > used to construct FunctionQueries which will be added to the user's main
> > query as optional clauses that will influence the score. Any function
> > supported natively by Solr can be used, along with a boost value. For
> > example:
> > recip(rord(myfield),1,2,3)^1.5
> >
> > Specifying functions with the bf parameter is essentially just shorthand
> > for using the bq param combined with the {!func} parser.
> >
> > For example, if you want to show the most recent documents first, you
> could
> > use either of the following:
> > bf=recip(rord(creationDate),1,1000,1000)
> >  ...or...
> > bq={!func}recip(rord(creationDate),1,1000,1000)
> >
> > On 6 November 2015 at 16:44, Andrea Roggerone <
> > andrearoggerone.o...@gmail.com> wrote:
> >
> >> Hi all,
> >> I am working on a mechanism that applies additional boosts to documents
> >> according to the role covered by the author. For instance we have
> >>
> >> CEO|5 Architect|3 Developer|1 TeamLeader|2
> >>
> >> keeping in mind that an author could cover multiple roles (e.g. for a
> >> design document, a Team Leader could be also a Developer).
> >>
> >> I am aware that is possible to implement a function that leverages
> >> payloads, however the weights need to be configurable so I can't store
> the
> >> payload at index time.
> >> Passing all the weights at query time is not an option as we have more
> than
> >> 20 roles and query readability and performance would be heavily
> affected.
> >>
> >> Do we have any "out of the box mechanism" in Solr to implement the
> >> described behavior? If not, what other options do we have?
> >
> >
> >
> > --
> > --
> >
> > Benedetti Alessandro
> > Visiting card : http://about.me/alessandro_benedetti
> >
> > "Tyger, tyger burning bright
> > In the forests of the night,
> > What immortal hand or eye
> > Could frame thy fearful symmetry?"
> >
> > William Blake - Songs of Experience -1794 England
>



-- 
--

Benedetti Alessandro
Visiting card : http://about.me/alessandro_benedetti

"Tyger, tyger burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?"

William Blake - Songs of Experience -1794 England
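As an illustration of keeping the weights configurable while still using bq, the clauses can be generated from a role-to-weight map loaded from configuration rather than hand-written into each query. The field name "AuthorRole" follows the example in the thread above; the rest is a sketch:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RoleBoosts {
    // Turn a configurable role->weight map into edismax bq clauses.
    static List<String> bqClauses(Map<String, Integer> weights) {
        List<String> clauses = new ArrayList<>();
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            clauses.add("AuthorRole:\"" + e.getKey() + "\"^" + e.getValue());
        }
        return clauses;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("CEO", 5);
        weights.put("Architect", 3);
        weights.put("TeamLeader", 2);
        weights.put("Developer", 1);
        // Each clause becomes one bq parameter on the request.
        System.out.println(bqClauses(weights));
    }
}
```

This keeps the query itself readable (the clauses are built once per request from config), though it does not change the scoring cost of evaluating 20 bq clauses.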


Re: Solr logging in local time

2015-11-16 Thread Walter Underwood
I’m sure it is possible, but think twice before logging in local time. Do you 
really want one day with 23 hours and one day with 25 hours each year?

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)
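A quick stdlib illustration of the point (nothing Solr-specific): a named timezone shifts its offset across the DST boundary, while the fixed GMT offset a log pattern would hard-code does not:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class DstOffsetDemo {
    public static void main(String[] args) {
        ZoneId eastern = ZoneId.of("America/New_York"); // named zone: tracks DST
        ZoneOffset fixed = ZoneOffset.of("-05:00");     // fixed offset: never changes

        Instant winter = Instant.parse("2015-01-15T12:00:00Z");
        Instant summer = Instant.parse("2015-07-15T12:00:00Z");

        System.out.println(winter.atZone(eastern).getOffset()); // -05:00
        System.out.println(summer.atZone(eastern).getOffset()); // -04:00 during DST
        System.out.println(summer.atOffset(fixed).getOffset()); // still -05:00
    }
}
```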


> On Nov 16, 2015, at 8:04 AM, tedsolr  wrote:
> 
> Is it possible to define a timezone for Solr so that logging occurs in local
> time? My logs appear to be in UTC. Due to daylight savings, I don't think
> defining a GMT offset in the log4j.properties files will work.
> 
> thanks! Ted
> v. 5.2.1
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-logging-in-local-time-tp4240369.html
> Sent from the Solr - User mailing list archive at Nabble.com.



Re: CloudSolrCloud - Commit returns but not all data is visible (occasionally)

2015-11-16 Thread Erick Erickson
maxWarmingSearchers is not going to help, and in fact may indicate your problem.

I suspect that your autowarming is taking a long time _and_ the commit
call is timing out rather than returning. Your Solr log should show you
the autowarm times and help figure out whether this is on the right track
or not.

Best,
Erick
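For reference, the autowarm times mentioned here are driven by the autowarmCount attributes on the caches in solrconfig.xml; a sketch (sizes and counts are illustrative, not recommendations):

```xml
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<!-- Higher autowarmCount replays more cached entries into each new searcher,
     lengthening warmup and making commits appear to "hang". -->
```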

On Mon, Nov 16, 2015 at 8:04 AM, adfel70  wrote:
> Hi,
> I am using Solr 5.2.1 with the solrj client 5.2.1. (I know CloudSolrCloud is
> deprecated)
>
> I am running the command:
> *cloudSolrServer.commit(false, true, true)*
> the parameters are: waitFlush (false), waitSearcher (true), softCommit
> (true)
>
> The problem is that the client returns as if it has already finished committing
> and the searchers have been refreshed, when actually not all the data is visible
> to users (it takes several minutes more).
>
> The problem is that I have to wait till the searchers are up and the new
> data is visible for users before I finish my process (and I don't want to
> put 'sleep' in my code :-))
>
> I have tried increasing the maxWarmingSearchers to 5 - it helped but the
> problem still occasionally happens.
>
> What could I configure more?
>
>
> Thanks a lot,
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/CloudSolrCloud-Commit-returns-but-not-all-data-is-visible-occasionally-tp4240368.html
> Sent from the Solr - User mailing list archive at Nabble.com.


CloudSolrCloud - Commit returns but not all data is visible (occasionally)

2015-11-16 Thread adfel70
Hi,
I am using Solr 5.2.1 with the solrj client 5.2.1. (I know CloudSolrCloud is
deprecated)

I am running the command:
*cloudSolrServer.commit(false, true, true)*
the parameters are: waitFlush (false), waitSearcher (true), softCommit
(true)

The problem is that the client returns as if it has already finished committing
and the searchers have been refreshed, when actually not all the data is visible
to users (it takes several minutes more).

The problem is that I have to wait till the searchers are up and the new
data is visible for users before I finish my process (and I don't want to
put 'sleep' in my code :-))

I have tried increasing the maxWarmingSearchers to 5 - it helped but the
problem still occasionally happens.

What could I configure more?


Thanks a lot,



--
View this message in context: 
http://lucene.472066.n3.nabble.com/CloudSolrCloud-Commit-returns-but-not-all-data-is-visible-occasionally-tp4240368.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Adrian Liew
Hi Emir,

I am working with a third party platform, Sitecore. The product is a black box 
that encapsulates the internal workings of Solr queries and so on. If there are 
any questions you have with regard to the below, let me know. It will be 
useful for me to communicate what could cause the issues below.

Regards,
Adrian

-Original Message-
From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com] 
Sent: Monday, November 16, 2015 4:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Cloud 5.3.0 Errors in Logs

Hi Adrian,
Can you give us a bit more detail about the warmup queries you use and the test 
that you are running when the error occurs.

Thanks,
Emir

On 16.11.2015 08:40, Adrian Liew wrote:
> Hi there,
>
> Would like to get some opinions on the errors encountered below. I have 
> currently set up a SolrCloud cluster of 3 servers (each server hosting a Solr 
> instance and a Zookeeper instance).
>
> I am encountering the errors below in the logs:
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
> limit of maxWarmingSearchers=6,​ try again later.
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
> limit of maxWarmingSearchers=6,​ try again later.
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
> limit of maxWarmingSearchers=6,​ try again later.
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
> limit of maxWarmingSearchers=6,​ try again later.
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at 
> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
> opening new searcher. exceeded limit of maxWarmingSearchers=6,​ try again 
> later.
> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at 
> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
> opening new searcher. exceeded limit of maxWarmingSearchers=6,​ try again 
> later.
> Monday, November 16, 2015 3:22:54 PM WARN null 
> DistributedUpdateProcessor Error sending update to 
> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
> WARN null DistributedUpdateProcessor Error sending update to 
> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
> WARN null DistributedUpdateProcessor Error sending update to 
> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
> WARN null DistributedUpdateProcessor Error sending update to 
> http://172.18.111.112:8983/solr
>
> 11/16/2015, 3:17:09 PM
>
> WARN
>
> null
>
> DistributedUpdateProcessor
>
> Error sending update to http://172.18.111.112:8983/solr
>
> 11/16/2015, 3:17:09 PM
>
> WARN
>
> null
>
> DistributedUpdateProcessor
>
> Error sending update to http://172.18.111.112:8983/solr
>
> 11/16/2015, 3:22:26 PM
>
> ERROR
>
> null
>
> SolrCmdDistributor
>
> org.apache.solr.client.solrj.SolrServerException: Timeout occured 
> while waiting response from server at: 
> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1
>
>
>
> The main errors are Timeout occurred exceptions and maxWarmingSearchers 
> exceeded. Is anyone able to advise, or has anyone experienced something 
> similar in their SolrCloud setup?
>
> Regards,
> Adrian
>
>
>

--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management Solr & 
Elasticsearch Support * http://sematext.com/



Re: DIH Caching w/ BerkleyBackedCache

2015-11-16 Thread Todd Long
Mikhail Khludnev wrote
> "External merge" join helps to avoid boilerplate caching in such simple
> cases.

Thank you for the reply. I can certainly look into this though I would have
to apply the patch for our version (i.e. 4.8.1). I really just simplified
our data configuration here which actually consists of many sub-entities
that are successfully using the SortedMapBackedCache cache. I imagine this
would still apply to those as the queries themselves are simple for the most
part. I assume performance-wise this would only require a single table
scan?

I'm still very much interested in resolving this Berkley database cache
issue. I'm sure there is some minor configuration I'm missing that is
causing this behavior. Again, I've had no issues with the
SortedMapBackedCache for its caching purpose... I've tried simplifying our
data configuration to only one thread with a single sub-entity with the same
results. Again, any help would be greatly appreciated with this.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-Caching-w-BerkleyBackedCache-tp4240142p4240356.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Jetty Vs Tomcat (Performance issue)

2015-11-16 Thread Upayavira
Just to be sure, are you installing Solr inside a different Jetty, or
using the Jetty that comes with Solr?

You would be expected to use the one installed and managed by Solr.

Upayavira

On Mon, Nov 16, 2015, at 11:58 AM, Behzad Qureshi wrote:
> Hi All,
> 
> I am using a Tomcat server with Solr 4.10.3. I want to shift to Jetty as a
> replacement for the Tomcat server, but I am not getting good results with
> respect to performance. I have tried Solr 4.10.3 on both Jetty 8 and
> Jetty 9 with Java 8. Below are the configurations I have used.
> 
> Can anyone please tell me if I am missing anything?
> 
> *Jetty:*
> 
> Xms: 5GB
> Xmx: 50GB
> Xss: 256MB
> 
> 
> *Tomcat:*
> 
> Xms: 5GB
> Xmx: 50GB
> Xss: Default
> 
> 
> *Index Size:*
> 
> 1TB (20 cores)
> 
> 
> 
> -- 
> 
> Regards,
> 
> Behzad Qureshi


Re: DIH Caching w/ BerkleyBackedCache

2015-11-16 Thread Mikhail Khludnev
On Mon, Nov 16, 2015 at 5:08 PM, Todd Long  wrote:

> Mikhail Khludnev wrote
> > "External merge" join helps to avoid boilerplate caching in such simple
> > cases.
>
> Thank you for the reply. I can certainly look into this though I would have
> to apply the patch for our version (i.e. 4.8.1). I really just simplified
> our data configuration here which actually consists of many sub-entities
> that are successfully using the SortedMapBackedCache cache. I imagine this
> would still apply to those as the queries themselves are simple for the
> most
> part.

It's worth mentioning that for a really complex relation scheme it might be
challenging to organize all of the entities into parallel ordered streams.


> I assume performance-wise this would only require the single table
> scan?
>
It sounds like that, but I'm not expert enough to comment in precise terms.


>
> I'm still very much interested in resolving this Berkley database cache
> issue. I'm sure there is some minor configuration I'm missing that is
> causing this behavior. Again, I've had no issues with the
> SortedMapBackedCache for its caching purpose... I've tried simplifying our
> data configuration to only one thread with a single sub-entity with the
> same
> results. Again, any help would be greatly appreciated with this.
>

threads... you said? Which ones? Declarative parallelization in
EntityProcessor worked only with certain 3.x versions.



>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/DIH-Caching-w-BerkleyBackedCache-tp4240142p4240356.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





Re: Query gives response multiple times

2015-11-16 Thread Andrea Gazzarini
Hi Shane,
If the field is multivalued and contains 8 doubles I believe it is the
expected behaviour.

If I misunderstood something please expand a bit. What's wrong with that
response?

Best,
Andrea
On 16 Nov 2015 19:50, "Shane McCarthy"  wrote:

> I am having an issue with Solr and want to know if this is the usual
> behaviour.
>
> I query the database and receive a response which has the value of the
> field I requested repeated 8 times.  The field is multivalued and
> contains doubles.
>
> Is there something I could add to the schema.xml to remedy this, or is this
> simply the way it is?
>
> Thanks,
>
> Shane
>


Re: Solr logging in local time

2015-11-16 Thread tedsolr
There are more than a dozen logging sources that are aggregated into Splunk
for my application. Solr is only one of them. All the others are logging in
local time. Perhaps there is a Splunk centric solution, but I would like to
know what the alternatives are. Anyone know how to "fix" (as in define, not
correct) the timezone for Solr logging?

thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-logging-in-local-time-tp4240369p4240400.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Jetty Vs Tomcat (Performance issue)

2015-11-16 Thread Ishan Chattopadhyaya
Also, what are the specific performance issues you are observing?

On Mon, Nov 16, 2015 at 6:41 PM, Upayavira  wrote:

> Just to be sure, are you installing Solr inside a different Jetty, or
> using the Jetty that comes with Solr?
>
> You would be expected to use the one installed and managed by Solr.
>
> Upayavira
>
> On Mon, Nov 16, 2015, at 11:58 AM, Behzad Qureshi wrote:
> > Hi All,
> >
> > I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
> > replacement of Tomcat server but I am not getting any good results with
> > respect to performance. I have tried solr 4.10.3 on both Jetty 8 and
> > Jetty
> > 9 with java 8. Below are configurations I have used.
> >
> > Can anyone please tell me if I am missing anything?
> >
> > *Jetty:*
> >
> > Xms: 5GB
> > Xmx: 50GB
> > Xss: 256MB
> >
> >
> > *Tomcat:*
> >
> > Xms: 5GB
> > Xmx: 50GB
> > Xss: Default
> >
> >
> > *Index Size:*
> >
> > 1TB (20 cores)
> >
> >
> >
> > --
> >
> > Regards,
> >
> > Behzad Qureshi
>


Query gives response multiple times

2015-11-16 Thread Shane McCarthy
I am having an issue with Solr and want to know if this is the usual
behaviour.

I query the database and receive a response which has the value of the
field I requested repeated 8 times.  The field is multivalued and
contains doubles.

Is there something I could add to the schema.xml to remedy this, or is this
simply the way it is?

Thanks,

Shane


Re: Query gives response multiple times

2015-11-16 Thread Alexandre Rafalovitch
I would check for a copyField into that target field, or something in
UpdateRequestProcessors (in solrconfig.xml) that copies into that
field.

Barring those two, the field should return what you put into it.

Regards,
   Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
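For illustration, the kind of schema.xml rule meant here — with hypothetical field names — is:

```xml
<!-- Hypothetical: if the client indexes into "values_mds" directly AND this
     rule also copies into it, every value shows up more than once. -->
<copyField source="value_d" dest="values_mds"/>
```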


On 16 November 2015 at 13:50, Shane McCarthy  wrote:
> I am having an issue with Solr and want to know if this is the usual
> behaviour.
>
> I query the database and receive a response which has the value of the
> field I requested repeated 8 times.  The field is multivalued and
> contains doubles.
>
> Is there something I could add to the schema.xml to remedy this, or is this
> simply the way it is?
>
> Thanks,
>
> Shane


Re: Query gives response multiple times

2015-11-16 Thread Shane McCarthy
I am using an instance of Islandora. The database is housed on a server I
don't have access to. Hopefully it is moved soon, but this is the URL I
make the request with.

The query url is '
https://upeichem.clients.discoverygarden.ca/islandora/rest/v1/solr/PID%3A%28%22islandora%3A1199%22%29?fl=PID%2Ccml_hfenergy_md%2Cfedora_datastreams_ms=2147483647=edismax
'

For the field with only 1 value it is repeated 8 times.

The field with 8 values is only repeated 4 times. The first 8 values
returned are in the correct order but the remaining 24 values are not in
any order as far as I can tell.



On Mon, Nov 16, 2015 at 4:48 PM, Alexandre Rafalovitch 
wrote:

> What does the query looks like that you get this? And is it exactly
> the same value 8 times?
>
> 
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 16 November 2015 at 15:32, Shane McCarthy  wrote:
> > Thank you for the quick responses.
> >
> > @Andrea Gazzarini
> > The field can have one or eight doubles in it.  However, the response of
> > the query has 8 doubles and 64 doubles respectively.  The values are
> > repeated 8 times.
> >
> > @Alexandre Rafalovitch
> > Thanks for the link.  I am just getting started using Solr so that should
> > help.
> >
> > On Mon, Nov 16, 2015 at 3:13 PM, Alexandre Rafalovitch <
> arafa...@gmail.com>
> > wrote:
> >
> >> I would check for copyField into that target field or something in
> >> UpdateRequestProcessors (in solrconfig.xml) that copies into that
> >> field.
> >>
> >> Baring those two, the field should return what you put into it.
> >>
> >> Regards,
> >>Alex.
> >> 
> >> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> >> http://www.solr-start.com/
> >>
> >>
> >> On 16 November 2015 at 13:50, Shane McCarthy  wrote:
> >> > I am having an issue with Solr and want to know if this is the usual
> >> > behaviour.
> >> >
> >> > I query the database and receive a response which has the value of the
> >> > field I requested repeated 8 times.  The field is a multivalued and
> >> > contains doubles.
> >> >
> >> > Is their something I could add to the schema.xml to remedy this or is
> >> this
> >> > simply the way it is?
> >> >
> >> > Thanks,
> >> >
> >> > Shane
> >>
>


Re: Jetty Vs Tomcat (Performance issue)

2015-11-16 Thread Timothy Potter
I hope 256MB of Xss is a typo and you really meant 256k right?


On Mon, Nov 16, 2015 at 4:58 AM, Behzad Qureshi
 wrote:
> Hi All,
>
> I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
> replacement of Tomcat server but I am not getting any good results with
> respect to performance. I have tried solr 4.10.3 on both Jetty 8 and Jetty
> 9 with java 8. Below are configurations I have used.
>
> Can anyone please tell me if I am missing anything?
>
> *Jetty:*
>
> Xms: 5GB
> Xmx: 50GB
> Xss: 256MB
>
>
> *Tomcat:*
>
> Xms: 5GB
> Xmx: 50GB
> Xss: Default
>
>
> *Index Size:*
>
> 1TB (20 cores)
>
>
>
> --
>
> Regards,
>
> Behzad Qureshi
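To make Timothy's point concrete: -Xss is a *per-thread* stack size, so 256MB per thread can exhaust memory very quickly under a servlet container's thread pool. A sketch of what the start line might look like (the script name and exact flags are assumptions; adapt to your deployment):

```shell
# Hypothetical Jetty start command: -Xss256m reserves 256MB of stack for
# every thread; 256k is a typical, sane value.
java -Xms5g -Xmx50g -Xss256k -jar start.jar
```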


Solr/jetty and datasource

2015-11-16 Thread fabigol
Hi,
I want to use a datasource for my security module.
But I don't know where I must declare it.
I tried declaring it in jetty.xml located in .../etc,
but nothing happens; the datasource is not activated.
Here is my environment declaration:

  
   
   environememt
   /stage/solr/server/solr
  

Here is my datasource declaration:
  
   
   jdbc/test
   

   postgres
   
   test
   172.10.20.192
   5432
 


  

I'm lost... help!!!



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-jetty-and-datasource-tp4240426.html
Sent from the Solr - User mailing list archive at Nabble.com.
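The XML tags in the message above were stripped by the mail archive, so only the values survive. For reference, a JNDI datasource declaration in jetty.xml (or WEB-INF/jetty-env.xml) built from those values would usually look something like this — a sketch only: the exact class names depend on the Jetty version, and it assumes the PostgreSQL JDBC driver is on the classpath:

```xml
<!-- Sketch, not a verified configuration for this installation. -->
<New id="test" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>jdbc/test</Arg>
  <Arg>
    <New class="org.postgresql.ds.PGSimpleDataSource">
      <Set name="ServerName">172.10.20.192</Set>
      <Set name="PortNumber">5432</Set>
      <Set name="DatabaseName">test</Set>
      <Set name="User">postgres</Set>
    </New>
  </Arg>
</New>
```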


Re: Query gives response multiple times

2015-11-16 Thread Shane McCarthy
Thank you for the quick responses.

@Andrea Gazzarini
The field can have one or eight doubles in it.  However, the response of
the query has 8 doubles and 64 doubles respectively.  The values are
repeated 8 times.

@Alexandre Rafalovitch
Thanks for the link.  I am just getting started using Solr so that should
help.

On Mon, Nov 16, 2015 at 3:13 PM, Alexandre Rafalovitch 
wrote:

> I would check for copyField into that target field or something in
> UpdateRequestProcessors (in solrconfig.xml) that copies into that
> field.
>
> Baring those two, the field should return what you put into it.
>
> Regards,
>Alex.
> 
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 16 November 2015 at 13:50, Shane McCarthy  wrote:
> > I am having an issue with Solr and want to know if this is the usual
> > behaviour.
> >
> > I query the database and receive a response which has the value of the
> > field I requested repeated 8 times.  The field is a multivalued and
> > contains doubles.
> >
> > Is their something I could add to the schema.xml to remedy this or is
> this
> > simply the way it is?
> >
> > Thanks,
> >
> > Shane
>


Re: Solr logging in local time

2015-11-16 Thread Alexandre Rafalovitch
The logging format is defined by log4j properties. Looking at Solr
5.3.1, we are using EnhancedPatternLayout, which apparently supports
just putting the timezone in braces after the date format:
http://stackoverflow.com/questions/9116425/apache-log4j-logging-with-specific-timezone

I'd try that as a first step.

Regards,
Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 16 November 2015 at 12:52, tedsolr  wrote:
> There are more than a dozen logging sources that are aggregated into Splunk
> for my application. Solr is only one of them. All the others are logging in
> local time. Perhaps there is a Splunk centric solution, but I would like to
> know what the alternatives are. Anyone know how to "fix" (as in define, not
> correct) the timezone for Solr logging?
>
> thanks
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-logging-in-local-time-tp4240369p4240400.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr/jetty and datasource

2015-11-16 Thread fabigol
I tried WEB-INF/jetty-env.xml too;
nothing there either.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-jetty-and-datasource-tp4240426p4240427.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr logging in local time

2015-11-16 Thread tedsolr
There is a property for timezone. Just set that in solr.in.sh and logging
will use it. The default is UTC.

SOLR_TIMEZONE="EST"



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-logging-in-local-time-tp4240369p4240434.html
Sent from the Solr - User mailing list archive at Nabble.com.
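Since the original concern was daylight saving: an abbreviation like EST is a fixed UTC-5 offset, while an IANA zone ID tracks DST transitions. A hedged sketch for solr.in.sh (the specific zone is an example, not from the thread):

```shell
# solr.in.sh — America/New_York follows daylight-saving rules,
# whereas "EST" stays at UTC-5 year-round.
SOLR_TIMEZONE="America/New_York"
```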


Re: Query gives response multiple times

2015-11-16 Thread Alexandre Rafalovitch
What does the query looks like that you get this? And is it exactly
the same value 8 times?


Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 16 November 2015 at 15:32, Shane McCarthy  wrote:
> Thank you for the quick responses.
>
> @Andrea Gazzarini
> The field can have one or eight doubles in it.  However, the response of
> the query has 8 doubles and 64 doubles respectively.  The values are
> repeated 8 times.
>
> @Alexandre Rafalovitch
> Thanks for the link.  I am just getting started using Solr so that should
> help.
>
> On Mon, Nov 16, 2015 at 3:13 PM, Alexandre Rafalovitch 
> wrote:
>
>> I would check for copyField into that target field or something in
>> UpdateRequestProcessors (in solrconfig.xml) that copies into that
>> field.
>>
>> Baring those two, the field should return what you put into it.
>>
>> Regards,
>>Alex.
>> 
>> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
>> http://www.solr-start.com/
>>
>>
>> On 16 November 2015 at 13:50, Shane McCarthy  wrote:
>> > I am having an issue with Solr and want to know if this is the usual
>> > behaviour.
>> >
>> > I query the database and receive a response which has the value of the
>> > field I requested repeated 8 times.  The field is a multivalued and
>> > contains doubles.
>> >
>> > Is their something I could add to the schema.xml to remedy this or is
>> this
>> > simply the way it is?
>> >
>> > Thanks,
>> >
>> > Shane
>>


RE: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Adrian Liew
Thanks Eric.

Here is my reply

>> 1> that commits are happening too often, and especially if the commits
>> are happening
>> from a client. If commits aren't being sent by a client, then look at 
>> autoCommit and softAutoCommit in solrconfig.xml (if you can).
Understand what you mean. Besides talking to the folks at Sitecore about where 
they issue commits, is there a way I can balance these with autoCommit and 
softAutoCommit in solrconfig.xml? Better still, can you recommend any articles that 
talk about best-practice configuration for a production setup.

>> 2> excessive autowarm settings, again in solronfig.xml.
>> If, as you say all of Solr is a black box, then talk to the Sitecore folks, 
>> on the surface Solr is just poorly configured.
I will raise this with the Sitecore guys, particularly asking them why commits 
are happening faster than searchers can be opened. I have seen 'overlapping 
onDeckSearchers' limit-exceeded errors as well. I'll let you know 
what I get back from the guys.

Regards,
Adrian

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Tuesday, November 17, 2015 12:44 AM
To: solr-user 
Subject: Re: Solr Cloud 5.3.0 Errors in Logs

Having 6 warming searchers is an anti-pattern. What it means is that commits 
are happening faster than your searcher can be opened. There is _no_ good 
reason that I know of for changing it from 2; having changed it in 
solrconfig.xml to 6 almost always indicates an improper configuration.

Places to look:
1> that commits are happening too often, and especially if the commits
are happening
from a client. If commits aren't being sent by a client, then look at 
autoCommit and softAutoCommit in solrconfig.xml (if you can).

2> excessive autowarm settings, again in solrconfig.xml.

If, as you say all of Solr is a black box, then talk to the Sitecore folks, on 
the surface Solr is just poorly configured.

Best,
Erick

On Mon, Nov 16, 2015 at 4:33 AM, Adrian Liew  wrote:
> Hi Emir,
>
> I am working with a third party platform, Sitecore. The product is a black 
> box that encapsulates the internal workings of solr queries and so on. If 
> there are any questions you have with regards with the below, let me know. It 
> will be useful for me to communicate what could cause the issues below.
>
> Regards,
> Adrian
>
> -Original Message-
> From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com]
> Sent: Monday, November 16, 2015 4:47 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>
> Hi Adrian,
> Can you give us bit more details about warmup queries you use and test that 
> you are running when error occurs.
>
> Thanks,
> Emir
>
> On 16.11.2015 08:40, Adrian Liew wrote:
>> Hi there,
>>
>> Will like to get some opinions on the errors encountered below. I have 
>> currently setup a SolrCloud cluster of 3 servers (each server hosting a Solr 
>> instance and a Zookeeper instance).
>>
>> I am encountering the errors below in the logs:
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>> limit of maxWarmingSearchers=6, try again later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
>> opening new searcher. exceeded limit of maxWarmingSearchers=6, try again 
>> later.
>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
>> opening new searcher. exceeded limit of maxWarmingSearchers=6, try again 
>> later.
>> Monday, November 16, 2015 3:22:54 PM WARN null 
>> DistributedUpdateProcessor Error sending update to 
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
>> WARN null DistributedUpdateProcessor Error sending update to 
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
>> WARN null DistributedUpdateProcessor Error sending update to 
>> http://172.18.111.112:8983/solr Monday, November 16, 2015 3:22:54 PM 
>> WARN null DistributedUpdateProcessor 

Re: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Erick Erickson
Here's perhaps more than you really want to know about commits

https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

The short form is that most setups set autoCommit to a relatively
short interval (15-60 seconds at least under heavy indexing loads,
I've seen 10-15 minutes under relatively light loads) with
openSearcher set to false. And I rarely set maxDocs in that
configuration, it's actually not that useful IMO.

Then set up an autoSoftCommit to be as long as you can tolerate, but
IMO rarely shorter than 60 seconds unless you have very aggressive
near-real-time (NRT) requirements.

And if your product manager simply insists on very aggressive NRT
settings, you should consider making your filterCache and
queryResultCache relatively small with minimal autowarming.

Any time you exceed maxWarmingSearchers, it indicates poorly
configured Solr instances and/or commits happening far too often.
Bumping that up to numbers greater than two is almost always a
band-aid over that misconfiguration.
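The settings described above might look like this in solrconfig.xml — a sketch; the millisecond values are illustrative starting points, not prescriptions:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush to disk regularly, but do not open a searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- 60s; 15-60s under heavy indexing -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: controls document visibility; as long as you can tolerate -->
  <autoSoftCommit>
    <maxTime>120000</maxTime>
  </autoSoftCommit>
</updateHandler>
```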

On Mon, Nov 16, 2015 at 3:45 PM, Adrian Liew  wrote:
> Thanks Eric.
>
> Here is my reply
>
>>> 1> that commits are happening too often, and especially if the commits
>>> are happening
>>> from a client. If commits aren't being sent by a client, then look at 
>>> autoCommit and softAutoCommit in solrconfig.xml (if you can).
> Understand what you mean. Besides talking to to the folks at Sitecore on 
> where they issue commits, is there a way I can balance these with autoCommit 
> and softAutoCommit in solrconfig.xml? Best is, can you recommend any articles 
> that talk about best practice configuration for a production setup.
>
>>> 2> excessive autowarm settings, again in solronfig.xml.
>>> If, as you say all of Solr is a black box, then talk to the Sitecore folks, 
>>> on the surface Solr is just poorly configured.
> I will raise with Sitecore guys on this. Particularly asking them why commits 
> are happening faster than searchers can be opened. I have seen 
> overlappingDeckSearchers have exceeded the limit errors as well. Let you know 
> what I get back from the guys.
>
> Regards,
> Adrian
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, November 17, 2015 12:44 AM
> To: solr-user 
> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>
> Having 6 warming serachers is an anti-pattern. What it means is that commits 
> are happening faster than your searcher can be opened. There is _no_ good 
> reason that I know of for changing it from 2, having changed it in 
> solrconfig.xml to 6 almost always indicates an improper configuration.
>
> Places to look:
> 1> that commits are happening too often, and especially if the commits
> are happening
> from a client. If commits aren't being sent by a client, then look at 
> autoCommit and softAutoCommit in solrconfig.xml (if you can).
>
> 2> excessive autowarm settings, again in solronfig.xml.
>
> If, as you say all of Solr is a black box, then talk to the Sitecore folks, 
> on the surface Solr is just poorly configured.
>
> Best,
> Erick
>
> On Mon, Nov 16, 2015 at 4:33 AM, Adrian Liew  wrote:
>> Hi Emir,
>>
>> I am working with a third party platform, Sitecore. The product is a black 
>> box that encapsulates the internal workings of solr queries and so on. If 
>> there are any questions you have with regards with the below, let me know. 
>> It will be useful for me to communicate what could cause the issues below.
>>
>> Regards,
>> Adrian
>>
>> -Original Message-
>> From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com]
>> Sent: Monday, November 16, 2015 4:47 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>>
>> Hi Adrian,
>> Can you give us bit more details about warmup queries you use and test that 
>> you are running when error occurs.
>>
>> Thanks,
>> Emir
>>
>> On 16.11.2015 08:40, Adrian Liew wrote:
>>> Hi there,
>>>
>>> Will like to get some opinions on the errors encountered below. I have 
>>> currently setup a SolrCloud cluster of 3 servers (each server hosting a 
>>> Solr instance and a Zookeeper instance).
>>>
>>> I am encountering the errors below in the logs:
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>>> limit of maxWarmingSearchers=6, try again later.
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>>> limit of maxWarmingSearchers=6, try again later.
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>>> limit of maxWarmingSearchers=6, try again later.
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: 

Re: Query gives response multiple times

2015-11-16 Thread Alexandre Rafalovitch
On 16 November 2015 at 17:40, Shane McCarthy  wrote:
> I am using an instance of Islandora.

Ah. This complicates the situation as there is an unknown - to most of
us - layer in between. So, it is not clear whether this multiplication
is happening in Solr or in Islandora.

Your best option is to hit Solr server directly and basically do a
query for a specific record's id with the fields that you are having a
problem with. If that field for that record shows the problem the same
way as through the full Islandora path, the problem is Solr. Then, you
review the copyFields, etc. If it does not...

Also, is this only happening with one "double" field but not another,
with all "double" fields or with some other combination?

And did it start at some point or was this always like that?

You need to figure out something to contrast the observed behavior against.

Regards,
Alex.




Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
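The suggestion above — bypass Islandora and query Solr directly for one record — can be scripted. A minimal sketch; the host, core name, and `PID`/field names are placeholders modeled on the thread's URL, not confirmed values:

```python
from urllib.parse import urlencode

def build_solr_query_url(base_url, doc_id, fields):
    """Build a direct /select URL for one document, bypassing the front end.

    base_url is a placeholder such as "http://localhost:8983/solr/mycore";
    PID as the id field is an assumption taken from the thread's URL.
    """
    params = {
        "q": 'PID:"%s"' % doc_id,   # exact-match query on the id field
        "fl": ",".join(fields),     # return only the fields under investigation
        "wt": "json",
    }
    return "%s/select?%s" % (base_url.rstrip("/"), urlencode(params))

# Example: inspect the suspect field for a single record.
url = build_solr_query_url("http://localhost:8983/solr/mycore",
                           "islandora:1199",
                           ["PID", "cml_hfenergy_md"])
print(url)
```

If the field comes back duplicated here too, the problem is on the Solr side (copyField, update processors); if not, it is in the layer above.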


Re: StringIndexOutOfBoundsException using spellcheck and synonyms

2015-11-16 Thread Derek Poh

Hi Scott

I am using Solr 4.10.4.

On 11/16/2015 10:06 PM, Scott Stults wrote:

Hi Derek,

Could you please add what version of Solr you see this in? I didn't see a
related Jira, so this might warrant a new one.


k/r,
Scott

On Sun, Nov 15, 2015 at 11:01 PM, Derek Poh  wrote:


Hi
I am using spellcheck and synonyms. I am getting
"java.lang.StringIndexOutOfBoundsException: String index out of range: -1"
for some keywords.

I think I managed to narrow down the likely cause of it.
I have this line of entry in the synonyms.txt file:

body spray,cologne,parfum,parfume,perfume,purfume,toilette

When I search for 'cologne' it will hit the exception.
If I remove the 'body spray' term from the line, I do not hit the exception.

cologne,parfum,parfume,perfume,purfume,toilette

It seems like it could be due to multi-term entries in the synonyms file, but
there are some keywords with multi-term synonyms that do not have the
issue.
This line has the multi-term entry "paint ball" in it; when I search for paintball
or paintballs it does not hit the exception.

paintball,paintballs,paint ball


Any advice on how I can resolve this issue?


The field used for spellcheck:



Exception stacktrace:
2015-11-16T07:06:43,055 - ERROR [qtp744979286-193443:SolrException@142] -
null:java.lang.StringIndexOutOfBoundsException: String index out of range:
-1
 at
java.lang.AbstractStringBuilder.replace(AbstractStringBuilder.java:789)
 at java.lang.StringBuilder.replace(StringBuilder.java:266)
 at
org.apache.solr.spelling.SpellCheckCollator.getCollation(SpellCheckCollator.java:235)
 at
org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:92)
 at
org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
 at
org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
 at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
 at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
 at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
 at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
 at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
 at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
 at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
 at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
 at org.eclipse.jetty.server.Server.handle(Server.java:497)
 at
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
 at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
 at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
 at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
 at java.lang.Thread.run(Thread.java:722)

Derek

--
CONFIDENTIALITY NOTICE
This e-mail (including any attachments) may contain confidential and/or
privileged information. If you are not the intended recipient or have
received this e-mail in error, please inform the sender immediately and
delete this e-mail (including any attachments) from your computer, and you
must not use, disclose to anyone else or copy this e-mail (including any
attachments), whether in whole or in part.
This e-mail and any reply to it may be monitored for security, legal,
regulatory compliance and/or other appropriate reasons.







--

Jetty Vs Tomcat (Performance issue)

2015-11-16 Thread Behzad Qureshi
Hi All,

I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
replacement of Tomcat server but I am not getting any good results with
respect to performance. I have tried solr 4.10.3 on both Jetty 8 and Jetty
9 with java 8. Below are configurations I have used.

Can anyone please tell me if I am missing anything?

*Jetty:*

Xms: 5GB
Xmx: 50GB
Xss: 256MB


*Tomcat:*

Xms: 5GB
Xmx: 50GB
Xss: Default


*Index Size:*

1TB (20 cores)



-- 

Regards,

Behzad Qureshi


how to join search mutiple collection in sorlcloud

2015-11-16 Thread soledede_w...@ehsy.com
Dear @solr_lucene
Currently I am using Solr 5.3.1. I have a requirement: I need to search like in a
relational database (select * from A, B where A.id = B.id). Can we implement this with
Solr 5.3 in SolrCloud mode? I have two collections, 2 shards per collection.
  Help me please.

Thanks


soledede_w...@ehsy.com


Re: Solr logging in local time

2015-11-16 Thread Shawn Heisey
On 11/16/2015 9:04 AM, tedsolr wrote:
> Is it possible to define a timezone for Solr so that logging occurs in local
> time? My logs appear to be in UTC. Due to daylight savings, I don't think
> defining a GMT offset in the log4j.properties files will work.

I noticed this today when I upgraded from 5.2.1 to a 5.3.2 snapshot.

This is the adjustment that I made to my log4j.properties file to get
previous behavior back:

log4j.appender.file.layout.ConversionPattern=%d{-MM-dd
HH:mm:ss.SSS}{America/Denver} %-5p (%t) [%X{collection} %X{shard}
%X{replica} %X{core}] %c{2.} %m\n

I'm not sure how that's going to end up wrapping, but as I edit the
message it is wrapping at a place that might cause confusion.  To clear
that confusion up:  There is no space between the right curly brace
after the date pattern and the left curly brace before the timezone.

I'm aware that for one hour out of the year, there will be timestamp
overlap ... but to me, it's a worthwhile tradeoff for having timestamps
that can be easily compared to a wall clock.

Thanks,
Shawn



Re: how to join search mutiple collection in sorlcloud

2015-11-16 Thread Erick Erickson
In a word, no. At least probably not.

There are some JIRA tickets dealing with distributed joins, and some
with certain restrictions, specifically if the second (from)
collection can be reproduced on every slice of the first (to)
collection.

In the trunk (6.0), there's the ParallelSQL stuff which has some
relevance, but it's still not a full RDBMS type join.

The usual recommendation is to flatten your data if at all possible so
you don't _have_ two collections.

Solr is a wonderful search engine. It is not an RDBMS and whenever I
find myself trying to make it behave like an RDBMS I try to rethink
the architecture.
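For the restricted case mentioned above — the "from" collection kept as a single shard and replicated alongside every shard of the "to" collection — the join query parser can reference another core. A hedged sketch; collection and field names are hypothetical:

```text
# Assumes collection B is single-shard and co-located with every shard of A.
q={!join fromIndex=B from=id to=id}category:widgets
```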

On Mon, Nov 16, 2015 at 6:56 PM, soledede_w...@ehsy.com
 wrote:
> Dear @solr_lucene
> currently,I am using solr5.3.1,I have a requirement, I need search like in 
> relation database(select * from A ,B where A.id=B.id),Can we implments with 
> solr5.3 in SolrCloud mode,I have two collection,2 shards per collection.
>   Help me please.
>
> Thanks
>
>
> soledede_w...@ehsy.com


RE: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Adrian Liew
Thanks for the tip Eric. Really useful article to know.

I will keep you posted on my findings!

Regards,
Adrian

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Tuesday, November 17, 2015 8:56 AM
To: solr-user 
Subject: Re: Solr Cloud 5.3.0 Errors in Logs

Here's perhaps more than you really want to know about commits

https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

The short form is that most setups set autoCommit to a relatively short 
interval (15-60 seconds at least under heavy indexing loads, I've seen 10-15 
minutes under relatively light loads) with openSearcher set to false. And I 
rarely set maxDocs in that configuration, it's actually not that useful IMO.

Then set up an autoSoftCommit to be as long as you can tolerate, but IMO rarely 
shorter than 60 seconds unless you have very aggressive near-real-time (NRT) 
requirements.

And if your product manager simply insists on very aggressive NRT settings, you 
should consider making your filterCache and queryResultCache relatively small 
with minimal autowarming.

Any time you exceed maxWarmingSearchers, it indicates poorly configured Solr 
instances and/or commits happening far too often.
Bumping that up to numbers greater than two is almost always a band-aid over 
that misconfiguration.

On Mon, Nov 16, 2015 at 3:45 PM, Adrian Liew  wrote:
> Thanks Eric.
>
> Here is my reply
>
>>> 1> that commits are happening too often, and especially if the 
>>> 1> commits
>>> are happening
>>> from a client. If commits aren't being sent by a client, then look at 
>>> autoCommit and softAutoCommit in solrconfig.xml (if you can).
> Understand what you mean. Besides talking to to the folks at Sitecore on 
> where they issue commits, is there a way I can balance these with autoCommit 
> and softAutoCommit in solrconfig.xml? Best is, can you recommend any articles 
> that talk about best practice configuration for a production setup.
>
>>> 2> excessive autowarm settings, again in solronfig.xml.
>>> If, as you say all of Solr is a black box, then talk to the Sitecore folks, 
>>> on the surface Solr is just poorly configured.
> I will raise with Sitecore guys on this. Particularly asking them why commits 
> are happening faster than searchers can be opened. I have seen 
> overlappingDeckSearchers have exceeded the limit errors as well. Let you know 
> what I get back from the guys.
>
> Regards,
> Adrian
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Tuesday, November 17, 2015 12:44 AM
> To: solr-user 
> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>
> Having 6 warming serachers is an anti-pattern. What it means is that commits 
> are happening faster than your searcher can be opened. There is _no_ good 
> reason that I know of for changing it from 2, having changed it in 
> solrconfig.xml to 6 almost always indicates an improper configuration.
>
> Places to look:
> 1> that commits are happening too often, and especially if the commits
> are happening
> from a client. If commits aren't being sent by a client, then look at 
> autoCommit and softAutoCommit in solrconfig.xml (if you can).
>
> 2> excessive autowarm settings, again in solronfig.xml.
>
> If, as you say all of Solr is a black box, then talk to the Sitecore folks, 
> on the surface Solr is just poorly configured.
>
> Best,
> Erick
>
> On Mon, Nov 16, 2015 at 4:33 AM, Adrian Liew  wrote:
>> Hi Emir,
>>
>> I am working with a third party platform, Sitecore. The product is a black 
>> box that encapsulates the internal workings of solr queries and so on. If 
>> there are any questions you have with regards with the below, let me know. 
>> It will be useful for me to communicate what could cause the issues below.
>>
>> Regards,
>> Adrian
>>
>> -Original Message-
>> From: Emir Arnautovic [mailto:emir.arnauto...@sematext.com]
>> Sent: Monday, November 16, 2015 4:47 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr Cloud 5.3.0 Errors in Logs
>>
>> Hi Adrian,
>> Can you give us bit more details about warmup queries you use and test that 
>> you are running when error occurs.
>>
>> Thanks,
>> Emir
>>
>> On 16.11.2015 08:40, Adrian Liew wrote:
>>> Hi there,
>>>
>>> Will like to get some opinions on the errors encountered below. I have 
>>> currently setup a SolrCloud cluster of 3 servers (each server hosting a 
>>> Solr instance and a Zookeeper instance).
>>>
>>> I am encountering the errors below in the logs:
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>>> limit of maxWarmingSearchers=6, try again later.
>>> Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
>>> org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
>>> 

Re: Re: how to join search mutiple collection in sorlcloud

2015-11-16 Thread soledede_w...@ehsy.com

Thanks Erick

I think we could hash the doc_id across all shards, then do the join, and lastly
merge the results on one node.


soledede_w...@ehsy.com
 
From: Erick Erickson
Date: 2015-11-17 11:10
To: solr-user
Subject: Re: how to join search mutiple collection in sorlcloud
In a word, no. At least probably not.
 
There are some JIRA tickets dealing with distributed joins, and some
with certain restrictions, specifically if the second (from)
collection can be reproduced on every slice of the first (to)
collection.
 
In the trunk (6.0), there's the ParallelSQL stuff which has some
relevance, but it's still not a full RDBMS type join.
 
The usual recommendation is to flatten your data if at all possible so
you don't _have_ two collections.
 
Solr is a wonderful search engine. It is not an RDBMS and whenever I
find myself trying to make it behave like an RDBMS I try to rethink
the architecture.
 
On Mon, Nov 16, 2015 at 6:56 PM, soledede_w...@ehsy.com
 wrote:
> Dear @solr_lucene
> Currently I am using Solr 5.3.1 and I have a requirement: I need a search like 
> in a relational database (select * from A, B where A.id = B.id). Can we 
> implement this with Solr 5.3 in SolrCloud mode? I have two collections, 
> 2 shards per collection.
>   Help me please.
>
> Thanks
>
>
> soledede_w...@ehsy.com


Re: Solr Cloud 5.3.0 Errors in Logs

2015-11-16 Thread Emir Arnautovic

Hi Adrian,
Can you give us a bit more detail about the warmup queries you use and the 
test you are running when the error occurs.


Thanks,
Emir

On 16.11.2015 08:40, Adrian Liew wrote:

Hi there,

I would like to get some opinions on the errors encountered below. I have 
currently setup a SolrCloud cluster of 3 servers (each server hosting a Solr 
instance and a Zookeeper instance).

I am encountering the errors below in the logs:
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCore 
org.apache.solr.common.SolrException: Error opening new searcher. exceeded 
limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
opening new searcher. exceeded limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM ERROR null SolrCmdDistributor 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1: Error 
opening new searcher. exceeded limit of maxWarmingSearchers=6, try again later.
Monday, November 16, 2015 3:22:54 PM WARN null DistributedUpdateProcessor Error 
sending update to http://172.18.111.112:8983/solr
Monday, November 16, 2015 3:22:54 PM WARN null DistributedUpdateProcessor Error 
sending update to http://172.18.111.112:8983/solr
Monday, November 16, 2015 3:22:54 PM WARN null DistributedUpdateProcessor Error 
sending update to http://172.18.111.112:8983/solr
Monday, November 16, 2015 3:22:54 PM WARN null DistributedUpdateProcessor Error 
sending update to http://172.18.111.112:8983/solr

11/16/2015, 3:17:09 PM WARN null DistributedUpdateProcessor Error sending update 
to http://172.18.111.112:8983/solr
11/16/2015, 3:17:09 PM WARN null DistributedUpdateProcessor Error sending update 
to http://172.18.111.112:8983/solr
11/16/2015, 3:22:26 PM ERROR null SolrCmdDistributor 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://172.18.111.112:8983/solr/sitecore_master_index_shard1_replica1



The main errors are the timeout exceptions and "maxWarmingSearchers exceeded". 
Has anyone experienced the same in their SolrCloud setup, or is anyone able to 
advise?

Regards,
Adrian





--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



Re: Best way to track cumulative GC pauses in Solr

2015-11-16 Thread Tom Evans
On Fri, Nov 13, 2015 at 4:50 PM, Walter Underwood  wrote:
> Also, what GC settings are you using? We may be able to make some suggestions.
>
> Cumulative GC pauses aren’t very interesting to me. I’m more interested in 
> the longest ones, 90th percentile, 95th, etc.
>

Any advice would be great, but what I'm primarily interested in is how
people monitor these statistics in real time, indefinitely, on
production servers. E.g., for the disk or RAM usage of one of
my servers, I can look at the historical usage over the last week, last
month, last year and so on.

I need to get these stats in to the same monitoring tools as we use
for monitoring every other vital aspect of our servers. Looking at log
files can be useful, but I don't want to keep arbitrarily large log
files on our servers, nor extract data from them, I want to record it
for posterity in one system that understands sampling.

We already use and maintain our own munin systems, so I'm not
interested in paid-for equivalents of munin - regardless of how simple
to set up they are, they don't integrate with our other performance
monitoring stats, and I would never get budget anyway.

So really:

1) Is it OK to enable JMX monitoring on production systems? The
comments in solr.in.sh suggest not.

2) What JMX beans and attributes should I be using to monitor GC
pauses, particularly maximum length of a single pause in a period, and
the total length of pauses in that period?
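(For what it's worth: the relevant beans are `java.lang:type=GarbageCollector,name=*`, whose `CollectionCount` and `CollectionTime` attributes are cumulative since JVM start — sample them periodically and diff consecutive samples to get per-period totals. They do not expose the maximum length of a single pause; for per-pause maxima and percentiles you still need the GC log. A minimal in-process sketch of reading the same attributes:)

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {
    // Cumulative GC time (ms) across all collectors since JVM start.
    // A monitoring plugin would poll this and diff consecutive samples
    // to get "total pause time in the last period".
    static long cumulativeGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if the JVM doesn't track it
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d, timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        System.out.println("cumulative GC ms since start: " + cumulativeGcMillis());
    }
}
```

A munin plugin would typically read the same attributes over remote JMX (or an HTTP bridge such as jolokia) rather than in-process, but the bean and attribute names are identical.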

Cheers

Tom


Re: Jetty Vs Tomcat (Performance issue)

2015-11-16 Thread Behzad Qureshi
Upayavira:: Just to be sure, are you installing Solr inside a different
Jetty, or using the Jetty that comes with Solr?
*Behzad:: *The Jetty that comes with Solr.

Jetty-8.1.10.v20130312
Embedded Solr 4.10.3

I also used Jetty 9 (not embedded): I tried Solr 4.10.3 with Jetty 9 but am
still facing the same issue.



Ishan:: Also, what are the specific performance issues you are observing?
*Behzad:: *The elapsed time (QTime) of Solr under Jetty is higher than under
Tomcat.



Timothy:: I hope 256MB of Xss is a typo and you really meant 256k right?
*Behzad:: *Right. My bad.
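For completeness, this is roughly how those flags would be passed to the embedded Jetty in a Solr 4.10.x distribution; the path and heap values below are illustrative examples only, not recommendations:

```shell
# Illustrative launch line for the embedded Jetty that ships with Solr 4.10.x.
# Path and heap sizes are examples; the point is -Xss256k, not -Xss256m.
CMD='java -Xms5g -Xmx50g -Xss256k -jar start.jar'
echo "cd solr-4.10.3/example && $CMD"
```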

On Tue, Nov 17, 2015 at 3:17 AM, Timothy Potter 
wrote:

> I hope 256MB of Xss is a typo and you really meant 256k right?
>
>
> On Mon, Nov 16, 2015 at 4:58 AM, Behzad Qureshi
>  wrote:
> > Hi All,
> >
> > I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
> > replacement of Tomcat server but I am not getting any good results with
> > respect to performance. I have tried solr 4.10.3 on both Jetty 8 and
> Jetty
> > 9 with java 8. Below are configurations I have used.
> >
> > Can anyone please tell me if I am missing anything?
> >
> > *Jetty:*
> >
> > Xms: 5GB
> > Xmx: 50GB
> > Xss: 256MB
> >
> >
> > *Tomcat:*
> >
> > Xms: 5GB
> > Xmx: 50GB
> > Xss: Default
> >
> >
> > *Index Size:*
> >
> > 1TB (20 cores)
> >
> >
> >
> > --
> >
> > Regards,
> >
> > Behzad Qureshi
>



-- 

Regards,

Behzad Qureshi

Senior Software Engineer | NorthBay Solutions (Pvt.) Ltd

410-G4, Johar Town, Lahore, Pakistan
Ph: +92 42 35290152-56

Skype ID: behzadqureshi.nbs


Error in log after upgrading Solr

2015-11-16 Thread Shawn Heisey
I have upgraded from 5.2.1 to a 5.3.2 snapshot -- the lucene_solr_5_3
branch plus the patch for SOLR-6188.

I'm getting errors in my log every time I make a commit on a core.

2015-11-16 20:28:11.554 ERROR
(searcherExecutor-82-thread-1-processing-x:sparkinclive) [  
x:sparkinclive] o.a.s.c.SolrCore Previous SolrRequestInfo was not
closed! 
req=waitSearcher=true=true=javabin=2=true
2015-11-16 20:28:11.554 ERROR
(searcherExecutor-82-thread-1-processing-x:sparkinclive) [  
x:sparkinclive] o.a.s.c.SolrCore prev == info : false
2015-11-16 20:28:11.554 INFO 
(searcherExecutor-82-thread-1-processing-x:sparkinclive) [  
x:sparkinclive] o.a.s.c.S.Request [sparkinclive] webapp=null path=null
params={sort=post_date+desc=newSearcher=*:*=false=/lbcheck=1}
hits=459866 status=0 QTime=0
2015-11-16 20:28:11.554 INFO 
(searcherExecutor-82-thread-1-processing-x:sparkinclive) [  
x:sparkinclive] o.a.s.c.SolrCore QuerySenderListener done.

This core has been optimized several times since the upgrade, so there
are no longer any segments built by 5.2.1.

Is this a problem?  I found the code that generates the errors.  It says
it is a temporary sanity check, that it can be changed to only an assert
in the future -- an assert that currently would fail.  The fact that
whoever wrote the code chose to log at ERROR has me a little worried.

I'm completely rebuilding one of the indexes handled by this server, to
see whether this error still happens on an index built from scratch.  It
has several more hours before it will be ready.

Thanks,
Shawn



Undo Split Shard

2015-11-16 Thread kiyer_adobe
We had 32 shards of 30GB each. The query performance was awful, so we decided
to split all of the shards. Most splits went fine, but for 3 shards the split
came out lopsided: the _1 sub-shard is around 16GB but the _0 sub-shard is
only a few MB. The _1 is fine but _0 is definitely wrong. The parent shard is
inactive and the split sub-shards are now active.
I tried DELETESHARD on the sub-shards so I could split again, but DELETESHARD
is not allowed on active shards. Running SPLITSHARD again on the parent shard
failed.

I am unsure what the options are at this point, and queries went from bad
performance to not working at all.

Please advise.

Thanks.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Undo-Split-Shard-tp4240508.html
Sent from the Solr - User mailing list archive at Nabble.com.
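For reference — not a fix for the lopsided split itself — the Collections API calls involved look like this. Host, collection and shard names below are made up; the URLs are echoed rather than executed so the sketch runs without a live cluster:

```shell
# Hypothetical host, collection and shard names -- adjust to your cluster.
SOLR="http://localhost:8983/solr"
COLL="mycoll"

# Split a parent shard (produces sub-shards shard1_0 and shard1_1):
SPLIT_URL="$SOLR/admin/collections?action=SPLITSHARD&collection=$COLL&shard=shard1"

# Delete a sub-shard -- the API only accepts this for inactive shards:
DELETE_URL="$SOLR/admin/collections?action=DELETESHARD&collection=$COLL&shard=shard1_0"

# Against a live cluster you would run: curl "$SPLIT_URL"
echo "$SPLIT_URL"
echo "$DELETE_URL"
```

DELETESHARD being restricted to inactive shards (or shards with no hash range) matches the error reported above when it was tried on the active sub-shards.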


Re: Re: how to join search mutiple collection in sorlcloud

2015-11-16 Thread Paul Blanchaert
You might want to take a look at (or follow up on) SOLR-8297.



On Tue, 17 Nov 2015 at 04:14 soledede_w...@ehsy.com 
wrote:

>
> Thanks Erick
>
> I think if can use hash for doc_id to all shards, then do join,Last merge
> the result in a node.
>
>
> soledede_w...@ehsy.com
>
> From: Erick Erickson
> Date: 2015-11-17 11:10
> To: solr-user
> Subject: Re: how to join search mutiple collection in sorlcloud
> In a word, no. At least probably not.
>
> There are some JIRA tickets dealing with distributed joins, and some
> with certain restrictions, specifically if the second (from)
> collection can be reproduced on every slice of the first (to)
> collection.
>
> In the trunk (6.0), there's the ParallelSQL stuff which has some
> relevance, but it's still not a full RDBMS type join.
>
> The usual recommendation is to flatten your data if at all possible so
> you don't _have_ two collections.
>
> Solr is a wonderful search engine. It is not an RDBMS and whenever I
> find myself trying to make it behave like an RDBMS I try to rethink
> the architecture.
>
> On Mon, Nov 16, 2015 at 6:56 PM, soledede_w...@ehsy.com
>  wrote:
> > Dear @solr_lucene
> > Currently I am using Solr 5.3.1 and I have a requirement: I need a search
> > like in a relational database (select * from A, B where A.id = B.id). Can
> > we implement this with Solr 5.3 in SolrCloud mode? I have two collections,
> > 2 shards per collection.
> >   Help me please.
> >
> > Thanks
> >
> >
> > soledede_w...@ehsy.com
>
-- 
--


Kind regards,

Paul Blanchaert 
www.search-solutions.net
Tel: +32 497 05.01.03