Re: Facet Query performance

2019-07-08 Thread Midas A
Hi,
How can I know whether DocValues are actually being used or not?
Please help me here.

On Mon, Jul 8, 2019 at 2:38 PM Midas A  wrote:

> Hi ,
>
> I have enabled docValues on the facet field, but the query is still taking time.
>
> How can I improve the query time?
>  docValues="true" multiValued="true" termVectors="true" /> 
>
> *Query: *
> http://X.X.X.X:
> /solr/search/select?df=ttl=0=true=id,upt=1=true=1=OR=NOT+hemp:(%22xgidx29760%22+%22xmwxmonster%22+%22xmwxmonsterindia%22+%22xmwxcom%22+%22xswxmonster+com%22+%22xswxmonster%22+%22xswxmonsterindia+com%22+%22xswxmonsterindia%22)=NOT+cEmp:(%
> 22nomster.com%22+OR+%22utyu%22)=NOT+pEmp:(%22nomster.com
> %22+OR+%22utyu%22)=ind:(5)=NOT+is_udis:2=NOT+id:(92197+OR+240613+OR+249717+OR+1007148+OR+2500513+OR+2534675+OR+2813498+OR+9401682)=true=0=is_resume:0^-1000=upt_date:[*+TO+NOW/DAY-36MONTHS]^2=upt_date:[NOW/DAY-36MONTHS+TO+NOW/DAY-24MONTHS]^3=upt_date:[NOW/DAY-24MONTHS+TO+NOW/DAY-12MONTHS]^4=upt_date:[NOW/DAY-12MONTHS+TO+NOW/DAY-9MONTHS]^5=upt_date:[NOW/DAY-9MONTHS+TO+NOW/DAY-6MONTHS]^10=upt_date:[NOW/DAY-6MONTHS+TO+NOW/DAY-3MONTHS]^15=upt_date:[NOW/DAY-3MONTHS+TO+*]^20=NOT+country:isoin^-10=exp:[+10+TO+11+]=exp:[+11+TO+13+]=exp:[+13+TO+15+]=exp:[+15+TO+17+]=exp:[+17+TO+20+]=exp:[+20+TO+25+]=exp:[+25+TO+109+]=ctc:[+100+TO+101+]=ctc:[+101+TO+101.5+]=ctc:[+101.5+TO+102+]=ctc:[+102+TO+103+]=ctc:[+103+TO+104+]=ctc:[+104+TO+105+]=ctc:[+105+TO+107.5+]=ctc:[+107.5+TO+110+]=ctc:[+110+TO+115+]=ctc:[+115+TO+10100+]=0=contents^0.05+currdesig^1.5+predesig^1.5+lng^2+ttl+kw_skl+kw_it=1=false=ttl,kw_skl,kw_it,contents=json=1=0=ind=cat=rol=cl=pref=timing=/resumesearch=1=0=40=2=*=10=id==1=id=id=true=false
>
>

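One way to answer the question above is to query Solr's Schema API for the field definition and check its `docValues` flag. Below is a minimal sketch that parses such a response; the response body shown is illustrative (the field name `ind` and its attributes are assumptions, not taken from the actual schema), and the real data would come from `GET /solr/<core>/schema/fields/<field>?showDefaults=true`. Note that after enabling docValues on an existing field, documents must be reindexed before docValues take effect.

```python
import json

# Illustrative response shape from Solr's Schema API
# (GET /solr/<core>/schema/fields/<field>?showDefaults=true);
# the field name and attribute values here are hypothetical.
sample_response = json.loads("""
{
  "responseHeader": {"status": 0, "QTime": 1},
  "field": {
    "name": "ind",
    "type": "string",
    "multiValued": true,
    "indexed": true,
    "docValues": true
  }
}
""")

def has_doc_values(schema_api_response: dict) -> bool:
    """Return True if the field definition reports docValues enabled."""
    return bool(schema_api_response.get("field", {}).get("docValues", False))

print(has_doc_values(sample_response))  # True for this sample
```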

Facet Query performance

2019-07-08 Thread Midas A
Hi ,

I have enabled docValues on the facet field, but the query is still taking time.

How can I improve the query time?
 

*Query: *
http://X.X.X.X:
/solr/search/select?df=ttl=0=true=id,upt=1=true=1=OR=NOT+hemp:(%22xgidx29760%22+%22xmwxmonster%22+%22xmwxmonsterindia%22+%22xmwxcom%22+%22xswxmonster+com%22+%22xswxmonster%22+%22xswxmonsterindia+com%22+%22xswxmonsterindia%22)=NOT+cEmp:(%
22nomster.com%22+OR+%22utyu%22)=NOT+pEmp:(%22nomster.com
%22+OR+%22utyu%22)=ind:(5)=NOT+is_udis:2=NOT+id:(92197+OR+240613+OR+249717+OR+1007148+OR+2500513+OR+2534675+OR+2813498+OR+9401682)=true=0=is_resume:0^-1000=upt_date:[*+TO+NOW/DAY-36MONTHS]^2=upt_date:[NOW/DAY-36MONTHS+TO+NOW/DAY-24MONTHS]^3=upt_date:[NOW/DAY-24MONTHS+TO+NOW/DAY-12MONTHS]^4=upt_date:[NOW/DAY-12MONTHS+TO+NOW/DAY-9MONTHS]^5=upt_date:[NOW/DAY-9MONTHS+TO+NOW/DAY-6MONTHS]^10=upt_date:[NOW/DAY-6MONTHS+TO+NOW/DAY-3MONTHS]^15=upt_date:[NOW/DAY-3MONTHS+TO+*]^20=NOT+country:isoin^-10=exp:[+10+TO+11+]=exp:[+11+TO+13+]=exp:[+13+TO+15+]=exp:[+15+TO+17+]=exp:[+17+TO+20+]=exp:[+20+TO+25+]=exp:[+25+TO+109+]=ctc:[+100+TO+101+]=ctc:[+101+TO+101.5+]=ctc:[+101.5+TO+102+]=ctc:[+102+TO+103+]=ctc:[+103+TO+104+]=ctc:[+104+TO+105+]=ctc:[+105+TO+107.5+]=ctc:[+107.5+TO+110+]=ctc:[+110+TO+115+]=ctc:[+115+TO+10100+]=0=contents^0.05+currdesig^1.5+predesig^1.5+lng^2+ttl+kw_skl+kw_it=1=false=ttl,kw_skl,kw_it,contents=json=1=0=ind=cat=rol=cl=pref=timing=/resumesearch=1=0=40=2=*=10=id==1=id=id=true=false


Re: query optimization

2019-07-03 Thread Midas A
Please suggest here

On Wed, Jul 3, 2019 at 10:23 AM Midas A  wrote:

> Hi,
>
> How can I optimize the following query? It is taking time:
>
>  webapp=/solr path=/search params={
> df=ttl=0=true=1=true=true=0=0=contents^0.05+currdesig^1.5+predesig^1.5+lng^2+ttl+kw_skl+kw_it=false=ttl,kw_skl,kw_it,contents==1=ttl^0.1+currdesig^0.1+predesig^0.1=0=/resumesearch="mbbss"+OR+"medicine"=2=true=mbbs,+"medical+officer",+doctor,+physician+("medical+officer")+"medical+officer"+"physician""+""general+physician""+""physicians""+""consultant+physician""+""house+physician"+"physician"+"doctor"+"mbbs"+"general+physician"+"physicians"+"consultant+physician"+"house+physician"=(293)=false==none=id,upt=1=OR=NOT+contents:("liaise+with+medical+officer"+"worked+with+medical+officer"+"working+with+medical+officer"+"reported+to+medical+officer"+"references+are+medical+officer"+"coordinated+with+medical+officer"+"closely+with+medical+officer"+"signature+of+medical+officer"+"seal+of++medical+officer"+"liaise+with+physician"+"worked+with+physician"+"working+with+physician"+"reported+to+physician"+"references+are+physician"+"coordinated+with+physician"+"closely+with+physician"+"signature+of+physician"+"seal+of++physician"+"liaise+with+doctor"+"worked+with+doctor"+"working+with+doctor"+"reported+to+doctor"+"references+are+doctor"+"coordinated+with+doctor"+"closely+with+doctor"+"signature+of+doctor"+"seal+of++doctor")=NOT+hemp:("xmwxagency"+"xmwxlimited"+"xmwxplacement"+"xmwxplus"+"xmwxprivate"+"xmwxsecurity"+"xmwxz2"+"xmwxand"+"xswxz2+plus+placement+and+security+agency+private+limited"+"xswxz2+plus+placement+and+security+agency+private"+"xswxz2+plus+placement+and+security+agency"+"xswxz2+plus+placement+and+security"+"xswxz2+plus+placement+and"+"xswxz2+plus+placement"+"xswxz2+plus"+"xswxz2")=ctc:[100.0+TO+107.2]+OR+ctc:[-1.0+TO+-1.0]=(dlh:(22))=ind:(24++42++24++8)=(rol:(292+293+294+322))=(cat:(9))=cat:(1000+OR+907+OR+1+OR+2+OR+3+OR+786+OR+4+OR+5+OR+6+OR+7+OR+8+OR+9+OR+10+OR+11+OR+12+OR+13+OR+14+OR+785+OR+15+OR+16+OR+17+OR+18+OR+908+OR+19+OR+20+OR+21+OR+23+OR+24)=NOT+is_udis:2=is_resume:0^-1000=upt_date:[*+TO+NOW/DAY-36MONTHS]^2=upt_date:[NOW/DAY-36MONTHS+TO+NOW/DAY-24MONT
HS]^3=upt_date:[NOW/DAY-24MONTHS+TO+NOW/DAY-12MONTHS]^4=upt_date:[NOW/DAY-12MONTHS+TO+NOW/DAY-9MONTHS]^5=upt_date:[NOW/DAY-9MONTHS+TO+NOW/DAY-6MONTHS]^10=upt_date:[NOW/DAY-6MONTHS+TO+NOW/DAY-3MONTHS]^15=upt_date:[NOW/DAY-3MONTHS+TO+*]^20=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=dlh:(22)^8={!boost+b%3D4}+_query_:{!edismax+qf%3D"currdesig^8+predesig^6+ttl^3+kw_skl^2+contents"+v%3D"\"doctor\"+\"medical+officer\"+\"physician\""+q.op%3DAND+bq%3D}=_query_:{!edismax+qf%3D"currdesig+predesig+ttl+kw_skl+contents^0.01"+v%3D"\"doctor\"+\"medical+officer\"+\"physician\""+q.op%3DOR+bq%3D}=NOT+country:isoin^-10=exp:[+10+TO+11+]=exp:[+11+TO+13+]=exp:[+13+TO+15+]=exp:[+15+TO+17+]=exp:[+17+TO+20+]=exp:[+20+TO+25+]=exp:[+25+TO+109+]=ctc:[+100+TO+101+]=ctc:[+101+TO+101.5+]=ctc:[+101.5+TO+102+]=ctc:[+102+TO+103+]=ctc:[+103+TO+104+]=ctc:[+104+TO+105+]=ctc:[+105+TO+107.5+]=ctc:[+107.5+TO+110+]=ctc:[+110+TO+115+]=ctc:[+115+TO+10100+]=1=(22)=javabin=(293)=(294)=(322)=ind=cat=rol=cl=pref=false=1=0=40=((mbbs+OR+_query_:"{!edismax+qf%3Ddlh+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany3+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+((("medical+officer")+OR+"medical+officer"~0)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany0+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+(("doctor"+OR+doctor)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany2+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+(("physician"+OR+"physicians"+OR+"general+physician"+OR+"house+physician"+OR+"consultant+physician"+OR+physician)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany1+q.op%3DOR+bq%3D$bq1+bf%3D
}")+OR+_query_:"{!edismax+qf%3D\$semanticfieldskl+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D\$semantictermsskl+q.op%3DOR+bq%3D\$bq1+bf%3D}"+OR+_query_:"{!edismax+qf%3D\$semanticfieldttl+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D\$semantictermsttl+q.op%3DAND+bq%3D\$bq1+bf%3D}")=10=id=kw_skl^0.05+kw_it^0.05+ttl^0.05+currdesig^0.05+predesig^0.05=1=id=id=true}
> hits=20268 status=0 QTime=10659
>


query optimization

2019-07-02 Thread Midas A
Hi,

How can I optimize the following query? It is taking time:

 webapp=/solr path=/search params={
df=ttl=0=true=1=true=true=0=0=contents^0.05+currdesig^1.5+predesig^1.5+lng^2+ttl+kw_skl+kw_it=false=ttl,kw_skl,kw_it,contents==1=ttl^0.1+currdesig^0.1+predesig^0.1=0=/resumesearch="mbbss"+OR+"medicine"=2=true=mbbs,+"medical+officer",+doctor,+physician+("medical+officer")+"medical+officer"+"physician""+""general+physician""+""physicians""+""consultant+physician""+""house+physician"+"physician"+"doctor"+"mbbs"+"general+physician"+"physicians"+"consultant+physician"+"house+physician"=(293)=false==none=id,upt=1=OR=NOT+contents:("liaise+with+medical+officer"+"worked+with+medical+officer"+"working+with+medical+officer"+"reported+to+medical+officer"+"references+are+medical+officer"+"coordinated+with+medical+officer"+"closely+with+medical+officer"+"signature+of+medical+officer"+"seal+of++medical+officer"+"liaise+with+physician"+"worked+with+physician"+"working+with+physician"+"reported+to+physician"+"references+are+physician"+"coordinated+with+physician"+"closely+with+physician"+"signature+of+physician"+"seal+of++physician"+"liaise+with+doctor"+"worked+with+doctor"+"working+with+doctor"+"reported+to+doctor"+"references+are+doctor"+"coordinated+with+doctor"+"closely+with+doctor"+"signature+of+doctor"+"seal+of++doctor")=NOT+hemp:("xmwxagency"+"xmwxlimited"+"xmwxplacement"+"xmwxplus"+"xmwxprivate"+"xmwxsecurity"+"xmwxz2"+"xmwxand"+"xswxz2+plus+placement+and+security+agency+private+limited"+"xswxz2+plus+placement+and+security+agency+private"+"xswxz2+plus+placement+and+security+agency"+"xswxz2+plus+placement+and+security"+"xswxz2+plus+placement+and"+"xswxz2+plus+placement"+"xswxz2+plus"+"xswxz2")=ctc:[100.0+TO+107.2]+OR+ctc:[-1.0+TO+-1.0]=(dlh:(22))=ind:(24++42++24++8)=(rol:(292+293+294+322))=(cat:(9))=cat:(1000+OR+907+OR+1+OR+2+OR+3+OR+786+OR+4+OR+5+OR+6+OR+7+OR+8+OR+9+OR+10+OR+11+OR+12+OR+13+OR+14+OR+785+OR+15+OR+16+OR+17+OR+18+OR+908+OR+19+OR+20+OR+21+OR+23+OR+24)=NOT+is_udis:2=is_resume:0^-1000=upt_date:[*+TO+NOW/DAY-36MONTHS]^2=upt_date:[NOW/DAY-36MONTHS+TO+NOW/DAY-24MONTHS
]^3=upt_date:[NOW/DAY-24MONTHS+TO+NOW/DAY-12MONTHS]^4=upt_date:[NOW/DAY-12MONTHS+TO+NOW/DAY-9MONTHS]^5=upt_date:[NOW/DAY-9MONTHS+TO+NOW/DAY-6MONTHS]^10=upt_date:[NOW/DAY-6MONTHS+TO+NOW/DAY-3MONTHS]^15=upt_date:[NOW/DAY-3MONTHS+TO+*]^20=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=_query_:"{!edismax+qf%3Drol^2+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$typeId+q.op%3DOR+bq%3D\$bq1+bf%3D}"=dlh:(22)^8={!boost+b%3D4}+_query_:{!edismax+qf%3D"currdesig^8+predesig^6+ttl^3+kw_skl^2+contents"+v%3D"\"doctor\"+\"medical+officer\"+\"physician\""+q.op%3DAND+bq%3D}=_query_:{!edismax+qf%3D"currdesig+predesig+ttl+kw_skl+contents^0.01"+v%3D"\"doctor\"+\"medical+officer\"+\"physician\""+q.op%3DOR+bq%3D}=NOT+country:isoin^-10=exp:[+10+TO+11+]=exp:[+11+TO+13+]=exp:[+13+TO+15+]=exp:[+15+TO+17+]=exp:[+17+TO+20+]=exp:[+20+TO+25+]=exp:[+25+TO+109+]=ctc:[+100+TO+101+]=ctc:[+101+TO+101.5+]=ctc:[+101.5+TO+102+]=ctc:[+102+TO+103+]=ctc:[+103+TO+104+]=ctc:[+104+TO+105+]=ctc:[+105+TO+107.5+]=ctc:[+107.5+TO+110+]=ctc:[+110+TO+115+]=ctc:[+115+TO+10100+]=1=(22)=javabin=(293)=(294)=(322)=ind=cat=rol=cl=pref=false=1=0=40=((mbbs+OR+_query_:"{!edismax+qf%3Ddlh+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany3+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+((("medical+officer")+OR+"medical+officer"~0)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany0+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+(("doctor"+OR+doctor)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany2+q.op%3DOR+bq%3D$bq1+bf%3D}")+OR+(("physician"+OR+"physicians"+OR+"general+physician"+OR+"house+physician"+OR+"consultant+physician"+OR+physician)+OR+_query_:"{!edismax+qf%3Drol+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D$queryany1+q.op%3DOR+bq%3D$bq1+bf%3D}"
)+OR+_query_:"{!edismax+qf%3D\$semanticfieldskl+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D\$semantictermsskl+q.op%3DOR+bq%3D\$bq1+bf%3D}"+OR+_query_:"{!edismax+qf%3D\$semanticfieldttl+pf%3Did+ps%3D1+pf2%3Did+ps2%3D1+pf3%3Did+ps3%3D1+v%3D\$semantictermsttl+q.op%3DAND+bq%3D\$bq1+bf%3D}")=10=id=kw_skl^0.05+kw_it^0.05+ttl^0.05+currdesig^0.05+predesig^0.05=1=id=id=true}
hits=20268 status=0 QTime=10659
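The log line above reports `QTime=10659`, i.e. roughly 10.6 seconds. A first step when chasing queries like this is to extract `hits`, `status`, and `QTime` from the request log and flag the slow ones. A minimal sketch, assuming the `key=value` tail format shown in the posted log line (the 1-second threshold is an arbitrary example):

```python
import re

def parse_solr_log_metrics(line: str) -> dict:
    """Extract hits, status, and QTime (ms) from a Solr request log line."""
    metrics = {}
    for key in ("hits", "status", "QTime"):
        m = re.search(rf"\b{key}=(\d+)", line)
        if m:
            metrics[key] = int(m.group(1))
    return metrics

log_tail = "hits=20268 status=0 QTime=10659"
m = parse_solr_log_metrics(log_tail)
print(m)  # {'hits': 20268, 'status': 0, 'QTime': 10659}

# Flag queries slower than 1000 ms (threshold is arbitrary).
print(m.get("QTime", 0) > 1000)  # True
```

From there, re-running the flagged query with `debug=timing` breaks the time down per search component, which helps locate where the 10 seconds go.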


Apache Solr warning on 6.2.1

2019-07-01 Thread Nitin Midas
Hello,

We have Apache Solr version 6.2.1 installed on our server, and for the past
few days we have been getting this warning in the Solr log. It has affected
the performance of Solr queries and added latency to our app:

SolrCore [user_details] PERFORMANCE WARNING: Overlapping onDeckSearchers=2

So we followed this article:
https://support.datastax.com/hc/en-us/articles/207690673-FAQ-Solr-logging-PERFORMANCE-WARNING-Overlapping-onDeckSearchers-and-its-meaning
and made changes in the solrconfig.xml of user_details like this:

16

and we have also reduced the autowarmCount



However, we are still getting this warning. Can you please help us improve
the performance of Solr queries in our app?

Regards,
Nitin.
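The warning above appears when a commit opens a new searcher while an earlier searcher is still warming, so raising the searcher limit usually masks the symptom rather than fixing it; the underlying cause is commits arriving faster than warming completes. A back-of-the-envelope sketch of that relationship (a simplified model assuming evenly spaced commits and constant warm time):

```python
import math

def max_concurrent_warming(commit_interval_s: float, warm_time_s: float) -> int:
    """Rough number of searchers warming at once when commits that open a
    new searcher arrive every commit_interval_s and each searcher takes
    warm_time_s to warm. Simplified model: even spacing, constant warm time."""
    return max(1, math.ceil(warm_time_s / commit_interval_s))

# Commits every 10 s but warming takes 25 s -> ~3 overlapping searchers.
print(max_concurrent_warming(10, 25))  # 3

# Commits every 60 s with 5 s warming -> no overlap.
print(max_concurrent_warming(60, 5))   # 1
```

This suggests the two real remedies the linked article describes: commit less often (or disable `openSearcher` on frequent commits), and/or shrink warm time by lowering `autowarmCount` on the caches.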


Re: refused connection

2019-06-28 Thread Midas A
We are doing bulk indexing here. Might this be caused by the heavy
indexing, perhaps something Jetty-connection related?

On Fri, Jun 28, 2019 at 1:47 PM Markus Jelsma 
wrote:

> Hello,
>
> If you get a Connection Refused, then normally the server is just offline.
> But, something weird is hiding in your stack trace, you should check it out
> further:
>
> > Caused by: java.net.ConnectException: Cannot assign requested address
> > (connect failed)
>
> I have not seen this before.
>
> Regards,
> Markus
>
> -Original message-
> > From:Midas A 
> > Sent: Friday 28th June 2019 10:03
> > To: solr-user@lucene.apache.org
> > Subject: Re: refused connection
> >
> > Please reply. This error occurs intermittently.
> >
> > On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:
> >
> > > Hi All ,
> > >
> > > I am getting the following error while indexing. Please suggest a
> resolution.
> > >
> > > We are using a Kafka consumer to index into Solr.
> > >
> > >
> > > org.apache.solr.client.solrj.SolrServerException: Server
> > > *refused connection* at: http://host:port/solr/research
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> > > [classes!/:1.0.0]
> > > at
> > >
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> > > [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> > > [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> > > [classes!/:1.0.0]
> > > at
> > >
> com.mon

Re: refused connection

2019-06-28 Thread Midas A
Please reply. This error occurs intermittently.

On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:

> Hi All ,
>
> I am getting the following error while indexing. Please suggest a resolution.
>
> We are using a Kafka consumer to index into Solr.
>
>
> org.apache.solr.client.solrj.SolrServerException: Server
> *refused connection* at: http://host:port/solr/research
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> [classes!/:1.0.0]
> at
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:200)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:148)
> [classes!/:1.0.0]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
> 10.216.204.70:3112 [/10.216.204.70] failed: Cannot assign requested
> address (connect failed)
> at
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:159)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
> ~[httpclient-4.5.5.jar!/:4.5

refused connection

2019-06-28 Thread Midas A
Hi All ,

I am getting the following error while indexing. Please suggest a resolution.

We are using a Kafka consumer to index into Solr.


org.apache.solr.client.solrj.SolrServerException: Server
*refused connection* at: http://host:port/solr/research
at
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
ab - 2019-05-22 15:20:04]
at
com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
[classes!/:1.0.0]
at
com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
[classes!/:1.0.0]
at
com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
[classes!/:1.0.0]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
[spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
[spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
[spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
[spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
[spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
[spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at
com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
[classes!/:1.0.0]
at
com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
[classes!/:1.0.0]
at
com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:200)
[classes!/:1.0.0]
at
com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:148)
[classes!/:1.0.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_121]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
10.216.204.70:3112 [/10.216.204.70] failed: Cannot assign requested address
(connect failed)
at
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:159)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
~[httpclient-4.5.5.jar!/:4.5.5]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
~[httpclient-4.5.5.jar!/:4.5.5]
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
~[httpclient-4.5.5.jar!/:4.5.5]
at

Jdbc driver issue on cloud

2019-06-25 Thread Midas A
Hi,
I am using a streaming expression and getting the following error.

Failed to open JDBC connection

{
  "result-set":{
"docs":[{
"EXCEPTION":"Failed to open JDBC connection to
'jdbc:mysql://localhost/users?user=root=solr'",
"EOF":true,
"RESPONSE_TIME":99}]}}


Solr cloud setup

2019-06-07 Thread Midas A
Hi,

Currently we are on a master-slave architecture, and we want to move to a
SolrCloud architecture.
How should I decide the number of shards in SolrCloud?

My current Solr is version 6 and the index size is 300 GB.



Regards,
Abhishek Tiwari
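There is no universal formula for shard count; a common starting point is to divide the total index size by a target per-shard size and then validate with load testing on real queries. The numbers below are illustrative assumptions (a ~50 GB per-shard target is a frequently cited rule of thumb, not an official limit):

```python
import math

def suggested_shards(index_size_gb: float, target_shard_gb: float = 50) -> int:
    """Naive starting point: ceil(index size / target per-shard size).
    target_shard_gb is an assumed rule of thumb to validate by load testing."""
    return max(1, math.ceil(index_size_gb / target_shard_gb))

print(suggested_shards(300))       # 6 shards for a 300 GB index at 50 GB/shard
print(suggested_shards(300, 100))  # 3 with a larger per-shard target
```

Query latency targets, document count, heap per node, and indexing rate all shift the right answer, which is why the load test matters more than the arithmetic.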


Re: not able to optimize

2019-06-04 Thread Midas A
Is a 400GB index fine?
Should we shard it?

When should we start caring about index size?

On Tue, Jun 4, 2019 at 3:04 PM Midas A  wrote:

> So we should not optimize our index?
>
> On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen  wrote:
>
>> On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
>> >  Index size is 400GB. We use a master-slave architecture.
>> >
>> > Commits are taking time, and we are not able to perform an optimize.
>>
>> Why do you want to optimize in the first place? What are you hoping to
>> achieve?
>>
>> There should be an error message in your Solr log from the failed
>> optimize, so see if you can find that. At least for some Solr versions,
>> optimize can be quite memory hungry so a very quick guess is that you
>> have hit an OutOfMemory. Not enough disk space is also a guess: Make
>> sure you have 2*index size free space before optimizing as worst case
>> for storage usage during optimize is a total of 3*index size.
>>
>> - Toke Eskildsen, Royal Danish Library
>>
>>
>>


Re: not able to optimize

2019-06-04 Thread Midas A
So we should not optimize our index?

On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen  wrote:

> On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
> >  Index size is 400GB. We use a master-slave architecture.
> >
> > Commits are taking time, and we are not able to perform an optimize.
>
> Why do you want to optimize in the first place? What are you hoping to
> achieve?
>
> There should be an error message in your Solr log from the failed
> optimize, so see if you can find that. At least for some Solr versions,
> optimize can be quite memory hungry so a very quick guess is that you
> have hit an OutOfMemory. Not enough disk space is also a guess: Make
> sure you have 2*index size free space before optimizing as worst case
> for storage usage during optimize is a total of 3*index size.
>
> - Toke Eskildsen, Royal Danish Library
>
>
>


not able to optimize

2019-06-04 Thread Midas A
Hi ,
 Index size is 400GB. We use a master-slave architecture.

Commits are taking time, and we are not able to perform an optimize.

What should I do?


Re: Autosuggest help

2019-04-06 Thread Midas A
Any update?

On Thu, 4 Apr 2019, 1:09 pm Midas A,  wrote:

> Hi,
>
> We need to use autosuggest click-stream data in our autosuggestions. How
> can we achieve this?
>
> Currently we are using the Suggester for autosuggestions.
>
>
> Regards,
> Midas
>


Autosuggest help

2019-04-04 Thread Midas A
Hi,

We need to use autosuggest click-stream data in our autosuggestions. How
can we achieve this?

Currently we are using the Suggester for autosuggestions.


Regards,
Midas
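One common way to fold click-stream data into the Solr Suggester is to aggregate click counts per query offline, index each query string as a suggestion document with its count in a numeric field, and point the Suggester's `weightField` (e.g. with `DocumentDictionaryFactory`) at that field so popular queries rank first. A minimal sketch of the aggregation step; the event format (one raw query string per click) is an assumption:

```python
from collections import Counter

def suggestion_weights(click_events):
    """Aggregate raw click-stream events (query strings) into
    per-suggestion weights for a Suggester weight field.
    Normalization here is just trim + lowercase."""
    return Counter(q.strip().lower() for q in click_events)

events = ["java developer", "Java Developer ", "data analyst", "java developer"]
weights = suggestion_weights(events)
print(weights["java developer"])  # 3
print(weights["data analyst"])    # 1
```

The resulting `{suggestion: weight}` pairs would then be indexed into the suggestion core and the Suggester rebuilt on a schedule.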


Re: dynamic field issue

2019-02-21 Thread Midas A
Here we are indexing dynamic fields, and we are using one of these fields in
*bf*.
Would merely indexing dynamic fields increase the heap usage and load of our
master-slave Solr servers?


Regards,
Midas

On Thu, Feb 21, 2019 at 10:03 PM Erick Erickson 
wrote:

> 300 is still not excessive. As far as memory goes, sure. If you’re
> faceting, grouping, or sorting docValues would _certainly_ help with memory
> consumption.
>
> > On Feb 21, 2019, at 8:31 AM, Midas A  wrote:
> >
> > Hi,
> > Please help me here. We have crossed 100+ fields per dynamic field, and
> > we have three dynamic fields.
> > Will using docValues in dynamic fields help improve heap usage and query
> > time?
> >
> > Regards,
> > Abhishek Tiwari
> >
> >
> > On Thu, Feb 21, 2019 at 9:38 PM Midas A  wrote:
> >
> >> Yes. We have crossed 100 fields.
> >>
> >> Would docValues help here?
> >>
> >> What kind of information do you want from my side?
> >>
> >> On Thu, 21 Feb 2019, 9:31 pm Erick Erickson, 
> >> wrote:
> >>
> >>> There’s no way to answer this given you’ve provided almost no
> >>> information.
> >>>
> >>> Do note that once you get to more than a few hundred fields,
> >>> Solr still functions, but I’ve seen performance degrade and
> >>> memory increase.
> >>>
> >>> Best,
> >>> Erick
> >>>
> >>>> On Feb 21, 2019, at 7:54 AM, Midas A  wrote:
> >>>>
> >>>> Thanks for the quick reply.
> >>>>
> >>>> We are creating a search *query(keyword)* for dynamic field creation
> >>>> to use click, cart, and order data in search.
> >>>>
> >>>> But we are experiencing higher heap usage and an increase in query time.
> >>>> What could be the problem? Could it be anything related to this?
> >>>>
> >>>>
> >>>> On Thu, Feb 21, 2019 at 8:43 PM Shawn Heisey 
> >>> wrote:
> >>>>
> >>>>> On 2/21/2019 8:01 AM, Midas A wrote:
> >>>>>> How many dynamic field we can create in solr ?. is there any
> >>> limitation ?
> >>>>>> Is indexing dynamic field can increase heap memory on server .
> >>>>>
> >>>>> At the Lucene level, there is absolutely no difference between a
> >>>>> standard field and a dynamic field.  The difference in Solr is how
> the
> >>>>> field is defined, nothing more.
> >>>>>
> >>>>> Lucene has no hard limitations on the number of fields you can
> create,
> >>>>> but the more you have the larger your index will probably be.  Larger
> >>>>> indexes perform slower than smaller ones and require more resources
> >>> like
> >>>>> memory.
> >>>>>
> >>>>> Thanks,
> >>>>> Shawn
> >>>>>
> >>>
> >>>
>
>


Re: dynamic field issue

2019-02-21 Thread Midas A
Hi ,
Please help: we have crossed 100+ fields per dynamic-field pattern, and we
have three dynamic-field patterns.
Will using docValues on dynamic fields help improve heap usage and query
time?

Regards,
Abhishek Tiwari


On Thu, Feb 21, 2019 at 9:38 PM Midas A  wrote:

> Yes . We have crossed  100 fields .
>
> Would docValues help here ?
>
> What kind of information you want from my side ?
>
> On Thu, 21 Feb 2019, 9:31 pm Erick Erickson, 
> wrote:
>
>> There’s no way to answer this given you’ve provided almost no
>> information.
>>
>> Do note that once you get to more than a few hundred fields,
>> Solr still functions, but I’ve seen performance degrade and
>> memory increase.
>>
>> Best,
>> Erick
>>
>> > On Feb 21, 2019, at 7:54 AM, Midas A  wrote:
>> >
>> > Thanks for quick reply .
>> >
>> > we are creating  search *query(keyword)*  for dynamic field creation  to
>> > use click ,cart  and order data  in search.
>> >
>> > But we are experiencing  more heap and increase in query time .
>> > What could be the problem? can be anything related to it ?
>> >
>> >
>> > On Thu, Feb 21, 2019 at 8:43 PM Shawn Heisey 
>> wrote:
>> >
>> >> On 2/21/2019 8:01 AM, Midas A wrote:
>> >>> How many dynamic field we can create in solr ?. is there any
>> limitation ?
>> >>> Is indexing dynamic field can increase heap memory on server .
>> >>
>> >> At the Lucene level, there is absolutely no difference between a
>> >> standard field and a dynamic field.  The difference in Solr is how the
>> >> field is defined, nothing more.
>> >>
>> >> Lucene has no hard limitations on the number of fields you can create,
>> >> but the more you have the larger your index will probably be.  Larger
>> >> indexes perform slower than smaller ones and require more resources
>> like
>> >> memory.
>> >>
>> >> Thanks,
>> >> Shawn
>> >>
>>
>>


Re: dynamic field issue

2019-02-21 Thread Midas A
Yes, we have crossed 100 fields.

Would docValues help here?

What kind of information do you want from my side?

On Thu, 21 Feb 2019, 9:31 pm Erick Erickson, 
wrote:

> There’s no way to answer this given you’ve provided almost no
> information.
>
> Do note that once you get to more than a few hundred fields,
> Solr still functions, but I’ve seen performance degrade and
> memory increase.
>
> Best,
> Erick
>
> > On Feb 21, 2019, at 7:54 AM, Midas A  wrote:
> >
> > Thanks for quick reply .
> >
> > we are creating  search *query(keyword)*  for dynamic field creation  to
> > use click ,cart  and order data  in search.
> >
> > But we are experiencing  more heap and increase in query time .
> > What could be the problem? can be anything related to it ?
> >
> >
> > On Thu, Feb 21, 2019 at 8:43 PM Shawn Heisey 
> wrote:
> >
> >> On 2/21/2019 8:01 AM, Midas A wrote:
> >>> How many dynamic field we can create in solr ?. is there any
> limitation ?
> >>> Is indexing dynamic field can increase heap memory on server .
> >>
> >> At the Lucene level, there is absolutely no difference between a
> >> standard field and a dynamic field.  The difference in Solr is how the
> >> field is defined, nothing more.
> >>
> >> Lucene has no hard limitations on the number of fields you can create,
> >> but the more you have the larger your index will probably be.  Larger
> >> indexes perform slower than smaller ones and require more resources like
> >> memory.
> >>
> >> Thanks,
> >> Shawn
> >>
>
>


Re: dynamic field issue

2019-02-21 Thread Midas A
Thanks for the quick reply.

We are creating dynamic fields from the search *query (keyword)* so that
click, cart, and order data can be used in search.

But we are seeing higher heap usage and an increase in query time.
What could be the problem? Could it be related to this?


On Thu, Feb 21, 2019 at 8:43 PM Shawn Heisey  wrote:

> On 2/21/2019 8:01 AM, Midas A wrote:
> > How many dynamic field we can create in solr ?. is there any limitation ?
> > Is indexing dynamic field can increase heap memory on server .
>
> At the Lucene level, there is absolutely no difference between a
> standard field and a dynamic field.  The difference in Solr is how the
> field is defined, nothing more.
>
> Lucene has no hard limitations on the number of fields you can create,
> but the more you have the larger your index will probably be.  Larger
> indexes perform slower than smaller ones and require more resources like
> memory.
>
> Thanks,
> Shawn
>


dynamic field issue

2019-02-21 Thread Midas A
Hi All,

How many dynamic fields can we create in Solr? Is there any limitation?
Can indexing dynamic fields increase heap memory usage on the server?


Regards,
Midas


Re: boost query

2018-12-06 Thread Midas A
Thanks, Erik.
Please confirm:
if keyword = "nokia",
*bq=_val_:%22payload(vals_dpf,noika)%22=edismax*
*will this query work for me?*





On Fri, Dec 7, 2018 at 12:12 PM Erik Hatcher  wrote:

> This blog I wrote will help.   Let us know how it goes.
>
>  https://lucidworks.com/2017/09/14/solr-payloads/
>
>Erik
>
> > On Dec 7, 2018, at 01:31, Midas A  wrote:
> >
> > I have a field at my schema named  *val_dpf* . I want that *val_dpf*
> should
> > have payloaded values. i.e.
> >
> > noika|0.46  mobile|0.37  samsung|0.19 redmi|0.22
> >
> > When a user searches for a keyword i.e. nokia I want to add 0.46 to usual
> > score. If user searches for samsung, 0.19 should be added .
> >
> > how can i achieve this .
>


boost query

2018-12-06 Thread Midas A
I have a field in my schema named *val_dpf*. I want *val_dpf* to hold
payloaded values, i.e.

noika|0.46  mobile|0.37  samsung|0.19 redmi|0.22

When a user searches for a keyword, e.g. nokia, I want to add 0.46 to the
usual score. If the user searches for samsung, 0.19 should be added.

How can I achieve this?
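A hedged sketch of one way this is commonly wired up (the field type name is made up, and payload() as a value source needs a reasonably recent Solr, roughly 6.6+ — see the Lucidworks payloads post for background):

```xml
<!-- illustrative only: a payloaded field type with "|" as the delimiter -->
<fieldType name="payloads" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.DelimitedPayloadTokenFilterFactory"
            delimiter="|" encoder="float"/>
  </analyzer>
</fieldType>
<field name="val_dpf" type="payloads" indexed="true" stored="true"/>
```

At query time, something along the lines of bq={!func}payload(val_dpf,nokia) should then add the stored payload for the matched term to the score; verify the exact syntax with debugQuery on your version.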


Re: LTR features on solr

2018-10-26 Thread Midas A
*Thanks for the reply. Please find my answers inline below.*


On Fri, Oct 26, 2018 at 2:41 PM Kamuela Lau  wrote:

> Hi,
>
> Just to confirm, are you asking about the following?
>
> For a particular query, you have a list of documents, and for each
> document, you have data
> on the number of times the document was clicked on, added to a cart, and
> ordered, and you
> would like to use this data for features. Is this correct?
> *[ME] :Yes*
> If this is the case, are you indexing that data?
>
   *[ME]*: *Yes, we are planning to index the data, but my question is how we
should store it in Solr.*
* Should I create a dynamic field to store the click, cart, and order
data per query for each document?*
* Please guide me on how we should store it. *

>
> I believe that the features which can be used for the LTR module is
> information that is either indexed,
> or indexed information which has been manipulated through the use of
> function queries.
>
> https://lucene.apache.org/solr/guide/7_5/learning-to-rank.html
>
> It seems to me that you would have to frequently index the click data, if
> you need to refresh the data frequently
>
  *  [ME]: we are planning to refresh this data weekly.*

>
> On Fri, Oct 26, 2018 at 4:24 PM Midas A  wrote:
>
> > Hi  All,
> >
> > I am new in implementing solr LTR .  so facing few challenges
> > Broadly  we have 3 kind of features
> > a) Based on query
> > b) based on document
> > *c) Based on query-document from click ,cart and order  from tracker
> data.*
> >
> > So my question here is how to store c) type of features
> >- Old queries and corresponding clicks ((query-clicks)
> > - Old query -cart addition  and
> >   - Old query -order data
> >  into solr to run LTR model
> > and secoundly how to build features for query-clicks, query-cart and
> > query-orders because we need to refresh  this data frequently.
> >
> > What approch should i follow .
> >
> > Hope i am able to explain my problem.
> >
>


LTR features on solr

2018-10-26 Thread Midas A
Hi  All,

I am new to implementing Solr LTR, so I am facing a few challenges.
Broadly, we have 3 kinds of features:
a) based on the query
b) based on the document
*c) based on the query-document pair, from click, cart, and order tracker data.*

So my question here is how to store type c) features
   - old queries and the corresponding clicks (query-clicks),
 - old query - cart additions, and
   - old query - order data
 into Solr to run an LTR model,
and secondly, how to build features for query-clicks, query-cart, and
query-orders, because we need to refresh this data frequently.

What approach should I follow?

I hope I was able to explain my problem.


Re: LTR feature extraction

2018-10-15 Thread Midas A
Please reply.

On Mon, 15 Oct 2018, 3:21 pm Midas A,  wrote:

> Hi ,
> i am new to LTR solr and i have following queries regarding same .
>
>
> How i can write feaqture for follwing in solr
>
> a)  Covered_query_term_number: i.e. if search query has n terms and
> document cover 2 term then Covered_query_term_number is two.
>
> b) number of carts for this query-document pair in the past
> week/totalcarts
> c) Query_Brand: brand coming in query
>
>
>
> I am first time doer so explain me how can i achieve above
>
> Regards,
> Midas
>


LTR feature extraction

2018-10-15 Thread Midas A
Hi,
I am new to LTR in Solr and have the following queries about it.


How can I write features for the following in Solr?

a) Covered_query_term_number: i.e. if the search query has n terms and the
document covers 2 of them, then Covered_query_term_number is two.

b) number of carts for this query-document pair in the past week / total carts
c) Query_Brand: the brand appearing in the query



I am doing this for the first time, so please explain how I can achieve the
above.

Regards,
Midas
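For what it's worth, a minimal, hedged sketch of a feature-store upload covering features like the ones asked about in these LTR threads; the store/feature names and the efi parameter names are assumptions, and per-query click/cart data is typically computed outside Solr and passed in as external feature information (efi) at request time:

```json
[
  {
    "store": "commerceStore",
    "name": "originalScore",
    "class": "org.apache.solr.ltr.feature.OriginalScoreFeature",
    "params": {}
  },
  {
    "store": "commerceStore",
    "name": "matchesQueryBrand",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": { "fq": ["{!terms f=brand}${queryBrand}"] }
  },
  {
    "store": "commerceStore",
    "name": "weeklyCartRate",
    "class": "org.apache.solr.ltr.feature.ValueFeature",
    "params": { "value": "${weeklyCartRate}", "required": false }
  }
]
```

The upload goes to /solr/<collection>/schema/feature-store; covered-term-style features can likewise be expressed as SolrFeature queries over the user's terms.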


how to get current timestamp

2018-08-16 Thread Midas A
Hi,
In my use case I want to get the current timestamp in the response of a Solr
query.

How can I do it? Is it doable?

Regards,
Midas
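One possible approach, offered tentatively: Solr can return the result of a function query as a pseudo-field in fl, and the ms() function with no arguments evaluates to the current time in milliseconds since the epoch. The alias name server_ts below is made up; verify the behaviour on your Solr version:

```
.../select?q=*:*&fl=*,server_ts:ms()
```

Each returned document then carries a server_ts value computed at query time.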


Re: Solr main replica down, another replica taking over

2018-03-21 Thread Midas A
Thanks Shawn,

We want to send less traffic to the virtual machines and more to the
physical servers. How can we achieve this?

On Wed, Mar 21, 2018 at 11:02 AM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 3/20/2018 11:18 PM, Midas A wrote:
>
>> I have one question here
>> a) solr cloud load balance requests internally (Round robin or anything
>> else ).
>>
>
> Yes, SolrCloud does load balance requests across active replicas in the
> entire cloud.  I do not know what algorithm it uses for load balancing --
> whether that's round-robin or something else.
>
> b) How can i change this behaviour (Note. I have solr cloud with mix of
>> machines physical and virtual  )
>>
>
> There is some effort underway to allow SolrCloud to prefer specific
> replica types.  Recent versions of Solr added TLOG and PULL types, to
> supplement the NRT type that all versions of SolrCloud have.  There is
> strong interest in being able to prefer one of the new types and let the
> NRT replicas handle indexing only when possible.
>
> There is already a "preferLocalShards" parameter ... but enabling this
> parameter can actually make performance *worse*, by concentrating requests
> onto a single machine and leaving the other machines in the cloud idle.
>
> Thanks,
> Shawn
>
>


Re: Solr main replica down, another replica taking over

2018-03-20 Thread Midas A
Hi Shawn,

I have one question here:
a) Does SolrCloud load balance requests internally (round robin or something
else)?
b) How can I change this behaviour? (Note: I have a SolrCloud cluster with a
mix of physical and virtual machines.)

Regards,
Midas

On Wed, Mar 21, 2018 at 6:36 AM, Zheng Lin Edwin Yeo <edwinye...@gmail.com>
wrote:

> Hi Shawn,
>
> Thanks for your reply.
>
> Yes, I'm using SolrCloud and my clients are Java. Will look into
> CloudSolrClient.
>
> Regards,
> Edwin
>
> On 20 March 2018 at 20:36, Shawn Heisey <apa...@elyograg.org> wrote:
>
> > On 3/20/2018 2:22 AM, Zheng Lin Edwin Yeo wrote:
> >
> >> However, for query that are using URL, if the URL is still pointing to
> the
> >> main replica  http://192.168.2.11:8983/solr, it will not go through. We
> >> have to manually change the URL to point it to the other replica
> >> http://192.168.2.12:8984/solr before the query can work.
> >>
> >> Is there anyway that we can make this automatic?
> >>
> >
> > One solution is to write code that can switch the URL when it goes down.
> >
> > An easier solution would be to put a load balancer in front of Solr and
> > point your clients at the load balancer.  I'm using haproxy for this.
> >
> > If your servers are in cloud mode, you have a fault tolerant ZK setup,
> and
> > your clients are Java, then you can use CloudSolrClient, and it will
> > automatically adjust when servers go down, without the need for a load
> > balancer.  I think somebody might have invented a cloud-aware client for
> > Python too, but if they have, it's third-party.
> >
> > Thanks,
> > Shawn
> >
> >
>


solr commit is taking time

2017-08-11 Thread Midas A
Hi all,

Our Solr commit is taking time:

10.20.73.92 - - [11/Aug/2017:15:44:00 +0530] "POST
/solr/##/update?wt=javabin=2 HTTP/1.1" 200 - 12594

What should I check?
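One thing worth comparing against: if the client is committing explicitly, a common baseline is to remove those commits and let autocommit handle durability and visibility. A sketch of the relevant solrconfig.xml fragment (the 15s hard / 3s soft values are assumptions to tune for your latency needs, not recommendations for every index):

```xml
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:3000}</maxTime>
</autoSoftCommit>
```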


Re: master slave replication taking time

2017-06-29 Thread Midas A
Erick,

When we copy the entire index, it takes 8-10 minutes.


On Wed, Jun 28, 2017 at 9:22 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> How long it takes to copy the entire index from one machine to another
> over your network. Solr can't go any faster than your network can
> support. Consider SolrCloud if you need something closer to NRT.
>
> Best,
> Erick
>
> On Tue, Jun 27, 2017 at 11:31 PM, Midas A <test.mi...@gmail.com> wrote:
> > Hi,
> >
> > we have around 2000 documents and our master to slave replication is
> > taking time  upto 20 second.
> >
> > What should i check ?
>


master slave replication taking time

2017-06-28 Thread Midas A
Hi,

We have around 2000 documents, and our master-to-slave replication is
taking up to 20 seconds.

What should I check?


Re: How to get field names of dynamic field

2017-04-17 Thread Midas A
Here, if I say *by_** is the dynamic field pattern, then by_color, by_size,
etc. would be the dynamic fields.

Regards,
Abhishek Tiwari

On Mon, Apr 17, 2017 at 1:47 PM, Midas A <test.mi...@gmail.com> wrote:

> Thanks alex,
>
> I want all dynamic fields related to a query( i.e. category_id : 199
>  where 199 category has around 1 docs) .
>
> How we can get this with help of luke Request handler.
>
> On Mon, Apr 17, 2017 at 1:39 PM, Alexandre Rafalovitch <arafa...@gmail.com
> > wrote:
>
>> You could externalize that by periodically:
>> 1) Running a luke query or Schema API to get the list of fields
>> 2) Running Request Parameter API to update the list of field to return
>> (this does not cause core reload)
>> 3) If you have permanent field list on top of dynamic ones, you could
>> use parameter substitution as well to combine them
>> 4) if you really, really need to be in sync, you could use commit hook
>> to trigger this
>>
>> Regards,
>>Alex.
>> 
>> http://www.solr-start.com/ - Resources for Solr users, new and
>> experienced
>>
>>
>> On 17 April 2017 at 11:00, Midas A <test.mi...@gmail.com> wrote:
>> > Can we do faceting for all fields created by dynamically created fields
>> > rather sending explicitly sending it by query
>> > will facet.fields=*by_* *
>> > work or any other alternate ??
>> >
>> > I was thinking that i will get all dynamic fields with help of Luke
>> handler
>> > . But probably it is not possible through luke for a query
>> >
>> >
>> >
>> > On Fri, Apr 14, 2017 at 5:32 PM, Ahmet Arslan <iori...@yahoo.com.invalid
>> >
>> > wrote:
>> >
>> >> Hi Midas,
>> >>
>> >> LukeRequestHandler shows that information.
>> >>
>> >> Ahmet
>> >>
>> >> On Friday, April 14, 2017, 1:16:09 PM GMT+3, Midas A <
>> test.mi...@gmail.com>
>> >> wrote:
>> >> Actually , i am looking for APi
>> >>
>> >> On Fri, Apr 14, 2017 at 3:36 PM, Andrea Gazzarini <gxs...@gmail.com>
>> >> wrote:
>> >>
>> >> > I can see those names in the "Schema  browser" of the admin UI, so I
>> >> guess
>> >> > using the (lucene?) API it shouldn't be hard to get this info.
>> >> >
>> >> > I don' know if the schema api (or some other service) offer this
>> service
>> >> >
>> >> > Andrea
>> >> >
>> >> > On 14 Apr 2017 10:03, "Midas A" <test.mi...@gmail.com> wrote:
>> >> >
>> >> > > Hi,
>> >> > >
>> >> > >
>> >> > > Can i get all the field created for dynamic field in solr .
>> >> > >
>> >> > > Like
>> >> > > my dynamic field is by_*
>> >> > >
>> >> > > and i have index
>> >> > > by_color
>> >> > > by_size ..
>> >> > > etc
>> >> > >
>> >> > > I want to retrieve all these field name .
>> >> > > Is there any way to do this  based on some query
>> >> > >
>> >> >
>> >>
>>
>
>


Re: How to get field names of dynamic field

2017-04-17 Thread Midas A
Thanks, Alex.

I want all dynamic fields related to a query (i.e. category_id:199, where
category 199 has around 1 docs).

How can we get this with the help of the Luke request handler?

On Mon, Apr 17, 2017 at 1:39 PM, Alexandre Rafalovitch <arafa...@gmail.com>
wrote:

> You could externalize that by periodically:
> 1) Running a luke query or Schema API to get the list of fields
> 2) Running Request Parameter API to update the list of field to return
> (this does not cause core reload)
> 3) If you have permanent field list on top of dynamic ones, you could
> use parameter substitution as well to combine them
> 4) if you really, really need to be in sync, you could use commit hook
> to trigger this
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 17 April 2017 at 11:00, Midas A <test.mi...@gmail.com> wrote:
> > Can we do faceting for all fields created by dynamically created fields
> > rather sending explicitly sending it by query
> > will facet.fields=*by_* *
> > work or any other alternate ??
> >
> > I was thinking that i will get all dynamic fields with help of Luke
> handler
> > . But probably it is not possible through luke for a query
> >
> >
> >
> > On Fri, Apr 14, 2017 at 5:32 PM, Ahmet Arslan <iori...@yahoo.com.invalid
> >
> > wrote:
> >
> >> Hi Midas,
> >>
> >> LukeRequestHandler shows that information.
> >>
> >> Ahmet
> >>
> >> On Friday, April 14, 2017, 1:16:09 PM GMT+3, Midas A <
> test.mi...@gmail.com>
> >> wrote:
> >> Actually , i am looking for APi
> >>
> >> On Fri, Apr 14, 2017 at 3:36 PM, Andrea Gazzarini <gxs...@gmail.com>
> >> wrote:
> >>
> >> > I can see those names in the "Schema  browser" of the admin UI, so I
> >> guess
> >> > using the (lucene?) API it shouldn't be hard to get this info.
> >> >
> >> > I don' know if the schema api (or some other service) offer this
> service
> >> >
> >> > Andrea
> >> >
> >> > On 14 Apr 2017 10:03, "Midas A" <test.mi...@gmail.com> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > >
> >> > > Can i get all the field created for dynamic field in solr .
> >> > >
> >> > > Like
> >> > > my dynamic field is by_*
> >> > >
> >> > > and i have index
> >> > > by_color
> >> > > by_size ..
> >> > > etc
> >> > >
> >> > > I want to retrieve all these field name .
> >> > > Is there any way to do this  based on some query
> >> > >
> >> >
> >>
>


Re: How to get field names of dynamic field

2017-04-17 Thread Midas A
Can we facet on all fields created by a dynamic-field pattern, rather than
explicitly sending each one in the query?
Will facet.field=*by_** work, or is there any other alternative?

I was thinking I would get all the dynamic fields with the help of the Luke
handler, but it is probably not possible through Luke for a query.



On Fri, Apr 14, 2017 at 5:32 PM, Ahmet Arslan <iori...@yahoo.com.invalid>
wrote:

> Hi Midas,
>
> LukeRequestHandler shows that information.
>
> Ahmet
>
> On Friday, April 14, 2017, 1:16:09 PM GMT+3, Midas A <test.mi...@gmail.com>
> wrote:
> Actually , i am looking for APi
>
> On Fri, Apr 14, 2017 at 3:36 PM, Andrea Gazzarini <gxs...@gmail.com>
> wrote:
>
> > I can see those names in the "Schema  browser" of the admin UI, so I
> guess
> > using the (lucene?) API it shouldn't be hard to get this info.
> >
> > I don' know if the schema api (or some other service) offer this service
> >
> > Andrea
> >
> > On 14 Apr 2017 10:03, "Midas A" <test.mi...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > >
> > > Can i get all the field created for dynamic field in solr .
> > >
> > > Like
> > > my dynamic field is by_*
> > >
> > > and i have index
> > > by_color
> > > by_size ..
> > > etc
> > >
> > > I want to retrieve all these field name .
> > > Is there any way to do this  based on some query
> > >
> >
>


Re: How to get field names of dynamic field

2017-04-14 Thread Midas A
Actually, I am looking for an API.

On Fri, Apr 14, 2017 at 3:36 PM, Andrea Gazzarini <gxs...@gmail.com> wrote:

> I can see those names in the "Schema  browser" of the admin UI, so I guess
> using the (lucene?) API it shouldn't be hard to get this info.
>
> I don' know if the schema api (or some other service) offer this service
>
> Andrea
>
> On 14 Apr 2017 10:03, "Midas A" <test.mi...@gmail.com> wrote:
>
> > Hi,
> >
> >
> > Can i get all the field created for dynamic field in solr .
> >
> > Like
> > my dynamic field is by_*
> >
> > and i have index
> > by_color
> > by_size ..
> > etc
> >
> > I want to retrieve all these field name .
> > Is there any way to do this  based on some query
> >
>


How to get field names of dynamic field

2017-04-14 Thread Midas A
Hi,


Can I get all the fields created for a dynamic field in Solr?

For example,
my dynamic field is by_*

and I have indexed
by_color
by_size
etc.

I want to retrieve all these field names.
Is there any way to do this based on some query?
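As Ahmet points out in the replies, the Luke request handler (/admin/luke?numTerms=0) lists every concrete field in the index, so the by_* names can be filtered out client-side. A small sketch in Python — the response shape used here is an assumption modelled on typical Luke JSON output:

```python
def dynamic_field_names(luke_response, prefix="by_"):
    """Pick out the concrete fields created by a dynamic-field pattern
    such as by_* from a parsed /admin/luke?numTerms=0 JSON response."""
    fields = luke_response.get("fields", {})
    return sorted(name for name in fields if name.startswith(prefix))

# toy response shaped like Luke output (structure is an assumption)
sample = {"fields": {"by_color": {}, "by_size": {}, "title": {}}}
print(dynamic_field_names(sample))  # ['by_color', 'by_size']
```

The same filtering works regardless of which HTTP client fetches the Luke response.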


Re: dynamic field sorting

2017-03-21 Thread Midas A
Waiting for a reply. Heap utilization actually increases when we sort on
dynamic fields.

On Tue, Mar 21, 2017 at 10:37 AM, Midas A <test.mi...@gmail.com> wrote:

> Hi ,
>
> How can i improve the performance of dynamic field sorting .
>
> index size is : 20 GB
>
> Regards,
> Midas
>


dynamic field sorting

2017-03-20 Thread Midas A
Hi ,

How can I improve the performance of dynamic-field sorting?

Index size: 20 GB

Regards,
Midas


Solr text tagger

2017-02-17 Thread Midas A
Hi ,

I would like to use the Solr Text Tagger for entity extraction. Please guide
me on how I can use it for an e-commerce web site.


Regards

Midas


Solr partial update

2017-02-09 Thread Midas A
Hi,

I want to partially update a Solr doc if the unique id exists; otherwise we
do not want to do anything.

How can I achieve this?

Regards,
Midas
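Atomic updates plus optimistic concurrency can express exactly this: sending _version_=1 with the update means "the document must already exist", so Solr rejects the update (version conflict) instead of creating a new doc when the id is absent. A hedged Python sketch that only builds the JSON body for /update — sending it is left out:

```python
import json

def set_if_exists(doc_id, field, value):
    """Build an atomic-update body that 'set's one field only when a
    document with this id already exists (_version_=1 means 'must exist')."""
    return json.dumps([{"id": doc_id, field: {"set": value}, "_version_": 1}])

print(set_if_exists("SKU-42", "price", 199))
```

A missing id then yields a version-conflict error from Solr rather than an upsert.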


Re: compilation error

2016-11-17 Thread Midas A
Sorry,

I am using Solr version 5.2.1.

On Thu, Nov 17, 2016 at 2:22 PM, Daniel Collins <danwcoll...@gmail.com>
wrote:

> Also, remember a significant number of the people on this group are in the
> US.  Asking for a rapid response at 1am is a pretty harsh SLA
> expectation...
>
> On 17 November 2016 at 08:51, Daniel Collins <danwcoll...@gmail.com>
> wrote:
>
> > Can you be more specific?  What version are you compiling, what command
> do
> > you use?  That looks to me like maven output, not ant?
> >
> > On 17 November 2016 at 06:30, Midas A <test.mi...@gmail.com> wrote:
> >
> >> Please reply?
> >>
> >> On Thu, Nov 17, 2016 at 11:31 AM, Midas A <test.mi...@gmail.com> wrote:
> >>
> >> > gettting following error while compiling .
> >> >  .
> >> > org.apache.avro#avro;1.7.5: configuration not found in
> >> > org.apache.avro#avro;1.7.5: 'master'. It was required from
> >> > org.apache.solr#morphlines-core;
> >> >
> >> >
> >> > and not able to resolve . please help in resolving .
> >> >
> >>
> >
> >
>


Re: compilation error

2016-11-16 Thread Midas A
Please reply?

On Thu, Nov 17, 2016 at 11:31 AM, Midas A <test.mi...@gmail.com> wrote:

> gettting following error while compiling .
>  .
> org.apache.avro#avro;1.7.5: configuration not found in
> org.apache.avro#avro;1.7.5: 'master'. It was required from
> org.apache.solr#morphlines-core;
>
>
> and not able to resolve . please help in resolving .
>


compilation error

2016-11-16 Thread Midas A
Getting the following error while compiling:

org.apache.avro#avro;1.7.5: configuration not found in
org.apache.avro#avro;1.7.5: 'master'. It was required from
org.apache.solr#morphlines-core;


I am not able to resolve it. Please help in resolving it.


getting following error while building solr with ant

2016-11-15 Thread Midas A
io problem while parsing ivy file:
http://repo1.maven.org/maven2/org/apache/ant/ant/1.8.2/ant-1.8.2.pom:


Re: Multi word synonyms

2016-11-15 Thread Midas A
I am new to Solr. How should I solve this problem?

Can we do something at query time?

On Tue, Nov 15, 2016 at 5:35 PM, Vincenzo D'Amore <v.dam...@gmail.com>
wrote:

> Hi Michael,
>
> an update, reading the article I double checked if at least one of the
> issues were fixed.
> The good news is that https://issues.apache.org/jira/browse/LUCENE-2605
> has
> been closed and is available in 6.2.
>
> On Tue, Nov 15, 2016 at 12:32 PM, Michael Kuhlmann <k...@solr.info> wrote:
>
> > This is a nice reading though, but that solution depends on the
> > precondition that you'll already know your synonyms at index time.
> >
> > While having synonyms in the index is mostly the better solution anyway,
> > it's sometimes not feasible.
> >
> > -Michael
> >
> > Am 15.11.2016 um 12:14 schrieb Vincenzo D'Amore:
> > > Hi Midas,
> > >
> > > I suggest this interesting reading:
> > >
> > > https://lucidworks.com/blog/2014/07/12/solution-for-multi-
> > term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/
> > >
> > >
> > >
> > > On Tue, Nov 15, 2016 at 11:00 AM, Michael Kuhlmann <k...@solr.info>
> > wrote:
> > >
> > >> It's not working out of the box, sorry.
> > >>
> > >> We're using this plugin:
> > >> https://github.com/healthonnet/hon-lucene-synonyms#getting-started
> > >>
> > >> It's working nicely, but can lead to OOME when you add many synonyms
> > >> with multiple terms. And I'm not sure whether it#s still working with
> > >> Solr 6.0.
> > >>
> > >> -Michael
> > >>
> > >> Am 15.11.2016 um 10:29 schrieb Midas A:
> > >>> - i have to  use multi word synonyms at query time .
> > >>>
> > >>> Please suggest how can i do it .
> > >>> and let me know it whether it would be visible in debug query or not
> .
> > >>>
> > >>
> > >
> >
> >
>
>
> --
> Vincenzo D'Amore
> email: v.dam...@gmail.com
> skype: free.dev
> mobile: +39 349 8513251
>


Multi word synonyms

2016-11-15 Thread Midas A
- I have to use multi-word synonyms at query time.

Please suggest how I can do it,
and let me know whether it would be visible in the debug query or not.
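For versions newer than this thread (roughly Solr 6.4/6.6 onward), stock Solr handles multi-word synonyms at query time via SynonymGraphFilterFactory, usually combined with sow=false on edismax. A hedged schema sketch — treat the exact analyzer chain as an illustration, not a drop-in:

```xml
<analyzer type="query">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```

With debug=query enabled, the expanded synonym graph should be visible in the parsedquery section of the response.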


Re: price sort

2016-11-14 Thread Midas A
Thanks for replying.

I want to maintain relevancy along with price sorting.

For example, if I search "nike shoes",
according to relevance, "nike shoes" results come first, then t-shirts
(other products) from Nike.

Now if we sort the results by price, a t-shirt from Nike comes to the top;
that is something that does not match the user's intent.

In this situation we have to adopt a middle-ground approach that does not
change the user's intent.


On Mon, Nov 14, 2016 at 2:38 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:

> Hi Midas,
>
> Sorting by price means that score (~relevancy) is ignored/used as second
> sorting criteria. My assumption is that you have long tail of false
> positives causing sort by price to sort cheap, unrelated items first just
> because they matched by some stop word.
>
> Or I missed your question?
>
> Emir
>
>
>
> On 14.11.2016 06:39, Midas A wrote:
>
>> Hi,
>>
>> we are in e-commerce business  and we have to give price sort
>> functionality
>> .
>> what logic should we use that does not break the relevance .
>> please give the query for the same assuming dummy fields.
>>
>>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>


spell checking on query

2016-11-13 Thread Midas A
How can we do query-time spell checking with the help of Solr?
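Assuming a spellcheck component is configured on the request handler (the stock solrconfig.xml ships one in many versions — treat that as an assumption for your setup), query-time checking is mostly a matter of request parameters, e.g.:

```
.../select?q=iphnoe&spellcheck=true&spellcheck.q=iphnoe&spellcheck.collate=true&spellcheck.count=5
```

The suggestions and collations then appear in the spellcheck section of the response.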


facet query performance

2016-11-13 Thread Midas A
How can we improve facet query performance?


price sort

2016-11-13 Thread Midas A
Hi,

We are in the e-commerce business and have to provide price-sort
functionality.
What logic should we use that does not break relevance?
Please give the query for the same, assuming dummy fields.
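Two patterns that are commonly tried here, sketched with dummy field names (the exact boost values are assumptions to tune, not recommendations):

```
# keep relevance primary; use price only to break ties
.../select?defType=edismax&q=nike shoes&sort=score desc,price asc

# or fold price into the score instead of hard sorting
.../select?defType=edismax&q=nike shoes&boost=recip(price,1,1000,1000)
```

The first preserves the relevance ranking and only reorders equally scored docs; the second trades a hard sort for a soft preference toward cheaper items.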


facet on dynamic field

2016-11-04 Thread Midas A
I want to facet on all dynamic fields (by_*). What should the query be?
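facet.field does not take wildcards, so a common workaround is to fetch the concrete field names once (e.g. via the Luke handler) and expand them into repeated facet.field parameters client-side. A minimal sketch:

```python
def facet_params(field_names, prefix="by_"):
    """Expand a list of concrete field names into repeated facet.field
    parameters for every field matching the dynamic-field prefix."""
    return [("facet.field", f) for f in sorted(field_names) if f.startswith(prefix)]

print(facet_params(["by_color", "by_size", "title"]))
# [('facet.field', 'by_color'), ('facet.field', 'by_size')]
```

The resulting pairs can be appended to the query string alongside facet=true.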


Re: Boost according to values

2016-09-19 Thread Midas A
My use case does not allow me to sort; I have a set of data with the same
relevance.

What should the query be in that case?

On Mon, Sep 19, 2016 at 11:51 AM, Rajendra Gaikwad <rajendra...@gmail.com>
wrote:

> Hi Midas,
>
> Sort search results on popularity field by desc order.
> E.g popularity is field in the index which stores popularity information.
>
> http://localhost:8983/solr/mycollection/select?q=*:*=popularity desc
>
> Thanks,
> Rajendra Gaikwad
> Please execuse typo
>
>
>
> On Mon, Sep 19, 2016, 11:36 AM Midas A <test.mi...@gmail.com> wrote:
>
> > i have n items in my search result  with popularity (1,2,3,4n) . I
> want
> > higher popularity item should come first then next popularity item
> >
> >
> > say for example
> > a) item with popularity n,
> > b) item with popularity n -1,
> > c) item with popularity n -2,
> > d) item with popularity n - 3,
> > e) item with popularity n - 4,
> > f) item with popularity n - 5,
> > 
> > 
> > y) item with popularity 2,
> > z) item with popularity 1,
> >
> >
> > what should be my query  if relevance for items are constant
> >
> --
>
> sent from mobile, execuse typo
>


Boost according to values

2016-09-19 Thread Midas A
I have n items in my search result with popularity (1, 2, 3, 4, ..., n). I
want the item with the highest popularity to come first, then the next, and
so on.


Say, for example:
a) item with popularity n,
b) item with popularity n-1,
c) item with popularity n-2,
d) item with popularity n-3,
e) item with popularity n-4,
f) item with popularity n-5,
...
y) item with popularity 2,
z) item with popularity 1.


What should my query be if relevance is constant across items?
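Given that the relevance score is constant across the result set, a secondary sort on the popularity field should be enough (the field name is taken from the description above):

```
.../select?q=...&sort=score desc,popularity desc
```

Ties on score — here, the whole set — are then ordered by descending popularity.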


Re: commit it taking 1300 ms

2016-08-11 Thread Midas A
Emir,

Other queries:

a) SolrCloud: NO
b) 
c) 
d) 
e) we are using a multi-threaded system.

On Thu, Aug 11, 2016 at 11:48 AM, Midas A <test.mi...@gmail.com> wrote:

> Emir,
>
> we post json documents through the curl it takes the time (same time i
> would like to say that we are not hard committing ). that curl takes time
> i.e. 1.3 sec.
>
> On Wed, Aug 10, 2016 at 2:29 PM, Emir Arnautovic <
> emir.arnauto...@sematext.com> wrote:
>
>> Hi Midas,
>>
>> According to your autocommit configuration and your worry about commit
>> time I assume that you are doing explicit commits from client code and that
>> 1.3s is client observed commit time. If that is the case, than it might be
>> opening searcher that is taking time.
>>
>> How do you index data - single threaded or multithreaded? How frequently
>> do you commit from client? Can you let Solr do soft commits instead of
>> explicitly committing? Do you have warmup queries? Is this SolrCloud? What
>> is number of servers (what spec), shards, docs?
>>
>> In any case monitoring can give you more info about server/Solr behavior
>> and help you diagnose issues more easily/precisely. One such monitoring
>> tool is our SPM <http://sematext.com/spm>.
>>
>> Regards,
>> Emir
>>
>> --
>> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
>> Solr & Elasticsearch Support * http://sematext.com/
>>
>> On 10.08.2016 05:20, Midas A wrote:
>>
>>> Thanks for replying
>>>
>>> index size:9GB
>>> 2000 docs/sec.
>>>
>>> Actually earlier it was taking less but suddenly it has increased .
>>>
>>> Currently we do not have any monitoring  tool.
>>>
>>> On Tue, Aug 9, 2016 at 7:00 PM, Emir Arnautovic <
>>> emir.arnauto...@sematext.com> wrote:
>>>
>>> Hi Midas,
>>>>
>>>> Can you give us more details on your index: size, number of new docs
>>>> between commits. Why do you think 1.3s for a commit is too much, and why do
>>>> you
>>>> need it to take less? Did you do any system/Solr monitoring?
>>>>
>>>> Emir
>>>>
>>>>
>>>> On 09.08.2016 14:10, Midas A wrote:
>>>>
>>>> please reply it is urgent.
>>>>>
>>>>> On Tue, Aug 9, 2016 at 11:17 AM, Midas A <test.mi...@gmail.com> wrote:
>>>>>
>>>>> Hi ,
>>>>>
>>>>>> commit is taking more than 1300 ms . what should i check on server.
>>>>>>
>>>>>> below is my configuration .
>>>>>>
>>>>>>  ${solr.autoCommit.maxTime:15000} <
>>>>>> openSearcher>false  
>>>>>> 
>>>>>> ${solr.autoSoftCommit.maxTime:-1} 
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
>>>> Solr & Elasticsearch Support * http://sematext.com/
>>>>
>>>>
>>>>
>


Re: commit it taking 1300 ms

2016-08-11 Thread Midas A
Emir,

We post JSON documents through curl, and that is what takes the time (at the
same time I should mention that we are not hard committing). That curl request
takes about 1.3 sec.

On Wed, Aug 10, 2016 at 2:29 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:

> Hi Midas,
>
> According to your autocommit configuration and your worry about commit
> time I assume that you are doing explicit commits from client code and that
> 1.3s is the client-observed commit time. If that is the case, then it might be
> opening searcher that is taking time.
>
> How do you index data - single threaded or multithreaded? How frequently
> do you commit from client? Can you let Solr do soft commits instead of
> explicitly committing? Do you have warmup queries? Is this SolrCloud? What
> is number of servers (what spec), shards, docs?
>
> In any case monitoring can give you more info about server/Solr behavior
> and help you diagnose issues more easily/precisely. One such monitoring
> tool is our SPM <http://sematext.com/spm>.
>
> Regards,
> Emir
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
> On 10.08.2016 05:20, Midas A wrote:
>
>> Thanks for replying
>>
>> index size:9GB
>> 2000 docs/sec.
>>
>> Actually earlier it was taking less but suddenly it has increased .
>>
>> Currently we do not have any monitoring  tool.
>>
>> On Tue, Aug 9, 2016 at 7:00 PM, Emir Arnautovic <
>> emir.arnauto...@sematext.com> wrote:
>>
>> Hi Midas,
>>>
>>> Can you give us more details on your index: size, number of new docs
>>> between commits. Why do you think 1.3s for commit is to much and why do
>>> you
>>> need it to take less? Did you do any system/Solr monitoring?
>>>
>>> Emir
>>>
>>>
>>> On 09.08.2016 14:10, Midas A wrote:
>>>
>>> please reply it is urgent.
>>>>
>>>> On Tue, Aug 9, 2016 at 11:17 AM, Midas A <test.mi...@gmail.com> wrote:
>>>>
>>>> Hi ,
>>>>
>>>>> commit is taking more than 1300 ms . what should i check on server.
>>>>>
>>>>> below is my configuration .
>>>>>
>>>>>  ${solr.autoCommit.maxTime:15000} <
>>>>> openSearcher>false  
>>>>> 
>>>>> ${solr.autoSoftCommit.maxTime:-1} 
>>>>>
>>>>>
>>>>>
>>>>> --
>>> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
>>> Solr & Elasticsearch Support * http://sematext.com/
>>>
>>>
>>>


curl post taking time to solr server

2016-08-10 Thread Midas A
Hi ,

We are indexing to two cores, say core1 and core2, with the help of curl POSTs.
When we post, core1 takes much less time than core2,

while the document size is the same on both.

This makes core2 indexing very slow. The only difference is that core2 has a
heavier indexing rate; we index more docs/sec on core2.


What could be the reason, and how can I minimize the time the curl POST takes
on the Solr server?
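One common cause of slow curl indexing is posting one document per request; batching several hundred documents per POST amortizes the HTTP and update overhead. A sketch of building batched JSON update bodies (the document field names below are assumptions):

```python
import json

def batch_payloads(docs, batch_size=500):
    """Render docs as a sequence of Solr JSON update bodies,
    batch_size documents per POST, instead of one doc per request."""
    for i in range(0, len(docs), batch_size):
        yield json.dumps(docs[i:i + batch_size])

# Hypothetical documents; field names are assumptions.
docs = [{"id": str(n), "ttl": "doc %d" % n} for n in range(1200)]
bodies = list(batch_payloads(docs))
print(len(bodies))   # 1200 docs in batches of 500 -> 3 request bodies
```

Each body can then be posted in one request, e.g. `curl 'http://localhost:8983/solr/core2/update' -H 'Content-Type: application/json' --data-binary @batch.json` (host and core name assumed).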


Re: commit it taking 1300 ms

2016-08-09 Thread Midas A
Thanks for replying.

Index size: 9 GB
Indexing rate: 2000 docs/sec.

Actually it was taking less time earlier, but it has suddenly increased.

Currently we do not have any monitoring tool.

On Tue, Aug 9, 2016 at 7:00 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:

> Hi Midas,
>
> Can you give us more details on your index: size, number of new docs
> between commits. Why do you think 1.3s for a commit is too much, and why do you
> need it to take less? Did you do any system/Solr monitoring?
>
> Emir
>
>
> On 09.08.2016 14:10, Midas A wrote:
>
>> please reply it is urgent.
>>
>> On Tue, Aug 9, 2016 at 11:17 AM, Midas A <test.mi...@gmail.com> wrote:
>>
>> Hi ,
>>>
>>> commit is taking more than 1300 ms . what should i check on server.
>>>
>>> below is my configuration .
>>>
> >>> <autoCommit> <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
> >>> <openSearcher>false</openSearcher> </autoCommit>
> >>>
> >>> <autoSoftCommit> <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime> </autoSoftCommit>
>>>
>>>
>>>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>


Re: commit it taking 1300 ms

2016-08-09 Thread Midas A
please reply it is urgent.

On Tue, Aug 9, 2016 at 11:17 AM, Midas A <test.mi...@gmail.com> wrote:

> Hi ,
>
> commit is taking more than 1300 ms . what should i check on server.
>
> below is my configuration .
>
> <autoCommit> <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
> <openSearcher>false</openSearcher> </autoCommit>
> <autoSoftCommit> <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime> </autoSoftCommit>
>
>


commit it taking 1300 ms

2016-08-08 Thread Midas A
Hi ,

Commit is taking more than 1300 ms. What should I check on the server?

Below is my configuration:

<autoCommit> <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher> </autoCommit>
<autoSoftCommit> <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime> </autoSoftCommit>
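If the 1300 ms is client-observed because the client issues (or waits on) an explicit commit, the replies in this thread suggest letting Solr commit instead. One option is the commitWithin parameter on the update request, so the expensive searcher reopen happens asynchronously; a sketch, with an assumed host and core name:

```python
from urllib.parse import urlencode

SOLR_UPDATE = "http://localhost:8983/solr/core1/update"  # assumed host/core

def update_url(commit_within_ms=15000):
    """Update URL that delegates committing to Solr.

    With commitWithin, Solr guarantees a commit within the given
    window, and the searcher reopen no longer happens inside the
    client's POST, so the client stops observing the commit latency.
    """
    return SOLR_UPDATE + "?" + urlencode({"commitWithin": commit_within_ms})

print(update_url())
```

The window should match how fresh search results need to be; here 15000 ms mirrors the autoCommit maxTime above.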


Re: solr error

2016-08-01 Thread Midas A
Jürgen,
We are using the PHP Solr client and getting the above exception. What could be
the reason for this? Please elaborate.

On Tue, Aug 2, 2016 at 11:10 AM, Midas A <test.mi...@gmail.com> wrote:

> curl: (52) Empty reply from server
> what could be the case .and what should i do to minimize.
>
>
>
>
> On Tue, Aug 2, 2016 at 10:38 AM, Walter Underwood <wun...@wunderwood.org>
> wrote:
>
>> I recommend you look at the PHP documentation to find out what “HTTP
>> Error 52” means.
>>
>> You can start by searching the web for this: php http error 52
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>>
>>
>> > On Aug 1, 2016, at 10:04 PM, Midas A <test.mi...@gmail.com> wrote:
>> >
>> > please reply .
>> >
>> > On Tue, Aug 2, 2016 at 10:24 AM, Midas A <test.mi...@gmail.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> i am connecting solr with php and getting *HTTP Error 52, and *HTTP
>> Error
>> >> 20* error *frequently .
>> >> what should i do to minimize these issues .
>> >>
>> >> Regards,
>> >> Abhishek T
>> >>
>> >>
>>
>>
>


Re: solr error

2016-08-01 Thread Midas A
curl: (52) Empty reply from server
What could be the cause, and what should I do to minimize it?




On Tue, Aug 2, 2016 at 10:38 AM, Walter Underwood <wun...@wunderwood.org>
wrote:

> I recommend you look at the PHP documentation to find out what “HTTP Error
> 52” means.
>
> You can start by searching the web for this: php http error 52
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> > On Aug 1, 2016, at 10:04 PM, Midas A <test.mi...@gmail.com> wrote:
> >
> > please reply .
> >
> > On Tue, Aug 2, 2016 at 10:24 AM, Midas A <test.mi...@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> i am connecting solr with php and getting *HTTP Error 52, and *HTTP
> Error
> >> 20* error *frequently .
> >> what should i do to minimize these issues .
> >>
> >> Regards,
> >> Abhishek T
> >>
> >>
>
>


Re: solr error

2016-08-01 Thread Midas A
please reply .

On Tue, Aug 2, 2016 at 10:24 AM, Midas A <test.mi...@gmail.com> wrote:

> Hi,
>
> i am connecting solr with php and getting *HTTP Error 52, and *HTTP Error
> 20* error *frequently .
> what should i do to minimize these issues .
>
> Regards,
> Abhishek T
>
>


solr error

2016-08-01 Thread Midas A
Hi,

I am connecting to Solr from PHP and am frequently getting *HTTP Error 52* and
*HTTP Error 20*.
What should I do to minimize these issues?

Regards,
Abhishek T
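"Empty reply from server" (curl error 52) is usually transient, e.g. the node is busy with a commit or a GC pause, so the client-side mitigation is a timeout plus retry with backoff. A sketch of the retry logic (shown in Python; a PHP client would wrap its request call the same way):

```python
import time

def with_retries(call, attempts=3, backoff_s=0.5):
    """Retry a zero-argument request callable on transient failures.

    ConnectionError stands in for transport-level failures such as
    curl's "(52) Empty reply from server"; a real client would wrap
    its HTTP request in `call`. Exponential backoff avoids hammering
    a node that is mid-commit or in a GC pause.
    """
    last = None
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError as exc:
            last = exc
            time.sleep(backoff_s * (2 ** attempt))
    raise last

# Simulate a server that replies empty twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("Empty reply from server")
    return "ok"

print(with_retries(flaky, backoff_s=0))   # "ok" on the third attempt
```

Retries mask the symptom; if the errors are frequent, the server-side cause (load, GC, commit storms) still needs fixing.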


Re: Query optimization

2016-07-29 Thread Midas A
please reply .

On Fri, Jul 29, 2016 at 10:26 AM, Midas A <test.mi...@gmail.com> wrote:

> a) my index size is 10 gb   for higher start is query response got slow .
> what should i do to optimize this query for higher start value in query
>


Query optimization

2016-07-28 Thread Midas A
a) My index size is 10 GB, and for a higher start value the query response gets
slow. What should I do to optimize queries with a high start value?
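Large start values are slow because Solr must collect and then discard start+rows documents on every request; the usual fix for deep paging is cursorMark. A sketch of the per-page parameters (the uniqueKey field name `id` is an assumption):

```python
def cursor_page_params(cursor_mark="*", rows=40):
    """Parameters for one page of cursorMark deep paging.

    Unlike start=N, which makes Solr collect and discard N+rows
    documents per request, each cursor page costs roughly the same.
    The sort must be deterministic, so it ends with the uniqueKey
    field ("id" here is an assumption) as a tie-breaker.
    """
    return {
        "q": "*:*",
        "rows": rows,
        "sort": "score desc, id asc",
        "cursorMark": cursor_mark,
    }

first = cursor_page_params()             # the first page always uses "*"
# Each response carries a nextCursorMark; feed it into the next request:
second = cursor_page_params("AoEjR0JQ")  # hypothetical mark from a response
print(first["cursorMark"], second["cursorMark"])
```

The trade-off is that cursors only move forward; for "jump to page N" semantics, start/rows is still required.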


Re: Query optimization

2016-07-13 Thread Midas A
Hi ,

One more thing I would like to add here: we build facet queries over dynamic
fields, so my questions are:
a) Is there any harm in using docValues=true with dynamic fields?
b) Are there other suggestions we can implement to optimize this query? My
index size is 8 GB and the query is taking more than 3 seconds.

Regards,
Abhishek Tiwari

On Thu, Jul 14, 2016 at 6:42 AM, Erick Erickson <erickerick...@gmail.com>
wrote:

> DocValues are now the preferred mechanism
> whenever you need to sort, facet or group. It'll
> make your on-disk index bigger, but the on-disk
> structure would have been built in Java's memory
> if you didn't use DocValues whereas if you do
> it's MMap'd.
>
> So overall, use DocValues by preference.
>
> Best,
> Erick
>
> On Wed, Jul 13, 2016 at 5:36 AM, sara hajili <hajili.s...@gmail.com>
> wrote:
> > As I know, when you use docValues=true, Solr stores the doc and the
> > docValues=true field in memory at indexing time, to use for facet
> > queries and sorting query results.
> > So using docValues=true a lot may use a lot of your system's memory,
> > but used in a logical way it can give better query response times.
> >
> > On Wed, Jul 13, 2016 at 5:11 AM, Midas A <test.mi...@gmail.com> wrote:
> >
> >> Is there any draw back of using docValues=true ?
> >>
> >> On Wed, Jul 13, 2016 at 2:28 PM, sara hajili <hajili.s...@gmail.com>
> >> wrote:
> >>
> >> > Hi.
> >> > Facet query take a long time.you vcan use group query.
> >> > Or in fileds in schema that you run facet query on that filed.
> >> > Set doc value=true.
> >> > To get better answer.in quick time.
> >> > On Jul 13, 2016 11:54 AM, "Midas A" <test.mi...@gmail.com> wrote:
> >> >
> >> > > http://
> >> > >
> >> > >
> >> >
> >>
> #:8983/solr/prod/select?q=id_path_ids:166=sort_price:[0%20TO%20*]=status:A=company_status:A=true=1=show_meta_id=show_brand=product_amount_available=by_processor=by_system_memory=by_screen_size=by_operating_system=by_laptop_type=by_processor_brand=by_hard_drive_capacity=by_touchscreen=by_warranty=by_graphic_memory=is_trm=show_merchant=is_cod=show_market={!ex=p_r%20key=product_rating:[4-5]}product_rating:[4%20TO%205]={!ex=p_r%20key=product_rating:[3-5]}product_rating:[3%20TO%205]={!ex=p_r%20key=product_rating:[2-5]}product_rating:[2%20TO%205]={!ex=p_r%20key=product_rating:[1-5]}product_rating:[1%20TO%205]={!ex=m_r%20key=merchant_rating:[4-5]}merchant_rating:[4%20TO%205]={!ex=m_r%20key=merchant_rating:[3-5]}merchant_rating:[3%20TO%205]={!ex=m_r%20key=merchant_rating:[2-5]}merchant_rating:[2%20TO%205]={!ex=m_r%20key=merchant_rating:[1-5]}merchant_rating:[1%20TO%205]=500=true=sort_price=0=10=product_amount_available%20desc,boost_index%20asc,popularity%20desc,is_cod%20desc
> >> > >
> >> > >
> >> > > What kind of optimization we can do in above query . it is taking
> 2400
> >> > ms .
> >> > >
> >> >
> >>
>
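A minimal sketch of the kind of facet request under discussion, built programmatically; note that docValues only helps if each facet field is declared with docValues="true" in the schema and the index is rebuilt afterwards, since docValues are written at index time (the field names below are taken from the example query and are otherwise assumptions):

```python
from urllib.parse import urlencode

def facet_params(base_query, facet_fields):
    """Facet-count-only request over (ideally) docValues-backed fields."""
    params = [
        ("q", base_query),
        ("rows", 0),           # counts only; skip document retrieval
        ("facet", "true"),
        ("facet.mincount", 1),
    ]
    params += [("facet.field", f) for f in facet_fields]
    return params

p = facet_params("id_path_ids:166", ["show_brand", "by_processor"])
print(urlencode(p))
```

Keeping rows=0 for pure facet requests avoids paying document-retrieval cost on top of the facet counting.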


Re: Query optimization

2016-07-13 Thread Midas A
Is there any drawback of using docValues=true?

On Wed, Jul 13, 2016 at 2:28 PM, sara hajili <hajili.s...@gmail.com> wrote:

> Hi.
> Facet queries take a long time; you can use group queries instead.
> Or, on the fields in the schema that you run facet queries on,
> set docValues=true
> to get better answers in quicker time.
> On Jul 13, 2016 11:54 AM, "Midas A" <test.mi...@gmail.com> wrote:
>
> > http://
> >
> >
> #:8983/solr/prod/select?q=id_path_ids:166=sort_price:[0%20TO%20*]=status:A=company_status:A=true=1=show_meta_id=show_brand=product_amount_available=by_processor=by_system_memory=by_screen_size=by_operating_system=by_laptop_type=by_processor_brand=by_hard_drive_capacity=by_touchscreen=by_warranty=by_graphic_memory=is_trm=show_merchant=is_cod=show_market={!ex=p_r%20key=product_rating:[4-5]}product_rating:[4%20TO%205]={!ex=p_r%20key=product_rating:[3-5]}product_rating:[3%20TO%205]={!ex=p_r%20key=product_rating:[2-5]}product_rating:[2%20TO%205]={!ex=p_r%20key=product_rating:[1-5]}product_rating:[1%20TO%205]={!ex=m_r%20key=merchant_rating:[4-5]}merchant_rating:[4%20TO%205]={!ex=m_r%20key=merchant_rating:[3-5]}merchant_rating:[3%20TO%205]={!ex=m_r%20key=merchant_rating:[2-5]}merchant_rating:[2%20TO%205]={!ex=m_r%20key=merchant_rating:[1-5]}merchant_rating:[1%20TO%205]=500=true=sort_price=0=10=product_amount_available%20desc,boost_index%20asc,popularity%20desc,is_cod%20desc
> >
> >
> > What kind of optimization we can do in above query . it is taking 2400
> ms .
> >
>


Query optimization

2016-07-13 Thread Midas A
http://
#:8983/solr/prod/select?q=id_path_ids:166=sort_price:[0%20TO%20*]=status:A=company_status:A=true=1=show_meta_id=show_brand=product_amount_available=by_processor=by_system_memory=by_screen_size=by_operating_system=by_laptop_type=by_processor_brand=by_hard_drive_capacity=by_touchscreen=by_warranty=by_graphic_memory=is_trm=show_merchant=is_cod=show_market={!ex=p_r%20key=product_rating:[4-5]}product_rating:[4%20TO%205]={!ex=p_r%20key=product_rating:[3-5]}product_rating:[3%20TO%205]={!ex=p_r%20key=product_rating:[2-5]}product_rating:[2%20TO%205]={!ex=p_r%20key=product_rating:[1-5]}product_rating:[1%20TO%205]={!ex=m_r%20key=merchant_rating:[4-5]}merchant_rating:[4%20TO%205]={!ex=m_r%20key=merchant_rating:[3-5]}merchant_rating:[3%20TO%205]={!ex=m_r%20key=merchant_rating:[2-5]}merchant_rating:[2%20TO%205]={!ex=m_r%20key=merchant_rating:[1-5]}merchant_rating:[1%20TO%205]=500=true=sort_price=0=10=product_amount_available%20desc,boost_index%20asc,popularity%20desc,is_cod%20desc


What kind of optimization can we do in the above query? It is taking 2400 ms.


solr server heap out

2016-07-12 Thread Midas A
Hi,
I am frequently getting a Solr heap out-of-memory error, once or twice a day.
What could be the possible reasons, and is there any way to log the memory
used by each query in solr.log?

Thanks ,
Abhishek Tiwari


Re: Error

2016-05-11 Thread Midas A
Thanks for replying.

One more warning is also coming; please advise on this as well:
PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On Wed, May 11, 2016 at 7:53 PM, Ahmet Arslan <iori...@yahoo.com.invalid>
wrote:

> Hi Midas,
>
> It looks like you are committing too frequently, cache warming cannot
> catchup.
> Either lower your commit rate, or disable cache auto warm
> (autowarmCount=0).
> You can also remove queries registered at newSearcher event if you have
> defined some.
>
> Ahmet
>
>
>
> On Wednesday, May 11, 2016 2:51 PM, Midas A <test.mi...@gmail.com> wrote:
> Hi i am getting following error
>
> org.apache.solr.common.SolrException: Error opening new searcher.
> exceeded limit of maxWarmingSearchers=2, try again later.
>
>
>
> what should i do to remove it .
>


Error

2016-05-11 Thread Midas A
Hi, I am getting the following error:

org.apache.solr.common.SolrException: Error opening new searcher.
exceeded limit of maxWarmingSearchers=2, try again later.



What should I do to resolve it?


solcloud on production

2016-04-08 Thread Midas A
Hi all,

We are moving from a master/slave architecture to a SolrCloud architecture,
so I would like to know the following:

- What kind of challenges can we face in production?

- Are there any drawbacks of SolrCloud?

- How does SolrCloud distribute requests between nodes, and how will a node
behave under heavy traffic?

- Is there any way to shard with custom logic?




Regards,
MA


Re: search design question

2016-04-05 Thread Midas A
Thanks, Binoy, for replying.

I will give you a few use cases:

a) "shoes in nike" or "nike shoes"

Here "nike" is a brand; in this case my query entity is "shoe" and the entity
type is "brand",

and my results should contain only pink Nike shoes.


b) "32 inch LCD TV sony"

"32 inch" is the size, "LCD" is the entity type, and "sony" is the brand.


In this case my Solr query should be built in a different manner to get
accurate results.



Hopefully now you can understand my problem.


On Wed, Apr 6, 2016 at 11:12 AM, Binoy Dalal <binoydala...@gmail.com> wrote:

> Could you describe your problem in more detail with examples of your use
> cases.
>
> On Wed, 6 Apr 2016, 11:03 Midas A, <test.mi...@gmail.com> wrote:
>
> >  i have to do entity and entity type mapping with help of search query
> > while building solr query.
> >
> > how i should i design with the solr  for search.
> >
> > Please guide me .
> >
> --
> Regards,
> Binoy Dalal
>


search design question

2016-04-05 Thread Midas A
I have to do entity and entity-type mapping from the search query
while building the Solr query.

How should I design this with Solr for search?

Please guide me.
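A common starting point for this kind of query understanding is a dictionary lookup: recognized entity tokens (brands, sizes) are pulled out of the query and turned into filter queries, while the rest stays as the free-text q. A sketch, where the brand list and the field name "brand" are assumptions:

```python
BRANDS = {"nike", "sony"}   # assumed dictionary of known brands

def build_entity_query(user_query):
    """Map query tokens to entities and build Solr q/fq parameters.

    Recognized brand tokens become filter queries so results contain
    only that brand; the remaining tokens stay as the text query.
    """
    tokens = user_query.lower().replace(" in ", " ").split()
    fq, remaining = [], []
    for tok in tokens:
        if tok in BRANDS:
            fq.append("brand:" + tok)
        else:
            remaining.append(tok)
    return {"q": " ".join(remaining) or "*:*", "fq": fq}

print(build_entity_query("shoes in nike"))
```

In practice the dictionaries would come from the catalog (brand, size, category tables), and ambiguous tokens would need a tie-break rule, but the q-plus-fq shape of the output is the key design point.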


Re: Need to move on SOlr cloud (help required)

2016-02-15 Thread Midas A
Susheel,

Is there any client available in PHP for SolrCloud which maintains the cluster
state in the same way?


On Tue, Feb 16, 2016 at 7:31 AM, Susheel Kumar <susheel2...@gmail.com>
wrote:

> In SolrJ, you would use CloudSolrClient which interacts with Zookeeper
> (which maintains Cluster State). See CloudSolrClient API. So that's how
> SolrJ would know which node is down or not.
>
>
> Thanks,
> Susheel
>
> On Mon, Feb 15, 2016 at 12:07 AM, Midas A <test.mi...@gmail.com> wrote:
>
> > Erick,
> >
> > We are using  php for our application so client would you suggest .
> > currently we are using pecl solr client .
> >
> >
> > but i want to understand that  suppose we sent a request to a node and
> that
> > node is down that time how solrj  figure out where request should go.
> >
> > On Fri, Feb 12, 2016 at 9:44 PM, Erick Erickson <erickerick...@gmail.com
> >
> > wrote:
> >
> > > bq: in case of solrcloud architecture we need not to have load balancer
> > >
> > > First, my comment about a load balancer was for the master/slave
> > > architecture where the load balancer points to the slaves.
> > >
> > > Second, for SolrCloud you don't necessarily need a load balancer as
> > > if you're using a SolrJ client requests are distributed across the
> > replicas
> > > via an internal load balancer.
> > >
> > > Best,
> > > Erick
> > >
> > > On Thu, Feb 11, 2016 at 9:19 PM, Midas A <test.mi...@gmail.com> wrote:
> > > > Erick ,
> > > >
> > > > bq: We want the hits on solr servers to be distributed
> > > >
> > > > True, this happens automatically in SolrCloud, but a simple load
> > > > balancer in front of master/slave does the same thing.
> > > >
> > > > Midas : in case of solrcloud architecture we need not to have load
> > > balancer
> > > > ? .
> > > >
> > > > On Thu, Feb 11, 2016 at 11:42 PM, Erick Erickson <
> > > erickerick...@gmail.com>
> > > > wrote:
> > > >
> > > >> bq: We want the hits on solr servers to be distributed
> > > >>
> > > >> True, this happens automatically in SolrCloud, but a simple load
> > > >> balancer in front of master/slave does the same thing.
> > > >>
> > > >> bq: what if master node fail what should be our fail over strategy
> ?
> > > >>
> > > >> This is, indeed one of the advantages for SolrCloud, you don't have
> > > >> to worry about this any more.
> > > >>
> > > >> Another benefit (and you haven't touched on whether this matters)
> > > >> is that in SolrCloud you do not have the latency of polling and
> > > >> replicating from master to slave, in other words it supports Near
> Real
> > > >> Time.
> > > >>
> > > >> This comes at some additional complexity however. If you have
> > > >> your master node failing often enough to be a problem, you have
> > > >> other issues ;)...
> > > >>
> > > >> And the recovery strategy if the master fails is straightforward:
> > > >> 1> pick one of the slaves to be the master.
> > > >> 2> update the other nodes to point to the new master
> > > >> 3> re-index the docs from before the old master failed to the new
> > > master.
> > > >>
> > > >> You can use system variables to not even have to manually edit all
> of
> > > the
> > > >> solrconfig files, just supply different -D parameters on startup.
> > > >>
> > > >> Best,
> > > >> Erick
> > > >>
> > > >> On Wed, Feb 10, 2016 at 10:39 PM, kshitij tyagi
> > > >> <kshitij.shopcl...@gmail.com> wrote:
> > > >> > @Jack
> > > >> >
> > > >> > Currently we have around 55,00,000 docs
> > > >> >
> > > >> > Its not about load on one node we have load on different nodes at
> > > >> different
> > > >> > times as our traffic is huge around 60k users at a given point of
> > time
> > > >> >
> > > >> > We want the hits on solr servers to be distributed so we are
> > planning
> > > to
> > > >> > move on solr cloud as it would be fault tolerant.
> > > >> >
> > > >> >
> > >

Re: Need to move on SOlr cloud (help required)

2016-02-14 Thread Midas A
Erick,

We are using PHP for our application, so which client would you suggest?
Currently we are using the PECL Solr client.


But I want to understand: suppose we send a request to a node and that
node is down; how does SolrJ figure out where the request should go?

On Fri, Feb 12, 2016 at 9:44 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: in case of solrcloud architecture we need not to have load balancer
>
> First, my comment about a load balancer was for the master/slave
> architecture where the load balancer points to the slaves.
>
> Second, for SolrCloud you don't necessarily need a load balancer as
> if you're using a SolrJ client requests are distributed across the replicas
> via an internal load balancer.
>
> Best,
> Erick
>
> On Thu, Feb 11, 2016 at 9:19 PM, Midas A <test.mi...@gmail.com> wrote:
> > Erick ,
> >
> > bq: We want the hits on solr servers to be distributed
> >
> > True, this happens automatically in SolrCloud, but a simple load
> > balancer in front of master/slave does the same thing.
> >
> > Midas : in case of solrcloud architecture we need not to have load
> balancer
> > ? .
> >
> > On Thu, Feb 11, 2016 at 11:42 PM, Erick Erickson <
> erickerick...@gmail.com>
> > wrote:
> >
> >> bq: We want the hits on solr servers to be distributed
> >>
> >> True, this happens automatically in SolrCloud, but a simple load
> >> balancer in front of master/slave does the same thing.
> >>
> >> bq: what if master node fail what should be our fail over strategy  ?
> >>
> >> This is, indeed one of the advantages for SolrCloud, you don't have
> >> to worry about this any more.
> >>
> >> Another benefit (and you haven't touched on whether this matters)
> >> is that in SolrCloud you do not have the latency of polling and
> >> replicating from master to slave, in other words it supports Near Real
> >> Time.
> >>
> >> This comes at some additional complexity however. If you have
> >> your master node failing often enough to be a problem, you have
> >> other issues ;)...
> >>
> >> And the recovery strategy if the master fails is straightforward:
> >> 1> pick one of the slaves to be the master.
> >> 2> update the other nodes to point to the new master
> >> 3> re-index the docs from before the old master failed to the new
> master.
> >>
> >> You can use system variables to not even have to manually edit all of
> the
> >> solrconfig files, just supply different -D parameters on startup.
> >>
> >> Best,
> >> Erick
> >>
> >> On Wed, Feb 10, 2016 at 10:39 PM, kshitij tyagi
> >> <kshitij.shopcl...@gmail.com> wrote:
> >> > @Jack
> >> >
> >> > Currently we have around 55,00,000 docs
> >> >
> >> > Its not about load on one node we have load on different nodes at
> >> different
> >> > times as our traffic is huge around 60k users at a given point of time
> >> >
> >> > We want the hits on solr servers to be distributed so we are planning
> to
> >> > move on solr cloud as it would be fault tolerant.
> >> >
> >> >
> >> >
> >> > On Thu, Feb 11, 2016 at 11:10 AM, Midas A <test.mi...@gmail.com>
> wrote:
> >> >
> >> >> hi,
> >> >> what if master node fail what should be our fail over strategy  ?
> >> >>
> >> >> On Wed, Feb 10, 2016 at 9:12 PM, Jack Krupansky <
> >> jack.krupan...@gmail.com>
> >> >> wrote:
> >> >>
> >> >> > What exactly is your motivation? I mean, the primary benefit of
> >> SolrCloud
> >> >> > is better support for sharding, and you have only a single shard.
> If
> >> you
> >> >> > have no need for sharding and your master-slave replicated Solr has
> >> been
> >> >> > working fine, then stick with it. If only one machine is having a
> load
> >> >> > problem, then that one node should be replaced. There are indeed
> >> plenty
> >> >> of
> >> >> > good reasons to prefer SolrCloud over traditional master-slave
> >> >> replication,
> >> >> > but so far you haven't touched on any of them.
> >> >> >
> >> >> > How much data (number of documents) do you have?
> >> >> >
> >> >> > What is your typical query l

query knowledge graph

2016-02-12 Thread Midas A
Please suggest how to create a query knowledge graph for an e-commerce
application.


Please describe in detail. Our motive is to improve relevancy; we come from
a LAMP background.


error

2016-02-11 Thread Midas A
We upgraded the Solr version last night and are getting the following error:

org.apache.solr.common.SolrException: Bad content Type for search handler
:application/octet-stream

What should I do to remove this?


Re: error

2016-02-11 Thread Midas A
My log is growing; it is urgent.

On Fri, Feb 12, 2016 at 10:43 AM, Midas A <test.mi...@gmail.com> wrote:

> we have upgraded solr version last night getting following error
>
> org.apache.solr.common.SolrException: Bad content Type for search handler
> :application/octet-stream
>
> what i should do ? to remove this .
>


Re: error

2016-02-11 Thread Midas A
solr 5.2.1

On Fri, Feb 12, 2016 at 12:59 PM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 2/11/2016 10:13 PM, Midas A wrote:
> > we have upgraded solr version last night getting following error
> >
> > org.apache.solr.common.SolrException: Bad content Type for search handler
> > :application/octet-stream
> >
> > what i should do ? to remove this .
>
> What version did you upgrade from and what version did you upgrade to?
> How was the new version installed, and how are you starting it?  What
> kind of software are you using for your clients?
>
> We also need to see all error messages in the solr logfile, including
> stacktraces.  Having access to the entire logfile would be very helpful,
> but before sharing that, you might want to check it for sensitive
> information and redact it.
>
> Thanks,
> Shawn
>
>


Re: Need to move on SOlr cloud (help required)

2016-02-11 Thread Midas A
Erick ,

bq: We want the hits on solr servers to be distributed

True, this happens automatically in SolrCloud, but a simple load
balancer in front of master/slave does the same thing.

Midas: In the case of a SolrCloud architecture, do we not need a load
balancer?

On Thu, Feb 11, 2016 at 11:42 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: We want the hits on solr servers to be distributed
>
> True, this happens automatically in SolrCloud, but a simple load
> balancer in front of master/slave does the same thing.
>
> bq: what if master node fail what should be our fail over strategy  ?
>
> This is, indeed one of the advantages for SolrCloud, you don't have
> to worry about this any more.
>
> Another benefit (and you haven't touched on whether this matters)
> is that in SolrCloud you do not have the latency of polling and
> replicating from master to slave, in other words it supports Near Real
> Time.
>
> This comes at some additional complexity however. If you have
> your master node failing often enough to be a problem, you have
> other issues ;)...
>
> And the recovery strategy if the master fails is straightforward:
> 1> pick one of the slaves to be the master.
> 2> update the other nodes to point to the new master
> 3> re-index the docs from before the old master failed to the new master.
>
> You can use system variables to not even have to manually edit all of the
> solrconfig files, just supply different -D parameters on startup.
>
> Best,
> Erick
>
> On Wed, Feb 10, 2016 at 10:39 PM, kshitij tyagi
> <kshitij.shopcl...@gmail.com> wrote:
> > @Jack
> >
> > Currently we have around 55,00,000 docs
> >
> > Its not about load on one node we have load on different nodes at
> different
> > times as our traffic is huge around 60k users at a given point of time
> >
> > We want the hits on solr servers to be distributed so we are planning to
> > move on solr cloud as it would be fault tolerant.
> >
> >
> >
> > On Thu, Feb 11, 2016 at 11:10 AM, Midas A <test.mi...@gmail.com> wrote:
> >
> >> hi,
> >> what if master node fail what should be our fail over strategy  ?
> >>
> >> On Wed, Feb 10, 2016 at 9:12 PM, Jack Krupansky <
> jack.krupan...@gmail.com>
> >> wrote:
> >>
> >> > What exactly is your motivation? I mean, the primary benefit of
> SolrCloud
> >> > is better support for sharding, and you have only a single shard. If
> you
> >> > have no need for sharding and your master-slave replicated Solr has
> been
> >> > working fine, then stick with it. If only one machine is having a load
> >> > problem, then that one node should be replaced. There are indeed
> plenty
> >> of
> >> > good reasons to prefer SolrCloud over traditional master-slave
> >> replication,
> >> > but so far you haven't touched on any of them.
> >> >
> >> > How much data (number of documents) do you have?
> >> >
> >> > What is your typical query latency?
> >> >
> >> >
> >> > -- Jack Krupansky
> >> >
> >> > On Wed, Feb 10, 2016 at 2:15 AM, kshitij tyagi <
> >> > kshitij.shopcl...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > We are currently using solr 5.2 and I need to move on solr cloud
> >> > > architecture.
> >> > >
> >> > > As of now we are using 5 machines :
> >> > >
> >> > > 1. I am using 1 master where we are indexing ourdata.
> >> > > 2. I replicate my data on other machines
> >> > >
> >> > > One or the other machine keeps on showing high load so I am
> planning to
> >> > > move on solr cloud.
> >> > >
> >> > > Need help on following :
> >> > >
> >> > > 1. What should be my architecture in case of 5 machines to keep
> >> > (zookeeper,
> >> > > shards, core).
> >> > >
> >> > > 2. How to add a node.
> >> > >
> >> > > 3. what are the exact steps/process I need to follow in order to
> change
> >> > to
> >> > > solr cloud.
> >> > >
> >> > > 4. How indexing will work in solr cloud as of now I am using mysql
> >> query
> >> > to
> >> > > get the data on master and then index the same (how I need to change
> >> this
> >> > > in case of solr cloud).
> >> > >
> >> > > Regards,
> >> > > Kshitij
> >> > >
> >> >
> >>
>


Re: Need to move on SOlr cloud (help required)

2016-02-10 Thread Midas A
Hi,
What if the master node fails? What should our failover strategy be?

On Wed, Feb 10, 2016 at 9:12 PM, Jack Krupansky 
wrote:

> What exactly is your motivation? I mean, the primary benefit of SolrCloud
> is better support for sharding, and you have only a single shard. If you
> have no need for sharding and your master-slave replicated Solr has been
> working fine, then stick with it. If only one machine is having a load
> problem, then that one node should be replaced. There are indeed plenty of
> good reasons to prefer SolrCloud over traditional master-slave replication,
> but so far you haven't touched on any of them.
>
> How much data (number of documents) do you have?
>
> What is your typical query latency?
>
>
> -- Jack Krupansky
>
> On Wed, Feb 10, 2016 at 2:15 AM, kshitij tyagi <
> kshitij.shopcl...@gmail.com>
> wrote:
>
> > Hi,
> >
> > We are currently using solr 5.2 and I need to move on solr cloud
> > architecture.
> >
> > As of now we are using 5 machines :
> >
> > 1. I am using 1 master where we are indexing ourdata.
> > 2. I replicate my data on other machines
> >
> > One or the other machine keeps on showing high load so I am planning to
> > move on solr cloud.
> >
> > Need help on following :
> >
> > 1. What should be my architecture in case of 5 machines to keep
> (zookeeper,
> > shards, core).
> >
> > 2. How to add a node.
> >
> > 3. what are the exact steps/process I need to follow in order to change
> to
> > solr cloud.
> >
> > 4. How indexing will work in solr cloud as of now I am using mysql query
> to
> > get the data on master and then index the same (how I need to change this
> > in case of solr cloud).
> >
> > Regards,
> > Kshitij
> >
>


Re: URI is too long

2016-02-01 Thread Midas A
Are there any drawbacks to using POST requests, and why do we prefer GET?

On Mon, Feb 1, 2016 at 1:08 PM, Salman Ansari 
wrote:

> Cool. I would give POST a try. Any samples of using Post while passing the
> query string values (such as ORing between Solr field values) using
> Solr.NET?
>
> Regards,
> Salman
>
> On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> wrote:
>
> > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > I am building a long query containing multiple ORs between query
> terms. I
> > > started to receive the following exception:
> > >
> > > The remote server returned an error: (414) Request-URI Too Long. Any
> idea
> > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > thinking of chunking the query into multiple requests but I was
> wondering
> > > if anyone has a better approach?
> >
> > The default HTTP header size limit on most webservers and containers
> > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > request like this will start with "GET " and end with " HTTP/1.1", which
> > count against that 8192 bytes.  The max header size can be increased.
> >
> > If you place the parameters into a POST request instead of on the URL,
> > then the default size limit of that POST request in Solr is 2MB.  This
> > can also be increased.
> >
> > Thanks,
> > Shawn
> >
> >
>
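Shawn's numbers can be checked, and the switch to POST sketched, with nothing but the Python standard library (the host and core name are placeholders, and no request is actually sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

# A long OR query that would overflow Jetty's default 8192-byte header limit
# if sent as a GET URL.
terms = ["id:%d" % i for i in range(2000)]
params = {"q": " OR ".join(terms), "wt": "json", "rows": "10"}
body = urlencode(params).encode("utf-8")

# As a GET, the URL alone blows the header budget:
get_url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(len(get_url))  # well past 8192

# As a POST, the same parameters ride in the request body, where Solr's
# default limit is 2 MB instead of 8 KB.
req = Request(
    "http://localhost:8983/solr/mycore/select",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # urllib switches to POST once data= is given
```

Any HTTP client (including Solr.NET) applies the same idea: keep the URL fixed and move the parameter string into a form-encoded body.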


facet on min of multi valued field

2016-02-01 Thread Midas A
Hi,
We want to run a facet query on the minimum value of a multivalued field.


Regards,
Abhishek Tiwari


Re: facet on min of multi valued field

2016-02-01 Thread Midas A
Erick,
We are an e-commerce site with a master-child relationship in our catalog.
We show only master products on the website. For example, "iPhone" is a master product, and the different sellers offering the iPhone through our site are its child products. The price shown on the website for the master is decided by a ranking algorithm (RA) applied over these child products.
Our current RA is (min price + quantity of the product), so the master product's price changes dynamically in our system.

Under this scenario, we want a price facet on the website.

Please give some insight into how to solve our problem.

~MA

On Tue, Feb 2, 2016 at 2:08 AM, Erick Erickson <erickerick...@gmail.com>
wrote:

> Frankly, I have no idea what this means. Only count a facet
> for a particular document for the minimum for a MV field? I.e.
> if the doc has values 1, 2, 3, 4 in a MV field, it should only be
> counted in the "1" bucket?
>
> The easiest would be to have a second field that contained the
> min value and facet on _that_. If you're using min as an exemplar
> of an arbitrary math function it's harder.
>
> See also: http://yonik.com/solr-facet-functions/
>
> Best,
> Erick
>
> On Mon, Feb 1, 2016 at 3:50 AM, Midas A <test.mi...@gmail.com> wrote:
> > Hi ,
> > we want facet query on min of multi valued field .
> >
> >
> > Regards,
> > Abhishek Tiwari
>
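Erick's suggestion of a precomputed second field can be sketched client-side: before indexing, derive a single-valued field from the multivalued one and facet on that. The field names `child_prices` and `min_price` are hypothetical, and the real RA would also fold in the quantity term:

```python
def add_min_price(doc):
    """Given a master doc with a multivalued 'child_prices' list, add a
    single-valued 'min_price' field that the price facet can run on."""
    doc["min_price"] = min(doc["child_prices"])
    return doc

masters = [
    {"id": "iphone", "child_prices": [499.0, 525.0, 510.0]},
    {"id": "case",   "child_prices": [9.99, 7.49]},
]
docs = [add_min_price(d) for d in masters]
print(docs[0]["min_price"])  # 499.0
```

Because `min_price` is recomputed whenever a child offer changes, the master document must be reindexed on child updates; that is the price paid for making the facet a plain single-valued range facet.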


How much JVM should we allocate

2016-01-28 Thread Midas A
Hi,

CPU: 4 cores
Physical memory: 48 GB

We run only Solr on this server. How much JVM heap can we allocate so that the server runs smoothly?

Regards,
Abhishek Tiwari


migrating solr 4.2.1 to 5.X

2016-01-26 Thread Midas A
I want to migrate from Solr 4.2.1 to a 5.x version, and my questions are:

- Can I reuse the 4.2.1 index snapshot in 5.x.x?

Full reindexing would take a long time in my case, so is reusing the snapshot possible, or should we avoid it?

My next, similar question:

- Can we replicate from a 4.2.1 master to a 5.x.x slave?


Re: POST request on slave server & error (Urgent )

2016-01-25 Thread Midas A
My Solr version: 4.2.1


On Sun, Jan 24, 2016 at 8:32 PM, Binoy Dalal <binoydala...@gmail.com> wrote:

> {Solr_dist}/server/logs/solr.log
>
> On Sun, 24 Jan 2016, 20:12 Midas A <test.mi...@gmail.com> wrote:
>
> > Shawn,
> > where can i see solr these solr log.
> >
> > On Fri, Jan 22, 2016 at 8:54 PM, Shawn Heisey <apa...@elyograg.org>
> wrote:
> >
> > > On 1/22/2016 1:14 AM, Midas A wrote:
> > > > Please anybody tell me what these request are doing . Is it
> application
> > > > generated error or part of  solr master -slave?
> > > >
> > > >
> > > >
> > > > b)
> > > > 10.20.73.169 -  -  [22/Jan/2016:08:07:38 +] "POST
> > > > /solr/shopclue_prod/select HTTP/1.1" 200 7002
> > >
> > > This appears to be the servlet container request log.  All of the
> > > requests were made to the /select handler, so chances are that they are
> > > queries.  All of the requests returned a 200 response, so they all
> > > succeeded.
> > >
> > > Because it's a POST request and the important parameters were not in
> the
> > > URL, those parameters are not visible in the request log.  The Solr log
> > > (which is a different file) will have full details about every request.
> > >
> > > Thanks,
> > > Shawn
> > >
> > >
> >
> --
> Regards,
> Binoy Dalal
>


Re: POST request on slave server & error (Urgent )

2016-01-24 Thread Midas A
Shawn,
Where can I see these Solr logs?

On Fri, Jan 22, 2016 at 8:54 PM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 1/22/2016 1:14 AM, Midas A wrote:
> > Please anybody tell me what these request are doing . Is it application
> > generated error or part of  solr master -slave?
> >
> >
> >
> > b)
> > 10.20.73.169 -  -  [22/Jan/2016:08:07:38 +] "POST
> > /solr/shopclue_prod/select HTTP/1.1" 200 7002
>
> This appears to be the servlet container request log.  All of the
> requests were made to the /select handler, so chances are that they are
> queries.  All of the requests returned a 200 response, so they all
> succeeded.
>
> Because it's a POST request and the important parameters were not in the
> URL, those parameters are not visible in the request log.  The Solr log
> (which is a different file) will have full details about every request.
>
> Thanks,
> Shawn
>
>


jetty error

2016-01-22 Thread Midas A
We are continuously getting the following error on one of my Solr slaves:

 a) null:org.eclipse.jetty.io.EofException


POST request on slave server & error (Urgent )

2016-01-22 Thread Midas A
Can anybody please tell me what these requests are doing? Are they an application-generated error, or part of Solr master-slave replication?



b)
10.20.73.169 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select HTTP/1.1" 200 7002
10.20.73.164 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
154986
10.20.73.167 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
106282
10.20.73.167 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
106282
10.20.73.204 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200 1833
10.20.73.164 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
132117
10.20.73.164 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
156184
10.20.73.170 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
78677
10.20.73.164 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200
132116
10.20.73.204 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200 2106
10.20.73.204 -  -  [22/Jan/2016:08:07:38 +] "POST
/solr/shopclue_prod/select/?version=2.2=on=xml HTTP/1.1" 200 1975


Re: solr error

2016-01-21 Thread Midas A
Hi,
 Please find the detailed logs attached. Please help me figure it out.


On Fri, Jan 15, 2016 at 2:50 AM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 1/14/2016 12:08 AM, Midas A wrote:
> > we are continuously getting the error
> > "null:org.eclipse.jetty.io.EofException"
> > on slave .
> >
> > what could be the reason ?
>
> This error is caused by clients that disconnect the HTTP/TCP connection
> before Solr has responded to a request.  Jetty logs this error (rather
> than Solr) because it happens in the networking layer.
>
> There are two typical reasons for clients that disconnect early -- one
> is very slow queries, the other is aggressive TCP socket timeouts on
> clients.  It is sometimes a combination of both.
>
> Thanks,
> Shawn
>
>
r.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:365)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
    at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:937)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:998)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:856)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
    at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.eclipse.jetty.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:359)
    at org.eclipse.jetty.io.bio.StreamEndPoint.flush(StreamEndPoint.java:164)
    at org.eclipse.jetty.io.bio.StreamEndPoint.flush(StreamEndPoint.java:194)
    at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:838)
    ... 49 more
,code=500}
2016-01-22 11:33:37.117:WARN:oejs.Response:Committed before 500 {msg=Broken pipe,trace=org.eclipse.jetty.io.EofException
    at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:914)
    at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(AbstractGenerator.java:507)
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:147)
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:107)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
    at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
    at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
    at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
    at org.apache.solr.util.FastWriter.flush(FastWriter.java:141)
    at org.apache.solr.util.FastWriter.write(FastWriter.java:126)
    at java.io.Writer.write(Writer.java:157)
    at org.apache.solr.response.XMLWriter.writeArray(XMLWriter.java:277)
    at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:190)
    at org.apache.solr.response.XMLWriter.writeSolrDocument(XMLWriter.java:199)
    at org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
    at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:172)
    at org.apache.solr.response.XMLWriter.writeResponse(XMLWriter.java:111)
    at org.apache.solr.response.XMLResponseWriter.write(XMLResponseWriter.java:39)
    at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:627)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:358)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclips
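Shawn's two causes interact: a client read timeout shorter than the slowest query produces exactly this early hangup, after which Jetty's later write fails. A minimal, self-contained sketch of the failure mode, with plain sockets standing in for the HTTP client and Solr (the 0.5 s / 0.1 s values are illustrative):

```python
import socket
import threading
import time

# Server side: accept a connection, then "think" for 0.5 s (the slow query).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def slow_reply():
    conn, _ = srv.accept()
    time.sleep(0.5)  # the slow query
    conn.close()

threading.Thread(target=slow_reply, daemon=True).start()

# Client side: a read timeout of only 0.1 s gives up long before the reply.
cli = socket.create_connection(("127.0.0.1", port), timeout=0.1)
timed_out = False
try:
    cli.recv(64)  # blocks at most 0.1 s, then raises socket.timeout
except socket.timeout:
    timed_out = True
cli.close()  # the early disconnect Jetty later sees as an EofException
print(timed_out)
```

The fix is the mirror image: raise the client's read timeout above the worst-case query latency, and work on making the slow queries faster.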

Re: solr error

2016-01-13 Thread Midas A
We get it while we are using Solr, not at startup.

On Thu, Jan 14, 2016 at 12:41 PM, Binoy Dalal <binoydala...@gmail.com>
wrote:

> Can you post the entire stack trace?
>
> Do you get this error at startup or while you're using solr?
>
> On Thu, 14 Jan 2016, 12:38 Midas A <test.mi...@gmail.com> wrote:
>
> > we are continuously getting the error
> > "null:org.eclipse.jetty.io.EofException"
> > on slave .
> >
> > what could be the reason ?
> >
> --
> Regards,
> Binoy Dalal
>


solr error

2016-01-13 Thread Midas A
We are continuously getting the error
"null:org.eclipse.jetty.io.EofException"
on a slave.

What could be the reason?


Data import issue

2015-12-23 Thread Midas A
Hi,

Please provide the steps to resolve this issue:


com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException:
Communications link failure during rollback(). Transaction resolution
unknown.


DIH errors

2015-12-23 Thread Midas A
Please help us

a)

java.sql.SQLException: Streaming result set
com.mysql.jdbc.RowDataDynamic@755ea675 is still active. No statements
may be issued when any streaming result sets are open and in use on a
given connection. Ensure that you have called .close() on any active
streaming result sets before attempting more queries.

b) java.lang.RuntimeException: java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
to execute query: SELECT pf.feature_id,
REPLACE(REPLACE(REPLACE(pfd.filter, '\n', ''), '', ''), '\r','') as
COLcolor_col, pfv.variant_id,
CONCAT(pfv.variant_id,'_',(REPLACE(REPLACE(REPLACE(fvd.variant, '\n',
''), '', ''), '\r',''))) as COLcolor_col_val  FROM
cscart_product_filters pf  inner join
cscart_product_filter_descriptions pfd on pfd.filter_id = pf.filter_id
inner join cscart_categories c on find_in_set (c.category_id,
pf.categories_path) inner join cscart_products_categories pc on
pc.category_id = c.category_id left join
cscart_product_features_values pfv on pfv.feature_id = pf.feature_id
and pfv.product_id = pc.product_id left join
cscart_product_feature_variant_descriptions fvd on fvd.variant_id =
pfv.variant_id where pf.feature_id != 53 and pc.product_id =
'72664486' and pf.status = 'A' order by pf.position asc Processing
Document # 517710

at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:266)
at 
org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:451)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:489)
at 
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)
Caused by: java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
to execute query: SELECT pf.feature_id,
REPLACE(REPLACE(REPLACE(pfd.filter, '\n', ''), '', ''), '\r','') as
COLcolor_col, pfv.variant_id,
CONCAT(pfv.variant_id,'_',(REPLACE(REPLACE(REPLACE(fvd.variant, '\n',
''), '', ''), '\r',''))) as COLcolor_col_val  FROM
cscart_product_filters pf  inner join
cscart_product_filter_descriptions pfd on pfd.filter_id = pf.filter_id
inner join cscart_categories c on find_in_set (c.category_id,
pf.categories_path) inner join cscart_products_categories pc on
pc.category_id = c.category_id left join
cscart_product_features_values pfv on pfv.feature_id = pf.feature_id
and pfv.product_id = pc.product_id left join
cscart_product_feature_variant_descriptions fvd on fvd.variant_id =
pfv.variant_id where pf.feature_id != 53 and pc.product_id =
'72664486' and pf.status = 'A' order by pf.position asc Processing
Document # 517710
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:406)
at 
org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:353)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:219)
... 3 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException:
Unable to execute query: SELECT pf.feature_id,
REPLACE(REPLACE(REPLACE(pfd.filter, '\n', ''), '', ''), '\r','') as
COLcolor_col, pfv.variant_id,
CONCAT(pfv.variant_id,'_',(REPLACE(REPLACE(REPLACE(fvd.variant, '\n',
''), '', ''), '\r',''))) as COLcolor_col_val  FROM
cscart_product_filters pf  inner join
cscart_product_filter_descriptions pfd on pfd.filter_id = pf.filter_id
inner join cscart_categories c on find_in_set (c.category_id,
pf.categories_path) inner join cscart_products_categories pc on
pc.category_id = c.category_id left join
cscart_product_features_values pfv on pfv.feature_id = pf.feature_id
and pfv.product_id = pc.product_id left join
cscart_product_feature_variant_descriptions fvd on fvd.variant_id =
pfv.variant_id where pf.feature_id != 53 and pc.product_id =
'72664486' and pf.status = 'A' order by pf.position asc Processing
Document # 517710
at 
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:253)
at 
org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:210)
at 
org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:38)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:465)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:491)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
... 5 more
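Error (a) means a statement was issued on a MySQL connection while a streaming result set was still open on it, which happens when a parent entity and its sub-entity share one dataSource. A hedged data-config.xml sketch giving each entity its own named dataSource (driver, URL, credentials, and the elided SELECTs are placeholders):

```
<dataConfig>
  <!-- Two named sources so the parent's streaming result set and the child's
       per-row query never share a connection. batchSize="-1" keeps MySQL
       streaming rows instead of buffering the whole result set in memory. -->
  <dataSource name="ds-products" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost/shop" user="solr" password="***"
              batchSize="-1"/>
  <dataSource name="ds-features" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost/shop" user="solr" password="***"
              batchSize="-1"/>
  <document>
    <entity name="product" dataSource="ds-products"
            query="SELECT ... FROM cscart_products">
      <entity name="features" dataSource="ds-features"
              query="SELECT ... WHERE pc.product_id = '${product.id}'"/>
    </entity>
  </document>
</dataConfig>
```

This does not address error (b)'s underlying query failure, but separating the connections removes the "streaming result set still active" conflict between the two queries.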


Partial update through DIH

2015-12-17 Thread Midas A
Hi,
Can we do partial updates through the Data Import Handler?

Regards,
Abhishek


Re: warning while indexing

2015-12-16 Thread Midas A
Alexandre,

We are running multiple DIH imports to index the data.

On Thu, Dec 17, 2015 at 12:40 AM, Alexandre Rafalovitch <arafa...@gmail.com>
wrote:

> Are you sending documents from one client or many?
>
> Looks like an exhaustion of some sort of pool related to Commit within,
> which I assume you are using.
>
> Regards,
> Alex
> On 16 Dec 2015 4:11 pm, "Midas A" <test.mi...@gmail.com> wrote:
>
> > Getting following warning while indexing ..Anybody please tell me the
> > reason .
> >
> >
> > java.util.concurrent.RejectedExecutionException: Task
> >
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@9916a67
> > rejected from java.util.concurrent.ScheduledThreadPoolExecutor@79f8b5f
> > [Terminated,
> > pool size = 0, active threads = 0, queued tasks = 0, completed tasks =
> > 2046]
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
> > at
> >
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
> > at
> >
> org.apache.solr.update.CommitTracker._scheduleCommitWithin(CommitTracker.java:150)
> > at
> >
> org.apache.solr.update.CommitTracker._scheduleCommitWithinIfNeeded(CommitTracker.java:118)
> > at
> >
> org.apache.solr.update.CommitTracker.addedDocument(CommitTracker.java:169)
> > at
> >
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:231)
> > at
> >
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> > at
> >
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> > at
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:451)
> > at
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:587)
> > at
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:346)
> > at
> >
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
> > at
> > org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
> > at
> >
> org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:235)
> > at
> >
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:500)
> > at
> >
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
> > at
> >
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:353)
> > at
> >
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:219)
> > at
> >
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:451)
> > at
> >
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:489)
> > at
> >
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)
> >
>


Re: warning while indexing

2015-12-16 Thread Midas A
Alexandre,

Only two DIH imports, indexing different data.

On Thu, Dec 17, 2015 at 10:46 AM, Alexandre Rafalovitch <arafa...@gmail.com>
wrote:

> How many? On the same node?
>
> I am not sure if running multiple DIH is a popular case.
>
> My theory, still, that you are running out of a pool size there. Though if
> it happens with even just two DIH, it could be a different issue.
> On 17 Dec 2015 12:01 pm, "Midas A" <test.mi...@gmail.com> wrote:
>
> > Alexandre ,
> >
> > we are running multiple  DIH to index data.
> >
> > On Thu, Dec 17, 2015 at 12:40 AM, Alexandre Rafalovitch <
> > arafa...@gmail.com>
> > wrote:
> >
> > > Are you sending documents from one client or many?
> > >
> > > Looks like an exhaustion of some sort of pool related to Commit within,
> > > which I assume you are using.
> > >
> > > Regards,
> > > Alex
> > > On 16 Dec 2015 4:11 pm, "Midas A" <test.mi...@gmail.com> wrote:
> > >
> > > > Getting following warning while indexing ..Anybody please tell me the
> > > > reason .
> > > >
> > > >
> > > > java.util.concurrent.RejectedExecutionException: Task
> > > >
> > > >
> > >
> >
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@9916a67
> > > > rejected from
> java.util.concurrent.ScheduledThreadPoolExecutor@79f8b5f
> > > > [Terminated,
> > > > pool size = 0, active threads = 0, queued tasks = 0, completed tasks
> =
> > > > 2046]
> > > > at
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
> > > > at
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
> > > > at
> > > >
> > >
> >
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
> > > > at
> > > >
> > >
> >
> java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.CommitTracker._scheduleCommitWithin(CommitTracker.java:150)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.CommitTracker._scheduleCommitWithinIfNeeded(CommitTracker.java:118)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.CommitTracker.addedDocument(CommitTracker.java:169)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:231)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:451)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:587)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:346)
> > > > at
> > > >
> > >
> >
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
> > > > at
> > > >
> > org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:235)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:500)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:353)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:219)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:451)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:489)
> > > > at
> > > >
> > >
> >
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:468)
> > > >
> > >
> >
>
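The stack trace in this thread shows CommitTracker trying to schedule a commitWithin task on an executor pool that has already terminated, which fits Alexandre's pool-exhaustion theory with concurrent imports. One way to take commit scheduling out of the import requests entirely is server-side autoCommit in solrconfig.xml; this is a sketch with illustrative values, not a confirmed fix for this exact warning:

```
<!-- solrconfig.xml: let the server schedule commits instead of each DIH
     request carrying its own commitWithin. Tune the times to your load. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>            <!-- hard commit at most once a minute -->
    <openSearcher>false</openSearcher>  <!-- durability only; no new searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>15000</maxTime>            <!-- documents become visible within 15 s -->
  </autoSoftCommit>
</updateHandler>
```

With this in place the two DIH runs can omit commit/commitWithin parameters and simply add documents, leaving all commit timing to the server.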

