Re: Different boost values for multiple parsers in Solr 5.2.1

2015-09-14 Thread dinesh naik
Hi Upayavira,
We have an issue here.

The boosting works as expected when we run the query from the Admin console,
where we pass the q and bq params as below.

q=(((_query_:"{!synonym_edismax qf='itemname OR itemnumber OR itemdesc'
v='HTC' bq='' mm=100 synonyms=true synonyms.constructPhrases=true
synonyms.ignoreQueryOperators=true}") OR (itemname:"HTC" OR
itemnamecomp:HTC* OR itemnumber:"HTC" OR itemnumbercomp:HTC* OR
itemdesc:"HTC"~500)) AND (warehouse:Ind02 OR warehouse:Ind03 OR
warehouse:Ind04 ))
bq=warehouse:Ind02^1000

This works absolutely fine when tried from the Admin console.

But when we use the SolrJ API, we are not getting the expected boost value
reflected in the score field.

We are using SolrQuery class for adding the bq parameter.

queryEngine.set("bq", boostQuery);
where boostQuery is : warehouse:Ind02^1000
How can we handle this? Is this because of the bq='' being used for the
synonym_edismax parser?
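
For reference, a minimal SolrJ sketch of how the parameters are being set (the
client setup, core URL and the shortened q value are illustrative placeholders,
not our exact code; only the bq handling matters here):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BoostQuerySketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and core name.
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/items");

    SolrQuery queryEngine = new SolrQuery();
    // Main query: the nested synonym_edismax clause (with its own empty bq='')
    // OR'ed with the plain field clauses, as posted above.
    queryEngine.setQuery("(((_query_:\"{!synonym_edismax qf='itemname OR itemnumber OR itemdesc' "
        + "v='HTC' bq='' mm=100 synonyms=true synonyms.constructPhrases=true "
        + "synonyms.ignoreQueryOperators=true}\") OR (itemname:\"HTC\" OR itemnamecomp:HTC* OR "
        + "itemnumber:\"HTC\" OR itemnumbercomp:HTC* OR itemdesc:\"HTC\"~500)) "
        + "AND (warehouse:Ind02 OR warehouse:Ind03 OR warehouse:Ind04))");

    // Top-level boost query, expected to boost warehouse Ind02 documents.
    String boostQuery = "warehouse:Ind02^1000";
    queryEngine.set("bq", boostQuery);

    // debugQuery=true exposes the score explain so it can be compared with the Admin console.
    queryEngine.set("debugQuery", "true");

    QueryResponse response = client.query(queryEngine);
    System.out.println(response.getResults().getNumFound());
    client.close();
  }
}

SolrJ URL-encodes the parameters itself, so the q and bq values are sent exactly
as typed (apart from the Java string escaping of the double quotes).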




On Tue, Sep 8, 2015 at 5:49 PM, dinesh naik 
wrote:

> Thanks a lot, Upayavira. It worked as expected.
>
>
> On Tue, Sep 8, 2015 at 2:09 PM, Upayavira  wrote:
>
>> You can add bq= inside your {!synonym_edismax} section if you wish, and
>> it will apply to that query parser only.
>>
>> Upayavira
>>
>> On Mon, Sep 7, 2015, at 03:05 PM, dinesh naik wrote:
>> > Please find the details below:
>> >
>> >  My main query is like this:
>> >
>> > q=(((_query_:"{!synonym_edismax qf='itemname OR itemnumber OR itemdesc'
>> > v='HTC' mm=100 synonyms=true synonyms.constructPhrases=true
>> > synonyms.ignoreQueryOperators=true}") OR (itemname:"HTC" OR
>> > itemnamecomp:HTC* OR itemnumber:"HTC" OR itemnumbercomp:HTC* OR
>> > itemdesc:"HTC"~500)) AND (warehouse:Ind02 OR warehouse:Ind03 OR
>> > warehouse:Ind04 ))
>> >
>> > I am giving a boost of 1000 for warehouse Ind02
>> > using the parameter below:
>> >
>> >  bq=warehouse:Ind02^1000
>> >
>> >
>> > Here I am expecting a boost of 1004, but somehow an extra 1000 is added,
>> > maybe because of my additional parser. How can I avoid this?
>> >
>> >
>> > Debug information for the boost :
>> >
>> >  
>> > 2004.0 = sum of:
>> >   1004.0 = sum of:
>> > 1003.0 = sum of:
>> >   1001.0 = sum of:
>> > 1.0 = max of:
>> >   1.0 = weight(itemname:HTC in 235500) [CustomSimilarity],
>> result
>> > of:
>> > 1.0 = fieldWeight in 235500, product of:
>> >   1.0 = tf(freq=1.0), with freq of:
>> > 1.0 = termFreq=1.0
>> >   1.0 = idf(docFreq=26, maxDocs=1738053)
>> >   1.0 = fieldNorm(doc=235500)
>> > 1000.0 = weight(warehouse:e02^1000.0 in 235500)
>> > [CustomSimilarity],
>> > result of:
>> >   1000.0 = score(doc=235500,freq=1.0), product of:
>> > 1000.0 = queryWeight, product of:
>> >   1000.0 = boost
>> >   1.0 = idf(docFreq=416190, maxDocs=1738053)
>> >   1.0 = queryNorm
>> > 1.0 = fieldWeight in 235500, product of:
>> >   1.0 = tf(freq=1.0), with freq of:
>> > 1.0 = termFreq=1.0
>> >   1.0 = idf(docFreq=416190, maxDocs=1738053)
>> >   1.0 = fieldNorm(doc=235500)
>> >   2.0 = sum of:
>> > 1.0 = weight(itemname:HTC in 235500) [CustomSimilarity], result
>> > of:
>> >   1.0 = fieldWeight in 235500, product of:
>> > 1.0 = tf(freq=1.0), with freq of:
>> >   1.0 = termFreq=1.0
>> > 1.0 = idf(docFreq=26, maxDocs=1738053)
>> > 1.0 = fieldNorm(doc=235500)
>> > 1.0 = itemnamecomp:HTC*, product of:
>> >   1.0 = boost
>> >   1.0 = queryNorm
>> > 1.0 = sum of:
>> >   1.0 = weight(warehouse:e02 in 235500) [CustomSimilarity], result
>> >   of:
>> > 1.0 = fieldWeight in 235500, product of:
>> >   1.0 = tf(freq=1.0), with freq of:
>> > 1.0 = termFreq=1.0
>> >   1.0 = idf(docFreq=416190, maxDocs=1738053)
>> >   1.0 = fieldNorm(doc=235500)
>> >   1000.0 = weight(warehouse:e02^1000.0 in 235500) [CustomSimilarity],
>> > result of:
>> > 1000.0 = score(doc=235500,freq=1.0), product of:
>> >   1000.0 = queryWeight, product of:
>> > 1000.0 = boost
>> > 1.0 = idf(docFreq=416190, maxDocs=1738053)
>> > 1.0 = queryNorm
>> >   1.0 = fieldWeight in 235500, product of:
>> > 1.0 = tf(freq=1.0), with freq of:
>> >   1.0 = termFreq=1.0
>> > 1.0 = idf(docFreq=416190, maxDocs=1738053)
>> > 1.0 = fieldNorm(doc=235500)
>> > 
>> >
>> > On Mon, Sep 7, 2015 at 7:21 PM, dinesh naik 
>> > wrote:
>> > Hi all,
>> >
>> > Is there a way to apply a different boost, using the bq parameter, for each
>> > parser?
>> >
>> > For example, if I am using a synonym parser and the edismax parser in a single
>> > query, my bq param value gets applied to both parsers, doubling the
>> > boost value.
>> >
>> > --
>> > Best Regards,
>> > 

Re: Solr Replication sometimes coming in log files

2015-09-14 Thread Upayavira
I bet you have the admin UI open on your second slave. The _=144... is
the give-away. Those requests are the admin UI asking the replication
handler for the status of replication.
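
If you want to pull the same information without the admin UI, the replication
handler can be queried directly. A rough SolrJ sketch (base URL and core name
are placeholders; with SolrJ 4.x the client class is HttpSolrServer, whereas 5.x
renamed it to HttpSolrClient):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class ReplicationDetailsSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and core name.
    HttpSolrServer server = new HttpSolrServer("http://slave1:8080/solr/Core");

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("command", "details");   // the same request the admin UI keeps polling

    QueryRequest request = new QueryRequest(params);
    request.setPath("/replication");    // send it to the replication handler, not /select

    NamedList<Object> details = server.request(request);
    System.out.println(details);

    server.shutdown();
  }
}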

Upayavira

On Wed, Sep 9, 2015, at 06:32 AM, Kamal Kishore Aggarwal wrote:
> Hi Team,
> 
> I am currently working with Java 1.7 and Solr 4.8.1 on Tomcat 7. The Solr
> configuration has a master & slave (2 slaves) architecture.
> 
> 
> Master & Slave 2 are in the same server location (say zone A), whereas Slave 1
> is on another server in a different zone (say zone B). There is a latency of
> 40 ms between the two zones.
> 
> Nowadays we are facing high load on Slave 1 & we suspect that it is due
> to a delay in data replication from the Master server. These days we are finding
> the replication log lines shown below, but such lines are not present in earlier
> log files on the Slave 1 server. Also, such lines do not appear in any Slave 2
> log files (which might be due to the master & Slave 2 being in the same zone).
> 
> 
> > INFO: [Core] webapp=/solr path=/replication
> > params={wt=json&command=details&_=1441708786003} status=0 QTime=173
> > INFO: [Core] webapp=/solr path=/replication
> > params={wt=json&command=details&_=1441708787976} status=0 QTime=1807
> > INFO: [Core] webapp=/solr path=/replication
> > params={wt=json&command=details&_=1441708791563} status=0 QTime=7140
> > INFO: [Core] webapp=/solr path=/replication
> > params={wt=json&command=details&_=1441708800450} status=0 QTime=1679
> 
> 
> 
> Please confirm whether our thought is correct that increased replication time
> (which can be due to server connectivity issues) is the reason for the high
> load on Solr.
> 
> Regards
> Kamal Kishore


Re: Solr Join between two indexes taking too long.

2015-09-14 Thread Mikhail Khludnev
Why? It's enough to just open the existing index with a Solr 5.3 instance. There
is no need to reindex.
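
Once the index is open under 5.3, the score-aware join is only a query-time
change. A minimal SolrJ sketch (the URL, core names and field names here are
hypothetical placeholders, not taken from your setup):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JoinScoreSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and core name.
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mainCore");

    // Solr 5.3 {!join} accepts score=none|avg|max|total|min;
    // score=none keeps the old non-scoring behaviour.
    SolrQuery query = new SolrQuery(
        "{!join fromIndex=otherCore from=joinKey to=joinKey score=max}type:bond");
    query.setRows(10);

    QueryResponse response = client.query(query);
    System.out.println(response.getResults().getNumFound());
    client.close();
  }
}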

On Mon, Sep 14, 2015 at 4:57 PM, Russell Taylor <
russell.tay...@interactivedata.com> wrote:

> Looks like I won't be able to test this out on 5.3.
>
> Thanks for all your help.
>
> Russ.
>
> -Original Message-
> From: Russell Taylor
> Sent: 11 September 2015 14:00
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Join between two indexes taking too long.
>
> It will take a little while to set-up a 5.3 version, hopefully I'll have
> some results later next week.
> 
> From: Mikhail Khludnev [mkhlud...@griddynamics.com]
> Sent: 11 September 2015 12:59
> To: Russell Taylor
> Subject: Re: Solr Join between two indexes taking too long.
>
>
> On Wed, Sep 9, 2015 at 1:10 PM, Russell Taylor <
> russell.tay...@interactivedata.com> wrote:
> Do you have a link to your talk at Berlin Buzzwords?
>
>
> https://berlinbuzzwords.de/file/bbuzz-2015-mikhailv-khludnev-approaching-join-index-lucene
>
> How did it go with Solr 5.3 and {!join score=...} ?
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
> 
> 
>
>
> ***
> This message (including any files transmitted with it) may contain
> confidential and/or proprietary information, is the property of Interactive
> Data Corporation and/or its subsidiaries, and is directed only to the
> addressee(s). If you are not the designated recipient or have reason to
> believe you received this message in error, please delete this message from
> your system and notify the sender immediately. An unintended recipient's
> disclosure, copying, distribution, or use of this message or any
> attachments is prohibited and may be unlawful.
> ***
>
>


-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





Strange error message in Solr 5.2.1 log

2015-09-14 Thread Shawn Heisey
Below is a very large error message and associated stacktrace that I can
see in the Solr log on a Linux system running version 5.2.1 with Oracle
JDK 8u60.  This message appeared a bunch of times over the weekend, but
does not appear to be happening now.

This looks like a low-level Lucene error, the sort of thing that
shouldn't ever happen.  I have a manually started monitoring process
that retrieves certain Solr APIs that include the size of the index, and
those requests appear to match up with the stacktrace.
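
One way such a size check can be made from SolrJ is the CoreAdmin STATUS call,
which reports the core's on-disk index size. A sketch only (not the actual
monitoring code -- the URL is a placeholder and the real process hits other
APIs as well):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;
import org.apache.solr.common.util.NamedList;

public class IndexSizeSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; CoreAdmin requests go to the Solr root, not to a core.
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr");

    // STATUS includes an "index" section with the core's on-disk size.
    CoreAdminResponse status = CoreAdminRequest.getStatus("s3build", client);
    NamedList<Object> indexInfo =
        (NamedList<Object>) status.getCoreStatus("s3build").get("index");
    System.out.println("index size: " + indexInfo.get("size"));

    client.close();
  }
}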

It looks like Upayavira noticed the same thing a couple of months ago --
SOLR-7785.

Should I be worried by this?  It appears to only have happened on two of
my build cores, not the live cores.  This is a dev server that is not
being used for queries right now, but I did have plans to migrate at
least one of the production indexes to 5.2.1 with the same config that's
on this system.

ERROR - 2015-09-08 16:09:39.981; [   s3build]
org.apache.solr.common.SolrException; java.lang.IllegalStateException:
file: MMapDirectory@/index/solr5/data/data/spark3_1/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@71ee2ee7 appears
both in delegate and in cache: cache=[_jb.fnm,​ _jc.tvd,​ _j9.tvd,​
_j9.fdx,​ _ja.fnm,​ _jc.tvx,​ _j9.fdt,​ _j9.tvx],​
delegate=[_ib_Lucene50_0.pos,​ _gq_Lucene50_0.pos,​ _gu_Lucene50_0.tim,​
_ig_Lucene50_0.tim,​ _im.si,​ _ft.si,​ _hq.nvd,​ _j9_Lucene50_0.dvm,​
_hw_Lucene50_0.dvd,​ _hp.cfs,​ _e1.tvx,​ _e0.tvd,​ _j2_Lucene50_0.doc,​
_je_Lucene50_0.tim,​ _i5.nvm,​ _en_Lucene50_0.tim,​ _j3_Lucene50_0.dvd,​
_dg.fdt,​ _it_Lucene50_0.doc,​ _ir_Lucene50_0.dvm,​ _ik.tvd,​
_ja_Lucene50_0.doc,​ _hs_Lucene50_0.tim,​ _j4_Lucene50_0.tim,​
_iu_Lucene50_0.pos,​ _iq_Lucene50_0.doc,​ _i0_Lucene50_0.tim,​ _j2.fnm,​
_fs_Lucene50_0.tip,​ _ia_Lucene50_0.pos,​ _eu_Lucene50_0.dvm,​ _cr.tvx,​
_is.nvd,​ _ey_Lucene50_0.pos,​ _jd.tvd,​ _i6_Lucene50_0.dvm,​
_im_Lucene50_0.tip,​ _hx.nvm,​ _it.nvd,​ _i7.fdx,​ _iw.fnm,​ _ia.fdx,​
_ix.fnm,​ _e1_Lucene50_0.pos,​ _ey.nvd,​ _j5.fdt,​ _iy_Lucene50_0.tim,​
_hj_Lucene50_0.doc,​ _i9.nvm,​ _g2.si,​ _hu.tvd,​ _cr.tvd,​ _i1.nvd,​
_iz_Lucene50_0.doc,​ _eu_Lucene50_0.pos,​ _fd_Lucene50_0.dvm,​
_ih_Lucene50_0.dvm,​ _in.fdx,​ _ig_Lucene50_0.dvd,​ _hq.tvx,​ _eu.nvd,​
_j4.fdx,​ _i4.nvm,​ _ir.tvd,​ _fk.tvx,​ _hr_Lucene50_0.tip,​ _jb.nvm,​
_ft.nvd,​ _ix.fdx,​ _j1_Lucene50_0.dvd,​ _g8_Lucene50_0.dvm,​
_g7_Lucene50_0.tim,​ _hz.nvm,​ _iz.nvd,​ _hy_Lucene50_0.dvm,​ _iy.tvx,​
_ha_Lucene50_0.tip,​ _fl.tvx,​ _j2.nvm,​ _i1_Lucene50_0.dvd,​
_h1_Lucene50_0.doc,​ _fv_Lucene50_0.pos,​ _dl_Lucene50_0.dvm,​
_hv_Lucene50_0.tim,​ _in_Lucene50_0.pos,​ _ie_Lucene50_0.tim,​ _ha.nvm,​
_dg_Lucene50_0.dvm,​ _ir.fnm,​ _e6.tvd,​ _hs.tvd,​ _hu_Lucene50_0.tip,​
_it_Lucene50_0.pos,​ _g7.fdt,​ _fk_Lucene50_0.doc,​ _j8.fnm,​
_im_Lucene50_0.tim,​ _in_Lucene50_0.tim,​ _en_Lucene50_0.tip,​
_it_Lucene50_0.dvm,​ _ib.tvx,​ _e0_Lucene50_0.tip,​ _fl.nvm,​
_iy_Lucene50_0.tip,​ _e0.tvx,​ _fd.fnm,​ _fs.fnm,​ _ia.nvm,​ _gm.tvx,​
_i5.tvd,​ _j2.nvd,​ _fe.fdt,​ _ik.si,​ _du_Lucene50_0.tip,​
_hy_Lucene50_0.tim,​ _cr.fnm,​ _h1.si,​ _hq.tvd,​ _jb.tvd,​ _du.tvd,​
_im.tvx,​ _iz_Lucene50_0.pos,​ _gm_Lucene50_0.pos,​ _hq.fdx,​
_i4_Lucene50_0.tip,​ _j7.nvd,​ _eb.fdt,​ _hq_Lucene50_0.dvm,​
_gc_Lucene50_0.dvd,​ _i0.tvd,​ _f5.nvm,​ _fk.nvm,​ _i2.nvm,​
_i0_Lucene50_0.dvd,​ _iz.tvx,​ _it.fdx,​ _if.fdt,​ _ex.tvd,​ _eb.tvd,​
_et.si,​ _gd.fdx,​ _eh_Lucene50_0.doc,​ _eh.nvd,​ _gd.si,​ _eh.nvm,​
_ij.tvx,​ _i5.tvx,​ _fb_Lucene50_0.doc,​ _jc.nvd,​ _ha.tvx,​ _eb.fnm,​
_gq_Lucene50_0.tip,​ _hy_Lucene50_0.tip,​ _j3.tvx,​ _j9_Lucene50_0.tip,​
_hu.fnm,​ _ho.nvm,​ _eu_Lucene50_0.tim,​ _i8.tvx,​ _jc.fdx,​ _gd.tvd,​
_hq.fnm,​ _fe_Lucene50_0.dvd,​ _g1.nvm,​ _iu.si,​ _iw.tvd,​
_dg_Lucene50_0.dvd,​ _jb_Lucene50_0.pos,​ _j7.fdt,​ _iy.si,​ _j5.nvd,​
_hw.tvx,​ _jb_Lucene50_0.dvd,​ _ik.fdx,​ _ey.nvm,​ _fd.tvx,​ _i4.fnm,​
_dg.nvm,​ _ia.fnm,​ _ix.nvd,​ _ie.nvd,​ _g7.fnm,​ _dg_Lucene50_0.pos,​
_fe.tvd,​ _fs.tvd,​ _j2.si,​ _ie_Lucene50_0.dvm,​ _i4_Lucene50_0.dvd,​
_je_Lucene50_0.doc,​ _iu_Lucene50_0.dvd,​ _e1_Lucene50_0.tip,​ _ja.nvd,​
_jd.fdt,​ _eb.tvx,​ _fb.nvd,​ _j2_Lucene50_0.pos,​ _in_Lucene50_0.tip,​
_gu_Lucene50_0.dvd,​ _ii_Lucene50_0.pos,​ _hw_Lucene50_0.doc,​ _ja.nvm,​
_i6_Lucene50_0.pos,​ _g8.si,​ _i0.fdx,​ _i6.tvx,​ _j1.fdt,​
_is_Lucene50_0.dvd,​ _ho_Lucene50_0.dvm,​ _iu.tvx,​ _eh.si,​
_if_Lucene50_0.dvd,​ _fv.tvx,​ _fv.tvd,​ _jc.tvd,​ _f5.fdt,​ _e1.fdx,​
_i9.fdt,​ _gq.fdx,​ _is.fnm,​ _ij_Lucene50_0.doc,​ _i5_Lucene50_0.doc,​
_ii_Lucene50_0.tip,​ _g2_Lucene50_0.dvm,​ _eu_Lucene50_0.dvd,​ _fl.tvd,​
_fv_Lucene50_0.dvd,​ _it.tvx,​ _f5_Lucene50_0.doc,​ _jf.fdt,​ _iu.fdt,​
_j7.tvx,​ _ey_Lucene50_0.doc,​ _fk.si,​ _ic.fdt,​ _j7_Lucene50_0.tip,​
_i5.fnm,​ _iy.nvm,​ _hw_Lucene50_0.pos,​ _ig_Lucene50_0.dvm,​
_j4_Lucene50_0.tip,​ _j6.nvm,​ _je.si,​ _j3.fnm,​ _im.fdx,​ _g7.nvm,​
_e0_Lucene50_0.dvd,​ _hq.nvm,​ _ha_Lucene50_0.dvd,​ _j9.nvm,​ _ih.nvm,​
_ic.fdx,​ _ik.fdt,​ _if.si,​ _iv_Lucene50_0.tim,​ _i0.nvm,​ _gd.nvm,​
_i2.nvd,​ _j6_Lucene50_0.pos,​ _ik.tvx,​ 

RE: Solr Join between two indexes taking too long.

2015-09-14 Thread Russell Taylor
Looks like I won't be able to test this out on 5.3.

Thanks for all your help.

Russ.

-Original Message-
From: Russell Taylor 
Sent: 11 September 2015 14:00
To: solr-user@lucene.apache.org
Subject: RE: Solr Join between two indexes taking too long.

It will take a little while to set-up a 5.3 version, hopefully I'll have some 
results later next week.

From: Mikhail Khludnev [mkhlud...@griddynamics.com]
Sent: 11 September 2015 12:59
To: Russell Taylor
Subject: Re: Solr Join between two indexes taking too long.


On Wed, Sep 9, 2015 at 1:10 PM, Russell Taylor wrote:
Do you have a link to your talk at Berlin Buzzwords?

https://berlinbuzzwords.de/file/bbuzz-2015-mikhailv-khludnev-approaching-join-index-lucene

How did it go with Solr 5.3 and {!join score=...} ?


--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





***
This message (including any files transmitted with it) may contain confidential 
and/or proprietary information, is the property of Interactive Data Corporation 
and/or its subsidiaries, and is directed only to the addressee(s). If you are 
not the designated recipient or have reason to believe you received this 
message in error, please delete this message from your system and notify the 
sender immediately. An unintended recipient's disclosure, copying, 
distribution, or use of this message or any attachments is prohibited and may 
be unlawful. 
***




Find records with no values in solr.LatLongType field type

2015-09-14 Thread Kamal Kishore Aggarwal
Hi,

I am working on Solr 4.8.1. I am trying to find the docs where the LatLonType
field has null values.

I have tried using these, but I am not getting the results:

1) http://localhost:8984/solr/IM-Search/select?q.alt=-usrlatlong:[' ' TO *]

2) http://localhost:8984/solr/IM-Search/select?q.alt=-usrlatlong:[* TO *]

Here are the configurations:
> <fieldType ... class="solr.LatLonType" subFieldSuffix="_coordinate"/>
> <field name="usrlatlong" ... required="false" multiValued="false" />


Please help.


Re: Solr Replication sometimes coming in log files

2015-09-14 Thread Mikhail Khludnev
Hello,

I'd say the opposite: the high load causes the long response times.
'command=details' is rather cheap and fast, _I believe_.

On Mon, Sep 14, 2015 at 10:20 AM, Kamal Kishore Aggarwal <
kkroyal@gmail.com> wrote:

> Can anybody suggest something?
>
> On Wed, Sep 9, 2015 at 11:02 AM, Kamal Kishore Aggarwal <
> kkroyal@gmail.com> wrote:
>
> > Hi Team,
> >
> > I am currently working with Java 1.7 and Solr 4.8.1 on Tomcat 7. The Solr
> > configuration has a master & slave (2 slaves) architecture.
> >
> >
> > Master & Slave 2 are in the same server location (say zone A), whereas Slave
> > 1 is on another server in a different zone (say zone B). There is a latency of
> > 40 ms between the two zones.
> >
> > Nowadays we are facing high load on Slave 1 & we suspect that it is due
> > to a delay in data replication from the Master server. These days we are finding
> > the replication log lines shown below, but such lines are not present in earlier
> > log files on the Slave 1 server. Also, such lines do not appear in any Slave 2
> > log files (which might be due to the master & Slave 2 being in the same zone).
> >
> >
> >> INFO: [Core] webapp=/solr path=/replication
> >> params={wt=json&command=details&_=1441708786003} status=0 QTime=173
> >> INFO: [Core] webapp=/solr path=/replication
> >> params={wt=json&command=details&_=1441708787976} status=0 QTime=1807
> >> INFO: [Core] webapp=/solr path=/replication
> >> params={wt=json&command=details&_=1441708791563} status=0 QTime=7140
> >> INFO: [Core] webapp=/solr path=/replication
> >> params={wt=json&command=details&_=1441708800450} status=0 QTime=1679
> >
> >
> >
> > Please confirm whether our thought is correct that increased replication time
> > (which can be due to server connectivity issues) is the reason for the high
> > load on Solr.
> >
> > Regards
> > Kamal Kishore
> >
> >
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





Re: Problem with starting Solr with custom core directories in Solr 5.3.0

2015-09-14 Thread Zheng Lin Edwin Yeo
Is this due to a bug in the code shipped with Solr, or is it that I did
not configure the other parts correctly?

Regards,
Edwin

On 9 September 2015 at 17:46, Zheng Lin Edwin Yeo 
wrote:

> Hi,
>
> I would like to check: is there any problem with this block of code from lines
> 604-614 in solr.cmd for Solr 5.3.0? When using this code, I'm not
> able to start Solr with it pointing to custom core directories.
>
> IF "%SOLR_HOME%"=="" set "SOLR_HOME=%SOLR_SERVER_DIR%\solr"
>
> IF NOT EXIST "%SOLR_HOME%\" (
>
>   IF EXIST "%SOLR_SERVER_DIR%\%SOLR_HOME%" (
>
> set "SOLR_HOME=%SOLR_SERVER_DIR%\%SOLR_HOME%"
>
>   ) ELSE IF EXIST "%cd%\%SOLR_HOME%" (
>
> set "SOLR_HOME=%cd%\%SOLR_HOME%"
>
>   ) ELSE (
>
> set "SCRIPT_ERROR=EDM home directory %SOLR_HOME% not found!"
>
> goto err
>
>   )
>
> )
>
>
> I replace it with the following and it works.
>
> IF "%SOLR_HOME%"=="" set "SOLR_HOME=%SOLR_SERVER_DIR%\solr"
>
> IF EXIST "%SOLR_HOME%\" (
>
>   IF EXIST "%SOLR_SERVER_DIR%\%SOLR_HOME%" (
>
> set "SOLR_HOME=%SOLR_SERVER_DIR%\%SOLR_HOME%"
>
>   ) ELSE IF EXIST "%cd%\%SOLR_HOME%" (
>
> set "SOLR_HOME=%cd%\%SOLR_HOME%"
>
>   ) ELSE (
>
> set "SCRIPT_ERROR=EDM home directory %SOLR_HOME% not found!"
>
> goto err
>
>   )
>
> )
>
> The problem seems to be that both the IF "%SOLR_HOME%"=="" and the IF NOT
> EXIST "%SOLR_HOME%\" checks mean the same thing, and thus the code didn't pick
> up the custom directory. I replaced it with IF EXIST "%SOLR_HOME%\" on the
> 2nd line (removing the 'NOT').
>
> The command which I used to start up Solr is:
> bin\solr.cmd start -cloud -p 8983 -s solrMain\node1\solr -m 12g -z
> "localhost:2181,localhost:2182,localhost:2183"
>
> Hope we can clarify some doubts here.
>
> Regards,
> Edwin
>


Re: Solr Replication sometimes coming in log files

2015-09-14 Thread Kamal Kishore Aggarwal
Can anybody suggest something?

On Wed, Sep 9, 2015 at 11:02 AM, Kamal Kishore Aggarwal <
kkroyal@gmail.com> wrote:

> Hi Team,
>
> I am currently working with Java 1.7 and Solr 4.8.1 on Tomcat 7. The Solr
> configuration has a master & slave (2 slaves) architecture.
>
>
> Master & Slave 2 are in the same server location (say zone A), whereas Slave
> 1 is on another server in a different zone (say zone B). There is a latency of
> 40 ms between the two zones.
>
> Nowadays we are facing high load on Slave 1 & we suspect that it is due
> to a delay in data replication from the Master server. These days we are finding
> the replication log lines shown below, but such lines are not present in earlier
> log files on the Slave 1 server. Also, such lines do not appear in any Slave 2
> log files (which might be due to the master & Slave 2 being in the same zone).
>
>
>> INFO: [Core] webapp=/solr path=/replication
>> params={wt=json&command=details&_=1441708786003} status=0 QTime=173
>> INFO: [Core] webapp=/solr path=/replication
>> params={wt=json&command=details&_=1441708787976} status=0 QTime=1807
>> INFO: [Core] webapp=/solr path=/replication
>> params={wt=json&command=details&_=1441708791563} status=0 QTime=7140
>> INFO: [Core] webapp=/solr path=/replication
>> params={wt=json&command=details&_=1441708800450} status=0 QTime=1679
>
>
>
> Please confirm whether our thought is correct that increased replication time
> (which can be due to server connectivity issues) is the reason for the high
> load on Solr.
>
> Regards
> Kamal Kishore
>
>