Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Yes, I get that, thanks.
On Jun 1, 2016 6:38 PM, "Joe Lawson" 
wrote:

> 2.0 is compiled with Solr 5 and Java 7. It uses the namespace
> solr.SynonymExpandingExtendedDismaxQParserPlugin
>
> 5.0.4 is compiled with Solr 6 and Java 8 and is the first release that made
> it to maven central. It uses the namespace
> com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin
>
> The features are the same for all versions.
>
> Hope this clears things up.
>
> -Joe
> On Jun 1, 2016 8:11 PM, "John Bickerstaff" 
> wrote:
>
> > Just to be clear, I got version 2.0 of the jar from github...  should I
> be
> > looking for something in a maven repository?  A bit confused at this point
> > given all the version numbers...
> >
> > I want the latest and greatest unless there are any special
> > considerations...
> >
> > Thanks for the assistance!
> > On Jun 1, 2016 5:46 PM, "MaryJo Sminkey"  wrote:
> >
> > Yup that was the issue for us as well. It doesn't seem to be throwing the
> > class error now, although I have not been able to successfully get back
> > results that seem to be using it, it's showing up as the deftype in my
> > params but the QParser in my debug is the normal edismax one. I will have
> > to play around with my config some more tomorrow and try to figure out
> what
> > we're doing wrong.
> >
> > MJ
> >
> >
> >
> > On Wed, Jun 1, 2016 at 6:38 PM, Joe Lawson <
> > jlaw...@opensourceconnections.com> wrote:
> >
> > > Nothing up until 5.0.4 was distributed on maven central. 5.0 -> 5.0.4
> was
> > > just a bunch of clean up to get it ready for maven (including the
> > namespace
> > > change).
> > >
> > > Being that nearly all docs and articles talking about the plugin
> > reference
> > > the old 2.0 one could reasonably get confused as to what config to use
> > esp
> > > when I linked the latest 5.0.4 test config prior.
> > >
> > > You can get the older jars from the links off the readme.md.
> > > On Jun 1, 2016 6:14 PM, "Shawn Heisey"  wrote:
> > >
> > > On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> > > > @Joe:
> > > >
> > > > Is it possible that the jar's package name does not match the entry
> in
> > > the
> > > > sample solrconfig.xml file?
> > > >
> > > > The solrconfig.xml example file in the test directory contains the
> > > > following package name:
> > > <queryParser name="synonym_edismax"
> > > class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
> > > >
> > > > However, the jar file (when unzipped) has the following directory
> > > structure
> > > > down to the same class name:
> > > >
> > > > org --> apache --> solr --> search
> > > >
> > > > I just tried with the name change to the org.apache package name
> in
> > > the
> > > > solrconfig.xml file and got no errors.
> > >
> > > Looks like the package name is indeed the problem here.
> > >
> > > They changed the package name from org.apache.solr.search to
> > > com.github.healthonnet.search in the LATEST source code release --
> > > 5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
> > > in the earlier message) uses org.apache.solr.search.
> > >
> > > I cannot find any files in the 2.0.0 zipfile download that contain the
> > > new package name, so I'm curious where the incorrect information on how
> > > to configure Solr to use the plugin was found.  I did not check the
> > > tarball download.
> > >
> > > Thanks,
> > > Shawn
> > >
> >
>


Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Mark Robinson
Thanks Charlie!
I will check this and try it out.

Best,
Mark.

On Wed, Jun 1, 2016 at 7:00 AM, Charlie Hull  wrote:

> On 01/06/2016 11:56, Mark Robinson wrote:
>
>> Just to complete my previous use case: if there is no direct way in Solr
>> to sort on a field in a different core, is there a way to embed the
>> tagValue of a product dynamically into the results? The storeid will be
>> passed at query time, so we could query the product_tags core for that
>> product+storeid, get the tagValue, and embed it into the product results,
>> probably in the "process" method of a custom component (I believe we can
>> add a value like that to each result doc). But then how can we sort on
>> this value? At that point I am working on results that have already had
>> any initial sort applied, so can we re-sort at this very late stage using
>> some Java sorting in the custom component?
>>
>
> Hi Mark,
>
> Not sure if this is directly relevant but we implemented a component to
> join Solr results with external data:
> http://www.flax.co.uk/blog/2016/01/25/xjoin-solr-part-1-filtering-using-price-discount-data/
>
> Cheers
>
> Charlie
>
>>
>> Thanks!
>> Mark.
>>
>> On Wed, Jun 1, 2016 at 6:44 AM, Mark Robinson 
>> wrote:
>>
>> Thanks much Eric and Hoss!
>>>
>>> Let me try to detail.
>>> We have our "product" core with a couple of million docs.
>>> We have a couple of thousand outlets where the products get sold.
>>> Each product can have a different *tagValue* in each outlet.
>>> Our "product_tag" core (around 2M times 2000 records), captures tag info
>>> of each product in each outlet. It has some additional info also (a
>>> couple
>>> of more fields in addition to *tagValue*), pertaining to each
>>> product-outlet combination and there can be NRT *tag* updates for this
>>> core (the *tagValue* of each product in each outlet can change and is
>>> updated in real time). So we moved the volatile portion of product out
>>> to a
>>> separate core which has approx 2M times 2000 records and only 4 or 5
>>> fields
>>> per doc.
>>>
>>> A recent requirement is that we want our product results to be bumped up
>>> or down if it has a particular *tagValue*... for example products with
>>> tagValue=X should be at the top. Currently only one tag*Value* is
>>> considered to decide results order.
>>> A future requirement could be products with *tagValue=*X bumped up
>>> followed by products with *tagValue=*Y.
>>>
>>>
>>> ie "product" results need to be ordered based on a field(s) in the
>>> "product_tag" core (a different core).
>>>
>>> Is there ANY way to achieve this scenario?
>>>
>>> Thanks!
>>>
>>> Mark.
>>>
>>> On Tue, May 31, 2016 at 8:13 PM, Chris Hostetter <
>>> hossman_luc...@fucit.org
>>>
 wrote:

>>>
>>>
 : When a query comes in, I want to populate value for this field in the
 : results based on some values passed in the query.
 : So what needs to be accommodated in the result depends on a parameter
 in
 : the query and I would like to sort the final results on this field
 also,
 : which is dynamically populated.

 populated how? ... what exactly do you want to provide at query time,
 and
 how exactly do you want it to affect your query results / sorting?

 The details of what you *think* you mean matter, because based on the
 information you've provided we have no way of guessing what your goal
 is -- and if we can't guess what you mean, then there's no way to imagine
 Solr can figure it out ... software doesn't have an imagination.

 We need to know what your documents are going to look like at index
 time (with *real* details, and specific example docs) and what your
 queries are going to look like (again: with *real* details on the "some
 values passed in the query") and a detailed explanation of what results
 you want to see and why -- describe in words how the final sorting of the
 docs (which you should have already described to us) would be determined
 according to the info passed in at query time (which you should have also
 already described to us).


 In general I think I smell an XY Problem...

 https://people.apache.org/~hossman/#xyproblem
 XY Problem

 Your question appears to be an "XY Problem" ... that is: you are dealing
 with "X", you are assuming "Y" will help you, and you are asking about
 "Y"
 without giving more details about the "X" so that we can understand the
 full issue.  Perhaps the best solution doesn't involve "Y" at all?
 See Also: http://www.perlmonks.org/index.pl?node_id=542341


 -Hoss
 http://www.lucidworks.com/


>>>
>>>
>>
>
> --
> Charlie Hull
> Flax - Open Source Enterprise Search
>
> tel/fax: +44 (0)8700 118334
> mobile:  +44 (0)7767 825828
> web: www.flax.co.uk
>


Re: Sorting documents in one core based on a field in another core

2016-06-01 Thread Mark Robinson
Thanks Mikhail!
I will check and get back.

Best,
Mark

On Tue, May 31, 2016 at 4:58 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:

> Hello Mark,
>
> Does it sound like what's described at
>
> http://blog-archive.griddynamics.com/2015/08/scoring-join-party-in-solr-53.html
> ?
>
> On Tue, May 31, 2016 at 5:41 PM, Mark Robinson 
> wrote:
>
> > Hi,
> >
> > I have a requirement to sort records in one core/ collection based on a
> > field in
> > another core/collection.
> >
> > Could some one please advise how it can be done in SOLR.
> >
> > I have used !join to restrict documents in one core based on field values
> > in another core. Is there some way to sort like that?
> >
> >
> > Thanks!
> > Mark.
> >
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
>


Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Mark Robinson
Thanks for the reply Hoss!

Let me quickly explain the sort by tagValue. (I actually added this part in
a different mail when I found I had missed it in this one.)
That is where the dynamic input parameter comes in.
The input will specify for which local outlet (outlet id passed) we need to
take the tagValue to sort on. So the tagValue of each of the products for
that (local)  outlet is what is taken into consideration for products
coming for that query.
Note: two people searching from two different locations will have two
different "local" outlets. So, based on the user's location, his local
outlet's id is passed, so that his products are sorted by the tagValue
corresponding to that outlet.
This is where my JOIN query to filter out only products belonging to that
local store was used, and that went well.
Now I need to sort the product results based on the tagValue of that local
store somehow!
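[Editor's note: for reference, the cross-core join filter described above usually looks something like the following. The core and field names here are my guesses for illustration, not taken from this thread; Solr 5.3+ also accepts a score= local param on the join, which is one route to letting a from-side value influence the ordering of to-side results.]

```
# filter products to those tagged in the user's local outlet (hypothetical names)
fq={!join fromIndex=product_tag from=productId to=id}outletId:outlet42

# Solr 5.3+ score join: carry a from-side score into the product ranking
q={!join fromIndex=product_tag from=productId to=id score=max}outletId:outlet42 AND tagValue:X
```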

Thanks!
Mark.

On Wed, Jun 1, 2016 at 1:18 PM, Chris Hostetter 
wrote:

>
> : Let me try to detail.
> : We have our "product" core with a couple of million docs.
> : We have a couple of thousand outlets where the products get sold.
> : Each product can have a different *tagValue* in each outlet.
> : Our "product_tag" core (around 2M times 2000 records), captures tag info
> of
> : each product in each outlet. It has some additional info also (a couple
> of
> : more fields in addition to *tagValue*), pertaining to each
> : product-outlet combination and there can be NRT *tag* updates for this
> core
> : (the *tagValue* of each product in each outlet can change and is updated
> in
> : real time). So we moved the volatile portion of product out to a separate
> : core which has approx 2M times 2000 records and only 4 or 5 fields per
> doc.
>
> That information is helpful, but -- as I mentioned before -- to reduce
> miscommunication, providing detailed examples at the document+field level
> is helpful.  i.e.: make up 2 products, tell us what field values those
> products have in each field (in each collection) and then explain how
> those two products should sort (relative to each other) so that we can see
> a realistic example of what you want to happen.
>
> Based on the information you've provided so far, your question still
> doesn't make any sense to me.
>
> You've said you want "product results to be bumped up or down if it has a
> particular *tagValue* ... for example products with tagValue=X should be
> at the top" -- but you've also said that "Each product can have a
> different *tagValue* in each outlet", indicating that there is not a
> simple "product->tagValue" relationship.  What you've described is a
> "(product,outlet)->tagValue" relationship.  So even if such a thing were
> possible, how would Solr know which tagValue to use when deciding how to
> "bump" a product up/down in scoring?
>
> Imagine a given productA was paired with multiple outlets, and one pairing
> with outlet1 was mapped to tagX which you said should sort first, but a
> different pairing with outlet2 was mapped to tagZ which should sort
> last ... what do you want to happen in that case?
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: After Solr 5.5, mm parameter doesn't work properly

2016-06-01 Thread Greg Pendlebury
I would describe that subtly differently, and I think it is where the
difference lies:

"Then from 4.x it did not care about q.op if mm was set explicitly"
>> I agree. q.op was not actually used in the query, but rather as a way of
inferring the default mm value. eDismax still ignored whatever q.op was set
to and built your query operators (i.e. the occurs flags) using q.op=OR.

"And from 5.5 it seems as q.op does something even if mm is set..."
>> Yes, although I think the words 'even if' draw too strong a
relationship between the two parameters. q.op has a function of its own,
and that now functions as it 'should' (opinionated, I know) in the query
construction, and continues to influence the default value of mm if it has
not been explicitly set. SOLR-8812 further evolves that influence by trying
to improve backwards compatibility for users who were not explicitly
setting mm, and only ever changed 'q.op' despite it being a step removed
from the actual parameter they were trying to manipulate.

So, in relation to the OP's sample queries, I was pointing out that 'q.op=OR
+ mm=2' and 'q.op=AND + mm=2' are treated as identical queries by Solr 5.4,
but 5.5+ will manipulate the occurs flags differently before it applies mm
afterwards... because that is what q.op does.
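[Editor's note: as a toy illustration of the distinction being drawn here (my own Python sketch, not Solr code): with q.op=AND every clause becomes a MUST clause, so there is nothing optional left for mm to count, while with q.op=OR the clauses are SHOULD clauses and mm sets the floor on how many must match.]

```python
# Toy model of how edismax builds clauses from q.op and then applies mm.
# Illustrates why 'q.op=OR + mm=2' and 'q.op=AND + mm=2' can differ in 5.5+:
# q.op=AND marks every clause MUST, so mm has nothing left to relax.

def matches(doc_terms, query_terms, q_op="OR", mm=0):
    hits = sum(1 for t in query_terms if t in doc_terms)
    if q_op == "AND":
        # every clause is a MUST clause: all terms are required
        return hits == len(query_terms)
    # q.op=OR: clauses are SHOULD; mm sets the minimum that must match
    return hits >= max(mm, 1)

doc = {"red", "shoes"}
q = ["red", "shoes", "cheap"]

print(matches(doc, q, q_op="OR", mm=2))   # True: 2 of 3 SHOULD clauses hit
print(matches(doc, q, q_op="AND", mm=2))  # False: AND requires all 3 terms
```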


On 2 June 2016 at 07:13, Jan Høydahl  wrote:

> Edismax used to default to mm=100% and not care about q.op at all
>
> Then from 4.x it did not care about q.op if mm was set explicitly,
> but if mm was not set, then q.op=OR —> mm=0%, q.op=AND —> mm=100%
>
> And from 5.5 it seems as q.op does something even if mm is set...
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 1. jun. 2016 kl. 23.05 skrev Greg Pendlebury  >:
> >
> > But isn't that the default value? In this case the OP is setting mm
> > explicitly to 2.
> >
> > Will have to look at those code links more thoroughly at work this
> morning.
> > Apologies if I am wrong.
> >
> > Ta,
> > Greg
> >
> > On Wednesday, 1 June 2016, Jan Høydahl  wrote:
> >
> >>> 1. jun. 2016 kl. 03.47 skrev Greg Pendlebury <
> greg.pendleb...@gmail.com
> >> >:
> >>
> >>> I don't think it is 8812. q.op was completely ignored by edismax prior
> to
> >>> 5.5, so it is not mm that changed.
> >>
> >> That is not the case. Prior to 5.5, mm would be automatically set to
> 100%
> >> if q.op==AND
> >> See https://issues.apache.org/jira/browse/SOLR-1889 and
> >> https://svn.apache.org/viewvc?view=revision&revision=950710
> >>
> >> Jan
>
>


Re: Solr /export and dates (Solr 5.5.1)

2016-06-01 Thread Ronald Wood

Thanks! I'm glad to find out I'm not going crazy.

I'll keep a lookout for that enhancement.

Ronald S. Wood

Immediate customer support:
Call 1-866-762-7741 (x2) or email supp...@smarsh.com

On Jun 1, 2016, at 21:45, Joel Bernstein 
> wrote:

The documentation is wrong for sure. We need a new example query.

I was just discussing the date issue with Erick Erickson the other day. I
believe he is working on adding dates to the export handler but I didn't
see a jira ticket for this yet. We'll also need to add dates to the /export
handler for date support in the Parallel SQL interface.

Erick, if you're reading this, let us know if this is in the works.




Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Jun 1, 2016 at 8:15 PM, Ronald Wood 
> wrote:

I have spent a bit of time with the export handler in 5.5.1 (since we are
unable to upgrade directly from 4 to 6). The speed looks impressive at
first glance compared to paging with cursors.

However, I am deeply confused that it does not seem to be possible to
either sort on or get date values when doing an export.

I say deeply confused, because the example in the Reference Guide is this:


http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg

Now, I suppose you could argue that timestamp's schema type isn't shown,
so maybe it's an epochal integer value.

Certainly when I try to get our date field (defined as TrieDateField in
our schema) I get this error:

java.io.IOException: Export fields must either be one of the following
types: int,float,long,double,string
 at
org.apache.solr.response.SortingResponseWriter.getFieldWriters(SortingResponseWriter.java:277)
 at
org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:120)
 at
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:52)
 ...

And I see date is not a type there. However, int, float, long and double
are also Trie types, so I'm not sure why a TrieDateField could not also be
sorted or exported.

I wonder if someone could elucidate this. I would have thought getting
dates out of an export or stream would be highly desirable. I am definitely
open to the high likelihood I am doing something wrong.

I apologize if this topic has been covered before, as I was unable to find
a way to search the mailing list on the Apache mail archives site. I wonder
if there's some search engine out there that could do that kind of thing?

Ronald S. Wood | Senior Software Developer
857-991-7681 (mobile)

Smarsh
100 Franklin St. Suite 903 | Boston, MA 02210
1-866-SMARSH-1 | 971-998-9967 (fax)
www.smarsh.com

Immediate customer support:
Call 1-866-762-7741 (x2) or visit 
www.smarsh.com/support<
http://www.smarsh.com/support>



Re: Solr /export and dates (Solr 5.5.1)

2016-06-01 Thread Joel Bernstein
The documentation is wrong for sure. We need a new example query.

I was just discussing the date issue with Erick Erickson the other day. I
believe he is working on adding dates to the export handler but I didn't
see a jira ticket for this yet. We'll also need to add dates to the /export
handler for date support in the Parallel SQL interface.

Erick, if you're reading this, let us know if this is in the works.




Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Jun 1, 2016 at 8:15 PM, Ronald Wood  wrote:

> I have spent a bit of time with the export handler in 5.5.1 (since we are
> unable to upgrade directly from 4 to 6). The speed looks impressive at
> first glance compared to paging with cursors.
>
> However, I am deeply confused that it does not seem to be possible to
> either sort on or get date values when doing an export.
>
> I say deeply confused, because the example in the Reference Guide is this:
>
>
> http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
>
> Now, I suppose you could argue that timestamp’s schema type isn’t shown,
> so maybe it’s an epochal integer value.
>
> Certainly when I try to get our date field (defined as TrieDateField in
> our schema) I get this error:
>
> java.io.IOException: Export fields must either be one of the following
> types: int,float,long,double,string
>   at
> org.apache.solr.response.SortingResponseWriter.getFieldWriters(SortingResponseWriter.java:277)
>   at
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:120)
>   at
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:52)
>   ...
>
> And I see date is not a type there. However, int, float, long and double
> are also Trie types, so I’m not sure why a TrieDateField could not also be
> sorted or exported.
>
> I wonder if someone could elucidate this. I would have thought getting
> dates out of an export or stream would be highly desirable. I am definitely
> open to the high likelihood I am doing something wrong.
>
> I apologize if this topic has been covered before, as I was unable to find
> a way to search the mailing list on the Apache mail archives site. I wonder
> if there’s some search engine out there that could do that kind of thing? 
>
>


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Joe Lawson
2.0 is compiled with Solr 5 and Java 7. It uses the namespace
solr.SynonymExpandingExtendedDismaxQParserPlugin

5.0.4 is compiled with Solr 6 and Java 8 and is the first release that made
it to maven central. It uses the namespace
com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin

The features are the same for all versions.

Hope this clears things up.

-Joe
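[Editor's note: to make the two namespaces concrete, the solrconfig.xml registration differs only in the class attribute. A minimal sketch follows; the parser name is whatever you reference as defType, and the synonym-analyzer configuration the plugin also needs inside the element is omitted here.]

```xml
<!-- 2.0 jar (Solr 5 / Java 7): class resolves under org.apache.solr.search -->
<queryParser name="synonym_edismax"
             class="solr.SynonymExpandingExtendedDismaxQParserPlugin"/>

<!-- 5.0.4 jar (Solr 6 / Java 8, on Maven Central): new namespace -->
<queryParser name="synonym_edismax"
             class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin"/>
```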
On Jun 1, 2016 8:11 PM, "John Bickerstaff"  wrote:

> Just to be clear, I got version 2.0 of the jar from github...  should I be
> looking for something in a maven repository?  A bit confused at this point
> given all the version numbers...
>
> I want the latest and greatest unless there are any special considerations...
>
> Thanks for the assistance!
> On Jun 1, 2016 5:46 PM, "MaryJo Sminkey"  wrote:
>
> Yup that was the issue for us as well. It doesn't seem to be throwing the
> class error now, although I have not been able to successfully get back
> results that seem to be using it, it's showing up as the deftype in my
> params but the QParser in my debug is the normal edismax one. I will have
> to play around with my config some more tomorrow and try to figure out what
> we're doing wrong.
>
> MJ
>
>
>
> On Wed, Jun 1, 2016 at 6:38 PM, Joe Lawson <
> jlaw...@opensourceconnections.com> wrote:
>
> > Nothing up until 5.0.4 was distributed on maven central. 5.0 -> 5.0.4 was
> > just a bunch of clean up to get it ready for maven (including the
> namespace
> > change).
> >
> > Being that nearly all docs and articles talking about the plugin
> reference
> > the old 2.0 one could reasonably get confused as to what config to use
> esp
> > when I linked the latest 5.0.4 test config prior.
> >
> > You can get the older jars from the links off the readme.md.
> > On Jun 1, 2016 6:14 PM, "Shawn Heisey"  wrote:
> >
> > On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> > > @Joe:
> > >
> > > Is it possible that the jar's package name does not match the entry in
> > the
> > > sample solrconfig.xml file?
> > >
> > > The solrconfig.xml example file in the test directory contains the
> > > following package name:
> > > <queryParser name="synonym_edismax"
> > > class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
> > >
> > > However, the jar file (when unzipped) has the following directory
> > structure
> > > down to the same class name:
> > >
> > > org --> apache --> solr --> search
> > >
> > > I just tried with the name change to the org.apache package name in
> > the
> > > solrconfig.xml file and got no errors.
> >
> > Looks like the package name is indeed the problem here.
> >
> > They changed the package name from org.apache.solr.search to
> > com.github.healthonnet.search in the LATEST source code release --
> > 5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
> > in the earlier message) uses org.apache.solr.search.
> >
> > I cannot find any files in the 2.0.0 zipfile download that contain the
> > new package name, so I'm curious where the incorrect information on how
> > to configure Solr to use the plugin was found.  I did not check the
> > tarball download.
> >
> > Thanks,
> > Shawn
> >
>


Solr /export and dates (Solr 5.5.1)

2016-06-01 Thread Ronald Wood
I have spent a bit of time with the export handler in 5.5.1 (since we are 
unable to upgrade directly from 4 to 6). The speed looks impressive at first 
glance compared to paging with cursors.

However, I am deeply confused that it does not seem to be possible to either 
sort on or get date values when doing an export.

I say deeply confused, because the example in the Reference Guide is this:

http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg

Now, I suppose you could argue that timestamp’s schema type isn’t shown, so 
maybe it’s an epochal integer value.

Certainly when I try to get our date field (defined as TrieDateField in our 
schema) I get this error:

java.io.IOException: Export fields must either be one of the following types: 
int,float,long,double,string
  at 
org.apache.solr.response.SortingResponseWriter.getFieldWriters(SortingResponseWriter.java:277)
  at 
org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:120)
  at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:52)
  ...

And I see date is not a type there. However, int, float, long and double are 
also Trie types, so I’m not sure why a TrieDateField could not also be sorted 
or exported.

I wonder if someone could elucidate this. I would have thought getting dates 
out of an export or stream would be highly desirable. I am definitely open to 
the high likelihood I am doing something wrong.

I apologize if this topic has been covered before, as I was unable to find a 
way to search the mailing list on the Apache mail archives site. I wonder if 
there’s some search engine out there that could do that kind of thing? 
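[Editor's note: one workaround until /export grows date support (my assumption, not something suggested in this thread) is to index the timestamp a second time into a docValues-enabled long field holding epoch milliseconds, sort and export on that, and convert back on the client:]

```python
from datetime import datetime, timezone

def ms_to_iso(epoch_ms):
    """Convert an exported epoch-milliseconds long back to an ISO-8601 UTC string."""
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

print(ms_to_iso(1464825600000))  # 2016-06-02T00:00:00Z
```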

Ronald S. Wood | Senior Software Developer
857-991-7681 (mobile)

Smarsh
100 Franklin St. Suite 903 | Boston, MA 02210
1-866-SMARSH-1 | 971-998-9967 (fax)
www.smarsh.com

Immediate customer support:
Call 1-866-762-7741 (x2) or visit 
www.smarsh.com/support


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Just to be clear, I got version 2.0 of the jar from GitHub... should I be
looking for something in a maven repository? A bit confused at this point
given all the version numbers...

I want the latest and greatest unless there are any special considerations...

Thanks for the assistance!
On Jun 1, 2016 5:46 PM, "MaryJo Sminkey"  wrote:

Yup that was the issue for us as well. It doesn't seem to be throwing the
class error now, although I have not been able to successfully get back
results that seem to be using it, it's showing up as the deftype in my
params but the QParser in my debug is the normal edismax one. I will have
to play around with my config some more tomorrow and try to figure out what
we're doing wrong.

MJ



On Wed, Jun 1, 2016 at 6:38 PM, Joe Lawson <
jlaw...@opensourceconnections.com> wrote:

> Nothing up until 5.0.4 was distributed on maven central. 5.0 -> 5.0.4 was
> just a bunch of clean up to get it ready for maven (including the
namespace
> change).
>
> Given that nearly all docs and articles talking about the plugin reference
> the old 2.0 one, people could reasonably get confused as to what config to
> use, especially since I linked the latest 5.0.4 test config earlier.
>
> You can get the older jars from the links off the readme.md.
> On Jun 1, 2016 6:14 PM, "Shawn Heisey"  wrote:
>
> On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> > @Joe:
> >
> > Is it possible that the jar's package name does not match the entry in
> the
> > sample solrconfig.xml file?
> >
> > The solrconfig.xml example file in the test directory contains the
> > following package name:
> > <queryParser name="synonym_edismax"
> > class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
> >
> > However, the jar file (when unzipped) has the following directory
> structure
> > down to the same class name:
> >
> > org --> apache --> solr --> search
> >
> > I just tried with the name change to the org.apache package name in
> the
> > solrconfig.xml file and got no errors.
>
> Looks like the package name is indeed the problem here.
>
> They changed the package name from org.apache.solr.search to
> com.github.healthonnet.search in the LATEST source code release --
> 5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
> in the earlier message) uses org.apache.solr.search.
>
> I cannot find any files in the 2.0.0 zipfile download that contain the
> new package name, so I'm curious where the incorrect information on how
> to configure Solr to use the plugin was found.  I did not check the
> tarball download.
>
> Thanks,
> Shawn
>


Re: Using solr with increasing complicated access control

2016-06-01 Thread Lisheng Zhang
Erick, very sorry that I misspelled your name earlier! Later I read more
and found that Lucene seems to implement approach 2 (search a few times
and combine results). I guess when joining becomes complicated the
performance may suffer? I will try to study more later.

Thanks for the help, Lisheng
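[Editor's note: a minimal pure-Python sketch of the "search twice and combine results" idea, with invented documents and field names. Run the inner query, collect its join-key values, then filter the outer result set against that key set -- roughly what a query-time join has to do, and why performance degrades as the key set grows.]

```python
# Hypothetical documents; 'acl_group' is the join key between the two sets.
groups = [
    {"id": "g1", "name": "engineering", "member": "lisheng"},
    {"id": "g2", "name": "sales", "member": "someone-else"},
]
docs = [
    {"id": "d1", "title": "roadmap", "acl_group": "g1"},
    {"id": "d2", "title": "forecast", "acl_group": "g2"},
]

def join_search(outer_docs, inner_docs, member):
    # pass 1: run the inner query -> set of join keys the user may see
    allowed = {g["id"] for g in inner_docs if g["member"] == member}
    # pass 2: filter the outer result set against that key set
    return [d for d in outer_docs if d["acl_group"] in allowed]

print([d["id"] for d in join_search(docs, groups, "lisheng")])  # ['d1']
```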

On Wed, Jun 1, 2016 at 12:34 PM, Lisheng Zhang  wrote:

> Eric: thanks very much for your quick response (somehow msg was sent to
> spam initially, sorry about that)
>
> yes the rules has to be complicated beyond my control, we also tried to
> filter after search, but after data amount grows, it becomes slow ..
>
> Right now Lucene has features like document blocks and joins to simulate
> relational database behavior. Did Lucene implement its join by:
>
> 1/ internally flatten out documents to generate one new document
> 2/ or search more than once, then merge results
> 3/ or better way i could not see?
>
> For now i only need a high level understanding, thanks for your helps,
> Lisheng
>
>
> On Mon, May 23, 2016 at 6:23 PM, Erick Erickson 
> wrote:
>
>> I know this seems facetious, but Talk to your
>> clients about _why_ they want such increasingly
>> complex access requirements. Often the logic
>> is pretty flawed for the complexity. Things like
>> "allow user X to see document Y if they're part of
>> groups A, B, C but not D or E unless they are
>> also part of sub-group F and it's raining outside"...
>>
>> If the rules _must_ be complicated, that's what
>> post-filters were actually invented for. Pretty often
>> I'll build in some "bailout" because whatever you
>> build has, eventually, to deal with the system
>> admin searching all documents, i.e. doing the
>> ACL calcs for every document.
>>
>> Best,
>> Erick
>>
>> On Mon, May 23, 2016 at 6:02 PM, Lisheng Zhang 
>> wrote:
>> > Hi, i have been using solr for many years and it is VERY helpful.
>> >
>> > My problem is that our app has an increasingly more complicated access
>> > control to satisfy client's requirement, in solr/lucene  it means we
>> need
>> > to add more and more fields into each document and use more and more
>> > complicated filter conditions, so code is hard to maintain and indexing
>> > becomes a serious issue because we want to search as real time as
>> possible.
>> >
>> > I would appreciate a high level guidance on how to deal with this issue?
>> > recently i investigated mySQL fulltext search (our app uses mySQL),
>> using
>> > mySQL means we simply reuse DB for access control, but mySQL fulltext
>> > search performance is far from ideal compared to solr.
>> >
>> > Thanks very much for helps, Lisheng
>>
>
>


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread MaryJo Sminkey
Yup, that was the issue for us as well. It doesn't seem to be throwing the
class error now, although I have not been able to successfully get back
results that seem to be using it: it's showing up as the defType in my
params, but the QParser in my debug is the normal edismax one. I will have
to play around with my config some more tomorrow and try to figure out what
we're doing wrong.

MJ



On Wed, Jun 1, 2016 at 6:38 PM, Joe Lawson <
jlaw...@opensourceconnections.com> wrote:

> Nothing up until 5.0.4 was distributed on maven central. 5.0 -> 5.0.4 was
> just a bunch of clean up to get it ready for maven (including the namespace
> change).
>
> Being that nearly all docs and articles talking about the plugin reference
> the old 2.0 one could reasonably get confused as to what config to use esp
> when I linked the latest 5.0.4 test config prior.
>
> You can get the older jars from the links off the readme.md.
> On Jun 1, 2016 6:14 PM, "Shawn Heisey"  wrote:
>
> On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> > @Joe:
> >
> > Is it possible that the jar's package name does not match the entry in
> the
> > sample solrconfig.xml file?
> >
> > The solrconfig.xml example file in the test directory contains the
> > following package name:
> >  >
>
> class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
> >
> > However, the jar file (when unzipped) has the following directory
> structure
> > down to the same class name:
> >
> > org --> apache --> solr --> search
> >
> > I just tried with the name change to the org.apache package name in
> the
> > solrconfig.xml file and got no errors.
>
> Looks like the package name is indeed the problem here.
>
> They changed the package name from org.apache.solr.search to
> com.github.healthonnet.search in the LATEST source code release --
> 5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
> in the earlier message) uses org.apache.solr.search.
>
> I cannot find any files in the 2.0.0 zipfile download that contain the
> new package name, so I'm curious where the incorrect information on how
> to configure Solr to use the plugin was found.  I did not check the
> tarball download.
>
> Thanks,
> Shawn
>


StandardTokenizer behaviour with apostrophe and colon

2016-06-01 Thread Vincenzo D'Amore
Hi all,

StandardTokenizer doesn't split text at an apostrophe (punctuation mark
' ) or at a colon (punctuation mark : ).

Just to be clear, looking at the documentation all punctuation marks are
delimiters, with an exception for periods (dots), so I suppose that an
Italian word like "nell'aria" should be split in two words "nell" and
"aria".

So I have bypassed the problem using a WordDelimiterFilterFactory.

Is this a bug or an undocumented behaviour? In any case, what to do next?

Best regards,
Vincenzo


-- 
Vincenzo D'Amore
email: v.dam...@gmail.com
skype: free.dev
mobile: +39 349 8513251
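For what it's worth, StandardTokenizer (since Lucene 3.1) implements the Unicode UAX#29 word-break rules, under which an apostrophe or colon sitting between two letters does not break the token. So keeping "nell'aria" whole is documented behavior rather than a bug, and WordDelimiterFilterFactory is the usual way to get the extra split. A rough Python sketch of the two behaviors (illustrative only, not Lucene's implementation):

```python
import re

def standard_like(text):
    # Approximates UAX#29: an apostrophe, period, or colon BETWEEN letters
    # does not break the token (the MidLetter / MidNumLet rules).
    return re.findall(r"[^\W\d_]+(?:['.:][^\W\d_]+)*", text)

def word_delimiter_like(text):
    # Approximates WordDelimiterFilter: split on any non-alphanumeric.
    return re.findall(r"[^\W_]+", text)

print(standard_like("nell'aria"))        # one token, apostrophe kept
print(word_delimiter_like("nell'aria"))  # split into "nell" and "aria"
```

The first pattern keeps the mid-word punctuation only when a letter follows it; the second splits on every non-alphanumeric character, which is the behavior the WordDelimiterFilterFactory workaround produces.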


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Joe Lawson
Nothing up until 5.0.4 was distributed on maven central. 5.0 -> 5.0.4 was
just a bunch of clean up to get it ready for maven (including the namespace
change).

Being that nearly all docs and articles talking about the plugin reference
the old 2.0 one could reasonably get confused as to what config to use esp
when I linked the latest 5.0.4 test config prior.

You can get the older jars from the links off the readme.md.
On Jun 1, 2016 6:14 PM, "Shawn Heisey"  wrote:

On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> @Joe:
>
> Is it possible that the jar's package name does not match the entry in the
> sample solrconfig.xml file?
>
> The solrconfig.xml example file in the test directory contains the
> following package name:
> 
class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
>
> However, the jar file (when unzipped) has the following directory
structure
> down to the same class name:
>
> org --> apache --> solr --> search
>
> I just tried with the name change to the org.apache package name in
the
> solrconfig.xml file and got no errors.

Looks like the package name is indeed the problem here.

They changed the package name from org.apache.solr.search to
com.github.healthonnet.search in the LATEST source code release --
5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
in the earlier message) uses org.apache.solr.search.

I cannot find any files in the 2.0.0 zipfile download that contain the
new package name, so I'm curious where the incorrect information on how
to configure Solr to use the plugin was found.  I did not check the
tarball download.

Thanks,
Shawn


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Shawn Heisey
On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> @Joe:
>
> Is it possible that the jar's package name does not match the entry in the
> sample solrconfig.xml file?
>
> The solrconfig.xml example file in the test directory contains the
> following package name:
>  class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
>
> However, the jar file (when unzipped) has the following directory structure
> down to the same class name:
>
> org --> apache --> solr --> search
>
> I just tried with the name change to the org.apache package name in the
> solrconfig.xml file and got no errors.

Looks like the package name is indeed the problem here.

They changed the package name from org.apache.solr.search to
com.github.healthonnet.search in the LATEST source code release --
5.0.4.  The code in the 5.0.3 version (and the 2.0.0 version indicated
in the earlier message) uses org.apache.solr.search.

I cannot find any files in the 2.0.0 zipfile download that contain the
new package name, so I'm curious where the incorrect information on how
to configure Solr to use the plugin was found.  I did not check the
tarball download.

Thanks,
Shawn



Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Joe Lawson
I mean the 5.0 namespace is different from the 2.0 not 3.0.
On Jun 1, 2016 5:43 PM, "Joe Lawson" 
wrote:

2.0 is different from 3.0 so check the test config that is associated with
the 2.0 release. Ie


https://github.com/healthonnet/hon-lucene-synonyms/blob/8f736da053510911517fcb8a712b1d8ca5c920d2/src/test/resources/solr/collection1/conf/example_solrconfig.xml


On Jun 1, 2016 3:10 PM, "John Bickerstaff"  wrote:

> @Joe:
>
> Is it possible that the jar's package name does not match the entry in the
> sample solrconfig.xml file?
>
> The solrconfig.xml example file in the test directory contains the
> following package name:
> 
> class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
>
> However, the jar file (when unzipped) has the following directory structure
> down to the same class name:
>
> org --> apache --> solr --> search
>
> I just tried with the name change to the org.apache package name in the
> solrconfig.xml file and got no errors.
>
> I haven't yet tried to see synonym "stuff" in the debug for a query, but
> I'm betting it's much ado about nothing - just the package name has
> changed...
>
> If that makes sense to you, you may want to edit the example file...
>
> Thanks a lot for all the work you contributed to this by the way!
>
> --JohnB
>
> @ MaryJo - this may be the problem in your situation for this specific file
> -- good luck!
>
> I put it in $SOLR_HOME/lib  - which, taking the default "for production"
> install script on Ubuntu resolved to /var/solr/data/lib
>
> Good luck!
>
> On Wed, Jun 1, 2016 at 12:49 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > I tried this - it didn't fail.  I don't know if it really started in
> > Denable.runtime.lib=true mode or not:
> >
> > service solr start -Denable.runtime.lib=true
> >
> > Of course, I'd still really rather be able to just drop jars into
> > /var/solr/data/lib and have them work...
> >
> > Thanks all.
> >
> > On Wed, Jun 1, 2016 at 12:42 PM, John Bickerstaff <
> > j...@johnbickerstaff.com> wrote:
> >
> >> So - the instructions on using the Blob Store API say to use the
> >> Denable.runtime.lib=true option when starting Solr.
> >>
> >> Thing is, I've installed per the "for production" instructions which
> >> gives me an entry in /etc/init.d called solr.
> >>
> >> Two questions.
> >>
> >> To test this can I still use the start.jar in /opt/solr/server as long
> as
> >> I issue the "cloud mode" flag or does that no longer work in 5.x?
> >>
> >> Do I instead have to modify that start script in /etc/init.d ?
> >>
> >> On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff <
> >> j...@johnbickerstaff.com> wrote:
> >>
> >>> Ahhh - gotcha.
> >>>
> >>> Well, not sure why it's not picked up - seems lots of other jars are...
> >>> Maybe Joe will comment...
> >>>
> >>> On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey 
> >>> wrote:
> >>>
>  That refers to running Solr in cloud mode. We aren't there yet.
> 
>  MJ
> 
> 
> 
>  On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
>  j...@johnbickerstaff.com>
>  wrote:
> 
>  > Hi Mary Jo,
>  >
>  > I'll point you to Joe's earlier comment about needing to use the
> Blob
>  Store
>  > API...  He put a link in his response.
>  >
>  > I'm about to try that today...  Given that Joe is a contributor to
>  > hon_lucene there's a good chance his experience is correct here
> -
>  > especially given the evidence you just provided...
>  >
>  > Here's a copy - paste for your convenience.  It's a bit convoluted,
>  > although I totally get how this kind of approach is great for large
>  Solr
>  > Cloud installations that have machines or VMs coming up and going
>  down as
>  > part of a services-based approach...
>  >
>  > Joe said:
>  > The docs are out of date for the synonym_edismax but it does work.
>  Check
>  > out the tests for working examples. I'll try to update it soon. I've
>  run
>  > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
>  > SolrCloud make sure you follow
>  >
>  >
> 
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
>  >
>  > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey <
> mjsmin...@gmail.com>
>  > wrote:
>  >
>  > > So we still can't get this to work, here's the latest update my
>  server
>  > guy
>  > > gave me: It seems to not matter where the file is located, it does
>  not
>  > > load. Yet, the Solr Java class path shows the file has loaded.
>  Only
>  > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work
> in
>  that
>  > it
>  > > loads in the java class path.  I've yet to find out what the error
>  is.
>  > All
>  > > I can see is this "Error loading class". 

RE: [E] Re: Simple Question about SimplePostTool

2016-06-01 Thread Jamal, Sarfaraz
Thank you.

Sas

-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com] 
Sent: Wednesday, June 1, 2016 4:34 PM
To: solr-user@lucene.apache.org
Subject: [E] Re: Simple Question about SimplePostTool

Yes, you can add “literal” field values with bin/post:

   bin/post -c test ~/Documents/Test.pdf  -params "literal.foo=bar"

See 
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Solr+Cell+using+Apache+Tika#UploadingDatawithSolrCellusingApacheTika-InputParameters
 for details on what parameters you can use with “rich document” indexing.

—
Erik Hatcher, Senior Solutions Architect http://www.lucidworks.com



> On Jun 1, 2016, at 3:28 PM, Jamal, Sarfaraz 
>  wrote:
> 
> Hi Guys,
> 
> I am a newbie at Solr, so I may have some very simple questions.
> I am also waiting for my book to arrive.
> 
> Can the SimplePostTool be used to add additional fields when indexing a 
> word/excel/text.
> 
> So, for example, as I index a word document, I pass in a parameter 
> saying team=avengers
> 
> Or something along the lines of that -
> 
> Thank you,
> 
> Sas



Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Joe Lawson
2.0 is different from 3.0 so check the test config that is associated with
the 2.0 release. Ie


https://github.com/healthonnet/hon-lucene-synonyms/blob/8f736da053510911517fcb8a712b1d8ca5c920d2/src/test/resources/solr/collection1/conf/example_solrconfig.xml


On Jun 1, 2016 3:10 PM, "John Bickerstaff"  wrote:

> @Joe:
>
> Is it possible that the jar's package name does not match the entry in the
> sample solrconfig.xml file?
>
> The solrconfig.xml example file in the test directory contains the
> following package name:
> 
> class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin">
>
> However, the jar file (when unzipped) has the following directory structure
> down to the same class name:
>
> org --> apache --> solr --> search
>
> I just tried with the name change to the org.apache package name in the
> solrconfig.xml file and got no errors.
>
> I haven't yet tried to see synonym "stuff" in the debug for a query, but
> I'm betting it's much ado about nothing - just the package name has
> changed...
>
> If that makes sense to you, you may want to edit the example file...
>
> Thanks a lot for all the work you contributed to this by the way!
>
> --JohnB
>
> @ MaryJo - this may be the problem in your situation for this specific file
> -- good luck!
>
> I put it in $SOLR_HOME/lib  - which, taking the default "for production"
> install script on Ubuntu resolved to /var/solr/data/lib
>
> Good luck!
>
> On Wed, Jun 1, 2016 at 12:49 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > I tried this - it didn't fail.  I don't know if it really started in
> > Denable.runtime.lib=true mode or not:
> >
> > service solr start -Denable.runtime.lib=true
> >
> > Of course, I'd still really rather be able to just drop jars into
> > /var/solr/data/lib and have them work...
> >
> > Thanks all.
> >
> > On Wed, Jun 1, 2016 at 12:42 PM, John Bickerstaff <
> > j...@johnbickerstaff.com> wrote:
> >
> >> So - the instructions on using the Blob Store API say to use the
> >> Denable.runtime.lib=true option when starting Solr.
> >>
> >> Thing is, I've installed per the "for production" instructions which
> >> gives me an entry in /etc/init.d called solr.
> >>
> >> Two questions.
> >>
> >> To test this can I still use the start.jar in /opt/solr/server as long
> as
> >> I issue the "cloud mode" flag or does that no longer work in 5.x?
> >>
> >> Do I instead have to modify that start script in /etc/init.d ?
> >>
> >> On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff <
> >> j...@johnbickerstaff.com> wrote:
> >>
> >>> Ahhh - gotcha.
> >>>
> >>> Well, not sure why it's not picked up - seems lots of other jars are...
> >>> Maybe Joe will comment...
> >>>
> >>> On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey 
> >>> wrote:
> >>>
>  That refers to running Solr in cloud mode. We aren't there yet.
> 
>  MJ
> 
> 
> 
>  On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
>  j...@johnbickerstaff.com>
>  wrote:
> 
>  > Hi Mary Jo,
>  >
>  > I'll point you to Joe's earlier comment about needing to use the
> Blob
>  Store
>  > API...  He put a link in his response.
>  >
>  > I'm about to try that today...  Given that Joe is a contributor to
>  > hon_lucene there's a good chance his experience is correct here
> -
>  > especially given the evidence you just provided...
>  >
>  > Here's a copy - paste for your convenience.  It's a bit convoluted,
>  > although I totally get how this kind of approach is great for large
>  Solr
>  > Cloud installations that have machines or VMs coming up and going
>  down as
>  > part of a services-based approach...
>  >
>  > Joe said:
>  > The docs are out of date for the synonym_edismax but it does work.
>  Check
>  > out the tests for working examples. I'll try to update it soon. I've
>  run
>  > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
>  > SolrCloud make sure you follow
>  >
>  >
> 
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
>  >
>  > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey <
> mjsmin...@gmail.com>
>  > wrote:
>  >
>  > > So we still can't get this to work, here's the latest update my
>  server
>  > guy
>  > > gave me: It seems to not matter where the file is located, it does
>  not
>  > > load. Yet, the Solr Java class path shows the file has loaded.
>  Only
>  > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work
> in
>  that
>  > it
>  > > loads in the java class path.  I've yet to find out what the error
>  is.
>  > All
>  > > I can see is this "Error loading class". Okay, but why? What error
>  was
>  > > encountered in trying to load the class?  I can't find any of this
>  > > information. 

Re: After Solr 5.5, mm parameter doesn't work properly

2016-06-01 Thread Jan Høydahl
Edismax used to default to mm=100% and not care about q.op at all

Then from 4.x it did not care about q.op if mm was set explicitly,
but if mm was not set, then q.op=OR —> mm=0%, q.op=AND —> mm=100%

And from 5.5 it seems as q.op does something even if mm is set...

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 1. jun. 2016 kl. 23.05 skrev Greg Pendlebury :
> 
> But isn't that the default value? In this case the OP is setting mm
> explicitly to 2.
> 
> Will have to look at those code links more thoroughly at work this morning.
> Apologies if I am wrong.
> 
> Ta,
> Greg
> 
> On Wednesday, 1 June 2016, Jan Høydahl  wrote:
> 
>>> 1. jun. 2016 kl. 03.47 skrev Greg Pendlebury > >:
>> 
>>> I don't think it is 8812. q.op was completely ignored by edismax prior to
>>> 5.5, so it is not mm that changed.
>> 
>> That is not the case. Prior to 5.5, mm would be automatically set to 100%
>> if q.op==AND
>> See https://issues.apache.org/jira/browse/SOLR-1889 and
>> https://svn.apache.org/viewvc?view=revision&revision=950710
>> 
>> Jan
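The pre-5.5 defaulting described above (from SOLR-1889) boils down to a few lines. The following is an illustrative sketch of that rule, not Solr's actual code:

```python
def effective_mm(mm=None, q_op="OR"):
    """Pre-5.5 edismax behavior as described above: an explicit mm wins;
    otherwise mm is derived from q.op (AND -> 100%, OR -> 0%)."""
    if mm is not None:
        return mm
    return "100%" if q_op.upper() == "AND" else "0%"

print(effective_mm())                    # 0%   (default q.op=OR)
print(effective_mm(q_op="AND"))          # 100%
print(effective_mm(mm="2", q_op="AND"))  # 2    (explicit mm wins pre-5.5)
```

The behavior change reported in this thread is that from 5.5 the last case no longer holds: q.op influences the result even when mm is set explicitly.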



Re: Script to upgrade a Solr index from 4.x to 6.x

2016-06-01 Thread Jan Høydahl
It is a completely "use at your own risk" script :-)
Freshly written, tested on some 4.8.0 indexes.
PR’s welcome :-)

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 1. jun. 2016 kl. 14.36 skrev Brendan Humphreys :
> 
> Hi Jan,
> 
> Thanks for the script! I for one will definitely try it out.
> 
> Can you comment on how battle-tested it is?
> 
> Are there any limitations or drawbacks?
> 
> Cheers,
> -Brendan
> 
> On Wednesday, 1 June 2016, Jan Høydahl  wrote:
> 
>> Hi
>> 
>> Need to upgrade from Solr 4.x directly to the new 6.0?
>> Here is a script that does it automatically for all your cores:
>> 
>> https://github.com/cominvent/solr-tools/blob/master/upgradeindex/upgradeindex.sh
>> 
>> 
>> USAGE:
>>  Script to Upgrade old indices from 4.x and 5.x to 6.x format, so it can
>> be used with Solr 6.x or 7.x
>>  Usage: ./upgradeindex.sh [-s] 
>> 
>>  Example: ./upgradeindex.sh /var/lib/solr
>>  Please run the tool only on a cold index (no Solr running)
>>  The script leaves a backup in
>> //data/index_backup_.tgz. Use -s to skip
>> backup
>>  Requires wget or curl to download dependencies
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>> 
> 
> -- 
> 
> 



Re: After Solr 5.5, mm parameter doesn't work properly

2016-06-01 Thread Greg Pendlebury
But isn't that the default value? In this case the OP is setting mm
explicitly to 2.

Will have to look at those code links more thoroughly at work this morning.
Apologies if I am wrong.

Ta,
Greg

On Wednesday, 1 June 2016, Jan Høydahl  wrote:

> > 1. jun. 2016 kl. 03.47 skrev Greg Pendlebury  >:
>
> > I don't think it is 8812. q.op was completely ignored by edismax prior to
> > 5.5, so it is not mm that changed.
>
> That is not the case. Prior to 5.5, mm would be automatically set to 100%
> if q.op==AND
> See https://issues.apache.org/jira/browse/SOLR-1889 and
> https://svn.apache.org/viewvc?view=revision&revision=950710
>
> Jan


Re: Simple Question about SimplePostTool

2016-06-01 Thread Erik Hatcher
Yes, you can add “literal” field values with bin/post:

   bin/post -c test ~/Documents/Test.pdf  -params "literal.foo=bar"

See 
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Solr+Cell+using+Apache+Tika#UploadingDatawithSolrCellusingApacheTika-InputParameters
 for details on what parameters you can use with “rich document” indexing.

—
Erik Hatcher, Senior Solutions Architect
http://www.lucidworks.com



> On Jun 1, 2016, at 3:28 PM, Jamal, Sarfaraz 
>  wrote:
> 
> Hi Guys,
> 
> I am a newbie at Solr, so I may have some very simple questions.
> I am also waiting for my book to arrive.
> 
> Can the SimplePostTool be used to add additional fields when indexing a 
> word/excel/text.
> 
> So, for example, as I index a word document, I pass in a parameter saying 
> team=avengers
> 
> Or something along the lines of that -
> 
> Thank you,
> 
> Sas
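Each literal.<fieldname> parameter becomes a stored value on the document that Solr Cell indexes. As a sketch, the extract request that bin/post issues can be built by hand with the Python standard library (the base URL, collection name, and field name here are just examples):

```python
from urllib.parse import urlencode

def extract_url(base, collection, **literals):
    """Build a Solr Cell /update/extract URL carrying literal.* field
    values. 'base' and 'collection' are hypothetical examples."""
    params = {"literal." + k: v for k, v in literals.items()}
    params["commit"] = "true"
    return "{}/{}/update/extract?{}".format(
        base, collection, urlencode(sorted(params.items())))

url = extract_url("http://localhost:8983/solr", "test", team="avengers")
print(url)
# http://localhost:8983/solr/test/update/extract?commit=true&literal.team=avengers
```

The file itself would then be POSTed to that URL; any number of literal.* parameters can be attached the same way.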



SolrCloud with SSL and Basic Authentication

2016-06-01 Thread Tempes, Piotr (Consultant)
I have asked the same question on Stack Overflow:
http://stackoverflow.com/questions/37577074/solrcloud-with-ssl-and-basic-authentication

Is it possible to configure SolrCloud with SSL and Basic Authentication?

I have configured 3 nodes of Solr in SolrCloud with SSL using this:
https://cwiki.apache.org/confluence/display/solr/Enabling+SSL

and I have added authentication and authorization following these:
https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin
https://cwiki.apache.org/confluence/display/solr/Rule-Based+Authorization+Plugin

When only SSL is enabled, it works.

When only authentication + authorization are enabled, it works.

when both are enabled I get following stacktrace during startup:

2016-06-01 17:19:41.933 INFO  
(OverseerStateUpdate-168013962670440512-172.30.92.66:8983_solr-n_79) [  
 ] o.a.s.c.o.ZkStateWriter going to update_collection 
/collections/testowa/state.json version: 1350

2016-06-01 17:19:41.935 INFO  
(zkCallback-4-thread-1-processing-n:172.30.92.66:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/testowa/state.json] 
for collection [testowa] has occurred - updating... (live nodes size: [3])

2016-06-01 17:19:41.937 INFO  
(zkCallback-4-thread-1-processing-n:172.30.92.66:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader Updating data for [testowa] from [1350] to [1351]

2016-06-01 17:19:43.557 INFO  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 x:testowa_shard1_replica3] o.a.s.c.ShardLeaderElectionContext 
Enough replicas found to continue.

2016-06-01 17:19:43.557 INFO  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 x:testowa_shard1_replica3] o.a.s.c.ShardLeaderElectionContext I 
may be the new leader - try and sync

2016-06-01 17:19:43.557 INFO  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 x:testowa_shard1_replica3] o.a.s.c.SyncStrategy Sync replicas to 
https://172.30.92.66:8983/solr/testowa_shard1_replica3/

2016-06-01 17:19:43.561 INFO  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 x:testowa_shard1_replica3] o.a.s.u.PeerSync PeerSync: 
core=testowa_shard1_replica3 url=https://172.30.92.66:8983/solr START 
replicas=[https://172.30.182.43:8983/solr/testowa_shard1_replica1/, 
https://172.30.182.44:8983/solr/testowa_shard1_replica2/] nUpdates=100

2016-06-01 17:19:44.580 WARN  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 x:testowa_shard1_replica3] o.a.s.u.PeerSync PeerSync: 
core=testowa_shard1_replica3 url=https://172.30.92.66:8983/solr  exception 
talking to https://172.30.182.44:8983/solr/testowa_shard1_replica2/, failed

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://172.30.182.44:8983/solr/testowa_shard1_replica2: 
Expected mime type application/octet-stream but got text/html. 

Error 401 Unauthorized request, Response code: 401

HTTP ERROR 401
Problem accessing /solr/testowa_shard1_replica2/get. Reason:

    Unauthorized request, Response code: 401

at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:545)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:198)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:277)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522)
at java.util.concurrent.FutureTask.run(FutureTask.java:277)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$3.3C022970.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:785)

2016-06-01 17:19:44.582 INFO  
(coreZkRegister-1-thread-1-processing-n:172.30.92.66:8983_solr 
x:testowa_shard1_replica3 s:shard1 c:testowa r:core_node1) [c:testowa s:shard1 
r:core_node1 

RuntimeLib classes with Analyzers?

2016-06-01 Thread King Rhoton
This message on the solr-users mailing list from September, 2015 claims
> That is a current limitation of the blob store API. It can only be 
> used to load plugins in solrconfig.xml. It does not support loading 
> schema plugins such as analyzers, tokenizers.

But at

https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode

I see:
> When running Solr in SolrCloud mode and you want to use custom code
> (such as custom analyzers, tokenizers, query parsers, and other plugins),
> it can be cumbersome to add jars to the classpath on all nodes in your 
> cluster.
> Using the Blob Store API and special commands with the Config API, you
> can upload jars to a special system-level collection and dynamically load
> plugins from them at runtime without needing to restart any nodes.

So, can you actually use a blob-store-loaded jar to get a class to implement a 
custom analyzer?

It seems to me like any collection directive that takes a "class=" attribute 
should also support a
"runtimeLib=true" attribute.

Thanks.
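For the solrconfig.xml side, the cwiki page above does document Config API commands for this. A sketch of the two payloads follows (the plugin, parser, and class names are placeholders); note these only cover solrconfig-level components, which is exactly why the analyzer/tokenizer question remains open:

```json
{"add-runtimelib":     {"name": "myplugin", "version": 1}}
{"create-queryparser": {"name": "myparser",
                        "class": "com.example.MyQParserPlugin",
                        "runtimeLib": true, "version": 1}}
```

Each payload would be POSTed separately to the collection's /config endpoint after uploading the jar to the .system blob store.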

Re: Using solr with increasing complicated access control

2016-06-01 Thread Lisheng Zhang
Eric: thanks very much for your quick response (somehow msg was sent to
spam initially, sorry about that)

yes the rules have to be complicated beyond my control, we also tried to
filter after search, but as the data amount grows, it becomes slow ..

Right now lucene has features like document block or join to simulate
relational database behavior, did lucene implement join by:

1/ internally flattening out documents to generate one new document
2/ or searching more than once, then merging results
3/ or a better way i could not see?

For now i only need a high level understanding, thanks for your helps,
Lisheng


On Mon, May 23, 2016 at 6:23 PM, Erick Erickson 
wrote:

> I know this seems facetious, but Talk to your
> clients about _why_ they want such increasingly
> complex access requirements. Often the logic
> is pretty flawed for the complexity. Things like
> "allow user X to see document Y if they're part of
> groups A, B, C but not D or E unless they are
> also part of sub-group F and it's raining outside"...
>
> If the rules _must_ be complicated, that's what
> post-filters were actually invented for. Pretty often
> I'll build in some "bailout" because whatever you
> build has, eventually, to deal with the system
> admin searching all documents, i.e. doing the
> ACL calcs for every document.
>
> Best,
> Erick
>
> On Mon, May 23, 2016 at 6:02 PM, Lisheng Zhang 
> wrote:
> > Hi, i have been using solr for many years and it is VERY helpful.
> >
> > My problem is that our app has an increasingly more complicated access
> > control to satisfy client's requirement, in solr/lucene  it means we need
> > to add more and more fields into each document and use more and more
> > complicated filter conditions, so code is hard to maintain and indexing
> > becomes a serious issue because we want to search as real time as
> possible.
> >
> > I would appreciate a high level guidance on how to deal with this issue?
> > recently i investigated mySQL fulltext search (our app uses mySQL), using
> > mySQL means we simply reuse DB for access control, but mySQL fulltext
> > search performance is far from ideal compared to solr.
> >
> > Thanks very much for helps, Lisheng
>
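The post-filter shape Erick describes (run the cheap query first, apply the expensive ACL predicate only to candidate hits, and bail out for users who can see everything) looks roughly like this in plain Python; it illustrates the logic only and is not Solr's PostFilter API:

```python
def acl_post_filter(hits, user, can_view):
    """hits: documents matched by the main query, in score order.
    can_view(user, doc): the (potentially expensive) ACL predicate."""
    # Bailout: a superuser would otherwise pay the ACL cost per document.
    if user.get("is_admin"):
        return list(hits)
    return [doc for doc in hits if can_view(user, doc)]

docs = [{"id": 1, "groups": {"A"}}, {"id": 2, "groups": {"D"}}]
allow = lambda user, doc: bool(user["groups"] & doc["groups"])
print(acl_post_filter(docs, {"is_admin": False, "groups": {"A"}}, allow))
# only doc 1 survives the ACL check
```

The point of doing this as a post-filter is that the ACL fields stay out of the index, so access-rule changes don't force reindexing; the cost moves to query time, which is why the admin bailout matters.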


Solr off-heap FieldCache & HelioSearch

2016-06-01 Thread Phillip Peleshok
Hey everyone,

I've been using Solr for some time now and running into GC issues as most
others have.  I've exhausted all the traditional GC settings
recommended by various individuals (i.e. Shawn Heisey, etc.) but none
proved sufficient.  The one solution I've seen that proved useful is
Heliosearch and its off-heap implementation.

My question is this: why wasn't the off-heap FieldCache implementation
(http://yonik.com/hs-solr-off-heap-fieldcache-performance/) ever rolled into
Solr when the other Heliosearch improvements were merged? Was there a
fundamental design problem, or just a matter of the time/testing that would be
incurred by the move?

Thanks,
Phil


Simple Question about SimplePostTool

2016-06-01 Thread Jamal, Sarfaraz
Hi Guys,

I am a newbie at Solr, so I may have some very simple questions.
I am also waiting for my book to arrive.

Can the SimplePostTool be used to add additional fields when indexing a 
word/excel/text.

So, for example, as I index a word document, I pass in a parameter saying 
team=avengers

Or something along the lines of that -

Thank you,

Sas


Any Performance Study to show Effect Of Shards with Replica on QPS

2016-06-01 Thread Siddhartha Singh Sandhu
Hi,

I wanted to know if anyone had a link to a graphical study showing the
correlation between shards and replicas against QPS.

I have this link:
http://www.slideshare.net/thelabdude/solr-performance which compares
indexing performance between shards and replicas.

I know that adding replicas will add to QPS. But I would like to know more
about the effect.

Regards,

Sid.


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
@Joe:

Is it possible that the jar's package name does not match the entry in the
sample solrconfig.xml file?

The solrconfig.xml example file in the test directory contains the
following package name:


However, the jar file (when unzipped) has the following directory structure
down to the same class name:

org --> apache --> solr --> search

I just tried with the name change to the org.apache package name in the
solrconfig.xml file and got no errors.

I haven't yet tried to see synonym "stuff" in the debug for a query, but
I'm betting it's much ado about nothing - just the package name has
changed...

If that makes sense to you, you may want to edit the example file...

Thanks a lot for all the work you contributed to this by the way!

--JohnB

@ MaryJo - this may be the problem in your situation for this specific file
-- good luck!

I put it in $SOLR_HOME/lib  - which, taking the default "for production"
install script on Ubuntu resolved to /var/solr/data/lib

Good luck!

On Wed, Jun 1, 2016 at 12:49 PM, John Bickerstaff 
wrote:

> I tried this - it didn't fail.  I don't know if it really started in
> Denable.runtime.lib=true mode or not:
>
> service solr start -Denable.runtime.lib=true
>
> Of course, I'd still really rather be able to just drop jars into
> /var/solr/data/lib and have them work...
>
> Thanks all.
>
> On Wed, Jun 1, 2016 at 12:42 PM, John Bickerstaff <
> j...@johnbickerstaff.com> wrote:
>
>> So - the instructions on using the Blob Store API say to use the
>> -Denable.runtime.lib=true option when starting Solr.
>>
>> Thing is, I've installed per the "for production" instructions which
>> gives me an entry in /etc/init.d called solr.
>>
>> Two questions.
>>
>> To test this can I still use the start.jar in /opt/solr/server as long as
>> I issue the "cloud mode" flag or does that no longer work in 5.x?
>>
>> Do I instead have to modify that start script in /etc/init.d ?
>>
>> On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff <
>> j...@johnbickerstaff.com> wrote:
>>
>>> Ahhh - gotcha.
>>>
>>> Well, not sure why it's not picked up - seems lots of other jars are...
>>> Maybe Joe will comment...
>>>
>>> On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey 
>>> wrote:
>>>
 That refers to running Solr in cloud mode. We aren't there yet.

 MJ



 On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
 j...@johnbickerstaff.com>
 wrote:

 > Hi Mary Jo,
 >
 > I'll point you to Joe's earlier comment about needing to use the Blob
 Store
 > API...  He put a link in his response.
 >
 > I'm about to try that today...  Given that Joe is a contributor to
 > hon_lucene there's a good chance his experience is correct here -
 > especially given the evidence you just provided...
 >
 > Here's a copy - paste for your convenience.  It's a bit convoluted,
 > although I totally get how this kind of approach is great for large
 Solr
 > Cloud installations that have machines or VMs coming up and going
 down as
 > part of a services-based approach...
 >
 > Joe said:
 > The docs are out of date for the synonym_edismax but it does work.
 Check
 > out the tests for working examples. I'll try to update it soon. I've
 run
 > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
 > SolrCloud make sure you follow
 >
 >
 https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
 >
 > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey 
 > wrote:
 >
 > > So we still can't get this to work, here's the latest update my
 server
 > guy
 > > gave me: It seems to not matter where the file is located, it does
 not
 > > load. Yet, the Solr Java class path shows the file has loaded.
 Only
 > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in
 that
 > it
 > > loads in the java class path.  I've yet to find out what the error
 is.
 > All
 > > I can see is this "Error loading class". Okay, but why? What error
 was
 > > encountered in trying to load the class?  I can't find any of this
 > > information. I'm trying to work with the documentation that is
 located
 > here
 > > http://wiki.apache.org/solr/SolrPlugins
 > >
 > > I found that the jar file was put into each of these locations in an
 > > attempt to find a place where it will load without error.
 > >
 > > find .|grep hon-lucene
 > >
 > > ./server/lib/hon-lucene-synonyms-2.0.0.jar
 > >
 > > ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
 > >
 > > ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
 > >
 > > ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
 > >
 > >
 

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
I tried this - it didn't fail.  I don't know if it really started in
-Denable.runtime.lib=true mode or not:

service solr start -Denable.runtime.lib=true

Of course, I'd still really rather be able to just drop jars into
/var/solr/data/lib and have them work...

Thanks all.

On Wed, Jun 1, 2016 at 12:42 PM, John Bickerstaff 
wrote:

> So - the instructions on using the Blob Store API say to use the
> -Denable.runtime.lib=true option when starting Solr.
>
> Thing is, I've installed per the "for production" instructions which gives
> me an entry in /etc/init.d called solr.
>
> Two questions.
>
> To test this can I still use the start.jar in /opt/solr/server as long as
> I issue the "cloud mode" flag or does that no longer work in 5.x?
>
> Do I instead have to modify that start script in /etc/init.d ?
>
> On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff <
> j...@johnbickerstaff.com> wrote:
>
>> Ahhh - gotcha.
>>
>> Well, not sure why it's not picked up - seems lots of other jars are...
>> Maybe Joe will comment...
>>
>> On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey 
>> wrote:
>>
>>> That refers to running Solr in cloud mode. We aren't there yet.
>>>
>>> MJ
>>>
>>>
>>>
>>> On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
>>> j...@johnbickerstaff.com>
>>> wrote:
>>>
>>> > Hi Mary Jo,
>>> >
>>> > I'll point you to Joe's earlier comment about needing to use the Blob
>>> Store
>>> > API...  He put a link in his response.
>>> >
>>> > I'm about to try that today...  Given that Joe is a contributor to
>>> > hon_lucene there's a good chance his experience is correct here -
>>> > especially given the evidence you just provided...
>>> >
>>> > Here's a copy - paste for your convenience.  It's a bit convoluted,
>>> > although I totally get how this kind of approach is great for large
>>> Solr
>>> > Cloud installations that have machines or VMs coming up and going down
>>> as
>>> > part of a services-based approach...
>>> >
>>> > Joe said:
>>> > The docs are out of date for the synonym_edismax but it does work.
>>> Check
>>> > out the tests for working examples. I'll try to update it soon. I've
>>> run
>>> > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
>>> > SolrCloud make sure you follow
>>> >
>>> >
>>> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
>>> >
>>> > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey 
>>> > wrote:
>>> >
>>> > > So we still can't get this to work, here's the latest update my
>>> server
>>> > guy
>>> > > gave me: It seems to not matter where the file is located, it does
>>> not
>>> > > load. Yet, the Solr Java class path shows the file has loaded.
>>> Only
>>> > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in
>>> that
>>> > it
>>> > > loads in the java class path.  I've yet to find out what the error
>>> is.
>>> > All
>>> > > I can see is this "Error loading class". Okay, but why? What error
>>> was
>>> > > encountered in trying to load the class?  I can't find any of this
>>> > > information. I'm trying to work with the documentation that is
>>> located
>>> > here
>>> > > http://wiki.apache.org/solr/SolrPlugins
>>> > >
>>> > > I found that the jar file was put into each of these locations in an
>>> > > attempt to find a place where it will load without error.
>>> > >
>>> > > find .|grep hon-lucene
>>> > >
>>> > > ./server/lib/hon-lucene-synonyms-2.0.0.jar
>>> > >
>>> > > ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
>>> > >
>>> > > ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
>>> > >
>>> > > ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
>>> > >
>>> > > ./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
>>> > >
>>> > >  The config specifies that files in certain paths can be loaded as
>>> > plugins
>>> > > or I can specify a path. Following the instructions I added this path
>>> > >
>>> > >   <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
>>> > > regex=".*\.jar" />
>>> > >
>>> > > And I put the jar file in that location.  This did not work either. I
>>> > also
>>> > > tried using an absolute path like this.
>>> > >
>>> > > <lib
>>> > > dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar"
>>> > > />
>>> > >
>>> > > This did not work.
>>> > >
>>> > >
>>> > >
>>> > > I'm starting to think this isn't a configuration problem, but a
>>> > > compatibility problem. I have not seen anything from the maker of
>>> this
>>> > > plugin that it works on the exact version of Solr we are using.
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > The best info I have found so far in the logs is this stack trace of
>>> the
>>> > > error. It still does not say why it failed to load.
>>> > >
>>> > > 2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ]
>>> > o.a.s.s.HttpSolrCall
>>> > > null:org.apache.solr.common.SolrException: 

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Thanks Jeff.  I've installed "out of the box" with 5.4 and didn't make any
modifications on Ubuntu - so I'm not sure why it wouldn't get picked up,
but I'll keep chipping away at it...

I appreciate the new one to try.  That's a good test.

On Wed, Jun 1, 2016 at 12:45 PM, Jeff Wartes  wrote:

> In the interests of the specific questions to me:
>
> I’m using 5.4, solrcloud.
> I’ve never used the blob store thing, didn’t even know it existed before
> this thread.
>
> I’m uncertain how not finding the class could be specific to hon, it
> really feels like a general solr config issue, but you could try some other
> foreign jar and see if that works.
> Here’s one I use: https://github.com/whitepages/SOLR-4449 (although this
> one is also why I use WEB-INF/lib, because it overrides a protected method,
> so it might not be the greatest example)
>
>
> On 5/31/16, 4:02 PM, "John Bickerstaff"  wrote:
>
> >Thanks Jeff,
> >
> >I believe I tried that, and it still refused to load..  But I'd sure love
> >it to work since the other process is a bit convoluted - although I see
> >it's value in a large Solr installation.
> >
> >When I "locate" the jar on the linux command line I get:
> >
>
> >/opt/solr-5.4.0/server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
> >
> >But the log file is still carrying class not found exceptions when I
> >restart...
> >
> >Are you in "Cloud" mode?  What version of Solr are you using?
> >
> >On Tue, May 31, 2016 at 4:08 PM, Jeff Wartes 
> wrote:
> >
> >> I’ve generally been dropping foreign plugin jars in this dir:
> >> server/solr-webapp/webapp/WEB-INF/lib/
> >> This is because it then gets loaded by the same classloader as Solr
> >> itself, which can be useful if you’re, say, overriding some
> >> solr-protected-space method.
> >>
> >> If you don’t care about the classloader, I believe you can use whatever
> >> dir you want, with the appropriate bit of solrconfig.xml to load it.
> >> Something like:
> >> 
> >>
> >>
> >> On 5/31/16, 2:13 PM, "John Bickerstaff" 
> wrote:
> >>
> >> >All --
> >> >
> >> >I'm now attempting to use the hon_lucene_synonyms project from github.
> >> >
> >> >I found the documents that were inferred by the dead links on the
> readme in
> >> >the repository -- however, given that I'm using Solr 5.4.x, I no longer
> >> >have the need to integrate into a war file (as far as I can see).
> >> >
> >> >The suggestion on the readme is that I can drop the hon_lucene_synonyms
> >> jar
> >> >file into the $SOLR_HOME directory, but this does not seem to be
> working -
> >> >I'm getting class not found exceptions.
> >> >
> >> >Does anyone on this list have direct experience with getting this
> plugin
> >> to
> >> >work in Solr 5.x?
> >> >
> >> >Thanks in advance...
> >> >
> >> >On Mon, May 30, 2016 at 6:57 PM, MaryJo Sminkey 
> >> wrote:
> >> >
> >> >> It's been awhile since I installed it so I really can't say. I'm more
> >> of a
> >> >> code monkey than a server gal (particularly Linux... I'm amazed I got
> >> Solr
> >> >> installed in the first place, LOL!) So I had asked our network guy to
> >> look
> >> >> it over recently and see if it looked like I did it okay. He said
> since
> >> it
> >> >> shows up in the list of jars in the Solr admin that it's
> installed
> >> if
> >> >> that's not necessarily true, I probably need to point him in the
> right
> >> >> direction for what else to do since he really doesn't know Solr well
> >> >> either.
> >> >>
> >> >> Mary Jo
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On Mon, May 30, 2016 at 7:49 PM, John Bickerstaff <
> >> >> j...@johnbickerstaff.com>
> >> >> wrote:
> >> >>
> >> >> > Thanks for the comment Mary Jo...
> >> >> >
> >> >> > The error loading the class rings a bell - did you find and follow
> >> >> > instructions for adding that to the WAR file?  I vaguely remember
> >> seeing
> >> >> > something about that.
> >> >> >
> >> >> > I'm going to try my own tests on the auto phrasing one..  If I'm
> >> >> > successful, I'll post back.
> >> >> >
> >> >> > On Mon, May 30, 2016 at 3:45 PM, MaryJo Sminkey <
> mjsmin...@gmail.com>
> >> >> > wrote:
> >> >> >
> >> >> > > This is a very timely discussion for me as well as we're trying
> to
> >> >> tackle
> >> >> > > the multi term synonym issue as well and have not been able to
> >> >> hon-lucene
> >> >> > > plugin to work, the jar shows up as installed but when we set up
> the
> >> >> > sample
> >> >> > > request handler it throws this error:
> >> >> > >
> >> >> > >
> >> >> >
> >> >>
> >>
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> >> >> > > Error loading class
> >> >> > >
> >> >> >
> >> >>
> >>
> 'com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin'
> >> >> > >
> >> >> > > I have tried the auto-phrasing one as well (I did set up a field
> >> using
> >> >> > copy
> >> >> > > to configure it on) 

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread Jeff Wartes
In the interests of the specific questions to me:

I’m using 5.4, solrcloud. 
I’ve never used the blob store thing, didn’t even know it existed before this 
thread.

I’m uncertain how not finding the class could be specific to hon, it really 
feels like a general solr config issue, but you could try some other foreign 
jar and see if that works. 
Here’s one I use: https://github.com/whitepages/SOLR-4449 (although this one is 
also why I use WEB-INF/lib, because it overrides a protected method, so it 
might not be the greatest example)
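For reference, the solrconfig.xml directive Jeff mentions in the quoted message below ("Something like:" — the archive stripped the tag itself) is a <lib> element, roughly (the dir value here is only an example):

```xml
<!-- Load every jar matching the regex from the given directory; a relative
     dir is resolved against the core's instanceDir. Path is an example. -->
<lib dir="/opt/solr/plugins/lib" regex=".*\.jar" />
```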


On 5/31/16, 4:02 PM, "John Bickerstaff"  wrote:

>Thanks Jeff,
>
>I believe I tried that, and it still refused to load..  But I'd sure love
>it to work since the other process is a bit convoluted - although I see
>it's value in a large Solr installation.
>
>When I "locate" the jar on the linux command line I get:
>
>/opt/solr-5.4.0/server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
>
>But the log file is still carrying class not found exceptions when I
>restart...
>
>Are you in "Cloud" mode?  What version of Solr are you using?
>
>On Tue, May 31, 2016 at 4:08 PM, Jeff Wartes  wrote:
>
>> I’ve generally been dropping foreign plugin jars in this dir:
>> server/solr-webapp/webapp/WEB-INF/lib/
>> This is because it then gets loaded by the same classloader as Solr
>> itself, which can be useful if you’re, say, overriding some
>> solr-protected-space method.
>>
>> If you don’t care about the classloader, I believe you can use whatever
>> dir you want, with the appropriate bit of solrconfig.xml to load it.
>> Something like:
>> 
>>
>>
>> On 5/31/16, 2:13 PM, "John Bickerstaff"  wrote:
>>
>> >All --
>> >
>> >I'm now attempting to use the hon_lucene_synonyms project from github.
>> >
>> >I found the documents that were inferred by the dead links on the readme in
>> >the repository -- however, given that I'm using Solr 5.4.x, I no longer
>> >have the need to integrate into a war file (as far as I can see).
>> >
>> >The suggestion on the readme is that I can drop the hon_lucene_synonyms
>> jar
>> >file into the $SOLR_HOME directory, but this does not seem to be working -
>> >I'm getting class not found exceptions.
>> >
>> >Does anyone on this list have direct experience with getting this plugin
>> to
>> >work in Solr 5.x?
>> >
>> >Thanks in advance...
>> >
>> >On Mon, May 30, 2016 at 6:57 PM, MaryJo Sminkey 
>> wrote:
>> >
>> >> It's been awhile since I installed it so I really can't say. I'm more
>> of a
>> >> code monkey than a server gal (particularly Linux... I'm amazed I got
>> Solr
>> >> installed in the first place, LOL!) So I had asked our network guy to
>> look
>> >> it over recently and see if it looked like I did it okay. He said since
>> it
>> >> shows up in the list of jars in the Solr admin that it's installed.
>> >> If that's not necessarily true, I probably need to point him in the right
>> >> direction for what else to do since he really doesn't know Solr well
>> >> either.
>> >>
>> >> Mary Jo
>> >>
>> >>
>> >>
>> >>
>> >> On Mon, May 30, 2016 at 7:49 PM, John Bickerstaff <
>> >> j...@johnbickerstaff.com>
>> >> wrote:
>> >>
>> >> > Thanks for the comment Mary Jo...
>> >> >
>> >> > The error loading the class rings a bell - did you find and follow
>> >> > instructions for adding that to the WAR file?  I vaguely remember
>> seeing
>> >> > something about that.
>> >> >
>> >> > I'm going to try my own tests on the auto phrasing one..  If I'm
>> >> > successful, I'll post back.
>> >> >
>> >> > On Mon, May 30, 2016 at 3:45 PM, MaryJo Sminkey 
>> >> > wrote:
>> >> >
>> >> > > This is a very timely discussion for me as well as we're trying to
>> >> tackle
>> >> > > the multi term synonym issue as well and have not been able to
>> >> hon-lucene
>> >> > > plugin to work, the jar shows up as installed but when we set up the
>> >> > sample
>> >> > > request handler it throws this error:
>> >> > >
>> >> > >
>> >> >
>> >>
>> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>> >> > > Error loading class
>> >> > >
>> >> >
>> >>
>> 'com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin'
>> >> > >
>> >> > > I have tried the auto-phrasing one as well (I did set up a field
>> using
>> >> > copy
>> >> > > to configure it on) but when testing it didn't seem to return the
>> >> > synonyms
>> >> > > as expected. So gave up on that one too (am willing to give it
>> another
>> >> > try
>> >> > > though, that was awhile ago). Would definitely like to hear what
>> other
>> >> > > people have found works on the latest versions of Solr 5.x and/or 6.
>> >> Just
>> >> > > sucks that this issue has never been fixed in the core product such
>> >> that
>> >> > > you still need to mess with plugins and patches to get such a basic
>> >> > > functionality working properly.
>> >> > >
>> >> > >
>> >> > > *Mary Jo Sminkey*

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
So - the instructions on using the Blob Store API say to use the
-Denable.runtime.lib=true option when starting Solr.

Thing is, I've installed per the "for production" instructions which gives
me an entry in /etc/init.d called solr.

Two questions.

To test this can I still use the start.jar in /opt/solr/server as long as I
issue the "cloud mode" flag or does that no longer work in 5.x?

Do I instead have to modify that start script in /etc/init.d ?
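For what it's worth: with the "for production" install, extra system properties normally go in the include script rather than the init.d script itself. A sketch, assuming the default paths — the collection and blob names below are placeholders:

```shell
# /etc/default/solr.in.sh is sourced by /etc/init.d/solr at startup
SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true"

# After restarting, upload the jar to the .system collection...
curl -X POST -H 'Content-Type: application/octet-stream' \
  --data-binary @hon-lucene-synonyms.jar \
  'http://localhost:8983/solr/.system/blob/hon-lucene-synonyms'

# ...then reference it from the target collection's config
curl 'http://localhost:8983/solr/mycollection/config' \
  -H 'Content-type: application/json' \
  -d '{"add-runtimelib": {"name": "hon-lucene-synonyms", "version": 1}}'
```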

On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff 
wrote:

> Ahhh - gotcha.
>
> Well, not sure why it's not picked up - seems lots of other jars are...
> Maybe Joe will comment...
>
> On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey 
> wrote:
>
>> That refers to running Solr in cloud mode. We aren't there yet.
>>
>> MJ
>>
>>
>>
>> On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
>> j...@johnbickerstaff.com>
>> wrote:
>>
>> > Hi Mary Jo,
>> >
>> > I'll point you to Joe's earlier comment about needing to use the Blob
>> Store
>> > API...  He put a link in his response.
>> >
>> > I'm about to try that today...  Given that Joe is a contributor to
>> > hon_lucene there's a good chance his experience is correct here -
>> > especially given the evidence you just provided...
>> >
>> > Here's a copy - paste for your convenience.  It's a bit convoluted,
>> > although I totally get how this kind of approach is great for large Solr
>> > Cloud installations that have machines or VMs coming up and going down
>> as
>> > part of a services-based approach...
>> >
>> > Joe said:
>> > The docs are out of date for the synonym_edismax but it does work. Check
>> > out the tests for working examples. I'll try to update it soon. I've run
>> > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
>> > SolrCloud make sure you follow
>> >
>> >
>> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
>> >
>> > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey 
>> > wrote:
>> >
>> > > So we still can't get this to work, here's the latest update my server
>> > guy
>> > > gave me: It seems to not matter where the file is located, it does not
>> > > load. Yet, the Solr Java class path shows the file has loaded.
>> Only
>> > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in
>> that
>> > it
>> > > loads in the java class path.  I've yet to find out what the error is.
>> > All
>> > > I can see is this "Error loading class". Okay, but why? What error was
>> > > encountered in trying to load the class?  I can't find any of this
>> > > information. I'm trying to work with the documentation that is located
>> > here
>> > > http://wiki.apache.org/solr/SolrPlugins
>> > >
>> > > I found that the jar file was put into each of these locations in an
>> > > attempt to find a place where it will load without error.
>> > >
>> > > find .|grep hon-lucene
>> > >
>> > > ./server/lib/hon-lucene-synonyms-2.0.0.jar
>> > >
>> > > ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
>> > >
>> > > ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
>> > >
>> > > ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
>> > >
>> > > ./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
>> > >
>> > >  The config specifies that files in certain paths can be loaded as
>> > plugins
>> > > or I can specify a path. Following the instructions I added this path
>> > >
>> > >   <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
>> > > regex=".*\.jar" />
>> > >
>> > > And I put the jar file in that location.  This did not work either. I
>> > also
>> > > tried using an absolute path like this.
>> > >
>> > > <lib
>> > > dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar"
>> > > />
>> > >
>> > > This did not work.
>> > >
>> > >
>> > >
>> > > I'm starting to think this isn't a configuration problem, but a
>> > > compatibility problem. I have not seen anything from the maker of this
>> > > plugin that it works on the exact version of Solr we are using.
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > The best info I have found so far in the logs is this stack trace of
>> the
>> > > error. It still does not say why it failed to load.
>> > >
>> > > 2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ]
>> > o.a.s.s.HttpSolrCall
>> > > null:org.apache.solr.common.SolrException: SolrCore 'classic_search'
>> is
>> > not
>> > > available due to init failure: Error loading class
>> > > 'com.github.healthonnet.search.Syno
>> > >
>> > > nymExpandingExtendedDismaxQParserPlugin'
>> > >
>> > > at
>> > > org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:993)
>> > >
>> > > at
>> > org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:249)
>> > >
>> > > at
>> > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:411)
>> > >
>> > > at
>> > >
>> > >
>> >
>> 

Re: DocTransformer [explain] not working in Solr 5

2016-06-01 Thread Chris Hostetter
: Subject: DocTransformer [explain] not working in Solr 5
: 
: Not able to get the DocTransformer [explain] to work in Solr 5. I'm sure 
: I'm doing something wrong. But I'm following the example in the 

Hmmm... i don't think you're doing anything wrong.

In Solr 5.5.1 this works as expected on a single-node install...

http://localhost:8983/solr/techproducts/select?q=*:*&fl=id,[explain+style=nl]

...but with a multi-shard cloud collection no "explain" info is returned.


Definitely a bug somewhere -- but i'm not sure where.  

In 6.0.0, 6.0.1, and on the master branch those all work ... that would 
usually indicate some bug that was deliberately fixed, but nothing jumps 
out at me reviewing the CHANGES for 6.0 to suggest a possible 
backport/workaround for you.

(We also don't seem to have any "cloud centric" tests of this doc 
transformer -- so i've opened SOLR-9180 to ensure this doesn't break in 
the future)
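(For anyone wanting to reproduce: the working single-node case can be exercised with, e.g.:)

```shell
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&fl=id,[explain%20style=nl]&wt=json'
```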




-Hoss
http://www.lucidworks.com/


Re: Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Erick Erickson
Issue an explicit commit to be sure.
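(An explicit commit is just an update request with commit=true — the collection name here is a placeholder:)

```shell
curl 'http://localhost:8983/solr/mycollection/update?commit=true'
```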

And as to whether the SSL makes a difference... I'm more
going on the theory that you happened to look after
the autocommit kicked in on the non-SSL case and
before that on the SSL case. Admittedly a shot in the
dark.

Browser caching issues, have tripped me up more
than once too.

Best,
Erick

On Wed, Jun 1, 2016 at 9:26 AM, Ilan Schwarts  wrote:
> Since it's working in non-SSL, I don't think it's a commit issue; it is the same
> PC, I just updated the scheme on ZooKeeper to https and un-commented the SSL
> settings in solr.in.cmd.
> On Jun 1, 2016 7:25 PM, "Ilan Schwarts"  wrote:
>
>> If a document was added on both cores/nodes, doesn't that mean the document
>> was successfully added and committed?
>> On Jun 1, 2016 7:23 PM, "Erick Erickson"  wrote:
>>
>>> Did you issue a commit?
>>>
>>> Best,
>>> Erick
>>>
>>> On Wed, Jun 1, 2016 at 8:15 AM, Ilan Schwarts  wrote:
>>> > Hi all,
>>> > I have a working environment, SolrCloud 5.2.1. When I am using it without
>>> > SSL, after adding a document, I can see on the core's information, under
>>> > "Statistics", that Last Modified is working well; it says "Less than a
>>> > minute".
>>> >
>>> > But when I set SolrCloud to SSL, after adding the document, it is added
>>> > to the collection, but the Last Modified is not updated.
>>> > Is this a bug? A known issue?
>>> >
>>> > Update:
>>> > After restarting all the cores, the value of Last Modified is valid, but it
>>> > happens only after a restart and not after an update to the collection/index
>>> > document.
>>> >
>>> >
>>> > Thanks
>>> >
>>> > --
>>> >
>>> >
>>> > -
>>> > Ilan Schwarts
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> >
>>> > -
>>> > Ilan Schwarts
>>>
>>


Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Chris Hostetter

: Let me try to detail.
: We have our "product" core with a couple of million docs.
: We have a couple of thousand outlets where the products get sold.
: Each product can have a different *tagValue* in each outlet.
: Our "product_tag" core (around 2M times 2000 records), captures tag info of
: each product in each outlet. It has some additional info also (a couple of
: more fields in addition to *tagValue*), pertaining to each
: product-outlet combination and there can be NRT *tag* updates for this core
: (the *tagValue* of each product in each outlet can change and is updated in
: real time). So we moved the volatile portion of product out to a separate
: core which has approx 2M times 2000 records and only 4 or 5 fields per doc.

That information is helpful, but -- as i mentioned before -- to reduce 
miscommunication, providing detailed examples at the document+field level 
is helpful.  ie: make up 2 products, tell us what field values those 
products have in each field (in each collection) and then explain how 
those two products should sort (relative to each other) so that we can see 
a realistic example of what you want to happen.

Based on the information you've provided so far, your question still 
doesn't make any sense to me. 

you've said you want "product results to be bumped up or down if it has a 
particular *tagValue* ... for example products with tagValue=X should be 
at the top" -- but you've also said that "Each product can have a 
different *tagValue* in each outlet", indicating that there is not a simple 
"product->tagValue" relationship.  What you've described is a 
"(product,outlet)->tagValue" relationship.  So even if anything were 
possible, how would Solr know which tagValue to use when deciding how to 
"bump" a product up/down in scoring?

Imagine a given productA was paired with multiple outlets, and one pairing 
with outlet1 was mapped to tagX which you said should sort first, but a 
diff pairing with outlet2 was mapped to tagZ which should sort 
last? ... what do you want to happen in that case?


-Hoss
http://www.lucidworks.com/


Re: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of old version Nutch ?

2016-06-01 Thread Yago Riveiro
I did the process from 4.0 to 4.10 (I have disk docValues in my index) with the
IndexUpgrader tool.  
  
Indeed, I don't know if this process works from 1.4 to 4.10 ...  
  
But googling a bit I found this:
http://stackoverflow.com/questions/25450865/migrate-solr-1-4-index-files-to-4-7  
  
As Erick said, you will need to do this process in several steps before you
reach 5.x.  
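The tool itself is just a class shipped in lucene-core, run from the command line against the index directory — roughly like this (jar versions and paths are illustrative; on 5.x the backward-codecs jar is also needed on the classpath):

```shell
# One pass per major version, oldest first, against the same index directory:
java -cp lucene-core-3.6.2.jar \
  org.apache.lucene.index.IndexUpgrader -verbose /var/solr/data/core1/data/index
java -cp lucene-core-4.10.4.jar \
  org.apache.lucene.index.IndexUpgrader -verbose /var/solr/data/core1/data/index
java -cp 'lucene-core-5.5.0.jar:lucene-backward-codecs-5.5.0.jar' \
  org.apache.lucene.index.IndexUpgrader -verbose /var/solr/data/core1/data/index
```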
  
  
\--

/Yago Riveiro

On Jun 1 2016, at 5:22 pm, Erick Erickson erickerick...@gmail.com
wrote:  

> https://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/IndexUpgrader.html

>

> I'm not sure how far back this tool will work, i.e. I don't know if  
it'll successfully go from 1.4 - 5.x.  
You may have to pull a Solr 3x version to upgrade from 1.4-3x, then a  
4x version to upgrade 3x-4x  
and then finally a 5x version 4x-5x. If the IndexUpgraderTool even  
existed in 3x (that was a  
long time ago!).

>

> You can get old Solr versions here:  


>

> Best,  
Erick

>

> On Wed, Jun 1, 2016 at 8:57 AM, t...@sina.com wrote:  
 Hi, Yago,  
 Could you tell me about the IndexUpgrader tool exactly? Is it a tool released
in the Solr binary, or some command-line tool?  
 Thanks, Liu Peng  
  
 - Original Message -  
 From: Yago Riveiro yago.rive...@gmail.com  
 To: solr-user solr-user@lucene.apache.org, solr-
u...@lucene.apache.org, t...@sina.com  
 Subject: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer
of old version Nutch?  
 Date: 2016-06-01 17:58  
  
  
  
  
 You need to upgrade your index to version 4.10 using the IndexUpgrader
tool.  
  
  
 \--  
  
 Yago Riveiro  
  
  
 On 1 Jun 2016 10:53 +0100, t...@sina.com, wrote:  
  
 Hi,  
  
 We plan to upgrade the solr server to 5.5.0. And we have a customized
crawler based on Nutch 1.2 and Solr 1.4.1.  
  
  
  
 So, the question is: can Solr 5.5 recognize the index result generated by
SolrIndexer of Nutch 1.2?  
  
 Thanks  
  
  
  
  




Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Ahhh - gotcha.

Well, not sure why it's not picked up - seems lots of other jars are...
Maybe Joe will comment...

On Wed, Jun 1, 2016 at 10:22 AM, MaryJo Sminkey  wrote:

> That refers to running Solr in cloud mode. We aren't there yet.
>
> MJ
>
>
>
> On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > Hi Mary Jo,
> >
> > I'll point you to Joe's earlier comment about needing to use the Blob
> Store
> > API...  He put a link in his response.
> >
> > I'm about to try that today...  Given that Joe is a contributor to
> > hon_lucene there's a good chance his experience is correct here -
> > especially given the evidence you just provided...
> >
> > Here's a copy - paste for your convenience.  It's a bit convoluted,
> > although I totally get how this kind of approach is great for large Solr
> > Cloud installations that have machines or VMs coming up and going down as
> > part of a services-based approach...
> >
> > Joe said:
> > The docs are out of date for the synonym_edismax but it does work. Check
> > out the tests for working examples. I'll try to update it soon. I've run
> > the plugin on Solr 5 and 6, solrcloud and standalone. For running in
> > SolrCloud make sure you follow
> >
> >
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
> >
> > On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey 
> > wrote:
> >
> > > So we still can't get this to work, here's the latest update my server
> > guy
> > > gave me: It seems to not matter where the file is located, it does not
> > > load. Yet, the Solr Java class path shows the file has loaded.
> Only
> > > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in
> that
> > it
> > > loads in the java class path.  I've yet to find out what the error is.
> > All
> > > I can see is this "Error loading class". Okay, but why? What error was
> > > encountered in trying to load the class?  I can't find any of this
> > > information. I'm trying to work with the documentation that is located
> > here
> > > http://wiki.apache.org/solr/SolrPlugins
> > >
> > > I found that the jar file was put into each of these locations in an
> > > attempt to find a place where it will load without error.
> > >
> > > find .|grep hon-lucene
> > >
> > > ./server/lib/hon-lucene-synonyms-2.0.0.jar
> > >
> > > ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
> > >
> > > ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
> > >
> > > ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
> > >
> > > ./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
> > >
> > >  The config specifies that files in certain paths can be loaded as
> > plugins
> > > or I can specify a path. Following the instructions I added this path
> > >
> > > <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
> > > regex=".*\.jar" />
> > >
> > > And I put the jar file in that location.  This did not work either. I
> > also
> > > tried using an absolute path like this.
> > >
> > > <lib
> > > dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar"
> > > />
> > >
> > > This did not work.
> > >
> > >
> > >
> > > I'm starting to think this isn't a configuration problem, but a
> > > compatibility problem. I have not seen anything from the maker of this
> > > plugin that it works on the exact version of Solr we are using.
> > >
> > >
> > >
> > >
> > >
> > > The best info I have found so far in the logs is this stack trace of
> the
> > > error. It still does not say why it failed to load.
> > >
> > > 2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ]
> > o.a.s.s.HttpSolrCall
> > > null:org.apache.solr.common.SolrException: SolrCore 'classic_search' is
> > not
> > > available due to init failure: Error loading class
> > > 'com.github.healthonnet.search.Syno
> > >
> > > nymExpandingExtendedDismaxQParserPlugin'
> > >
> > > at
> > > org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:993)
> > >
> > > at
> > org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:249)
> > >
> > > at
> > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:411)
> > >
> > > at
> > >
> > >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> > >
> > > at
> > >
> > >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> > >
> > > at
> > >
> > >
> >
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> > >
> > > at
> > >
> >
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> > >
> > > at
> > >
> > >
> >
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> > >
> > > at
> > >
> >
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> > >
> > > 

Re: Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Ilan Schwarts
Since it's working without SSL, I don't think it's a commit issue. It is the same
PC; I just updated the scheme in ZooKeeper to https and un-commented the SSL
settings in solr.in.cmd.
On Jun 1, 2016 7:25 PM, "Ilan Schwarts"  wrote:

> If a document was added on both cores/nodes, doesn't that mean the document
> was successfully added and committed?
> On Jun 1, 2016 7:23 PM, "Erick Erickson"  wrote:
>
>> Did you issue a commit?
>>
>> Best,
>> Erick
>>
>> On Wed, Jun 1, 2016 at 8:15 AM, Ilan Schwarts  wrote:
>> > Hi all,
>> > I have a working SolrCloud 5.2.1 environment. When I run without SSL,
>> > after adding a document I can see on the core's information page, under
>> > "Statistics", that Last Modified is updated correctly; it reads "Less than a
>> > minute".
>> >
>> > But when I set SolrCloud to SSL, after adding the document it is added
>> > to the collection, but Last Modified is not updated.
>> > Is this a bug or a known issue?
>> >
>> > Update:
>> > After restarting all the cores, the value of Last Modified is valid. But it
>> > happens only after a restart, not after an update of the collection/index
>> > document.
>> >
>> >
>> > Thanks
>> >
>> > --
>> >
>> >
>> > -
>> > Ilan Schwarts
>> >
>> >
>> >
>> > --
>> >
>> >
>> > -
>> > Ilan Schwarts
>>
>


Re: Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Ilan Schwarts
If a document was added on both cores/nodes, doesn't that mean the document
was successfully added and committed?
On Jun 1, 2016 7:23 PM, "Erick Erickson"  wrote:

> Did you issue a commit?
>
> Best,
> Erick
>
> On Wed, Jun 1, 2016 at 8:15 AM, Ilan Schwarts  wrote:
> > Hi all,
> > I have a working SolrCloud 5.2.1 environment. When I run without SSL,
> > after adding a document I can see on the core's information page, under
> > "Statistics", that Last Modified is updated correctly; it reads "Less than a
> > minute".
> >
> > But when I set SolrCloud to SSL, after adding the document it is added
> > to the collection, but Last Modified is not updated.
> > Is this a bug or a known issue?
> >
> > Update:
> > After restarting all the cores, the value of Last Modified is valid. But it
> > happens only after a restart, not after an update of the collection/index
> > document.
> >
> >
> > Thanks
> >
> > --
> >
> >
> > -
> > Ilan Schwarts
> >
> >
> >
> > --
> >
> >
> > -
> > Ilan Schwarts
>


Re: Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Erick Erickson
Did you issue a commit?

Best,
Erick

On Wed, Jun 1, 2016 at 8:15 AM, Ilan Schwarts  wrote:
> Hi all,
> I have a working SolrCloud 5.2.1 environment. When I run without SSL,
> after adding a document I can see on the core's information page, under
> "Statistics", that Last Modified is updated correctly; it reads "Less than a
> minute".
>
> But when I set SolrCloud to SSL, after adding the document it is added
> to the collection, but Last Modified is not updated.
> Is this a bug or a known issue?
>
> Update:
> After restarting all the cores, the value of Last Modified is valid. But it
> happens only after a restart, not after an update of the collection/index
> document.
>
>
> Thanks
>
> --
>
>
> -
> Ilan Schwarts
>
>
>
> --
>
>
> -
> Ilan Schwarts


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread MaryJo Sminkey
That refers to running Solr in cloud mode. We aren't there yet.

MJ



On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff 
wrote:

> Hi Mary Jo,
>
> I'll point you to Joe's earlier comment about needing to use the Blob Store
> API...  He put a link in his response.
>
> I'm about to try that today...  Given that Joe is a contributor to
> hon_lucene there's a good chance his experience is correct here -
> especially given the evidence you just provided...
>
> Here's a copy - paste for your convenience.  It's a bit convoluted,
> although I totally get how this kind of approach is great for large Solr
> Cloud installations that have machines or VMs coming up and going down as
> part of a services-based approach...
>
> Joe said:
> The docs are out of date for the synonym_edismax but it does work. Check
> out the tests for working examples. I'll try to update it soon. I've run
> the plugin on Solr 5 and 6, solrcloud and standalone. For running in
> SolrCloud make sure you follow
>
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
>
> On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey 
> wrote:
>
> > So we still can't get this to work, here's the latest update my server
> guy
> > gave me: It seems to not matter where the file is located, it does not
> > load. Yet the Solr Java class path shows the file has loaded.  Only
> > this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in that
> it
> > loads in the java class path.  I've yet to find out what the error is.
> All
> > I can see is this "Error loading class". Okay, but why? What error was
> > encountered in trying to load the class?  I can't find any of this
> > information. I'm trying to work with the documentation that is located
> here
> > http://wiki.apache.org/solr/SolrPlugins
> >
> > I found that the jar file was put into each of these locations in an
> > attempt to find a place where it will load without error.
> >
> > find .|grep hon-lucene
> >
> > ./server/lib/hon-lucene-synonyms-2.0.0.jar
> >
> > ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
> >
> > ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
> >
> > ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
> >
> > ./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
> >
> >  The config specifies that files in certain paths can be loaded as
> plugins
> > or I can specify a path. Following the instructions I added this path
> >
> > <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
> > regex=".*\.jar" />
> >
> > And I put the jar file in that location.  This did not work either. I
> also
> > tried using an absolute path like this.
> >
> > <lib
> > dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar"
> > />
> >
> > This did not work.
> >
> >
> >
> > I'm starting to think this isn't a configuration problem, but a
> > compatibility problem. I have not seen anything from the maker of this
> > plugin that it works on the exact version of Solr we are using.
> >
> >
> >
> >
> >
> > The best info I have found so far in the logs is this stack trace of the
> > error. It still does not say why it failed to load.
> >
> > 2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ]
> o.a.s.s.HttpSolrCall
> > null:org.apache.solr.common.SolrException: SolrCore 'classic_search' is
> not
> > available due to init failure: Error loading class
> > 'com.github.healthonnet.search.Syno
> >
> > nymExpandingExtendedDismaxQParserPlugin'
> >
> > at
> > org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:993)
> >
> > at
> org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:249)
> >
> > at
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:411)
> >
> > at
> >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> >
> > at
> >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> >
> > at
> >
> >
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> >
> > at
> >
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> >
> > at
> >
> >
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> >
> > at
> >
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> >
> > at
> >
> >
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> >
> > at
> >
> >
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> >
> > at
> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> >
> > at
> >
> >
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> >
> > at
> >
> >
> 

Re: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of old version Nutch ?

2016-06-01 Thread Erick Erickson
https://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/IndexUpgrader.html

I'm not sure how far back this tool will work, i.e. I don't know if
it'll successfully go from 1.4 -> 5.x.
You may have to pull a Solr 3x version to upgrade from 1.4->3x, then a
4x version to upgrade 3x->4x
and then finally a 5x version 4x->5x. If the IndexUpgraderTool even
existed in 3x (that was a
long time ago!).

You can get old Solr versions here:
http://archive.apache.org/dist/lucene/solr/

Best,
Erick

On Wed, Jun 1, 2016 at 8:57 AM,   wrote:
> Hi, Yago,
> Could you tell me exactly what the IndexUpgrade tool is? Is it a tool released in the
> Solr binary, or a command-line tool?
> Thanks,
> Liu Peng
>
> - Original Message -
> From: Yago Riveiro
> To: solr-user, solr-user@lucene.apache.org,
> t...@sina.com
> Subject: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of
> old version Nutch ?
> Date: 2016-06-01 17:58
>
>
>
>
> You need to upgrade your index to version 4.10 using the IndexUpgrade tool.
>
>
> --
>
> Yago Riveiro
>
>
> On 1 Jun 2016 10:53 +0100, t...@sina.com, wrote:
>
> Hi,
>
> We plan to upgrade the solr server to 5.5.0. And we have a customized crawler 
> based on Nutch 1.2 and Solr 1.4.1.
>
>
>
> So, the question is: can Solr 5.5 recognize the index result generated by 
> SolrIndexer of Nutch 1.2?
>
> Thanks
>
>
>
>
>
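The stepwise path Erick outlines (1.4 -> 3.x -> 4.x -> 5.x) boils down to running Lucene's IndexUpgrader once per major hop, each time with that hop's lucene-core jar. A minimal sketch, in which the jar versions and index path are assumptions, and the commands are only printed rather than executed:

```shell
# Stepwise upgrade sketch: jar versions and index path are assumptions.
# Run each hop on a cold index (Solr stopped); the tool rewrites it in place.
INDEX=/var/solr/data/mycore/data/index
for JAR in lucene-core-3.6.2.jar lucene-core-4.10.4.jar lucene-core-5.5.0.jar; do
  CMD="java -cp $JAR org.apache.lucene.index.IndexUpgrader -verbose $INDEX"
  echo "$CMD"   # swap echo for direct execution once the jars are in place
done
```

The old lucene-core jars ship inside the corresponding Solr releases available from the archive link above.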


Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Hi Mary Jo,

I'll point you to Joe's earlier comment about needing to use the Blob Store
API...  He put a link in his response.

I'm about to try that today...  Given that Joe is a contributor to
hon_lucene there's a good chance his experience is correct here -
especially given the evidence you just provided...

Here's a copy - paste for your convenience.  It's a bit convoluted,
although I totally get how this kind of approach is great for large Solr
Cloud installations that have machines or VMs coming up and going down as
part of a services-based approach...

Joe said:
The docs are out of date for the synonym_edismax but it does work. Check
out the tests for working examples. I'll try to update it soon. I've run
the plugin on Solr 5 and 6, solrcloud and standalone. For running in
SolrCloud make sure you follow
https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode

On Wed, Jun 1, 2016 at 10:15 AM, MaryJo Sminkey  wrote:

> So we still can't get this to work, here's the latest update my server guy
> gave me: It seems to not matter where the file is located, it does not
> load. Yet the Solr Java class path shows the file has loaded.  Only
> this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in that it
> loads in the java class path.  I've yet to find out what the error is. All
> I can see is this "Error loading class". Okay, but why? What error was
> encountered in trying to load the class?  I can't find any of this
> information. I'm trying to work with the documentation that is located here
> http://wiki.apache.org/solr/SolrPlugins
>
> I found that the jar file was put into each of these locations in an
> attempt to find a place where it will load without error.
>
> find .|grep hon-lucene
>
> ./server/lib/hon-lucene-synonyms-2.0.0.jar
>
> ./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar
>
> ./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar
>
> ./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar
>
> ./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar
>
>  The config specifies that files in certain paths can be loaded as plugins
> or I can specify a path. Following the instructions I added this path
>
> <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
> regex=".*\.jar" />
>
> And I put the jar file in that location.  This did not work either. I also
> tried using an absolute path like this.
>
> <lib
> dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar"
> />
>
> This did not work.
>
>
>
> I'm starting to think this isn't a configuration problem, but a
> compatibility problem. I have not seen anything from the maker of this
> plugin that it works on the exact version of Solr we are using.
>
>
>
>
>
> The best info I have found so far in the logs is this stack trace of the
> error. It still does not say why it failed to load.
>
> 2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ] o.a.s.s.HttpSolrCall
> null:org.apache.solr.common.SolrException: SolrCore 'classic_search' is not
> available due to init failure: Error loading class
> 'com.github.healthonnet.search.Syno
>
> nymExpandingExtendedDismaxQParserPlugin'
>
> at
> org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:993)
>
> at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:249)
>
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:411)
>
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
>
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>
> at
>
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>
> at
>
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>
> at
>
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>
> at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>
> at 

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread MaryJo Sminkey
So we still can't get this to work, here's the latest update my server guy
gave me: It seems to not matter where the file is located, it does not
load. Yet the Solr Java class path shows the file has loaded.  Only
this path (./server/lib/hon-lucene-synonyms-2.0.0.jar) will work in that it
loads in the java class path.  I've yet to find out what the error is. All
I can see is this "Error loading class". Okay, but why? What error was
encountered in trying to load the class?  I can't find any of this
information. I'm trying to work with the documentation that is located here
http://wiki.apache.org/solr/SolrPlugins

I found that the jar file was put into each of these locations in an
attempt to find a place where it will load without error.

find .|grep hon-lucene

./server/lib/hon-lucene-synonyms-2.0.0.jar

./server/solr/plugins/hon-lucene-synonyms-2.0.0.jar

./server/solr/classic_newdb/lib/hon-lucene-synonyms-2.0.0.jar

./server/solr/classic_search/lib/hon-lucene-synonyms-2.0.0.jar

./server/solr-webapp/webapp/WEB-INF/lib/hon-lucene-synonyms-2.0.0.jar

 The config specifies that files in certain paths can be loaded as plugins
or I can specify a path. Following the instructions I added this path

  <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib" regex=".*\.jar" />

And I put the jar file in that location.  This did not work either. I also
tried using an absolute path like this.

<lib dir="/opt/solr/contrib/hon-lucene-synonyms/lib/hon-lucene-synonyms-2.0.0.jar" />

This did not work.



I'm starting to think this isn't a configuration problem, but a
compatibility problem. I have not seen anything from the maker of this
plugin that it works on the exact version of Solr we are using.





The best info I have found so far in the logs is this stack trace of the
error. It still does not say why it failed to load.

2016-06-01 00:22:13.470 ERROR (qtp2096057945-14) [   ] o.a.s.s.HttpSolrCall
null:org.apache.solr.common.SolrException: SolrCore 'classic_search' is not
available due to init failure: Error loading class
'com.github.healthonnet.search.Syno

nymExpandingExtendedDismaxQParserPlugin'

at
org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:993)

at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:249)

at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:411)

at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)

at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)

at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)

at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)

at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)

at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)

at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)

at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)

at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)

at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)

at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)

at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)

at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)

at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)

at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)

at org.eclipse.jetty.server.Server.handle(Server.java:499)

at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)

at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)

at
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)

at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)

at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.solr.common.SolrException: Error loading class
'com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin'

at org.apache.solr.core.SolrCore.(SolrCore.java:824)

at org.apache.solr.core.SolrCore.(SolrCore.java:665)

at org.apache.solr.core.CoreContainer.create(CoreContainer.java:742)

at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:462)

at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:453)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:232)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at
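For reference, the two solrconfig.xml pieces this thread is debugging can be combined in one minimal sketch: the <lib> directive that loads the jar, and the query-parser registration that maps defType=synonym_edismax to the plugin class named in the stack trace above. The jar location and parser name are assumptions; treat this as an untested outline, not the plugin's documented configuration.

```xml
<config>
  <!-- Load the plugin jar (path is an assumed layout relative to instanceDir) -->
  <lib dir="${solr.install.dir:../../../..}/contrib/hon-lucene-synonyms/lib"
       regex=".*\.jar" />

  <!-- Map defType=synonym_edismax to the plugin class from the stack trace -->
  <queryParser name="synonym_edismax"
               class="com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin" />
</config>
```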

Re: Solr Cloud and Multi-word Synonyms :: synonym_edismax parser

2016-06-01 Thread John Bickerstaff
Thanks Shawn

Yup - I created a /lib inside my $SOLR_HOME directory (which by default was
/var/solr/data)

I put the hon_lucene. jar file in there and rebooted - same errors
about class not found.

Tried again in what looked like the next most obvious spot
server/solr-webapp/webapp/WEB-INF/lib

Same result...  Class not found.

I'll go back and triple check

Joe - is that recommendation of using the Blob Store API an absolute?  I
know my IT guys are going to want to have the signing - it would be a lot
easier to just drop in jars we care about without worrying about the
signing.  Yes - I'm being lazy, I know. 

Thanks all!

On Tue, May 31, 2016 at 11:35 PM, Shawn Heisey  wrote:

> On 5/31/2016 3:13 PM, John Bickerstaff wrote:
> > The suggestion on the readme is that I can drop the
> > hon_lucene_synonyms jar file into the $SOLR_HOME directory, but this
> > does not seem to be working - I'm getting class not found exceptions.
>
> What I typically do with *all* extra jars (dataimport, mysql, ICU jars,
> etc) is put them into $SOLR_HOME/lib ... a directory that you will
> usually need to create.  If the installer script is used with default
> options, that directory will be /var/solr/data/lib.
>
> Any jar that you place in that directory will be loaded once at Solr
> startup and available to all cores.  The best thing about this directory
> is that it requires zero configuration.
>
> For 5.3 and later, loading jars into
> server/solr-webapp/webapp/WEB-INF/lib should also work, but then you are
> modifying the actual Solr install, which I normally avoid because it
> makes it a little bit harder to upgrade Solr.
>
> > Does anyone on this list have direct experience with getting this
> > plugin to work in Solr 5.x?
>
> I don't have any experience with that specific plugin, but I have
> successfully used other plugin jars with the lib directory mentioned above.
>
> Thanks,
> Shawn
>
>
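Shawn's zero-configuration lib directory can be sketched as below; a temp directory and an empty placeholder file stand in for a real Solr home and the real plugin jar, and both names are assumptions.

```shell
# Layout demo only: mktemp and touch stand in for a real install and jar.
SOLR_HOME="${SOLR_HOME:-$(mktemp -d)}"    # a default install uses /var/solr/data
mkdir -p "$SOLR_HOME/lib"                 # this directory usually must be created
touch "$SOLR_HOME/lib/hon-lucene-synonyms-5.0.4.jar"   # placeholder jar name
ls "$SOLR_HOME/lib"    # jars here load once at Solr startup, visible to all cores
```

After placing the real jar, restart Solr so the directory is rescanned.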


Reply: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of old version Nutch ?

2016-06-01 Thread tjlp
Hi, Yago,
Could you tell me exactly what the IndexUpgrade tool is? Is it a tool released in the
Solr binary, or a command-line tool?
Thanks,
Liu Peng

- Original Message -
From: Yago Riveiro
To: solr-user, solr-user@lucene.apache.org,
t...@sina.com
Subject: Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of old
version Nutch ?
Date: 2016-06-01 17:58




You need to upgrade your index to version 4.10 using the IndexUpgrade tool.


--

Yago Riveiro


On 1 Jun 2016 10:53 +0100, t...@sina.com, wrote:

Hi,

We plan to upgrade the solr server to 5.5.0. And we have a customized crawler 
based on Nutch 1.2 and Solr 1.4.1.



So, the question is: can Solr 5.5 recognize the index result generated by 
SolrIndexer of Nutch 1.2?

Thanks







Fwd: Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Ilan Schwarts
Hi all,
I have a working SolrCloud 5.2.1 environment. When I run without SSL,
after adding a document I can see on the core's information page, under
"Statistics", that Last Modified is updated correctly; it reads "Less than a
minute".

But when I set SolrCloud to SSL, after adding the document it is added
to the collection, but Last Modified is not updated.
Is this a bug or a known issue?

Update:
After restarting all the cores, the value of Last Modified is valid. But it
happens only after a restart, not after an update of the collection/index document.


Thanks

-- 


-
Ilan Schwarts



-- 


-
Ilan Schwarts


Last modified time is not updated on on SolrCloud 5.2.1 SSL

2016-06-01 Thread Ilan Schwarts
Hi all,
I have a working SolrCloud 5.2.1 environment. When I run without SSL,
after adding a document I can see on the core's information page, under
"Statistics", that Last Modified is updated correctly; it reads "Less than a
minute".

But when I set SolrCloud to SSL, after adding the document it is added
to the collection, but Last Modified is not updated.
Is this a bug or a known issue?


Thanks

-- 


-
Ilan Schwarts


Re: Metadata and HTML ending up in searchable text

2016-06-01 Thread Simon Blandford

Thanks Timothy,

Will give the DIH a try. I have submitted a bug report.

Regards,
Simon

On 31/05/16 13:22, Allison, Timothy B. wrote:

  From the same page, extractFormat=text only applies when extractOnly
is true, which just shows the output from tika without indexing the document.

Y, sorry.  I just looked through the source code.  You're right.  If you use DIH 
(TikaEntityProcessor) instead of Solr Cell (ExtractingDocumentLoader), you should be able to set 
the handler type by setting the "format" attribute, and "text" is one option 
there.


I just want to make sure I'm not missing something really obvious before 
submitting a bug report.

I don't think you are.


  From the same page, extractFormat=text only applies when extractOnly
is true, which just shows the output from tika without indexing the document.
Running it in "extractOnly" mode resulting in a XML output. The
difference between selecting "text" or "xml" format is that the
escaped document in the  tag is either the original HTML
(xml mode) or stripped HTML (text mode). It seems some Javascript
creeps into the text version. (See below)

Regards,
Simon

HTML mode sample:
  051?xml
version="1.0" encoding="UTF-8"?
html xmlns="http://www.w3.org/1999/xhtml";
head
link
  rel="stylesheet" type="text/css" charset="utf-8" media="all"
href="/wiki/modernized/css/common.css"/
  link rel="stylesheet" type="text/css" charset="utf-8"
  media="screen" href="/wiki/modernized/css/screen.css"/
  link rel="stylesheet" type="text/css" charset="utf-8"
  media="print" href="/wiki/modernized/css/print.css"/...

TEXT mode (Blank lines stripped):

047
UsingMailingLists - Solr Wiki
Search:
!--// Initialize search form
var f = document.getElementById('searchform');
f.getElementsByTagName('label')[0].style.display = 'none'; var e =
document.getElementById('searchinput');
searchChange(e);
searchBlur(e);
//--
Solr Wiki
Login






On 27/05/16 13:31, Allison, Timothy B. wrote:

I'm only minimally familiar with Solr Cell, but...

1) It looks like you aren't setting extractFormat=text.  According
to [0]...the default is xhtml which will include a bunch of the metadata.
2) is there an attr_* dynamic field in your index with type="ignored"?
This would strip out the attr_ fields so they wouldn't even be
indexed...if you don't want them.

As for the HTML file, it looks like Tika is failing to strip out the
style section.  Try running the file alone with tika-app: java -jar
tika-app.jar -t inputfile.html.  If you find the noise there,
please open an issue on our JIRA:
https://issues.apache.org/jira/browse/tika


[0]
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with
+Solr+Cell+using+Apache+Tika


-Original Message-
From: Simon Blandford [mailto:simon.blandf...@bkconnect.net]
Sent: Thursday, May 26, 2016 9:49 AM
To: solr-user@lucene.apache.org
Subject: Metadata and HTML ending up in searchable text

Hi,

I am using Solr 6.0 on Ubuntu 14.04.

I am ending up with loads of junk in the text body. It starts like,

The JSON entry output of a search result shows the indexed text
starting with...
body_txt_en: " stream_size 36499 X-Parsed-By
org.apache.tika.parser.DefaultParser X-Parsed-By"

And then once it gets to the actual text I get CSS class names
appearing that were in  or  tags etc.
e.g. "the power of calibre3 silence calibre2 and", where
"calibre3" etc are the CSS class names.

All this junk is searchable and is polluting the index.

I would like to index _only_ the actual content I am interested in
searching for.

Steps to reproduce:

1) Solr installed by untaring solr tgz in /opt.

2) Core created by typing "bin/solr create -c mycore"

3) Solr started with bin/solr start

4) TXT document index using the following command curl
"http://localhost:8983/solr/mycore/update/extract?literal.id=doc1=attr_=body_txt_en=true;
-F

"content/UsingMailingLists.txt=@/home/user/Documents/library/UsingMailingLists.txt"

5) HTML document index using following command curl
"http://localhost:8983/solr/mycore/update/extract?literal.id=doc2=attr_=body_txt_en=true;
-F

"content/UsingMailingLists.html=@/home/user/Documents/library/UsingMailingLists.html"

6) Query using URL:
http://localhost:8983/solr/mycore/select?q=especially=json

Result:

For the txt file, I get the following JSON for the document...

{
id: "doc1",
attr_stream_size: [
"8107"
],
attr_x_parsed_by: [
"org.apache.tika.parser.DefaultParser",
"org.apache.tika.parser.txt.TXTParser"
],
attr_stream_content_type: [
"text/plain"
],
attr_stream_name: [
"UsingMailingLists.txt"
],
attr_stream_source_info: [
"content/UsingMailingLists.txt"
],
attr_content_encoding: [
"ISO-8859-1"
],
attr_content_type: [
"text/plain; charset=ISO-8859-1"
 
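Timothy's suggestion upthread of using DIH with TikaEntityProcessor and format="text" could look roughly like the sketch below; the file path and field name are taken from the thread, and the rest is an untested outline rather than a verified configuration:

```xml
<dataConfig>
  <dataSource type="BinFileDataSource" />
  <document>
    <entity name="doc" processor="TikaEntityProcessor"
            url="/home/user/Documents/library/UsingMailingLists.html"
            format="text">  <!-- "text" strips markup instead of emitting XHTML -->
      <field column="text" name="body_txt_en" />
    </entity>
  </document>
</dataConfig>
```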

SolrCloud 5.2.1 nodes are out of sync - how to handle

2016-06-01 Thread Ilan Schwarts
Hi,
We have SolrCloud 5.2.1 in our lab:
2 shards, each shard has 2 cores/nodes, replication factor is 1, meaning
that one node is leader (like the old master-slave setup).
(upon collection creation numShards=1 rp=1)

Now there is a problem in the lab: shard 1 has 2 cores, but the number of
docs is different, and when adding a document to one of the cores, it will
not replicate the data to the other one.
If I check the cluster state.json it appears fine; it says there are 2 active
cores and only 1 is set as leader.

What is the recovery method for a scenario like this? I don't have logs
anymore and cannot reproduce.
Is it possible to merge the 2 cores into 1, and then split that core into 2
cores?
Or maybe to enforce a sync if possible?

The other shard, shard 2, is functioning well; the replication works fine.
When adding a document to 1 core, it will replicate it to the other.

-- 
Ilan Schwarts


Re: After Solr 5.5, mm parameter doesn't work properly

2016-06-01 Thread Jan Høydahl
> 1. jun. 2016 kl. 03.47 skrev Greg Pendlebury :

> I don't think it is 8812. q.op was completely ignored by edismax prior to
> 5.5, so it is not mm that changed.

That is not the case. Prior to 5.5, mm would be automatically set to 100% if 
q.op==AND
See https://issues.apache.org/jira/browse/SOLR-1889 and 
https://svn.apache.org/viewvc?view=revision&revision=950710

Jan
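For anyone hitting this after an upgrade, the behaviour change can be summarised with a hedged example (the query itself is made up; parameter spellings are the standard edismax ones):

```
# Solr < 5.5: with defType=edismax, q.op=AND silently implied mm=100%
/select?defType=edismax&q=solar panel roof&q.op=AND

# Solr >= 5.5: q.op no longer forces mm, so set it explicitly
# if you relied on the old all-terms-required behaviour
/select?defType=edismax&q=solar panel roof&q.op=AND&mm=100%
```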

Re: Script to upgrade a Solr index from 4.x to 6.x

2016-06-01 Thread Brendan Humphreys
Hi Jan,

Thanks for the script! I for one will definitely try it out.

Can you comment on how battle-tested it is?

Are there any limitations or drawbacks?

Cheers,
-Brendan

On Wednesday, 1 June 2016, Jan Høydahl  wrote:

> Hi
>
> Need to upgrade from Solr 4.x directly to the new 6.0?
> Here is a script that does it automatically for all your cores:
>
> https://github.com/cominvent/solr-tools/blob/master/upgradeindex/upgradeindex.sh
>
>
> USAGE:
>   Script to Upgrade old indices from 4.x and 5.x to 6.x format, so it can
> be used with Solr 6.x or 7.x
>   Usage: ./upgradeindex.sh [-s] 
>
>   Example: ./upgradeindex.sh /var/lib/solr
>   Please run the tool only on a cold index (no Solr running)
>   The script leaves a backup in
> //data/index_backup_.tgz. Use -s to skip
> backup
>   Requires wget or curl to download dependencies
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>



Script to upgrade a Solr index from 4.x to 6.x

2016-06-01 Thread Jan Høydahl
Hi

Need to upgrade from Solr 4.x directly to the new 6.0?
Here is a script that does it automatically for all your cores:
https://github.com/cominvent/solr-tools/blob/master/upgradeindex/upgradeindex.sh


USAGE:
  Script to Upgrade old indices from 4.x and 5.x to 6.x format, so it can be 
used with Solr 6.x or 7.x
  Usage: ./upgradeindex.sh [-s] 

  Example: ./upgradeindex.sh /var/lib/solr
  Please run the tool only on a cold index (no Solr running)
  The script leaves a backup in 
//data/index_backup_.tgz. Use -s to skip backup
  Requires wget or curl to download dependencies

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com



Configure SolrCloud for Loadbalance for .net client

2016-06-01 Thread shivendra.tiwari
Hi,

I have to configure SolrCloud for load balancing with a .NET application. Please 
suggest what is needed and how to configure it. We are currently 
working on an older version of Solr with the master/slave concept.

Please suggest.


Warm Regards!
Shivendra Kumar Tiwari

Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Charlie Hull

On 01/06/2016 11:56, Mark Robinson wrote:

Just to complete my previous use case: in case no direct way is possible in
Solr to sort on a field in a different core, is there a way to embed the
tagValue of a product dynamically into the results? (The storeid will be
passed at query time, so we could query the product_tags core for that
product+storeid, get the tagValue, and embed it into the product results,
probably in the "process" method of a custom component; in the first
place I believe we can add a value like that to each result doc.) But then
how can we sort on this value, as I am now working on the results that came
out after any initial sort was applied? Can we re-sort at this very late
stage using some Java sorting in the custom component?


Hi Mark,

Not sure if this is directly relevant but we implemented a component to 
join Solr results with external data: 
http://www.flax.co.uk/blog/2016/01/25/xjoin-solr-part-1-filtering-using-price-discount-data/


Cheers

Charlie


Thanks!
Mark.
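As a further hedged sketch (my own suggestion rather than something from this thread, and it only works when both cores live in the same Solr instance and shard): the standard join query parser can be used in a boost query to bump products whose entry in the tag core matches. Field names productId and storeId here are hypothetical:

```
/select?q=*:*&bq={!join fromIndex=product_tag from=productId to=id v='tagValue:X AND storeId:123'}^10&sort=score desc
```

Matching products get the extra boost and sort ahead; a second, lower-boosted bq for tagValue:Y would cover the future requirement.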

On Wed, Jun 1, 2016 at 6:44 AM, Mark Robinson 
wrote:


Thanks much Eric and Hoss!

Let me try to detail.
We have our "product" core with a couple of million docs.
We have a couple of thousand outlets where the products get sold.
Each product can have a different *tagValue* in each outlet.
Our "product_tag" core (around 2M times 2000 records), captures tag info
of each product in each outlet. It has some additional info also (a couple
of more fields in addition to *tagValue*), pertaining to each
product-outlet combination and there can be NRT *tag* updates for this
core (the *tagValue* of each product in each outlet can change and is
updated in real time). So we moved the volatile portion of product out to a
separate core which has approx 2M times 2000 records and only 4 or 5 fields
per doc.

A recent requirement is that we want our product results to be bumped up
or down if it has a particular *tagValue*... for example products with
tagValue=X should be at the top. Currently only one tag*Value* considered
to decide results order.
A future requirement could be products with *tagValue=*X bumped up
followed by products with *tagValue=*Y.

ie "product" results need to be ordered based on a field(s) in the
"product_tag" core (a different core).

Is there ANY way to achieve this scenario.

Thanks!

Mark.







On Tue, May 31, 2016 at 8:13 PM, Chris Hostetter 

Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Mark Robinson
Just to complete my previous use case: in case no direct way is possible in
Solr to sort on a field in a different core, is there a way to embed the
tagValue of a product dynamically into the results? (The storeid will be
passed at query time, so we could query the product_tags core for that
product+storeid, get the tagValue, and embed it into the product results,
probably in the "process" method of a custom component; in the first
place I believe we can add a value like that to each result doc.) But then
how can we sort on this value, as I am now working on the results that came
out after any initial sort was applied? Can we re-sort at this very late
stage using some Java sorting in the custom component?

Thanks!
Mark.

On Wed, Jun 1, 2016 at 6:44 AM, Mark Robinson 
wrote:

> Thanks much Eric and Hoss!
>
> Let me try to detail.
> We have our "product" core with a couple of million docs.
> We have a couple of thousand outlets where the products get sold.
> Each product can have a different *tagValue* in each outlet.
> Our "product_tag" core (around 2M times 2000 records), captures tag info
> of each product in each outlet. It has some additional info also (a couple
> of more fields in addition to *tagValue*), pertaining to each
> product-outlet combination and there can be NRT *tag* updates for this
> core (the *tagValue* of each product in each outlet can change and is
> updated in real time). So we moved the volatile portion of product out to a
> separate core which has approx 2M times 2000 records and only 4 or 5 fields
> per doc.
>
> A recent requirement is that we want our product results to be bumped up
> or down if it has a particular *tagValue*... for example products with
> tagValue=X should be at the top. Currently only one tag*Value* considered
> to decide results order.
> A future requirement could be products with *tagValue=*X bumped up
> followed by products with *tagValue=*Y.
>
> ie "product" results need to be ordered based on a field(s) in the
> "product_tag" core (a different core).
>
> Is there ANY way to achieve this scenario.
>
> Thanks!
>
> Mark.
>
>
>
>
>
>
>
> On Tue, May 31, 2016 at 8:13 PM, Chris Hostetter  > wrote:
>
>>
>> : When a query comes in, I want to populate value for this field in the
>> : results based on some values passed in the query.
>> : So what needs to be accommodated in the result depends on a parameter in
>> : the query and I would like to sort the final results on this field also,
>> : which is dynamically populated.
>>
>> populated how? ... what exactly do you want to provide at query time, and
>> how exactly do you want it to affect your query results / sorting?
>>
>> The details of what you *think* you mean matter, because based on the
>> information you've provided we have no way of guessing what your goal
>> is -- and if we can't guess what you mean, then there's no way to imagine
>> Solr can figure it out ... software doesn't have an imagination.
>>
>> We need to know what your documents are going to look like at index
>> time (with *real* details, and specific example docs) and what your
>> queries are going to look like (again: with *real* details on the "some
>> values passed in the query") and a detailed explanation of what
>> results you want to see and why -- describe in words how the final sorting
>> of the docs you should have already described to us would be determined
>> according to the info passed in at query time which you should have also
>> already described to us.
>>
>>
>> In general I think I smell an XY Problem...
>>
>> https://people.apache.org/~hossman/#xyproblem
>> XY Problem
>>
>> Your question appears to be an "XY Problem" ... that is: you are dealing
>> with "X", you are assuming "Y" will help you, and you are asking about "Y"
>> without giving more details about the "X" so that we can understand the
>> full issue.  Perhaps the best solution doesn't involve "Y" at all?
>> See Also: http://www.perlmonks.org/index.pl?node_id=542341
>>
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>
>


Re: Add a new field dynamically to each of the result docs and sort on it

2016-06-01 Thread Mark Robinson
Thanks much Eric and Hoss!

Let me try to detail.
We have our "product" core with a couple of million docs.
We have a couple of thousand outlets where the products get sold.
Each product can have a different *tagValue* in each outlet.
Our "product_tag" core (around 2M times 2000 records), captures tag info of
each product in each outlet. It has some additional info also (a couple of
more fields in addition to *tagValue*), pertaining to each
product-outlet combination and there can be NRT *tag* updates for this core
(the *tagValue* of each product in each outlet can change and is updated in
real time). So we moved the volatile portion of product out to a separate
core which has approx 2M times 2000 records and only 4 or 5 fields per doc.

A recent requirement is that we want our product results to be bumped up or
down if it has a particular *tagValue*... for example products with
tagValue=X should be at the top. Currently only one tag*Value* considered
to decide results order.
A future requirement could be products with *tagValue=*X bumped up followed
by products with *tagValue=*Y.

ie "product" results need to be ordered based on a field(s) in the
"product_tag" core (a different core).

Is there ANY way to achieve this scenario.

Thanks!

Mark.







On Tue, May 31, 2016 at 8:13 PM, Chris Hostetter 
wrote:

>
> : When a query comes in, I want to populate value for this field in the
> : results based on some values passed in the query.
> : So what needs to be accommodated in the result depends on a parameter in
> : the query and I would like to sort the final results on this field also,
> : which is dynamically populated.
>
> populated how? ... what exactly do you want to provide at query time, and
> how exactly do you want it to affect your query results / sorting?
>
> The details of what you *think* you mean matter, because based on the
> information you've provided we have no way of guessing what your goal
> is -- and if we can't guess what you mean, then there's no way to imagine
> Solr can figure it out ... software doesn't have an imagination.
>
> We need to know what your documents are going to look like at index
> time (with *real* details, and specific example docs) and what your
> queries are going to look like (again: with *real* details on the "some
> values passed in the query") and a detailed explanation of what
> results you want to see and why -- describe in words how the final sorting
> of the docs you should have already described to us would be determined
> according to the info passed in at query time which you should have also
> already described to us.
>
>
> In general I think I smell an XY Problem...
>
> https://people.apache.org/~hossman/#xyproblem
> XY Problem
>
> Your question appears to be an "XY Problem" ... that is: you are dealing
> with "X", you are assuming "Y" will help you, and you are asking about "Y"
> without giving more details about the "X" so that we can understand the
> full issue.  Perhaps the best solution doesn't involve "Y" at all?
> See Also: http://www.perlmonks.org/index.pl?node_id=542341
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: Solr 6 CDCR does not work

2016-06-01 Thread Adam Majid Sanjaya
disable autocommit on the target

It worked!
thanks

2016-05-30 15:40 GMT+07:00 Renaud Delbru :

> Hi Adam,
>
> could you check the response of the monitoring commands [1], QUEUES,
> ERRORS, OPS. This might help undeerstanding if documents are flowing or if
> there are issues.
>
> Also, do you have an autocommit configured on the target ? CDCR does not
> replicate commit, and therefore you have to send a commit command on the
> target to ensure that the latest replicated documents are visible.
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=62687462#CrossDataCenterReplication%28CDCR%29-Monitoringcommands
>
> --
> Renaud Delbru
>
>
> On 29/05/16 12:10, Adam Majid Sanjaya wrote:
>
>> I’m testing Solr 6 CDCR, but it’s seems not working.
>>
>> Source configuration:
>> <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
>>   <lst name="replica">
>>     <str name="zkHost">targetzkip:2181</str>
>>     <str name="source">corehol</str>
>>     <str name="target">corehol</str>
>>   </lst>
>>   <lst name="replicator">
>>     <str name="threadPoolSize">1</str>
>>     <str name="schedule">1000</str>
>>     <str name="batchSize">128</str>
>>   </lst>
>>   <lst name="updateLogSynchronizer">
>>     <str name="schedule">5000</str>
>>   </lst>
>> </requestHandler>
>>
>> <updateLog class="solr.CdcrUpdateLog">
>>   <str name="dir">${solr.ulog.dir:}</str>
>> </updateLog>
>>
>> Target(s) configuration:
>> <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
>>   <lst name="buffer">
>>     <str name="defaultState">disabled</str>
>>   </lst>
>> </requestHandler>
>>
>> <updateRequestProcessorChain name="cdcr-proccessor-chain">
>>   <processor class="solr.CdcrUpdateProcessorFactory"/>
>>   <processor class="solr.RunUpdateProcessorFactory"/>
>> </updateRequestProcessorChain>
>>
>> <requestHandler name="/update" class="solr.UpdateRequestHandler">
>>   <lst name="defaults">
>>     <str name="update.chain">cdcr-proccessor-chain</str>
>>   </lst>
>> </requestHandler>
>>
>> <updateLog class="solr.CdcrUpdateLog">
>>   <str name="dir">${solr.ulog.dir:}</str>
>> </updateLog>
>>
>> Source Log: no cdcr
>> Target Log: no cdcr
>>
>> Create a core (solrconfig.xml modification directly from the folder
>> data_driven_schema_configs):
>> #bin/solr create -c corehol -p 8983
>>
>> Start cross-data center replication by running the START command on the
>> source data center
>> http://sourceip:8983/solr/corehol/cdcr?action=START
>>
>> Disable buffer by running the DISABLEBUFFER command on the target data
>> center
>> http://targetip:8983/solr/corehol/cdcr?action=DISABLEBUFFER
>>
>> The documents are not replicated to the target zone.
>>
>> What should I examine?
>>
>>
>
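The monitoring commands Renaud mentions can be scripted; here is a minimal sketch (the host and core name are taken from this thread; the curl line is left commented out so the sketch runs without a live cluster):

```shell
#!/bin/sh
# CDCR monitoring actions from the referenced wiki page
SOLR="http://sourceip:8983/solr/corehol"
for action in QUEUES OPS ERRORS; do
  url="$SOLR/cdcr?action=$action"
  # Against a live source cluster you would run:
  #   curl -s "$url"
  echo "GET $url"
done
```

Comparing QUEUES output over time shows whether updates are draining to the target; a growing queue with non-empty ERRORS usually points at the connectivity or buffering problem being debugged above.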


How can we incrementally build the solr suggestions

2016-06-01 Thread Subrahmanyam MadhavaBotla
Hi Team,

We are using Solr suggestions based on indexed terms.
However, we see only two options for building the Solr suggestions: on commit and on 
startup.
We understand that these will completely rebuild the suggestions every time 
they are triggered.
How can we incrementally build the Solr suggestions? Is there any 
configuration we can supply for this?
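For reference, the two automatic build triggers being described correspond to suggester parameters; a hedged sketch of the relevant solrconfig.xml section (component and field names here are illustrative, not from the question):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="field">title</str>
    <!-- the two automatic rebuild triggers the question refers to -->
    <str name="buildOnCommit">false</str>
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
```

With both set to false, builds can instead be triggered on demand with suggest.build=true on a request; most lookup implementations still rebuild the dictionary fully rather than incrementally, which matches the observation above.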


Thanks and Regards,
Subrahmanyam MadhavaBotla
Senior Product Engineer | Products | Accelerite
madhava_subrahmn...@persistent.co.in | 
Cell: +91-9923051689 | Tel: +91-712-6691129 | IT PARK NAGPUR
Persistent Systems Ltd. Partners in Innovation | 
www.persistent.co.in





Re: Can Solr 5.5 recognize the index result generated by SolrIndexer of old version Nutch ?

2016-06-01 Thread Yago Riveiro
You need to upgrade your index to version 4.10 using the IndexUpgrader tool.

--
Yago Riveiro

On 1 Jun 2016 10:53 +0100, t...@sina.com, wrote:
> Hi,
> We plan to upgrade the solr server to 5.5.0. And we have a customized crawler 
> based on Nutch 1.2 and Solr 1.4.1.
> 
> So, the question is: can Solr 5.5 recognize the index result generated by 
> SolrIndexer of Nutch 1.2?
> Thanks


Can Solr 5.5 recognize the index result generated by SolrIndexer of old version Nutch ?

2016-06-01 Thread tjlp
 Hi,
We plan to upgrade the solr server to 5.5.0. And we have a customized crawler 
based on Nutch 1.2 and Solr 1.4.1. 

So, the question is: can Solr 5.5 recognize the index result generated by 
SolrIndexer of Nutch 1.2?
Thanks