Compiling SolrJ for Java 6

2015-11-03 Thread O. Olson
Hi,
I'm looking to compile SolrJ for Solr 4.10.3 to run on Java 6.
(Due to choices beyond my control, we are on this older version of SolrJ and
Java 6.) Could anyone give me pointers on how to do this?

I tried downloading the source from SVN (for Solr 4.10.3, not the latest
version). I then went into the /solr/common-build.xml file and changed the
javac.target to 1.6, i.e.

  <property name="javac.target" value="1.6"/>

I then ran "ant dist-solrj" and it compiled the SolrJ jar, but for Java 7. I
wanted Java 6.

I should admit that the manifest file has "X-Compile-Target-JDK: 1.6"
However if you look at any of the class files (say SolrRequest.class) using
"javap -verbose" you get "major version: 51" which means Java 7. 

Besides making the change in the /solr/common-build.xml, is there some
other change I need to make to be able to compile SolrJ for Java 6? 

Thank you,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Compiling-SolrJ-for-Java-6-tp4238068.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Compiling SolrJ for Java 6

2015-11-03 Thread O. Olson
Thank you Erick. I'm sorry I did not clarify this in my original message. 

I'm compiling Solr (or SolrJ) under Java 7. I'm aware that it requires Java
7 to compile, and that's why I have not changed the "javac.source" value in
the common-build.xml file. SolrJ compiles fine. My problem is that I would
like to run it under Java 6 i.e. JRE 6. In my personal projects I do this by
supplying the "-target 1.6" flag to the Java Compiler "javac" and it works
fine. (I think I use Java 7 features, but I'm not sure. Anyway, the compiler
is Java 7, but the execution environment is Java 6.)

I'm not familiar with Java, but I thought that you could compile Java 7 code
to run under Java 6. Is this wrong? I know you get some warnings at the
compile time, but I have ignored them in the past and my code worked fine. 
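For context, -target alone is usually not enough when cross-compiling: javac rejects a -target lower than -source (e.g. -source 1.7 with -target 1.6 fails), and without -bootclasspath pointing at the older runtime, code can compile cleanly yet still link against Java 7-only APIs and fail at run time. A hypothetical Ant sketch of the three settings involved (property and path names are illustrative, not necessarily the actual Solr build's):

```xml
<!-- Illustrative only: source and target must be lowered together, -->
<!-- and bootclasspath should point at the Java 6 class library.    -->
<property name="javac.source" value="1.6"/>
<property name="javac.target" value="1.6"/>
<javac srcdir="src" destdir="build"
       source="${javac.source}" target="${javac.target}"
       bootclasspath="/path/to/jre6/lib/rt.jar"
       includeantruntime="false"/>
```

That said, lowering the source level only works if the code itself avoids Java 7 syntax.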

Thank you again and also to Upayavira. I'm wondering if there is a way out.
O. O.




Erick Erickson wrote
> You're on your own if you try to do this. Solr 4.10 requires Java 7. I
> don't believe Solr will even compile under 1.6.
> 
> You may get lucky and get SolrJ to compile, but whether it works or
> not is chancy at best.
> 
> Best,
> Erick





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Compiling-SolrJ-for-Java-6-tp4238068p4238081.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Compiling SolrJ for Java 6

2015-11-03 Thread O. Olson
Damn. I always thought cross-compilation of Java worked (i.e. compile in one
version with the target of a previous version). I guess it worked in my code
because I did not use any of the new features.  

Thank you very much Shawn. No, I'm not running SolrCloud, but I wanted to
use the new features in SolrJ particularly regarding the suggestions. Thank
you for confirming this though.

O. O.




Shawn Heisey-2 wrote
> Erick is right.  It won't even compile.  When the jump to Java 7 was
> made between the 4.7 and 4.8 releases, most of the source code was
> reviewed and certain pieces in a very large number of files were updated
> to code that only compiles in Java 7.  Many of those changes happened in
> SolrJ.  This was done because the Java 7 code is generally more reliable
> and easier to maintain.
> 
> Here's what happens when I import the 4.10 branch into eclipse and
> change the compiler compliance level to 1.6.  Notice the large number of
> red marks that are on source packages and java files:
> 
> https://www.dropbox.com/s/kktym6tsdi3iu36/solrj-4.10-java6-errors.png?dl=0
> 
> Your best bet is to download the 4.7 version and try to use that.  This
> is the last release that will work in Java 6:
> 
> http://archive.apache.org/dist/lucene/solr/4.7.2/
> 
> If you're NOT running SolrCloud, chances are VERY good that you can get
> this to work with zero problems.  If you're running SolrCloud on version
> 4.10 and try to use SolrJ 4.7, I would not be surprised to learn that
> they are not compatible.  It might work well ... I have never tried it.
> 
> The nature of the changes for Java 7 is such that it will be extremely
> time-consuming to backport changes between 4.7.2 and 4.10.4 to the older
> source code.
> 
> Thanks,
> Shawn





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Compiling-SolrJ-for-Java-6-tp4238068p4238095.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-07 Thread O. Olson
Thank you Erick. I'm sorry I did not mention this earlier, but I am still on
Solr 4.10.3. Once I upgrade to Solr 5.0+, I will consider the suggestion
in your blog post.
O. O. 


Erick Erickson wrote
 Uh, you mean because I forgot to paste in the URL? Sigh...
 
 Anyway, the URL is irrelevant now that you've solved your problem, but
 in case you're interested:
 http://lucidworks.com/blog/solr-suggester/
 
 Sorry for the confusion.
 Erick





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204392.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-07 Thread O. Olson
Thank you Erick. I have no clue what you are referring to when you used the
word "this". Are you referring to my question in my original email/message?


Erick Erickson wrote
 Have you seen this? I tried to make something end-to-end with assorted
 gotchas identified
 
  Best,
 Erick





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204336.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-07 Thread O. Olson
Thank you Rajesh for your persistence. I now got it to work. In my original
email/message, I mentioned that I use 'text_general' as defined in the
examples:
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/example-DIH/solr/db/conf/schema.xml?view=markup
 
I'm sorry I did not mention this again later. 

Your definition of 'text_general' is a lot different from what's in the
examples. However, once I used it, I got this to work just as you said. 

Thank you,
O. O.


Rajesh Hazari wrote
 yes, textSuggest is of type text_general with the below definition:

 <fieldType name="text_general" class="solr.TextField"
            positionIncrementGap="100" sortMissingLast="true" omitNorms="true">
   <analyzer type="index">
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
     <filter class="solr.ShingleFilterFactory" maxShingleSize="5" outputUnigrams="true"/>
   </analyzer>
   <analyzer type="query">
     <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-FoldToASCII.txt"/>
     <tokenizer class="solr.ClassicTokenizerFactory"/>
     <filter class="solr.ClassicFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
     <filter class="solr.ShingleFilterFactory" maxShingleSize="5" outputUnigrams="true"/>
   </analyzer>
 </fieldType>
 *Rajesh.*





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204334.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-07 Thread O. Olson
Thank you Rajesh, Alessandro and Erick. I apparently did not have much
knowledge about the Suggester - in fact I had no clue that there is a
difference between the SpellcheckComponent and the SuggestComponent. 

I would be reading about this, esp. Erick's blog post on Lucidworks.

O. O. 


Rajesh Hazari wrote
 Good to know that it's working as expected.

 I have a couple of questions about your autosuggest implementation.

 I see that you are using SpellcheckComponent instead of SuggestComponent.
 Are you using this intentionally? If not, please read:
 https://cwiki.apache.org/confluence/display/solr/Suggester

 I am working on an issue in the suggester; just sharing it once again with
 this community, in case you or any others have this on their list:

 http://stackoverflow.com/questions/27847707/solr-autosuggest-to-stop-filter-suggesting-the-phrase-that-ends-with-stopwords
 
 *thanks,*
 *Rajesh**.*





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204356.html
Sent from the Solr - User mailing list archive at Nabble.com.


Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-06 Thread O. Olson
I'm trying to get the AnalyzingInfixSuggester to work but I'm not successful.
I'd be grateful if someone can point me to a working example. 

Problem:
My content is product descriptions similar to a BestBuy or NewEgg catalog.
My problem is that I'm getting only single words in the suggester results.
E.g. if I type 'len', I get the suggester results like 'Lenovo' but not
'Lenovo laptop' or something larger/longer than a single word. 

There is a suggestion here:
http://blog.mikemccandless.com/2013/06/a-new-lucene-suggester-based-on-infix.html
that the search at
http://jirasearch.mikemccandless.com/search.py?index=jira is powered by the
AnalyzingInfixSuggester. If this is true, then when I use that search I get
suggestions of more than a single word, but I don't with my setup, i.e. on my
setup I get only single words. My configuration is:


<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory</str>
    <str name="field">text</str>
    <float name="threshold">0.005</float>
    <str name="buildOnCommit">true</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <bool name="exactMatchFirst">true</bool>
  </lst>
</searchComponent>

<requestHandler class="org.apache.solr.handler.component.SearchHandler"
                name="/suggest">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggest</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

I copy the contents of all of my fields to a single field called 'text'. The
'text_general' type is exactly as in the solr examples:
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/example-DIH/solr/db/conf/schema.xml?view=markup
 

I'd be grateful if anyone can help me. I don't know what to look at. Thank
you in advance.

O. O.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-06 Thread O. Olson
Thank you Rajesh. I think I got a bit of help from the answer at:
http://stackoverflow.com/a/29743945

While that example sort of worked for me, I've not had the time to test what
works and what didn't.

So far I have found that I need the field in my searchComponent to be of
type 'string'. In my original example I had this as text_general. Next I
used the suggest_string fieldType as defined in the StackOverflow answer. I
also removed your queryConverter, and it still works, so I think it's not
needed.

Thank you very much,
O. O. 



Rajesh Hazari wrote
 I just tested your config with my schema and it worked.
 
 my config :
   
 <searchComponent class="solr.SpellCheckComponent" name="suggest1">
   <lst name="spellchecker">
     <str name="name">suggest</str>
     <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
     <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory</str>
     <str name="field">textSuggest</str>
     <float name="threshold">0.005</float>
     <str name="buildOnCommit">true</str>
     <str name="suggestAnalyzerFieldType">text_general</str>
     <bool name="exactMatchFirst">true</bool>
   </lst>
 </searchComponent>

 <queryConverter name="queryConverter"
                 class="org.apache.solr.spelling.SuggestQueryConverter"/>

 <requestHandler class="org.apache.solr.handler.component.SearchHandler"
                 name="/suggest1">
   <lst name="defaults">
     <str name="spellcheck">true</str>
     <str name="spellcheck.dictionary">suggest</str>
     <str name="spellcheck.onlyMorePopular">true</str>
     <str name="spellcheck.count">5</str>
     <str name="spellcheck.collate">true</str>
   </lst>
   <arr name="components">
     <str>suggest1</str>
   </arr>
 </requestHandler>

 http://localhost:8585/solr/collection1/suggest1?q=apple&rows=10&wt=json&indent=true

 {
   "responseHeader": {
     "status": 0,
     "QTime": 2},
   "spellcheck": {
     "suggestions": [
       "apple", {
         "numFound": 5,
         "startOffset": 0,
         "endOffset": 5,
         "suggestion": [
           "<b>apple</b>",
           "<b>apple</b> and",
           "<b>apple</b> and facebook",
           "<b>apple</b> and facebook learn",
           "<b>apple</b> and facebook learn from"]},
       "collation",
       "<b>apple</b>"]}}

 *Rajesh.*





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204222.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-06 Thread O. Olson
Thank you Rajesh for responding so quickly. I tried it again with a restart
and a reimport, and I still cannot get this to work, i.e. I'm seeing no
difference.

I'm wondering how you define: 'textSuggest' in your schema? In my case I use
the field 'text' that is defined as: 

<field name="text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>

I'm wondering if your 'textSuggest' is of type text_general ?

Thank you again for your help
O. O.


Rajesh Hazari wrote
 I just tested your config with my schema and it worked.
 
 my config :
   
 <searchComponent class="solr.SpellCheckComponent" name="suggest1">
   <lst name="spellchecker">
     <str name="name">suggest</str>
     <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
     <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory</str>
     <str name="field">textSuggest</str>
     <float name="threshold">0.005</float>
     <str name="buildOnCommit">true</str>
     <str name="suggestAnalyzerFieldType">text_general</str>
     <bool name="exactMatchFirst">true</bool>
   </lst>
 </searchComponent>

 <queryConverter name="queryConverter"
                 class="org.apache.solr.spelling.SuggestQueryConverter"/>

 <requestHandler class="org.apache.solr.handler.component.SearchHandler"
                 name="/suggest1">
   <lst name="defaults">
     <str name="spellcheck">true</str>
     <str name="spellcheck.dictionary">suggest</str>
     <str name="spellcheck.onlyMorePopular">true</str>
     <str name="spellcheck.count">5</str>
     <str name="spellcheck.collate">true</str>
   </lst>
   <arr name="components">
     <str>suggest1</str>
   </arr>
 </requestHandler>

 http://localhost:8585/solr/collection1/suggest1?q=apple&rows=10&wt=json&indent=true

 {
   "responseHeader": {
     "status": 0,
     "QTime": 2},
   "spellcheck": {
     "suggestions": [
       "apple", {
         "numFound": 5,
         "startOffset": 0,
         "endOffset": 5,
         "suggestion": [
           "<b>apple</b>",
           "<b>apple</b> and",
           "<b>apple</b> and facebook",
           "<b>apple</b> and facebook learn",
           "<b>apple</b> and facebook learn from"]},
       "collation",
       "<b>apple</b>"]}}

 *Rajesh.*





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204208.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Trying to get AnalyzingInfixSuggester to work in Solr?

2015-05-06 Thread O. Olson
Thank you Rajesh. I'm not familiar with the queryConverter. How do you wire
it up to the rest of the setup? Right now, I just put it between the
SpellCheckComponent and the RequestHandler, i.e. my config is:

<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingInfixLookupFactory</str>
    <str name="field">text</str>
    <float name="threshold">0.005</float>
    <str name="buildOnCommit">true</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <bool name="exactMatchFirst">true</bool>
  </lst>
</searchComponent>

<queryConverter name="queryConverter"
                class="org.apache.solr.spelling.SuggestQueryConverter"/>

<requestHandler class="org.apache.solr.handler.component.SearchHandler"
                name="/suggest">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggest</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

Is this correct? I do not see any difference in my results i.e. the
suggestions are the same as before.
O. O.





Rajesh Hazari wrote
 make sure you have this query converter defined in your config
 <queryConverter name="queryConverter"
                 class="org.apache.solr.spelling.SuggestQueryConverter"/>
 *Thanks,*
 *Rajesh**.*





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Trying-to-get-AnalyzingInfixSuggester-to-work-in-Solr-tp4204163p4204173.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Negative Boosting documents with a certain word

2015-05-02 Thread O. Olson
Thank you very much Chris. I'm sorry I could not get back to you because I
did not have the time to try this.

If I change my query from q=laptops   to 
q=laptops%20(*:*%20-Refurbished)^10%20(*:*%20-Recertified)^10   I get
exactly what I want! Thank you!!

Is there any way to handle a list of such words? If I have about 10 to 15
words, this query would keep getting longer and longer. Is there a better
way to handle this?
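One way to keep the query manageable is to generate the per-word clauses from a list. A hypothetical sketch (the class, method, and word list are illustrative, not Solr API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DemoteWords {
    // Build one negative-boost clause per word so the query string
    // doesn't have to be written out by hand for 10-15 words.
    static String demoteClauses(List<String> words) {
        return words.stream()
                    .map(w -> "(*:* -" + w + ")^10")
                    .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        String clauses = demoteClauses(Arrays.asList("Refurbished", "Recertified"));
        System.out.println("laptops " + clauses);
        // prints: laptops (*:* -Refurbished)^10 (*:* -Recertified)^10
    }
}
```

The resulting string would still need URL-encoding before being sent as the q parameter.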

Right now, I specify the boost for my request handler as:
<requestHandler name="/select" class="solr.SearchHandler">
  .
  <str name="boost">ln(qty)</str>
  .
</requestHandler>

Is there a way to specify this boost in the solrconfig.xml?

I tried: <str name="boost">(*:* -Refurbished)^10</str> and I get the
following exception:

ERROR - 2015-05-01 15:13:41.609; org.apache.solr.common.SolrException;
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError:
Expected identifier at pos 0 str='(*:* -Refurbished)^10'
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:204)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:204)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.solr.search.SyntaxError: Expected identifier at pos 0
str='(*:* -Refurbished)^10'
at
org.apache.solr.search.QueryParsing$StrParser.getId(QueryParsing.java:771)
at
org.apache.solr.search.QueryParsing$StrParser.getId(QueryParsing.java:750)
at
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:345)
at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:68)
at org.apache.solr.search.QParser.getQuery(QParser.java:149)
at
org.apache.solr.search.ExtendedDismaxQParser.getMultiplicativeBoosts(ExtendedDismaxQParser.java:448)
at
org.apache.solr.search.ExtendedDismaxQParser.parse(ExtendedDismaxQParser.java:211)
at org.apache.solr.search.QParser.getQuery(QParser.java:149)
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:147)
... 31 more


I'm using Solr 4.10.3

Thank you once again
O. O.


Chris Hostetter-3 wrote
 

Negative Boosting documents with a certain word

2015-04-30 Thread O. Olson
Hi,

My Solr documents contain descriptions of products, similar to a BestBuy or
a NewEgg catalog. I'm wondering if it is possible to push a product down
the ranking if it contains a certain word. By this I mean it would still
appear in the search results. However, instead of appearing near the top of
the results, it would appear further towards the bottom. (I'm assuming this
is a called a negative boost.)

For example, consider the word:  'Refurbished' or the word: 'Case'

If the product description contains the word 'Refurbished' (or the word
'Case') I would like to reduce the ranking of these products. My business
logic is that I would rather sell a new Laptop vs a refurbished laptop, or I
would rather sell a laptop vs selling a laptop case. So, I would like to see
if I can assign products a negative boost if they contain certain words in
their description.

Thank you in advance for all your help,
O. O.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Negative-Boosting-documents-with-a-certain-word-tp4203224.html
Sent from the Solr - User mailing list archive at Nabble.com.


Integrating Solr with an existing web application - and SolrJ

2015-04-27 Thread O. Olson
I can get the standard Solr example to run within Jetty and I can use it
through the velocity templates. I'm now thinking of integrating Solr with a
couple of existing websites. In this regard, I have the following questions:

1. For a medium sized website (about 100+ concurrent users), what is the
most popular way of integrating Solr? For e.g. do you just run Solr on the
same WebServer/Application Server? 

2. If you run Solr on a separate Server, how do you communicate with it from
the Webserver? 

I was thinking of using SolrJ for this. However, I think that each time
there is a Solr request, the server would have to open a connection to the
separate server running Solr. I would be using the Solr Suggester, so for
every keypress into my search box, there would be a separate connection to
the Solr server. Is this OK?

3. If I consider using a separate Solr Server, is there a big difference
between using Jetty vs Tomcat? 

I would prefer to use the embedded Jetty already packaged into Solr. On the
other hand, would Tomcat be better able to handle more concurrent
connections?  I'm looking for a big difference :-) ( because I'm lazy and
would prefer to use the embedded Jetty.)

Thank you in advance for your help
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Integrating-Solr-with-an-existing-web-application-and-SolrJ-tp4202611.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Integrating Solr with an existing web application - and SolrJ

2015-04-27 Thread O. Olson
Thank you very much Doug. I was thinking of putting Solr on a separate
server, but I did not expect you to so strongly recommend Jetty. I think I
would stick to the embedded Jetty, because I don't need the security. I'm
using Solr 4.10.3 at the moment, so I'm not familiar with Solr 5. 

Thanks again,
O. O.


Doug Turnbull wrote
 1. Unless usage is very light, you likely want Solr to be on a different
 server. It's going to have different caching and system needs than your web
 app. You may also want to scale Solr independently from your web app.
 Think
 of it just like you think of a database-- do you want your MySQL instance
 on the same server as your web app? For all but simple uses, I'd say no.
 You want it separate so you can tune performance differently.
 
 2. Most HTTP clients have a way for keeping the underlying socket
 connection alive. So open a connection likely won't happen. You can feel
 safe in relying on SolrJ or any other reasonable HTTP-based client
 communicating to Solr from your web app.
 
 Also, I always discourage suggesters that work on every keypress. You
 probably want a number of keypresses and a timeout to avoid overloading
 your Solr servers. Or if you truly want that, be aware that it may come
 with some performance issues.
 
 3. I would seriously steer away from anything but Jetty. Also, doesn't
 Solr
 5 not even give you a choice? Go with the defaults, they are tested well.
 If you want to wrap it behind a security layer, then proxy Solr with
 something like nginx (https://github.com/o19s/solr_nginx).
 
 The readme at that github repo captures our philosophy for deploying a
 Solr
 box with default Jetty fronted by nginx for some sane security
 
 Hope that helps,
 -Doug
 
 -- 
 *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections,
 LLC | 240.476.9983 | http://www.opensourceconnections.com
 Author: Taming Search <http://manning.com/turnbull> from Manning
 Publications
 This e-mail and all contents, including attachments, is considered to be
 Company Confidential unless explicitly stated otherwise, regardless
 of whether attachments are marked as such.
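Doug's point about not querying on every keypress can be sketched as a small client-side debounce: each keystroke reschedules the query, and only the last one (after a quiet period) actually fires. Illustrative only, not part of SolrJ:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Debouncer {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;
    private final long delayMs;

    Debouncer(long delayMs) { this.delayMs = delayMs; }

    // Each call cancels the previously scheduled task, so a burst of
    // keypresses results in a single query after delayMs of quiet.
    synchronized void call(Runnable task) {
        if (pending != null) pending.cancel(false);
        pending = scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS);
    }

    void shutdown() { scheduler.shutdown(); }

    public static void main(String[] args) throws InterruptedException {
        Debouncer d = new Debouncer(100);
        AtomicInteger fired = new AtomicInteger();
        for (int i = 0; i < 5; i++) d.call(fired::incrementAndGet); // five rapid "keypresses"
        Thread.sleep(300);                      // wait out the quiet period
        System.out.println(fired.get());        // prints 1: only the last call runs
        d.shutdown();
    }
}
```

In a real UI the Runnable would send the suggest request to Solr.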





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Integrating-Solr-with-an-existing-web-application-and-SolrJ-tp4202611p4202621.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Checkout the source Code to the Release Version of Solr?

2015-02-17 Thread O. Olson
Thank you Mike. This is what I was looking for. I apparently did not
understand what tags were.


Mike Drob wrote
 The SVN source is under tags, not branches.
 
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_3/





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Checkout-the-source-Code-to-the-Release-Version-of-Solr-tp4187041p4187054.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Checkout the source Code to the Release Version of Solr?

2015-02-17 Thread O. Olson
Thank you Hrishikesh. Funny how GitHub is not mentioned on
http://lucene.apache.org/solr/resources.html

I think common-build.xml is what I was looking for. Thank you



Hrishikesh Gadre-3 wrote
 Also the version number is encoded (at least) in the build file
 
 https://github.com/apache/lucene-solr/blob/817303840fce547a1557e330e93e5a8ac0618f34/lucene/common-build.xml#L32
 
 Hope this helps.
 
 Thanks
 Hrishikesh


Hrishikesh Gadre-3 wrote
 Hi,
 
 You can get the released code base here
 
 https://github.com/apache/lucene-solr/releases
 
 Thanks
 Hrishikesh





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Checkout-the-source-Code-to-the-Release-Version-of-Solr-tp4187041p4187048.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Checkout the source Code to the Release Version of Solr?

2015-02-17 Thread O. Olson
Thank you Shawn. I have not updated my version in a while, so I prefer to do
it to 4.10 first, rather than go directly to 5.0. I'd be working on it
towards the end of this week.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Checkout-the-source-Code-to-the-Release-Version-of-Solr-tp4187041p4187055.html
Sent from the Solr - User mailing list archive at Nabble.com.


Checkout the source Code to the Release Version of Solr?

2015-02-17 Thread O. Olson
At this time the latest released version of Solr is 4.10.3. Is there any way
we can get the source code for this release version?

I tried to check out the Solr code from
http://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_4_10/ In the
commit log, I see a number of revisions, but nothing mentions which is the
release version. The latest revision is 1657441 on Feb 4. Does this
correspond to 4.10.3? If not, how do I go about getting the source code
of 4.10.3?

I'm also curious where the version number is embedded i.e. is it in a file
somewhere?

I want to ensure I am using the released version, and not some bug fixes
after the version got released. 

Thank you in anticipation.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Checkout-the-source-Code-to-the-Release-Version-of-Solr-tp4187041.html
Sent from the Solr - User mailing list archive at Nabble.com.


Detect ongoing Solr Import and its Completion

2015-02-05 Thread O. Olson
My setup is fairly similar to the examples. I start a Solr Import using the
UI i.e. I go to:
http://localhost:8983/solr/#/corename/dataimport//dataimport  and click the
Execute button to start the Import. 

First, I'm curious if there is a way of figuring out if there is an import
running. I thought one of the ways to do this is to look at the core Statistics
page, i.e. at http://localhost:8983/solr/#/corename, and look at the value of
Current. If it is red, it means an import is running; if it is green, the
import has either completed or is not running.

My problem is that initially, for the first minute or two, though the import is
running, the value of Current on the Statistics page is still green. Is
there a way of definitively determining if an import is currently running in
Solr?

Second, is there a way of determining if a Solr import has completed? I
normally wait for the red value of Current on the Statistics page to become
green, and that is how I detect the completion of the import.

Thank you in advance




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Detect-ongoing-Solr-Import-and-its-Completion-tp4184273.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Detect ongoing Solr Import and its Completion

2015-02-05 Thread O. Olson
Thank you very much Alvaro and Shawn. The DataImport Status command was what
I was looking for. I have tried it a bit, and I feel the output is good
enough for me. 
Thanks again



Alvaro Cabrerizo wrote
 Maybe you are asking for the status command. Currently this is the url I
 invoke for checking whether the import process is running (or has failed)
 
 From the cwiki:
 
 The URL is http://host:port/solr/collection_name/dataimport?command=status.
 It returns statistics on the number of documents created, deleted, queries
 run, rows fetched, status, and so on.
 
 Hope it helps.



Shawn Heisey-2 wrote
 The actual dataimport API (not the admin UI link that you included
 above, which *uses* the dataimport API) is the only way I know of for
 sure to detect the import status.  The default command is status.
 
 http://server:port/solr/corename/dataimport
 
 Here's a wiki page with info on this API:
 
 http://wiki.apache.org/solr/DataImportHandler#Commands
 
 Unfortunately, interpreting the status is not straightforward for a
 program.  It's pretty easy for a human to interpret it on sight, but a
 program must examine several aspects of the status response to determine
 the status, success, or failure of an import.  There are bug reports
 about this, but it's a thorny problem that has not yet been solved.
 
 https://issues.apache.org/jira/browse/SOLR-3319
 
 Thanks,
 Shawn
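
A minimal sketch of polling the status programmatically, assuming the
/dataimport?command=status XML response carries a top-level
<str name="status"> element reported as "busy" while an import runs and
"idle" otherwise (as Shawn notes, judging success or failure reliably
needs more than this):

```python
import xml.etree.ElementTree as ET

def import_is_running(status_xml: str) -> bool:
    """Return True if the DIH status response says an import is in progress.

    Assumes the response carries <str name="status"> with text "busy"
    during an import and "idle" afterwards.
    """
    root = ET.fromstring(status_xml)
    for elem in root.iter("str"):
        if elem.get("name") == "status":
            return elem.text == "busy"
    return False

# Sample responses (abbreviated; real responses carry many more statistics):
busy = '<response><str name="status">busy</str></response>'
idle = '<response><str name="status">idle</str></response>'
print(import_is_running(busy), import_is_running(idle))  # → True False
```

Polling this in a loop until the status flips back to idle covers the
second question (detecting completion) as well.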







Re: Where can we set the parameters in Solr Config?

2015-02-04 Thread O. Olson
Thank you Alex and Jack for pointing out solrcore.properties and
core.properties files. This is much better than specifying these on the
command line. I think I need to use the solrcore.properties. I will try it
in the next few days. Thanks again.


Alexandre Rafalovitch wrote
 core.properties?
 https://cwiki.apache.org/confluence/display/solr/Configuring+solrconfig.xml#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles
 
 Regards.
 Alex


Jack Krupansky-3 wrote
 The Solr properties can also be defined in solrcore.properties and
 core.properties files:
 https://cwiki.apache.org/confluence/display/solr/Configuring+solrconfig.xml
 
 
 -- Jack Krupansky
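
For reference, solrcore.properties is a plain Java properties file placed in
the core's conf directory; a minimal sketch (the second property name is
purely illustrative):

```properties
# conf/solrcore.properties — loaded automatically when the core starts
solr.lock.type=native
my.custom.param=somevalue
```

Any property defined here can then be referenced in solrconfig.xml with the
${name} or ${name:default} syntax.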







Where can we set the parameters in Solr Config?

2015-02-03 Thread O. Olson
I'm sorry if this is a basic question, but I am curious where, or at least
how, we can set the parameters in the solrconfig.xml.

E.g. Consider the solrconfig.xml shown here:
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_10/solr/example/example-DIH/solr/db/conf/solrconfig.xml?revision=1638496&view=markup
 

There seem to be a lot of entries of the form
${ParameterName:Value}
E.g. 
<lockType>${solr.lock.type:native}</lockType>

Where do these parameter values get set? Thank you in anticipation. 
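
For what it's worth, the ${name:default} syntax substitutes a value at
config-load time: Solr looks the name up among JVM system properties (and
per-core property files), falling back to the default after the colon:

```xml
<!-- uses system property solr.lock.type if set, else "native" -->
<lockType>${solr.lock.type:native}</lockType>
```

Starting Solr with java -Dsolr.lock.type=simple -jar start.jar would then
override the "native" default.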






Re: Where can we set the parameters in Solr Config?

2015-02-03 Thread O. Olson
Thank you Jim. I was hoping there is an alternative to putting the
parameters on the command line, which would be a pain if there are more than
a few parameters, e.g. a config file.

Thanks again


Jim.Musil wrote
 We set them as extra parameters sent to the servlet container (Jetty or Tomcat),
 
 e.g. java -Dsolr.lock.type=native -jar start.jar
 
 Jim







Solr Suggester Autocomplete Working Example

2015-02-02 Thread O. Olson
Hi,

I am wondering if anyone can point me to a website that uses Solr's
Suggester or Autocomplete or whatever you call it. I am looking for
something that is closer to the default provided in the examples, but is
also used commercially. 

I have a local Solr installation that is on an intranet. (Sorry I cannot
post it here.)  Unfortunately, the suggestions it provides do not seem to
be OK. By this I mean in comparison to Google, which I know does not use
Solr. 

For example, when I type the string "sto" into my installation, I get
suggested values like "storag", which is not a complete word, i.e. it is
missing the 'e' at the end. On the other hand, when I use Google, I get
complete words like "stock market" or "stopwatch" etc.

I know Google does not use Solr. I also know that I do not have the
capability to do a lot of customizations to Solr that are much beyond the
defaults and changing a few settings. Hence I am curious if there is a
website out there that uses Suggester or Autocomplete where I can compare
the capabilities with my own. 

Thank you






Re: Solr Suggester Autocomplete Working Example

2015-02-02 Thread O. Olson
Thank you Michael. I will look at safaribooksonline.com later today when I
create my account. 

I am not sure how to use AnalyzingInfixSuggester. I googled a bit, and I can
find the source code, but not how to use it. 

You are perfectly correct when you say that I am using a field also used for
searching and which has been stemmed. I need to look into setting up another
field for the suggester. I will post here when I have questions about this.

Thanks again.



Michael Sokolov-3 wrote
 Please go ahead and play with autocomplete on safaribooksonline.com/home 
 - if you are not a subscriber you will have to sign up for a free 
 trial.  We use the AnalyzingInfixSuggester.  From your description, it 
 sounds as if you are building completions from a field that you also use 
 for searching -- maybe it is stemmed, and that's why you are seeing the 
 weird partial words.  To get good results from the suggester you will 
 probably need to set up a special field to use as a source of 
 suggestions that uses appropriate text analysis.
 
 -Mike
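
A sketch of the dedicated suggestion field Michael describes (field and
type names are illustrative; the analyzer deliberately omits stemming so
suggestions come out as whole words):

```xml
<fieldType name="textSuggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="suggest" type="textSuggest" indexed="true" stored="true"
       multiValued="true"/>
<copyField source="name" dest="suggest"/>
```

The suggester component would then be pointed at the suggest field rather
than the stemmed search field.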







Re: Solr Suggester Autocomplete Working Example

2015-02-02 Thread O. Olson
Alexandre Rafalovitch wrote
 Actually, you have a capability to do unbelievable level of
 customization in Solr, starting from schema definition and down to
 writing custom components in Java. Or even completely rebuilding Solr
 the way you want from sources. Or was that a reference to your current
 skills rather than Solr's? I think that should be fixable as
 well. Just keep learning and asking questions. We'll try to help.
 
 As to the suggester, it may make sense to explain what kind of text
 you are providing and what results you might be expecting. A bit more
 details that you've given already. There are several different
 implementations, each with its own trade-offs.
 
 Regards,
Alex.

Sorry Alex, I am just a bit dumb. That reference was regarding my skills not
Solr's. I think Michael pointed out one of my problems i.e. I was using the
Search field that had been stemmed. I will look at creating an alternate
field just for the suggester. Thank you.






Re: Is there a problem with -Infinity as boost?

2014-10-21 Thread O. Olson
Thank you Walter. I liked your solution! This is what I was looking for i.e.

boost=log(sum(1,qty))

O. O.


Walter Underwood wrote
 The usual fix for this is log(1+qty). If you might have negative values,
 you can use log(max(1,qty)).
 
 wunder
 Walter Underwood
 wunder@...
 http://observer.wunderwood.org/
 
 On Oct 20, 2014, at 3:04 PM, O. Olson <olson_ord@...> wrote:
 
 
 I am considering using a boost as follows: 
 
 boost=log(qty)
 
 Where qty is the quantity in stock of a given product i.e. qty could be
 0,
 1, 2, 3, … etc. The problem I see is that log(0) is -Infinity. Would this
 be
 a problem for Solr? For me it is not a problem because 
 log(0) < log(1) < log(2) etc. 
 
 I'd be grateful for any thoughts. One alternative is to use max e.g.
 boost=max(log(qty), -1) 
 
 But still this would cause Solr to compute the -Infinity and then discard
 it.  So can I use an expression for boost that would result in -Infinity? 
 
 Thank you
 O. O.
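
To see why sum(1,qty) fixes the singularity: Solr's log function query is
base 10 (an assumption worth verifying for your version), so
log(sum(1,qty)) maps qty=0 to a boost of 0.0 instead of -Infinity. A quick
sketch:

```python
import math

def boost(qty: int) -> float:
    """Mirrors boost=log(sum(1,qty)): 0.0 at qty=0, growing slowly after."""
    return math.log10(1 + qty)

print([round(boost(q), 4) for q in (0, 1, 9, 99)])  # → [0.0, 0.301, 1.0, 2.0]
```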








Is there a problem with -Infinity as boost?

2014-10-20 Thread O. Olson
I am considering using a boost as follows: 

boost=log(qty)

Where qty is the quantity in stock of a given product i.e. qty could be 0,
1, 2, 3, … etc. The problem I see is that log(0) is -Infinity. Would this be
a problem for Solr? For me it is not a problem because 
log(0) < log(1) < log(2) etc. 

I'd be grateful for any thoughts. One alternative is to use max e.g.
boost=max(log(qty), -1) 

But still this would cause Solr to compute the -Infinity and then discard
it.  So can I use an expression for boost that would result in -Infinity? 

Thank you
O. O.






Re: Setting of Default Boost in Edismax Search Handler

2014-09-26 Thread O. Olson
I'm grateful to elyograg and erikhatcher on the #solr IRC channel for helping
me with this question. They first pointed me to the edismax bf (boost
function) parameter documented at
http://wiki.apache.org/solr/ExtendedDisMax#bf_.28Boost_Function.2C_additive.29
and asked me to put the following in my solrconfig.xml where I define my
request handler: 

<str name="bf">log(qty)</str>

This worked perfectly for me.
Thanks again.
O. O. 
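
In context, the parameter sits among the handler defaults; a sketch
assuming an edismax handler like the /browse example (qty must be an
indexed numeric field):

```xml
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">text</str>
    <str name="bf">log(qty)</str>
  </lst>
</requestHandler>
```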






Setting of Default Boost in Edismax Search Handler

2014-09-25 Thread O. Olson
I have a setup very similar to the /browse handler in the example
(http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/example-DIH/solr/db/conf/solrconfig.xml?view=markup)
  

I am curious if it is possible to set a default boost function (e.g.
bf=log(qty)), so that all query results would reflect it.

Thank you,
O. O.






Re: Solr Boosting Unique Values

2014-09-23 Thread O. Olson
Thank you Erick for your prompt response. I'm sorry I could not get back to
you earlier. 

My current setup does not use the ImageUrl field for the search (more
specifically as the default search field). The ImageUrl field contains a URL
to the image which is for most part a GUID, which is meaningless to users.
However, I would like to note that the ImageUrl field is Indexed and Stored. 

I'm curious how I should use tf() for the boost. On the face of it, it seems
to be what I want, but I cannot figure out how to use it. Similar to the
example in my original post: 

bf=log(qty)

I cannot do: 
bf=tf(ImageUrl,field(ImageUrl))

I think it is having a problem with field(ImageUrl). If I replace it with
something static like 'http://domain/pathtoimage.jpg' then it works.
However, I would like it to have the value of the ImageUrl field instead of
this static value. 

As I mentioned in my original post, I would like to boost unique images. The
image URLs are not part of the search terms. tf() seems to be what I am
looking for, if I can get the current ImageUrl field value into the
function.

Thanks again,
O. O.



Erick Erickson wrote
 This should be happening automatically by the tf/idf
 calculations, which weighs terms that are rare in the
 index more heavily than ones that are more common.
 
 That said, at very low numbers this may be invisible;
 I'm not sure the relevance calculations for 3 as opposed
 to 1 are very consequential.
 
 However, you _do_ have access to the tf in the Function Queries,
 see: https://cwiki.apache.org/confluence/display/solr/Function+Queries
 
 You could manipulate the scores of your docs by getting
 creative with these I think for your particular case.
 
 Best,
 Erick
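
Since tf() and termfreq() take a literal term rather than another field's
value, one possible workaround (an assumption, not something suggested in
this thread) is to precompute a duplicate count per ImageUrl at index time
and boost its reciprocal:

```xml
<!-- ImageDupCount is a hypothetical int field filled during indexing with
     the number of products sharing this ImageUrl (1 = unique image).
     recip(x,1,1,0) = 1/x, so a unique image boosts 1.0 and an image shared
     by four products boosts 0.25. -->
<str name="bf">recip(field(ImageDupCount),1,1,0)</str>
```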







Solr Boosting Unique Values

2014-09-22 Thread O. Olson
I use Solr to index some products that have an ImageUrl field. Obviously some
of the images are duplicates. I would like to boost the rankings of products
that have unique images (i.e. more specifically, unique ImageUrl field
values, because I don't deal with the image binary). 

By this I mean, if a certain product has a value in the ImageUrl not used by
any other product, it would be boosted more than another product which has a
value in the ImageUrl used by 3 other products. 

I hope I have explained that correctly. If not, please ask and I would try
again. 

For e.g. if I want to boost the products with quantity, I can add 

bf=log(qty) 

to the request. Is there some similar function I can add to the ImageUrl
field to boost unique values?

Thank you in advance,
O. O.






Boost based on match in separate field

2014-09-11 Thread O. Olson
I have an index of books in Solr. I copy all the fields to a field called
"text" and search on it, i.e. in my schema.xml I have: 
  <copyField source="*" dest="text"/>

Then in my solrconfig.xml (similar to the velocity example in
example\example-DIH\solr\db) I use the edismax parser and I have the
query field (qf) simply as: 
   <str name="qf">text</str>

So far my search works. However, I would like to know if there is a way to
boost certain results based on a match in a different field. 

In my case, I have another field called "category" which holds the category
the book is under. I would like the search to boost results if there is
a match in the category field. 
E.g. if I enter the search query "religion", which should match the
category "religion", I would like books like the Bible, Quran, Vedas,
etc., which are in the religion category, to be boosted and come at the top
of the list. Instead I see books on religious history and philosophy come at
the top. 

I understand that the search is doing what it is configured to do, i.e. the
Bible, Quran, Vedas, etc. do not contain the word/stem "religion" as
frequently as books on religious history and philosophy. My question is how
do I change the current search such that, if there is a match in the
category field, the results from that category ("religion" in this case) get
boosted? For the search query "religion", the books in the religion category
should come before the books listed under history.  

I'd be grateful if anyone can show me how to change my search to take the
category field into account.  Setting my qf to "text^0.5 category^3" does
not seem to work.
Thank you in advance,
O. O.
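
One hedged way to fold category in (assuming category is an analyzed text
field; if it is a plain string type, user queries will rarely match it
exactly, which could explain why the qf boost appeared to do nothing):

```xml
<!-- weights are illustrative starting points, not tuned values -->
<str name="defType">edismax</str>
<str name="qf">text^0.5 category^3.0</str>
<str name="pf">category^5.0</str>
```

The pf (phrase fields) entry additionally rewards documents whose category
matches the whole query as a phrase.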






Re: CopyField Wildcard Exception possible?

2014-08-30 Thread O. Olson
Thank you Ahmet. I am not familiar with using the ScriptUpdateProcessor, but
I will look into it. I am also not sure how badly this would affect import
performance.
O. O.
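
For reference, a script update processor is wired into solrconfig.xml
roughly like this (the script file name is illustrative; the script itself
would copy every field except the excluded ones into text):

```xml
<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">update-script.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```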





Re: CopyField Wildcard Exception possible?

2014-08-29 Thread O. Olson
Thank you Joe. I am not familiar with creating a JIRA ticket. I was however
hoping that there might be a solution to this. If there is none, then I
would consider explicitly specifying the fields.
O. O.





CopyField Wildcard Exception possible?

2014-08-28 Thread O. Olson
I have hundreds of fields of the following form in my schema.xml: 

<field name="F10434" type="string" indexed="true" stored="true"
       multiValued="true"/>
<field name="B20215" type="string" indexed="true" stored="true"
       multiValued="true"/>
  .

I also have a field 'text' that is set as the Default Search Field:

<field name="text" type="text" indexed="true" stored="false"
       multiValued="true"/>

I populate this 'text' field using copyField as: 

<copyField source="*" dest="text"/>

This '*' worked so far. However, I now want to exclude some of the fields
from this i.e. I would like 'text' to contain everything (hundreds of
fields) except a few. Is there any way to do this?

One way would be to expand the '*' explicitly, e.g. 

<copyField source="F10434" dest="text"/>
<copyField source="B20215" dest="text"/>
 

and from this list I would leave out the ones I do not want. Is there an
alternative to this? (I would like an alternative because maintaining these
copyFields explicitly would be long and error-prone.)


Thank you
O. O.






Re: Understanding the Debug explanations for Query Result Scoring/Ranking

2014-07-28 Thread O. Olson
Thank you very much Chris. I was not aware of debug.explain.structured. It
seems to be what I was looking for. 

Thanks also to Jack Krupansky. Yes, delving into those numbers would be my
next step, but I will get to that later.
O. O.


Chris Hostetter-3 wrote
 Just to be clear, regardless of *which* response writer you use (xml, 
 ruby, json, etc...) the default behavior is to include the score 
 explanation as a single string which uses tabs/newlines to convey the 
 nesting (this nesting is visible if you view the raw response, no matter 
 what ResponseWriter).
 
 You can however add a param indicating that you want the explanation 
 information to be returned as *structured data* instead of a simple 
 string...
 
 https://wiki.apache.org/solr/CommonQueryParameters#debug.explain.structured
 
 ...if you want to programmatically process debug info, this is the 
 recommended way to do so.
 -Hoss
 http://www.lucidworks.com/







Re: Understanding the Debug explanations for Query Result Scoring/Ranking

2014-07-25 Thread O. Olson
Thank you Uwe. Unfortunately, I could not get your explain solr website to
work. I always get an error saying Ops. We have internal server error. This
event was logged. We will try fix this soon. We are sorry for
inconvenience.

At this point, I know that I need some technical background to
understand how these numbers are calculated. However, even with that, I am
sure that the format of this output is not obvious. I am curious about the
documentation of this output format. It seems to be unintelligible. 

If this is not documented anywhere, can someone point me to the class that
produces this output?

Thank you,
O. O.


an6 wrote
 Hi,
 
 to get an idea of the meaning of all this numbers, have a look on 
 http://explain.solr.pl. I like this tool, it's great.
 
 Uwe







Re: Understanding the Debug explanations for Query Result Scoring/Ranking

2014-07-25 Thread O. Olson
Thank you very much Erik. This is exactly what I was looking for. While at
the moment I have no clue about these numbers, the ruby formatting makes
them much easier to understand.

Thanks to you Koji. I'm sorry I did not acknowledge you before. I think
Erik's solution is what I was looking for.
O. O.



Erik Hatcher-4 wrote
 The format of the XML explain output is not indented or very readable. 
 When I really need to see the explain indented, I use wt=ruby&indent=true
 (I don’t think the indent parameter is relevant for the explain output,
 but I use it anyway)
 
   Erik







Understanding the Debug explanations for Query Result Scoring/Ranking

2014-07-24 Thread O. Olson
Hi,

If you add debug=true to the Solr request (and wt=xml if your
current output is not XML), you get a node in the resulting XML that
is named "debug". This has a child node called "explain",
which has a list showing why the results are ranked in a particular order.
I'm curious if there is some documentation on understanding these
numbers/results. 

I am new to Solr, so I apologize if I am using the wrong terms to
describe my problem. I am also aware of
http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
though I have not completely understood it. 

My problem is trying to understand something like this: 

1.5797625 = (MATCH) sum of: 0.4717142 = (MATCH) weight(text:televis in
44109) [DefaultSimilarity], result of: 0.4717142 = score(doc=44109,freq=1.0
= termFreq=1.0 ), product of: 0.71447384 = queryWeight, product of:
7.0424104 = idf(docFreq=896, maxDocs=377553) 0.10145303 = queryNorm 0.660226
= fieldWeight in 44109, product of: 1.0 = tf(freq=1.0), with freq of: 1.0 =
termFreq=1.0 7.0424104 = idf(docFreq=896, maxDocs=377553) 0.09375 =
fieldNorm(doc=44109) 1.1080483 = (MATCH) weight(text:tv in 44109)
[DefaultSimilarity], result of: 1.1080483 = score(doc=44109,freq=6.0 =
termFreq=6.0 ), product of: 0.6996622 = queryWeight, product of: 6.896415 =
idf(docFreq=1037, maxDocs=377553) 0.10145303 = queryNorm 1.5836904 =
fieldWeight in 44109, product of: 2.4494898 = tf(freq=6.0), with freq of:
6.0 = termFreq=6.0 6.896415 = idf(docFreq=1037, maxDocs=377553) 0.09375 =
fieldNorm(doc=44109)

Note: I searched for "televisions". My search field is a single
catch-all field. The edismax parser seems to break up my search term into
"televis" and "tv".

Is there some documentation on how to understand these numbers? They do not
seem to be properly delimited. At the minimum, I can understand something
like: 
1.5797625 = 0.4717142 + 1.1080483
and
0.71447384 = 7.0424104 * 0.10145303

But I cannot understand whether something like "0.10145303 = queryNorm
0.660226 = fieldWeight in 44109" is used in the calculation anywhere. Also,
since there were only two terms (televis and tv), I could use subtraction to
find out that 1.1080483 was the start of a new result.

I'd also appreciate if someone can tell me which class dumps out the above
data. If I know it, I can edit that class to make the output a bit more
understandable for me.

Thank you,
O. O.
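
The arithmetic in the dump can be checked term by term; a small sketch
recomputing the "televis" clause under the DefaultSimilarity formula
(clause score = queryWeight * fieldWeight, queryWeight = idf * queryNorm,
fieldWeight = tf * idf * fieldNorm):

```python
import math

# Numbers taken directly from the explain dump above.
idf, query_norm, field_norm = 7.0424104, 0.10145303, 0.09375
tf = 1.0                                  # tf = sqrt(termFreq), termFreq = 1.0

query_weight = idf * query_norm           # 0.71447384 in the dump
field_weight = tf * idf * field_norm      # 0.660226   in the dump
clause_score = query_weight * field_weight  # 0.4717142 in the dump

total = clause_score + 1.1080483          # plus the "tv" clause
assert math.isclose(query_weight, 0.71447384, abs_tol=1e-6)
assert math.isclose(field_weight, 0.660226, abs_tol=1e-6)
assert math.isclose(clause_score, 0.4717142, abs_tol=1e-5)
assert math.isclose(total, 1.5797625, abs_tol=1e-5)
```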








Re: Debug different Results from different Request Handlers

2014-06-18 Thread O. Olson
Thank you Erik (and to steffkes who helped me on the IRC #Solr Chat). Sorry
for the delay in responding, but I got this to work. 

Your suggestion about adding debug=true to the query helped me. Since I was
adding this to the Velocity request handler, I could not see the debug
results, but when I added wt=xml, i.e. /products?q=hp|lync&debug=true&wt=xml,
I could see the parsed query as well as the parser used for each handler. 

Thanks also to steffkes, who answered my question in the original post (on
IRC), i.e. both of my handlers go through
org.apache.solr.servlet.SolrDispatchFilter; in particular, its doFilter()
method was what I was looking for.

Also, as steffkes pointed out (from my original post), the /products
request handler uses the ExtendedDismaxQParser whereas the second /search or
/select request handler uses the LuceneQParser. It seems that these two
parsers handle the | sign very differently. For my limited private
installation, I decided to go to the common base class of
ExtendedDismaxQParser and LuceneQParser, i.e. QParser. There, in the
constructor, I strip out the | sign from the qstr parameter. This is
probably the dirtiest way to get this to work, but it works for now. 

Thanks again to you all.
O. O. 

 






Re: Debug different Results from different Request Handlers

2014-06-14 Thread O. Olson
Thank you Erik. I tried /products?q=hp|lync&wt=xml and I see no results, i.e.
numFound=0, so I think there is something wrong. You are correct that the
VRW is not the problem but the query parser. Could you please let me know
how to determine the query parser?

For most part I have not changed these request handlers from the Solr
examples. The Request Handler that uses Apache Velocity looks like: 

<requestHandler name="/products" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="wt">velocity</str>
    <str name="v.template">browse</str>
    <str name="debugQuery">true</str>
    <str name="v.base_dir">VMTemplates</str>
    <str name="v.layout">layout</str>
    <str name="title">Solritas</str>
    <str name="defType">edismax</str>
    <str name="qf">
      text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
      title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
    </str>
    <str name="df">text</str>
    <str name="mm">100%</str>
    <str name="q.alt">*:*</str>
    <str name="rows">10</str>
    <str name="fl">*,score</str>
    <str name="mlt.qf">
      text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
      title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0
    </str>
    <str name="mlt.fl">text,features,name,sku,id,manu,cat,title,description,keywords,author,resourcename</str>
    <int name="mlt.count">3</int>
    <str name="facet">on</str>
    <str name="facet.field">CategoryID</str>
    <str name="spellcheck">on</str>
    <str name="spellcheck.extendedResults">false</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.alternativeTermCount">2</str>
    <str name="spellcheck.maxResultsForSuggest">5</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.collateExtendedResults">true</str>
    <str name="spellcheck.maxCollationTries">5</str>
    <str name="spellcheck.maxCollations">3</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>

And the regular XML handler looks like: 

<requestHandler name="/search"
                class="org.apache.solr.handler.component.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>

Does this show which is the Query Parser? I can post more of my
solrconfig.xml if necessary. 

I am curious where the Query Parser hands over the parameters to the Solr
engine that would be common irrespective of Request Handler i.e. I am trying
to put debugging statements into the common code so that these can dump out
intermediate results to the log. 

Thanks again Erik.
O. O.








Debug different Results from different Request Handlers

2014-06-13 Thread O. Olson
Hi,

In my solrcofig.xml I have one Request Handler displaying the results using 
Apache Velocity: 

  <requestHandler name="/products" class="solr.SearchHandler">

And another with regular XML: 
<requestHandler name="/search" 
                class="org.apache.solr.handler.component.SearchHandler">

I am seeing different results when I use these two handlers. 

Search Query: hp|lync  (Or on the URL  q=hp%7Elync)

I see 0 results when I use the first handler (Velocity), but I see many results 
(tens) with the second handler. I am trying to debug why this problem occurs.  
I am certain the problem is with the first handler, and I would be grateful if 
anyone can help me debug this. I do not know Solr well enough, so a few 
pointers could help. 

1. First, I would like to know if class="solr.SearchHandler" and 
class="org.apache.solr.handler.component.SearchHandler" are the same? If not, 
what does solr.SearchHandler refer to?

2. Second, I am working with the source of Solr 4.7 (yes, it is a bit old, but 
I don’t think it has changed fundamentally). I have put log.debug() statements 
in the org.apache.solr.response.VelocityResponseWriter.write() method to verify 
that my query is not getting mangled with the URL encoding, and it is not. So, 
since I am getting different results for the same queries, I am curious to see 
what the core Solr engine is receiving when I run the same query from different 
handlers. Could someone tell me the class which has the core Solr engine that 
is used irrespective of which Request Handler makes the request? I am trying to 
put debug statements into this class to log the value of the query parameter 
that it receives. The results are different, so I think one or more parameters 
are different.

Thank you in advance,
O. O.



DataImport using SqlEntityProcessor running Out of Memory

2014-05-11 Thread O. Olson
I have a data schema which is hierarchical, i.e. I have an entity and a number
of attributes. For a small subset of the data, about 300 MB, I can do the
import with 3 GB of memory. With the entire 4 GB dataset, I find I cannot
do the import even with 9 GB of memory. 
I am using the SqlEntityProcessor as below: 

<dataConfig>
  <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://localhost\MSSQLSERVER;databaseName=SolrDB;user=solrusr;password=solrusr;"/>
  <document>
    <entity name="Entity" query="SELECT EntID, Image FROM ENTITY_TABLE">
      <field column="EntID" name="EntID"/>
      <field column="Image" name="Image"/>

      <entity name="EntityAttribute1"
              query="SELECT AttributeValue, EntID FROM ATTR_TABLE WHERE AttributeID=1"
              cacheKey="EntID"
              cacheLookup="Entity.EntID"
              processor="SqlEntityProcessor" cacheImpl="SortedMapBackedCache">
        <field column="AttributeValue" name="EntityAttribute1"/>
      </entity>
      <entity name="EntityAttribute2"
              query="SELECT AttributeValue, EntID FROM ATTR_TABLE WHERE AttributeID=2"
              cacheKey="EntID"
              cacheLookup="Entity.EntID"
              processor="SqlEntityProcessor" cacheImpl="SortedMapBackedCache">
        <field column="AttributeValue" name="EntityAttribute2"/>
      </entity>

    </entity>
  </document>
</dataConfig>



What is the best way to import this data? Doing it without a cache results
in many SQL queries. With the cache, I run out of memory. 

I’m curious why 4GB of data cannot entirely fit in memory. One thing I need
to mention is that I have about 400 to 500 attributes. 

Thanks in advance for any helpful advice. 
O. O. 
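
For comparison, the non-cached variant of a sub-entity looks roughly like
this; DIH substitutes ${Entity.EntID} per parent row, so it issues one
query per entity per attribute (slow, but with roughly constant memory):

```xml
<entity name="EntityAttribute1"
        query="SELECT AttributeValue FROM ATTR_TABLE
               WHERE AttributeID=1 AND EntID='${Entity.EntID}'"
        processor="SqlEntityProcessor">
  <field column="AttributeValue" name="EntityAttribute1"/>
</entity>
```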






Re: Change Velocity Template Directory in Solr 4.6

2013-12-12 Thread O. Olson
Thank you very much for the confirmation iorixxx. When I started this thread
on Dec. 6, I did not know about the confluence wiki
(https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide).
I learned about it through another thread I started
(http://lucene.472066.n3.nabble.com/Use-of-Deprecated-Classes-SortableIntField-SortableFloatField-SortableDoubleField-tp4105762p4106001.html).
I think that is much more up to date and has a lot more information than the
official Solr Wiki and I would be reading it before posting here.

Thank you again for your help.
O. O.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-in-Solr-4-6-tp4105381p4106467.html


Re: Change Velocity Template Directory in Solr 4.6

2013-12-11 Thread O. Olson
Thank you iorixxx. Yes, when I run: 

 java -Dsolr.allow.unsafe.resourceloading=true -jar start.jar

And I then load the root of my site, I get: 

ERROR - 2013-12-11 14:36:03.434; org.apache.solr.common.SolrException;
null:java.io.IOException: Unable to find resource 'browse.vm'
at
org.apache.solr.response.VelocityResponseWriter.getTemplate(VelocityResponseWriter.java:174)
at
org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:50)

stacktrace truncated


In the above case, in the solrconfig.xml I have set: 

<str name="v.base_dir">MyVMTemplates</str> 

And my velocity templates are in /corename/conf/MyVMTemplates. If you look
at the VelocityResponseWriter at
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_6/solr/contrib/velocity/src/java/org/apache/solr/response/VelocityResponseWriter.java?revision=1541081&view=markup
nowhere does it use v.base_dir. So it seems that you need to name the
velocity template directory "velocity". (I tried to set it to
/corename/conf/velocity and it works without any errors.) 

Thank you,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-in-Solr-4-6-tp4105381p4106232.html


Re: Use of Deprecated Classes: SortableIntField SortableFloatField SortableDoubleField

2013-12-10 Thread O. Olson
Thank you kydryavtsev andrey. Wow, this reference guide at
https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide
is a lot more detailed than the official Solr Wiki at
http://wiki.apache.org/solr/. Maybe those responsible for Solr should link
to it.


Thank you for confirming that my syntax in the schema.xml was correct. I
used the following: 

<fieldType name="sint" class="solr.TrieIntField" precisionStep="0"
positionIncrementGap="0" sortMissingLast="true" omitNorms="true"/>

And it seemed to work. For others: While positionIncrementGap is documented
in the wiki at
https://cwiki.apache.org/confluence/display/solr/Field+Type+Definitions+and+Properties
the precisionStep is covered at
http://lucene.apache.org/solr/4_6_0/solr-core/org/apache/solr/schema/TrieField.html#precisionStep
 

Thank you very much for your help.
O. O.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Use-of-Deprecated-Classes-SortableIntField-SortableFloatField-SortableDoubleField-tp4105762p4106001.html


Re: Change Velocity Template Directory in Solr 4.6

2013-12-10 Thread O. Olson
Hi,

Does anyone have a clue regarding this? Or would this question be more
appropriate on the Solr-Dev?

After posting this I realized that the template directory needs to be named
velocity even if you place it under /core/conf/. This seems to be too
restrictive.

O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-in-Solr-4-6-tp4105381p4106012.html


Replacing Deprecated CachedSqlEntityProcessor with SqlEntityProcessor with a cacheImpl parameter

2013-12-10 Thread O. Olson
Hi,

I am looking to replace the deprecated CachedSqlEntityProcessor with
SqlEntityProcessor with a cacheImpl parameter, but I cannot find
documentation.
 
The deprecation note at the top of
http://lucene.apache.org/solr/3_6_0/org/apache/solr/handler/dataimport/CachedSqlEntityProcessor.html
says that we need to replace CachedSqlEntityProcessor with
SqlEntityProcessor with a cacheImpl parameter. The wiki here does not
mention the cacheImpl parameter or its possible values:
https://cwiki.apache.org/confluence/display/solr/Uploading+Structured+Data+Store+Data+with+the+Data+Import+Handler#UploadingStructuredDataStoreDatawiththeDataImportHandler-EntityProcessors
 
 
An abbreviated version of my db-data-config.xml looks like: 
 
<entity name="Doc"
        query="SELECT DocID, Title FROM solr.DOCS_TABLE">
    <field column="DocID" name="DocID" />
    <field column="Title" name="Title" />
    <entity name="Cat1"
            query="SELECT CategoryName, DocID FROM solr.CAT_DOCS_MAP
                   WHERE CategoryLevel=1"
            cacheKey="DocID" cacheLookup="Doc.DocID"
            processor="CachedSqlEntityProcessor">
        <field column="CategoryName" name="Category1" />
    </entity>
</entity>
 
 
I am curious how I would use SqlEntityProcessor and turn on
caching (because I really need it). Or is that even possible? Can I do 
something like: 
 
 
<entity name="Doc"
        query="SELECT DocID, Title FROM solr.DOCS_TABLE">
    <field column="DocID" name="DocID" />
    <field column="Title" name="Title" />
    <entity name="Cat1"
            query="SELECT CategoryName, DocID FROM solr.CAT_DOCS_MAP
                   WHERE CategoryLevel=1"
            cacheKey="DocID" cacheLookup="Doc.DocID"
            processor="SqlEntityProcessor" cacheImpl="???">
        <field column="CategoryName" name="Category1" />
    </entity>
</entity>
 
What do I put in for cacheImpl? What are the possible values
for cacheImpl?
 
Thank you in advance for your help.
O. O.


Use of Deprecated Classes: SortableIntField SortableFloatField SortableDoubleField

2013-12-09 Thread O. Olson


I am attempting to migrate from Solr 4.3 to Solr 4.6. When I run the
example in 4.6, I get warnings about SortableIntField etc., asking me to
consult the documentation to replace them accordingly. 

If these classes are deprecated, I think it would not be a good idea to use
them in the examples, as in: 
http://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_4_6/solr/example/example-DIH/solr/db/conf/schema.xml
Here, weight, price and popularity seem to use the deprecated sfloat and sint. 

Does anyone know where I can find documentation to replace these classes in
my schema file? Thank you,
O. O.


Re: Use of Deprecated Classes: SortableIntField SortableFloatField SortableDoubleField

2013-12-09 Thread O. Olson
Thank you kydryavtsev andrey. Could you please suggest some examples. There
is no documentation on this. Also is there a reason why these classes are
not used in the examples even though they are deprecated?

I am looking for examples like below: Should I put the following in my
schema.xml file to use the TrieIntField:

<fieldType name="sint" class="solr.TrieIntField" sortMissingLast="true"
omitNorms="true"/>

Is this specification correct? Should it also have the sortMissingLast and
omitNorms, because I want something that I can use for sorting? I have no
clue how you get these.

Thank you again,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Use-of-Deprecated-Classes-SortableIntField-SortableFloatField-SortableDoubleField-tp4105762p4105781.html


Change Velocity Template Directory in Solr 4.6

2013-12-06 Thread O. Olson
I would like to know how to set the Velocity template directory in Solr. 

About 6 months ago I asked this question on this list: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-td4078120.html
At that time Erik Hatcher advised me to use v.base_dir in solrconfig.xml.
This worked perfectly in Solr 4.3. 

However, now I am attempting to move my code/data to Solr 4.6, and this does
not work, i.e. it does not recognize v.base_dir in solrconfig.xml. Doing a
diff of org.apache.solr.response.VelocityResponseWriter I can see that some
code has been removed from the getEngine() method in the new 4.6 version. I
was discussing this with hossman on the IRC, and he pointed me to
https://issues.apache.org/jira/browse/SOLR-4882 

I understand that this is a security issue and I am ready to take the risk
because for now this would only be used internally by non-technical folk.
Hossman pointed me to https://gist.github.com/hossman/7827910 This is a
system property, solr.allow.unsafe.resourceloading=true, that would
supposedly enable unsafe template loading from other locations. However this
does not work. (Here I am assuming I start up Solr with
java -Dsolr.allow.unsafe.resourceloading=true -jar start.jar, i.e. I have
tried setting this property on the command line.) 

Any ideas? If this has been changed, then someone might need to remove
v.base_dir from the documentation at
http://wiki.apache.org/solr/VelocityResponseWriter 
 
Thank you,
O. O.


Re: Customize Velocity Output, Utility Class or Custom Tool

2013-08-05 Thread O. Olson
Thank you very much Erik. At this point I have trouble compiling Solr (I
needed help from the IRC), so I am not qualified to submit a patch.
However, now that I know where this location is, I might consider creating
my own tool and putting it in there :-).

Thanks again, because I don’t think anyone else knew the answer.
O. O. 




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Customize-Velocity-Output-Utility-Class-or-Custom-Tool-tp4082051p4082661.html


Re: Customize Velocity Output, Utility Class or Custom Tool

2013-08-02 Thread O. Olson
Would this question be more appropriate on Solr-Dev?
Thank you in advance,
O. O. 


O. Olson wrote
 Hi,
 
   I am using Solr with the VelocityResponseWriter.
 http://wiki.apache.org/solr/VelocityResponseWriter  I am wondering if
 there is anyway to add my own Utility Class i.e. how do I put it in the
 Velocity Context. Or as an alternative to add my own Custom Tool? By the
 way, where is velocity-tools.xml?
 
 Thank you in advance,
 O. O.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Customize-Velocity-Output-Utility-Class-or-Custom-Tool-tp4082051p4082277.html


Customize Velocity Output, Utility Class or Custom Tool

2013-08-01 Thread O. Olson
Hi,

I am using Solr with the VelocityResponseWriter.
http://wiki.apache.org/solr/VelocityResponseWriter  I am wondering if there
is anyway to add my own Utility Class i.e. how do I put it in the Velocity
Context. Or as an alternative to add my own Custom Tool? By the way, where
is velocity-tools.xml?

Thank you in advance,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Customize-Velocity-Output-Utility-Class-or-Custom-Tool-tp4082051.html


Velocity Example: Where is #url_for_home defined?

2013-07-15 Thread O. Olson
I am new to using Velocity esp. with Solr. In the Velocity example provided,
I am curious where #url_for_home is set i.e. its value assigned? (It is used
a lot in the macros defined in VM_global_library.vm.)

Thank you in advance,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Velocity-Example-Where-is-url-for-home-defined-tp4078104.html


Change Velocity Template Directory

2013-07-15 Thread O. Olson
Is there any way to change the default Velocity directory where the Velocity
templates are stored? In the example download, I modified the solrconfig.xml
under the Solr Request Handler to add: 

<str name="v.base_dir">conf/mycustom/</str>

I have a mycustom directory under the conf directory for the example core,
but I still get the “Unable to find resource 'browse.vm'” exception/error. 

I actually renamed the velocity directory to mycustom. So it has all the
template files that Velocity needs - at least that’s what I figured.

Thank you in advance for any help,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-tp4078120.html


Re: Velocity Example: Where is #url_for_home defined?

2013-07-15 Thread O. Olson
Thank you very much Erik. That’s exactly what I was looking for. I can swear
I looked into VM_global_library.vm. I'm not sure how I missed it :-(
O. O.


Erik Hatcher-4 wrote
 #url_for_home is defined in conf/velocity/VM_global_library.vm.  Note that
 it builds upon #url_root defined just above it, so maybe that's what you
 want to adjust if you need to tinker with it.
 
 Erik





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Velocity-Example-Where-is-url-for-home-defined-tp4078104p4078186.html


Re: Change Velocity Template Directory

2013-07-15 Thread O. Olson
Thank you Erik. I did not think the Windows file/directory path format would
work for Solr. For others the following worked for me:
<str name="v.base_dir">C:\Users\MyUsername\Solr\example\example-DIH\solr\db\conf\mycustom\</str>



Erik Hatcher-4 wrote
 Try supplying an absolute path.  I'm away from my computer so can't check
 just yet, but it is probably coded to consider that value absolute since
 moving it generally means you want templates outside of your Solr conf/. 
 
Erik





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-tp4078120p4078188.html


Re: Debugging Solr XSL

2013-06-14 Thread O. Olson
Thank you Upayavira & Miguel. I decided to use Visual Studio – since I can at
least set breakpoints and do interactive debugging in the UI. I hope the way
Visual Studio treats XSL is the same as Solr - else I would have problems
:-).
Thanks again,
O.O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Debugging-Solr-XSL-tp4070368p4070572.html


Debugging Solr XSL

2013-06-13 Thread O. Olson
Hi,

I am attempting to transform the XML output of Solr using the
XsltResponseWriter http://wiki.apache.org/solr/XsltResponseWriter to HTML.
This works, but I am wondering if there is a way for me to debug my creation
of XSL. If there is any problem in the XSL you simply get a stack trace in
the Solr Output. 

For e.g., in adding an HTML link tag to my XSL, I forgot the closing slash,
i.e. I did ">" instead of "/>". I would just get a stack trace, nothing to
tell me what I did wrong. Another time I had a template match that was very
specific. I expected it to have precedence over the more general template.
It did not, and I had no clue why. I ultimately put in a priority to get my
expected value. 

I am new to XSL. Is there any other free tool that would help me debug XSL
that Solr would accept? I have Visual Studio (full version) that has XSLT
debugging – but I have not tried this as yet. Would Solr accept as valid
what Visual Studio OKs?

I’m sorry I am new to this. I’d be grateful for any pointers. 

Thank you,
O.O.
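Since Solr's XsltResponseWriter runs stylesheets through the standard JAXP API (javax.xml.transform), one way to iterate on an XSL file without restarting Solr is to feed it to the JDK transformer directly: a malformed stylesheet then fails at newTransformer() with a line/column location. The tiny XML and XSL strings below are made-up stand-ins for a Solr response and stylesheet, not from the original post.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslDebug {
    public static void main(String[] args) throws Exception {
        // Minimal stand-ins for a Solr XML response and a stylesheet
        String xml = "<response><result name=\"response\"><doc/></result></response>";
        String xsl =
            "<xsl:stylesheet version=\"1.0\""
          + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
          + "<xsl:output method=\"text\"/>"
          + "<xsl:template match=\"/response\">docs=<xsl:value-of"
          + " select=\"count(result/doc)\"/></xsl:template>"
          + "</xsl:stylesheet>";

        TransformerFactory tf = TransformerFactory.newInstance();
        // newTransformer throws TransformerConfigurationException with the
        // error location if the XSL is malformed -- useful for debugging
        Transformer t = tf.newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        System.out.println(out);
    }
}
```

Running the same stylesheet against a saved Solr response (wt=xml) this way exercises the same JAXP machinery, so errors reproduced here should match what Solr reports.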




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Debugging-Solr-XSL-tp4070368.html


Curious why Solr Jetty URL has a # sign?

2013-06-10 Thread O. Olson
Hi,

This may be a dumb question but I am curious why the sample Solr Jetty
results in a URL with a # sign e.g. http://localhost:8983/solr/#/~logging ?
Is there any way to get rid of it, so I could have something like:
http://localhost:8983/solr/~logging ? 

Thank you,
O. O. 




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Curious-why-Solr-Jetty-URL-has-a-sign-tp4069434.html


Re: Curious why Solr Jetty URL has a # sign?

2013-06-10 Thread O. Olson
Thank you Chris.

No, I do not have an XY Problem. I am new to Solr, Jetty and related
technology and was playing. I did not like the /#/ in the URL and felt that
it had no purpose. So, if I understand this correctly, is Solr using the #
as a jQuery hook to decide which view to show? Am I correct in this
interpretation? 

If what I said above is correct, could I write a Jetty rewrite rule to
eliminate the #? I could certainly write a rule to map /solr to the root /,
but I am not sure about the #. I don’t really have a need, I just wanted to
know what was possible. 

Thanks again,
O. O.



Chris Hostetter-3 wrote
 You're looking at the Solr UI which is a single page javascript/AJAX based 
 system that uses url fragments (after the hash) to record state about what 
 you are looking at in the UI
 
 some background...
 
 https://issues.apache.org/jira/browse/SOLR-4431?focusedCommentId=13596596&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13596596
 
 Why specifically does it concern/bother you about having a # in the UI 
 URL?   Smells like an XY Problem...
 
 https://people.apache.org/~hossman/#xyproblem
 XY Problem
 
 Your question appears to be an XY Problem ... that is: you are dealing
 with X, you are assuming Y will help you, and you are asking about Y
 without giving more details about the X so that we can understand the
 full issue.  Perhaps the best solution doesn't involve Y at all?
 See Also: http://www.perlmonks.org/index.pl?node_id=542341
 
 
 
 -Hoss





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Curious-why-Solr-Jetty-URL-has-a-sign-tp4069434p4069481.html


Re: Curious why Solr Jetty URL has a # sign?

2013-06-10 Thread O. Olson
Thank you Alex for the explanation. I was not aware of single page
application design. After a bit of google, it seems to be more popular than
I expected.
O. O.



Alexandre Rafalovitch wrote
 The # part is JavaScript URL. It is not seen by the server. It is part
 of a standard single-page-application design approach. So, it is not
 visible to Jetty rules, etc.
 
 If you don't have a problem here, I would suggest just taking this
 part on faith and continue to other parts of Solr
 
 Regards,
Alex.
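Alex's point can be checked with java.net.URI: the fragment introduced by # is parsed as a separate, purely client-side component, distinct from the path that an HTTP request would actually send to Jetty. A small sketch:

```java
import java.net.URI;

public class FragmentDemo {
    public static void main(String[] args) throws Exception {
        URI u = new URI("http://localhost:8983/solr/#/~logging");
        // The path is what the HTTP request line would carry to the server...
        System.out.println(u.getPath());      // /solr/
        // ...while the fragment stays in the browser for the admin UI's JS
        System.out.println(u.getFragment());  // /~logging
    }
}
```

Because the fragment never reaches the server, no Jetty rewrite rule can see or remove it.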





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Curious-why-Solr-Jetty-URL-has-a-sign-tp4069434p4069509.html


No files added to classloader from lib

2013-06-05 Thread O. Olson
Hi,

I downloaded Solr 4.3 and I am attempting to run and configure a 
separate
Solr instance under Jetty. I copied the Solr dist directory contents to a
directory called solrDist under the single core db that I was running. I
then attempted to get the DataImportHandler using the following in my
solrconfig.xml:

  <lib dir="solrDist/" regex="apache-solr-dataimporthandler-.*\.jar" />

In the log file, I see a lot of messages that the Jar Files in solrDist
were added to the classloader. E.g. 

…….
534  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-clustering-4.3.0.jar'
to classloader
534  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-core-4.3.0.jar'
to classloader
535  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-dataimporthandler-4.3.0.jar'
to classloader
535  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-dataimporthandler-extras-4.3.0.jar'
to classloader
535  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-langid-4.3.0.jar'
to classloader
535  [coreLoadExecutor-3-thread-1] INFO 
org.apache.solr.core.SolrResourceLoader  - Adding
'file:/C:/Users/MyUsername/Documents/Jetty/Jetty9/solr/db/lib/solr-solrj-4.3.0.jar'
to classloader

.

However in the end I get the following Warning:

570  [coreLoadExecutor-3-thread-1] WARN 
org.apache.solr.core.SolrResourceLoader  - No files added to classloader
from lib: solrDist/ (resolved as:
C:\Users\MyUsername\Documents\Jetty\Jetty9\solr\db\solrDist).

Why is this? I thought the Jar Files were added to the classloader, but the
last messages seems to say that none were added. I know that this is a
warning, but I am just curious. I’d be grateful to anyone who has an idea
regarding this.

Thank you,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/No-files-added-to-classloader-from-lib-tp4068374.html


Re: No files added to classloader from lib

2013-06-05 Thread O. Olson
Good call Jack. I totally missed that. I am curious how dataimport handler
worked before – if I made a mistake in the specification and it did not get
the jar. Anyway, it works now. Thanks again.
O.O.


apache-solr-dataimporthandler-.*\.jar - note that the apache- prefix has 
been removed from Solr jar files.

-- Jack Krupansky
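Jack's fix is easy to verify with a quick regex check against one of the jar names from the log output above: the old apache- prefixed pattern no longer matches the 4.3 jar names, so the lib directive silently loads nothing.

```java
import java.util.regex.Pattern;

public class LibRegexCheck {
    public static void main(String[] args) {
        String jar = "solr-dataimporthandler-4.3.0.jar";  // name from the log above
        // Pattern from the original solrconfig.xml -- no longer matches
        System.out.println(Pattern.matches("apache-solr-dataimporthandler-.*\\.jar", jar));
        // Same pattern with the dropped "apache-" prefix removed -- matches
        System.out.println(Pattern.matches("solr-dataimporthandler-.*\\.jar", jar));
    }
}
```

This also explains why the jars still appeared in the classloader messages: those came from the core's lib directory, not from the non-matching lib directive.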





--
View this message in context: 
http://lucene.472066.n3.nabble.com/No-files-added-to-classloader-from-lib-tp4068374p4068421.html


Re: Warning: no uniqueKey specified in schema.

2013-05-24 Thread O. Olson
Thank you Shawn for clearing this up. I was only using the “db” core, and
forgot that this example had a few other cores which have their own
schema.xml. I commented out this core in the solr.xml and now get no
warnings :-).

O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Warning-no-uniqueKey-specified-in-schema-tp4065791p4065944.html


Warning: no uniqueKey specified in schema.

2013-05-23 Thread O. Olson
Hi,

I just downloaded Apache Solr 4.3.0 from 
http://lucene.apache.org/solr/. I
then got into the /example directory and started Solr with: 

 java -Djava.util.logging.config.file=etc/logging.properties
 -Dsolr.solr.home=./example-DIH/solr/ -jar start.jar

I have not made any changes at this point and I get the following Warning:
no uniqueKey specified in schema. 

I have no clue why this error occurs because the schema.xml has
<uniqueKey>id</uniqueKey>. Isn’t this correctly defined? I have not changed
the examples in any way, just ran them. I would like to add that if I use
the normal Solr (not the one with the DataImportHandler): 

 java -Djava.util.logging.config.file=etc/logging.properties -jar start.jar

This warning does not occur here. I’d appreciate any clues on why this
warning occurs in the example-DIH.

Thank you,
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Warning-no-uniqueKey-specified-in-schema-tp4065791.html


RE: How do I use CachedSqlEntityProcessor?

2013-05-22 Thread O. Olson
Thank you bbarani. Unfortunately, this does not work. I do not get any
exception, and the documents import OK. However there is no Category1,
Category2 … etc. when I retrieve the documents.

I don’t think I am using the Alpha or Beta of 4.0. I think I downloaded the
plain vanilla release version. 
O. O.



bbarani wrote
 Try this..
 
 <entity name="Cat1"
         query="SELECT CategoryName,SKU from CAT_TABLE WHERE
                CategoryLevel=1"
         cacheKey="Cat1.SKU" cacheLookup="Product.SKU"
         processor="CachedSqlEntityProcessor">
     <field column="CategoryName" name="Category1" />
 </entity>
 
 Sample data import config:
 
 <entity name="property" query="select UID,name as name, value as value
         from opTable where type='${dataimporter.request.type}' and indexed='Y'"
         processor="CachedSqlEntityProcessor" cacheKey="UID"
         cacheLookup="object.uid"
         transformer="RegexTransformer,DateFormatTransformer,TemplateTransformer">
     <field column="value" name="${property.name}"/>  <!-- dynamic column -->
 </entity>
 
 Also not sure if you are using Alpha / Beta release of SOLR 4.0.
 
 In Solr 3.6, 3.6.1, 4.0-Alpha & 4.0-Beta, the cacheKey parameter was
 re-named cachePk. This is renamed back for 4.0 (& 3.6.2, if released).
 See SOLR-3850





--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919p4065309.html


RE: How do I use CachedSqlEntityProcessor?

2013-05-22 Thread O. Olson
Thank you very much James. Your suggestion worked exactly! I am curious why I
did not get any errors before. For others, the following worked for me: 

<entity name="Cat1"
        query="SELECT CategoryName, SKU from CAT_TABLE WHERE
               CategoryLevel=1"
        cacheKey="SKU" cacheLookup="Product.SKU"
        processor="CachedSqlEntityProcessor">
    <field column="CategoryName" name="Category1" />
</entity>

Similarly for other Categories i.e. Category2, Category3, etc. 

I am now going to try this for a larger dataset. I hope this works.
O.O.


Dyer, James-2 wrote
 There was a mistake in my last reply.  Your child entities need to SELECT
 on the join key so DIH has it to do the join.  So use SELECT SKU,
 CategoryName...
 
 James Dyer
 Ingram Content Group
 (615) 213-4311





--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919p4065342.html


RE: How do I use CachedSqlEntityProcessor?

2013-05-22 Thread O. Olson
Thank you guys, particularly James, very much. I just imported 200K documents
in a little more than 2 mins – which is great for me :-). Thank you Stefan.
I did not realize that it was not a syntax error and hence no error. Thank
you for clearing that up. 
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919p4065392.html


RE: Speed up import of Hierarchical Data

2013-05-22 Thread O. Olson
Just an update for others reading this thread: I had some trouble with
CachedSqlEntityProcessor and had it addressed in the thread "How do I use
CachedSqlEntityProcessor?"
(http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-td4064919.html)

I basically had to declare the child entities in the db-data-config.xml
like: 

<entity name="Cat1"
        query="SELECT CategoryName, SKU from CAT_TABLE WHERE
               CategoryLevel=1"
        cacheKey="SKU" cacheLookup="Product.SKU"
        processor="CachedSqlEntityProcessor">
    <field column="CategoryName" name="Category1" />
</entity>

Thanks to James and others for their help.
O. O.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Speed-up-import-of-Hierarchical-Data-tp4063924p4065400.html


How do I use CachedSqlEntityProcessor?

2013-05-21 Thread O. Olson
I am using the DataImportHandler to Query a SQL Server and populate Solr with
data that has hierarchical relationships. 

The following is an outline of my table structure: 


PROD_TABLE 
- SKU (Primary Key) 
- Title  (varchar) 
- Descr (varchar) 

CAT_TABLE 
- SKU (Foreign Key) 
-  CategoryLevel (int i.e. 1, 2, 3 …) 
- CategoryName  (varchar) 

I specify the SQL Query in the db-data-config.xml file – a snippet of which
looks like: 

<dataConfig>
<dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://localhost\"/>
<document>
<entity name="Product"
        query="SELECT SKU, Title, Descr FROM PROD_TABLE">
    <field column="SKU" name="SKU" />
    <field column="Title" name="Title" />
    <field column="Descr" name="Descr" />

    <entity name="Cat1"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=1">
        <field column="CategoryName" name="Category1" />
    </entity>
    <entity name="Cat2"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=2">
        <field column="CategoryName" name="Category2" />
    </entity>
    <entity name="Cat3"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=3">
        <field column="CategoryName" name="Category3" />
    </entity>

</entity>
</document>
</dataConfig>


Unfortunately this is a bit slow, and it was recommended to me to use the
CachedSqlEntityProcessor
(http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor).
Hence I modified my db-data-config.xml to look like: 

<dataConfig>
<dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
url="jdbc:sqlserver://localhost\"/>
<document>
<entity name="Product"
        query="SELECT SKU, Title, Descr FROM PROD_TABLE">
    <field column="SKU" name="SKU" />
    <field column="Title" name="Title" />
    <field column="Descr" name="Descr" />

    <entity name="Cat1"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=1"
            processor="CachedSqlEntityProcessor">
        <field column="CategoryName" name="Category1" />
    </entity>
    <entity name="Cat2"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=2"
            processor="CachedSqlEntityProcessor">
        <field column="CategoryName" name="Category2" />
    </entity>
    <entity name="Cat3"
            query="SELECT CategoryName from CAT_TABLE where
                   SKU='${Product.SKU}' AND CategoryLevel=3"
            processor="CachedSqlEntityProcessor">
        <field column="CategoryName" name="Category3" />
    </entity>

</entity>
</document>
</dataConfig>

The import works really quickly, but there are no Categories e.g. Category1,
Category2 etc. in the imported documents. Any clues on how to debug this
problem? 

I should mention that I don’t change my schema.xml or any other file in the
config. All I do is switch between the first db-data-config.xml – where I
get the Categories as part of the document, and the second, where I do not.
I went back and re-verified this result. 

Thank you all for your help. 
O. O.
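Conceptually, the cacheKey/cacheLookup pair that the cached processor needs turns the per-row child queries into a single-pass hash join: the child query runs once, its rows are cached in a map keyed on the cacheKey column, and each parent row's cacheLookup value becomes a map lookup instead of a SQL round trip. A rough sketch of that behavior, with table data made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CachedJoinSketch {
    public static void main(String[] args) {
        // Child query result: one pass over CAT_TABLE rows (SKU, CategoryName)
        String[][] catRows = { {"A1", "Books"}, {"A2", "Music"}, {"A1", "Fiction"} };

        // Build the cache keyed on the cacheKey column ("SKU")
        Map<String, List<String>> cache = new HashMap<>();
        for (String[] row : catRows) {
            cache.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }

        // For each parent row, cacheLookup="Product.SKU" becomes a map lookup
        // instead of a per-SKU SQL query
        System.out.println(cache.get("A1"));  // [Books, Fiction]
        System.out.println(cache.get("A2"));  // [Music]
    }
}
```

This is also why the child query must select the join column itself: without SKU in the result set there is nothing to key the cache on, and the lookup returns nothing.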




--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-do-I-use-CachedSqlEntityProcessor-tp4064919.html


RE: How do I use CachedSqlEntityProcessor?

2013-05-21 Thread O. Olson
Thank you James  bbarani. 

This worked in the sense that there was no error or exception in the data
import. Unfortunately, I do not see any of my Category1, Category2 etc. when
I retrieve the documents. If I use the first configuration of the
db-data-config.xml posted in my original post, I see these fields in each
document. Doing an import with your suggestion of  

<entity name="Cat1"
        query="SELECT CategoryName from CAT_TABLE WHERE
               CategoryLevel=1"
        cacheKey="SKU" cacheLookup="Product.SKU"
        processor="CachedSqlEntityProcessor">
    <field column="CategoryName" name="Category1" />
</entity>

I do not see Category1. 

I have not changed my schema.xml, so I don’t think this should affect the
results. For e.g. Category1 is declared as: 

<field name="Category1" type="string" indexed="true" stored="true"
multiValued="true"/>

I am curious to what I am doing wrong. I should mention that I am using Solr
4.0.0. I know a more recent version is out – but I don’t think it should
make a difference.
Thank you again for your help.
O. O.





Dyer, James-2 wrote
 First remove the where condition from the child entities, then use the
 cacheKey and cacheLookup parameters to instruct DIH how to do the
 join.
 
 Example:
 <entity
     name="Cat1"
     cacheKey="SKU"
     cacheLookup="Product.SKU"
     query="SELECT CategoryName from CAT_TABLE where CategoryLevel=1"
 />
 See http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor
 , particularly the 3rd configuration option.
 
 James Dyer
 Ingram Content Group
 (615) 213-4311







RE: Speed up import of Hierarchical Data

2013-05-17 Thread O. Olson
Thank you James. I think I got this to work using CachedSqlEntityProcessor –
and it seems extremely fast. I will try SortedMapBackedCache on Monday :-). 
Thank you,
O. O.



Dyer, James-2 wrote
 Using SqlEntityProcessor with cacheImpl="SortedMapBackedCache" is the same
 as specifying CachedSqlEntityProcessor.  Because the pluggable caches
 are only partially committed, I never added details to the wiki, so it
 still refers to CachedSEP.  But it's the same thing.
 
 What is new here, though, is that you don't have to use
 SortedMapBackedCache (this is an in-memory cache and can only scale to
 what fits in heap).  You can use an alternate cache (but none are included
 in the Solr distribution).  Also, you can cache data that doesn't come
 from SQL.  So it's more flexible this way than the older CachedSEP.
 
 Here's the wiki link with an example: 
 http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor 
 
 James Dyer
 Ingram Content Group
 (615) 213-4311







Speed up import of Hierarchical Data

2013-05-16 Thread O. Olson
I am using the DataImportHandler to query a SQL Server database and
populate Solr. Unfortunately, SQL does not have a native understanding of
hierarchical relationships, and hence I use table joins. The following is
an outline of my table structure: 


PROD_TABLE
- SKU (Primary Key)
- Title  (varchar)
- Descr (varchar)

CAT_TABLE
- SKU (Foreign Key)
-  CategoryLevel (int i.e. 1, 2, 3 …)
- CategoryName  (varchar)

I specify the SQL Query in the db-data-config.xml file – a snippet of which
looks like: 

<dataConfig>
    <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        url="jdbc:sqlserver://localhost\"/>
    <document>
        <entity name="Product"
                query="SELECT SKU, Title, Descr FROM PROD_TABLE">
            <field column="SKU" name="SKU" />
            <field column="Title" name="Title" />
            <field column="Descr" name="Descr" />

            <entity name="Cat1"
                    query="SELECT CategoryName from CAT_TABLE where
                           SKU='${Product.SKU}' AND CategoryLevel=1">
                <field column="CategoryName" name="Category1" />
            </entity>
            <entity name="Cat2"
                    query="SELECT CategoryName from CAT_TABLE where
                           SKU='${Product.SKU}' AND CategoryLevel=2">
                <field column="CategoryName" name="Category2" />
            </entity>
            <entity name="Cat3"
                    query="SELECT CategoryName from CAT_TABLE where
                           SKU='${Product.SKU}' AND CategoryLevel=3">
                <field column="CategoryName" name="Category3" />
            </entity>
        </entity>
    </document>
</dataConfig>

It seems like the DataImportHandler sends out three or four queries for
each product. This results in a very slow import. Is there any way to
speed this up? I would not mind an intermediate step of first extracting
the data from SQL and then putting it into Solr.
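The repeated child queries can be avoided by reading CAT_TABLE once and joining in memory, which is essentially what the caching entity processors discussed in this thread do. A rough sketch of the idea in plain Java (the class name and sample rows are hypothetical, not actual DIH code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CategoryCacheSketch {
    public static void main(String[] args) {
        // Simulated rows of CAT_TABLE: {SKU, CategoryLevel, CategoryName}
        String[][] catTable = {
            {"SKU1", "1", "Hardware"},
            {"SKU1", "2", "Printers"},
            {"SKU2", "1", "Hardware"},
            {"SKU2", "2", "Fax Machines"},
        };

        // One pass over CAT_TABLE builds SKU -> (level -> names),
        // replacing one SQL query per product per level.
        Map<String, Map<Integer, List<String>>> cache = new HashMap<>();
        for (String[] row : catTable) {
            cache.computeIfAbsent(row[0], k -> new HashMap<>())
                 .computeIfAbsent(Integer.valueOf(row[1]), k -> new ArrayList<>())
                 .add(row[2]);
        }

        // Per-product lookups are now in-memory, with no SQL round trip.
        System.out.println(cache.get("SKU1").get(2)); // prints [Printers]
        System.out.println(cache.get("SKU2").get(1)); // prints [Hardware]
    }
}
```

With the table cached this way, each product's categories become a map lookup instead of three SQL round trips.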

Thank you for all your help. 
O. O.






Re: Speed up import of Hierarchical Data

2013-05-16 Thread O. Olson
Thank you Stefan. I am new to Solr and I would need to read up more on
CachedSqlEntityProcessor. Do you have any clue where to begin? There do not
seem to be any tutorials online.

The link you provided seems to have a very short and unclear explanation.
After “Example 1” you have “The usage is exactly same as the other one.”
What does “other one” refer to? I did not understand the description
completely.

This description seems to say that if a query is the same as a prior query
it would be fetched from the cache. In my case each of the Category queries
is unique, because each has a unique SKU and CategoryLevel. Would
CachedSqlEntityProcessor then help me?

Thank you,
O. O.



Stefan Matheis-2 wrote
 That sounds like a perfect match for
 http://wiki.apache.org/solr/DataImportHandler#CachedSqlEntityProcessor :)







RE: Speed up import of Hierarchical Data

2013-05-16 Thread O. Olson
Thank you James. Are there any examples of SortedMapBackedCache? I am new to
Solr and I do not find many tutorials in this regard. I just modified the
examples and they worked for me.  What is a good way to learn these basics?
O. O.



Dyer, James-2 wrote
 See https://issues.apache.org/jira/browse/SOLR-2943 .  You can set up 2
 DIH handlers.  The first would query the CAT_TABLE and save it to a
 disk-backed cache, using DIHCacheWriter.  You then would replace your 3
 child entities in the 2nd DIH handler to use DIHCacheProcessor to read
 back the cached data.  This is a little complicated to do, but it would
 let you just cache the data once and because it is disk-backed, will scale
 to whatever size the CAT_TABLE is.  (For some details, see this thread:
 http://lucene.472066.n3.nabble.com/DIH-nested-entities-don-t-work-tt4015514.html)
 
 A simpler method is simply to specify cacheImpl=SortedMapBackedCache on
 the 3 child entities.  (This is the same as using
 CachedSqlEntityProcessor.)  It would generate 3 in-memory caches, each
 with the same data.  If CAT_TABLE is small, this would be adequate.  
 
 In between would be to create a disk-backed cache implementation (or use
 the ones at SOLR-2613 or SOLR-2948) and specify it on cacheImpl.  It would
 still create 3 identical caches, but they would be disk-backed and could
 scale beyond what in-memory can handle.
 
 James Dyer
 Ingram Content Group
 (615) 213-4311







Solr Data Config Queries per Field

2013-01-29 Thread O. Olson
Hi,

I am new to Solr, and I am using the DataImportHandler to query a SQL
Server database and populate Solr. I specify the SQL queries in the
db-data-config.xml file. Each SQL query seems to be associated with an
entity. Is it possible to have a query per field? I think it would be
easier to explain this using an example: 

I have products that are classified in a hierarchy of Categories. A single
product can be in multiple Categories. I want to provide the user the
ability to drill down i.e. first select the top level category Category1,
next select the next level category Category2 etc. Since a single product
can be in multiple Categories, all of these i.e. Category1, Category2,
Category3 etc. are multi-valued.


SQL Database Schema:

Table: Prod_Table
Column 1: SKU  - ID/Primary Key
Column 2: Title 

Table: Cat_Table
Column 1: SKU - Foreign Key
Column 2: CategoryLevel
Column 3: CategoryName

Where CategoryLevel is 1, I would like to save the value to Category1 field,
where CategoryLevel is 2, I would like to save this to the Category2 field
etc. My db-data-config.xml looks like:

<dataConfig>
    <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        url="jdbc:sqlserver://localhost…"/>
    <document>
        <entity name="Product"
                query="SELECT SKU, Title FROM PROD_TABLE">
            <field column="SKU" name="SKU" />
            <field column="Title" name="Title" />

            <entity name="Categories"
                    query="SELECT CategoryName from CAT_TABLE where
                           SKU='${Product.SKU}' AND CategoryLevel=1">
                <field column="Category1" name="Category1" />
                <!-- Query: SELECT CategoryName from CAT_TABLE where
                     SKU='${Product.SKU}' AND CategoryLevel=2 -->
                <field column="Category2" name="Category2" />
                <!-- Query: SELECT CategoryName from CAT_TABLE where
                     SKU='${Product.SKU}' AND CategoryLevel=3 -->
                <field column="Category3" name="Category3" />
            </entity>
        </entity>
    </document>
</dataConfig>

How do I populate Category2 and Category3??

Thank you for all your help.
O. O.






Re: Solr Data Config Queries per Field

2013-01-29 Thread O. Olson
Gora Mohanty-3 wrote
 On 29 January 2013 22:42, O. Olson <olson_ord@> wrote:
 [...]
 SQL Database Schema:

 Table: Prod_Table
 Column 1: SKU  - ID/Primary Key
 Column 2: Title

 Table: Cat_Table
 Column 1: SKU - Foreign Key
 Column 2: CategoryLevel
 Column 3: CategoryName

 Where CategoryLevel is 1, I would like to save the value to Category1
 field,
 where CategoryLevel is 2, I would like to save this to the Category2
 field
 etc.
 [...]
 
 It is not very clear from your description, nor from your example,
 what you want saved to the Category1, Category2,... fields, and
 how you expect your user searches to function. You seem to imply
 that the categories are hierarchical, but there is no relationship in
 the database to define this hierarchy.
 
 For a given product SKU, do you want the multi-valued Category1
 field to contain all CategoryName values from Cat_Table that have
 CategoryLevel = 1 and SKU matching the product SKU, and so on
 for the other categories? If so, this should do it:
 <entity name="Product" query="SELECT SKU, Title FROM PROD_TABLE">
   <field column="SKU" name="SKU" />
   <field column="Title" name="Title" />
   <entity name="Cat1" query="SELECT CategoryName from CAT_TABLE where
           SKU='${Product.SKU}' AND CategoryLevel=1">
     <field column="CategoryName" name="Category1" />
   </entity>
   <entity name="Cat2" query="SELECT CategoryName from CAT_TABLE where
           SKU='${Product.SKU}' AND CategoryLevel=2">
     <field column="CategoryName" name="Category2" />
   </entity>
   <entity name="Cat3" query="SELECT CategoryName from CAT_TABLE where
           SKU='${Product.SKU}' AND CategoryLevel=3">
     <field column="CategoryName" name="Category3" />
   </entity>
 </entity>
 Regards,
 Gora

Thank you. Good call Gora; I forgot to mention the query. I am trying to
query something like the following URL in the example:
http://localhost:8983/solr/db/select

?q=query&facet=true&facet.field=Category1

I expect the above query to give me the counts for the products that satisfy
the query in Category1. For example given my query I get: Hardware (21),
Software (3), Office Supplies (10). These are Category1 values.  Lets then
say a user selects Hardware. I think I would do something like: 


?q=query&facet=true&fq=Category1:Hardware&facet.field=Category2

I assume this would give me the list of Category2 values, e.g. Printers
(7), Fax Machines (11), LCD Monitors (3) (7 + 11 + 3 = 21). 

You suggest I create separate entities for each category level. Would this
affect my schema, i.e. would the above queries still work?
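The drill-down flow described above (facet on Category1, then filter by the selection and facet on Category2) can be illustrated by building the request URLs in plain Java. The host, core name, and parameter values are just the placeholders used in this thread:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

class FacetDrilldownSketch {
    // Builds a Solr select URL with faceting and optional filter queries.
    static String buildUrl(String q, String facetField, String... fqs)
            throws UnsupportedEncodingException {
        StringBuilder url = new StringBuilder("http://localhost:8983/solr/db/select");
        url.append("?q=").append(URLEncoder.encode(q, "UTF-8"));
        url.append("&facet=true");
        for (String fq : fqs) {
            url.append("&fq=").append(URLEncoder.encode(fq, "UTF-8"));
        }
        url.append("&facet.field=").append(URLEncoder.encode(facetField, "UTF-8"));
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        // Step 1: top-level counts, e.g. Hardware (21), Software (3), ...
        System.out.println(buildUrl("*:*", "Category1"));
        // Step 2: the user picked "Hardware"; filter on it and facet the next level.
        System.out.println(buildUrl("*:*", "Category2", "Category1:Hardware"));
    }
}
```

Note that URLEncoder escapes the ":" in the fq value to %3A, which Solr decodes back before parsing the filter query.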

Thanks again Gora,
O. O.









Re: Solr Data Config Queries per Field

2013-01-29 Thread O. Olson
Gora Mohanty-3 wrote
 Yes, things should function as you describe, and no you should not
 need any change in your schema from changing the DIH configuration
 file. Please take a look at
 http://wiki.apache.org/solr/SolrFacetingOverview#Facet_Indexing for
 how best to define faceting fields. Also, see this tutorial on faceted
 search with Solr:
 http://searchhub.org/2009/09/02/faceted-search-with-solr/
 
 Regards,
 Gora

Thank you Gora. I implemented it the way you suggested, and it worked
perfectly!
O. O.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Data-Config-Queries-per-Field-tp4037092p4037189.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr Faceting with Name Values

2013-01-29 Thread O. Olson
Thank you Robi for the information. I will be looking into this,
especially the implementation. Having to join the names together and then
split them later is something I have to discuss with my team. 

O. O.



Petersen, Robert wrote
 Hi O.O
 
 1.  Yes faceting on field function_s would return all the facet values in
 the search results with their counts.
 2.  You would probably have to join the names together with a special
 character and then split them later in the UI.  
 3.  I'm sure there is a way to query the index for all defined fields. 
 The admin schema browser page does this exact thing.
 
 Resources for further exploration:
 http://wiki.apache.org/solr/SolrFacetingOverview
 http://wiki.apache.org/solr/SimpleFacetParameters
 http://searchhub.org/2009/09/02/faceted-search-with-solr/
 http://wiki.apache.org/solr/HierarchicalFaceting
 http://lucidworks.lucidimagination.com/display/solr/Faceting
 
 Have fun!
 Robi







Solr Faceting with Name Values

2013-01-28 Thread O. Olson
Hi,

We are looking at putting our Product Catalog into Solr. Our Product
Catalog involves a Product, and a number of [Name, Value] pairs – which
represent attributes of a particular product. The attribute names are
standard along a certain Product Category, but they are too numerous to put
into the schema. I would like to add faceting queries on these attributes. 

For e.g. 

Product 1: 
Name: Canon Scanner
Category: Office Machines
Attribute 1 Name: Function
Attribute 1 Value: Scanner
Attribute 2 Name: PC Connection
Attribute 2 Value: USB
Attribute 3 Name: Scan Speed (ppm)
Attribute 3 Value: 2

Product 2: 
Name: HP Printer
Category: Office Machines
Attribute 1 Name: Function
Attribute 1 Value: Printer
Attribute 2 Name: PC Connection
Attribute 2 Value: LAN
Attribute 3 Name: Print Speed (ppm)
Attribute 3 Value: 35

I would like to know if there would be an easy way to retrieve the Facet
Counts related to “PC Connection”. I think this should give me the counts
for LAN, USB, Wi-Fi etc. for the way products connect to a PC. 

If I would put “PC Connection” into a separate field in the schema in Solr,
I can append something like the following to the end of my query:

facet=true&facet.field=PC+Connection

However, there are too many attribute names like “PC Connection”. Is there
any way to get the facet counts without putting “PC Connection” into a
separate field? How should I structure my schema to get these results?


Thank you all for your help.
O. O.






RE: Solr Faceting with Name Values

2013-01-28 Thread O. Olson
Thank you Robi. Your idea seems good but I have a few questions: 

1.  From your description, I would create a field “Function_s” with the
value “Scanner” and “Function_s” with the value “Printer” for my two
products. This seems good. Is it possible for you to give me a query for
this dynamic field? For example, could I do something like: 

facet=true&facet.field=Function_s

I would like this to tell me how many of the products are Scanners and how
many of the products are Printers.

2.  Many of my Attribute Names have spaces e.g. “PC Connection”, or even
brackets and slashes e.g. “Scan Speed (ppm)”. Would there be a problem
putting these in a dynamic field name?

3.  Is it possible to query for the possible list of dynamic fieldnames? I
might need this when creating a list of attributes.
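Regarding question 2: dynamic field names generally should not contain spaces or punctuation, so attribute names would need to be normalized before being used as field names. A hedged sketch of one possible convention follows; the underscore scheme is my own assumption, not something Solr prescribes, and any mapping that yields names matching the *_s pattern would work:

```java
class FieldNameSketch {
    // Collapses runs of non-alphanumeric characters to "_" and appends
    // the "_s" suffix so the result matches a *_s dynamicField pattern.
    static String toDynamicFieldName(String attributeName) {
        String cleaned = attributeName
                .replaceAll("[^A-Za-z0-9]+", "_")  // spaces, (), / become _
                .replaceAll("^_+|_+$", "");        // trim leading/trailing _
        return cleaned + "_s";
    }

    public static void main(String[] args) {
        System.out.println(toDynamicFieldName("PC Connection"));    // PC_Connection_s
        System.out.println(toDynamicFieldName("Scan Speed (ppm)")); // Scan_Speed_ppm_s
    }
}
```

The same mapping would have to be applied at query time so that facet.field names match what was indexed.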


Thanks again Robi.
O. O.

--

Petersen, Robert wrote
 Hi O.O.,
 
 You don't need to add them all into the schema.  You can use wildcard
 fields like
 <dynamicField name="*_s" type="string" indexed="true" stored="true" />
 to hold them.  You can then have the attribute name be part of the
 wildcard and the attribute value be the field contents. So you could have
 fields like Function_s:Scanner etc., and then you could ask for facets
 that are relevant based upon query or category.
 
 That would be a much more straightforward approach and much easier to
 facet on.  Hope that helps a little bit.
 
 -Robi







Re: Solr SQL Express Integrated Security - Unable to execute query

2013-01-24 Thread O. Olson
Shawn Heisey-4 wrote
 There will be a lot more detail to this error.  This detail may have a 
 clue about what happened.  Can you include the entire stacktrace?
 
 Thanks,
Shawn

Thank you Shawn. The following is the entire stacktrace. I hope this helps:


INFO: Creating a connection for entity Product with URL:
jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;databaseName=Amazon;integratedSecurity=true;
Jan 23, 2013 3:26:05 PM org.apache.solr.core.SolrCore execute
INFO: [db] webapp=/solr path=/dataimport params={command=status} status=0
QTime=1 
Jan 23, 2013 3:26:31 PM org.apache.solr.common.SolrException log
SEVERE: Exception while processing: Product document :
SolrInputDocument[]:org.apache.solr.handler.dataimport.DataImportHandlerException:
Unable to execute query: SELECT [ProdID],[Descr] FROM
[Amazon].[dbo].[Table_Temp] Processing Document # 1
at
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.init(JdbcDataSource.java:252)
at
org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:209)
at
org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:38)
at
org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
at
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:472)
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:411)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:326)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:234)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:382)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:448)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:429)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The server
SQLEXPRESS is not configured to listen with TCP/IP.
at
com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:171)
at
com.microsoft.sqlserver.jdbc.SQLServerConnection.getInstancePort(SQLServerConnection.java:3188)
at
com.microsoft.sqlserver.jdbc.SQLServerConnection.primaryPermissionCheck(SQLServerConnection.java:937)
at
com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:800)
at
com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:700)
at
com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:842)
at
org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:160)
at
org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:127)
at
org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:362)
at
org.apache.solr.handler.dataimport.JdbcDataSource.access$200(JdbcDataSource.java:38)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.init(JdbcDataSource.java:239)
... 12 more

Jan 23, 2013 3:26:31 PM org.apache.solr.update.processor.LogUpdateProcessor
finish
INFO: [db] webapp=/solr path=/dataimport params={command=full-import}
status=0 QTime=13 {deleteByQuery=*:*} 0 13
Jan 23, 2013 3:26:31 PM org.apache.solr.common.SolrException log
SEVERE: Full Import failed:java.lang.RuntimeException:
java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to
execute query: SELECT [ProdID],[Descr] FROM [Amazon].[dbo].[Table_Temp]
Processing Document # 1
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:273)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:382)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:448)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:429)
Caused by: java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to
execute query: SELECT [ProdID],[Descr] FROM [Amazon].[dbo].[Table_Temp]
Processing Document # 1
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:413)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:326)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:234)
... 3 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException:
Unable to execute query: SELECT [ProdID],[Descr] FROM
[Amazon].[dbo].[Table_Temp] 

Re: Solr SQL Express Integrated Security - Unable to execute query

2013-01-24 Thread O. Olson
Michael Della Bitta-2 wrote
 On Thu, Jan 24, 2013 at 11:34 AM, O. Olson <olson_ord@> wrote:

 Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The server
 SQLEXPRESS is not configured to listen with TCP/IP.
 
 
 That's probably your problem...
 
 
 Michael Della Bitta
 
 
 Appinions
 18 East 41st Street, 2nd Floor
 New York, NY 10017-6271
 
 www.appinions.com
 
 Where Influence Isn’t a Game


Good call Michael. I did have to enable TCP/IP
(http://msdn.microsoft.com/en-us/library/hh231672.aspx for others who have
the same problem), but I still did not get this to work.

I then tested my driver, JDBC URL, and SQL query in a plain old Java class.
This showed me that it was almost impossible to get integrated
authentication to work in Java. I finally went with specifying the username
and password literally. (I hope this is useful to others):


public static void main(String[] args) throws Exception {
    String url =
        "jdbc:sqlserver://localhost\\SQLEXPRESS;database=Amazon;user=solrusr;password=solrusr;";
    String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
    Connection connection = null;
    try {
        System.out.println("Loading driver...");
        Class.forName(driver);
        System.out.println("Driver loaded! Attempting Connection ...");
        connection = DriverManager.getConnection(url);
        System.out.println("Connection succeeded!");
        ResultSet rs = connection.createStatement().executeQuery(
                "SELECT ProdID, Descr FROM Table_Temp");
        try {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
        } finally {
            rs.close();
        }
        // Success.
    } catch (SQLException e) {
        e.printStackTrace(); // don't swallow the exception silently
    } finally {
        if (connection != null) try { connection.close(); } catch
            (SQLException ignore) {}
    }
}

Hence, I modified my db-data-config.xml to:

<dataConfig>
    <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        url="jdbc:sqlserver://localhost\SQLEXPRESS;databaseName=Amazon;user=solrusr;password=solrusr;"/>
    <document>
        <entity name="Product"
                query="SELECT ProdID, Descr FROM Table_Temp">
            <field column="ProdID" name="ProdID" />
            <field column="Descr" name="Descr" />
        </entity>
    </document>
</dataConfig>

This worked for me.

Thanks again Michael & Shawn.
O. O.












Solr SQL Express Integrated Security - Unable to execute query

2013-01-23 Thread O. Olson
Hi,

I am using the /example-DIH in the Solr 4.0 download. The example worked
out of the box using the HSQLDB. I then attempted to modify the files to
connect to a SQL Express instance running on my local machine. A
http://localhost:8983/solr/db/dataimport?command=full-import results in 

org.apache.solr.common.SolrException log
SEVERE: Full Import failed:java.lang.RuntimeException:
java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to
execute query: SELECT [ProdID],[Descr] FROM [Amazon].[dbo].[Table_Temp]
Processing Document # 1
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:273) …

I first copied sqljdbc4.jar (from Microsoft)  to
/example/example-DIH/solr/db/lib. I have the following db-data-config.xml:

<dataConfig>
    <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        url="jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;databaseName=Amazon;integratedSecurity=true;"/>
    <document>
        <entity name="Product"
                query="SELECT [ProdID],[Descr] FROM [Amazon].[dbo].[Table_Temp]">
            <field column="ProdID" name="ProdID" />
            <field column="Descr" name="Descr" />
        </entity>
    </document>
</dataConfig>

I have adjusted my schema.xml file accordingly.

Is there any way I can debug this problem? I want to use integrated
security/authentication; am I doing this correctly?

Thank you for all the help.
O. O.






Re: Delete all Documents in the Example (Solr 4.0)

2013-01-22 Thread O. Olson
Thank you Erick for that great tip on getting a listing of the Cores. 
O. O.





Enable Logging in the Example App

2013-01-21 Thread O. Olson
Hi,

I am really new to Solr, and I have never used anything similar to it
before, so please pardon my ignorance. I downloaded Solr 4.0 from
http://lucene.apache.org/solr/downloads.html and started it using the
command line:

java -jar start.jar

This generates a number of INFO log messages to the console that I would
like to view more conveniently.

What is the best way to send these log messages to a file? I see a logs
directory, but it seems to be empty. I first tried adding log4j.properties
to the “etc” directory as mentioned in http://wiki.apache.org/solr/SolrLogging.
I then started Solr on the command line:

java -jar start.jar -Dlog4j.configuration=file:etc/log4j.properties

This does not give me any log files. I would appreciate any ideas in this
regard, i.e. the easiest way to get logging working in the example app.

Thank you,
O. O.


Delete all Documents in the Example (Solr 4.0)

2013-01-21 Thread O. Olson
Hi,

I am attempting to use the example-DIH that comes with the Solr 4.0
download. In /example, I start Solr using:

java -Dsolr.solr.home=./example-DIH/solr/ -jar start.jar

After playing with it for a while, I decided to delete all documents in
the index. The FAQ at
http://wiki.apache.org/solr/FAQ#How_can_I_delete_all_documents_from_my_index.3F
seems to say that I need to use:

http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/update?stream.body=<commit/>

I put the above URLs in my browser, but I simply get 404s. I then tried:

http://localhost:8983/solr/update

and I got a 404 too. I then looked at
/example-DIH/solr/solr/conf/solrconfig.xml and it seems to have
<requestHandler name="/update" class="solr.UpdateRequestHandler" />.

I am confused why I am getting a 404 if /update has a handler.

Thank you for any ideas.
O. O.


Re: Delete all Documents in the Example (Solr 4.0)

2013-01-21 Thread O. Olson




- Original Message -
From: Shawn Heisey s...@elyograg.org
To: solr-user@lucene.apache.org
Cc: 
Sent: Monday, 21 January 2013 12:35
Subject: Re: Delete all Documents in the Example (Solr 4.0)

On 1/21/2013 11:27 AM, O. Olson wrote:
 http://localhost:8983/solr/update

 and I got a 404 too. I then looked at
 /example-DIH/solr/solr/conf/solrconfig.xml and it seems to have
 <requestHandler name="/update" class="solr.UpdateRequestHandler" />.

 I am confused why I am getting a 404 if /update has a handler?

You need to send the request to /solr/corename/update ... if you are using the 
solr example, most likely the core is named collection1 so the URL would be 
/solr/collection1/update.

There is a lot of information out there that has not been updated since before 
multicore operation became the default in Solr examples.

The example does have defaultCoreName defined, but I still see lots of people 
that run into problems like this, so I suspect that it isn't always honored.

Thanks,
Shawn
---

Thank you Shawn for the hint. Can someone tell me how to figure out the
core name?

http://localhost:8983/solr/collection1/update

did not seem to work for me. I then saw that /example/example-DIH/solr/db
had a conf and data directory, so I assumed it to be a core. I then tried

http://localhost:8983/solr/db/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/db/update?stream.body=<commit/>

which worked for me, i.e. the documents in the index got deleted.

Thanks again,
O. O.


Re: Enable Logging in the Example App

2013-01-21 Thread O. Olson
Thank you Ahmet. This worked perfectly.
O. O.


- Original Message -
From: Ahmet Arslan iori...@yahoo.com
To: solr-user@lucene.apache.org; O. Olson olson_...@yahoo.it
Cc: 
Sent: Monday, 21 January 2013 15:44
Subject: Re: Enable Logging in the Example App

Hi Olson,

java -Djava.util.logging.config.file=etc/logging.properties -jar start.jar
should do the trick. There is information about this in README.txt.


--- On Mon, 1/21/13, O. Olson olson_...@yahoo.it wrote:

 From: O. Olson olson_...@yahoo.it
 Subject: Enable Logging in the Example App
 To: solr-user@lucene.apache.org solr-user@lucene.apache.org
 Date: Monday, January 21, 2013, 6:02 PM
 Hi,
  
     I am really
 new to Solr, and I have never used anything similar to it
 before. So please
 pardon my ignorance. I downloaded  Solr
 4.0 from http://lucene.apache.org/solr/downloads.html and start
 it using the commandline: 
  
 java -jar start.jar  
  
 This generates a number of INFO log messages to the
 console,
 that I would like to better view. 
  
     What is the
 best way to send these log messages to a file? I see a logs
 directory, but it
 seems to be empty. I first tried to add the log4j.properties
 in the “etc”
 directory as mentioned in http://wiki.apache.org/solr/SolrLogging.
 I then started solr on the commandline: 
  
 java -jar start.jar
 -Dlog4j.configuration=file:etc/log4j.properties
  
 This does not give me any log files. I would appreciate any
 ideas in this regard i.e. the easiest way to get logging
 into the example app.
  
 Thank you,
 O. O.




Question on Solr Velocity Example

2013-01-18 Thread O. Olson
Hi,

I am new to Solr (and Velocity), and have downloaded Solr 4.0 from
http://lucene.apache.org/solr/downloads.html. I started the example Solr
and indexed the XML files in the /exampledocs directory. Next, I pointed
the browser to http://localhost:8983/solr/browse and I get the results
along with the search and faceted-search functionality. I am interested
in learning how this example works. I hope some of you can help me with
the following questions:

1.  In this example, we seem to be using the Velocity templates in
/example/solr/collection1/conf/velocity. The overall page at
http://localhost:8983/solr/browse seems to be generated from browse.vm,
which seems to include (parse) other templates. My question here is that
I see things like $response.response.clusters. Where can I find out what
properties the "response" object or the "clusters" object has? Also,
there seem to be some methods like display_facet_query(); where are
these defined? Is there some documentation for this, or some way I can
find this out? I might need to modify these values, hence my question.
(I am completely new to Velocity, but I think I get some idea by looking
at the templates.)

2.  On the http://localhost:8983/solr/browse page, we have a list of
Query Facets. Right now I just see two: ipod and GB. How are these
values obtained? Do they come from elevate.xml? Here I see ipod, but
not GB.

I would appreciate any help on these questions. If the above description
is not clear, please let me know.

Thank you,
O. O.


Re: Question on Solr Velocity Example

2013-01-18 Thread O. Olson




- Original Message -
From: Erik Hatcher erik.hatc...@gmail.com
To: solr-user@lucene.apache.org; O. Olson olson_...@yahoo.it
Cc: 
Sent: Friday, 18 January 2013 15:20
Subject: Re: Question on Solr Velocity Example


Great question.  $response is as described here 
http://wiki.apache.org/solr/VelocityResponseWriter#Velocity_Context

You can navigate Solr's javadocs (or via IDE and the source code as I do) to 
trace what that object returns and then introspect as you drill in.

I often just add '.class' to something in a template to have it output what 
kind of Java object it is, and work from there, such as 
$response.clusters.class 


-

Thank you Erik. On the page
http://wiki.apache.org/solr/VelocityResponseWriter#Velocity_Context, if
you click on QueryResponse you get a 404, i.e. the link to
http://lucene.apache.org/solr/4_0_0/solr-core/org/apache/solr/client/solrj/response/QueryResponse.html
is broken.
 
Thank you for throwing light on my other questions. Your
responses helped.
 
Thank you,
O. O.