[ANNOUNCE] Apache Solr 3.3

2011-07-01 Thread Robert Muir
July 2011, Apache Solr™ 3.3 available
The Lucene PMC is pleased to announce the release of Apache Solr 3.3.

Solr is the popular, blazing fast open source enterprise search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search, dynamic clustering, database
integration, rich document (e.g., Word, PDF) handling, and geospatial search.
Solr is highly scalable, providing distributed search and index replication,
and it powers the search and navigation features of many of the world's
largest internet sites.

This release contains numerous bug fixes, optimizations, and
improvements, some of which are highlighted below.  The release
is available for immediate download at:
   http://www.apache.org/dyn/closer.cgi/lucene/solr (see note below).

See the CHANGES.txt file included with the release for a full list of
details as well as instructions on upgrading.

Solr 3.3 Release Highlights

 * Grouping / Field Collapsing

 * A new, automaton-based suggest/autocomplete implementation consuming an
   order of magnitude less RAM.

 * KStemFilterFactory, an optimized implementation of a less aggressive
   stemmer for English.

 * Solr defaults to a new, more efficient merge policy (TieredMergePolicy).
   See http://s.apache.org/merging for more information.

 * Important bug fixes, including a fix for extremely high RAM usage in
   spellchecking.

 * Bug fixes and improvements from Apache Lucene 3.3.

Note: The Apache Software Foundation uses an extensive mirroring network for
distributing releases.  It is possible that the mirror you are using may not
have replicated the release yet.  If that is the case, please try another
mirror.  This also goes for Maven access.

Thanks,
Apache Solr Developers


Re: How to optimize solr indexes

2011-07-01 Thread Romi
When I run deltaimport?command=delta-import&optimize=false,

I am still getting optimize=true when I look at the admin console, as shown
in my original post.

-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-optimize-solr-indexes-tp3125293p3128424.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: MergerFacor effect on indexes

2011-07-01 Thread Romi
To see the changes, I am deleting my old indexes and recreating them, but I
still get the same result :(

-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/MergerFacor-effect-on-indexes-tp3125146p3128432.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Clustering For Multiple Pages

2011-07-01 Thread nilay....@gmail.com
Hi,

I am asking about filtering after clustering. Faceting is based on a single
field, so if we need to filter we can search on the related field. But a
cluster is created from multiple fields, so how can we create a filter for
that?

Example: after clustering you get the following:

Model (20)
System (15)
Other Topics (5)

If I click on Model, I should get the records associated with Model.

Regards
Nilay Tiwari

-
Regards
Nilay Tiwari
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Clustering-For-Multiple-Pages-tp3085507p3128493.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Clustering For Multiple Pages

2011-07-01 Thread Stanislaw Osinski

 I am asking about the  filter after clustering . Faceting   is based on the
 single field so,if we need to filter we can search in related field .  But
 in clustering it is created by multiple field  then how can we create a
 filter for that.

 Example

 after clustering you get the following

 Model(20)
 System(15)
 Other Topics(5)

 if i will click on Model then i should get  record associated with Model


I'm not sure what you mean by "filter" -- the ids of documents belonging to
each cluster are part of the response: see the docs array inside each
cluster (http://wiki.apache.org/solr/ClusteringComponent#Quick_Start has
example output). When the user clicks a cluster, you just need to show the
documents with the ids listed inside that cluster.

Cheers,

Staszek
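In client code, that "show the documents in the clicked cluster" step can be sketched as a filter query built from the cluster's doc ids. A hedged sketch; the id field name and cluster shape follow the ClusteringComponent quick-start output and may differ in your schema:

```python
# Turn the ids of a clicked cluster into a Solr fq value restricting
# results to that cluster's documents.

def cluster_to_fq(cluster, id_field="id"):
    """Build an fq value selecting only the documents in one cluster."""
    ids = " OR ".join(str(d) for d in cluster["docs"])
    return f"{id_field}:({ids})"

clicked = {"labels": ["Model"], "docs": [7799, 7801]}
print(cluster_to_fq(clicked))  # id:(7799 OR 7801)
```

The resulting string can be passed as the fq parameter of a normal select request.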


Re: Multicore clustering setup problem

2011-07-01 Thread Stanislaw Osinski
Hi Walter,

That makes sense, but this has always been a multi-core setup, so the paths
 have not changed, and the clustering component worked fine for core0. The
 only thing new is I have fine tuned core1 (to begin implementing it).
 Previously the solrconfig.xml file was very basic. I replaced it with
 core0's solrconfig.xml and made very minor changes to it (unrelated to
 clustering) - it's a nearly identical solrconfig.xml file so I'm surprised
 it doesn't work for core1.


I'd probably need to take a look at the whole Solr dir you're working with;
clearly there's something wrong with the classpath of core1.

Again, I'm wondering if perhaps since both cores have the clustering
 component, if it should have a shared configuration in a different file
 used
 by both cores(?). Perhaps the duplicate clusteringComponent configuration
 for both cores is the problem?


I'm not an expert on Solr's internals related to core management, but I once
did configure two cores with search results clustering, where clustering
configuration and libs were specified for each core separately, so this is
unlikely to be a problem. Another approach would be to put all the JARs
required for clustering in a common directory and point Solr to that lib
using the sharedLib attribute in the solr tag:
http://wiki.apache.org/solr/CoreAdmin#solr. But it really should work both
ways.

If you can somehow e-mail (off-list) the listing of your Solr directory and
contents of your configuration XMLs, I may be able to trace the problem for
you.

Cheers,

Staszek


Re: How to use solr clustering to show in search results

2011-07-01 Thread Stanislaw Osinski
The docs array contained in each cluster contains ids of documents
belonging to the cluster, so for each id you need to look up the document's
content, which comes earlier in the response (in the response/docs array).

Cheers,

Staszek

On Thu, Jun 30, 2011 at 11:50, Romi romijain3...@gmail.com wrote:

 I wanted to use clustering in my search results. I configured Solr for
 clustering and got the following JSON for clusters, but I am not sure how
 to use it to show search results: for each doc I have a number of fields,
 and until now I have been showing name, description and id. Now in clusters
 I have labels and doc ids, so how do I use my docs in clusters? I am really
 confused about what to do. Please reply.

 "clusters": [
   {
     "labels": ["Complement any Business Casual or Semi-formal Attire"],
     "docs": [7799, 7801]
   },
   {
     "labels": ["Design"],
     "docs": [8252, 7885]
   },
   {
     "labels": ["Elegant Ring has an Akoya Cultured Pearl"],
     "docs": [8142, 8139]
   },
   {
     "labels": ["Feel Amazing in these Scintillating Earrings Perfect"],
     "docs": [12250, 12254]
   },
   {
     "labels": ["Formal Evening Attire"],
     "docs": [8151, 8004]
   },
   {
     "labels": ["Pave Set"],
     "docs": [7788, 8169]
   },
   {
     "labels": ["Subtle Look or Layer it or Attach"],
     "docs": [8014, 8012]
   },
   {
     "labels": ["Three-stone Setting is Elegant and Fun"],
     "docs": [8335, 8337]
   },
   {
     "labels": ["Other Topics"],
     "docs": [8038, 7850, 7795, 7989, 7797]
   }
 ]


 -
 Thanks & Regards
 Romi
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/How-to-use-solr-clustering-to-show-in-search-results-tp3125149p3125149.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Uninstall Solr

2011-07-01 Thread gauravpareek2009
Hello Erik,
Thank you for your help.

I understand that we need to delete the folder, but how do I undeploy the
solr.war, and where can I find it? If anyone can send me a document on
uninstalling Solr, that would be great.

Regards,
Gaurav Pareek
--
Sent via Nokia Email

--Original message--
From: Erik Hatcher erik.hatc...@gmail.com
To: solr-user@lucene.apache.org
Date: Thursday, June 30, 2011 8:10:48 PM GMT-0400
Subject: Re: Uninstall Solr

How'd you install it?

Generally you just delete the directory where you installed it.  But you 
might be deploying solr.war in a container somewhere besides Solr's example 
Jetty setup, in which case you need to undeploy it from those other containers 
and remove the remnants.

Curious though... why uninstall it?  Solr makes a mighty fine hammer to have 
around :)

Erik

On Jun 30, 2011, at 19:49 , GAURAV PAREEK wrote:

 Hi All,
 
 How to *uninstall* Solr completely ?
 
 Any help will be appreciated.
 
 Regards,
 Gaurav




Re: Wildcard search not working if full word is queried

2011-07-01 Thread Celso Pinto
Hi François,

it is indeed being stemmed, thanks a lot for the heads up. It appears
that stemming is also configured for the query so it should work just
the same, no?

Thanks again.

Regards,
Celso


2011/6/30 François Schiettecatte fschietteca...@gmail.com:
 I would run that word through the analyzer, I suspect that the word 'teste' 
 is being stemmed to 'test' in the index, at least that is the first place I 
 would check.

 François

 On Jun 30, 2011, at 2:21 PM, Celso Pinto wrote:

 Hi everyone,

 I'm having some trouble figuring out why a query with an exact word
 followed by the * wildcard, eg. teste*, returns no results while a
 query for test* returns results that have the word teste in them.

 I've created a couple of pasties:

 Exact word with wildcard : http://pastebin.com/n9SMNsH0
 Similar word: http://pastebin.com/jQ56Ww6b

 Parameters other than title, description and content have no effect
 other than filtering out unwanted results. In two of the four
 results, the title has the complete word teste. In the other two,
 the word appears in the other fields.

 Does anyone have any insights about what I'm doing wrong?

 Thanks in advance.

 Regards,
 Celso




Re: Wildcard search not working if full word is queried

2011-07-01 Thread Celso Pinto
Hi again,

read (past tense) TFM :-) and:

On wildcard and fuzzy searches, no text analysis is performed on the
search word.

Thanks a lot François!

Regards,
Celso

On Fri, Jul 1, 2011 at 10:02 AM, Celso Pinto cpi...@yimports.com wrote:
 Hi François,

 it is indeed being stemmed, thanks a lot for the heads up. It appears
 that stemming is also configured for the query so it should work just
 the same, no?

 Thanks again.

 Regards,
 Celso


 2011/6/30 François Schiettecatte fschietteca...@gmail.com:
 I would run that word through the analyzer, I suspect that the word 'teste' 
 is being stemmed to 'test' in the index, at least that is the first place I 
 would check.

 François

 On Jun 30, 2011, at 2:21 PM, Celso Pinto wrote:

 Hi everyone,

 I'm having some trouble figuring out why a query with an exact word
 followed by the * wildcard, eg. teste*, returns no results while a
 query for test* returns results that have the word teste in them.

 I've created a couple of pasties:

 Exact word with wildcard : http://pastebin.com/n9SMNsH0
 Similar word: http://pastebin.com/jQ56Ww6b

 Parameters other than title, description and content have no effect
 other than filtering out unwanted results. In a two of the four
 results, the title has the complete word teste. On the other two,
 the word appears in the other fields.

 Does anyone have any insights about what I'm doing wrong?

 Thanks in advance.

 Regards,
 Celso





Problem in including both clustering component and spellchecker for solr search results at the same time

2011-07-01 Thread Romi
I want to include both clustering and spellchecker in my search results, but
I am able to include only one at a time: whichever one's requestHandler I
set default=true on. How can I include both clustering and spellchecker for
my results?


-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-in-including-both-clustering-component-and-spellchecker-for-solr-search-results-at-the-same-e-tp3128864p3128864.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Problem in including both clustering component and spellchecker for solr search results at the same time

2011-07-01 Thread Markus Jelsma
Use a custom request handler and define both components as in the example for 
these individual request handlers.

 I want to include both clustering and spellchecker in my search results.
 but at a time i am able to include only one. Only one, with which
 requestHandler i am setting default=true. than how can i include both
 clustering and spellchecker both for my results.
 
 
 -
 Thanks & Regards
 Romi
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Problem-in-including-both-clustering-co
 mponent-and-spellchecker-for-solr-search-results-at-the-same-e-tp3128864p31
 28864.html Sent from the Solr - User mailing list archive at Nabble.com.


Re: Problem in including both clustering component and spellchecker for solr search results at the same time

2011-07-01 Thread Romi
Would you please give me an example of a custom request handler?

-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-in-including-both-clustering-component-and-spellchecker-for-solr-search-results-at-the-same-e-tp3128864p3128893.html
Sent from the Solr - User mailing list archive at Nabble.com.


MergerFactor and MaxMergerDocs effecting num of segments created

2011-07-01 Thread Romi
My index files are these; I want to see the effect of mergeFactor and
maxMergeDocs on these indexes. How can I do it?

_0.fdt  3310 KB
_0.fdx  23 KB
_0.fnm  1 KB
_0.frq  857 KB
_0.nrm  31 KB
_0.prx  1748 KB
_0.tii  5 KB
_0.tis  350 KB

I mean, what test cases for mergeFactor and maxMergeDocs can I run to see
the effect on the index files? The current configuration is:

<mergeFactor>2</mergeFactor>
<maxMergeDocs>10</maxMergeDocs>



-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/MergerFactor-and-MaxMergerDocs-effecting-num-of-segments-created-tp3128897p3128897.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Problem in including both clustering component and spellchecker for solr search results at the same time

2011-07-01 Thread Markus Jelsma
This example loads two fictional components. Use spellcheck and clustering 
instead.

<requestHandler name="search" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters can be specified, these
       will be overridden by parameters in the request -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
  </lst>
  <!-- In addition to defaults, "appends" params can be specified
       to identify values which should be appended to the list of
       multi-val params from the query (or the existing "defaults"). -->
  <!-- In this example, the param fq=instock:true would be appended to
       any query time fq params the user may specify, as a mechanism for
       partitioning the index, independent of any user selected filtering
       that may also be desired (perhaps as a result of faceted searching).

       NOTE: there is *absolutely* nothing a client can do to prevent these
       "appends" values from being used, so don't use this mechanism
       unless you are sure you always want it. -->
  <!--
  <lst name="appends">
    <str name="fq">inStock:true</str>
  </lst>
  -->
  <!-- "invariants" are a way of letting the Solr maintainer lock down
       the options available to Solr clients. Any params values
       specified here are used regardless of what values may be specified
       in either the query, the "defaults", or the "appends" params.

       In this example, the facet.field and facet.query params would
       be fixed, limiting the facets clients can use. Faceting is
       not turned on by default - but if the client does specify
       facet=true in the request, these are the only facets they
       will be able to see counts for; regardless of what other
       facet.field or facet.query params they may specify.

       NOTE: there is *absolutely* nothing a client can do to prevent these
       "invariants" values from being used, so don't use this mechanism
       unless you are sure you always want it. -->
  <!--
  <lst name="invariants">
    <str name="facet.field">cat</str>
    <str name="facet.field">manu_exact</str>
    <str name="facet.query">price:[* TO 500]</str>
    <str name="facet.query">price:[500 TO *]</str>
  </lst>
  -->
  <!-- If the default list of SearchComponents is not desired, that
       list can either be overridden completely, or components can be
       prepended or appended to the default list. (see below) -->
  <!--
  <arr name="components">
    <str>nameOfCustomComponent1</str>
    <str>nameOfCustomComponent2</str>
  </arr>
  -->
</requestHandler>
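Following that pattern, here is a hedged sketch of a handler wiring in both components for this case. The component names (spellcheck, clustering) and their parameters are taken from the stock Solr example configs, not from this thread, so verify them against the component declarations in your own solrconfig.xml:

```xml
<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <!-- clustering defaults (assumes a ClusteringComponent named "clustering") -->
    <bool name="clustering">true</bool>
    <str name="clustering.engine">default</str>
    <!-- spellcheck defaults (assumes a SpellCheckComponent named "spellcheck") -->
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
  </lst>
  <!-- run both components after the standard query component -->
  <arr name="last-components">
    <str>clustering</str>
    <str>spellcheck</str>
  </arr>
</requestHandler>
```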

 would you please give me an example for custom request handler
 
 -
 Thanks & Regards
 Romi
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Problem-in-including-both-clustering-co
 mponent-and-spellchecker-for-solr-search-results-at-the-same-e-tp3128864p31
 28893.html Sent from the Solr - User mailing list archive at Nabble.com.


Re: TermVectors and custom queries

2011-07-01 Thread Michael Sokolov
I think that's all you can do, although there is a callback-style 
interface that might save some time (or space).  You still need to 
iterate over all of the vectors, at least until you get the one you want.


-Mike

On 6/30/2011 4:53 PM, Jamie Johnson wrote:

Perhaps a better question, is this possible?

On Mon, Jun 27, 2011 at 5:15 PM, Jamie Johnsonjej2...@gmail.com  wrote:

I have a field named content with the following definition:

<field name="content" type="text" indexed="true" stored="true"
       multiValued="true" termVectors="true" termPositions="true"
       termOffsets="true"/>

I'm now trying to execute a query against content and get back the term
vectors for the pieces that matched my query, but I must be messing
something up.  My query is as follows:

http://localhost:8983/solr/select/?qt=tvrh&q=content:test&fl=content&tv.all=true

where the word test is in my content field.  When I get information back
though I am getting the term vectors for all of the tokens in that field.
How do I get back just the ones that match my search?





Problem facing while querying for wild card queries

2011-07-01 Thread Romi
I am using Solr for indexing and searching in my application. I am facing a
strange problem when querying with wildcards: when I search for di?mo?d, I
get results for diamond, but when I search for diamo?? I get no results.
What could be the reason?

-
Thanks & Regards
Romi
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-facing-while-querying-for-wild-card-queries-tp3128983p3128983.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: TermVectors and custom queries

2011-07-01 Thread Jamie Johnson
How would I know which ones were the ones I wanted? I don't see how, from a
query, I could match up the term vectors that met the query. It seems like
what needs to be done is to do the highlighting on the Solr end, where you
have more access to the information I'm looking for. Sound about right?

On Fri, Jul 1, 2011 at 7:26 AM, Michael Sokolov soko...@ifactory.com wrote:
 I think that's all you can do, although there is a callback-style interface
 that might save some time (or space).  You still need to iterate over all of
 the vectors, at least until you get the one you want.

 -Mike

 On 6/30/2011 4:53 PM, Jamie Johnson wrote:

 Perhaps a better question, is this possible?

 On Mon, Jun 27, 2011 at 5:15 PM, Jamie Johnsonjej2...@gmail.com  wrote:

 I have a field named content with the following definition

     <field name="content" type="text" indexed="true" stored="true"
            multiValued="true" termVectors="true" termPositions="true"
            termOffsets="true"/>

 I'm now trying to execute a query against content and get back the term
 vectors for the pieces that matched my query, but I must be messing
 something up.  My query is as follows:


 http://localhost:8983/solr/select/?qt=tvrh&q=content:test&fl=content&tv.all=true

 where the word test is in my content field.  When I get information back
 though I am getting the term vectors for all of the tokens in that field.
 How do I get back just the ones that match my search?





Re: optional nested queries

2011-07-01 Thread kenf_nc
I don't use dismax, but I do something similar with a regular query. I have
a field defined in my schema.xml called 'dummy' (not sure why it's called
that, actually) that defaults to 1 on every document indexed. So say I want
to give a score bump to documents that have an image; I can do queries like:

q=Some+search+text+AND+(has_image:true^.5 OR dummy:1)

I'm doing that from memory and haven't actually tested it, so my syntax may
be off, but I hope you get the idea. Basically, the first part of the OR
query in parentheses is your optional nested query: if it fails, or even if
the document doesn't have a has_image field at all, the dummy:1 will always
pass.

Ken
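A hedged sketch of Ken's approach as a small query builder. The field names has_image and dummy come from his example; the host and core in the URL are illustrative assumptions:

```python
from urllib.parse import urlencode

def with_optional_boost(text, boost_clause, dummy_field="dummy"):
    """AND the main query with an always-true OR group carrying the boost.

    The dummy_field:1 clause matches every document, so the group never
    excludes results; the boost clause only raises scores when it matches.
    """
    return f"{text} AND ({boost_clause} OR {dummy_field}:1)"

q = with_optional_boost("Some search text", "has_image:true^0.5")
print("http://localhost:8983/solr/select?" + urlencode({"q": q}))
```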

--
View this message in context: 
http://lucene.472066.n3.nabble.com/optional-nested-queries-tp3128847p3129064.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to optimize solr indexes

2011-07-01 Thread kenf_nc
I believe that is not a setting; it's not telling you that you have
'optimize turned on'. It's a state: your index is currently optimized. If
you index a new document or delete an existing document and don't issue an
optimize command, then your index should show optimize=false.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-optimize-solr-indexes-tp3125293p3129078.html
Sent from the Solr - User mailing list archive at Nabble.com.


SOLR and SQL functions

2011-07-01 Thread roySolr
Hello,

I have made my own SQL function (isSoccerClub). In my SQL query browser this
works fine. My query looks like:

select *
from soccer
where isSoccerClub(id,name) = 1;

Now I try to use this with the DIH. It looks like this:

<entity name="soccerclubs_entity" query="select *
                                         from soccer
                                         where isSoccerClub(id,name) = 1">
</entity>

Now I get an error on full-import: Indexing failed. Rolled back all changes.

Without where isSoccerClub(id,name) = 1 it works fine. Does Solr not
support SQL functions (Transact-SQL)?

--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-and-SQL-functions-tp3129175p3129175.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SOLR and SQL functions

2011-07-01 Thread Stefan Matheis
That doesn't matter to Solr; it's just executing your query via JDBC. So the
complete error message would be interesting. Have a look at the error log of
your SQL server too (especially for the timeframe while the dataimport is
running).

regards
Stefan

On Fri, Jul 1, 2011 at 2:52 PM, roySolr royrutten1...@gmail.com wrote:
 Hello,

 I have made my own sql function(isSoccerClub). In my sql query browser this
 works fine. My query looks like:

 select *
 from soccer
 where isSoccerClub(id,name) = 1;

 Now i try to use this with the DIH. It looks like this:

 <entity name="soccerclubs_entity" query="select *
                                          from soccer
                                          where isSoccerClub(id,name) = 1">
 </entity>

 Now i get some error with the full-import: Indexing failed. Rolled back all
 changes.

 without where isSoccerClub(id,name) = 1; it works fine. Does SOLR not
 support sql functions(transact-sql)??

 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/SOLR-and-SQL-functions-tp3129175p3129175.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: JOIN, query on the parent?

2011-07-01 Thread Yonik Seeley
On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley ryan...@gmail.com wrote:
 Hello-

 I'm looking for a way to find all the links from a set of results.  Consider:

 <doc>
   id:1
   type:X
   link:a
   link:b
 </doc>

 <doc>
   id:2
   type:X
   link:a
   link:c
 </doc>

 <doc>
   id:3
   type:Y
   link:a
 </doc>

 Is there a way to search for all the links from stuff of type X -- in
 this case (a,b,c)

Do the links point to other documents somehow?
Let's assume that there are documents with ids of a,b,c

fq={!join from=link to=id}type:X

Basically, you start with the set of documents that match type:X, then
follow from link to id to arrive at the new set of documents.

-Yonik
http://www.lucidimagination.com
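As a sanity check, the semantics of that join can be modeled in a few lines. This is a toy in-memory sketch, not Solr code; it assumes documents with ids a, b, c exist, as in the thread:

```python
# Model {!join from=link to=id}type:X over the example documents:
# collect the link values of every type:X document, then select the
# documents whose id appears in that set.

docs = [
    {"id": "1", "type": "X", "link": ["a", "b"]},
    {"id": "2", "type": "X", "link": ["a", "c"]},
    {"id": "3", "type": "Y", "link": ["a"]},
    {"id": "a"}, {"id": "b"}, {"id": "c"},
]

linked = {l for d in docs if d.get("type") == "X" for l in d.get("link", [])}
result = [d["id"] for d in docs if d["id"] in linked]
print(result)  # ['a', 'b', 'c']
```

Note that document 3 (type:Y) contributes nothing: only links from the type:X set are followed.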


optional nested queries

2011-07-01 Thread joelmats
Hello!

Is it possible to have an optional nested query? I have 2 nested queries and
would like the first query to be mandatory but the second optional; i.e., if
there is a match on the second query, I would like it to improve the score,
but it is not required.

A sample query I am currently using is:
sort=score+desc&fl=*+score&start=0&q=_query_:"{!dismax qf='name_text
categories_text people_texts' mm='1' hl='on' hl.simple.pre='@@@hl@@@'
hl.simple.post='@@@endhl@@@' hl.fl='people_texts'}mr+milk" _query_:"{!dismax
qf='city_text' mm='1'}new york"&rows=10&debugQuery=on

However, it seems that if the second query fails to match, the whole query
fails.

Thanks!

J



--
View this message in context: 
http://lucene.472066.n3.nabble.com/optional-nested-queries-tp3128847p3128847.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: optional nested queries

2011-07-01 Thread Erik Hatcher
Put an OR between your two nested queries to ensure you're using that operator. 
 Also, those hl params in your first dismax don't really belong there and 
should be separate parameters globally.

Erik

On Jul 1, 2011, at 06:19 , joelmats wrote:

 Hello!
 
 Is it possible to have an optional nested query. I have 2 nested queries and
 would like to have the first query mandatory but the second optional. ie..
 if there is a match on the second query, i would like it to improve the
 score but it is not required.
 
 A sample query I am currently using is:
 sort=score+descfl=*+scorestart=0q=_query_:{!dismax qf='name_text
 categories_text people_texts' mm='1' hl='on' hl.simple.pre='@@@hl@@@'
 hl.simple.post='@@@endhl@@@' hl.fl='people_texts'}mr+milk _query_:{!dismax
 qf='city_text' mm='1'}new yorkrows=10debugQuery=on
 
 However, it seems that if the second query fails to match, the whole query
 fails.
 
 Thanks!
 
 J
 
 
 
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/optional-nested-queries-tp3128847p3128847.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: SOLR and SQL functions

2011-07-01 Thread roySolr
OK, I checked my error logs and found some problems.

SET NAMES latin1
 SET character_set_results = NULL
 SHOW VARIABLES
 SHOW COLLATION
 SET autocommit=1
 SET sql_mode='STRICT_TRANS_TABLES'
 SET autocommit=0
select * from soccer where isSoccerClub(id,name) = 1;

I see that sql_mode is set to STRICT_TRANS_TABLES. When I run this part in
MySQL I get errors; without that sql_mode it works. Can I change this
variable, or do you know a better solution?


--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-and-SQL-functions-tp3129175p3129342.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Wildcard search not working if full word is queried

2011-07-01 Thread François Schiettecatte
Celso

You are very welcome, and yes, I should have mentioned that wildcard
searches are not analyzed (which is a recurring theme). This also means that
they are not downcased, so the search TEST* will probably not find anything
either in your setup.

Cheers

François
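One common client-side workaround, sketched below under the assumption that the index-time analyzer lowercases terms: normalize the prefix before appending the wildcard. Stemming cannot safely be reproduced this way, so stemmed variants may still be missed:

```python
# Wildcard terms bypass analysis, so mirror the analyzer's downcasing on
# the client before sending the query. This fixes the TEST* case; it does
# not fix mismatches caused by stemming (teste -> test).

def wildcard_term(user_input):
    return user_input.strip().lower() + "*"

print(wildcard_term("TEST"))  # test*
```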

On Jul 1, 2011, at 5:16 AM, Celso Pinto wrote:

 Hi again,
 
 read (past tense) TFM :-) and:
 
 On wildcard and fuzzy searches, no text analysis is performed on the
 search word.
 
 Thanks a lot François!
 
 Regards,
 Celso
 
 On Fri, Jul 1, 2011 at 10:02 AM, Celso Pinto cpi...@yimports.com wrote:
 Hi François,
 
 it is indeed being stemmed, thanks a lot for the heads up. It appears
 that stemming is also configured for the query so it should work just
 the same, no?
 
 Thanks again.
 
 Regards,
 Celso
 
 
 2011/6/30 François Schiettecatte fschietteca...@gmail.com:
 I would run that word through the analyzer, I suspect that the word 'teste' 
 is being stemmed to 'test' in the index, at least that is the first place I 
 would check.
 
 François
 
 On Jun 30, 2011, at 2:21 PM, Celso Pinto wrote:
 
 Hi everyone,
 
 I'm having some trouble figuring out why a query with an exact word
 followed by the * wildcard, eg. teste*, returns no results while a
 query for test* returns results that have the word teste in them.
 
 I've created a couple of pasties:
 
 Exact word with wildcard : http://pastebin.com/n9SMNsH0
 Similar word: http://pastebin.com/jQ56Ww6b
 
 Parameters other than title, description and content have no effect
 other than filtering out unwanted results. In a two of the four
 results, the title has the complete word teste. On the other two,
 the word appears in the other fields.
 
 Does anyone have any insights about what I'm doing wrong?
 
 Thanks in advance.
 
 Regards,
 Celso
 
 
 



Re: SOLR and SQL functions

2011-07-01 Thread roySolr
I have found the problem: some records had incorrect data. Thanks for your
help so far!

--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-and-SQL-functions-tp3129175p3129409.html
Sent from the Solr - User mailing list archive at Nabble.com.


The fastest way to obtain field values

2011-07-01 Thread Bojidar Penchev
Hi Guys,

For the last several days I have been trying to find a fast way to obtain
all possible values for a given field, but none of the solutions I tried
were fast enough.
I have several million documents indexed in a single Solr instance, around
7 million for now, but I want to see how far I can go.
Every document can have several 'special' fields for which I want to know
their possible values (the values currently present in the index). The
distinct count of the values is small, let's say around 200 at most. These
fields are dynamic and defined as follows:

<dynamicField name="*_sl" type="string" indexed="true" stored="true"/>

For now I tried the following things:

- using a facet query for these fields and taking the values that have a
  count of at least 1, but this query takes around 15 seconds to complete:
  http://localhost:8983/solr/select?q=*&rows=0&facet=true&facet.limit=-1&facet.field=colour_sl&facet.mincount=1

- tweaking the filterCache params in solrconfig.xml and switching between
  the fc and enum facet.methods, but the time was similar

- using the http://wiki.apache.org/solr/TermsComponent: the query
  http://localhost:8983/solr/terms?terms.fl=colour_sl&terms.limit=-1
  was ultra fast, but since it only provides accurate data if you run
  optimize (which is also a very time-consuming operation) before the
  query, this also doesn't work.

Are there any other ways?
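For reference, the faceting attempt described in the post can be assembled as a single request URL. A sketch; wt=json is an addition here for easy parsing, and the host and field name are the ones from the thread:

```python
from urllib.parse import urlencode

# Build the distinct-values facet request: zero rows, unlimited facet
# values, only values that actually occur (mincount=1).
params = {
    "q": "*:*",
    "rows": 0,
    "facet": "true",
    "facet.field": "colour_sl",
    "facet.limit": -1,
    "facet.mincount": 1,
    "wt": "json",
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```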


RE: Uninstall Solr

2011-07-01 Thread Jonathan Rochkind
There's no general documentation on that, because it depends on exactly what 
container you are using (Tomcat? Jetty? Something else?) and how you are using 
it.  It is confusing, but blame Java for that, nothing unique to Solr. 

So since there's really nothing unique to Solr here, you could try looking up 
documentation on the particular container you are using and how you undeploy 
.war's from it, or asking on lists related to that documentation. 

But it's also possible someone here would be able to help you out, but you'd 
have to provide more information about what container you are using, and 
ideally what you did in the first place to install it. 

Jonathan

From: gauravpareek2...@gmail.com [gauravpareek2...@gmail.com]
Sent: Friday, July 01, 2011 4:41 AM
To: erik.hatc...@gmail.com; solr-user@lucene.apache.org
Subject: Re: Uninstall Solr

Hello Erik,
Thank you for your help.

I understand that we need to delete the folder, but how do I undeploy the solr.war, 
and where can I find it?
If anyone could send me a document on uninstalling the Solr software, that would be great.

Regards,
Gaurav Pareek
--
Sent via Nokia Email

--Original message--
From: Erik Hatcher erik.hatc...@gmail.com
To: solr-user@lucene.apache.org
Date: Thursday, June 30, 2011 8:10:48 PM GMT-0400
Subject: Re: Uninstall Solr

How'd you install it?

Generally you just delete the directory where you installed it.  But you 
might be deploying solr.war in a container somewhere besides Solr's example 
Jetty setup, in which case you need to undeploy it from those other containers 
and remove the remnants.

Curious though... why uninstall it?  Solr makes a mighty fine hammer to have 
around :)

Erik

On Jun 30, 2011, at 19:49 , GAURAV PAREEK wrote:

 Hi All,

 How to *uninstall* Solr completely ?

 Any help will be appreciated.

 Regards,
 Gaurav




Re: MergerFactor and MaxMergerDocs effecting num of segments created

2011-07-01 Thread Shawn Heisey

On 7/1/2011 4:43 AM, Romi wrote:

My index files are these; I want to see the effect of mergeFactor and
maxMergeDocs on these indexes. How can I do it?

_0.fdt  3310 KB
_0.fdx  23 KB
_0.fnm  1 KB
_0.frq  857 KB
_0.nrm  31 KB
_0.prx  1748 KB
_0.tii  5 KB
_0.tis  350 KB

I mean, what test cases for mergeFactor and maxMergeDocs can I run to see the
effect on the indexed files? The current configuration is:

<mergeFactor>2</mergeFactor>
<maxMergeDocs>10</maxMergeDocs>


That is a single index segment, and as it's the initial segment (_0), no 
optimization or merging has taken place.  Further segments would have 
the same file extensions with prefixes like _1, _2, etc.  Once you 
reached _z, the next segment would be _10.
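The _z to _10 rollover described above follows from segments being named with base-36 generation numbers; a small sketch of that naming scheme (my own illustration, not Solr or Lucene API code):

```java
// Sketch: Lucene-style segment names are "_" plus the generation rendered
// in base 36, so generation 35 is "_z" and generation 36 rolls over to "_10".
public class SegmentNames {
    static String segmentName(long generation) {
        return "_" + Long.toString(generation, Character.MAX_RADIX); // radix 36
    }

    public static void main(String[] args) {
        System.out.println(segmentName(0));   // _0
        System.out.println(segmentName(35));  // _z
        System.out.println(segmentName(36));  // _10
    }
}
```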


Your index is very small, so small that it only needs one segment when 
it is built all at once.  If you were to add new documents to the index 
(rather than do a full reindex), those new documents would go into a new 
segment.  If you continue to add segments in this way, this is when 
mergeFactor comes into play -- when the number of original segments 
reaches this value, they are merged into a single larger segment.  When 
this continues and you have enough merged segments, they are merged into 
an even larger segment.  I believe that a mergeFactor of 2 is special, 
designed to keep a large starting segment untouched while merging all 
the rest, but I have not confirmed that myself.


I don't know why maxMergeDocs is not taking effect.  It could be that 
during initial indexing, other factors (like ramBufferSizeMB) are 
involved, and maxMergeDocs only takes effect when merging existing segments.
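For reference, these knobs live together in solrconfig.xml's index settings; a hedged sketch (the values are illustrative, not recommendations):

```xml
<!-- Illustrative values only -->
<mergeFactor>10</mergeFactor>
<maxMergeDocs>2147483647</maxMergeDocs>
<!-- segments are also flushed when the RAM buffer fills,
     which can mask the effect of maxMergeDocs during bulk indexing -->
<ramBufferSizeMB>32</ramBufferSizeMB>
```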


For comparison purposes, here are the first three segments from one of 
my indexes:


-rw-r--r-- 1 ncindex ncindex 6323043528 Jun 30 00:57 _lf.fdt
-rw-r--r-- 1 ncindex ncindex   75766484 Jun 30 00:57 _lf.fdx
-rw-r--r-- 1 ncindex ncindex 382 Jun 30 00:55 _lf.fnm
-rw-r--r-- 1 ncindex ncindex 2833619259 Jun 30 01:04 _lf.frq
-rw-r--r-- 1 ncindex ncindex   28412434 Jun 30 01:05 _lf.nrm
-rw-r--r-- 1 ncindex ncindex 1183860 Jun 30 15:41 _lf_o.del
-rw-r--r-- 1 ncindex ncindex 2455819068 Jun 30 01:04 _lf.prx
-rw-r--r-- 1 ncindex ncindex   23759599 Jun 30 01:04 _lf.tii
-rw-r--r-- 1 ncindex ncindex  926422435 Jun 30 01:04 _lf.tis
-rw-r--r-- 1 ncindex ncindex   18940740 Jun 30 01:06 _lf.tvd
-rw-r--r-- 1 ncindex ncindex 5883186438 Jun 30 01:06 _lf.tvf
-rw-r--r-- 1 ncindex ncindex  151532964 Jun 30 01:06 _lf.tvx
-rw-r--r-- 1 ncindex ncindex  868769283 Jul  1 09:07 _mf.fdt
-rw-r--r-- 1 ncindex ncindex   11279356 Jul  1 09:07 _mf.fdx
-rw-r--r-- 1 ncindex ncindex 372 Jul  1 09:06 _mf.fnm
-rw-r--r-- 1 ncindex ncindex  347906214 Jul  1 09:08 _mf.frq
-rw-r--r-- 1 ncindex ncindex 4229761 Jul  1 09:08 _mf.nrm
-rw-r--r-- 1 ncindex ncindex  284701250 Jul  1 09:08 _mf.prx
-rw-r--r-- 1 ncindex ncindex 960052 Jul  1 09:08 _mf.tii
-rw-r--r-- 1 ncindex ncindex  141775812 Jul  1 09:08 _mf.tis
-rw-r--r-- 1 ncindex ncindex2818958 Jul  1 09:08 _mf.tvd
-rw-r--r-- 1 ncindex ncindex  735319599 Jul  1 09:08 _mf.tvf
-rw-r--r-- 1 ncindex ncindex   22558708 Jul  1 09:08 _mf.tvx
-rw-r--r-- 1 ncindex ncindex   30888748 Jul  1 09:07 _mg.fdt
-rw-r--r-- 1 ncindex ncindex 385700 Jul  1 09:07 _mg.fdx
-rw-r--r-- 1 ncindex ncindex 372 Jul  1 09:07 _mg.fnm
-rw-r--r-- 1 ncindex ncindex   13709508 Jul  1 09:07 _mg.frq
-rw-r--r-- 1 ncindex ncindex 144640 Jul  1 09:07 _mg.nrm
-rw-r--r-- 1 ncindex ncindex   12683152 Jul  1 09:07 _mg.prx
-rw-r--r-- 1 ncindex ncindex  51848 Jul  1 09:07 _mg.tii
-rw-r--r-- 1 ncindex ncindex 7409698 Jul  1 09:07 _mg.tis
-rw-r--r-- 1 ncindex ncindex  96428 Jul  1 09:07 _mg.tvd
-rw-r--r-- 1 ncindex ncindex   31790084 Jul  1 09:07 _mg.tvf
-rw-r--r-- 1 ncindex ncindex 771396 Jul  1 09:07 _mg.tvx

Shawn



tika.parser.AutoDetectParser

2011-07-01 Thread Tod
I'm working on upgrading to v3.2 from v1.4.1.  I think I've got 
everything working, but when I try to do a data import using 
dataimport.jsp, the import rolls back and I get a class-not-found 
exception on the above-referenced class.


I thought that tika was packaged up with the base Solr build now but 
this message seems to contradict that unless I'm missing a jar 
somewhere.  I've got both dataimporthandler jar files in my WEB-INF/lib 
dir so not sure what I could be missing.  Any ideas?



Thanks - Tod


Re: JOIN, query on the parent?

2011-07-01 Thread Ryan McKinley
On Fri, Jul 1, 2011 at 9:06 AM, Yonik Seeley yo...@lucidimagination.com wrote:
 On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley ryan...@gmail.com wrote:
 Hello-

 I'm looking for a way to find all the links from a set of results.  Consider:

 <doc>
   id:1
   type:X
   link:a
   link:b
 </doc>

 <doc>
   id:2
   type:X
   link:a
   link:c
 </doc>

 <doc>
   id:3
   type:Y
   link:a
 </doc>

 Is there a way to search for all the links from stuff of type X -- in
 this case (a,b,c)

 Do the links point to other documents somehow?
 Let's assume that there are documents with ids of a,b,c

 fq={!join from=link to=id}type:X

 Basically, you start with the set of documents that match type:X, then
 follow from link to id to arrive at the new set of documents.
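A hedged sketch of the full request, assuming documents with ids a, b, and c exist:

```text
http://localhost:8983/solr/select?q=*:*&fq={!join from=link to=id}type:X
```

The fq keeps documents whose id appears as a link value on some type:X document, i.e. a, b, and c in this example.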


Yup -- that works.  Thank you!

ryan


Solr Restart - Query during warming query leads to exception

2011-07-01 Thread Thomas Schmidt
Hi,
when I restart my Solr server it performs two warming queries.
When a request occurs within this warming period there is an exception,
and exceptions continue until I restart Solr.

Logfile:
INFO: Added SolrEventListener:
org.apache.solr.core.QuerySenderListener{queries=[{q=solr,start=0,rows=10},
{q=rocks,start=0,rows=10}, {q=static newSearcher warming query from
solrconfig.xml}]}
INFO: Added SolrEventListener:
org.apache.solr.core.QuerySenderListener{queries=[{q=fast_warm,start=0,rows=10},
{q=static firstSearcher warming query from solrconfig.xml}]}

Is this a known problem?
Does anybody have an advice for a solution?

Thanks
T.

solrconfig.xml:

  <!-- a newSearcher event is fired whenever a new searcher is being prepared
       and there is a current searcher handling requests (aka registered). -->
  <!-- QuerySenderListener takes an array of NamedList and executes a
       local query request for each NamedList in sequence. -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst> <str name="q">solr</str> <str name="start">0</str> <str name="rows">10</str> </lst>
      <lst> <str name="q">rocks</str> <str name="start">0</str> <str name="rows">10</str> </lst>
      <lst><str name="q">static newSearcher warming query from solrconfig.xml</str></lst>
    </arr>
  </listener>

  <!-- a firstSearcher event is fired whenever a new searcher is being
       prepared but there is no current registered searcher to handle
       requests or to gain autowarming data from. -->
  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst> <str name="q">fast_warm</str> <str name="start">0</str> <str name="rows">10</str> </lst>
      <lst><str name="q">static firstSearcher warming query from solrconfig.xml</str></lst>
    </arr>
  </listener>

  <!-- If a search request comes in and there is no current registered searcher,
       then immediately register the still warming searcher and use it.  If
       false then all requests will block until the first searcher is done
       warming. -->
  <useColdSearcher>false</useColdSearcher>

  <!-- Maximum number of searchers that may be warming in the background
       concurrently.  An error is returned if this limit is exceeded.  Recommend
       1-2 for read-only slaves, higher for masters w/o cache warming. -->
  <maxWarmingSearchers>2</maxWarmingSearchers>


upgraded from 2.9 to 3.x, problems. help?

2011-07-01 Thread dhastings
I recently upgraded all systems for indexing and searching to Lucene/Solr 3.1,
and unfortunately it seems there are a lot more changes under the hood than
there used to be.

i have a java based indexer and a solr based searcher, on the java end for
the indexing this is what i have:

Set<String> nostopwords = new HashSet<String>();
nostopwords.add("needtoindexstopwords");
Analyzer an = new StandardAnalyzer(Version.LUCENE_31, nostopwords);
writer = new IndexWriter(fsDir, an, MaxFieldLength.UNLIMITED);

doc.add(new Field("text", contents, Field.Store.NO, Field.Index.ANALYZED));

and for the Solr end I have:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <filter class="solr.WordDelimiterFilterFactory"
          generateWordParts="1" generateNumberParts="1" catenateWords="1"
          catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
  <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer" ignoreCase="true"/>
</fieldType>


and it seems to be working well enough, EXCEPT I somehow lost matching
against strings like:
97 Yale L.J. 1493
which with 2.9 would give me 753 results in my data; 3.1 now gives me
105.

Is there something I can change in the indexer to reproduce what used to be
the default behavior of the standard analyzer, or is this something with my
Solr schema against the data?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/upgraded-from-2-9-to-3-x-problems-help-tp3129348p3129348.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: upgraded from 2.9 to 3.x, problems. help?

2011-07-01 Thread dhastings
I guess what I'm asking is how to set up Solr/Lucene to find
yale l.j.
yale l. j.
yale l j
as all the same thing.
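One hedged sketch (an assumption on my part, not a verified fix): WordDelimiterFilterFactory normally sits inside an <analyzer> after a tokenizer, and its catenate options are what let punctuated and spaced variants index to a common token:

```xml
<!-- Sketch only: catenateWords="1" lets "L.J.", "L. J." and "L J"
     all contribute the catenated token "lj" alongside the word parts. -->
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateWords="1" catenateNumbers="1"
            catenateAll="0" splitOnCaseChange="0"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```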



Re: tika.parser.AutoDetectParser

2011-07-01 Thread Shawn Heisey

On 7/1/2011 9:23 AM, Tod wrote:
I'm working on upgrading to v3.2 from v 1.4.1.  I think I've got 
everything working but when I try to do a data import using 
dataimport.jsp I'm rolling back and getting class not found exception 
on the above referenced class.


I thought that tika was packaged up with the base Solr build now but 
this message seems to contradict that unless I'm missing a jar 
somewhere.  I've got both dataimporthandler jar files in my 
WEB-INF/lib dir so not sure what I could be missing.  Any ideas?


Tika is included in the solr download, but it's not included in the .war 
or any of the other files in the dist directory.  You may have noticed 
that you now have to include one or more jars for the dataimport 
handler.  If you copy the following files from the solr download to the 
same place you have apache-solr-dataimporthandler-3.2.0.jar, you should 
be OK.


contrib/extraction/lib/tika-core-0.8.jar
contrib/extraction/lib/tika-parsers-0.8.jar

Thanks,
Shawn



Re: tika.parser.AutoDetectParser

2011-07-01 Thread Tod

On 07/01/2011 12:59 PM, Shawn Heisey wrote:

On 7/1/2011 9:23 AM, Tod wrote:

I'm working on upgrading to v3.2 from v 1.4.1. I think I've got
everything working but when I try to do a data import using
dataimport.jsp I'm rolling back and getting class not found exception
on the above referenced class.

I thought that tika was packaged up with the base Solr build now but
this message seems to contradict that unless I'm missing a jar
somewhere. I've got both dataimporthandler jar files in my WEB-INF/lib
dir so not sure what I could be missing. Any ideas?


Tika is included in the solr download, but it's not included in the .war
or any of the other files in the dist directory. You may have noticed
that you now have to include one or more jars for the dataimport
handler. If you copy the following files from the solr download to the
same place you have apache-solr-dataimporthandler-3.2.0.jar, you should
be OK.

contrib/extraction/lib/tika-core-0.8.jar
contrib/extraction/lib/tika-parsers-0.8.jar

Thanks,
Shawn





Got them, thanks Shawn.


QueryResultCache question

2011-07-01 Thread arian487
So it seems the things in the queryResultCache have no TTL, I'm just curious
how it works if I reindex something with new info?  I am going to be
reindexing things often (I'd sort by last login and this changes fast). 
I've been stepping through the code and of course if the same queries come
in it simply gets the results from the key in the result cache.  However, if
I make the same query over and over again, when will I ever get different
results?  

I'm a little confused as to how the 'correct' results are shown if it just
uses the QueryResultKey to get the results from the cache.  I imagine a new
Searcher with a fresh cache is created or something with every index?  If
I'm reindexing very often, how useful is the QueryResultCache?  



pagination and groups

2011-07-01 Thread Benson Margulies
I'm a bit puzzled while trying to adapt some pagination code in
javascript to a grouped query.

I'm using:

'group' : 'true',
 'group.limit' : 5, // something to show ...
 'group.field' : [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]

and displaying each field's worth in a tab. how do I work 'start', etc?


Reading data from Solr MoreLikeThis

2011-07-01 Thread Sheetal
Hi,
I am beginning to learn Solr. I am trying to read data from Solr MoreLike
This through Java. My query is 
http://localhost:8983/solr/select?q=repository_id:20&mlt=true&mlt.fl=filename&mlt.mindf=1&mlt.mintf=1&debugQuery=on&mlt.interestingTerms=detail&indent=true

which gave me the result as below:
http://lucene.472066.n3.nabble.com/file/n3130176/Screen_shot_2011-07-01_at_1.52.17_PM.png
 

I wanted to read the data of the field moreLikeThis from the output
<lst name="moreLikeThis">.
The main idea is: after I do moreLikeThis, all field values of
moreLikeThis should be printed by my program.

I figured out how to read the result tag by calling rsp.getResults() on the
QueryResponse and looping over it.

But how would I read and print the values of the moreLikeThis tag? Is there
any class like rsp.getMoreLikeThisField(url) or something?


Thank you in advance. :)







Reading data from Solr MoreLikeThis

2011-07-01 Thread Sheetal
Hi,
I am beginner in Solr. I am trying to read data from Solr MoreLike This
through Java. My query is
http://localhost:8983/solr/select?q=repository_id:20&mlt=true&mlt.fl=filename&mlt.mindf=1&mlt.mintf=1&debugQuery=on&mlt.interestingTerms=detail


I wanted to read the data of the field moreLikeThis from the output
<lst name="moreLikeThis">.
The main idea is: after I do moreLikeThis, all field values of
moreLikeThis should be printed by my program.

I figured out how to read the Result tag by calling rsp.getResults() on the
QueryResponse and looping over it.

But how would I read and print the values of the moreLikeThis tag? Is there
any class like rsp.getMoreLikeThisField(fieldname) or something.


Thank you in advance. :) 



Re: pagination and groups

2011-07-01 Thread Tomás Fernández Löbbe
I'm not sure I understand what you want to do. To paginate with groups you
can use start and rows as with ungrouped queries. With group.ngroups
(something I found a couple of days ago) you can show the total number of
groups. group.limit tells Solr how many documents (max) you want to see
for each group.
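A hedged example request combining these parameters (the field names are illustrative):

```text
http://localhost:8983/solr/select?q=*:*&group=true&group.field=cat&group.limit=5&group.ngroups=true&start=0&rows=10
```

Here start and rows page through groups, group.limit caps documents shown per group, and group.ngroups=true reports the total group count.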

On Fri, Jul 1, 2011 at 2:56 PM, Benson Margulies bimargul...@gmail.comwrote:

 I'm a bit puzzled while trying to adapt some pagination code in
 javascript to a grouped query.

 I'm using:

 'group' : 'true',
  'group.limit' : 5, // something to show ...
  'group.field' : [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]

 and displaying each field's worth in a tab. how do I work 'start', etc?



bbox query syntax

2011-07-01 Thread Kyle Lee
Hello all,

What are we doing incorrectly with this query?

http://10.0.0.121:8080/solr/select?q=(description:rifle)&fq=(transactionDate:[NOW-30DAY/DAY TO NOW/DAY] AND {!bbox sfield=storeLocation pt=32.73,-96.97 d=20})

If we leave the transactionDate field out of the filter query string, the
query works as expected. However, when we include the BBOX clause, we get a
parser error.

Any help figuring out the correct syntax would be appreciated.

Thanks,
Kyle


Getting started with Velocity

2011-07-01 Thread Chip Calhoun
I'm a Solr novice, so I hope I'm missing something obvious.  When I run a 
search in the Admin view, everything works fine.  When I do the same search in 
http://localhost:8983/solr/browse , I invariably get 0 results found.  What 
am I missing?  Are these not supposed to be searching the same index?
 
Thanks,
Chip


Re: QueryResultCache question

2011-07-01 Thread Tomás Fernández Löbbe
Hi, currently in Solr, updated documents don't actually change until you
issue a commit operation (the same happens with new and deleted documents).
After the commit operation, all caches are flushed. That's why there is no
TTL: all documents in the cache remain up to date with the index until a commit
is issued.
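For reference, the relevant cache is declared in solrconfig.xml; a hedged sketch with illustrative sizes. Since a commit swaps in a new searcher with fresh caches, autowarmCount controls how many cached queries are re-executed to warm the new searcher:

```xml
<!-- Illustrative sizes; tune for your query load -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="128"/>
```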

On Fri, Jul 1, 2011 at 2:47 PM, arian487 akarb...@tagged.com wrote:

 So it seems the things in the queryResultCache have no TTL, I'm just
 curious
 how it works if I reindex something with new info?  I am going to be
 reindexing things often (I'd sort by last login and this changes fast).
 I've been stepping through the code and of course if the same queries come
 in it simply gets the results from the key in the result cache.  However,
 if
 I make the same query over and over again, when will I ever get different
 results?

 I'm a little confused as to how the 'correct' results are shown if it just
 uses the QueryResultKey to get the results from the cache.  I imagine a new
 Searcher with a fresh cache is created or something with every index?  If
 I'm reindexing very often, how useful is the QueryResultCache?




Re: pagination and groups

2011-07-01 Thread Benson Margulies
What takes the place of response.response.numFound?



2011/7/1 Tomás Fernández Löbbe tomasflo...@gmail.com:
 I'm not sure I understand what you want to do. To paginate with groups you
 can use start and rows as with ungrouped queries. with group.ngroups
 (Something I found a couple of days ago) you can show the total number of
 groups. group.limit tells Solr how many (max) documents you want to see
 for each group.

 On Fri, Jul 1, 2011 at 2:56 PM, Benson Margulies bimargul...@gmail.comwrote:

 I'm a bit puzzled while trying to adapt some pagination code in
 javascript to a grouped query.

 I'm using:

 'group' : 'true',
  'group.limit' : 5, // something to show ...
  'group.field' : [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]

 and displaying each field's worth in a tab. how do I work 'start', etc?




Re: pagination and groups

2011-07-01 Thread Tomás Fernández Löbbe
Are you using group.main=true?
I didn't see the code for this and the documentation doesn't specify it, but
I tried group.ngroups=true, and when using group.main=true the ngroups
attribute is not brought back. If you are not using group.main=true, then
setting group.ngroups=true will return an ngroups value, which is
the number of groups that matched the query.

NOTE: All this is in trunk, I'm not sure if it is on 3.3


On Fri, Jul 1, 2011 at 3:53 PM, Benson Margulies bimargul...@gmail.comwrote:

 What takes the place of response.response.numFound?



 2011/7/1 Tomás Fernández Löbbe tomasflo...@gmail.com:
  I'm not sure I understand what you want to do. To paginate with groups
 you
  can use start and rows as with ungrouped queries. with
 group.ngroups
  (Something I found a couple of days ago) you can show the total number of
  groups. group.limit tells Solr how many (max) documents you want to see
  for each group.
 
  On Fri, Jul 1, 2011 at 2:56 PM, Benson Margulies bimargul...@gmail.com
 wrote:
 
  I'm a bit puzzled while trying to adapt some pagination code in
  javascript to a grouped query.
 
  I'm using:
 
  'group' : 'true',
   'group.limit' : 5, // something to show ...
   'group.field' : [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]
 
  and displaying each field's worth in a tab. how do I work 'start', etc?
 
 



Re: Solr Restart - Query during warming query leads to exception

2011-07-01 Thread Chris Hostetter

: when I restart my solr server it performs two warming queries.
: When a request occures within this there is an exception and always
: exceptions until i restart solr.

what type of request?
what is the initial exception?
what are the subsequent exceptions until restart?
what do the logs looks like, starting with the first exception, until you 
restart?

: Does anybody have an advice for a solution?

hard to give advice when you haven't actually shown any evidence of the 
problem.  


-Hoss


How to import dynamic fields

2011-07-01 Thread randolf.julian
I am trying to import from one SOLR index to another (with different schema)
using data import handler via http: However, there are dynamic fields in the
source that I need to import. In the schema.xml, this field has been
declared as:

  <dynamicField name="END_DATE_*" type="date" indexed="true" stored="true"/>

When I query SOLR, this comes up:

<date name="END_DATE_102180">2011-05-31T00:00:00Z</date>
<date name="END_DATE_1171485">2011-05-31T00:00:00Z</date>
<date name="END_DATE_14211203">2011-07-26T08:15:25Z</date>
<date name="END_DATE_163969688">2011-05-31T00:00:00Z</date>
<date name="END_DATE_215089986">2011-07-26T08:15:25Z</date>
<date name="END_DATE_355673498">2011-05-31T00:00:00Z</date>
<date name="END_DATE_4329407">2011-07-26T08:15:25Z</date>
<date name="END_DATE_660666924">2011-07-19T21:00:35Z</date>
<date name="END_DATE_669781160">2011-07-26T08:15:25Z</date>
<date name="END_DATE_793694814">2011-07-26T08:15:25Z</date>
<date name="END_DATE_824977178">2011-07-26T08:15:25Z</date>

How can I import these to the other SOLR index using dataimporthandler via
http?

Thanks,
Randolf



Re: optional nested queries

2011-07-01 Thread joelmats
Thanks!

I was wondering why my highlighting wasn't working either.



Re: Reading data from Solr MoreLikeThis

2011-07-01 Thread Juan Grande
Hi,

As far as I know, there's no specific method to get the MoreLikeThis section
from the response. Anyway, you can retrieve the results with a piece of code
like the following:

// the <lst name="moreLikeThis"> is a NamedList of SolrDocumentLists
NamedList<SolrDocumentList> mltResult =
    (NamedList<SolrDocumentList>) response.getResponse().get("moreLikeThis");
for (Map.Entry<String, SolrDocumentList> entry : mltResult) {
    System.out.println("Docs similar to " + entry.getKey());
    for (SolrDocument similarDoc : entry.getValue()) {
        System.out.println(" - " + similarDoc.get("id"));
    }
}


Hope that helps!

*Juan*



On Fri, Jul 1, 2011 at 3:04 PM, Sheetal rituzprad...@gmail.com wrote:

 Hi,
 I am beginner in Solr. I am trying to read data from Solr MoreLike This
 through Java. My query is

 http://localhost:8983/solr/select?q=repository_id:20mlt=truemlt.fl=filenamemlt.mindf=1mlt.mintf=1debugQuery=onmlt.interestingTerms=detail


 I wanted to read the data of the field moreLikeThis from output lst
 name=moreLikeThis.
 The main idea is, after I do moreLikeThis, then all fieldValue of
 moreLikeThis should print out in my program.

 I figured out the way to read the Result tag by doing QueryResponse
 rsp.getResults() and looping out.

 But How would I read and print the values of moreLikeThis tag? Is there
 anyway class like rsp.getMoreLikeThisField(fieldname) or something.


 Thank you in advance. :)




Re: QueryResultCache question

2011-07-01 Thread arian487
Thanks for the quick reply!  I see there's no way to access the result cache;
I actually want to access the cache in a new component I have, which runs
after the query, but it seems this is impossible.  I guess I'm just going to
rebuild the code to make it public or something, as I need the result cache.



Re: TermVectors and custom queries

2011-07-01 Thread Mike Sokolov
Yes, that's right.  But at the moment the HL code basically has to 
reconstruct and re-run your query - it doesn't have any special 
knowledge.  There's some work going on to try and fix that, but it seems 
like it's going to require some fairly major deep re-plumbing.


-Mike

On 07/01/2011 07:54 AM, Jamie Johnson wrote:

How would I know which ones were the ones I wanted?  I don't see how,
from a query, I could match up the term vectors that met the query.
It seems like what needs to be done is to do the highlighting on the Solr
end, where there is more access to the information I'm looking for.
Sound about right?

On Fri, Jul 1, 2011 at 7:26 AM, Michael Sokolovsoko...@ifactory.com  wrote:
   

I think that's all you can do, although there is a callback-style interface
that might save some time (or space).  You still need to iterate over all of
the vectors, at least until you get the one you want.

-Mike

On 6/30/2011 4:53 PM, Jamie Johnson wrote:
 

Perhaps a better question, is this possible?

On Mon, Jun 27, 2011 at 5:15 PM, Jamie Johnsonjej2...@gmail.comwrote:
   

I have a field named content with the following definition

<field name="content" type="text" indexed="true" stored="true"
       multiValued="true" termVectors="true" termPositions="true"
       termOffsets="true"/>

I'm now trying to execute a query against content and get back the term
vectors for the pieces that matched my query, but I must be messing
something up.  My query is as follows:


http://localhost:8983/solr/select/?qt=tvrh&q=content:test&fl=content&tv.all=true

where the word test is in my content field.  When I get information back
though I am getting the term vectors for all of the tokens in that field.
How do I get back just the ones that match my search?

 


 


Re: Getting started with Velocity

2011-07-01 Thread Way Cool
By default, browse is using the following config:
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>

    <!-- VelocityResponseWriter settings -->
    <str name="wt">velocity</str>

    <str name="v.template">browse</str>
    <str name="v.layout">layout</str>
    <str name="title">Solritas</str>

    <str name="defType">edismax</str>
    <str name="q.alt">*:*</str>
    <str name="rows">10</str>
    <str name="fl">*,score</str>
    <str name="mlt.qf">
      text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
    </str>
    <str name="mlt.fl">text,features,name,sku,id,manu,cat</str>
    <int name="mlt.count">3</int>

    <str name="qf">
      text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
    </str>

    <str name="facet">on</str>
    <str name="facet.field">cat</str>
    <str name="facet.field">manu_exact</str>
    <str name="facet.query">ipod</str>
    <str name="facet.query">GB</str>
    <str name="facet.mincount">1</str>
    <str name="facet.pivot">cat,inStock</str>
    <str name="facet.range">price</str>
    <int name="f.price.facet.range.start">0</int>
    <int name="f.price.facet.range.end">600</int>
    <int name="f.price.facet.range.gap">50</int>
    <str name="f.price.facet.range.other">after</str>
    <str name="facet.range">manufacturedate_dt</str>
    <str name="f.manufacturedate_dt.facet.range.start">NOW/YEAR-10YEARS</str>
    <str name="f.manufacturedate_dt.facet.range.end">NOW</str>
    <str name="f.manufacturedate_dt.facet.range.gap">+1YEAR</str>
    <str name="f.manufacturedate_dt.facet.range.other">before</str>
    <str name="f.manufacturedate_dt.facet.range.other">after</str>

    <!-- Highlighting defaults -->
    <str name="hl">on</str>
    <str name="hl.fl">text features name</str>
    <str name="f.name.hl.fragsize">0</str>
    <str name="f.name.hl.alternateField">name</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
  <!--
  <str name="url-scheme">httpx</str>
  -->
</requestHandler>

while the normal search is using the following:

<requestHandler name="search" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters can be specified, these
       will be overridden by parameters in the request
    -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
  </lst>
</requestHandler>

Just make sure the fields referenced by /browse are also defined in your
documents; otherwise change the handler to not use dismax. :-)


On Fri, Jul 1, 2011 at 12:51 PM, Chip Calhoun ccalh...@aip.org wrote:

 I'm a Solr novice, so I hope I'm missing something obvious.  When I run a
 search in the Admin view, everything works fine.  When I do the same search
 in http://localhost:8983/solr/browse , I invariably get 0 results found.
  What am i missing?  Are these not supposed to be searching the same index?

 Thanks,
 Chip



Re: How to import dynamic fields

2011-07-01 Thread Lance Norskog
SOLR-1499 is a DIH plugin that reads from another Solr.

https://issues.apache.org/jira/browse/SOLR-1499

It is not in active development, but is being updated to current source trees.
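A hedged sketch of how the SOLR-1499 plugin is wired into a DIH config (the processor name and attributes are my assumption based on the patch; check the issue for the version you apply):

```xml
<!-- Assumed wiring for the SOLR-1499 Solr-to-Solr entity processor -->
<dataConfig>
  <document>
    <entity name="sourceSolr" processor="SolrEntityProcessor"
            url="http://source-host:8983/solr" query="*:*" rows="100"/>
  </document>
</dataConfig>
```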

Lance

On Fri, Jul 1, 2011 at 12:51 PM, randolf.julian
randolf.jul...@dominionenterprises.com wrote:
 I am trying to import from one SOLR index to another (with different schema)
 using data import handler via http: However, there are dynamic fields in the
 source that I need to import. In the schema.xml, this field has been
 declared as:

  <dynamicField name="END_DATE_*" type="date" indexed="true" stored="true"/>

 When I query SOLR, this comes up:

 <date name="END_DATE_102180">2011-05-31T00:00:00Z</date>
 <date name="END_DATE_1171485">2011-05-31T00:00:00Z</date>
 <date name="END_DATE_14211203">2011-07-26T08:15:25Z</date>
 <date name="END_DATE_163969688">2011-05-31T00:00:00Z</date>
 <date name="END_DATE_215089986">2011-07-26T08:15:25Z</date>
 <date name="END_DATE_355673498">2011-05-31T00:00:00Z</date>
 <date name="END_DATE_4329407">2011-07-26T08:15:25Z</date>
 <date name="END_DATE_660666924">2011-07-19T21:00:35Z</date>
 <date name="END_DATE_669781160">2011-07-26T08:15:25Z</date>
 <date name="END_DATE_793694814">2011-07-26T08:15:25Z</date>
 <date name="END_DATE_824977178">2011-07-26T08:15:25Z</date>

 How can I import these to the other SOLR index using dataimporthandler via
 http?

 Thanks,
 Randolf





-- 
Lance Norskog
goks...@gmail.com


Re: pagination and groups

2011-07-01 Thread Benson Margulies
I'm using a version taken from the trunk some time ago. I'm not
setting group.main; I just started setting group.ngroups, and nothing
doing. So I guess my grab from the trunk isn't new enough.

2011/7/1 Tomás Fernández Löbbe tomasflo...@gmail.com:
  Are you using group.main=true?
 I didn't see the code for this and the documentation doesn't specify it, but
 when I tried group.ngroups=true together with group.main=true, the ngroups
 attribute was not returned. If you are not using group.main=true, then
 setting group.ngroups=true will add an ngroups value to the response,
 which is the number of groups that matched the query.

 NOTE: All this is in trunk, I'm not sure if it is on 3.3


 On Fri, Jul 1, 2011 at 3:53 PM, Benson Margulies bimargul...@gmail.comwrote:

 What takes the place of response.response.numFound?



 2011/7/1 Tomás Fernández Löbbe tomasflo...@gmail.com:
  I'm not sure I understand what you want to do. To paginate with groups
  you can use start and rows as with ungrouped queries. With group.ngroups
  (something I found a couple of days ago) you can show the total number of
  groups. group.limit tells Solr how many documents (at most) you want to see
  for each group.
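
 As a sketch, a grouped request with pagination might look like the following
 (one of the group fields from this thread is used as an example; with the
 default grouped response format, start and rows page over groups rather than
 individual documents):

```
/select?q=*:*
  &group=true
  &group.field=bt.nearDupCluster
  &group.limit=5        up to 5 documents returned per group
  &group.ngroups=true   include the total group count (ngroups) in the response
  &start=10&rows=10     second page of 10 groups
```

 The ngroups value in the response then plays the role that numFound plays for
 ungrouped pagination.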
 
  On Fri, Jul 1, 2011 at 2:56 PM, Benson Margulies bimargul...@gmail.com
 wrote:
 
  I'm a bit puzzled while trying to adapt some pagination code in
  javascript to a grouped query.
 
  I'm using:
 
   'group': 'true',
   'group.limit': 5, // something to show ...
   'group.field': [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]
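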
 
  and displaying each field's worth in a tab. how do I work 'start', etc?
 
 




Match only documents which contain all query terms

2011-07-01 Thread Spyros Kapnissis
Hello to all,


Is it possible to make Solr return only documents that contain all or
most of my query terms for a specific field, or will I need some
post-processing on the results?

So, for example, if I search for (a b c), I would like the following documents 
returned:

a b c
a' c b (where a' is a stem for example)

but not 
x y a b c z

Thanks,
Spyros
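
For reference (not stated in this thread), the usual way to get this behavior in Solr is the dismax parser's mm (minimum-should-match) parameter, or an explicit AND with the standard parser. A sketch, assuming a field named text:

```
defType=dismax
qf=text
q=a b c
mm=100%       every term must match ("most": e.g. mm=75% or mm=2)
```

Because mm is applied after analysis, a stemmed variant such as a' still counts as a match for a, which covers the second example; a document like "x y a b c z" would still match, though, since mm constrains which query terms must be present, not extra terms in the document.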

Indexing CSV data in Multicore setup

2011-07-01 Thread Sandeep Gond
I am trying to index CSV data in multicore setup using post.jar.

Here is what I have tried so far:
1) Started the server using java -Dsolr.solr.home=multicore -jar
start.jar

2a) Tried to post to localhost:8983/solr/core0/update/csv using java
-Dcommit=no -Durl=http://localhost:8983/solr/core0/update/csv -jar post.jar
test.csv
  Error: SimplePostTool: FATAL: Solr returned an error #404 Not Found

2b) Tried to send CSV data to core0 using java -Durl=
http://localhost:8983/solr/core0/update -jar post.jar test.csv
  Error: SimplePostTool: FATAL: Solr returned an error #400 Unexpected
character 'S' (code 83) in prolog; expected '<'  at [row,col
{unknown-source}]: [1,1]

I could feed the XML files to core0 without any issues.

Am I missing something here?
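
One likely cause of the 404 (a guess, since the solrconfig.xml isn't shown here): the multicore example configs don't register the CSV handler. In Solr 3.x it is declared per core in solrconfig.xml:

```xml
<!-- register the CSV update handler for this core (Solr 3.x) -->
<requestHandler name="/update/csv" class="solr.CSVRequestHandler"
                startup="lazy"/>
```

The 400 from plain /update is expected, since that handler parses XML, hence the "expected '<'" parse error on CSV input. With the handler registered, the file can also be posted directly, e.g. `curl 'http://localhost:8983/solr/core0/update/csv?commit=true' --data-binary @test.csv -H 'Content-type: text/plain; charset=utf-8'`.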


Re: bbox query syntax

2011-07-01 Thread David Smiley (@MITRE.org)
Hi.
By the way, your uses of parentheses are completely superfluous.
You can't just plop that {! syntax anywhere you please; it only works at
the beginning of a query, to establish the query parser for the rest of the
string and/or to set local-params. There is a hacky sub-query syntax:
... AND _query_:"{!bbox sfield=storeLocation pt=32.73,-96.97 d=20}". But in
your case, I would simply use a second filter query: fq={!bbox
sfield=storeLocation pt=32.73,-96.97 d=20}

And by the way, you forgot to round down your first NOW to the day.

~ David

-
 Author: https://www.packtpub.com/solr-1-4-enterprise-search-server/book
--
View this message in context: 
http://lucene.472066.n3.nabble.com/bbox-query-syntax-tp3130329p3131458.html