Re: Free Webinar: Mastering Solr 1.4 with Yonik Seeley

2010-03-07 Thread Siddhant Goel
Now that I missed attending it, where can I view it? :-)

Thanks

On Fri, Feb 26, 2010 at 10:11 PM, Jay Hill jayallenh...@gmail.com wrote:

 Yes, it will be recorded and available to view after the presentation.

 -Jay


 On Thu, Feb 25, 2010 at 2:19 PM, Bernadette Houghton 
 bernadette.hough...@deakin.edu.au wrote:

  Yonik, can you please advise whether this event will be recorded and
  available for later download? (It starts at 5am our time ;-)  )
 
  Regards
  Bern
 
  -Original Message-
  From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik
  Seeley
  Sent: Thursday, 25 February 2010 10:23 AM
  To: solr-user@lucene.apache.org
  Subject: Free Webinar: Mastering Solr 1.4 with Yonik Seeley
 
  I'd like to invite you to join me for an in-depth review of Solr's
  powerful, versatile new features and functions. The free webinar,
  sponsored by my company, Lucid Imagination, covers an intensive
  how-to for the features you need to make the most of Solr for your
  search application:
 
 * Faceting deep dive, from document fields to performance management
 * Best practices for sharding, index partitioning and scaling
 * How to construct efficient Range Queries and function queries
 * Sneak preview: Solr 1.5 roadmap
 
  Join us for a free webinar
  Thursday, March 4, 2010
  10:00 AM PST / 1:00 PM EST / 18:00 GMT
  Follow this link to sign up
 
  http://www.eventsvc.com/lucidimagination/030410?trk=WR-MAR2010-AP
 
  Thanks,
 
  -Yonik
  http://www.lucidimagination.com
 




-- 
- Siddhant


Re: Is it possible to use ODBC with DIH?

2010-03-07 Thread Noble Paul നോബിള്‍ नोब्ळ्
if you have  a jdbc-odbc bridge driver , it should be fine
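For example (assuming the JDK's built-in bridge, a DSN named MyDSN, and
placeholder table/column names - all of these are only illustrative), the
dataSource in data-config.xml would look roughly like this:

<dataConfig>
  <!-- sketch only: sun.jdbc.odbc.JdbcOdbcDriver is the JDK's JDBC-ODBC bridge;
       replace the DSN, credentials, and query with your own -->
  <dataSource type="JdbcDataSource"
              driver="sun.jdbc.odbc.JdbcOdbcDriver"
              url="jdbc:odbc:MyDSN"
              user="dbuser" password="dbpass"/>
  <document>
    <entity name="item" query="select id, name from items">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>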

On Sun, Mar 7, 2010 at 4:52 AM, JavaGuy84 bbar...@gmail.com wrote:

 Hi,

 I have an ODBC driver for MetaMatrix DB (Red Hat). I am trying to
 figure out a way to use DIH with the DSN that has been created on my
 machine for that ODBC driver.

 Is it possible to specify a DSN in DIH and index the DB? If it is possible, can
 you please let me know the ODBC URL that I need to enter for the dataSource in
 the DIH data-config.xml?

 Thanks,
 Barani





-- 
-
Noble Paul | Systems Architect| AOL | http://aol.com


Re: Free Webinar: Mastering Solr 1.4 with Yonik Seeley

2010-03-07 Thread MitchK

Last but not least: When can we view it? :)



which links do i have to follow to understand location based search concepts?

2010-03-07 Thread KshamaPai

Hi,

In order to understand Cartesian tiers and how they contribute to
location-based search: what is happening internally when we give Solr a query
like http://localhost:8983/solr/select/?q=name:Minneapolis AND
_val_:recip(hsin(0.78, -1.6, lat_rad, lon_rad, 3963.205), 1, 1, 0)^100, and
how do the other functions like ghhsin(), sqedist(), and dist() work to
retrieve relevant records?

Can anyone suggest a link that will help me understand all these
concepts better?

Thank you.



Fwd: index merge

2010-03-07 Thread Mark Fletcher
Hi,

I have created 2 identical cores, coreX and coreY (both have different
dataDir values, but their indexes are the same).
coreX - always serves the request when a user performs a search.
coreY - the updates happen to this core, and then I need to synchronize
it with coreX after the update process, so that coreX also has the
latest data in it.  After coreX and coreY are synchronized, both
should be identical again.

For this purpose I tried core merging of coreX and coreY once coreY is
updated with the latest set of data. But I find that coreX then contains
double the record count of coreY.
(coreX = coreX + coreY)

Is there a problem with using the merge concept here? If it is wrong, can someone
please suggest the best approach. I tried the various merges explained in my
previous mail.
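One alternative I have seen mentioned (but have not tried myself) is the
CoreAdmin SWAP action, which simply exchanges coreX and coreY once coreY has
been updated and committed, instead of merging them - roughly:

curl 'http://localhost:8983/solr/admin/cores?action=SWAP&core=coreX&other=coreY'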

Any help is deeply appreciated.

Thanks and Rgds,
Mark.



-- Forwarded message --
From: Mark Fletcher mark.fletcher2...@gmail.com
Date: Sat, Mar 6, 2010 at 9:17 AM
Subject: index merge
To: solr-user@lucene.apache.org
Cc: goks...@gmail.com


Hi,

I have a doubt regarding index merging:-

I have set up 2 cores, COREX and COREY.
COREX - always serves user requests.
COREY - gets updated with the latest values (its dataDir is in a different
location from COREX's).
I tried merging COREX and COREY at the end of COREY getting updated with the
latest data values, so that COREX and COREY both have the latest data and the
user who always queries COREX gets it. Please find below the various
approaches I followed and the commands used.

I tried these merges:-
COREX = COREX and COREY merged
curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=coreX&indexDir=/opt/solr/coreX/data/index&indexDir=/opt1/solr/coreY/data/index'

COREX = COREY and COREY merged
curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=coreX&indexDir=/opt/solr/coreY/data/index&indexDir=/opt1/solr/coreY/data/index'

COREX = COREY and COREA merged (COREA just contains the initial 2 seed
segments.. a dummy core)
curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=coreX&indexDir=/opt/solr/coreY/data/index&indexDir=/opt1/solr/coreA/data/index'

When I check the record count in COREX and COREY, COREX always contains
about double of what COREY has. Is everything fine here and just the record
count is different, or is there something wrong?
Note:- I have only 2 cores here, and I tried the X=X+Y approach, X=Y+Y and
X=Y+A, where A is a dummy index. The record counts have never
matched after the merging is done.

Can someone please help me understand why this record count difference
occurs, and whether there is anything fundamentally wrong in my approach.

Thanks and Rgds,
Mark.


Re: Free Webinar: Mastering Solr 1.4 with Yonik Seeley

2010-03-07 Thread Grant Ingersoll
http://www.lucidimagination.com/blog/2010/02/25/free-webinar-mastering-solr-1-4-with-yonik-seeley/
  

From there, there is a link to listen to the webinar.  

-Grant


On Mar 7, 2010, at 4:25 AM, MitchK wrote:

 
 Last but not least: When can we view it, and when did it take place? :)
 





Re: which links do i have to follow to understand location based search concepts?

2010-03-07 Thread Grant Ingersoll

On Mar 7, 2010, at 7:45 AM, KshamaPai wrote:

 
 Hi,
 
 In order to understand Cartesian tiers and how they contribute to
 location-based search: what is happening internally when we give Solr a query
 like http://localhost:8983/solr/select/?q=name:Minneapolis AND
 _val_:recip(hsin(0.78, -1.6, lat_rad, lon_rad, 3963.205), 1, 1, 0)^100, and
 how do the other functions like ghhsin(), sqedist(), and dist() work to
 retrieve relevant records?
 

This query says to me: find all documents that have the word Minneapolis in
the name field, score them on the term match (i.e. Minneapolis), and then also
add in a boost based on 1 over the haversine distance between the point 0.78,
-1.6 (in radians) and the values contained in the lat_rad and lon_rad fields
(for each document that matched Minneapolis), with that resulting function score
boosted by 100.  In other words, 1 over the distance.
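(For reference, recip(x, m, a, b) evaluates to a/(m*x + b), so
recip(hsin(...), 1, 1, 0) is literally 1/hsin(...): the closer a document is to
that point, the bigger the boost it gets.)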

The other functions are just different ways of calculating distance.  ghhsin() is
the Haversine distance applied to a geohash field; a geohash field encodes
lat/lon into a single field.  Haversine is generally more accurate for
measurements on a sphere.  dist() and sqedist() are the traditional distances used
in a rectangular coordinate system (aka the stuff you learned about way back
when as a kid).  Even Haversine isn't as accurate as one could get, since the
Earth is not actually a sphere.  For most situations, however, it is more than
sufficient.  If you really need the utmost accuracy, you could implement
Vincenty's formula.


 Can anyone suggest a link that will help me understand all these
 concepts better?


Here's the Solr wiki page: http://wiki.apache.org/solr/SpatialSearch

Here's an article I wrote on spatial: 
http://www.ibm.com/developerworks/opensource/library/j-spatial/index.html
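If you want to try the query from the original post from the command line,
something along these lines should work (assuming the stock example port, and
note that the function is quoted so its spaces survive the query parser):

curl -G 'http://localhost:8983/solr/select' \
  --data-urlencode 'q=name:Minneapolis AND _val_:"recip(hsin(0.78, -1.6, lat_rad, lon_rad, 3963.205), 1, 1, 0)"^100' \
  --data-urlencode 'fl=name,score' \
  --data-urlencode 'debugQuery=true'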

Question about fieldNorms

2010-03-07 Thread Siddhant Goel
Hi everyone,

Is the fieldNorm calculation altered by the omitNorms factor? I saw on this
page (http://old.nabble.com/Question-about-fieldNorm-td17782701.html) the
formula for calculation of fieldNorms (fieldNorm =
fieldBoost/sqrt(numTermsForField)).

Does this mean that for a document containing a string like A B C D E in
its field, its fieldNorm would be boost/sqrt(5), and for another document
containing the string A B C in the same field, its fieldNorm would be
boost/sqrt(3)? Is that correct?

If yes, then is *this* what omitNorms affects?

Thanks,

-- 
- Siddhant


Re: Free Webinar: Mastering Solr 1.4 with Yonik Seeley

2010-03-07 Thread MitchK

Sorry, I did not realize that it had already taken place.
Thank you for the link.



More Like This Category Restiction

2010-03-07 Thread Brad Stewart
Hello Everyone,

I am trying to do a more-like-this query on a particular item (ItemID 3),
restricted to a certain category (CatID 1).  This is the query string I am using:

select?q=ItemID:3&mlt=true&mlt.fl=Name,Text&mlt.mindf=1&mlt.mintf=1&fl=ItemID,Name,score&mlt.count=30&wt=php&mlt.boost=true&fq=CatID:1

This works most of the time but sometimes I am getting results that include 
items in different categories.  Is there something wrong with this query?  

Thanks in advance.

Handling and sorting email addresses

2010-03-07 Thread Ian Battersby
Forgive what might seem like a newbie question, but I am struggling desperately
with this.

We have a dynamic field that holds email addresses and we'd like to be able to
sort by it. Obviously, when trying to do this we get an error, as it thinks
the email address is a tokenized field. We've tried a custom field type
using PatternReplaceFilterFactory to specify that @ and . should be replaced
with " AT " and " DOT ", but we just can't seem to get it to work; all the
fields still contain the unparsed email.

We used an example found on the mailing-list for the field type:

<fieldType name="email" class="solr.TextField"
positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory" pattern="\."
replacement=" DOT " replace="all"/>
    <filter class="solr.PatternReplaceFilterFactory" pattern="@"
replacement=" AT " replace="all"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
generateNumberParts="1" catenateWords="0" catenateNumbers="0"
catenateAll="0" splitOnCaseChange="0"/>
  </analyzer>
</fieldType>

.. our dynamic field looks like ..

  <dynamicField name="dynamicemail_*" type="email" indexed="true"
stored="true" multiValued="true"/>

When writing a document to Solr it still seems to write the original email
address (e.g. this.u...@somewhere.com) as opposed to its parsed version (e.g.
this DOT user AT somewhere DOT com). Can anyone help?

We are running version 1.4 but have even tried the nightly build in an
attempt to solve this problem.

Thanks.



Re: Error 400 - By search with exclamation mark ... ?! PatternReplaceFilterFactory ?

2010-03-07 Thread stocki

hello.


Yes, this works without any exception.

But what does this tell me?




Koji Sekiguchi-2 wrote:
 
 stocki wrote:
 Hello again ;)

 I get this error message when searching for this: hallo !
 HTTP request: select/?q=hallo+!&version=2.2&start=0&rows=10&indent=on

 SCHWERWIEGEND: org.apache.solr.common.SolrException:
 org.apache.lucene.queryParser.ParseException: Cannot parse 'tom !':
 Encountered "<EOF>" at line 1, column 5.
 Was expecting one of:
     "(" ...
     "*" ...
     <QUOTED> ...
     <TERM> ...
     <PREFIXTERM> ...
     <WILDTERM> ...
     "[" ...
     "{" ...
     <NUMBER> ...
     <TERM> ...
     "*" ...

 

 How can I exclude these patterns -- ! , . : - !§$%/()=? -- during my
 index and search requests?

   
 Can you try placing a backslash before the exclamation mark?
 
 http://localhost:8983/solr/select?q=hallo+\!
 
 Koji
 
 -- 
 http://www.rondhuit.com/en/
 
 
 




Re: Error 400 - By search with exclamation mark ... ?! PatternReplaceFilterFactory ?

2010-03-07 Thread Ahmet Arslan

 
 hello.
 
 
 Yes, this works without any exception.
 
 But what does this tell me?

! is a special character that is part of the query syntax; it is the NOT
operator. You need to escape it if you want to search for it.

http://lucene.apache.org/java/3_0_1/queryparsersyntax.html#Escaping Special 
Characters
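From the command line the escaped form would be URL-encoded roughly like this
(%5C is the backslash, %21 the exclamation mark):

curl 'http://localhost:8983/solr/select?q=hallo+%5C%21'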


  


Re: Error 400 - By search with exclamation mark ... ?! PatternReplaceFilterFactory ?

2010-03-07 Thread MitchK

According to Ahmet Arslan's post:
Solr is expecting a term after the !, because it is an operator.
If you escape it, it becomes part of the queried string.



Re: Handling and sorting email addresses

2010-03-07 Thread MitchK

Ian,

did you have a look at Solr's admin analysis.jsp?
If everything on the analysis page looks fine, then you have misunderstood
Solr's schema.xml file.

You've set two attributes in your schema.xml:
stored = true
indexed = true

What you get as a response is the stored field value.
The stored field value is the original field value, without any
modifications.
However, Solr uses the indexed field value to query and sort your data.
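
As a side note (an untested sketch): since sorting happens on the indexed
terms, a sortable variant of the field also has to produce exactly one token
per document - for example a KeywordTokenizerFactory instead of the
StandardTokenizerFactory, keeping the same replace filters (the field type
name below is only illustrative):

<fieldType name="email_sort" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory" pattern="\." replacement=" DOT " replace="all"/>
    <filter class="solr.PatternReplaceFilterFactory" pattern="@" replacement=" AT " replace="all"/>
  </analyzer>
</fieldType>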

Kind regards
- Mitch
 

Ian Battersby wrote:
 
 Forgive what might seem like a newbie question, but I am struggling
 desperately with this.
 
 We have a dynamic field that holds email addresses and we'd like to be able
 to sort by it. Obviously, when trying to do this we get an error, as it thinks
 the email address is a tokenized field. We've tried a custom field type
 using PatternReplaceFilterFactory to specify that @ and . should be replaced
 with " AT " and " DOT ", but we just can't seem to get it to work; all the
 fields still contain the unparsed email.
 
 We used an example found on the mailing-list for the field type:
 
 <fieldType name="email" class="solr.TextField"
 positionIncrementGap="100">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.PatternReplaceFilterFactory" pattern="\."
 replacement=" DOT " replace="all"/>
     <filter class="solr.PatternReplaceFilterFactory" pattern="@"
 replacement=" AT " replace="all"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
 generateNumberParts="1" catenateWords="0" catenateNumbers="0"
 catenateAll="0" splitOnCaseChange="0"/>
   </analyzer>
 </fieldType>
 
 .. our dynamic field looks like ..
 
   <dynamicField name="dynamicemail_*" type="email" indexed="true"
 stored="true" multiValued="true"/>
 
 When writing a document to Solr it still seems to write the original email
 address (e.g. this.u...@somewhere.com) as opposed to its parsed version (e.g.
 this DOT user AT somewhere DOT com). Can anyone help?
 
 We are running version 1.4 but have even tried the nightly build in an
 attempt to solve this problem.
 
 Thanks.
 
 
 




Re: Can't delete from curl

2010-03-07 Thread Paul Tomblin
On Tue, Mar 2, 2010 at 1:22 AM, Lance Norskog goks...@gmail.com wrote:

 On Mon, Mar 1, 2010 at 4:02 PM, Paul Tomblin ptomb...@xcski.com wrote:
  I have a schema with a field named category (<field name="category"
  type="string" stored="true" indexed="true"/>).  I'm trying to delete
  everything with a certain value of category with curl:
 
  I send:
 
  curl http://localhost:8080/solrChunk/nutch/update -H "Content-Type:
  text/xml" --data-binary '<delete><query>category:Banks</query></delete>'
 
  Response is:
 
  <?xml version="1.0" encoding="UTF-8"?>
  <response>
  <lst name="responseHeader"><int name="status">0</int><int
  name="QTime">23</int></lst>
  </response>
 
  I send
 
  curl http://localhost:8080/solrChunk/nutch/update -H "Content-Type:
  text/xml" --data-binary '<commit/>'
 
  Response is:
 
  <?xml version="1.0" encoding="UTF-8"?>
  <response>
  <lst name="responseHeader"><int name="status">0</int><int
  name="QTime">1914</int></lst>
  </response>
 
  but when I go back and query, it shows all the same results as before.
 
  Why isn't it deleting?

 Do you query with curl also? If you use a web browser, Solr by default
 uses http caching, so your browser will show you the old result of the
 query.


I think you're right about that.  I tried using curl, and it did go to zero.
 But now I've got a different problem: sometimes when I try to commit, I get
a NullPointerException:


curl http://xen1.xcski.com:8080/solrChunk/nutch/select -H "Content-Type:
text/xml" --data-binary '<commit/>'

<html><head><title>Apache Tomcat/6.0.20 - Error report</title><style>...</style>
</head><body><h1>HTTP Status 500 - null

java.lang.NullPointerException
at java.io.StringReader.<init>(StringReader.java:33)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:173)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:78)
at org.apache.solr.search.QParser.getQuery(QParser.java:131)
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:89)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:174)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
at java.lang.Thread.run(Thread.java:619)
</h1><HR size="1" noshade="noshade"><p><b>type</b> Status
report</p><p><b>message</b> <u>null

java.lang.NullPointerException
at java.io.StringReader.<init>(StringReader.java:33)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:173)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:78)
at org.apache.solr.search.QParser.getQuery(QParser.java:131)
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:89)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:174)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)


-- 
http://www.linkedin.com/in/paultomblin
http://careers.stackoverflow.com/ptomblin


Re: Question about fieldNorms

2010-03-07 Thread Jay Hill
Yes, if omitNorms=true, then no lengthNorm calculation will be done, and the
fieldNorm value will be 1.0, and lengths of the field in question will not
be a factor in the score.

To see an example of this you can do a quick test. Add two text fields,
and set omitNorms on one of them:

   <field name="foo" type="text" indexed="true" stored="true"/>
   <field name="bar" type="text" indexed="true" stored="true"
omitNorms="true"/>

Index a doc with the same value for both fields:
  <field name="foo">1 2 3 4 5</field>
  <field name="bar">1 2 3 4 5</field>

Set debugQuery=true and do two queries: q=foo:5   q=bar:5

In the explain section of the debug output, note that the fieldNorm value
for the foo query is this:

0.4375 = fieldNorm(field=foo, doc=1)

and the value for the bar query is this:

1.0 = fieldNorm(field=bar, doc=1)

A simplified description of how the fieldNorm value is calculated: fieldNorm =
lengthNorm * documentBoost * documentFieldBoosts

and the lengthNorm is calculated like this: lengthNorm  =
1/(numTermsInField)**.5
[note that the value is encoded as a single byte, so there is some precision
loss]
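For the five-term example above that works out to lengthNorm = 1/sqrt(5) ≈ 0.4472,
and the one-byte encode/decode round trip turns that into the 0.4375 you see in
the explain output above.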

When omitNorms=true no norm calculation is done, so fieldNorm will always be
1.0 for those fields.

You can also use the Luke utility to view the document in the index, and it
will show that there is a norm value for the foo field, but not the bar
field.

-Jay
http://www.lucidimagination.com


On Sun, Mar 7, 2010 at 5:55 AM, Siddhant Goel siddhantg...@gmail.com wrote:

 Hi everyone,

 Is the fieldNorm calculation altered by the omitNorms factor? I saw on this
 page (http://old.nabble.com/Question-about-fieldNorm-td17782701.html) the
 formula for calculation of fieldNorms (fieldNorm =
 fieldBoost/sqrt(numTermsForField)).

 Does this mean that for a document containing a string like A B C D E in
 its field, its fieldNorm would be boost/sqrt(5), and for another document
 containing the string A B C in the same field, its fieldNorm would be
 boost/sqrt(3). Is that correct?

 If yes, then is *this* what omitNorms affects?

 Thanks,

 --
 - Siddhant



applying SOLR-64 and SOLR-792

2010-03-07 Thread Seffie Schwartz
Hi -

I am following the directions from wiki.apache.org/solr/HierarchicalFaceting.
I checked out revision 920167.
I downloaded the patches SOLR-64 and SOLR-792.
I applied the SOLR-64 patch - 3 patches worked but the patch to schema.xml did
not.

I then tried applying SOLR-792 following the directions, with patch -p1 <
SOLR-792.patch.  This told me that perhaps I had -p set wrong.
Please help.



[ANN] Zoie Solr Plugin - Zoie Solr Plugin enables real-time update functionality for Apache Solr 1.4+

2010-03-07 Thread Ian Holsman


I just saw this on Twitter and thought you guys would be interested. I
haven't tried it, but it looks interesting.


http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin

Thanks for the RT Shalin!


Re: example solr xml working fine but my own xml files not working

2010-03-07 Thread venkatesh uruti

Dear Eric,

Please find below the steps that I executed.

I am following the same structure as mentioned by you, and I checked the results
on the admin page by clicking the search button; the samples are working fine.

E.g.: I added monitor.xml and searched for video; it displays results - the
search content is displayed properly.

Let me explain the problem I am facing:

Step 1: I started Apache Tomcat.

Step 2: Indexed the data:
   java -jar post.jar myfile.xml
  
Here is my XML content:

<add>

 <doc>
  <field name="id">1</field>
  <field name="name">Youth to Elder</field>
  <field name="Author">Integrated Research Program</field>
  <field name="Year">2009</field>
  <field name="Publisher">First Nation</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Strategies</field>
  <field name="Author">Implementation Committee</field>
  <field name="Year">2001</field>
  <field name="Publisher">Policy</field>
 </doc>

</add>
Step 4: I ran

 java -jar post.jar myfile.xml


Output of the above:

SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8,
other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file curnew.xml
SimplePostTool: FATAL: Solr returned an error: Bad Request

Please help me with this.




Re: Free Webinar: Mastering Solr 1.4 with Yonik Seeley

2010-03-07 Thread Janne Majaranta
Do I need a U.S. phone number to view the recording / download the slides?
The registration form complains about an invalid area code...

-Janne