If you don't have any custom components, you can probably just use
your entire Solr home dir as is -- just change the solr.war. (You can't
just copy the data dir on its own, though; you need to use the same configs.)
Test it out, and note the Upgrading notes in CHANGES.txt for
1.3, 1.4, and
thanks guys
I will try the trunk
as for unpacking the war and changing the Lucene version... I am not an expert,
and this may get complicated for me; maybe over time,
when I am comfortable.
Mambe Churchill Nanje
237 33011349,
AfroVisioN Founder, President, CEO
http://www.afrovisiongroup.com |
Try running the svn co command from the console (in case you're
running a UNIX-like OS). Then add the following files for Solr (.project and
.classpath) into your solr folder:
http://markmail.org/message/yb5qgeamosvdscao
Then do an import as an existing project in Eclipse, and you're done.
On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler impalah...@googlemail.com wrote:
Hi
I am a newbie and I am trying to run Solr in Eclipse.
From this URL
http://wiki.apache.org/solr/HowToContribute#Development_Environment_Tips
there is a Subclipse example:
I use Team - Share Project and this
Sorry to reply to myself, but I just wanted to see if anyone saw
this/had ideas why MBeans would be removed/re-added/removed.
I tried looking for this in the code but was unable to grok what
triggers bean removal.
Any hints?
On Thu, Jan 27, 2011 at 3:30 PM, matthew sporleder
Good Morning,
I am planning to get started on indexing MS Office using Apache Solr -
can someone please direct me where I should start?
Thanks,
Sai Thumuluri
http://wiki.apache.org/solr/ExtractingRequestHandler
On Wednesday 02 February 2011 16:49:12 Thumuluri, Sai wrote:
Good Morning,
I am planning to get started on indexing MS Office using Apache Solr -
can someone please direct me where I should start?
Thanks,
Sai Thumuluri
--
Markus
http://wiki.apache.org/solr/ExtractingRequestHandler
Regards,
Jayendra
On Wed, Feb 2, 2011 at 10:49 AM, Thumuluri, Sai
sai.thumul...@verizonwireless.com wrote:
Good Morning,
I am planning to get started on indexing MS Office using Apache Solr -
can someone please direct me where I should
Hi,
have a look at Solr's ExtractingRequestHandler:
http://wiki.apache.org/solr/ExtractingRequestHandler
-Sascha
On 02.02.2011 16:49, Thumuluri, Sai wrote:
Good Morning,
I am planning to get started on indexing MS Office using Apache Solr -
can someone please direct me where I should
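For a quick first test of the ExtractingRequestHandler, the wiki shows posting a file straight to the extract endpoint with curl; a minimal sketch, assuming a local Solr on port 8983 with the default example config (the file name and literal id here are made-up values):

```text
curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" \
     -F "myfile=@report.doc"
```

Tika (bundled with the handler) detects the MS Office format and extracts the text; the literal.* parameters let you attach your own field values alongside the extracted content.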
take a look at DIH
http://wiki.apache.org/solr/DataImportHandler
I always use New...Other...SVN...Checkout Projects from SVN
Thanks, that seemed to work perfectly :-)
On Wed, Feb 2, 2011 at 12:43 PM, Robert Muir rcm...@gmail.com wrote:
On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler impalah...@googlemail.com
wrote:
Hi
I am a newbie and I am trying to
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Dear all,
I got an exception when querying the index within Solr. It tells me that too
many files are open. How can I handle this problem?
Thanks so much!
LB
[java] org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: Too many open files
[java]         at
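A common first remedy on Linux-like systems is to raise the per-process open-file limit for the user running Solr (the exact commands and config paths vary by distribution; the number below is just an example):

```text
ulimit -n            # show the current limit for this shell
ulimit -n 8192       # raise it for this shell; persistent changes usually
                     # go in /etc/security/limits.conf
```

Lowering mergeFactor or enabling the compound file format in solrconfig.xml also reduces the number of index files Solr keeps open.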
On Jan 28, 2011, at 5:38 PM, Andreas Kemkes wrote:
Just getting my feet wet with the text extraction using both schema and
solrconfig settings from the example directory in the 1.4 distribution, so I
might miss something obvious.
Trying to provide my own title (and discarding the one
I always use New...Other...SVN...Checkout Projects from SVN
And how do you run Jetty on the example folder from within Eclipse?
Thanks for your help
Ericz
On Wed, Feb 2, 2011 at 12:43 PM, Robert Muir rcm...@gmail.com wrote:
On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler impalah...@googlemail.com
wrote:
Hi
In http://wiki.apache.org/solr/SpatialSearch
there is an example of a bbox filter and a geodist function.
Is it possible to do a bbox filter and sort by distance - combine the two?
Thanks
Ericz
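Per the SpatialSearch wiki syntax, the bbox filter and the geodist sort can share the same pt/sfield/d parameters in one request, which combines the two; a sketch (the field name and coordinates are made-up values):

```text
q=*:*&sfield=store&pt=45.15,-93.85&d=5
  &fq={!bbox}
  &sort=geodist() asc
```

The {!bbox} filter restricts results to the bounding box around pt, and geodist() then orders the survivors by distance from the same point.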
On Jan 30, 2011, at 2:47 AM, Dennis Gearon wrote:
I would love it if I could use 'latitude' and 'longitude' in all places. But it
seems that the Solr spatial plugin for 1.4 only works with lat/lng. Any way to
change that?
What 1.4 plugin are you referring to?
Dennis Gearon
On Wed, Feb 2, 2011 at 11:15 AM, Eric Grobler impalah...@googlemail.com wrote:
I always use New...Other...SVN...Checkout Projects from SVN
And how do you run Jetty on the example folder from within Eclipse?
you can always go to the commandline and use the usual techniques,
e.g. ant run-example, or java
Sorry to re-post, but can anyone help out on the question below of dynamic
custom results filtering using CommonsHttpSolrServer? If anyone is doing
this sort of thing, any suggestions would be much appreciated.
Thanks!
Dave
On 1/31/11 2:47 PM, Dave Troiano david.troi...@rovicorp.com wrote:
I only use eclipse as a fancy text editor!
Eclipse will feel insulted :-)
I will just try to create hot keys to start/stop jetty manually.
Thanks for your feedback
Regards
Ericz
On Wed, Feb 2, 2011 at 4:26 PM, Robert Muir rcm...@gmail.com wrote:
On Wed, Feb 2, 2011 at 11:15 AM, Eric Grobler
Hi,
I have a question on filtering on a multivalued attribute. Is there a way to
filter a multivalued attribute based on a particular value inside that
attribute?
Consider the below example.
<arr name="relationship">
  <str>DEF_BY</str>
  <str>BEL_TO</str>
</arr>
I want to do a search which returns the result which
Hello list,
I've come across a few Google matches that indicate that Solr-based servers implement
the Open Archives Initiative's Protocol for Metadata Harvesting (OAI-PMH).
Is there something made to be re-usable that would be an add-on to Solr?
thanks in advance
paul
Hello,
I have the following definitions in my schema.xml:
<fieldType name="testedgengrams" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3"
            maxGramSize="15"/>
  </analyzer>
  <analyzer>
About this:
<copyField source="text_ngrams" dest="text"/>
The n-grams are going to be indexed in the field text_ngrams, not in
text. For the field text, Solr will apply that field's own analysis (which I
guess doesn't have n-grams). You have to search on the text_ngrams field,
with something like text_ngrams:hippo
Hi Paul, I don't fully understand what you want to do. The way SolrJ is
intended to be used, I think, is from a client application (outside Solr). If
what you want is something like what's done with Velocity, I think you could
implement a response writer that renders the JSP and sends it on the
Hi,
I don't know whether it fits your need, but we are building a tool
based on Drupal (eXtensible Catalog Drupal Toolkit), which can harvest
with OAI-PMH and index the harvested records into Solr. The records are
harvested, processed, and stored in MySQL, then we index them into
Solr. We
Peter,
I'm afraid your service is harvesting, while I am looking for a PMH provider
service.
Your project appeared early in the Google matches.
paul
On 2 Feb 2011 at 20:46, Péter Király wrote:
Hi,
I don't know whether it fits your need, but we are building a tool
based on
Hi Paul,
yes, you are right, the project is about harvesting, not about being harvestable.
Péter
2011/2/2 Paul Libbrecht p...@hoplahup.net:
Peter,
I'm afraid your service is harvesting, while I am looking for a PMH
provider service.
Your project appeared early in the Google matches.
The trick is that you can't just have a generic black box OAI-PMH
provider on top of any Solr index. How would it know where to get the
metadata elements it needs, such as title, or last-updated date, etc.
Any given solr index might not even have this in stored fields -- and a
given app might
I already replied to the original poster off-list, but it seems that it may be
worth weighing in here as well...
The next release of VuFind (http://vufind.org) is going to include OAI-PMH
server support. As you say, there is really no way to plug OAI-PMH directly
into Solr... but a tool like
Hello,
Let me give a brief description of my scenario.
Today I am only using Lucene 2.9.3. I have an index of 30 million documents
distributed across three machines, each machine with 6 HDs (15k rpm).
The server queries the search index using the remote search class. And each
machine is made to
Hi,
I'm using SOLR 1.4.1 and have a rather large index with 800+M docs.
Until now we have, erroneously I think, indexed a long field with the type:
<fieldType name="long" class="solr.TrieLongField" precisionStep="0"
           omitNorms="true" positionIncrementGap="0"/>
Now the range queries have become slow as
On Wed, Feb 2, 2011 at 3:46 PM, Dan G diser...@yahoo.se wrote:
My question is whether it would be possible to just change the field to the
preferred type tlong with a precisionStep of 8?
Would this change be compatible with my indexed data, or should I re-index
the data (a pain with 800+M docs
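For reference, the example schema shipped with Solr defines a tlong type along these lines. Since precisionStep controls which extra trie terms get written at index time, documents indexed with precisionStep="0" won't carry the terms a larger step expects, so a re-index is generally needed before the fast range queries behave:

```text
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
```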
Hi, I'm having a weirdness with indexing multiple terms to a single field
using a copyField. An example:
For document A
field:contents_1 is a multivalued field containing cat, dog and duck
field:contents_2 is a multivalued field containing cat, horse, and
flower
For document B
field:contents_1
On closer review, I am noticing that the fieldNorm is what is killing
document A.
If I re-index with omitNorms=true, will this problem be solved?
On Wed, Feb 2, 2011 at 4:54 PM, Martin J martinj.eng...@gmail.com wrote:
Hi, I'm having a weirdness with indexing multiple terms to a single field
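For background on why the norm punishes document A: Lucene's classic similarity bakes a length normalization of roughly 1/sqrt(number of terms in the field) into each document's fieldNorm (and then compresses it into a single byte, losing precision), so a field that receives more copied-in values scores lower on matches; omitNorms=true drops that factor entirely. A minimal sketch of the formula only, not the actual Lucene API:

```java
public class LengthNormSketch {
    // Classic Lucene (DefaultSimilarity) length norm: 1 / sqrt(numTerms).
    // More terms in a field => smaller norm => lower score for matches in it.
    static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
    }

    public static void main(String[] args) {
        // A field holding more tokens (e.g. after several copyField sources
        // land in it) gets a smaller norm than a shorter one.
        System.out.println(lengthNorm(3));
        System.out.println(lengthNorm(6));
    }
}
```

This is why document A, whose copyField target accumulates more values, loses to document B even when both match the same term.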
Does something like this work to extract dates, phone numbers, addresses across
international formats and languages?
Or, just in the plain ol' USA?
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a better
idea to learn from others' mistakes, so you do not have to make them yourself.
On 2/2/2011 5:19 PM, Dennis Gearon wrote:
Does something like this work to extract dates, phone numbers, addresses across
international formats and languages?
Or, just in the plain ol' USA?
What are you talking about? There is nothing discussed in this thread
that does any 'extracting' of
I would think OAI certainly has a trans-national format for dates.
And that probably dovetails well with Solr's own date format.
But all of that is non-user-oriented, so... no culture dependency in principle.
paul
On 2 Feb 2011 at 23:19, Dennis Gearon wrote:
Does something like this work to
Hi,
I am a newbie to Apache Solr.
We are using ContentStreamUpdateRequest to insert into Solr. For example:
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
req.addContentStream(stream);
req.addContentStream(literal.name,
Hello,
I have the following definitions in my schema.xml:
<fieldType name="testedgengrams" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3"
            maxGramSize="15"/>
  </analyzer>
  <analyzer>
Yes, I have tried searching on text_ngrams as well and it produces no results.
On a related note, since I have <copyField source="text_ngrams"
dest="text"/>, wouldn't the n-grams produced by the text_ngrams field
definition also be available within the text field?
2011/2/2 Tomás Fernández Löbbe
I guess I didn't understand 'meta data'. That's why I asked the question.
Dennis Gearon
2011/2/2 Gustavo Maia gust...@goshme.com
Hello,
Let me give a brief description of my scenario.
Today I am only using Lucene 2.9.3. I have an index of 30 million
documents distributed across three machines, each machine with 6 HDs (15k
rpm).
The server queries the search index using the
For time-of-day fields, NOT unix timestamps/dates, what is the best way to do
that?
I can think of seconds since the beginning of the day as an integer
OR
string
Any other ideas? Assume that I'll be using range queries. TIA.
Dennis Gearon
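Seconds-since-midnight in an int field works well for range queries, since the encoding is order-preserving. A small sketch of the idea in plain Java (not Solr API; the class and helper names are made up):

```java
import java.time.LocalTime;

public class TimeOfDayField {
    // Encode a time of day as seconds since midnight (0..86399),
    // an order-preserving int encoding that range queries can use directly.
    static int toSeconds(String hhmmss) {
        return LocalTime.parse(hhmmss).toSecondOfDay();
    }

    // Decode back for display.
    static String fromSeconds(int s) {
        return LocalTime.ofSecondOfDay(s).toString();
    }

    public static void main(String[] args) {
        // A range query for "between 09:00 and 17:30" then becomes
        // something like time_of_day:[32400 TO 63000] on an int/tint field.
        System.out.println(toSeconds("09:00:00"));
        System.out.println(toSeconds("17:30:00"));
    }
}
```

A string field would sort lexically, which also happens to order zero-padded "HH:MM:SS" values correctly, but the int encoding is smaller and plays better with trie range queries.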
Got my API to input into both the database and the Solr instance, search
geographically/chronologically in Solr.
Next is Update and Delete. And then .. and then ... and then ..
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is
So I'm trying to update a single entity in my index using DataImportHandler.
http://solr:8983/solr/dataimport?command=full-importentity=games
It ends near-instantaneously without hitting the database at all, apparently.
Status shows:
<str name="Total Requests made to DataSource">0</str>
<str
If you're using the DIH you can configure it however you want. Here is a
snippet of my config. Note the DateTimeTransformer.
<dataConfig>
  <dataSource type="JdbcDataSource"
              name="bleh"
              driver="net.sourceforge.jtds.jdbc.Driver"
On Thu, Feb 3, 2011 at 6:08 AM, Jon Drukman j...@cluttered.com wrote:
So I'm trying to update a single entity in my index using DataImportHandler.
http://solr:8983/solr/dataimport?command=full-importentity=games
It ends near-instantaneously without hitting the database at all, apparently.
Dear all,
I am trying to implement an autocomplete system for research, but I am stuck
on some problems that I can't solve.
Here is my problem:
given text like:
"the cat is black", I want to explore all 1-grams to 8-grams for all the
text that is passed:
the, cat, is, black, the cat, cat is,
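Inside Solr this kind of word-level n-gram is typically produced at analysis time with solr.ShingleFilterFactory; outside Solr, enumerating the grams is only a few lines. A sketch of the enumeration itself in plain Java (the class and helper names are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WordNGrams {
    // Enumerate all word n-grams of the text, from minN words up to maxN words.
    static List<String> ngrams(String text, int minN, int maxN) {
        String[] words = text.trim().split("\\s+");
        List<String> out = new ArrayList<>();
        for (int n = minN; n <= Math.min(maxN, words.length); n++) {
            for (int i = 0; i + n <= words.length; i++) {
                out.add(String.join(" ", Arrays.copyOfRange(words, i, i + n)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "the cat is black" yields: the, cat, is, black, the cat, cat is,
        // is black, the cat is, cat is black, the cat is black
        System.out.println(ngrams("the cat is black", 1, 8));
    }
}
```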
This is posted as an enhancement in SOLR-2345.
I am willing to work on it, but I am stuck. I would like to loop through the
lat/long values when they are stored in a multiValued list, but I cannot
figure out how to do that. For example:
sort=geodist() asc
This should grab the closest
Use analysis.jsp to see how your analysis is going.
You can also see the parsed queries by adding the parameter debugQuery=on to the
request URL.
-
Thanx:
Grijesh
http://lucidimagination.com
Increase the OS limit on the maximum number of open files (e.g. ulimit -n on
Linux); the default is set fairly low on some OSes.
-
Thanx:
Grijesh
http://lucidimagination.com
Use analysis.jsp to see what is happening at index time and query time with your
input data. You can use highlighting to see if a match is found.
-
Thanx:
Grijesh
http://lucidimagination.com
Nevermind.. got the details from here..
http://wiki.apache.org/solr/ExtractingRequestHandler
Thanks..