Yes, I don't know how to set solr.home in GlassFish on CentOS.
I tried to configure solr.home, but the error log says: looking for
solr.xml: /var/deploy/solr/solr.xml
markrmiller wrote:
What have you tried? Deploying the Solr war should be pretty
straightforward. The main issue is
hi,
I need to index around 10 million records with Solr.
I have nearly 2 lakh (200,000) records, so I wrote a program that loops over
them until it reaches 10 million.
I specified 20 fields in the schema.xml file; the unique field I set
was a currentTimeStamp field.
So, when I run the loader program (which loads xml
Hi,
I am implementing linguistic variations in the Solr search engine. I want to
implement this for US/UK/CA/AU English.
e.g. Color (US) = Colour (UK)
When a user searches for either word, both results should appear.
I don't want to use synonym.txt as this will make synonym.txt very long.
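If the synonym route is used after all, the file can at least stay compact by listing each variant group on one line and expanding at index time via a SynonymFilterFactory with expand="true". A sketch (the file name and entries are my own illustration, not from the thread):

```
# spelling_variants.txt -- referenced from a SynonymFilterFactory
# with expand="true" in the index-time analyzer chain
color,colour
analyze,analyse
center,centre
```

With index-time expansion, a document containing either spelling gets both tokens indexed, so query-time needs no synonym handling at all.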
I think that to get the best results you need some kind of natural language
processing.
I'm trying to do so using UIMA, but I need to integrate it with Solr, as I
explain in this post:
http://www.nabble.com/Solr-and-UIMA-tc24567504.html
prerna07 wrote:
Hi,
I am implementing Lemmatisation in
I am using a nightly build from mid-June.
Noble Paul നോബിള് नोब्ळ्-2 wrote:
it is not normal to get the inform() called twice for a single object.
which version of solr are you using?
On Mon, Jul 20, 2009 at 7:17 PM, Marc Sturlese marc.sturl...@gmail.com
wrote:
Hey there,
I have
On the slave this command would not work well. The indexversion is not
the actual index version; it is the current replicatable index
version.
why do you call that API directly?
On Tue, Jul 21, 2009 at 12:53 AM, solr jay solr...@gmail.com wrote:
If you ask for the index version of a slave
On Fri, 17 Jul 2009 16:04:24 +0200, Anders Melchiorsen
m...@cup.kalibalik.dk wrote:
On Thu, 16 Jul 2009 10:56:38 -0400, Erik Hatcher
e...@ehatchersolutions.com wrote:
One trick worth noting is the FieldAnalysisRequestHandler can provide
offsets from external text, which could be used for
On Jul 20, 2009, at 6:43 AM, JCodina wrote:
D: Break things down. The CAS would only produce XML that Solr can
process.
Then different Tokenizers can be used to deal with the data in the
CAS. The
main point is that the XML has the doc and field labels of Solr.
I just committed the
http://wiki.apache.org/solr/ExtractingRequestHandler contains several
examples of posting files to Solr for Tika.
FYI, I don't know if PST files are supported by Tika.
-Grant
On Jul 21, 2009, at 4:38 AM, Brindha wrote:
Hi,
How do I index MS Outlook (.pst) files into Solr with Tika? I have posted the
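For reference, the wiki's posting examples assume the Solr Cell / Tika handler is registered in solrconfig.xml along these lines (a sketch; the handler name and the field mapping are typical defaults, not something stated in this thread):

```xml
<!-- solrconfig.xml: register the ExtractingRequestHandler so that
     binary documents can be posted to /update/extract for Tika parsing -->
<requestHandler name="/update/extract"
                class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <!-- map Tika's extracted body text into an indexed field -->
    <str name="fmap.content">text</str>
  </lst>
</requestHandler>
```

Whether .pst archives specifically can be parsed still depends on Tika's format support, as Grant notes above.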
Sounds like you need a TokenFilter that does lemmatisation. I don't
know of any open ones off hand, but I haven't looked all that hard.
On Jul 21, 2009, at 4:25 AM, prerna07 wrote:
Hi,
I am implementing Lemmatisation in Solr, which means if user looks for
Mouse then it should display
Oh, in case the index data is corrupted on a slave, I want to download the entire
index from the master. During the download, I want the slave to be out of service,
and to put it back after it finishes. I was trying to figure out how to determine
when the download is done. Right now, I am calling
Hello, Grant,
there are two ways to implement this: one is payloads, and the other is
multiple tokens at the same position.
Each of them can be useful; let me explain the way I think they can be used.
Payloads: every token has extra information that can be used in the
processing, for
Does anyone know the differences between these two?
From the schema.xml
We have:
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory"
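For context, a fieldType usually carries both an index-time and a query-time analyzer chain, and the difference between the two `analyzer type=` blocks is exactly that split. A complete definition typically looks something like this (a sketch modeled on the stock example schema; the synonyms file name is an assumption):

```xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <!-- index-time chain: synonyms are expanded while documents are indexed -->
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- query-time chain: no synonym expansion, so query terms are matched
       against the already-expanded index -->
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

If only a single analyzer element is given with no type attribute, it is used for both indexing and querying.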
Hi Francis,
The names of the synonym files are arbitrary, but whatever you call them needs to
match what you have in solrconfig.xml
If you are referring to them, then they should probably exist.
If you are referring to them, then they should probably be non-empty.
But think this through a bit,
We're in the process of building a log searcher application.
In order to reduce the index size to improve the query performance,
we're exploring the possibility of having:
1. One field for each log line with 'indexed=true stored=false'
that will be used for searching
2. Another field
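The two-field layout being described might look like this in schema.xml (a sketch; the field names, and the guess that the second field is stored-only, are mine):

```xml
<!-- searched but not stored: keeps the stored portion of the index small -->
<field name="logline_search" type="text"   indexed="true"  stored="false"/>
<!-- stored but not indexed: used only to return the raw log line -->
<field name="logline_raw"    type="string" indexed="false" stored="true"/>
```

Searches would run against logline_search while results display logline_raw, so each log line is tokenized once and stored once.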
There are for-money solutions to this.
On Tue, Jul 21, 2009 at 10:04 AM, Grant Ingersoll gsing...@apache.org wrote:
Sounds like you need a TokenFilter that does lemmatisation. I don't know of
any open ones off hand, but I haven't looked all that hard.
On Jul 21, 2009, at 4:25 AM, prerna07
It will depend on how much total volume you have. If you are discussing
millions and millions of records, I'd say use multicore and shards.
On Wed, Jul 8, 2009 at 5:25 AM, Tim Sell trs...@gmail.com wrote:
Hi,
I am wondering if it is common to have just one very large index, or
multiple
Trying to install SOLR for a project. Currently we have a 10.1.3 Oracle J2EE
install. I believe it satisfies the SOLR requirements. I have the war file
deployed and it appears to be ½ working, but have errors with the .css file
when hitting the admin page.
Anyone else been successful
Hi,
I have the following tag in my xml files:
<field name="timestamp">2009-05-06</field>
When I try posting the file I get this error:
FATAL: Solr returned an error: Invalid_Date_String20090506
My schema.xml file has this:
<field name="timestamp" type="date" indexed="true" stored="true"
       default="NOW"/>
Hi
Dates must be in ISO 8601 format:
http://lucene.apache.org/solr/api/org/apache/solr/schema/DateField.html
e.g 1995-12-31T23:59:59Z
Hope this helps
Andrew McCombe
2009/7/21 Mick England mic...@mac.com
Hi,
I have the following tag in my xml files:
field
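For anyone generating these timestamp values programmatically, a minimal sketch in plain Java (the class and method names are mine) that produces the UTC ISO 8601 form DateField expects:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SolrDateFormat {
    // Format a Date as yyyy-MM-dd'T'HH:mm:ss'Z' in UTC, the form
    // Solr's DateField accepts (e.g. 1995-12-31T23:59:59Z).
    static String toSolrDate(Date d) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(d);
    }

    public static void main(String[] args) {
        // Epoch zero is 1970-01-01T00:00:00Z in UTC.
        System.out.println(toSolrDate(new Date(0L)));
    }
}
```

Note the explicit UTC time zone: without it, SimpleDateFormat uses the JVM's default zone and the literal 'Z' would misrepresent a local time as UTC.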
Thanks for the quick response. That worked for me.
Andrew McCombe wrote:
Dates must be in ISO 8601 format:
http://lucene.apache.org/solr/api/org/apache/solr/schema/DateField.html
e.g 1995-12-31T23:59:59Z
What are the errors you see?
On Tue, Jul 21, 2009 at 3:01 PM, Hall, David dh...@vermeer.com wrote:
Trying to install SOLR for a project. Currently we have a 10.1.3 Oracle
J2EE install. I believe it satisfies the SOLR requirements. I have the
war file deployed and it appears to be ½
Jul 20, 2009 2:45:34 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.StackOverflowError
at java.util.Properties.getProperty(Properties.java:774)
at com.evermind.server.ApplicationServerSystemProperties.getProperty(ApplicationServerSystemProperties.java:43)
at
We are experiencing random slowness on certain queries. I have been unable
to diagnose what the issue is. We are using SOLR 1.4 and 99.99% of queries
return in under 250 ms. The remaining queries are returning in 2-5 seconds
for no apparent reason. There does not seem to be any commonality
: I'd like to take keywords in my documents, and expand them as synonyms; for
: example, if the document gets annotated with a keyword of 'sf', I'd like
: that to expand to 'San Francisco'. (San Francisco,San Fran,SF is a line in
: my synonyms.txt file).
:
: But I also want to be able to
Any Lucene analyzer that has a no arg constructor can be used in Solr,
just specify it by full class name (there is an example of this in the
example schema.xml)
Any Tokenizer/TokenFilter that exists in the Lucene distribution also gets
a Factory in Solr (unless someone forgets) you can use
Thanks. Check out this thread:
http://www.lucidimagination.com/search/document/b15c06f78820d1da/weblogic_10_compatibility_issue_stackoverflowerror
and this wikipage: http://wiki.apache.org/solr/SolrWeblogic
If it helps, please add to our wiki - if not, we can dig deeper.
Thanks,
--
- Mark
: Some time ago I configured my Solr instance to use the
: DutchStemFilterFactory.
...
: Words like 'baas', 'paas', 'maan', 'boom' etc. are indexed as 'bas',
: 'pas', 'man' and 'bom'. Those words have a meaning of their own. Am I
: missing something, or is this to be considered a bug?
: Okay. So still, how would I go about creating a new DocList and Docset as
: they cannot be instantiated?
DocLists and DocSets are retrieved from the SolrIndexSearcher as results
from searches. A simple javadoc search for the usages of the DocList and
DocSet APIs would have given you this.
: SolrParams params = req.getParams();
:
: Now I want to get the values of those params. What should be the
: approach as SolrParams is an abstract class and its get(String) method
: is abstract?
your question seems to be more about Java basics than about using Solr --
it doesn't matter if
: Subject: Solrj, tomcat and a proxy
: References: 2aa3aff80907130547y124d433chec4f4bcbbfb35...@mail.gmail.com
: In-Reply-To: 2aa3aff80907130547y124d433chec4f4bcbbfb35...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new
: SolrIndexConfig accepts a mergePolicy class name, however how does one
: inject properties into it?
At the moment you can't.
If you look at the history of MergePolicy, users have never been
encouraged to implement their own (the API actively discourages it,
without going so far as to make
I'd like to be able to define within a single Solr core, a set
of indexes in multiple directories. This is really useful for
indexing in Hadoop or integrating with Katta where an
EmbeddedSolrServer is distributed to the Hadoop cluster and
indexes are generated in parallel and returned to Solr
: Indeed - I assumed that only the + and - characters had any
: special meaning when parsing dismax queries and that all other content
: would be treated just as keywords. That seems to be how it's
: described in the dismax documentation?
The dirty little secret of the dismax parser is that i
I am referring to setting properties on the *existing* policy
available in Lucene such as LogByteSizeMergePolicy.setMaxMergeMB
On Tue, Jul 21, 2009 at 5:11 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: SolrIndexConfig accepts a mergePolicy class name, however how does one
: inject
The FieldValueCache plays an important role in sorting and faceting in Solr. But
this cache is not managed by Solr;
is there any way to configure it? Thanks!
Hello,
You can control it in solrconfig.xml:
<!-- Cache used to hold field values that are quickly accessible
     by document id. The fieldValueCache is created by default
     even if not configured here.
  -->
<fieldValueCache
    class="solr.FastLRUCache"
Thanks very much. Is there any difference between fieldValueCache and
fieldCache?
I would just do something like this:
String myParam = req.getParams().get("xparam");
where xparam is passed on the URL:
http://localhost:8983/solr/select/?q=dog&xparam=something&start=0&rows=10&indent=on
Kartik1 wrote:
The ResponseBuilder class has SolrQueryRequest as a public field. Using
SolrQueryRequest we can