Well, you need to specify a path, relative or absolute, that points to the
directory where the Velocity JAR file resides.
I'm not sure, at this point, exactly what you're missing, but it should be
fairly straightforward. Solr logs the libraries it loads at startup, so maybe
that is helpful.
Greg,
a few things I noticed while reading your post:
1) You don't need a field assignment for fields whose name does not
change; you can just skip it. <field column="creationDate"
name="creationDate" /> - just to name one example
2) TemplateTransformer
On Debian you can edit /etc/default/tomcat6
hi,
I am using Solr 1.4 with Apache Tomcat. To enable the
clustering feature
I followed the link
http://wiki.apache.org/solr/ClusteringComponent
Please help me with how to add -Dsolr.clustering.enabled=true to $CATALINA_OPTS.
after that which
What about using
http://wiki.apache.org/solr/DataImportHandler#XPathEntityProcessor ?
On Wed, Feb 16, 2011 at 10:08 AM, Bill Bell billnb...@gmail.com wrote:
I am using DIH.
I am trying to take a column in a SQL Server database that returns an XML
string and use XPath to get data out of it.
On Wednesday 16 February 2011 02:41 PM, Markus Jelsma wrote:
On Debian you can edit /etc/default/tomcat6
hi,
I am using Solr 1.4 with Apache Tomcat. To enable the
clustering feature
I followed the link
http://wiki.apache.org/solr/ClusteringComponent
Please help me with how to
What distro are you using? On at least Debian systems you can set the -
Dsolr.clustering.enabled=true system property in /etc/default/tomcat6.
You can also, of course, remove all occurrences of ${solr.clustering.enabled}
from your solrconfig.xml
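For concreteness, a minimal sketch of that edit on a Debian system (JAVA_OPTS is the variable the Debian tomcat6 package reads; adjust to your packaging):

```shell
# /etc/default/tomcat6 -- append the system property to the JVM options
JAVA_OPTS="${JAVA_OPTS} -Dsolr.clustering.enabled=true"
```

Restart Tomcat afterwards (e.g. /etc/init.d/tomcat6 restart) so the property is picked up.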
On Wednesday 16 February 2011 10:52:35 Isha
On Wednesday 16 February 2011 03:32 PM, Markus Jelsma wrote:
What distro are you using? On at least Debian systems you can set the -
Dsolr.clustering.enabled=true system property in /etc/default/tomcat6.
You can also, of course, remove all occurrences of ${solr.clustering.enabled}
from your
Hi,
There are a couple of Solr 1.4.1 slaves, all doing the same thing: pulling some
snaps, handling some queries, nothing exciting. But can anyone explain a
sudden nightly occurrence of this error?
2011-02-16 01:23:04,527 ERROR [solr.handler.ReplicationHandler] - [pool-238-
thread-1] - : SnapPull
I have no idea; it seems you haven't compiled Carrot2 or haven't included all
the jars.
On Wednesday 16 February 2011 11:29:30 Isha Garg wrote:
On Wednesday 16 February 2011 03:32 PM, Markus Jelsma wrote:
What distro are you using? On at least Debian systems you can put the -
My error is that Solr is not reachable with a ping
(a ping over PHP HttpRequest ...)
-
--- System
One server, 12 GB RAM, 2 Solr instances, 7 cores,
1 core with 31 million documents, other cores ~100,000
- Solr1 for
Hello.
I have the fields reason_1 and reason_2. These two fields are covered in my
schema by one dynamicField: <dynamicField name="reason_*" type="textgen"
indexed="true" stored="false"/>
I copy these fields into my default text search field: <copyField
source="reason_*" dest="text"/>
And into a new field reason: <copyField
Hi,
We do have a validation layer for other purposes, but this layer does not know
about the fields, and
I would not like to replicate this configuration. Is there any way to query
the Solr core about declared fields?
thanks,
[ ]'s
Leonardo da S. Souza
°v° Linux user #375225
/(_)\
What does the admin page show you are the contents of
your index for reason_1?
I suspect you don't really have two documents with the same
value. Perhaps you give them both the same uniqueKey and
one overwrites the other. Perhaps you didn't commit the second.
Perhaps
But you haven't provided
Hi Stefan,
LukeRequestHandler could be a good solution, there's a lot of useful info.
This handler works with version 1.4x?
thanks
[ ]'s
Leonardo da S. Souza
°v° Linux user #375225
/(_)\ http://counter.li.org/
^ ^
On Wed, Feb 16, 2011 at 10:41 AM, Stefan Matheis
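For what it's worth, a quick way to inspect the declared fields over HTTP, assuming the default Luke handler mapping (it works in 1.4.x):

```shell
# dump schema information (declared fields, types) from a running core
curl 'http://localhost:8983/solr/admin/luke?show=schema&wt=json'
```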
Hi,
I have a very typical problem. From one of my applications I get data in the
format
<add>
<doc>
<field name="address">Some Address</field>
<field name="zipcode1"></field>
</doc>
</add>
How can I implement a spatial search for this data?
Any ideas are welcome
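Nothing below is from the original post; it is a hedged sketch of the usual approach: geocode the address outside Solr, index the resulting coordinates, then filter on them at query time. The field names are made up, and the LatLonType/geofilt pieces assume Solr 3.1 or later:

```xml
<!-- schema.xml: store the geocoded coordinates of each address -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
<field name="coords" type="location" indexed="true" stored="true"/>
```

A query could then filter by distance, e.g. q=*:*&fq={!geofilt}&sfield=coords&pt=52.37,4.89&d=5 (5 km around the given point).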
On Wed, Feb 16, 2011 at 3:57 AM, Thorsten Scherler scher...@gmail.com wrote:
On Tue, 2011-02-15 at 09:59 -0500, Yonik Seeley wrote:
On Mon, Feb 14, 2011 at 8:08 AM, Thorsten Scherler thors...@apache.org
wrote:
Hi all,
I followed http://wiki.apache.org/solr/SolrCloud and everything worked
Regarding the Wiki-Page .. since 1.2 .. so, yes, should :)
On Wed, Feb 16, 2011 at 1:55 PM, Leonardo Souza leonardo...@gmail.com wrote:
Hi Stefan,
LukeRequestHandler could be a good solution, there's a lot of useful info.
This handler works with version 1.4x?
thanks
[ ]'s
Leonardo da S.
Nishant,
correct me if I'm wrong .. but spatial search normally requires
geo-information, like latitude and longitude, to work. So you would
need to fetch this information before putting the documents into Solr; the
Google Maps API offers
Renaud,
just because i'm interested in .. what are your concerns about using
cron for that?
Stefan
On Wed, Feb 16, 2011 at 2:12 PM, Renaud Delbru renaud.del...@deri.org wrote:
Hi,
We would like to trigger an optimise every x hours. From what I can see,
there is nothing in Solr
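If cron turns out to be acceptable, a minimal sketch (host, port and schedule are assumptions):

```shell
# crontab on the master: ask Solr to optimize every 6 hours
0 */6 * * * curl -s 'http://localhost:8983/solr/update?optimize=true'
```

Equivalently you can POST an <optimize/> message to the update handler.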
the fieldType is textgen.
-
--- System
One server, 12 GB RAM, 2 Solr instances, 7 cores,
1 core with 31 million documents, other cores ~100,000
- Solr1 for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for
Hi,
We would like to trigger an optimise every x hours. From what I can see,
there is nothing in Solr (3.1-SNAPSHOT) that makes it possible to do such a thing.
We have a master-slave configuration. The masters are tuned for fast
indexing (large merge factor). However, for the moment, the master index
Mainly technical administration effort.
We are trying to have a Solr packaging that
- minimises the effort to deploy the system on a machine
- reduces errors when deploying
- centralises the logic of the Solr system
Ideally, we would like to have a central place (e.g., solrconfig) where
the
It looks like you are trying to use a function query on a multi-valued field?
-Yonik
http://lucidimagination.com
On Tue, Feb 15, 2011 at 8:34 AM, Ezequiel Calderara ezech...@gmail.com wrote:
Hi, I'm having a problem while trying to do a dismax search.
For example I have the standard query URL
hm okay, reasonable :)
never used it, but maybe a pointer in the right direction?
http://wiki.apache.org/solr/DataImportHandler#Scheduling
On Wed, Feb 16, 2011 at 2:27 PM, Renaud Delbru renaud.del...@deri.org wrote:
Mainly technical administration effort.
We are trying to have a solr
The documents haven't got the same uniqueKey; only reason is the same.
I cannot show the exact search request because of privacy policy...
The query is like this:
reason_1: firstname lastname,
reason_2: 1234, 02.02.2011
--> in field reason: firstname lastname, 1234, 02.02.2011
the search
Hi everyone,
I am trying to get Synonyms working with CJKAnalyzer. Search works fine but
synonyms do not work as expected. Here is my field definition in the schema
file:
<fieldType name="cjk" class="solr.TextField">
<analyzer class="org.apache.lucene.analysis.cjk.CJKAnalyzer">
I think you can get far by just optimizing how often you do commits (as seldom
as possible), as well as MergeFactor, to get a good balance between indexing
and query efficiency. It may be that you're looking for fewer segments on
average - not always one fully optimized segment.
If you still
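The knobs referred to above live in solrconfig.xml; a sketch with illustrative values (not recommendations):

```xml
<!-- solrconfig.xml -->
<indexDefaults>
  <!-- higher values favour indexing speed, lower values mean fewer segments -->
  <mergeFactor>10</mergeFactor>
</indexDefaults>
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- commit at most every 5 minutes instead of per update batch -->
  <autoCommit>
    <maxTime>300000</maxTime>
  </autoCommit>
</updateHandler>
```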
Hello Ravish, Erick,
I'm facing the same issue with solr-trunk (as of r1071282)
- Field configuration :
<fieldType name="normalized_string" class="solr.TextField"
positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.KeywordTokenizerFactory" />
<filter class="solr.LowerCaseFilterFactory"/>
<filter
On Wednesday 16 February 2011 16:49:51 Tod wrote:
I have a couple of semi-related questions regarding the use of the Term
Vector Component:
- Using curl is there a way to query a specific document (maybe using
Tika when required?) to get a distribution of the terms it contains?
No Tika
It only works on FileDataSource, right?
Bill Bell
Sent from mobile
On Feb 16, 2011, at 2:17 AM, Stefan Matheis matheis.ste...@googlemail.com
wrote:
What about using
http://wiki.apache.org/solr/DataImportHandler#XPathEntityProcessor ?
On Wed, Feb 16, 2011 at 10:08 AM, Bill Bell
2011/2/16 Yonik Seeley yo...@lucidimagination.com
On Wed, Feb 16, 2011 at 3:57 AM, Thorsten Scherler scher...@gmail.com
wrote:
On Tue, 2011-02-15 at 09:59 -0500, Yonik Seeley wrote:
On Mon, Feb 14, 2011 at 8:08 AM, Thorsten Scherler thors...@apache.org
wrote:
Hi all,
I followed
It looks like a log4j issue:
java.lang.NoClassDefFoundError: org/apache/log4j/jmx/HierarchyDynamicMBean
at
org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:51)
at
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:114)
Managed to get this working. I changed my solrconfig for the one provided in
the velocity dir, repackaged the war file and redeployed on Tomcat.
Although this seems like a ridiculously obvious thing to do, I somehow
overlooked the repackaging aspect; this was where the problem was.
Thanks for the
Hi,
Jetty on Ubuntu has been working well for us and a bunch of our customers.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
From: Rosa (Anuncios) rosaemailanunc...@gmail.com
To:
Hi Tri,
You could look at the stats page for each slave and compare the number of docs
in them. The one(s) that are off from the rest/majority are out of sync.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
-
In my own Solr 1.4, I am pretty sure that running an index optimize does
give me significantly better performance. Perhaps because I use some
largeish (not huge, maybe as large as 200k) stored fields.
So I'm interested in always keeping my index optimized.
Am I right that if I set mergeFactor
In my own Solr 1.4, I am pretty sure that running an index optimize does
give me significantly better performance. Perhaps because I use some
largeish (not huge, maybe as large as 200k) stored fields.
200,000 stored fields? I assume that number includes your number of documents?
Sounds crazy =)
Hi,
I have a need to index multiple applications using Solr, and I also have the
need to share indexes or run a search query across these application
indexes. Is Solr multi-core the way to go? My server config is
2 virtual CPUs @ 1.8 GHz with about 32 GB of memory. What is the
recommendation?
Closing a core will shut down almost everything related to the workings of a
core: update and search handlers, possibly warming searchers, etc.
Check the implementation of the close method:
Hi,
I'm trying to use a CustomSimilarityFactory and pass in per-field
options from the schema.xml, like so:
<similarity class="org.ads.solr.CustomSimilarityFactory">
<lst name="field_a">
<int name="min">500</int>
<int name="max">1</int>
<float name="steepness">0.5</float>
</lst>
<lst
Thanks for the answers, more questions below.
On 2/16/2011 3:37 PM, Markus Jelsma wrote:
200,000 stored fields? I assume that number includes your number of documents?
Sounds crazy =)
Nope, I wasn't clear. I have fewer than a dozen stored fields, but the
value of a stored field can sometimes
Solr multi-core essentially just lets you run multiple separate, distinct
Solr indexes in the same running Solr instance.
It does NOT let you run queries across multiple cores at once. The
cores are just like completely separate Solr indexes; they are just
conveniently running in the same
Solr 1.4.1. So, from the documentation at
http://wiki.apache.org/solr/SolrReplication
I was wondering if I could get away without having any actual
configuration in my slave at all. The replication handler is turned on,
but if I'm going to manually trigger replication pulls while supplying
Thanks for the answers, more questions below.
On 2/16/2011 3:37 PM, Markus Jelsma wrote:
200,000 stored fields? I assume that number includes your number of
documents? Sounds crazy =)
Nope, I wasn't clear. I have fewer than a dozen stored fields, but the
value of a stored field can
Hmmm. Maybe I'm not understanding what you're getting at, Jonathan, when you
say 'There is no good way in Solr to run a query across multiple Solr indexes'.
What about the 'shards' parameter? That allows searching across multiple cores
in the same instance, or shards across multiple
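For concreteness, a hedged example of such a shards query (core names are made up):

```
http://localhost:8983/solr/core0/select?q=foo&shards=localhost:8983/solr/core0,localhost:8983/solr/core1
```

The request is sent to one core and fanned out to every core listed in the shards parameter.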
(I'm using solr 1.4)
I'm doing a test of my index, so I'm reading out every document in
batches of 500. The query is (I added newlines here to make it
readable):
http://localhost:8983/solr/archive_ECCO/select/
?q=archive%3AECCO
&fl=uri
&version=2.2
&start=0
&rows=500
&indent=on
&sort=uri%20asc
It
On Wed, Feb 16, 2011 at 5:08 PM, Paul p...@nines.org wrote:
Is this a known solr bug or is there something subtle going on?
Yes, I think it's the following bug, fixed in 1.4.1:
* SOLR-1777: fieldTypes with sortMissingLast=true or sortMissingFirst=true can
result in incorrectly sorted results.
Yes, you're right, from now on when I say that, I'll say except
shards. It is true.
My understanding is that shards functionality's intended use case is for
when your index is so large that you want to split it up for
performance. I think it works pretty well for that, with some
limitations
Hi,
That depends (as usual) on your scenario. Let me ask some questions:
1. what is the sum of documents for your applications?
2. what is the expected load in queries/minute
3. what is the update frequency in documents/minute and how many documents per
commit?
4. how many different
You can also easily abuse shards to query multiple cores that share parts of
the schema. This way you have isolation with the ability to query them all.
The same can, of course, also be achieved using a single index with a simple
field identifying the application and using fq on that one.
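As a sketch of that single-index variant (the field name app is an assumption):

```
http://localhost:8983/solr/select?q=some+query&fq=app:crm
```

Each document would carry its owning application in the app field, and the filter query restricts results to one application.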
Yes,
I updated my data importer.
I used to have:
<field column="webtitle" stripHTML="true" />
<field column="webdescription" stripHTML="true" />
which wasn't working. But I changed that to
<field column="webtitle" name="webtitle" stripHTML="true" />
<field column="webdescription" name="webdescription" stripHTML="true" />
and
I frequently use multiple cores for these reasons:
* Completely different applications, such as web search and directory search
or if their update latency / query /caching requirements are very different
I can then also nuke one without affecting the other
Also, you get nice separation for
: if you don't have any custom components, you can probably just use
: your entire solr home dir as is -- just change the solr.war. (you can't
: just copy the data dir though, you need to use the same configs)
:
: test it out, and note the Upgrading notes in the CHANGES.txt for the
: 1.3,
: This was my first thought but -1 is relatively common but we have other
: numbers just as common.
i assume that when you say that you mean ...we have other numbers
(that are not negative) just as common, (but searching for them is much
faster) ?
I don't have any insight into why your
Does anyone have an example of using this with SQL Server varchar or XML
field?
??
<dataConfig>
<dataSource />
<document>
<entity name="y" query="select * from y where xid=${x.id}">
<entity name="x" processor="XPathEntityProcessor"
forEach="/the/record/xpath" url="${y.xml_name}"
A common problem in metasearch engines. It's not intractable. You just have to
surface the right statistics into a 'fusion' scorer.
-
NOT always nice. When are we getting better releases?
Thanks for updating your solution
On Tue, Feb 8, 2011 at 8:20 AM, shan2812 shanmugaraja...@gmail.com wrote:
Hi,
At last the migration to Solr-1.4.1 does solve this issue :-)..
Cheers
Thanks for the response, Hoss. Sorry for replying late; I was on a business
trip. The server was indexing as well as searching at the same time and it
was configured for a native file lock; could that be the issue? I got
another server, so I moved it to a master-slave configuration with the file lock
being
Hi,
I wonder if it is possible to let the user build up a Solr query and have it
validated by some Java API before sending it to Solr.
Is there a parser that could help with that? I would like to help the user
build a valid query as she types by showing messages like "The query is
not valid"
Use a FieldReaderDataSource for reading the field from the database and then use
XPathEntityProcessor. The FieldReaderDataSource will give you the stream that is
needed by the XPathEntityProcessor. Below is an example DIH configuration:
<?xml version="1.0"?>
<dataConfig>
<dataSource type="JdbcDataSource"
Hello all,
We need to build an analytics kind of application. Initially we plan to aggregate
the results and add them to a database, or use an ETL tool. I have an idea to use
facet search. I just want to know others' opinion on this.
We require results in the below fashion. Top 3 results in each
I think facet search is good for your requirement. Also, what about the Result
Grouping feature of Solr?
-
Thanx:
Grijesh
http://lucidimagination.com
--
View this message in context:
http://lucene.472066.n3.nabble.com/Is-facet-could-be-used-for-Analytics-tp2515938p2515959.html
Sent from the
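A hedged sketch of what such a facet request could look like (the field name category is assumed):

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=category&facet.limit=3&facet.mincount=1
```

facet.limit=3 returns the top 3 values per facet field; Result Grouping (group.field/group.limit in later Solr versions) would instead return the top 3 documents per category.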
Is it my imagination or has this exact email been on the list already?
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from