Hi,
I've found the error.
It was actually silly: Solr's jar files were in the wrong folder.
I just removed them... now it works!
GREAT
:)
sunnyfr wrote:
Yes, I tried to change the name manually but it didn't help; nothing
changed.
We spoke about the file in data/solr/ this
Hello experts,
I've got a question regarding synchronisation under Solr.
I would like to have two Linux servers both running Solr: one that could act
as master and the other as slave.
Then I want to use Heartbeat to change the IP when the master is down...
My question is
Hi Bill,
Just to know: so you use postCommit and postOptimize, and did you create a
cron job for snapshooter?
If yes, when? The same minute as the delta-import?
Thanks,
Bill Au wrote:
If you use cron, you should use the new -c option of snapshooter which
only takes a snapshot where there
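For the cron-based approach being discussed, the entry might look like this (the snapshooter path and the five-minute schedule are assumptions; -c is the option mentioned above, which skips the snapshot when nothing has changed):

```
# Hypothetical crontab entry: run snapshooter every 5 minutes,
# aligned with the delta-import schedule
*/5 * * * * /opt/solr/bin/snapshooter -c
```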
Hi,
Can somebody explain to me a bit how optimize works?
Is it automatic? I ask because I configured my snapshooter on postOptimize,
not postCommit.
Did I miss something? I read the doc but didn't really get what fires
optimize. I did this:
<!-- Perform a <commit/> automatically under
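For reference, the snapshooter hook is configured in solrconfig.xml as an event listener; a sketch along the lines of the 1.3 example config (the exe and dir paths are assumptions for your install):

```xml
<listener event="postOptimize" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">solr/bin</str>
  <bool name="wait">true</bool>
</listener>
```

The same listener registered under event="postCommit" fires after every commit instead of only after optimize.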
Hi,
Can somebody explain to me a bit how optimize works?
I read the doc but didn't really get what fires optimize.
Thanks a lot,
--
View this message in context:
http://www.nabble.com/Optimize-tp19775320p19775320.html
Sent from the Solr - User mailing list archive at Nabble.com.
Yes, each one is a document.
A real example:
<str name="q">k1_en:men</str>
<doc>
  <float name="score">0.81426066</float>
  ...
  <str name="id">846</str>
  ...
  <str name="k1_en">
;arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods;
  </str>
  ...
</doc>
...
<doc>
  <float
Begin forwarded message:
From: Noirin Shirley [EMAIL PROTECTED]
Date: October 2, 2008 4:22:06 AM EDT
To: [EMAIL PROTECTED]
Subject: CFP open for ApacheCon Europe 2009
Reply-To: [EMAIL PROTECTED]
PMCs: Please send this on to your users@ lists!
If you only have
Oct 2 11:35:02 solr-test jsvc.exec[11422]: Oct 2, 2008 11:35:02 AM
org.apache.solr.handler.dataimport.SolrWriter upload
SEVERE: Exception while adding:
Document<indexed,termVector,omitNorms<channels:life> indexed,omitNorms<tag1:bureau> indexed,tokenized<text:bureau> indexed,omitNorms<tag2:paris>>
No, optimize is not automatic. You have to invoke it yourself just like
commits.
Take a look at the following for examples:
http://wiki.apache.org/solr/UpdateXmlMessages
On Thu, Oct 2, 2008 at 2:03 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
Can somebody explain to me a bit how optimize works?
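Per the UpdateXmlMessages wiki page linked above, optimize (like commit) is an XML update message you POST to Solr yourself; a sketch (the host, port, and update path are assumptions for a default install):

```xml
<!-- POST either of these to http://localhost:8983/solr/update;
     an optimize also commits any pending documents -->
<commit/>
<optimize/>
```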
Hello,
I would appreciate any suggestions on solving the following problem:
I'm trying to index a newspaper. After processing the logical structure and
articles, I have a structure similar to this...
<article id="201" article_type="ARTICLE" pub_id="5" iss_id="6"
         date="18560301">
  <word t="1137" l="147" b="1665" r="951"
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM
org.apache.solr.core.SolrCore initIndex WARNING: [video] Solr index
directory '/data/solr/video/data/index' doesn't exist. Creating new index...
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM
You probably have a permission problem. Check to make sure that the user id
running Solr has write permission in the directory /data/solr/video/data.
Bill
On Thu, Oct 2, 2008 at 8:11 AM, sunnyfr [EMAIL PROTECTED] wrote:
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM
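A quick sketch of the write-permission check suggested above (DATA_DIR here is a stand-in for /data/solr/video/data, and the chown hint names a placeholder user):

```shell
# Can the user running Solr write to the index data directory?
DATA_DIR="$(mktemp -d)"   # substitute /data/solr/video/data on your server
if [ -w "$DATA_DIR" ]; then
  echo "writable"
else
  echo "not writable -- fix with: chown -R <solr-user> $DATA_DIR"
fi
```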
No, we are not using a cron job for snapshooter.
Bill
On Thu, Oct 2, 2008 at 3:53 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi Bill,
Just to know: so you use postCommit and postOptimize, and did you create a
cron job for snapshooter?
If yes, when? The same minute as the delta-import?
Thanks,
Have you seen these two Wiki pages:
http://wiki.apache.org/solr/CollectionDistribution
http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline
Solr comes with tools to let you sync the index directory.
Bill
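The index-sync tools mentioned above are shell scripts shipped with Solr 1.x; on the slave side the flow is roughly as below (script locations are assumptions; both scripts read conf/scripts.conf, which must be configured first):

```
solr/bin/snappuller     # pull the newest snapshot from the master
solr/bin/snapinstaller  # install it and trigger a new searcher on the slave
```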
On Thu, Oct 2, 2008 at 3:52 AM, dudes dudes [EMAIL PROTECTED] wrote:
Thanks Bill,
I'm aware of these links; I have also deployed them in my environment.
However, I'm looking for a complete sync between the two servers rather than
using one server for indexing and the other for searching.
It would be nice to have complete transparency.
thanks for your time
as one instance is only reading the index and the other is writing into
it... it doesn't look like it is going to crash.
The instance which is only reading the index needs its searcher to be
updated. Assuming that this instance is listening on port 8984,
I am achieving this by --- *curl
Can you share more about what you are doing? An exception without any
context is hard to figure out. What's the schema? Is there another
exception associated with it (root cause)?
-Grant
On Oct 2, 2008, at 5:38 AM, sunnyfr wrote:
Oct 2 11:35:02 solr-test jsvc.exec[11422]: Oct 2, 2008
Hi,
I have some issues with my Tomcat; can you please tell me what you have in
your folders
./var/lib/tomcat5.5/webapps
./usr/share/tomcat5.5/webapps
Because really I'm a bit lost with tomcat55 and what's happening... how did
you manage it?
Thanks a lot
Jack Bates-2 wrote:
Thanks for your
Hi everybody,
With regard to RSS feeds; I noticed that there's a stylesheet to
convert the output of a Solr search into RSS format in the
example\solr\conf\xslt directory. My questions are:
1) Where can I find docs on how to get Solr to feed RSS directly?
2) Correct me if I'm wrong here:
No sweat - did you install the Ubuntu solr package or the solr.war from
http://lucene.apache.org/solr/?
When you say it doesn't work, what exactly do you mean?
On Thu, 2008-10-02 at 07:43 -0700, [EMAIL PROTECTED] wrote:
Hi Jack,
Really, I would love it if you could help me with this... and tell me
I haven't tried installing the ubuntu package, but the releases from
apache.org come with an example that contains a directory called solr
which contains a directory called conf where schema.xml and
solrconfig.xml are important. Is it possible these files do not exist
in the path?
Tricia
You have:
;arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods;
Note these two:
men
Men's
You probably tokenize that field and you probably lowercase it, and you
probably stem it and you probably end up with 2 men tokens:
men == men
Men's == men
Hence your term freq of 2.
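A minimal sketch of that normalization (lowercasing plus possessive stripping; an approximation for illustration, not Solr's actual filter chain):

```shell
# Normalize a token the way the analysis chain above roughly does:
# lowercase, then strip a trailing possessive 's
analyze() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed "s/'s\$//"; }

analyze "men"; echo     # prints: men
analyze "Men's"; echo   # prints: men  -- hence a term freq of 2 for "men"
```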
Hi Saša,
It sounds like you need to keep per-word metadata, plus the raw content so you
can full-text search it.
If so, consider keeping the metadata elsewhere - e.g. a different index, an
external DB, etc.
For full-text search you probably want to index the full content, something
like:
<field
Hi and thank you!
This is what I got when I used the -QUIT flag.
Does it tell you anything?
Regards, Erik
Full thread dump OpenJDK 64-Bit Server VM (1.6.0-b09 mixed mode):
DestroyJavaVM prio=10 tid=0x010c7c00 nid=0x13c waiting on
condition [0x..0x00507c60]
I can execute what I want simply by using Lucene directly:
Hits hits = searcher.search(customScoreQuery, myQuery.getFilter());
However, I can't find the right class or method in the API to do
this for Solr's searcher.
I am using the SolrServer (embedded version) to execute the .query...
what about:
SolrQuery query = ...;
query.addFilterQuery( "type:xxx" );
On Oct 2, 2008, at 1:23 PM, Jeryl Cook wrote:
I can execute what I want simply by using Lucene directly:
Hits hits = searcher.search(customScoreQuery, myQuery.getFilter());
However, I can't find the right class
Hi All,
I didn't see anywhere to share the plugin I created for my multipart
work (see https://issues.apache.org/jira/browse/SOLR-380 for more). So
I created one here: http://wiki.apache.org/solr/SolrPluginRepository.
I'm open to other ways of sharing plugins.
Tricia
I don't have issues adding a filter query to a SolrQuery...
I guess I'll look at the source code. I just need to pass a custom
Filter object at runtime before I execute a search using the
SolrServer.
Currently this is all I can do with Solr:
SolrServer.query(customScoreQuery);
i
On Oct 2, 2008, at 2:24 PM, Jeryl Cook wrote:
I don't have issues adding a filter query to a SolrQuery...
I guess I'll look at the source code. I just need to pass a custom
Filter object at runtime before I execute a search using the
SolrServer.
Currently this is all I can do
Thanks!
If there is interest, we could start a non-apache project for plugins
that don't make sense in core or contrib...
Apache Wicket has a project called Wicket Stuff on sourceforge that
is a repository for non-core components. This is where components
linking to non-Apache
Nice, nice. I think that's what contrib/ is for, among other things, couldn't
we use that?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Ryan McKinley [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Thursday, October 2, 2008
I see.
It would be nice to build the component within the code,
programmatically, rather than as a component added to the
configuration file, but I will read the docs on how to do this.
Thanks
thanks
On Thu, Oct 2, 2008 at 2:37 PM, Ryan McKinley [EMAIL PROTECTED] wrote:
On Oct 2, 2008, at 2:24 PM,
Yes, contrib should be for anything that is general and fits within Apache
guidelines.
SOLR-380 may belong as a contrib (or core) -- i have not looked at it.
Just throwing it out there as an option with fewer restrictions. In
particular it would be nice to have off the shelf plugins that can
Hi Otis,
I was thinking about this approach, but was wondering if there is a more
elegant approach where I wouldn't have to recreate the logic for proximity and
quoted complex queries (identification of neighbor hits and quote queries
for highlighting and positioning on the image).
If nobody comes up
I had absolutely no luck with the jetty-solr package on Ubuntu 8.04.
I haven't tried Tomcat for solr.
I do have it running on Ubuntu though. Here's what I did. Hope this
helps. Don't do this unless you
understand the steps. When I say things like 'remove contents' I
don't know what you have
Anyone?
On Thu, Sep 25, 2008 at 2:58 PM, Erlend Hamnaberg [EMAIL PROTECTED] wrote:
Hi list.
I am using the EmbeddedSolrServer to embed solr in my application, however
I have run into a snag.
The only filesystem dependency that I want is the index itself.
The current implementation of the
SEVERE: Exception starting filter SolrRequestFilter
java.lang.NoClassDefFoundError: Could not initialize class
org.apache.solr.core.SolrConfig
btw, this looks like you are using current 1.3 or head versions of
classes in schema.xml or solrconfig.xml, but you are running on a 1.2
version of
: But now I just restart tomcat and it stay stuck on this : and minute just
: increase and nothing is hit anymore every 5 minutes, where does that come
: from ? Nothing change except minute
are you sure your cron is still running? does the access log for tomcat
indicate that the path
:
: I don't know why in my logs I've this error:
:
: Could not start SOLR. Check solr/home property java.lang.RuntimeException:
: Can't find resource 'solrconfig.xml' in classpath or 'solr/conf/',
what gets logged before and after that? what is the full error with the full
stack trace? ... it's
: how would it fit c:some phrase into that structure?
:
: does this make sense?
:
: ( (a:some | b:some ) (a:phrase | b:phrase) ( c:some phrase) )
that's pretty much exactly what pf does, the only distinction is you
get...
+( (a:some | b:some ) (a:phrase | b:phrase) ) ( c:some phrase )
Just curious,
Currently a full-import call does a delete all even when appending an
entity param ... wouldn't it be possible to pick up the param and just
delete on that entity somehow? It would be nice if there was
something involved w/ having an entity field name that worked w/ DIH
to
: Subject: Luke not working with Solr 1.3 index
: In-Reply-To: [EMAIL PROTECTED]
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email. Even
: Yes this may be my problem,
:
: But is there any solution to have only one "men" keyword indexed when I've
: got something like this:
SOLR-739 is working towards a new omitTf option for fields (taking
advantage of a Lucene optimization for this case) but in the mean time the
best options i
: What would be the scope of the work to implement Erik's suggestion, I
: would have to ask my boss, but I think we would then contribute the code
: back to Solr.
The QParser modifications would be fairly straight forward -- adding some
setters to SolrQueryParser to set booleans telling when
: I chug away at 1.5 million records in a single file, but solr never
: commits. specifically, it ignores my autocommit settings. (I can
: commit separately at the end, of course :)
the way the autocommit settings work is something i always get confused by
-- the autocommit logic may not
On Oct 2, 2008, at 3:38 PM, Ryan McKinley wrote:
Yes, contrib should be for anything general and fits within apache
guidelines.
SOLR-380 may belong as a contrib (or core) -- i have not looked at it.
Just throwing it out there as an option with fewer restrictions. In
particular it would
On Oct 2, 2008, at 11:03 PM, Grant Ingersoll wrote:
On Oct 2, 2008, at 3:38 PM, Ryan McKinley wrote:
Yes, contrib should be for anything general and fits within apache
guidelines.
SOLR-380 may belong as a contrib (or core) -- i have not looked at
it.
Just throwing it out there as an
DIH does not know the rows created by that entity. So we do not really
have any knowledge on how to delete specific rows.
how about passing a deleteQuery=type:x in the request params
or having a deleteByQuery on each top level entity which can be used
when that entity is doing a full-import
Hi Burnell
As we know, in real enterprise applications the queries will always be more
complex than what I have posted here. In that case I fear this approach may
not be sufficient, especially when the query has to handle multiple
conditions or joins or more complex operations like that.
So I suppose