Thanks, Jack.
In that case the template cores would be the ones initialized
first, and we need to take care of this in the template configs.
Also, I noticed that when we remove core1 and core0 and try to create a
new webapp without any core and an empty solr.xml and try to create a new
As per the source code, Solr only sets the BooleanQuery clause limit on the
very first core load. It ignores the setting on subsequent core loads,
including a reload of the initial core.
SolrCore.java: // only change the BooleanQuery maxClauseCount once for ALL
cores...
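For reference, the setting in question lives in solrconfig.xml; a minimal sketch (the value is illustrative). Because it maps to the static Lucene BooleanQuery.maxClauseCount, whichever core loads first sets it for the whole JVM:

  <query>
    <maxBooleanClauses>4096</maxBooleanClauses>
  </query>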
The cores should
Have you recently changed its type from unstored to stored?
If so, you have to reindex.
-
Thanx:
Grijesh
www.gettinhahead.co.in
I indeed changed the type from unstored to stored. But I did do a reindex
(several times already) and a commit, restarted Tomcat, restarted PC;
nothing works. When I do a search the text field shows up in the results,
but that is the only thing that has changed; the response contains only an
empty <lst name="highlighting"> tag.
You're using the string fieldType, but to get highlighting to work the field must be
tokenized. Use a fieldType with a tokenizer.
See also: http://wiki.apache.org/solr/FieldOptionsByUseCase
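A minimal sketch of a tokenized fieldType that highlighting can work with (the field and type names below are assumptions, not the actual schema from this thread):

  <fieldType name="text" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="text" type="text" indexed="true" stored="true"/>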
Hi,
I'm using the old 1.3 Solr version on one of my sites and I decided to
add a highlighting feature.
Oh!
I didn't see the string
-
Thanx:
Grijesh
www.gettinhahead.co.in
Hi Markus,
thanks for the tip. I replaced the string field with a tokenizer field and
now it works. Thank you!
Kind regards,
Nick
Is your field text stored or not? Highlighting works only with stored
fields in the schema.
-
Thanx:
Grijesh
www.gettinhahead.co.in
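To illustrate Grijesh's point with a sketch (the field name and query below are hypothetical): the highlighted field must be declared stored, and the request must enable highlighting on it.

  <field name="text" type="text" indexed="true" stored="true"/>

  /select?q=text:solr&hl=true&hl.fl=text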
Hi Grijesh,
The field text is stored and yet it is not working.
Kind regards,
Nick
Did you do a 'commit' after this? If there is no error in the log or
the HTTP response, the document should get added.
Solr 1.4.1 has much newer versions of this software.
On Thu, Nov 11, 2010 at 6:46 AM, Nikola Garafolic
nikola.garafo...@srce.hr wrote:
Hi,
I use solr 1.3 with patch for
Hi Erick,
I was able to implement this using the link you posted.
I am using Solr 1.3.
I wanted to add a spellcheck component to it, so I did this:
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str
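The config above is cut off; for reference, a rough sketch of how a spellcheck component is usually wired into a handler in Solr 1.3 (the component name, source field, and index directory below are assumptions, not the poster's actual setup):

  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="field">spell</str>
      <str name="spellcheckIndexDir">./spellchecker</str>
    </lst>
  </searchComponent>

  <requestHandler name="standard" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="spellcheck.dictionary">default</str>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
    </arr>
  </requestHandler>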
Oops, you're right, term listings and counts for deleted docs are
adjusted during merges. I had the impression that optimize had some
special powers here that merge does not.
Thank you for bringing expungeDeletes to my attention.
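For anyone following the thread, expungeDeletes is requested as part of a commit; a minimal sketch (Solr 1.4 era syntax, assuming the standard /update handler):

  <commit expungeDeletes="true"/>

or, as a request parameter:

  http://localhost:8983/solr/update?commit=true&expungeDeletes=true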
On Sat, Nov 21, 2009 at 7:46 AM, Yonik Seeley
On Sat, Nov 21, 2009 at 12:33 AM, Lance Norskog goks...@gmail.com wrote:
And, terms whose documents have been deleted are not purged. So, you
can merge all you like and the index will not shrink back completely.
Under what conditions? Certainly not all, since I just tried a simple
test and a
Hoss,
Using Solr 1.4, I see constant index growth until an optimize. I
commit (hundreds of updates) every 5 minutes and have a mergeFactor of
10, but every 50 minutes the index doesn't collapse back down to its
original size -- it's slightly larger.
Over the course of a week, the index grew from
On Fri, Nov 20, 2009 at 12:24 PM, Michael solrco...@gmail.com wrote:
So -- I thought I understood you to mean that if I frequently merge,
it's basically the same as an optimize, and cruft will get purged. Am
I misunderstanding you?
That only applies to the segments involved in the merge. The
On Fri, Nov 20, 2009 at 12:35 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Nov 20, 2009 at 12:24 PM, Michael solrco...@gmail.com wrote:
So -- I thought I understood you to mean that if I frequently merge,
it's basically the same as an optimize, and cruft will get purged. Am
I
On Fri, Nov 20, 2009 at 2:32 PM, Michael solrco...@gmail.com wrote:
On Fri, Nov 20, 2009 at 12:35 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Nov 20, 2009 at 12:24 PM, Michael solrco...@gmail.com wrote:
So -- I thought I understood you to mean that if I frequently merge,
it's
And, terms whose documents have been deleted are not purged. So, you
can merge all you like and the index will not shrink back completely.
Only an optimize will remove the orphan terms.
This is important because the orphan terms affect relevance
calculations. So you really want to purge them with
: Basically, search entries are keyed to other documents. We have finite
: storage,
: so we purge old documents. My understanding was that deleted documents
: still
: take space until an optimize is done. Therefore, if I don't optimize, the
: index
: size on disk will grow without bound.
:
:
On Tue, Nov 17, 2009 at 2:24 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: Basically, search entries are keyed to other documents. We have finite
: storage,
: so we purge old documents. My understanding was that deleted documents
: still
: take space until an optimize is done.
Otis Gospodnetic otis_gospodne...@yahoo.com wrote on 11/13/2009 11:15:43
PM:
Let's take a step back. Why do you need to optimize? You said: As
long as I'm not optimizing, search and indexing times are
satisfactory. :)
You don't need to optimize just because you are continuously adding
- Original Message
From: Jerome L Quinn jlqu...@us.ibm.com
To: solr-user@lucene.apache.org
Sent: Mon, November 16, 2009 10:05:55 AM
Subject: Re: Solr 1.3 query and index perf tank during optimize
Otis Gospodnetic wrote on 11/13/2009 11:15:43
PM:
Let's take a step back
Good question!
The terms in the deleted documents are left behind, and so the
relevance behavior will be off. The other space used directly by
documents will be reabsorbed. (??)
On Sat, Nov 14, 2009 at 1:28 PM, Jerome L Quinn jlqu...@us.ibm.com wrote:
Lance Norskog goks...@gmail.com wrote on
I think we sorely need a Directory impl that down-prioritizes IO
performed by merging.
It would be wonderful if from Java we could simply set a per-thread
IO priority, but, it'll be a looong time until that's possible.
So I think for now we should make a Directory impl that emulates such
Another thing to try, is reducing the maxThreadCount for
ConcurrentMergeScheduler.
It defaults to 3, which I think is too high -- we should change this
default to 1 (I'll open a Lucene issue).
Mike
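As a side note, later Solr releases eventually exposed these merge-scheduler knobs in solrconfig.xml; a rough sketch of that later syntax (an assumption here, since it is not available in 1.3/1.4, where it had to be done at the Lucene level):

  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxThreadCount">1</int>
    <int name="maxMergeCount">3</int>
  </mergeScheduler>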
On Thu, Nov 12, 2009 at 6:30 PM, Jerome L Quinn jlqu...@us.ibm.com wrote:
Hi, everyone, this is
On Fri, Nov 13, 2009 at 6:27 AM, Michael McCandless
luc...@mikemccandless.com wrote:
I think we sorely need a Directory impl that down-prioritizes IO
performed by merging.
Presumably this prioritizing Directory impl could wrap/decorate any
existing Directory.
Mike
Mark Miller markrmil...@gmail.com wrote on 11/12/2009 07:18:03 PM:
Ah, the pains of optimization. It's kind of just how it is. One solution
is to use two boxes and replication - optimize on the master, and then
queries only hit the slave. Out of reach for some though, and adds many
ysee...@gmail.com wrote on 11/13/2009 09:06:29 AM:
On Fri, Nov 13, 2009 at 6:27 AM, Michael McCandless
luc...@mikemccandless.com wrote:
I think we sorely need a Directory impl that down-prioritizes IO
performed by merging.
It's unclear if this case is caused by IO contention, or the OS
Let's take a step back. Why do you need to optimize? You said: As long as
I'm not optimizing, search and indexing times are satisfactory. :)
You don't need to optimize just because you are continuously adding and
deleting documents. On the contrary!
Otis
--
Sematext is hiring --
The 'maxSegments' feature is new with 1.4. I'm not sure that it will
cause any less disk I/O during optimize.
The 'mergeFactor=2' idea is not what you think: in this case the index
is always mostly optimized, so you never need to run optimize.
Indexing is always slower, because you amortize the
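For context, mergeFactor is set in solrconfig.xml; a minimal sketch of the 1.3-era layout (values are illustrative only):

  <mainIndex>
    <mergeFactor>2</mergeFactor>
    <maxBufferedDocs>1000</maxBufferedDocs>
  </mainIndex>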
Jerome L Quinn wrote:
Hi, everyone, this is a problem I've had for quite a while,
and have basically avoided optimizing because of it. However,
eventually we will get to the point where we must delete as
well as add docs continuously.
I have a Solr 1.3 index with ~4M docs at around 90G.
I also seem to be having a similar problem deleting. As far as I can tell,
the system thinks we are deleting the records (it logs that it's executing
the commands and all looks OK) but the records always remain. Regardless
of whether we try a delete by ID or by query, nothing happens. It's also not
On Fri, Sep 18, 2009 at 6:26 AM, Lee Theobald l...@openobjects.com wrote:
I also seem to be having a similar problem deleting. As far as I can tell,
the system thinks we are deleting the records (it logs that it's executing
the commands and all looks OK) but the records always remain.
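For reference, a minimal sketch of the delete syntax posted to /update (the id and query values are hypothetical); deletions only become visible after a commit:

  <delete><id>SP2514N</id></delete>
  <delete><query>category:discontinued</query></delete>
  <commit/>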
the dataDir is a Solr1.4 feature
On Thu, Sep 10, 2009 at 1:57 AM, Paul Rosen p...@performantsoftware.com wrote:
Hi All,
I'm trying to set up solr 1.3 to use multicore but I'm getting some puzzling
results. My solr.xml file is:
<?xml version="1.0" encoding="UTF-8"?>
<solr persistent="true">
Ok. I have a workaround for now. I've duplicated the conf folder three
times and changed this line in solrconfig.xml in each folder:
<dataDir>${solr.data.dir:./solr/exhibits/data}</dataDir>
I can't wait for solr 1.4!
Noble Paul നോബിള് नोब्ळ् wrote:
the dataDir is a Solr1.4 feature
On Thu,
You do not have to make 3 copies of the conf dir even in Solr 1.3.
You can try this:
<dataDir>${./solr/${solr.core.name}/data}</dataDir>
On Thu, Sep 10, 2009 at 7:55 PM, Paul Rosen p...@performantsoftware.com wrote:
Ok. I have a workaround for now. I've duplicated the conf folder three times
and
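For context, a rough sketch of how the pieces fit together under Noble's suggestion (core names and paths here are hypothetical): solr.xml declares the cores, and a single shared solrconfig.xml resolves each core's data directory from the core name property.

  <!-- solr.xml -->
  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <core name="exhibits" instanceDir="exhibits"/>
      <core name="resources" instanceDir="resources"/>
    </cores>
  </solr>

  <!-- shared solrconfig.xml -->
  <dataDir>./solr/${solr.core.name}/data</dataDir>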
DIH in Solr 1.3 does not support a JNDI datasource. Only 1.4 supports it.
On Tue, Aug 18, 2009 at 7:41 PM, brianenobrian.enoch...@gmail.com wrote:
Hello,
We have deployed Solr in our application within Weblogic and all is
working well. The last piece I am struggling with is configuring the
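If it helps, the Solr 1.4 DIH syntax for a JNDI-backed datasource looks roughly like this (the JNDI name is hypothetical, and the jndiName attribute is a 1.4 feature):

  <dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/mydb"/>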
Hi,
I'm running solr 1.3 with java -version java version 1.6... .
No problem to report.
Cheers.
J
2009/8/12 vaibhav joshi callvaib...@hotmail.com:
Hi
I am using Solr 1.3 ( official released version) and JDk1.5. My company is
moving towards upgrading all systems to JDK1.6. is it safe to
We have been using solr 1.3 with jdk1.6 for quite some time in production, no
issues yet
Thanks,
Kalyan Manepalli
-Original Message-
From: vaibhav joshi [mailto:callvaib...@hotmail.com]
Sent: Wednesday, August 12, 2009 1:21 PM
To: solr-user@lucene.apache.org
Subject: Solr 1.3 and JDK1.6
Oracle JRockit (Mission Control 1.3) latest-greatest (Java 6), -server, AMD64,
SLES 10
Solr 1.3/1.4
Tomcat 6.0.20, APR
No problems at all. But you need licensing for production.
JRockit seems to be at least 20 times faster than SUN's JVM.
P.S.
I only had constant problems with latest Apache
No, there is no such file there.
How can I configure more detailed error reporting for this message?
2009/4/15 Shalin Shekhar Mangar shalinman...@gmail.com:
From the log it seems like there is a solr.xml inside
var/lib/tomcat5/webapps/ which tomcat is trying deploy and failing. Very
strange.
From the log it seems like there is a solr.xml inside
var/lib/tomcat5/webapps/ which tomcat is trying deploy and failing. Very
strange. You should remove that file and see if that fixes it.
On Tue, Apr 14, 2009 at 11:35 PM, andrysha nihuhoid nihuh...@gmail.comwrote:
Hi, got problem setting up
Here is the exhaustive list of all changes in 1.4:
http://svn.apache.org/repos/asf/lucene/solr/trunk/CHANGES.txt
On Thu, Mar 12, 2009 at 3:29 PM, Praveen Kumar Jayaram
praveen198...@gmail.com wrote:
Hi
What is the exact difference between Solr 1.3 and Solr 1.4 (Nightly build as
of now)??
I was successful at distributing the Solr-1.4-DEV data import functionality
within the Solr 1.3 war.
1. Copied the data import's src directory from 1.4 to 1.3.
2. Made sure to use the data import's build.xml already existing in Solr 1.3.
3. Commented out all code within the SolrWriter#rollback method.
On Wed, Mar 11, 2009 at 2:28 PM, Wesley Small wesley.sm...@mtvstaff.comwrote:
Good morning,
I reviewed a Solr Patch-742, which corrects an issue with the data import
process properly ingesting/committing (solr add xml) documents with dynamic
fields.
Is this fix available for Solr 1.3 or is
On Wed, Mar 11, 2009 at 2:55 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Wed, Mar 11, 2009 at 2:28 PM, Wesley Small
wesley.sm...@mtvstaff.comwrote:
Good morning,
I reviewed a Solr Patch-742, which corrects an issue with the data import
process properly ingesting/committing
Thanks for the feedback, Shalin. I will investigate the backport of this 1.4
fix into 1.3. Do you know of any other subsequent patches related to the
data import and dynamic fields that I should also locate and backport as
well? I just ask in case you happen to have this information handy.
I am
I attempted a backport of Patch-742 on Solr-1.3. You can see the results
below with Hunk failures.
Is there a specific method to obtain a list of patches that occurred
specific to the data import functionality prior to PATCH-742? I suppose I
would need to ensure that these specific data
I guess you can take the trunk and comment out the contents of
SolrWriter#rollback() and it should work with Solr1.3
On Wed, Mar 11, 2009 at 3:37 PM, Wesley Small wesley.sm...@mtvstaff.com wrote:
Thanks for the feedback Shalin. I will investigate the backport of this 1.4
fix into 1.3. Do
On Wed, Mar 11, 2009 at 4:01 PM, Noble Paul നോബിള് नोब्ळ्
noble.p...@gmail.com wrote:
I guess you can take the trunk and comment out the contents of
SolrWriter#rollback() and it should work with Solr1.3
I agree. Rollback is the only feature which depends on enhancements in
Solr/Lucene
: Subject: Solr 1.3; Data Import w/ Dynamic Fields
: In-Reply-To: 5e76b0ad0903110150h3e75bb68pd3603b8da4261...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing
: The notes in the wiki seem to indicate that syntax (with multiple id nodes)
: will be supported in Solr 1.4, not 1.3 - but I guess it really just means that
: you can't combine those with a query node yet.
correct -- the wiki is pointing out that you can't combine id and
query deletes until
On Feb 28, 2009, at 5:56 PM, Stephen Weiss wrote:
Yeah honestly I don't know how it ever worked either.
my guess is that the XPP parser did not validate anything -- when we
switched to StAX it validates something...
ryan
Yeah honestly I don't know how it ever worked either.
The notes in the wiki seem to indicate that syntax (with multiple id
nodes) will be supported in Solr 1.4, not 1.3 - but I guess it really
just means that you can't combine those with a query node yet.
I'll miss the deletesPending stat,
: <delete><id>image.1</id></delete><delete><id>image.2</id></delete> etc... (one
: delete node for each image we wanted to delete)
:
: And that worked in 1.2.
that is really surprising ... it's not a legal XML doc (multiple root
nodes) so it should have been an error.
Support was added in Solr 1.3 to support
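In other words, from 1.3 on a single <delete> may carry several <id> children; a minimal sketch using the ids from the quoted message:

  <delete>
    <id>image.1</id>
    <id>image.2</id>
  </delete>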
I see filter factories for other languages like Dutch,
French, Brazilian, etc., but no tokenizer. In this scenario are we supposed
to use the standard tokenizer and the corresponding language filters?
Yes. Exactly the same as what Lucene Analyzers do.
Lucene has the analyzers for the same. how
I am not super familiar with the lucene/solr spell checking
implementations, but here is my take:
By saying to only allow more popular, you are restricting suggestions to
only those that have a higher instance frequency in the index. The score
is still by edit distance, but only terms with a
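For reference, a sketch of the relevant request parameters (the query term and counts are hypothetical):

  /select?q=memmory&spellcheck=true&spellcheck.onlyMorePopular=true&spellcheck.count=5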
-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, January 29, 2009 2:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 1.3 and spellcheck.onlyMorePopular=true
Let me try that again. I think my email client is going nuts:
I am not super familiar with the lucene/solr
Oops. Sorry - never mind - they are present under the contrib directory.
/opt/programs/solr $ find contrib -name *.java | grep Handler
contrib/dataimporthandler/src/main/java/org/apache/solr/handler/dataimport/DataImportHandlerException.java
On Fri, Dec 12, 2008 at 11:50 PM, Rakesh Sinha rakesh.use...@gmail.comwrote:
Hi -
I was planning to check more details about integrating iBatis query
resultsets with the query required for the <entity/> tags. Before I
start experimenting more along these lines, I am just curious if there
had
Trivial answer - I already have quite a few iBatis queries as part
of the project (a large consumer-facing website) that I want to
reuse.
Also - the iBatis layer already has all the db authentication tokens
/ sqlmap wired in (as part of sql-map-config.xml).
When I create the dataConfig xml
Ok makes sense. I don't think anybody has reported trying this. If you
decide to do it, it might be worth contributing back. I guess it may be more
difficult than just using plain sql queries.
On Sat, Dec 13, 2008 at 2:10 AM, Rakesh Sinha rakesh.use...@gmail.comwrote:
Trivial answer - I already
Tracy,
I think Iván de Prado's patch is the latest. Porting to 1.4-dev would be good,
too.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: SOLR lists [EMAIL PROTECTED]
To: Solr Users List solr-user@lucene.apache.org
Sent: Tuesday,
I have not looked at Field Collapsing in a long time. If someone made
an effort to bring it up to date, I'll review it.
It would be great to get Field Collapsing in 1.4
ryan
On Dec 9, 2008, at 12:46 PM, Otis Gospodnetic wrote:
Tracy,
I think Iván de Prado's patch is the latest. Porting
Otis,
If I get it working in 1.3, I'll be happy to take a shot at a patch
for 1.4.
Tracy
On Dec 9, 2008, at 12:46 PM, Otis Gospodnetic wrote:
Tracy,
I think Iván de Prado's patch is the latest. Porting to 1.4-dev
would be good, too.
Otis
--
Sematext -- http://sematext.com/ --
Thanks a lot guys for your time,
I appreciate it.
I will follow all your advice.
Yonik Seeley wrote:
On Wed, Dec 3, 2008 at 11:49 AM, sunnyfr [EMAIL PROTECTED] wrote:
Sorry the request is more :
Hi Yonik,
I've tried everything but it doesn't change anything; I also tried the
latest trunk version but nothing changed.
There is nothing I can do about the indexing ... maybe I can optimize
something before searching?
I'm using a Linux system, Apache 5.5, and the latest Solr version.
On Thu, Dec 4, 2008 at 8:13 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi Yonik,
I've tried everything but it doesn't change anything; I also tried the
latest trunk version but nothing changed.
There is nothing I can do about the indexing ... maybe I can optimize
something before
Huge thanks for your help Yonik,
I optimized the index so I will try to reduce the size ... as I explained,
I stored all the language text ...
So I will reduce my stored data.
Cheers... I will let you know :)
Yonik Seeley wrote:
On Thu, Dec 4, 2008 at 8:13 AM, sunnyfr [EMAIL PROTECTED]
Hi Yonik,
I will index my data again. Can you advise me how to optimize my data,
and tell me if you see something very wrong or bad for memory, given
that I just need to return the ID, that's it?
But I need to boost some fields ... like description ..
remove this entry from the example schema unless you need the
timestamp when it was indexed:
<field name="timestamp" type="date" indexed="true" stored="true"
       default="NOW" multiValued="false"/>
Also, only index fields you really need to search separately.
For example, if the description field is also
Ok thanks a lot,
so I can remove all this part
<field name="title" type="text" indexed="true" stored="false"/>
<field name="description" type="text" indexed="true" stored="false"/>
<field name="tag1" type="string" indexed="true" stored="false"
On Thu, Dec 4, 2008 at 11:41 AM, sunnyfr [EMAIL PROTECTED] wrote:
Ok thanks a lot,
so I can remove all this part
I wouldn't remove them if they are the source of a copyField (with the
destination being text).
Simply change to indexed=false stored=false
otherwise you may get an undefined field
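A minimal sketch of what this suggests, assuming a catch-all "text" destination field: keep the source field defined so the copyField (and incoming documents) still resolve it, but make it neither indexed nor stored.

  <field name="description" type="text" indexed="false" stored="false"/>
  <copyField source="description" dest="text"/>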
right !!!
Yonik Seeley wrote:
On Thu, Dec 4, 2008 at 11:41 AM, sunnyfr [EMAIL PROTECTED] wrote:
Ok thanks a lot,
so I can remove all this part
I wouldn't remove them if they are the source of a copyField (with the
destination being text).
Simply change to indexed=false stored=false
Hi again,
In my test, I have a maximum response time of 65 sec against an average of 3,
so some requests must be coming back with errors; for example, in my test of
50,000 requests I have around 30 requests which get back an error, which is
why the max response time is 65 sec.
I just don't get why I have this
This is my error:
Caused by: java.net.SocketException: Unexpected end of file from server
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
It's like it doesn't find the data, but it takes time to look for it?
sunnyfr wrote:
Hi again,
In my test, I have a maximum
Are you manipulating the query at all between the url like /test/
selector?cache=0&backend=solr&request=/relevance/search/D and what
gets sent to Solr? To me, those don't look like solr requests (I could
be missing something though). I'd be curious to see the actual
requests to try and let
Sorry the request is more :
/select?q=text:svr09\+tutorial+AND+status_published:1+AND+status_moderated:0+AND+status_personal:0+AND+status_explicit:0+AND+status_private:0+AND+status_deleted:0+AND+status_error:0+AND+status_read
or even I tried :
On Wed, Dec 3, 2008 at 11:49 AM, sunnyfr [EMAIL PROTECTED] wrote:
Sorry the request is more :
/select?q=text:svr09\+tutorial+AND+status_published:1+AND+status_moderated:0+AND+status_personal:0+AND+status_explicit:0+AND+status_private:0+AND+status_deleted:0+AND+status_error:0+AND+status_read
or
On Tue, Dec 2, 2008 at 12:04 PM, sunnyfr [EMAIL PROTECTED] wrote:
How can I make it faster?
There's no -go-faster-please flag ;-)
Give us the exact URL and we might be able to help figure out what part is slow.
-Yonik
Hi Matthew, Hi Yonik,
...sorry for the flag .. didn't mean to ...
Solr 1.3 / Apache 5.5
Data directory size: 7.9G
I'm using JMeter to send HTTP requests; I'm sending exactly the same ones to Solr
and Sphinx (MySQL) over HTTP.
solr
Hi Todd,
Thanks for this answer. OK, but it's not just about showing it or not in the list:
if a field is not shown but it is boosted using qf, do I need to store it?
What about a language field which needs some special configuration like stemming ...
thanks a lot for your clear answer,
I believe (someone
On Thu, Nov 13, 2008 at 10:43 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi everybody,
I don't really get when I have to reindex data or not.
I did a full import but I realised I stored too many fields which I don't
need.
So I have to change some indexed fields which are stored to not
hossman wrote:
i don't have time to really dig into the code right now, but out of
curiosity what happens when you hit http://localhost:7001/solr/admin/
and/or http://localhost:7001/solr/admin/index.jsp ?
I get the same exception when going to both of those.
I believe (someone correct me if I'm wrong) that the only fields you
need to store are those fields which you wish returned from the query.
In other words, if you will never put the field on the list of fields
(fl) to return, there is no need to store it.
It would be advantageous not to store
I'm experiencing the same java.lang.StackOverflowError problem with solr
1.3.0 on Weblogic 10.3 when accessing the admin page.
I'm using the distributed war but have added a weblogic.xml file to the
WEB-INF directory.
I get the exception when accessing http://localhost:7001/solr/admin but
: I get the exception when accessing http://localhost:7001/solr/admin but
: http://localhost:7001/solr/admin/luke works fine.
i don't have time to really dig into the code right now, but out of
curiosity what happens when you hit http://localhost:7001/solr/admin/
and/or
Hi,
I can't tell what's going on.
If somebody could help me it would be very nice.
Thanks
sunnyfr wrote:
Hi,
I don't get what's happening.
I tried to make a full import with a limit between two ids ...
everything works fine, then the time keeps running but it looks stuck and
I don't
Hi Shalin
I don't get it. I've executed:
http://com:8180/solr/video/dataimport?command=full-import
Then when I checked:
http://solr-test.adm.video.com:8180/solr/video/dataimport
it looks like documents are created:
<lst name="initArgs">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
Hi Shalin
I don't get it. I've executed:
http://com:8180/solr/video/dataimport?command=full-import
Then when I checked:
http://solr-test.adm.dailymotion.com:8180/solr/video/dataimport
it looks like documents are created:
<lst name="initArgs">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
Hi,
I'm confused by the log you have posted. There is no import command to be
seen to /dataimport. Only one request is in the log to /dataimport at the
very end. The logs say that Solr started up and some warming queries were
executed and that's it.
What do you mean by it looks stuck? Does the
Olga: noticing this thread now, are you still having this problem? did you
find a problem with your configuration?
: However I am getting stack overflow when I try to access solr/admin
: page.
...
: java.lang.StackOverflowError javax.servlet.ServletException:
...
:
Chris Hostetter wrote:
: I'm not sure if there's any reason for solr-core to declare a maven
: dependency on solr-solrj.
: When creating the POMs, I had (incorrectly) assumed that the core jar does
: not contain SolrJ classes, hence the dependency.
I consider it a totally justifiable
: 1) solr-core artifact contains org.apache.solr.client.solrj packages, and at
: the same time, the solr-core artifact depends on the solr-solrj artifact.
what you are seeing isn't specific to the maven jars, that's the way it is
in the standard release.
i believe the inclusion of solrj code
On Wed, Oct 29, 2008 at 9:11 PM, Chris Hostetter
[EMAIL PROTECTED]wrote:
i believe the inclusion of solrj code in the core jar is intentional, the
core jar is intended (as i understand it) to encapsulate everything needed
to run Solr (and because of the built in distributed search features,
: I'm not sure if there's any reason for solr-core to declare a maven
: dependency on solr-solrj.
: When creating the POMs, I had (incorrectly) assumed that the core jar does
: not contain SolrJ classes, hence the dependency.
I consider it a totally justifiable assumption. the current
are you running the packaged .war directly? or something custom? Did
it ever work?
Is anyone else running successfully on weblogic?
On Oct 24, 2008, at 5:10 PM, Dadasheva, Olga wrote:
Hi,
I run Solr 1.3 in Weblogic 10.3 Java 6;
I have a single core application deployed to the same
Sent: Friday, October 24, 2008 5:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 1.3 stack overflow when accessing solr/admin page
are you running the packaged .war directly? or something custom? Did
it ever work?
Is anyone else running successfully on weblogic?
On Oct 24, 2008
Hi,
About the second point, it was my mistake (source dependencies problem
in eclipse).
--
Renaud Delbru
Renaud Delbru wrote:
Hi,
I am using the Solr 1.3 maven artifacts from [1]. It seems that these
artifacts are not correct. I have noticed that:
1) solr-core artifact contains