On Thu, 2007-05-10 at 10:05 +0100, Kainth, Sachin wrote:
unsubscribe
Hi Sachin,
you need to send to a different mailing address:
[EMAIL PROTECTED]
HTH
salu2
--
Thorsten Scherler thorsten.at.apache.org
Open Source Java consulting, training
but the index file size has not changed and maxDoc has not changed.
2007/5/10, Nick Jenkin [EMAIL PROTECTED]:
Hi James,
As I understand it, numDocs is the number of documents in your index,
and maxDoc is the most documents you have ever had in your index.
You currently have no documents in your index by
On 5/10/07, James liu [EMAIL PROTECTED] wrote:
I tried, and it shows me this error information:
Solr could support a Lucene 1.4.3 index if the schema was configured
to match it.
I see the following buried in your logs:
java.lang.RuntimeException: Can't find resource 'solrconfig.xml'
-Yonik
I have written a custom response writer and added the response writer to
solrconfig.xml
When I run a program I can see the custom response writer is initialized,
but when I run a search with the custom writer's name as the wt parameter
the search is executed but the response writer is not
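For reference, the wt parameter on the search URL is what selects the writer. A minimal sketch, assuming a writer registered under the name jdbc in solrconfig.xml (the echo just prints the request you would issue):

```shell
# Sketch: select a custom response writer via the wt parameter.
# "jdbc" is an assumed name; it must match the name attribute of the
# queryResponseWriter entry registered in solrconfig.xml.
query_url="http://localhost:8983/solr/select?q=white&wt=jdbc"
echo "curl '$query_url'"
```

If the name doesn't match what was registered, Solr quietly falls back to the default writer, which would look exactly like this symptom.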
I'm trying to set up a system to have very low index latency (1-2
seconds) and one of the javadocs intrigued me:
DirectUpdateHandler2 implements an UpdateHandler where documents are
added directly to the main Lucene index as opposed to adding to a
separate smaller index
The plain
On 5/10/07, Will Johnson [EMAIL PROTECTED] wrote:
I'm trying to set up a system to have very low index latency (1-2
seconds) and one of the javadocs intrigued me:
DirectUpdateHandler2 implements an UpdateHandler where documents are
added directly to the main Lucene index as opposed to adding to
I guess I was more concerned with doing the frequent commits and how
that would affect the caches. Say I have 2M docs in my main index but I
want to add docs every 2 seconds all while doing queries. If I do
commits every 2 seconds I basically lose any caching advantage and my
faceting
On 5/10/07, Will Johnson [EMAIL PROTECTED] wrote:
I guess I was more concerned with doing the frequent commits and how
that would affect the caches. Say I have 2M docs in my main index but I
want to add docs every 2 seconds all while doing queries. If I do
commits every 2 seconds I basically
I believe in Lucene, at least, deleting documents only marks them for
deletion. The actual delete happens only after closing the IndexReader.
Not sure about Solr.
Ajanta.
James liu wrote:
but the index file size has not changed and maxDoc has not changed.
2007/5/10, Nick Jenkin [EMAIL PROTECTED]:
Hi
What about issuing separate commits to the index on a regularly
scheduled basis? For example, you add documents to the index every 2
seconds, or however often, but these operations don't commit. Instead,
you have a cron'd script or something that just issues a commit every 5
or 10 minutes or
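That cron'd-commit idea can be sketched in a couple of lines (the URL and port are the stock example values; adjust for a real deployment):

```shell
# Sketch: documents are posted without <commit/>; cron fires the
# commit on a fixed schedule instead. Stock example URL assumed.
COMMIT="curl -s http://localhost:8983/solr/update --data-binary '<commit/>' -H 'Content-type:text/xml'"
# The crontab entry for a commit every 5 minutes:
echo "*/5 * * * * $COMMIT"
```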
The problem is I want the newly added documents to be made searchable
every 1-2 seconds so I need the commits. I was hoping that the caches
could be stored/tied to the IndexSearcher then a MultiSearcher could
take advantage of the multiple sub indexes and their respective caches.
I think the
On 5/10/07, Ajanta Phatak [EMAIL PROTECTED] wrote:
I believe in lucene at least deleting documents only marks them for
deletion. The actual delete happens only after closing the IndexReader.
Not sure about Solr
Closing an IndexReader only flushes the list of deleted docids to the
index... it
On 5/10/07, Debra [EMAIL PROTECTED] wrote:
I have written a custom response writer and added the response writer to
solrconfig.xml
When I run a program I can see the custom response writer is initialized,
but when I run a search with the custom writer's name as the wt parameter
the search
Yes, that is possible, but we also monitor Apache, Tomcat, the JVM, and
OS through JMX and other live monitoring interfaces. Why invent a real-time
HTTP log analysis system when I can fetch /search/stats.jsp at any time?
By number of rows fetched, do you mean number of documents matched?
The log
After writing my 3rd parser in my third scripting language in as many
months to go from unix timestamps to Solr Time (ISO 8601) I have to
ask: shouldn't the date/time field type be more resilient? I assume
there's a good reason that it's 8601 internally, but certainly it
would be excellent for
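In the meantime the conversion itself is a one-liner; a sketch using GNU date (BSD/macOS spells it `date -u -r "$ts"`):

```shell
# Convert unix epoch seconds to the ISO-8601 / Zulu form Solr's
# DateField expects. GNU date syntax; 1178812800 is an arbitrary
# example timestamp.
ts=1178812800
iso=$(date -u -d "@$ts" +%Y-%m-%dT%H:%M:%SZ)
echo "$iso"   # 2007-05-10T16:00:00Z
```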
I don't know if this helps, but...
Do *all* your queries need to include the fast updates? I have a setup
where there are some cases that need the newest stuff but most cases can
wait 5 mins (or so)
In that case, I have two solr instances pointing to the same index
files. One is used for
On 5/10/07, Brian Whitman [EMAIL PROTECTED] wrote:
After writing my 3rd parser in my third scripting language in as many
months to go from unix timestamps to Solr Time (ISO 8601) I have to
ask: shouldn't the date/time field type be more resilient? I assume
there's a good reason that it's 8601
You can get at some of this functionality in the built-in xslt 1.0
engine (Xalan) by using the e-xslt date-time extensions: see
http://exslt.org/date/index.html, and for Xalan's implementation see
http://xml.apache.org/xalan-j/extensionslib.html#exslt . There are some
examples here:
You can get at some of this functionality in the built-in xslt 1.0
engine (Xalan) by using the e-xslt date-time extensions: see
http://exslt.org/date/index.html, and for Xalan's implementation see
http://xml.apache.org/xalan-j/extensionslib.html#exslt .
The exslt stuff looks good, thanks! I'll
This is from the log:
...
INFO: adding queryResponseWriter
jdbc=com.lss.search.request.JDBCResponseWriter
10/05/2007 21:11:39 com.lss.search.request.JDBCResponseWriter init
INFO: Init JDBC response writer // This is added from the init of the
class to see that it's actually finding the right
: It's more than string processing, anyway. I would want to convert the
: Solr Time 2007-03-15T00:41:52Z to March 15th, 2007 in a web app.
: I'd also like to say 'Posted 3 days ago'. In my vision of things,
: that work is done on Solr's side. (The former case with a strftime
: type formatter in
: INFO: adding queryResponseWriter
: jdbc=com.lss.search.request.JDBCResponseWriter
: 10/05/2007 21:11:44 org.apache.solr.core.SolrCore execute
: INFO: null jdsn=4&start=0&q=white&wt=jdbc&qt=standard&rows=90 0 1442
that's very strange ... the only thing that jumps out at me is the null
there where
On May 10, 2007, at 2:30 PM, Chris Hostetter wrote:
Questions like these are why I'm glad Solr currently keeps it
simple and
makes people deal in absolutes .. less room for confusion :)
I get all that, thanks for the great explanation.
I imagine most of my problems can be solved with a
On 5/10/07, Brian Whitman [EMAIL PROTECTED] wrote:
On May 10, 2007, at 2:30 PM, Chris Hostetter wrote:
Questions like these are why I'm glad Solr currently keeps it
simple and
makes people deal in absolutes .. less room for confusion :)
I get all that, thanks for the great explanation.
I
BTW,
The Simple Example Install section in
http://wiki.apache.org/solr/SolrTomcat
leaves the unzipped directory apache-solr-nightly-incubating
intact, but this is not needed after copying the
solr.war and the example solr directory, is it?
Can I edit the instructions to insert:
rm -r
I'm trying to search an index of docs which have text fields in Arabic,
using XSL writer (wt=xslt&tr=example.xsl). But the Arabic text gets
all garbled. Is XSL writer known to work for Arabic text? Is anybody
using it?
-kuro
In example.xsl change the output type
<xsl:output media-type="text/html"/>
to
<xsl:output media-type="text/html; charset=UTF-8" encoding="UTF-8"/>
And see if that helps. I had the same problem (different language.)
If this works we should file a JIRA to fix it up in trunk.
On May 10, 2007,
: The right approach for more flexible date parsing is probably to add
: more functionality to the date field and configure via optional
: attributes.
Adding configuration options to DateField seems like it might ultimately
be the right choice for changing the *internal* format, but assuming we
(In general a DateTranslatingTokenFilter class would be a pretty cool
addition to Lucene, it could take as constructor args two DateFormatters (one
for parsing the incoming tokens, and one for formatting the outgoing
If this happens, it would be nice (perhaps overkill) to have a chronic
input
hossman_lucene wrote:
can you clarify:
1) which version of Solr you are using (the Solr Implementation Version
from /admin/registry.jsp gives the best answer)
...
-Hoss
Just downloaded the latest nightly build and voilà, it's back on track (with
the other bugs...)
--
Regarding Hoss's points about the internal format, resolution of
date-times, etc.: maybe a good starting point would be to implement the
date-time algorithms of XML Schema
(http://www.w3.org/TR/xmlschema-2/#isoformats), where these behaviors
are spelled out in reasonably precise terms. There must
If my memory is correct, UTF-8 has been the default encoding per the
XML specification from a very early stage. If the XML parser is not
defaulting to UTF-8 in the absence of the encoding attribute, that means
the XML parser has a bug, and the code should be corrected.
(I don't have an objection to add
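Concretely, an add document without an encoding attribute must be parsed as UTF-8; a minimal sketch (field values are placeholders):

```xml
<?xml version="1.0"?>
<!-- no encoding= attribute: a conforming parser must assume UTF-8
     (or UTF-16 when a byte-order mark is present) -->
<add>
  <doc>
    <field name="id">1</field>
    <field name="title">Grüße from a UTF-8 document</field>
  </doc>
</add>
```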
Though, isn't there a recent patch in JIRA to allow multiple indices
under a single Solr instance?
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: Yonik Seeley [EMAIL PROTECTED]
To:
Yes, coordination between the main index searcher, the index writer,
and the index reader needed to delete other documents.
Can you point me to any documentation/code that describes this
implementation?
That's weird... I've never seen that.
The lucene write lock is only obtained when the
Got it. Thanks Yonik.
2007/5/10, Yonik Seeley [EMAIL PROTECTED]:
On 5/10/07, Ajanta Phatak [EMAIL PROTECTED] wrote:
I believe in lucene at least deleting documents only marks them for
deletion. The actual delete happens only after closing the IndexReader.
Not sure about Solr
Closing an
Hello all,
I have tested using post.sh in the example directory to add xml documents into
Solr. It works when I add them one by one.
But when I have a lot of .xml files to be posted (say about 500-1000 files) and
I wrote a shell script to call post.sh one by one, I found those xml files are
not
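A sketch of that kind of batch loop (echo stands in for the real calls, and the filenames are placeholders):

```shell
# Sketch: post many files in one pass. echo stands in for the real
# ./post.sh call; filenames are placeholders. Every document must
# carry a unique id, or later adds silently replace earlier ones.
count=0
for f in doc1.xml doc2.xml doc3.xml; do
  echo "./post.sh $f"
  count=$((count + 1))
done
echo "posted $count files"
```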
You should know that id must be a unique value.
2007/5/11, David Xiao [EMAIL PROTECTED]:
Hello all,
I have tested using post.sh in the example directory to add xml documents
into Solr. It works when I add them one by one.
But when I have a lot of .xml file to be posted (say about 500-1000 files)
and I wrote a
that section was never really intended to be *the* set of instructions for
installing Solr on Tomcat, just the *simplest* set of things you could do
to see it working; many additional things could be done (besides deleting
the unzipped dir). If we start listing more things, people may get
: Closing an IndexReader only flushes the list of deleted docids to the
: index... it doesn't actually delete them. Deletions only happen when
: the deleted docs segment is involved in a merge, or when an optimize
: is done (which is a merge of all segments).
just to clarify slightly because
The boost is a way to adjust the weight of that field, just like you
adjust the weight of any other field. If the boost is dominating the
score, reduce the weight and vice versa.
wunder
On 5/10/07 9:22 PM, Chris Hostetter [EMAIL PROTECTED] wrote:
: Is this correct? bf is a boosting
: want to add docs every 2 seconds all while doing queries. If I do
: commits every 2 seconds I basically lose any caching advantage and my
: faceting performance goes down the tube. If however, I were to add
: things to a smaller index and then roll it into the larger one every ~30
: minutes