I am implementing the BDBUpdateHandler such that it would save updates into
BDB, and then periodically add them into the Lucene index. This would make
rollback possible.
- Original Message
From: Yonik Seeley [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Friday, June 9,
From: jason rutherglen [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Saturday, June 10, 2006 4:06:29 PM
Subject: Re: Rollback
I am implementing the BDBUpdateHandler such that it would save updates into
BDB, and then periodically add them into the Lucene index. This would make
rollback possible.
When doing a copyField into a text field that is supposed to be stemmed I'm not
seeing the stemming occur.
These are the relevant lines of XML from the schema.xml:
<fieldtype name="text" class="org.apache.solr.schema.TextField">
<analyzer>
<tokenizer
By looking at what is stored. Has this worked for others?
- Original Message
From: Yonik Seeley [EMAIL PROTECTED]
To: solr-user@lucene.apache.org; jason rutherglen [EMAIL PROTECTED]
Sent: Friday, August 25, 2006 6:35:43 PM
Subject: Re: Possible bug in copyField
On 8/25/06, jason
, at 1:41 PM, jason rutherglen wrote:
Ok... Looks like it's related to using SpanQueries (I hacked on the
XML query code). I remember a discussion about this issue. Not
something Solr specifically supports so my apologies. However if
anyone knows about this feel free to post something
Hello,
I am interested in using NumberUtils to encode any number into a sortable
double. Once encoded will the lexicographic sorting work on any precision?
If I store 47. and 48.22 may I assume that the order will be
correct? I ask this question because doubleToRawLongBits is called to
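For anyone curious, the bit trick behind doubleToRawLongBits-based sortable encodings can be sketched like this (illustrative only, not the actual NumberUtils source): raw bits already order positive doubles correctly, but negatives sort backwards and above positives, so the encoding remaps them.

```java
public class SortableDouble {
    public static long toSortableLong(double d) {
        long bits = Double.doubleToRawLongBits(d);
        // For negative values (sign bit set), flip the other 63 bits so
        // larger magnitudes become smaller longs; signed comparison of the
        // result then matches numeric order across the full double range.
        return bits ^ ((bits >> 63) & 0x7fffffffffffffffL);
    }

    public static void main(String[] args) {
        // Once encoded this way (and then to a fixed-width string),
        // lexicographic order matches numeric order at any precision.
        System.out.println(toSortableLong(47.22) < toSortableLong(48.22)); // true
        System.out.println(toSortableLong(-2.5) < toSortableLong(-1.5));   // true
    }
}
```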
interested in realtime
search to get involved as it may be something that is difficult for
one company to have enough resources to implement to a production
level. I think this is where open source collaboration is
particularly useful.
Cheers,
Jason Rutherglen
[EMAIL PROTECTED]
On Wed, Sep 3, 2008 at 4
Hello,
There are a few features I would like to see in SOLR going forward and
I am interested in finding out what other folks thought about them to
get a priority list. I believe there are many features that Google
and FAST have that SOLR and Lucene will want to implement in future
releases.
1.
Hello Ryan,
SQL database such as H2
Mainly to offer joins and be able to perform hierarchical queries.
Also any other types of queries a hybrid SQL search system would
offer. This is something that is best built into SOLR rather than
Lucene. It seems like a lot of the users of SOLR work with
If the configuration code is going to be rewritten then I would like
to see the ability to dynamically update the configuration and schema
without needing to reboot the server. Also I would like the
configuration classes to just contain data and not have so many
methods that operate on the
16, 2008 at 10:12 AM, Jason Rutherglen
[EMAIL PROTECTED] wrote:
SQL database such as H2
Mainly to offer joins and be able to perform hierarchical queries.
Can you define or give an example of what you mean by hierarchical queries?
A downside of any type of cross-document queries (like joins
Hi Yonik,
One approach I have been working on that I will integrate into SOLR is
the ability to use serialized objects for the analyzers so that the
schema can be defined on the client side if need be. The analyzer
classes will be dynamically loaded. Or there is no need for a schema
and plain
at 1:27 PM, Jason Rutherglen
[EMAIL PROTECTED] wrote:
If the configuration code is going to be rewritten then I would like
to see the ability to dynamically update the configuration and schema
without needing to reboot the server.
Exactly. Actually, multi-core allows you to instantiate
with the rsync based batch replication.
On Wed, Sep 17, 2008 at 2:21 PM, Yonik Seeley [EMAIL PROTECTED] wrote:
On Wed, Sep 17, 2008 at 1:27 PM, Jason Rutherglen
[EMAIL PROTECTED] wrote:
If the configuration code is going to be rewritten then I would like
to see the ability to dynamically update
off in production in servlet containers imo as well.
This can really be such a pain in the ass on a live site... someone touches
web.xml and the app server reboots... *shudder*. Seen it, don't dig it.
Jason Rutherglen wrote:
This should be done. Great idea.
On Wed, Sep 17, 2008 at 3:41 PM
:56 AM, Mark Miller [EMAIL PROTECTED] wrote:
Dynamic changes are not what I'm against...I'm against dynamic changes that
are triggered by the app noticing that the config has changed.
Jason Rutherglen wrote:
Servlets is one thing. For SOLR the situation is different. There
are always small
PROTECTED] wrote:
why restart solr? reloading a core may be sufficient.
SOLR-561 already supports this
-
On Thu, Sep 18, 2008 at 5:17 PM, Jason Rutherglen
[EMAIL PROTECTED] wrote:
Servlets is one thing. For SOLR the situation is different. There
are always small changes people want
The question I have is what is the optimal approach for integrating
realtime into SOLR? What classes should be extended or created?
On Sat, Sep 27, 2008 at 9:40 AM, Otis Gospodnetic
[EMAIL PROTECTED] wrote:
Solr today is not suited for real-time search (seeing newly added docs in
search
Tom,
Yes, we've (Biz360) indexed 3 billion and upwards... If indexing
is the issue (or rather re-indexing) we used SOLR-1301 with
Hadoop to re-index efficiently (ie, in a timely manner). For
querying we're currently using the out of the box Solr
distributed shards query mechanism, which is hard
How does one do this? UpdateHandler doesn't override the init method
like SearchHandler.
I'm using the following data-config.xml with DataImportHandler. I've
never used embedded entities before however I'm not seeing the comment
show up in the document... I'm not sure what's up.
<dataConfig>
<dataSource type="JdbcDataSource" name="ch"
driver="com.mysql.jdbc.Driver"
.
From: Jason Rutherglen [via Lucene]
[mailto:ml-node+740624-966329660-124...@n3.nabble.com]
Sent: Wednesday, April 21, 2010 10:15 AM
To: caman
Subject: Problem with DataImportHandler and embedded entities
I'm using the following data-config.xml with DataImportHandler. I've
never used embedded
that is the issue though but worth a try
Select id, updated,( SELECT comment FROM ratings WHERE app = appParent.id)
as comment FROM applications appParent limit 10
From: Jason Rutherglen [via Lucene]
[mailto:ml-node+740680-1955771337-124...@n3.nabble.com]
Sent: Wednesday, April 21, 2010 10:33
I think it's working, it was the lack of the seemingly innocuous
sub-entity pk="application_id". After adding that I'm seeing some
data returned.
On Wed, Apr 21, 2010 at 10:44 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Something's off, for each row, it's performing the following 5
The other issue now is full-import is only importing 1 document, and
that's all. Despite no limits etc... Odd...
On Wed, Apr 21, 2010 at 10:48 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I think it's working, it was the lack of the seemingly innocuous
sub-entity pk="application_id"
the merge settings (and maybe the MergeScheduler) to ensure that your
pathological worst case scenario (ie: a really big merge) doesn't block
your commits.
ConcurrentMergeScheduler should be handling the thread priorities more
intelligently in Lucene 3.1.
I have an int ratings field that I want to boost from within the
query. So basically want to use the
http://wiki.apache.org/solr/FunctionQuery#scale function to
scale the ratings to values 1..5, then within the actual query
(or otherwise), boost the scaled rating value. How would I go
about doing
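A sketch of one way this might be wired up, assuming the dismax bf (boost function) parameter and the scale() function from the wiki page above; ratings is the field from the question:

```text
q=ipod&defType=dismax&qf=text&bf=scale(ratings,1,5)
```

Alternatively, a multiplicative boost via the boost query parser, e.g. q={!boost b=scale(ratings,1,5)}ipod, may be closer to what you want, since bf is additive.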
Does this work? When trying to display with a URL such as
solr/sandbox/admin/file/?file=/mnt/solr/schema.xml from the Solr
admin console, the following error occurs:
type Status report
message Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml]
description The request sent by
Maybe we can add an error message to ShowFileRequestHandler that
explains to the user that displaying an absolute path doesn't work?
On Mon, Apr 26, 2010 at 1:13 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Does this work? When trying to display with a URL such as
:
Tom,
Interesting, can you post your findings after you've found them? :)
Jason
On Tue, Apr 27, 2010 at 2:33 PM, Burton-West, Tom tburt...@umich.edu wrote:
Is it possible to use the NoOpMergePolicy (
https://issues.apache.org/jira/browse/LUCENE-2331 ) from Solr?
We have very large indexes
If I create a new core on a Solr master, is there a way to instruct a
Solr slave to replicate the new core?
I guess I didn't explain it properly. I want to create a core on
the master, and then have N slaves also (aka replicate) create
those new core(s) on the slave servers, then of course, begin to
replicate (yeah, got that part). There doesn't appear to be
anything today that does this, it's unclear
wrote:
On Wed, Apr 28, 2010 at 10:14 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I guess I didn't explain it properly. I want to create a core on
the master, and then have N slaves also (aka replicate) create
those new core(s) on the slave servers, then of course, begin to
replicate
Multiple spellcheckers may be specified by name in solrconfig, such as
<str name="name">jarowinkler</str>, however how does one make a
request to this particular spellchecker, as opposed to the one named
default?
Ahmet, thanks, however it's un-intuitive, it should be spellchecker.name?
On Wed, Apr 28, 2010 at 12:01 PM, Ahmet Arslan iori...@yahoo.com wrote:
Multiple spellcheckers may be specified by name in solrconfig, such as
<str name="name">jarowinkler</str>, however how does one make a
request to this
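For anyone finding this thread later: assuming the standard SpellCheckComponent, the request-time selector is the spellcheck.dictionary parameter (dictionary name taken from the question), e.g.:

```text
/select?q=jaro&spellcheck=true&spellcheck.dictionary=jarowinkler
```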
I'm not sure even the ZooKeeper setup would
include something like this.
- Jon
On Apr 28, 2010, at 10:14 AM, Jason Rutherglen wrote:
I guess I didn't explain it properly. I want to create a core on
the master, and then have N slaves also (aka replicate) create
those new core(s
Is the cleanup of indexes using Solr 1.4 Replication documented
somewhere? I can't find any information regarding this at:
http://wiki.apache.org/solr/SolrReplication
Too many snapshot indexes are being left around, and so they need to
be cleaned up.
The main issue is if you're using facets, which are currently
inefficient for the realtime use case because they're created on the
entire set of segment/readers. Field caches in Lucene are per segment
and so don't have this problem.
On Tue, May 25, 2010 at 4:09 AM, Grant Ingersoll
Grant, the link's broken?
http://blogs.apache.org/conferences/date/20100428
Unexpected Exception
Status Code 500
Message You have closed the EntityManager, though the persistence
context will remain active until the current transaction commits.
Type
Exception Roller has
The insert shards code is as follows:
ModifiableSolrParams modParams = new ModifiableSolrParams(params);
modParams.set("shards", shards);
rb.req.setParams(modParams);
Where shards is a valid single shard pseudo URL.
Stacktrace:
HTTP Status 500 - null java.lang.NullPointerException at
Kris,
That wouldn't do anything because all merging occurs on the master.
Jason
On Thu, Jun 3, 2010 at 6:25 AM, Kris Jack mrkrisj...@gmail.com wrote:
Hi everyone,
I have set up a master-slave configuration where the master machine will be
used primarily for indexing while the slave
What is the best practice? Perhaps we can amend the article at
http://www.lucidimagination.com/blog/2009/05/13/exploring-lucene-and-solrs-trierange-capabilities/
to include the recommendation (ie, dates are commonly unique).
I'm assuming using a long is the best choice.
We (Attensity Group) have been using SOLR-1301 for 6+ months now
because we have a ready Hadoop cluster and need to be able to re/index
up to 3 billion docs. I read the various emails and wasn't sure what
you're asking.
Cheers...
On Tue, Jun 22, 2010 at 8:27 AM, Neeb muneeba...@hotmail.com
If you do distributed indexing correctly, what about updating the documents
and what about replicating them correctly?
Yes, you can do that and it'll work great.
On Mon, Jul 5, 2010 at 7:42 AM, MitchK mitc...@web.de wrote:
I need to revive this discussion...
If you do distributed indexing
What's the fastest way to obtain the total number of docs from the
index? (The Luke request handler takes a long time to load so I'm
looking for something else).
Sorry, like the subject, I mean the total number of terms.
On Mon, Jul 26, 2010 at 4:03 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
What's the fastest way to obtain the total number of docs from the
index? (The Luke request handler takes a long time to load so I'm
looking
Tom,
The total number of terms... Ah well, not a big deal, however yes the
flex branch does expose this so we can show this in Solr at some
point, hopefully outside of Solr's Luke impl.
On Tue, Jul 27, 2010 at 9:27 AM, Burton-West, Tom tburt...@umich.edu wrote:
Hi Jason,
Are you looking for
I'm having a different issue with the EdgeNGram technique described
here:
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
That is, one-word queries such as q=app on the query_text field work fine,
however q="app mou" does not. Why would this be or is there
Analysis returns "app mou".
On Thu, Sep 2, 2010 at 6:12 PM, Lance Norskog goks...@gmail.com wrote:
What does analysis.jsp show?
On Thu, Sep 2, 2010 at 5:53 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I'm having a different issue with the EdgeNGram technique described
here:
http
To clarify, the query analyzer returns that. Variations such as
"apple mou" also do not return anything. Maybe Jay can comment and
then we can amend the article?
On Fri, Sep 3, 2010 at 6:12 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Analysis returns app mou.
On Thu, Sep 2, 2010
, Results were
cached in the mod_perl servers.
Regards,
Dan
On Thu, Sep 2, 2010 at 1:53 PM, Jason Rutherglen jason.rutherg...@gmail.com
wrote:
I'm having a different issue with the EdgeNGram technique described
here:
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular
=mou&facet.field=term_suggest&qt=basic&wt=javabin&rows=0&version=1
Jason Rutherglen wrote:
To clarify, the query analyzer returns that. Variations such as
apple mou also do not return anything. Maybe Jay can comment and
then we can amend the article?
On Fri, Sep 3, 2010 at 6:12 AM, Jason
Katta can be used for managing shards that are built and live in HDFS.
On Fri, Sep 3, 2010 at 10:29 AM, thiseye this...@gmail.com wrote:
I'm investigating using Lucene for a project to index a massive HBase
database. I was looking at using Katta to distribute the index because
people have
Peter,
Are you using per-segment faceting, eg, SOLR-1617? That could help
your situation.
On Sun, Sep 12, 2010 at 12:26 PM, Peter Sturge peter.stu...@gmail.com wrote:
Hi,
Below are some notes regarding Solr cache tuning that should prove
useful for anyone who uses Solr with frequent commits
,
Peter
On Mon, Sep 13, 2010 at 12:05 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Peter,
Are you using per-segment faceting, eg, SOLR-1617? That could help
your situation.
On Sun, Sep 12, 2010 at 12:26 PM, Peter Sturge peter.stu...@gmail.com
wrote:
Hi,
Below are some notes
<fieldType name="text_shingle4" class="solr.TextField"
positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.HTMLStripWhitespaceTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
<filter class="solr.ShingleFilterFactory"
Ron,
IO throttling was discussed a while back however I don't think it was
implemented. For systems that search on indexes where indexing is
happening on the same server, reducing IO contention would be useful.
Here is a somewhat similar issue for merging segments:
Here's the remainder of the discussion, albeit, brief:
http://www.lucidimagination.com/search/document/d6fa7b3241ed11b8/throttling_merges#9df776e79da71044
On Sun, Sep 19, 2010 at 12:04 AM, Ron Mayer r...@0ape.com wrote:
My system which has documents being added pretty much
continually seems
This may be what you're looking for.
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
On Wed, Sep 22, 2010 at 4:41 AM, Arunkumar Ayyavu
arunkumar.ayy...@gmail.com wrote:
It's been over a week since I started learning Solr. Now, I'm using the
Marc,
What do you mean by Katta's ranking algorithm? If you use
SOLR-1395's search request system that traverses Hadoop RPC,
it's simply using what Solr offers today in terms of distributed
search (i.e. no distributed IDF). Instead of requests being
serialized into an HTTP call, they are
I have a fresh checkout from trunk, cd example, after running java
-Dsolr.solr.home=core -jar start.jar,
http://localhost:8983/solr/admin yields a 404 error.
/admin/
-Jay
http://www.lucidimagination.com
On Fri, Oct 9, 2009 at 1:17 PM, Jason Rutherglen jason.rutherg...@gmail.com
wrote:
I have a fresh checkout from trunk, cd example, after running java
-Dsolr.solr.home=core -jar start.jar,
http://localhost:8983/solr/admin yields a 404 error.
in the structure
core/solr/conf? If it has multiple subcores, there is no
solr/admin.jsp. Instead there is a main solr/ and a
solr/core1/admin.jsp etc.
Try running -Dsolr.solr.home=example-DIH/solr from the example/ directory.
On Fri, Oct 9, 2009 at 1:17 PM, Jason Rutherglen
jason.rutherg
http://www.lucidimagination.com
On Fri, Oct 9, 2009 at 1:38 PM, Jason Rutherglen jason.rutherg...@gmail.com
wrote:
Jay,
I tried that as well, still nothing.
When I run: java -Dsolr.solr.home=solr -jar start.jar
I see:
2009-10-09 13:37:04.887::INFO: Logging to STDERR via
Try this in solrconfig.xml:
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
  <int name="maxThreadCount">1</int>
</mergeScheduler>
Yes you can stop the process mid-merge. The partially merged files
will be deleted on restart.
We need to update the wiki?
On Mon, Oct 12, 2009 at
Gio,
Also, is there any concrete way to know when the merge is actually complete
(aside from profiling the machine)?
This would be a great feature to add to the Solr web UI. The ability
to monitor merges in progress and log how much time each used.
-J
are not available
yet), is there any sort of guess as to when these features might become
available?
thanks,
Don
On Wed, Oct 14, 2009 at 2:13 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Dan,
For automatic failover there are 2 wiki pages that may be helpful,
however both
guessing the latter)?
If the latter (ZooKeeperIntegration and KattaIntegration are not available
yet), is there any sort of guess as to when these features might become
available?
thanks,
Don
On Wed, Oct 14, 2009 at 2:13 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Dan
Hi Pravin,
You'll need to setup a Hadoop cluster which is independent of
SOLR-1301. 1301 is for building Solr indexes only, so there
isn't a master and slave. After building the indexes one needs
to provision the indexes to Solr servers. In my case I only have
slaves because I'm not incrementally
what are the steps for this.
Shall I just have copy solr war to Hadoop cluster or what else ?
(Note: I have two setup :
1. Hadoop setup
2. Solr setup)
So to run distributed indexing how to bridge these two setup?
Thanks
-Pravin
-Original Message-
From: Jason Rutherglen
It seems like no, and should be an easy change. I'm putting newlines
after the commas so the large shards list doesn't scroll off the
screen.
If a filter query matches nothing, then no additional query should be
performed and no results returned? I don't think we have this today?
:07 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Mon, Oct 19, 2009 at 2:55 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
If a filter query matches nothing, then no additional query should be
performed and no results returned? I don't think we have this today
Ok, thanks, new Lucene 2.9 features.
On Mon, Oct 19, 2009 at 2:33 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Mon, Oct 19, 2009 at 4:45 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Yonik,
this is a fast operation anyway
Can you elaborate on why this is a fast operation
I couldn't find anything, however I'm thinking of starting one.
I simply altered solr.xml and changed it to persistent=true, then
all subsequent actions were saved.
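For reference, the attribute sits on the root element of the pre-SolrCloud-style solr.xml; a minimal sketch (the core name and instanceDir here are placeholders):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
  </cores>
</solr>
```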
Thanks
2009/11/11 Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com:
On Thu, Nov 12, 2009 at 3:13 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
It looks like our core admin wiki
Ah, thanks for the tip about switching out the jdk jar with the
log4j jar. I think I was running into this issue and couldn't
figure out why Solr logging couldn't be configured when running
inside Hadoop which uses log4j, maybe this was the issue?
On Wed, Nov 18, 2009 at 9:11 AM, Ryan McKinley
Rodrigo,
It sounds like you're asking about near realtime search support,
I'm not sure. So here's few ideas.
#1 How often do you need to be able to search on the latest
updates (as opposed to updates from lets say, 10 minutes ago)?
To topic #2, Solr provides master slave replication. The
If I've got multiple cores on a server, I guess I need multiple
rsyncd's running (if using the shell scripts)?
, 2009 at 11:13 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Tue, Dec 8, 2009 at 11:48 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
If I've got multiple cores on a server, I guess I need multiple
rsyncd's running (if using the shell scripts)?
Yes. I'd highly recommend
/happened
On Mon, Dec 7, 2009 at 11:13 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Tue, Dec 8, 2009 at 11:48 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
If I've got multiple cores on a server, I guess I need multiple
rsyncd's running (if using the shell scripts
I assume there isn't one? Anything in the works?
Seems like an ease of use thing to be able to click to shards from the admin UI?
,
Chris
On 12/9/09 10:31 PM, Shalin Shekhar Mangar shalinman...@gmail.com wrote:
On Thu, Dec 10, 2009 at 11:52 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I assume there isn't one? Anything in the works?
Nope.
--
Regards,
Shalin Shekhar Mangar
-Xmx64m JVM parameter.
Is going to be far too low. For example I always start at 1 GB and
move up from there.
On Tue, Dec 22, 2009 at 4:35 AM, c82famo
ext_amouroux.frede...@agora.msa.fr wrote:
Hi,
I'm facing some OutOfMemory issues with SOLR.
Tomcat is started with a -Xmx64m JVM
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters/Kstem
Which is a cool algo, however the MIT link is totally down... Is it up
sometimes or is it discontinued in favor of the Lucid version (which
may or may not be open source)?
Hi, sorry for the somewhat inane question:
I setup replication request handler on the master however I'm not
seeing any replicatable indexes via
http://localhost:8080/solr/main/replication?command=indexversion
Queries such as *:* yield results on the master (so I assume the
commit worked). The
of events to replicate after?
-Yonik
http://www.lucidimagination.com
On Mon, Jan 11, 2010 at 12:25 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Hi, sorry for the somewhat inane question:
I setup replication request handler on the master however I'm not
seeing any replicatable indexes
There's a connect exception on the client, however I'd expect this to
show up in the slave replication console (it's not). Is this correct
behavior (i.e. not showing replication errors)?
On Mon, Jan 11, 2010 at 9:50 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Yonik,
I added startup
Hmm...Even with the IP address in the master URL on the slave, the
indexversion command to the master mysteriously doesn't show the
latest commit... Totally freakin' bizarre!
On Tue, Jan 12, 2010 at 10:53 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
There's a connect exception
It was having multiple replicateAfter values... Perhaps a bug, though
I probably won't spend time investigating the why right now, nor
reproducing in the test cases.
On Tue, Jan 12, 2010 at 11:10 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Hmm...Even with the IP address in the master
Hi Reddy,
What's the limitation you're running into?
Jason
On Thu, Jan 28, 2010 at 2:15 AM, V SudershanReddy vsre...@huawei.com wrote:
Hi,
Can we Integrate solr with katta?
In order to overcome the limitations of Solr in distributed search, I need
to integrate katta with solr, without
DataImportHandler multivalued field CollectionString isn't
working the way I'd expect, meaning not at all. I logged the
collection is there, however the multivalue collection field
just isn't being indexed (according to the DIH web UI and it's
not in the index).
wrote:
Hi Jason,
Solr's PatternReplaceFilter(ts, "\\P{Alnum}+$", "", false) should work,
chained after an appropriate tokenizer.
Steve
On 02/04/2010 at 12:18 PM, Jason Rutherglen wrote:
Is there an analyzer that easily strips non alpha-numeric from the end
of a token
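In schema.xml terms, the chain Steve suggests would look roughly like this (a sketch using the filter's factory wrapper; the field type name and tokenizer choice are placeholders):

```xml
<fieldType name="text_trimpunct" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- strip non-alphanumeric characters from the end of each token -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="\P{Alnum}+$" replacement="" replace="all"/>
  </analyzer>
</fieldType>
```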
Robert, thanks for redoing all the Solr analyzers to the new API! It
helps to have many examples to work from, best practices so to speak.
Answering my own question... PatternReplaceFilter doesn't output
multiple tokens...
Which means messing with capture state...
On Thu, Feb 4, 2010 at 2:16 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Transferred partially to solr-user...
Steven, thanks for the reply!
I wonder
Sorry for the poorly worded title... For SOLR-1761 I want to pass in a
URL and parse the query response... However it's non-obvious to me how
to do this using the SolrJ API, hence asking the experts here. :)
)
at org.apache.solr.util.QueryTime.main(QueryTime.java:20)
On Mon, Feb 8, 2010 at 9:32 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Sorry for the poorly worded title... For SOLR-1761 I want to pass in a
URL and parse the query response... However it's non-obvious to me how
to do this using the SolrJ API
QueryResponse(namedList, null);
On Mon, Feb 8, 2010 at 10:03 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
So here's what happens if I pass in a URL with parameters, SolrJ chokes:
Exception in thread main java.lang.RuntimeException: Invalid base
url for solrj. The base URL must not contain
Ahmet, Thanks, though that isn't quite what I was going for, and it's
resolved besides...
On Mon, Feb 8, 2010 at 10:24 AM, Ahmet Arslan iori...@yahoo.com wrote:
So here's what happens if I pass in a
URL with parameters, SolrJ chokes:
Exception in thread main java.lang.RuntimeException: