Please provide your full query, including your qf parameter and all other
request parameters, as well as the relevant fields/field-types from your
schema. Do you use stopwords? Can you also add debugQuery=true and paste in
the parsedQuery?
--
Jan Høydahl, search solution architect
Cominvent AS -
Thanks Em, Robert, Chris for your time and valuable advice. We'll make some
tests and will let you know soon.
On Thu, Feb 16, 2012 at 11:43 PM, Em mailformailingli...@yahoo.de wrote:
Hello Carlos,
I think we misunderstood each other.
As an example:
BooleanQuery (
clauses: (
Indika Tantrigoda wrote
Hi All,
I am using the edismax SearchHandler in my search and I have some issues in
the
search results. As I understand it, if the defaultOperator is set to OR, the
search query will implicitly be parsed as - The OR quick OR brown OR fox.
Did you also remove mm? If
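For readers following the thread: with edismax, the effective AND/OR behavior is usually governed by the mm (minimum-should-match) parameter rather than defaultOperator alone. A minimal sketch of setting it in the handler defaults — the handler name and the value 100% here are illustrative, not from the original message:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- 100% means every optional clause must match (AND-like behavior) -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>
```

The same parameter can also be supplied per request, e.g. &mm=100%.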
I have been using sharding with multiple basic Solr servers for clustering. I
also used one embedded Solr server (the SolrJ Java API) with many basic Solr
servers, connecting them by sharding, with the embedded Solr server as the
caller of them. I used the code line below for this purpose.
SolrQuery
Hi Chantal,
I checked my client. It was pointing to the old solrj. After changing that,
it got indexed properly.
Thanks a lot.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Error-Indexing-in-solr-3-5-tp3746735p3753359.html
Sent from the Solr - User mailing list archive at
Hi all
(Note: this question is cross-posted on stackoverflow:
http://stackoverflow.com/questions/9327542/removing-empty-dynamic-fields-from-a-solr-1-4-index)
I have a Solr index that uses quite a few dynamic fields. I've recently changed
my code to reduce the amount of data we index with Solr,
Hi all,
I am writing a Rails application that uses the solr_ruby gem to access Solr.
Can anybody suggest how to handle test cases for Solr code and connections
in functional testing?
See below
On Thu, Feb 16, 2012 at 6:18 AM, v_shan varun.c...@gmail.com wrote:
I have a helpdesk application developed in PHP/MySQL. I want to implement real
time Full text search and I have shortlisted Solr. MySQL database will store
all the tickets and their updates and that data will be
OK, payloads are a bit of a mystery to me, so this may be way off
base.
But...
The ordering of your analysis chain is suspicious; the admin/analysis
page is a life-saver.
WordDelimiterFilterFactory is breaking up your input before it gets to
the payload filter I think, so your payload
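To make the ordering point concrete, here is a sketch of an analysis chain in which the payload is split off before WordDelimiterFilterFactory can break up the token — the field type name, delimiter, and filter attributes are illustrative, not taken from the original poster's schema:

```xml
<fieldType name="payloads" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- strip "token|1.5"-style payloads FIRST, before any word splitting -->
    <filter class="solr.DelimitedPayloadTokenFilterFactory"
            delimiter="|" encoder="float"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"/>
  </analyzer>
</fieldType>
```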
Hi,
is it possible to extend the standard tokenizer, or to use a custom one
(possibly by extending the standard one), to add some custom tokens, so
that something like Lucene-Core is kept as one token?
regards
Thanks, Gora, for your help.
I installed Maven and downloaded Tika following the guide, but I have an
error during the build of Tika about 'tika compiler', and the Maven
installation of Tika stopped.
Is there another way?
Thank you,
a.
2012/2/16 Gora Mohanty g...@mimirtech.com
On 16 February
You should not have to do anything with Maven, the instructions
you followed were from 1.4.1 days..
Assuming you're working with a 3.x build, here's a data-config
that worked for me, just a straight distro. But note a couple of things:
1 for simplicity, I changed the schema.xml to NOT require
Thanks Mark. I'm still seeing some issues while indexing though. I
have the same setup described in my previous email. I do some indexing
to the cluster with everything up and everything looks good. I then
take down one instance which is running 2 cores (shard2 slice 1 and
shard 1 slice 2) and
and having looked at this closer, shouldn't the down node not be
marked as active when I stop that solr instance?
On Fri, Feb 17, 2012 at 10:04 AM, Jamie Johnson jej2...@gmail.com wrote:
Thanks Mark. I'm still seeing some issues while indexing though. I
have the same setup described in my
Hi,
I'm pretty new to solr and especially solr cloud, so hopefully this isn't
too dumb: I followed the wiki instructions for setting up a small cloud.
Things seem to work, *except* on the UI [using chrome and safari], the
cloud tab hangs. It says Zookeeper Data, and then there's a loading
On Fri, Feb 17, 2012 at 5:10 PM, Jamie Johnson jej2...@gmail.com wrote:
and having looked at this closer, shouldn't the down node not be
marked as active when I stop that solr instance?
Currently the shard state is not updated in the cloudstate when a node
goes down. This behavior should
Just FYI the solr-ruby (hyphen, not underscore to be precise) is
deprecated in that the source no longer lives under Apache's svn. The gem is
still out there, and it's still a useful library, but the Ruby/Solr world seems
to use RSolr the most. Both have their pros/cons, but solr-ruby
Thanks Sami, so long as it's expected ;)
In regards to the replication not working the way I think it should,
am I missing something or is it simply not working the way I think?
On Fri, Feb 17, 2012 at 11:01 AM, Sami Siren ssi...@gmail.com wrote:
On Fri, Feb 17, 2012 at 5:10 PM, Jamie Johnson
A wonderful writeup on various memory collection concerns
http://www.lucidimagination.com/blog/2011/03/27/garbage-collection-bootcamp-1-0/
On Fri, Feb 17, 2012 at 12:27 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
One thing that could fit the pattern you describe would be Solr caches
Hi Torsten,
did you have a look at WordDelimiterTokenFilter?
Sounds like it fits your needs.
Regards,
Em
Am 17.02.2012 15:14, schrieb Torsten Krah:
Hi,
is it possible to extend the standard tokenizer or use a custom one
(possible via extending the standard one) to add some custom tokens
On Fri, Feb 17, 2012 at 6:03 PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks Sami, so long as it's expected ;)
In regards to the replication not working the way I think it should,
am I missing something or is it simply not working the way I think?
It should work. I also tried to reproduce
On Feb 17, 2012, at 11:03 AM, Jamie Johnson wrote:
Thanks Sami, so long as it's expected ;)
Yeah, it's expected - we always use both the live nodes info and state to
determine the full state for a shard.
In regards to the replication not working the way I think it should,
am I missing
On Fri, Feb 17, 2012 at 11:13 AM, Mark Miller markrmil...@gmail.com wrote:
When exactly is this build from?
Yeah... I just checked in a fix yesterday dealing with sync while
indexing is going on.
-Yonik
lucidimagination.com
I stop the indexing, stop the shard, then start indexing again. So I
shouldn't need Yonik's latest fix? In regards to how far out of sync,
it's completely out of sync, meaning index 100 documents to the
cluster (40 on shard1 60 on shard2) then stop the instance, index 100
more, when I bring the
On Feb 17, 2012, at 11:00 AM, Ranjan Bagchi wrote:
Hi,
I'm pretty new to solr and especially solr cloud, so hopefully this isn't
too dumb: I followed the wiki instructions for setting up a small cloud.
Things seem to work, *except* on the UI [using chrome and safari], the
cloud tab
I'm seeing the following. Do I need a _version_ long field in my schema?
Feb 17, 2012 1:15:50 PM
org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {delete=[f2c29abe-2e48-4965-adfb-8bd611293ff0]} 0 0
Feb 17, 2012 1:15:50 PM org.apache.solr.common.SolrException log
SEVERE:
On Fri, Feb 17, 2012 at 1:27 PM, Jamie Johnson jej2...@gmail.com wrote:
I'm seeing the following. Do I need a _version_ long field in my schema?
Yep... versions are the way we keep things sane (shuffled updates to a
replica can be correctly reordered, etc).
-Yonik
lucidimagination.com
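For anyone else hitting this error: the field Yonik refers to is typically declared in schema.xml along these lines (a sketch — the type name must match a long field type already defined in your schema):

```xml
<field name="_version_" type="long" indexed="true" stored="true"/>
```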
Ok, so I'm making some progress now. With _version_ in the schema
(forgot about this because I remember asking about it before), deletes
across the cluster work when I delete by id. Updates work as well; if
a node was down, it recovered fine. Something that didn't work though
was if a node was down
On Fri, Feb 17, 2012 at 1:38 PM, Jamie Johnson jej2...@gmail.com wrote:
Something that didn't work though
was if a node was down when a delete happened and then comes back up,
that node still listed the id I deleted. Is this currently supported?
Yes, that should work fine. Are you still
Hello folks,
I built a simple custom component for the “hl.q” query.
My case was to inject hl.q params on the fly, with filter params like
fields which were in my
standard query. These were highlighted, because Solr/Lucene has no way of
interpreting an extended q clause and saying this part is a
Yes, still seeing that. Master has 8 items, replica has 9. So the
delete didn't seem to work when the node was down.
On Fri, Feb 17, 2012 at 1:41 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Feb 17, 2012 at 1:38 PM, Jamie Johnson jej2...@gmail.com wrote:
Something that didn't
Hmm...just tried this with only deletes, and the replica sync'd fine for me.
Is this with your multi core setup or were you trying with instances?
On Feb 17, 2012, at 1:52 PM, Jamie Johnson wrote:
Yes, still seeing that. Master has 8 items, replica has 9. So the
delete didn't seem to work
This was with the cloud-dev solrcloud-start.sh script (after that I've
used solrcloud-start-existing.sh).
Essentially I run ./solrcloud-start-existing.sh
index docs
kill 1 of the solr instances (using kill -9 on the pid)
delete a doc from running instances
restart killed solr instance
on doing
Hi Torsten,
The Lucene StandardTokenizer is written in JFlex (http://jflex.de) - you can
see the version 3.X specification at:
http://svn.apache.org/viewvc/lucene/dev/branches/branch_3x/lucene/core/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex?view=markup
You can
On Fri, Feb 17, 2012 at 2:07 PM, Jamie Johnson jej2...@gmail.com wrote:
This was with the cloud-dev solrcloud-start.sh script (after that I've
used solrcloud-start-existing.sh).
Essentially I run ./solrcloud-start-existing.sh
index docs
kill 1 of the solr instances (using kill -9 on the pid)
You are committing in that mix right?
On Feb 17, 2012, at 2:07 PM, Jamie Johnson wrote:
This was with the cloud-dev solrcloud-start.sh script (after that I've
used solrcloud-start-existing.sh).
Essentially I run ./solrcloud-start-existing.sh
index docs
kill 1 of the solr instances (using
I tried... but I work with Solr 1.4.1.
On 17 February 2012 at 15:59, Erick Erickson
erickerick...@gmail.com wrote:
You should not have to do anything with Maven, the instructions
you followed were from 1.4.1 days..
Assuming you're working with a 3.x build, here's a data-config
Sorry, my error! In that case you *do* have to do some fiddling to get
it all to work.
Good Luck!
Erick
On Fri, Feb 17, 2012 at 3:27 PM, alessio crisantemi
alessio.crisant...@gmail.com wrote:
I tried... but I work with Solr 1.4.1.
On 17 February 2012 at 15:59, Erick Erickson
I'm confused now..
so, my last question:
I add this in my solrconfig.xml:
<requestHandler name="/dataimport"
    class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">c:\solr\conf\db-config.xml</str>
  </lst>
</requestHandler>
And I wrote my db-config.xml like
Why do you want to? That is, what are you trying to accomplish by
modifying that variable? You may not really need to...
This seems like an XY problem...
Best
Erick
On Thu, Feb 16, 2012 at 11:06 PM, remi tassing tassingr...@gmail.com wrote:
Hi all,
How do we modify the $content variable in
yes committing in the mix.
id field is a UUID.
On Fri, Feb 17, 2012 at 3:22 PM, Mark Miller markrmil...@gmail.com wrote:
You are committing in that mix right?
On Feb 17, 2012, at 2:07 PM, Jamie Johnson wrote:
This was with the cloud-dev solrcloud-start.sh script (after that I've
used
On Feb 17, 2012, at 3:56 PM, Jamie Johnson wrote:
id field is a UUID.
Strange - was using UUID's myself in same test this morning...
I'll try again soon.
- Mark Miller
lucidimagination.com
what is the proper syntax for including a sort directive in my requestHandler?
I tried this but got an error:
<requestHandler name="partItemNoSearch" class="solr.SearchHandler"
    default="false">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="echoParams">all</str>
    <int
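A default sort is normally given as a str parameter inside the defaults list, not as an int. A sketch — the field name rankNo is borrowed from elsewhere in this digest purely for illustration, and the sort field must be indexed and single-valued:

```xml
<lst name="defaults">
  <str name="defType">edismax</str>
  <!-- sort is a plain string: a field name or "score", then asc/desc -->
  <str name="sort">rankNo asc</str>
</lst>
```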
$content is the output of the rendered main template.
To modify what is generated into $content, modify the main template or the
sub-#parsed templates (which is what you've discovered, it looks like) that are
rendered (browse.vm, perhaps, if you're using the default example setup). The
main template
Hi guys, I'm cross posting this from lucene list as I guess I can have
better help here for this scenario.
Suppose I want to index 100Gb+ of numeric data. I'm not yet sure the
specifics, but I can expect the following:
- data is expected to be in one gigantic table; conceptually, it is like a
Ouch... sorry about the format... I have no idea why gmail turned my
text into that...
On Fri, Feb 17, 2012 at 10:07 PM, Pedro Ferreira
psilvaferre...@gmail.com wrote:
Hi guys, I'm cross posting this from lucene list as I guess I can have
better help here for this scenario.
Suppose I want to
Hi Mark,
Having a look at that requestHandler, it looks OK [1]. Are you experiencing
any errors?
If so, did you check the wiki page FieldOptionsByUseCase [2]? Maybe that
field's (rankNo) options contain indexed=false or multiValued=true?
HTH,
Tommaso
[1] :
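Following up on Tommaso's point: a field intended for sorting would need a declaration roughly like the following in schema.xml — a sketch only, since the actual type of rankNo was not shown in the thread:

```xml
<!-- sortable fields must be indexed and single-valued -->
<field name="rankNo" type="int" indexed="true" stored="true"
       multiValued="false"/>
```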
The Apache Solr main page does not mention the mailing lists. The wiki
main page has a broken link. I have had to search my incoming mail to
find out how to unsubscribe from solr-user.
Someone with full access, please fix these problems.
Thanks,
--
Lance Norskog
goks...@gmail.com
To unsubscribe, e-mail: solr-user-unsubscr...@lucene.apache.org
Also you can request a FAQ, e-mail: solr-user-...@lucene.apache.org
On Sat, Feb 18, 2012 at 12:38 AM, Lance Norskog goks...@gmail.com wrote:
The Apache Solr main page does not mention the mailing lists. The wiki
main page has a
Apologies. I meant to type “1.4 TB” and somehow typed “1.4 GB.” Little
wonder that no one thought the question was interesting, or figured I must
be using Sneakernet to run my searches.
-- Bryan Loofbourrow
--
*From:* Bryan Loofbourrow
Can anybody help me understand the right way to define a data-config.xml file
with nested entities for indexing the contents of an XML file?
I used this data-config.xml file to index a database containing sample patient
records:
<dataConfig>
  <dataSource type="JdbcDataSource"
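For the XML-file case, a sketch of a data-config.xml using FileDataSource with XPathEntityProcessor — the file path, element names, and fields here are all hypothetical. Nested XML elements are usually reached with deeper xpath expressions rather than nested entities:

```xml
<dataConfig>
  <dataSource type="FileDataSource"/>
  <document>
    <entity name="patient"
            processor="XPathEntityProcessor"
            url="/path/to/patients.xml"
            forEach="/patients/patient">
      <field column="id"   xpath="/patients/patient/id"/>
      <field column="name" xpath="/patients/patient/name"/>
      <!-- a nested element, flattened via its xpath -->
      <field column="city" xpath="/patients/patient/address/city"/>
    </entity>
  </document>
</dataConfig>
```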
Thanks for your thoughts Shawn. I did notice 3.x tightened up a lot and I did
account for it by making sure I had pk defined and columns explicitly
aliased with the same name (and I will make sure the bug text reflects
that).
To help others that are having the same problem, I just found a thread
The PointType seems to be hard-coded to use doubles. Where in the code
does this happen?
--
Lance Norskog
goks...@gmail.com