Erick Erickson wrote:
P.S. Are you absolutely sure your SOLR instance
is pointing at the index you're adding data to?
On Sun, Feb 28, 2010 at 9:08 AM, Erick Erickson
erickerick...@gmail.com wrote:
H, works for me without any problems. That is,
this URL brings back newly-indexed
We've done it successfully for similar requirements.
The resource requirements depend on how many concurrent people will be running
those types of reports.
Up to 4000 records is not a problem at all, one report at a time, but if you
had concurrent requests running into the thousands as well then
Hi,
<doc>
<field name="id">EN7800GTX/2DHTV/256M</field>
<field name="manu">ASUS Computer Inc.</field>
<field name="cat">electronics</field>
<field name="cat">graphics card</field>
<field name="features">NVIDIA GeForce 7800 GTX GPU/VPU clocked at
486MHz</field>
<field name="features">256MB GDDR3 Memory clocked at
Yes. You can just re-add the document with your changes, and the rest of the
fields in the document will remain unchanged.
On Mon, Mar 1, 2010 at 5:09 PM, Suram reactive...@yahoo.com wrote:
Hi,
<doc>
<field name="id">EN7800GTX/2DHTV/256M</field>
<field name="manu">ASUS Computer Inc.</field>
<field
Siddhant wrote:
Yes. You can just re-add the document with your changes, and the rest of
the
fields in the document will remain unchanged.
On Mon, Mar 1, 2010 at 5:09 PM, Suram reactive...@yahoo.com wrote:
Hi,
<doc>
<field name="id">EN7800GTX/2DHTV/256M</field>
<field name="manu">ASUS
Hi @ all,
I am trying to create a query out of a web-based content management system. In the
CMS there are some protected documents. While feeding the documents to Solr
I have this information: a document is either not protected, or someone with
userGroup:group1 has access. So the query can look like:
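A minimal sketch of the kind of access filter described above, in Python. The field names `isProtected` and `userGroup`, the handler path, and the host are assumptions for illustration, not the poster's actual schema:

```python
from urllib.parse import urlencode

# Hypothetical filter: a document matches if it is unprotected OR one
# of the user's groups is listed in its userGroup field.
user_groups = ["group1"]
group_clause = " OR ".join("userGroup:%s" % g for g in user_groups)
fq = "isProtected:false OR (%s)" % group_clause

params = urlencode({"q": "some search terms", "fq": fq})
query_url = "http://localhost:8983/solr/select?" + params
print(fq)
```

Passing the access restriction as an `fq` keeps it out of scoring and lets Solr cache the filter across queries.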
Hi all.
I'm configuring spell checking in my index. Everything is working, but I
want to get the best suggestion based on the number of occurrences, and not
the way Solr defines it. So, let me give an example:
Query: apartamemy
Suggestions:
<arr name="suggestion">
<str>apartamemto</str>
Unfortunately, because of how Lucene works internally, you will not be able
to update just one or two fields. You have to resubmit the entire document.
If you send only one or two fields, then the updated document will
have only the fields sent in the last update.
On Mon, Mar 1, 2010 at
Well, thanks for your reply. As far as the load goes, again I think most of the
reports will be for 1000-4000 records and we don't have that many users.
It's an internal system, so we have about 400 users per day and we are opening
this up for only half of those people (a specific role of people).
This is quite contradictory.
That name field arrow not found for autosuggestion and gave some word
with
incomplite
What does this mean? You're not searching for a field arrow. You really
have to provide more details. Schema, error, etc.
Have you examined your index with Luke as I suggested? Have
How can Solr understand Cyrillic characters (words)?
--
View this message in context:
http://old.nabble.com/Cyrillic-problem-tp27744106p27744106.html
Sent from the Solr - User mailing list archive at Nabble.com.
Yep. I think an update in Lucene means first a deletion, and then an
addition. So the entire document needs to be sent to update.
On Mon, Mar 1, 2010 at 7:24 PM, Israel Ekpo israele...@gmail.com wrote:
Unfortunately, because of how Lucene works internally, you will not be able
to update just
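A sketch of the full re-add this implies: every field goes back into the update message, not just the changed ones. The field values come from the example doc earlier in this thread; the `price` field and its value are purely illustrative:

```python
# Build a complete <add><doc> update body. Because Lucene deletes and
# re-adds the document, any field omitted here would be lost.
fields = {
    "id": "EN7800GTX/2DHTV/256M",
    "manu": "ASUS Computer Inc.",
    "cat": "electronics",
    "price": "479.95",  # illustrative changed field, not from the thread
}
doc_xml = "<add><doc>%s</doc></add>" % "".join(
    '<field name="%s">%s</field>' % (k, v) for k, v in fields.items()
)
# POST doc_xml to /solr/update with Content-Type: text/xml,
# then POST "<commit/>" to make the change visible.
print(doc_xml)
```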
Have you tried specifying the RussianAnalyzer in your schema? See:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Specifying_an_Analyzer_in_the_schema
particularly the first point.
Erick Erickson wrote:
This is quite contradictory.
That name field arrow not found for autosuggestion and gave some word
with
incomplite
What does this mean? You're not searching for a field arrow. You really
have to provide more details. Schema, error, etc.
Have you examined your
Hi,
I wonder if anyone could shed some insight on a dynamic indexing question...?
The basic requirement is this:
Indexing:
A process writes to an index, and when it reaches a certain size (say, 1GB), a
new index (core) is 'automatically' created/deployed (i.e. the process doesn't
Did you try the setting 'onlyMorePopular'?
http://wiki.apache.org/solr/SpellCheckComponent#spellcheck.onlyMorePopular
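A sketch of what such a request could look like. The `/spell` handler path and the host are assumptions based on the example solrconfig, not the poster's setup:

```python
from urllib.parse import urlencode

# spellcheck.onlyMorePopular=true asks Solr to return only suggestions
# that occur in more documents than the query term itself.
params = urlencode({
    "q": "apartamemy",
    "spellcheck": "true",
    "spellcheck.onlyMorePopular": "true",
    "spellcheck.count": "5",
})
url = "http://localhost:8983/solr/spell?" + params
print(url)
```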
André Maldonado wrote:
Hi all.
I'm configuring spell checking in my index. Everything is working, but I
want to get the best suggestion based on the number of occurrences,
By 'AutoSuggestion', are you referring to the spellcheck handler?
If so, you have to rebuild your spellcheck index using the 'build' parameter
after you add new data. You can also configure the spellcheck module to
rebuild the index automatically after a commit or an optimize.
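A sketch of triggering the rebuild from a request, assuming a `/spell` handler name (the automatic alternative is the `buildOnCommit`/`buildOnOptimize` options on the spellcheck component in solrconfig.xml):

```python
from urllib.parse import urlencode

# spellcheck.build=true rebuilds the spellcheck dictionary from the
# source field before answering the request.
params = urlencode({
    "q": "apartamemy",
    "spellcheck": "true",
    "spellcheck.build": "true",
})
build_url = "http://localhost:8983/solr/spell?" + params
print(build_url)
```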
: Is it the wrong approach to have the same warmup queries in both new and
: first searcher? The wiki shows a sorting query for the newSearcher and the
: same sorting query plus facet/filter queries for the firstSearcher.
The thing to remember is that in addition to static warming that you
: Lucene 2.9.1 is out of course (and in repos) but the 2.9.1-dev as found in
: Solr's source control right now is not. This is pretty frustrating and I
: can only expect it will be a recurring problem. If Solr is going to use
: -dev versions then I think Solr needs to put them in a repo
: I'm dynamically creating cores with a new index, using the same schema
...
: 2010-02-24 16:24:54,176 DEBUG [Config] solrconfig.xml
: mainIndex/lockType=simple
...
: 2010-02-24 16:24:54,540 WARN [SolrIndexWriter] No lockType configured for
:
: going to http://localhost:8983/solr/admin suddenly throws a HTTP ERROR: 404
: missing core name in path
:
: Why would adding the above snippet suddenly throw that error?
I think you are seeing the effects of SOLR-1743 masking another error ...
have you checked your log for other
Hi,
I'm using solrnet with solr. It is *the* library to use with .NET and solr.
For indexing I'm just sending the XML with the builtin HTTP libraries of the
.NET BCL.
Cheers,
Janne
2010/2/27 Frederico Azeiteiro frederico.azeite...@cision.com
Hi Saschin,
Yes, I had to make some patches too.
Thank you! And one little question:
Can I use RussianAnalyzer for Ukrainian characters?
--
View this message in context:
http://old.nabble.com/Cyrillic-problem-tp27744106p27749323.html
Sent from the Solr - User mailing list archive at Nabble.com.
I too wish it worked this way, but it doesn't. I believe that this all takes
place within Lucene, so there is no concept of single-valued or multi-valued
fields. They are all just terms. The same is true with term frequency. In my
case, I set omitNorms=true and then created a custom similarity
Hmmm, I'm nowhere near an expert on how the analyzers actually work, so I
have to
punt a bit here. And certainly take any of the regulars' advice if they
give it G...
But outside of stemming, Lucene/SOLR really doesn't understand the concept
of
language. And that's not even Lucene, it's the
As far as Cyrillic goes, any of the analyzers will handle Cyrillic
characters, so you can just use the textgen type or whatever in the example
schema and everything is OK; StandardAnalyzer will work too.
You don't need to use the RussianAnalyzer; the only special thing it has is
awareness of Russian
Hi,
In current version you need to handle the cluster layout yourself, both on
indexing and search side, i.e. route documents to shards as you please, and
know what shards to search.
We try to address how to make this easier in
http://wiki.apache.org/solr/SolrCloud - have a look at it. The
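With the manual layout described above, the client names the shards on every distributed request. A minimal sketch (host names are illustrative):

```python
from urllib.parse import urlencode

# The shards parameter lists every core to fan the query out to;
# the receiving host merges the per-shard results.
shards = ["host1:8983/solr", "host2:8983/solr"]
params = urlencode({"q": "*:*", "shards": ",".join(shards)})
url = "http://host1:8983/solr/select?" + params
print(url)
```

On the indexing side the same layout knowledge decides which shard each document is posted to, e.g. by hashing the uniqueKey.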
I have a schema with a field named category (<field name="category"
type="string" stored="true" indexed="true"/>). I'm trying to delete
everything with a certain value of category with curl:
I send:
curl http://localhost:8080/solrChunk/nutch/update -H 'Content-Type:
text/xml' --data-binary
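A sketch of the delete-by-query body such a request would post (the category value "foo" is a placeholder, not the poster's actual value):

```python
# Delete every document whose category field matches the query,
# then commit so the deletes become visible to searchers.
delete_xml = "<delete><query>category:foo</query></delete>"
commit_xml = "<commit/>"
# POST delete_xml, then commit_xml, to the /update handler
# with Content-Type: text/xml.
print(delete_xml)
```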
: is this true, no downloaded copy of the documentprocessor
: anywhere available?
By the looks of that URL the SVN repository seems to have been hosed --
but more to the point, if people have questions about sesat.no code, then
perhaps you should try emailing the "Contact us" address at the bottom
Hi Jan,
Thanks very much for your message. SolrCloud sounds very cool indeed...
So, from the Wiki, am I right in understanding that the only 'external'
component is ZooKeeper, everything else is pure Solr (i.e. replication, distrib
queries et al. are all Solr http a.o.t. something like
: You could create your own unique ID and pass it in with the
: literal.field=value feature.
By which Lance means you could specify a unique value in a different
field from your uniqueKey field, and then query on that field:value pair
to get the doc after it's been added -- but that query
:
: I am having a go at extracting some file as per the wiki guide.
:
: I cd to the root directory of the folder and run the command with no success
apart from some broken HTML
:
: If you see this here: http://screencast.com/t/MGRiZTU5M
That error message looks like its coming from jetty --
***Sorry if this was sent twice. I had connection problems here and it
didn't look like the first time it went out
I have been testing out results for some basic queries using both the
Standard and DisMax query parsers. The results though aren't what I
expected and am wondering if I am
what are you using for the mm parameter? if you set it to 1 only one
word has to match,
On 03/01/2010 05:07 PM, Steve Reichgut wrote:
***Sorry if this was sent twice. I had connection problems here and it
didn't look like the first time it went out
I have been testing out results for some
This replication does not work well; the temp directory and /data/index are on
different devices/disks.
I see the following message
[2010-03-02 01:22:07] [pool-3-thread-1] ERROR(ReplicationHandler.java:266) -
SnapPull failed
And yet I applied the patch SOLR-1736.
I'll unit-test the patch.
Hello,
Is it possible to boost a document's score based on something like
fq=site(com.google*). In other words, I want to boost the score of
documents whose site field starts with com.google.
I'm using the MoreLikeThisHandler.
Thanks for the help,
-- Christopher
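One hedged sketch of this: an `fq` only filters and never changes scores, but with the dismax parser a boost query (`bq`) adds score to matching documents. This uses the standard select handler; whether the MoreLikeThisHandler honors `bq` is not shown in this thread. The boost factor ^5 is arbitrary:

```python
from urllib.parse import urlencode

# bq adds score to documents matching the boost query instead of
# filtering them; site:com.google* is a prefix (trailing-wildcard)
# query, which Lucene supports.
params = urlencode({
    "q": "search terms",
    "defType": "dismax",
    "bq": "site:com.google*^5",
})
url = "http://localhost:8983/solr/select?" + params
print(url)
```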
Thanks Joe. That was exactly the issue. When I added 'mm=1', I got
exactly the results I was looking for. Where would I change the default
value for the 'mm' parameter? Is it in solrconfig.xml?
Steve
On 3/1/2010 5:30 PM, Joe Calderon wrote:
what are you using for the mm parameter? if you set
I read that a simple way to implement a hierarchical facet is to concatenate
strings with a separator. Something like level1>level2>level3 with > as the
separator.
A problem with this approach is that the number of facet values will greatly
increase.
For example I have a facet Location with the
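A sketch of that encoding: index one value per level of the hierarchy, so each prefix is independently facetable. The ">" separator and the place names are illustrative:

```python
# Each prefix of the path becomes its own facet value, so the UI can
# drill down one level at a time with a prefix filter.
SEP = ">"
path = ["Europe", "France", "Paris"]
facet_values = [SEP.join(path[: i + 1]) for i in range(len(path))]
print(facet_values)
```

One value per document per level is what drives the growth in distinct facet values that the poster is worried about.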
Look at your dismax definition, in solrconfig.xml. You should be
able to add something like:
<str name="mm">3</str>
Or, if you want to bend your mind, this is also possible (from the example
file):
<str name="mm">3&lt;-1 5&lt;-2 6&lt;90%</str>
See:
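As a sketch, `mm` can also be overridden per request instead of editing solrconfig.xml. The stepped expression below reads: up to 3 clauses, all must match; above 3, all but one; above 5, all but two; above 6, 90% must match:

```python
from urllib.parse import urlencode

# Request-time override of the dismax "minimum should match" parameter.
mm = "3<-1 5<-2 6<90%"
params = urlencode({"q": "some words here", "defType": "dismax", "mm": mm})
url = "http://localhost:8983/solr/select?" + params
print(url)
```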
On Mon, Mar 1, 2010 at 7:36 PM, Christopher Bottaro
cjbott...@onespot.com wrote:
Hello,
Is it possible to boost a document's score based on something like
fq=site(com.google*). In other words, I want to boost the score of
documents whose site field starts with com.google.
I'm using the
The data/index.20100226063400 dir is a temporary dir and is created in
the same dir where the index dir is located.
I'm wondering if the symlink is causing the problem. Why don't you set
the data dir as /raid/data instead of /solr/data?
On Sat, Feb 27, 2010 at 12:13 AM, Matthieu Labour
: Subject: Implementing hierarchical facet
: In-Reply-To: 4b8c7213.9080...@axtaweb.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email.
Oops. Sorry about that.
I'll start a fresh one.
--- On Mon, 3/1/10, Chris Hostetter hossman_luc...@fucit.org wrote:
From: Chris Hostetter hossman_luc...@fucit.org
Subject: Re: Implementing hierarchical facet
To: solr-user@lucene.apache.org
Date: Monday, March 1, 2010, 11:36 PM
: Subject:
(repost with a fresh email)
I read that a simple way to implement a hierarchical facet is to
concatenate strings with a separator. Something like
level1>level2>level3 with > as the separator.
A problem with this approach is that the number of facet values will greatly
increase.
For
example I have a
On Mon, Mar 1, 2010 at 4:02 PM, Paul Tomblin ptomb...@xcski.com wrote:
I have a schema with a field name category (field name=category
type=string stored=true indexed=true/). I'm trying to delete
everything with a certain value of category with curl:
I send:
curl
To quote from the wiki,
http://wiki.apache.org/solr/ExtractingRequestHandler
curl 'http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true'
-F "myfile=@tutorial.html"
This runs the extractor on your input file (in this case an HTML
file). It then stores the generated document with the
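A sketch of the same extract request built in Python; the file name comes from the wiki quote above, and the host/port are the example defaults:

```python
from urllib.parse import urlencode

# literal.id supplies the uniqueKey for the extracted document;
# commit=true makes it searchable immediately.
params = urlencode({"literal.id": "doc1", "commit": "true"})
extract_url = "http://localhost:8983/solr/update/extract?" + params
# POST tutorial.html as a multipart form field, e.g.:
#   curl "<extract_url>" -F "myfile=@tutorial.html"
print(extract_url)
```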
If I remove the space before !query, this is the error:
Cannot parse ')}': Encountered ) ) at line 1, column 0.
Perhaps someone knows how parentheses and curlies combine here?
Also: *.yahoo.com will not work. Wildcards do not work at the
beginning of a word. To make this search work, you
: To quote from the wiki,
...
That's all true ... but Bill explicitly said he wanted to use
SignatureUpdateProcessorFactory to generate a uniqueKey from the content
field post-extraction so he could dedup documents with the same content
... his question was how to get that key after