won't this replace *all* 0s? ie, 1024 will become 124?
_
{Beto|Norberto|Numard} Meijome
"The only people that never change are the stupid and the dead" - Jorge Luis Borges
I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers
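The worry above is easy to reproduce outside Solr with a plain string replace (illustrative Python only; the anchored-regex variant is an assumption about how one might scope the replacement, not the actual filter config from the thread):

```python
import re

# Naive substring replacement strips every "0" -- 1024 really does become 124.
naive = "1024".replace("0", "")

# Anchoring the pattern on word boundaries only removes standalone zeros,
# leaving the "0" inside 1024 and 300 untouched.
anchored = re.sub(r"\b0\b", "", "1024 0 300")
```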
On Wed, 28 Oct 2009 19:20:37 -0400
Joel Nylund jnyl...@yahoo.com wrote:
Well I tried removing those 2 letters from stopwords, didn't seem to
help. I also tried changing the field type to text_ws, didn't seem to
work. Any other ideas?
Hi Joel,
if your stop word filter was applied on index,
On Mon, 3 Aug 2009 13:15:44 -0700
Robert Petersen rober...@buy.com wrote:
Thanks all, I figured there would be more talk about daemontools if there
were really a need. I appreciate the input and for starters we'll put two
slaves behind a load balancer and grow it from there.
Robert,
not
On Tue, 7 Jul 2009 22:16:04 -0700
Francis Yakin fya...@liquid.com wrote:
I have the following curl cmd to update and doing commit to Solr ( I have
10 xml files just for testing)
[...]
hello,
DIH supports XML, right?
not sure if it works with n files... but it's worth looking at.
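Since DIH came up: a hedged sketch of a data-config.xml that walks a directory of XML files with FileListEntityProcessor and parses each one with XPathEntityProcessor (the baseDir, field names and xpaths are placeholders, not taken from Francis's setup):

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <!-- outer entity lists every *.xml file under baseDir -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/xml" fileName=".*\.xml" rootEntity="false">
      <!-- inner entity parses each listed file; forEach/xpath assume a
           plain <add><doc> layout and will differ for real files -->
      <entity name="rec" processor="XPathEntityProcessor"
              url="${files.fileAbsolutePath}" forEach="/add/doc">
        <field column="id" xpath="/add/doc/field[@name='id']"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```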
On Sun, 5 Jul 2009 21:36:35 +0200
Marcus Herou marcus.he...@tailsweep.com wrote:
Sharing some of our exports from DB to solr. Note: many of the statements
below might not work due to clip-clip.
thx Marcus - but that's a DIH config right? :)
b
On Sun, 5 Jul 2009 10:28:16 -0700
Francis Yakin fya...@liquid.com wrote:
[...]
upload the file to your SOLR server? Then the data file is local to your SOLR
server, and you will bypass any WAN and firewall issues you may be having (or
some variation of it: SQL -> SOLR server as file, etc.)
How we
On Mon, 6 Jul 2009 09:56:03 -0700
Francis Yakin fya...@liquid.com wrote:
Norberto,
Thanks, I think my question is:
why not generate your SQL output directly into your oracle server as a file
What type of file is this?
a file in a format that you can then import into SOLR.
On Thu, 2 Jul 2009 11:28:51 -0700
Francis Yakin fya...@liquid.com wrote:
Norberto,
Hi Francis,
Please reply to the list, or keep it in CC.
You saying:
Other alternatives are to transform the XML into csv and import it that way
How do you transfer that CSV file to Solr?
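One hedged way to do the XML-to-CSV step (a Python sketch; it assumes a minimal `<add><doc><field name="...">` layout, and the curl line in the comment is the usual CSV handler URL, not Francis's actual setup):

```python
import csv
import io
import xml.etree.ElementTree as ET

# A minimal Solr-style <add> payload; real exports will differ.
xml_doc = """<add>
  <doc><field name="id">1</field><field name="title">foo</field></doc>
  <doc><field name="id">2</field><field name="title">bar</field></doc>
</add>"""

root = ET.fromstring(xml_doc)
rows = [{f.get("name"): f.text for f in doc} for doc in root.iter("doc")]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "title"])
writer.writeheader()
writer.writerows(rows)
csv_text = out.getvalue()

# The resulting CSV can then be sent to Solr's CSV handler, e.g.:
# curl 'http://localhost:8983/solr/update/csv' \
#      -H 'Content-type:text/plain; charset=utf-8' --data-binary @data.csv
```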
On Thu, 2 Jul 2009 11:02:28 -0700
Francis Yakin fya...@liquid.com wrote:
Norberto, Thanks for your input.
What do you mean with "Have you tried connecting to SOLR over HTTP from
localhost, therefore avoiding any firewall issues and network latency"? It
should work a LOT faster than from a
On Wed, 1 Jul 2009 15:07:12 -0700
Francis Yakin fya...@liquid.com wrote:
We have several thousand XML files in a database that we load to the solr
master. The database uses an HTTP connection to transfer those files to the
solr master. Solr then translates the XML files into its index.
We are
On Thu, 2 Jul 2009 16:12:58 +0800
James liu liuping.ja...@gmail.com wrote:
I use solr to search and the index is made by lucene (not
EmbeddedSolrServer (the wiki is old)).
Is it a problem when I use solr to search?
What is the difference between an index made by lucene and one made by solr?
Hi James,
make sure the
On Wed, 24 Jun 2009 23:20:26 -0700 (PDT)
pof melbournebeerba...@gmail.com wrote:
Hi, I want to add document-level security that works as follows: an
external process makes a query to the index and, depending on security
allowances based on a login id, a list of hits is returned
On Sun, 14 Dec 2008 19:22:00 -0800 (PST)
Otis Gospodnetic otis_gospodne...@yahoo.com wrote:
Perhaps an easier alternative is to index not the MS-Outlook files
themselves, but email messages pulled from the IMAP or POP servers, if that's
where the original emails live.
PST files ('outlook
On Tue, 25 Nov 2008 03:59:31 +0200
Timo Sirainen [EMAIL PROTECTED] wrote:
would it be faster to say q=user:user AND highestuid:[* TO *] ?
Now that I read again what fq really did, yes, sounds like you're right.
you may want to compare them both to see which one is better... I just went
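For comparing the two, the requests differ only in where the range clause lives (the URLs below are illustrative; host, core and the user name are placeholders):

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/select?"

# everything folded into q: scored as one query, nothing cached separately
as_q = base + urlencode({"q": "user:timo AND highestuid:[* TO *]",
                         "fl": "uid", "rows": 1})

# range clause moved to fq: evaluated as a filter and cached independently
as_fq = base + urlencode({"q": "user:timo", "fq": "highestuid:[* TO *]",
                          "fl": "uid", "rows": 1})
```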
On Mon, 24 Nov 2008 13:31:39 -0500
Burton-West, Tom [EMAIL PROTECTED] wrote:
The approach to this problem used by Nutch looks promising. Has anyone
ported the Nutch CommonGrams filter to Solr?
Construct n-grams for frequently occurring terms and phrases while
indexing. Optimize phrase
On Wed, 26 Nov 2008 10:08:03 +1100
Norberto Meijome [EMAIL PROTECTED] wrote:
We didn't notice any severe performance hit but :
- data set isn't huge ( ca 1 MM docs).
- reindexed nightly via DIH from MS-SQL, so we can use a separate cache layer
to lower the number of hits to SOLR.
To make
On Mon, 24 Nov 2008 20:21:17 +0200
Timo Sirainen [EMAIL PROTECTED] wrote:
I think I gave enough reasons above for why I don't like this
solution. :) I also don't like adding new shared global state databases
just for Solr. Solr should be the one shared global state database..
fair enough - it
On Sun, 23 Nov 2008 11:59:50 -0500
Ryan McKinley [EMAIL PROTECTED] wrote:
Please submit your preferences for the solr logo.
https://issues.apache.org/jira/secure/attachment/12394267/apache_solr_c_blue.jpg
https://issues.apache.org/jira/secure/attachment/12394263/apache_solr_a_blue.jpg
On Sun, 23 Nov 2008 16:02:16 +0200
Timo Sirainen [EMAIL PROTECTED] wrote:
Hi,
Hi Timo,
[...]
The main problem is that before doing the search, I first have to check
if there are any unindexed messages and then add them to Solr. This is
done using a query like:
- fl=uid
- rows=1
-
On Wed, 19 Nov 2008 22:58:52 -0800 (PST)
RaghavPrabhu [EMAIL PROTECTED] wrote:
I'm using multiple cores and all I need to do is to make each core
secure. If I am accessing a particular core via url, it should ask for
and validate the credentials (say username/password) for each core.
On Tue, 18 Nov 2008 14:26:02 +0100
Aleksander M. Stensby [EMAIL PROTECTED] wrote:
Well, then I suggest you index the field in two different ways if you want
both possible ways of searching. One, where you treat the entire name as
one token (in lowercase) (then you can search for avera* and
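The two-ways idea might look roughly like this in schema.xml (a sketch only; the type and field names are invented, and the "text" type is assumed to be the stock tokenized one):

```xml
<!-- whole value kept as a single token, lowercased: lets avera* match
     the full name as one unit -->
<fieldType name="name_exact" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="name"       type="text"       indexed="true" stored="true"/>
<field name="name_exact" type="name_exact" indexed="true" stored="false"/>

<!-- index the same source value both ways -->
<copyField source="name" dest="name_exact"/>
```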
On Tue, 11 Nov 2008 10:25:07 -0800 (PST)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
Doc ID gaps are zapped during segment merges and index optimization.
thanks Otis :)
b
On Tue, 11 Nov 2008 20:39:32 -0800 (PST)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
With Distributed Search you are limited to # of shards * Integer.MAX_VALUE.
yeah, makes sense. And I would suspect, since this is PER INDEX, that it applies
to each core only (so you could have n cores in m shards
On Mon, 10 Nov 2008 10:24:47 -0800 (PST)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
I don't think there is a limit other than your hardware and the internal Doc
ID which limits you to 2B docs on 32-bit machines.
Hi Otis,
just curious is this internal doc ID reused when an optimise happens?
On Tue, 4 Nov 2008 23:45:40 -0800 (PST)
con [EMAIL PROTECTED] wrote:
But for the first question, I am still not clear.
I think to use the multicore feature we should inform the server. In the
Jetty server, we are starting the server using: java
-Dsolr.solr.home=multicore -jar start.jar
On Tue, 4 Nov 2008 09:55:38 -0800 (PST)
con [EMAIL PROTECTED] wrote:
1) Which all files do I need to edit to use the multicore feature?
2) Also, where can I specify the index directory so that we can point the
indexed documents to a custom folder instead of jboss/bin?
Con, please check the
On Thu, 30 Oct 2008 15:50:58 -0300
Jorge Solari [EMAIL PROTECTED] wrote:
copyField source=* dest=text/
in the schema file.
or use Dismax query handler.
b
On Thu, 30 Oct 2008 20:46:16 -0700
Lance Norskog [EMAIL PROTECTED] wrote:
Now: a few hours later there are a different 100 latest documents. How do
I add those to the index so I will have 200 documents? 'full-import' throws
away the first 100. 'delta-import' is not implemented. What is the
On Mon, 20 Oct 2008 03:24:36 -0700 (PDT)
prerna07 [EMAIL PROTECTED] wrote:
Yes, We want search on these incomplete words.
Look into the NGram token factory. Works a treat - I don't think it's
explained a lot in the wiki, but it has been discussed on this list in the
past, and you also have
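A hedged sketch of what an NGram-based field type can look like (the factory class names are real; the type name and gram sizes are placeholders to tune for your data):

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emit 3..15-character grams so incomplete words still match -->
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="15"/>
  </analyzer>
  <!-- query side stays un-grammed so the user's fragment is matched as-is -->
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```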
On Mon, 20 Oct 2008 00:08:07 -0700 (PDT)
prerna07 [EMAIL PROTECTED] wrote:
The issue with synonyms arises when I have a number in the synonym definition:
ccc => 1,2 gives the following result with debugQuery=true:
<str name="parsedquery">MultiPhraseQuery(all:"(1 ) (2 ccc ) 3")</str>
On Mon, 20 Oct 2008 06:21:06 -0700 (PDT)
Sunil Sarje [EMAIL PROTECTED] wrote:
I am working with nightly build of Oct 17, 2008 and found the issue that
something wrong with LuceneQParserPlugin; It takes + as OR
Sunil, please do not hijack the thread :
On Mon, 20 Oct 2008 06:25:09 -0700 (PDT)
sunnyfr [EMAIL PROTECTED] wrote:
I implemented multi-language search, but I haven't finished the website in
PHP; how can I check that it works properly?
maybe by sending to SOLR the queries you plan your PHP frontend to generate ?
On Mon, 20 Oct 2008 16:28:23 +0300
christophe [EMAIL PROTECTED] wrote:
Hmm, this means I have to wait before I index new documents and avoid
indexing when they are created (I have about 50,000 new documents
created each day and I was planning to make those searchable ASAP).
you can
On Thu, 16 Oct 2008 16:09:17 +0200
Jarek Zgoda [EMAIL PROTECTED] wrote:
They came to such expectations seeing Solr's own Spellcheck at work -
if it can suggest correct versions, it should be able to sanitize
broken words in documents and search them using sanitized input. For
me, this
On Mon, 20 Oct 2008 08:16:50 -0700 (PDT)
sunnyfr [EMAIL PROTECTED] wrote:
ok so straight by the admin part !
Hi Johanna - not sure what you mean by 'the admin part'.
it should work .. so it doesn't
if you tell us what you did (what url you called) , what you expect to receive
back (sample
On Tue, 07 Oct 2008 09:27:30 -0700
Jon Drukman [EMAIL PROTECTED] wrote:
Yep, you can fake it by only using fieldsets (qf) that have a
consistent set of stopwords.
does that mean changing the query or changing the schema?
Jon,
- you change schema.xml to define which type each field is.
On Wed, 8 Oct 2008 03:45:20 -0700 (PDT)
con [EMAIL PROTECTED] wrote:
But in that case, while doing a full-import I am getting the following
error:
org.apache.solr.common.SolrException: QueryElevationComponent requires the
schema to have a uniqueKeyField
Con, if you don't use the Query
On Tue, 30 Sep 2008 11:43:57 -0700 (PDT)
Chris Hostetter [EMAIL PROTECTED] wrote:
: That's why I was wondering how Dismax breaks it all apart. It makes sense... I
: suppose what I'd like to have is a way to tell dismax which fields NOT to
: tokenize the input for. For these fields, it would
On Fri, 26 Sep 2008 10:42:42 -0700 (PDT)
Chris Hostetter [EMAIL PROTECTED] wrote:
: <tokenizer
: class="solr.KeywordTokenizerFactory"/> <!-- The LowerCase TokenFilter does
: Now, when I search with ?q=the doors , all the terms in my q= aren't used
: together to build
On Fri, 26 Sep 2008 18:58:14 +0530
Dinesh Gupta [EMAIL PROTECTED] wrote:
Please tell me where to upload the files.
anywhere you have access to... your own website, somewhere anyone on the list
can access the files you want to share to address your problems :)
b
On Fri, 26 Sep 2008 00:46:07 -0700 (PDT)
con [EMAIL PROTECTED] wrote:
To be more specific:
I have the data-config.xml just like:
<dataConfig>
  <dataSource **/>
  <document>
    <entity name="user" query="select * from USER">
    </entity>
On Fri, 26 Sep 2008 16:32:05 +0530
Dinesh Gupta [EMAIL PROTECTED] wrote:
Is it OK to create the whole index via the Solr web-app?
If not, how can I create the index?
I have attached some files that create the index now.
Dinesh,
you sent the same email 2 1/2 hours ago. sending it again will not give
On Fri, 26 Sep 2008 02:35:18 -0700 (PDT)
con [EMAIL PROTECTED] wrote:
What you meant is correct only. Please excuse for that I am new to solr. :-(
Con, have a read here :
http://www.ibm.com/developerworks/java/library/j-solr1/
it helped me pick up the basics a while back. it refers to 1.2,
On Fri, 26 Sep 2008 02:35:18 -0700 (PDT)
con [EMAIL PROTECTED] wrote:
What you meant is correct only. Please excuse for that I am new to solr. :-(
hi Con,
nothing to be excused for..but you may want to read the wiki , as it provides
quite a lot of information that should answer your questions.
On Wed, 24 Sep 2008 08:34:57 -0700 (PDT)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
What happens if you change ps from 100 to 1 and comment out that ord function?
Otis, I think what I am after is what Hoss described in his last paragraph in
his reply to your email last year :
hi guys,
I may have missed it ,but is it possible to tell the solr.ShingleFilterFactory
the minimum number of grams to generate per shingle? Similar to
NGramTokenizerFactory's minGramSize=3 maxGramSize=3
thanks!
B
On Sat, 16 Aug 2008 15:39:44 -0700
Chris Harris [EMAIL PROTECTED] wrote:
[...]
So finally I modified the Lucene ShingleFilter class to add an
outputUnigramIfNoNgram option. Basically, if you set that option,
and also set outputUnigrams=false, then the filter will tokenize just
as in Exhibit
Hello,
I've seen references to this in the list, but not completely explained...my
apologies if this is FAQ (and for the length of the email).
I am using dismax across a number of fields on an index with data about music
albums songs - the fields are quite full of stop words. I am trying to
On Wed, 24 Sep 2008 07:46:57 -0400
Mark Miller [EMAIL PROTECTED] wrote:
Yes. You will definitely see a speed increase by avoiding http (especially
doc-at-a-time http) and using the direct csv loader.
http://wiki.apache.org/solr/UpdateCSV
and the obvious reason that if, for whatever reason,
On Wed, 24 Sep 2008 08:34:57 -0700 (PDT)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
What happens if you change ps from 100 to 1 and comment out that ord function?
Otis
Hi Otis,
no luck - without :
<str name="rawquerystring">smashing pumpkins</str>
<str name="querystring">smashing pumpkins</str>
<str
On Wed, 24 Sep 2008 11:45:34 -0400
Mark Miller [EMAIL PROTECTED] wrote:
Nothing to stop you from breaking up the tsv/csv files into multiple
tsv/csv files.
Absolutely agreeing with you ... in one system where I implemented SOLR, I
have a process run through the file system and lazily pick up
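Breaking a big CSV into smaller loadable files can be sketched like this (plain Python; the chunk size is arbitrary, and each chunk repeats the header so it can be posted to Solr on its own):

```python
import csv
import io

def split_csv(text, chunk_size):
    """Split one CSV (header + rows) into several smaller CSV strings,
    each carrying its own copy of the header row."""
    rows = list(csv.reader(io.StringIO(text)))
    header, body = rows[0], rows[1:]
    chunks = []
    for i in range(0, len(body), chunk_size):
        out = io.StringIO()
        w = csv.writer(out)
        w.writerow(header)
        w.writerows(body[i:i + chunk_size])
        chunks.append(out.getvalue())
    return chunks
```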
On Wed, 24 Sep 2008 04:42:42 -0700 (PDT)
con [EMAIL PROTECTED] wrote:
In the table we will have various column names like CUSTOMER_NAME,
CUSTOMER_PHONE etc. If we use the default schema.xml, we have to map these
values to some of the default values like cat, features etc. This will cause
On Mon, 22 Sep 2008 15:46:54 +0530
Jacob Singh [EMAIL PROTECTED] wrote:
Hi,
I'm trying to write a testing suite to gauge the performance of solr
searches. To do so, I'd like to be able to find out what keywords
will get me search results. Is there any way to programmatically do this
with
On Thu, 18 Sep 2008 10:53:39 +0530
Sanjay Suri [EMAIL PROTECTED] wrote:
One of my field values is the name Räikkönen, which contains special
characters.
Strangely, as I see it anyway, it matches on the search query 'x' ?
Can someone explain or point me to the solution/documentation?
On Sat, 13 Sep 2008 16:17:12 +
zzh [EMAIL PROTECTED] wrote:
I think this is a stupid method, because the search conditions are too
long and the search efficiency will be low; we hope you can help me to solve
this problem.
Hi,
IMHO, a long set of conditions doesn't make it stupid. You
On Fri, 29 Aug 2008 00:31:13 -0700 (PDT)
sanraj25 [EMAIL PROTECTED] wrote:
But still I can't maintain two indexes.
Please help me with how to create two cores in solr.
What specific problem do you have ?
B
On Fri, 29 Aug 2008 02:37:10 -0700 (PDT)
sanraj25 [EMAIL PROTECTED] wrote:
I want to store two independent data sets in a solr index, so I decided to
create two indexes. But that's not possible, so I went for the multicore
concept in solr. Can you give me a step-by-step procedure to set up multicore
in solr?
Hi,
On Thu, 28 Aug 2008 02:01:05 -0700 (PDT)
sanraj25 [EMAIL PROTECTED] wrote:
I want to index two different kinds of files in solr. (For example) I want to
store two tables, job_post and job_profile, in solr. But now both are stored
in the same place in solr. When I get data from job_post, data come from
On Tue, 26 Aug 2008 15:15:21 +0300
Aleksey Gogolev [EMAIL PROTECTED] wrote:
Hello.
I'm new to solr and I need to make a search suggest (like google
suggestions).
Hi Aleksey,
please search the archives of this list for subjects containing 'autocomplete'
or 'auto-suggest'. that should
On Fri, 22 Aug 2008 12:21:53 -0700
Lance Norskog [EMAIL PROTECTED] wrote:
Apparently the ZFS (Silicon Graphics
originally) is great for really huge files.
hi Lance,
You may be confusing Sun's ZFS with SGI's XFS. The OP referred, i think, to
ZFS.
B
On Mon, 25 Aug 2008 17:11:47 +0200
Walter Ferrara [EMAIL PROTECTED] wrote:
Launching a multicore solr with dataimporthandler using a mysql driver,
(driver=com.mysql.jdbc.Driver) works fine if the mysql connector jar
(mysql-connector-java-5.0.7-bin.jar) is in the classpath, either jdk
On Thu, 21 Aug 2008 18:09:11 -0700
Jake Conk [EMAIL PROTECTED] wrote:
I thought if I used <copyField/> to copy my string field to a text
field then I can search for words within it and not limited to the
entire content. Did I misunderstand that?
but you need to search on the fields that are
On Wed, 20 Aug 2008 10:58:50 -0300
Alexander Ramos Jardim [EMAIL PROTECTED] wrote:
A tiny but real explanation can be found here
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
thanks Alexander - indeed, quite short, and focused on shingles... which, if
I understand
On Mon, 18 Aug 2008 15:33:02 +0800
finy finy [EMAIL PROTECTED] wrote:
the name field is text, which is analysed. I use the query
name:ibmT63notebook
why do you search with no spaces? is this free text entered by a user, or is it
part of a link which you control ?
PS: please don't top-post
hi :)
does anyone have a .wsdl definition for the example bundled with SOLR?
if nobody has it, would it be useful to have one ?
cheers,
B
On Mon, 18 Aug 2008 23:07:19 +0800
finy finy [EMAIL PROTECTED] wrote:
because i use chinese character, for example ibm___
solr will parse it into a term ibm and a phraze _ __
can i use solr to query with a term ibm and a term _ and a term
__?
Hi finy,
On Mon, 18 Aug 2008 19:08:24 -0300
Alexander Ramos Jardim [EMAIL PROTECTED] wrote:
Do you want a full web service for the SOLR example? How will a .wsdl help you?
Why don't you use the HTTP interface SOLR provides?
Anyways, if you need to develop a web service (SOAP compliant) to access
SOLR,
On Tue, 19 Aug 2008 11:23:48 +1000
Norberto Meijome [EMAIL PROTECTED] wrote:
On Mon, 18 Aug 2008 19:08:24 -0300
Alexander Ramos Jardim [EMAIL PROTECTED] wrote:
Do you want a full web service for the SOLR example? How will a .wsdl help you?
Why don't you use the HTTP interface SOLR provides
On Tue, 19 Aug 2008 10:18:12 +1200
Gene Campbell [EMAIL PROTECTED] wrote:
Is this interpreted as meaning, there are 10 documents that will match
with 'car' in the title, and likewise 6 'boat' and 2 'bike'?
Correct.
If so, is there any way to get counts for the *number of times* a value
is
On Tue, 12 Aug 2008 23:36:32 +1000
Norberto Meijome [EMAIL PROTECTED] wrote:
hi :)
I'm trying to use SAXON instead of the default XSLT parser. I was pretty sure
I had it running fine on 1.2, but when I repeated the same steps (as per the
wiki) on the latest nightly build, I cannot see any sign
hi guys,
First of all, thanks for DIH - it's great :)
One thing I noticed during my tests (nightly, 2008-08-16) is that, if the DB
is not available at SOLR startup time, the whole core won't initialise - the
error is shown below.
I was wondering,
1) would it be possible to have DIH bomb
Guys + gals,
just a question of form - would DIH itself be the right place to implement a
list of URLs to call after successfully completing a DIH full or partial load -
for example, to rebuild the spellchecker when new items have been added? Or
should that be part of my external process (cron - shell
On Sun, 17 Aug 2008 20:22:26 +0530
Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:
If it is only SpellCheckComponent that you are interested in, then see
SOLR-622.
You can add this to your SCC config to rebuild SCC after every commit:
<str name="buildOnCommit">true</str>
ah great stuff , thanks
Hi again,
I see in the DIH wiki page :
[...]
full-import [..]
commit: (default 'true'). Tells whether to commit+optimize after the operation
[...]
but nothing for delta-import... I think it would be useful to have a 'commit'
(default=true) and an 'optimize' (default=false) for the delta-import - these
On Mon, 18 Aug 2008 10:14:32 +0800
finy finy [EMAIL PROTECTED] wrote:
I have been using solr for 3 months, and I have found some questions, as follows:
Please do not hijack mail threads.
http://en.wikipedia.org/wiki/Thread_hijacking
On Mon, 18 Aug 2008 09:34:56 +0530
Shalin Shekhar Mangar [EMAIL PROTECTED] wrote:
Actually we have commit and optimize as separate request parameters
defaulting to true for both full-import and delta-import. You can add a
request parameter optimize=false for delta-import if you want to commit
( 2 in 1 reply)
On Wed, 13 Aug 2008 09:59:21 -0700
Walter Underwood [EMAIL PROTECTED] wrote:
Stripping accents doesn't quite work. The correct translation
is language-dependent. In German, o-dieresis should turn into
oe, but in English, it should be o (as in coöperate or
Mötley Crüe). In
On Thu, 14 Aug 2008 11:34:47 -0400
Steven A Rowe [EMAIL PROTECTED] wrote:
[...]
The kind of filter Walter is talking about - a generalized language-aware
character normalization Solr/Lucene filter - does not yet exist. My guess is
that if/when it does materialize, both the Solr and the
On Tue, 12 Aug 2008 13:26:26 -0700
Jake Conk [EMAIL PROTECTED] wrote:
1) I want to search only within a specific field, for instance
`category`. Is there a way to do this?
of course. Please see http://wiki.apache.org/solr/SolrQuerySyntax (in
particular, follow the link to Lucene syntax..)
hi :)
I'm trying to use SAXON instead of the default XSLT parser. I was pretty sure
I had it running fine on 1.2, but when I repeated the same steps (as per the
wiki) on the latest nightly build, I cannot see any sign of it being loaded or
used, although the classpath seems to be pointing to them (see
On Tue, 12 Aug 2008 11:21:50 -0700
Mike Klaas [EMAIL PROTECTED] wrote:
will delete happen first, and then the add, or could it be that the
add happens before delete, in which case i end up with no more doc
id=1 ?
As long as you are sending these requests on the same thread, they
On Tue, 12 Aug 2008 20:53:12 -0400
Yonik Seeley [EMAIL PROTECTED] wrote:
On Tue, Aug 12, 2008 at 1:48 AM, Norberto Meijome [EMAIL PROTECTED] wrote:
What happens if I issue:
<delete><id>1</id></delete>
<add><doc><id>1</id><name>new</name></doc></add>
<commit/>
will delete happen first, and then the add
On Tue, 12 Aug 2008 11:44:42 -0400
Steven A Rowe [EMAIL PROTECTED] wrote:
Solr is Unicode aware. The ISOLatin1AccentFilterFactory handles diacritics
for the ISO Latin-1 section of the Unicode character set. UTF (do you mean
UTF-8?) is a (set of) Unicode serialization(s), and once Solr has
On Sun, 10 Aug 2008 19:58:24 -0700 (PDT)
SoupErman [EMAIL PROTECTED] wrote:
I needed to run a search with a query containing the word not, so I removed
not from the stopwords.txt file. Which seemed to work, at least as far as
parsing the query. It was now successfully searching for that
On Wed, 6 Aug 2008 12:25:34 +1000
Norberto Meijome [EMAIL PROTECTED] wrote:
On Tue, 5 Aug 2008 14:41:08 -0300
Scott Swan [EMAIL PROTECTED] wrote:
I currently have multiple documents that i would like to index but i would
like to combine two fields to produce the unique key
On Mon, 11 Aug 2008 06:48:05 -0700 (PDT)
Vj Ali [EMAIL PROTECTED] wrote:
i also sends coomit tag as well.
maybe you need
<commit/>
instead of coomit
?
Hello :)
I *think* I know the answer, but I'd like to confirm:
Say I have
<doc><id>1</id><name>old</name></doc>
already indexed and committed (ie, 'live').
What happens if I issue:
<delete><id>1</id></delete>
<add><doc><id>1</id><name>new</name></doc></add>
<commit/>
will delete happen first, and then the add, or could it be that
On Wed, 6 Aug 2008 21:35:47 -0700 (PDT)
Otis Gospodnetic [EMAIL PROTECTED] wrote:
<tokenizer class="solr.KeywordTokenizerFactory"/>
<filter class="solr.StandardTokenizerFactory"/>
2 Tokenizers?
i wondered about that too, but didn't have the time to test...
B
On Thu, 7 Aug 2008 00:50:59 -0700 (PDT)
matt connolly [EMAIL PROTECTED] wrote:
Where do I file a bug report?
https://issues.apache.org/jira
thanks!
B
On Tue, 05 Aug 2008 16:02:51 -0400
Stephen Weiss [EMAIL PROTECTED] wrote:
My issue with the logos presented was they made solr look like a
school project instead of the powerful tool that it is. The tricked
out font or whatever just usually doesn't play well with the business
types...
On Tue, 5 Aug 2008 11:43:44 -0500
Kashyap, Raghu [EMAIL PROTECTED] wrote:
Hi,
Hi Kashyap,
please don't hijack topic threads.
http://en.wikipedia.org/wiki/Thread_hijacking
thanks!!
B
On Tue, 5 Aug 2008 14:41:08 -0300
Scott Swan [EMAIL PROTECTED] wrote:
I currently have multiple documents that I would like to index, but I would
like to combine two fields to produce the unique key.
The documents have either one field or the other, so by combining the two
fields I will get
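Combining the two fields client-side before posting could be sketched as below (Python; the field names 'isbn' and 'sku' are made-up placeholders for the two real source fields):

```python
def composite_key(doc, fields=("isbn", "sku")):
    """Build a uniqueKey from whichever of the two source fields is present.
    Docs carrying both fields get a joined key, so the pair stays unique."""
    parts = [str(doc[f]) for f in fields if doc.get(f) is not None]
    if not parts:
        raise ValueError("document has neither key field")
    return "|".join(parts)
```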
On Tue, 05 Aug 2008 18:58:42 -0300
Leonardo Dias [EMAIL PROTECTED] wrote:
So I'm looking for a Ferrari. CarStore says that there are 5 ads for
Ferrari, but one ad has 2 Ferraris being sold, the other ad has 3
Ferraris and all the others have 1 Ferrari each, meaning that there are
5 ads and
On Mon, 4 Aug 2008 09:29:30 -0700
Ryan McKinley [EMAIL PROTECTED] wrote:
If there is still room for a new logo design for Solr and the community is
open to it, then I can try to come up with a proposal. Doing the logo for
Mahout was a really interesting experience.
In my opinion,
On Mon, 4 Aug 2008 21:13:09 -0700 (PDT)
Vicky_Dev [EMAIL PROTECTED] wrote:
Can we get the solr 1.3 release as soon as possible? Otherwise some interim
release (1.2.x) containing DataImportHandler would also be a good option.
Any thoughts?
have you tried one of the nightly builds? I've been
On Wed, 23 Jul 2008 11:28:49 -0700 (PDT)
briand [EMAIL PROTECTED] wrote:
I have documents in SOLR such that each document contains one to many points
(latitude and longitudes). Currently we store the multiple points for a
given document in the db and query the db to find all of the document
On Tue, 15 Jul 2008 13:15:41 +0530
Sunil [EMAIL PROTECTED] wrote:
1) I don't want duplicate content.
SOLR uses the field you define as the unique field to determine whether a
document should be replaced or added. The rest of the fields are in your hands.
You could devise a setup whereby the
On Tue, 15 Jul 2008 10:48:14 +0200
Jarek Zgoda [EMAIL PROTECTED] wrote:
2) I don't want to overwrite old content with new one.
Means, if I add duplicate content in solr and the content already
exists, the old content should not be overwritten.
before inserting a new document,
On Tue, 15 Jul 2008 18:07:43 +0530
Preetam Rao [EMAIL PROTECTED] wrote:
When I say filter, I meant q=fish&fq=type:idea
btw, this *seems* to only work for me with the standard search handler. dismax
and fq don't seem to get along nicely... but maybe it is just late and I'm not
testing it
On Mon, 14 Jul 2008 15:52:35 +
sundar shankar [EMAIL PROTECTED] wrote:
Hi Hoss,
I was talking about classes like EdgeNGramFilterFactory,
PatternReplaceFilterFactory etc. I didn't find these in the 1.2 jar. Where do I
find the wiki for these and the specific classes introduced for