Hi,
Is there a way to know what fields to add to schema.xml prior to crawling with
Nutch, rather than crawling over and over again and fixing the fields
one by one?
Regards,
Regards,
First of all, sorry about the subject of this discussion. It should have
been something like "Adding config to SolrCloud without starting a Solr
server".
Mark Miller skrev:
k
On May 16, 2012, at 5:35 AM, Per Steffensen wrote:
Hi
We want to create a Solr config in ZK during installation of
Hi;
I have an incorrect schema -- a missing field:
when I add documents (UpdateResponse ur = solrServer.add(docs)), I am
not able to catch the exception in SolrJ, and the UpdateResponse does not
carry the error result.
I use solr-core 3.6, solr-solrj 3.6 and solr.war 4.0.
Best Regards
Hi,
I have 96 documents added to the index, and I would like to be able to
search them in plain text, without using complex search queries. How
can I do that?
Regards,
Thank you for your guidance. Actually, I am using Apache Solr with Jetty
5.1. I will try to set up Solr using Tomcat, incorporating the configuration
you suggested below.
Thank you
Sanjailal K P
--
On Thu, May 17, 2012 at 7:24 PM, Ahmet Arslan iori...@yahoo.com wrote:
A search with
What the wiki indicates actually works, although it's not what I wanted.
I have tried it and it works fine.
I have also tried Jack's approach and it also works fine (and is what I was
looking for :-)
Still, I have one more question. You wrote: This is a 1.4.1
installation, back when there was no
I am trying the following query and get only zero results (I am supposed to
get 10 results according to my dataset):
http://mymachine:8983/solr/select/?q=-(HOSTID:302)
I also tried the query below and got zero results yet
again:
http://mymachine:8983/solr/select/?q=NOT(HOSTID:302)
Hi all,
To provide realtime data we are delta-indexing every 15 minutes and then
replicating it to the slave.
Autowarm count is 0. Dismax queries are getting really slow (30 to
90 seconds), and if I stop the delta replication, then the dismax queries
get fast again. If I run
Hi,
Deduplication on SolrCloud through the SignatureUpdateRequestProcessor is no
longer functional. The problem is that documents are passed multiple times
through the URP and the digest field is added as if it were a multi-valued
field.
If the field is not multi-valued you'll get this
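For reference, the processor chain in question is typically wired up in solrconfig.xml roughly as in the sketch below (following the Solr Deduplication wiki; the chain name and source fields are assumptions, with the signature written to the digest field as in this report):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">digest</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```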
The thought here is to distribute a search between two different
SolrCloud clusters and get ordered, ranked results between them.
Is it possible?
On Tue, 2012-05-15 at 18:47 -0400, Darren Govoni wrote:
Hi,
Would distributed search (the old way where you provide the Solr host
IPs etc.) still
On Fri, May 18, 2012 at 5:03 PM, Ahmet Arslan iori...@yahoo.com wrote:
I am trying the following query and get only zero results (I am supposed to
get 10 results according to my dataset):
http://mymachine:8983/solr/select/?q=-(HOSTID:302)
I also tried the query below and got zero
Why don't you just use /solr/select/?q=-HOSTID:302
Tried the same right at start, but never worked :(
q=-HOSTID:302 and q=+*:* -HOSTID:302 should return same result set.
Which solr version and query parser are you using?
Could you give us some examples of the kinds of search you want to do?
Besides keywords and quoted phrases?
The dismax query parser may be good enough.
-- Jack Krupansky
-Original Message-
From: Tolga
Sent: Friday, May 18, 2012 6:27 AM
To: solr-user@lucene.apache.org
Subject:
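The dismax parser Jack mentions can be enabled per request handler; a minimal solrconfig.xml sketch (the handler name and field boosts are assumptions, not from this thread):

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">title^2 content</str>
  </lst>
</requestHandler>
```

With this, a plain q=mykeyword searches the listed fields without any query syntax required from the user.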
You could enable the * dynamic field which accepts all field names.
-- Jack Krupansky
-Original Message-
From: Tolga
Sent: Friday, May 18, 2012 2:54 AM
To: solr-user@lucene.apache.org
Subject: Unknown field
Hi,
Is there a way to know what fields to add to schema.xml prior to crawling
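The catch-all dynamic field Jack refers to can be sketched in schema.xml like this (the field type and attributes are assumptions; the stock example schema ships a similar commented-out rule mapped to an "ignored" type):

```xml
<dynamicField name="*" type="text_general" indexed="true" stored="true" multiValued="true"/>
```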
On Fri, May 18, 2012 at 6:03 PM, Ahmet Arslan iori...@yahoo.com wrote:
Why don't you just use /solr/select/?q=-HOSTID:302
Tried the same right at start, but never worked :(
q=-HOSTID:302 and q=+*:* -HOSTID:302 should return same result set.
Which solr version and query parser are you
My website is http://liseyazokulu.sabanciuniv.edu/ and it has the word
"barınma" in it. I want to be able to search for that by just typing
"barınma" in the admin interface.
On 5/18/12 3:40 PM, Jack Krupansky wrote:
Could you give us some examples of the kinds of search you want to do?
Besides,
I am using the standard LuceneQParserPlugin and I have a
custom request
handler. I use solr 3.5
I would test the same query with the standard request handler. Maybe something is
wrong in your custom request handler?
debugQuery=on would help too.
Hi,
Even after setting URIEncoding="UTF-8" in the Tomcat conf/server.xml,
indexing and search of Hindi characters doesn't work. The output is furnished
below:
<?xml version="1.0" encoding="UTF-8" ?>
http://192.168.0.132:8080/solr/select/?q=acc_no%3A+H-100&version=2.2&start=0&rows=10&indent=on
Hi Kuli
Is Just raising. Thanks for the explanation.
Regards
Anderson
2012/5/11 Shawn Heisey s...@elyograg.org
On 5/11/2012 9:30 AM, Anderson vasconcelos wrote:
HI Kuli
The free -m command gives me:
             total   used   free   shared   buffers   cached
Mem:
On Fri, May 18, 2012 at 6:20 PM, Ahmet Arslan iori...@yahoo.com wrote:
I am using the standard LuceneQParserPlugin and I have a
custom request
handler. I use solr 3.5
I would test the same query with the standard request handler. Maybe
something is wrong in your custom request handler?
I don't
Hi,
I've put the line <copyField source="*" dest="text" stored="true"
indexed="true"/> in my schema.xml and restarted Solr, crawled my
website, and indexed (I've also committed, but do I really have to
commit?). But I still have to search with content:mykeyword in the admin
interface. What do I have to do so
I don't think the request handler should be a problem. I
have just used the q parameter as follows:
String q = params.get(CommonParams.Q);
IndexSchema schema = req.getSchema();
Query query = QueryParsing.parseQuery(q, schema);
Hope there shouldn't be a problem with the above!
Solr
Yeah, you can still override the shards param and search anywhere AFAIK. I have
not tried it recently, but it should work.
On May 18, 2012, at 7:57 AM, Darren Govoni wrote:
The thought here is to distribute a search between two different
solrcloud clusters and get ordered ranked results
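For reference, the old-style override Mark describes is just the shards request parameter on an ordinary select; a sketch with placeholder host names:

```
http://host1:8983/solr/select?q=foo&shards=host1:8983/solr,host2:8983/solr
```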
Hey Markus -
When I ran into a similar issue with another update proc, I created
https://issues.apache.org/jira/browse/SOLR-3215 so that I could order things to
avoid this. I have not committed this yet though, in favor of waiting for
https://issues.apache.org/jira/browse/SOLR-2822
Go vote?
I have a field that was indexed with the string
.2231-7. When I
search using '*' or '?' like this: *2231-7, the query
doesn't return
results. When I remove the -7 substring and search again using
*2231, the
query returns results. Finally, when I search using
.2231-7 the query returns
On May 18, 2012, at 3:06 AM, Per Steffensen wrote:
First of all, sorry about the subject of this discussion. It should have been
something like "Adding config to SolrCloud without starting a Solr server".
Mark Miller skrev:
k
On May 16, 2012, at 5:35 AM, Per Steffensen wrote:
Hi
We
Seems something is stopping the connection from occurring? Tests are constantly
running and doing this using an embedded zk server - and I know more than a few
people using an external zk setup. I'd have to guess something in your env or
URL is causing this?
On May 16, 2012, at 3:11 PM,
On 5/18/2012 1:42 AM, Jamel ESSOUSSI wrote:
I have an incorrect schema -- a missing field:
when I add documents (UpdateResponse ur = solrServer.add(docs)), I am
not able to catch the exception in SolrJ, and the UpdateResponse does not
carry the error result.
I use solr-core3.6,
On Sun, May 13, 2012 at 4:45 PM, Dmitry Kan dmitry@gmail.com wrote:
Are you operating inside the SOLR source code or on the (solrj) client
side?
SOLR source code!
On Fri, May 11, 2012 at 12:46 PM, Ramprakash Ramamoorthy
youngestachie...@gmail.com wrote:
Dear all,
I get two
On 18 May 2012 18:43, KP Sanjailal kpsanjai...@gmail.com wrote:
Hi,
Even after setting URIEncoding="UTF-8" in the Tomcat conf/server.xml,
indexing and search of Hindi characters doesn't work. The output furnished
below:
[...]
How are you indexing your data into Solr?
Where does your
I see.
What I need is not multiple threads for one entity but multiple entities
at the same time.
What I have done is rename the DIH handler for each of the entities in
solrconfig.xml, although they are using the same data-import-config.xml.
Something like:
<!-- Used for simultaneous full-import with
The dismax query parser should be good enough.
-- Jack Krupansky
-Original Message-
From: Tolga
Sent: Friday, May 18, 2012 8:46 AM
To: solr-user@lucene.apache.org
Subject: Re: Search plain text
My website is http://liseyazokulu.sabanciuniv.edu/ it has the word
barınma in it, and I
On Fri, May 18, 2012 at 7:26 PM, Ahmet Arslan iori...@yahoo.com wrote:
I don't think the request handler should be a problem. I
have just used the q parameter as follows:
String q = params.get(CommonParams.Q);
IndexSchema schema = req.getSchema();
Query query = new
Did you also delete all existing documents from the index? Maybe your crawl
did not re-index documents that were already in the index or that hadn't
changed since the last crawl, leaving the old index data as it was before
the change.
-- Jack Krupansky
-Original Message-
From: Tolga
I'll make sure to do that. Thanks
Sent from my phone
On 18 May 2012, at 17:40, Jack Krupansky
j...@basetechnology.com wrote:
Did you also delete all existing documents from the index? Maybe your crawl
did not re-index documents that were already in the index or that
On May 18, 2012, at 10:26 AM, Shawn Heisey wrote:
On 5/18/2012 1:42 AM, Jamel ESSOUSSI wrote:
I have an incorrect schema -- a missing field:
when I add documents (UpdateResponse ur = solrServer.add(docs)), I am
not able to catch the exception in SolrJ and the UpdateResponse cannot
Hello.
Could you tell me the difference between these two?
1) Having a DIH with a field in data-import-config.xml like this:
<field column="body" name="article" stripHTML="true"/>
2) Having the schema.xml with a field like this:
<fieldType name="textNoHtml"
Check the analyzers for the field types containing Hindi text to be sure
that they are not using a character mapping or folding filter that might
mangle the Hindi characters. Post the field type, say for the title field.
Also, try manually (using curl or the post jar) adding a single document
It is simply a question of whether or not you wish to have the raw HTML stored
in the field so that it can be returned to the application for display
purposes. If you simply want the HTML to go away as soon as possible, use
"stripHTML", but then there is no need to use the factory on the field
On 5/18/2012 9:54 AM, Tolga wrote:
Hi,
I've put the line <copyField source="*" dest="text" stored="true"
indexed="true"/> in my schema.xml and restarted Solr, crawled my
website, and indexed (I've also committed, but do I really have to
commit?). But I still have to search with content:mykeyword in the
Hi,
Interesting! I'm watching the issues and will test as soon as they are
committed.
Thanks!
-Original message-
From:Mark Miller markrmil...@gmail.com
Sent: Fri 18-May-2012 16:05
To: solr-user@lucene.apache.org; Markus Jelsma markus.jel...@openindex.io
Subject: Re: SolrCloud
With replication every 15 minutes you could still do some autowarming. But
if autowarming was the problem you should see only the first couple of
queries slow; after that it should go back to normal. Is this what you are
seeing?
Are your queries very complex? Do you facet on many fields? Are
On 5/18/2012 8:50 AM, Mark Miller wrote:
On May 18, 2012, at 10:26 AM, Shawn Heisey wrote:
On 5/18/2012 1:42 AM, Jamel ESSOUSSI wrote:
I have an incorrect schema -- a missing field:
when I add documents (UpdateResponse ur = solrServer.add(docs)), I am
not able to catch the exception
: Interesting! I'm watching the issues and will test as soon as they are
committed.
FWIW: it's a chicken and egg problem -- if you could test out the patch in
SOLR-2822 with your real world use case / configs, and comment on its
effectiveness, that would go a long way towards my confidence
I am wondering why Solr doesn't have an uppercase filter. I want the analyzed
output to be in upper case to be compatible with legacy data. Will there be
any problem if I create my own uppercase filter and use it?
In Unicode, uppercasing characters loses information, because there are some
upper case characters that represent more than one lower case character.
Lower casing text is safe, so always lower-case.
wunder
On May 18, 2012, at 10:41 AM, srinir wrote:
I am wondering why Solr doesn't have an
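Wunder's point can be illustrated concretely: uppercasing the German sharp s maps one character to two, so the operation is not reversible. A minimal standalone Java sketch (not Solr code):

```java
import java.util.Locale;

public class CaseFoldingDemo {
    public static void main(String[] args) {
        String s = "ß"; // German sharp s, U+00DF
        // Uppercasing expands one character into two: "SS"
        String upper = s.toUpperCase(Locale.ROOT);
        // Lowercasing the result gives "ss", not the original "ß",
        // so the original string cannot be recovered
        String roundTrip = upper.toLowerCase(Locale.ROOT);
        System.out.println(upper + " / " + roundTrip); // prints "SS / ss"
    }
}
```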
Just to give folks an update, we trashed the server having issues and
cloned/rebuilt a VM from a sane server, and it seems to be running well
for the past 3 days without any issues. We intend to monitor it over
the weekend. If it's still stable on Monday, I would blame the issues
on the server
Default field? I'm not sure but I think I do. Will have to look.
Sent from my phone
On 18 May 2012, at 18:11, Yury Kats yuryk...@yahoo.com
wrote:
On 5/18/2012 9:54 AM, Tolga wrote:
Hi,
I've put the line copyField=* dest=text stored=true
indexed=true/ in my
On 5/18/2012 4:02 PM, Tolga wrote:
Default field? I'm not sure but I think I do. Will have to look.
http://wiki.apache.org/solr/SchemaXml#The_Default_Search_Field
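For reference, that wiki page describes a single schema.xml element; a sketch assuming the catch-all field is named "text":

```xml
<defaultSearchField>text</defaultSearchField>
```

With it in place, a bare q=mykeyword searches that field instead of requiring the content:mykeyword prefix.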
Howdy,
I have a multi-core setup in Solr 3.6.0 which works fine. That is, until I
request the response in JSON with the wt=json parameter. When I do that, it
looks like it's using the schema.xml file of one of my other cores, because it
complains that it can not get a required field that exists in
Oh this one. Yes I have it.
Sent from my phone
On 18 May 2012, at 23:14, Yury Kats yuryk...@yahoo.com
wrote:
On 5/18/2012 4:02 PM, Tolga wrote:
Default field? I'm not sure but I think I do. Will have to look.
I should clarify the error a bit. When I make a select request on my first
core (called "core0") using the wt=json parameter, I get a 400 response with
the explanation "undefined field: gid". The field gid is not defined in the
schema.xml file of my first core. But it is defined in the schema.xml file
I have a uniqueKey set in my schema; however, I am still getting duplicate
documents added. Can anyone provide any insight into why this may be happening?
This is in my schema.xml:
<!-- Field to use to determine and enforce document uniqueness.
Unless this field is marked with
Your unique key field should be of type string, not a tokenized type.
Erik
On May 18, 2012, at 17:50, Parmeley, Michael mjparme...@west.com wrote:
I have a uniquekey set in my schema; however, I am still getting duplicated
documents added. Can anyone provide any insight into why this may
Typically the uniqueKey field is a string field type (your schema uses
text_general), although I don't think it is supposed to be a requirement.
Still, it is one thing that stands out.
Actually, you may be running into some variation of SOLR-1401:
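Per Erik's and Jack's advice, the usual schema.xml arrangement looks roughly like this sketch (the field name "id" is an assumption; the point is the untokenized string type):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```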
You're right. I'll test the patch as soon as possible.
Thanks!
-Original message-
From:Chris Hostetter hossman_luc...@fucit.org
Sent: Fri 18-May-2012 18:20
To: solr-user@lucene.apache.org
Subject: RE: SolrCloud deduplication
: Interesting! I'm watching the issues and will