Hi Alberto,
You can add a child entity which returns multiple records, e.g.:
<entity name="root" query="select id, title from titles">
  <entity name="child" query="select value from multivalued where
                              title_id='${root.id}'">
  </entity>
</entity>
HTH,
Alex
2010/6/7 Alberto García Sola alberto...@gmail.com:
Hi. All.
I tried with the default Solr example plus my own config/schema file. I
posted test documents into Solr manually, then tested the distributed search, and
it works. Then I switched to my existing Lucene index, and it doesn't work.
So I am wondering whether that is the reason, when Solr uses a Lucene
Thank you for the reply! I'm using Tomcat 6.0.20. I read the page.
I think you meant setting URIEncoding for the connector:
<Connector ... URIEncoding="UTF-8"/>
I tried this but it still doesn't work, while the Python client still
works fine.
Because the Python client works fine, I tend to think that
Hi. All.
I am still testing. I think I am approaching the truth. Now confirmed:
of the docs in my existing Lucene indexes, when searched with distributed search,
none are returned. But the docs inserted via Solr's post.jar are
returned successfully.
Don't know why. It looks like the Lucene docs
I was using SolrQuery. Now I'm switching to QueryRequest.
Hope this works. Thanks!
On Mon, Jun 7, 2010 at 11:26 PM, jlist9 jli...@gmail.com wrote:
Thank you for the reply! I'm using Tomcat 6.0.20. I read the page.
I think you meant setting URIEncoding for the connector:
Connector ...
I have tried both changing the datasource per child node to use the
parent node's name, and making the XPaths relative; both cause
either exceptions saying that the XPath must start with '/', or
NullPointerExceptions (nsfgrantsdir document : null).
Best regards
On Mon, Jun 7, 2010 at
Meanwhile, I'd like to try using POST, but I didn't find information
about how to do this. Could someone point me to a link to some sample code?
You can pass METHOD.POST to the query method of SolrServer:
public QueryResponse query(SolrParams params, METHOD method)
I used the 1.5 build a few weeks ago, implemented the geospatial
functionality, and it worked really well.
However, due to the unknown quantity in terms of stability (and the uncertain
future of 1.5), we decided not to use it in production.
rob ganly
On 8 June 2010 03:50, Darren Govoni
My bad, it looks like XPathEntityProcessor doesn't support relative XPaths.
However, I quickly looked at the Slashdot example (which is pretty good
actually) at http://wiki.apache.org/solr/DataImportHandler.
From that I infer that you use only 1 entity per xml-doc. And within that
entity use
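A minimal sketch of that one-entity-per-document layout, with absolute
xpaths throughout (the file name, forEach path, and field names below are
hypothetical, not from the original config):

```xml
<!-- One XPathEntityProcessor entity per XML document; every xpath absolute -->
<entity name="grant" processor="XPathEntityProcessor"
        url="nsfgrants.xml" forEach="/grants/grant">
  <field column="id"    xpath="/grants/grant/id"/>
  <field column="title" xpath="/grants/grant/title"/>
</entity>
```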
Recently I looked a bit at DataImportHandler and I'm really impressed with
the flexibility of transform / import options.
Especially with integrations with Solr Cell / Tika this has become a great
Data importer.
Besides some use-cases that import to Solr (which I plan to migrate to DIH
asap),
Hi,
I'm trying to make the geonames.org query parser
(http://www.ibm.com/developerworks/opensource/library/j-spatial/index.html?ca=drs-)
work with the nightly Solr build.
I've added three jar files (geonames*.jar, jdom*.jar,
spatial-ex.jar) to /examples/solr/lib/, and I've added the
Did you send a commit after the last doc was posted to Solr?
-----Original Message-----
From: Scott Zhang [mailto:macromars...@gmail.com]
Sent: Tuesday, June 8, 2010 08:30
To: solr-user@lucene.apache.org
Subject: Re: Distributed Search doesn't response the result set
Hi. All.
Hi.
I have extended QParserPlugin according to the Solr Wiki and registered it in
solrconfig.xml.
Using the defType query parameter, it worked with my multi-core server, giving
285 hits for my search.
Next, I wanted it to be the default query parser, so I added to the standard
searchhandler
Hi. Markus.
Thanks for replying.
I figured out the reason this afternoon. Sorry for not following up on this
list. I posted it to the dev list because I think it is a BUG.
I finally know why it doesn't return the
Hi Raakhi,
I am not sure if I understand your usecase correctly,
but if you need this custom location to test against an
existing schema/config file I found this snippet [1].
Otherwise the solr home can be set with
-Dsolr.solr.home=/opt/solr/example
more information is available here [2]
It appears that the defType parameter is not being set by the request
handler.
What do you get when you append echoParams=all to your search url?
So you have something like this entry in solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str
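For reference, a complete version of such a handler with defType in the
defaults (the parser name "myparser" is a placeholder for whatever name you
registered your QParserPlugin under in solrconfig.xml):

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <!-- make the custom parser the default for this handler -->
    <str name="defType">myparser</str>
    <!-- echo the params back so you can verify defType is applied -->
    <str name="echoParams">all</str>
  </lst>
</requestHandler>
```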
On Mon, Jun 7, 2010 at 9:23 PM, K Wong wongo...@gmail.com wrote:
Did you install tomcat 5.5 from an RPM?
I did not, on the advice of that same Solr wiki article that manual
installation is recommended because distribution Tomcats are either
old or quirky. There haven't been any issues with this,
I had read http://wiki.apache.org/solr/DataImportHandler but I didn't
realize that was for multivalued fields.
Thank you very much!
On Tue, Jun 8, 2010 at 8:01 AM, Alexey Serba ase...@gmail.com wrote:
Hi Alberto,
You can add child entity which returns multiple records, i.e.
<entity name="root"
For the record, I've been running one of our production Solr 1.4
installs under the Ubuntu 9.04 tomcat6 + OpenJDK packages, and haven't
run into difficulties yet.
On Tue, Jun 8, 2010 at 8:00 AM, K Wong wongo...@gmail.com wrote:
Okay. I've been running multicore Solr 1.4 on Tomcat 5.5/OpenJDK 6
How would I do a facet search if I did this and not get duplicates?
Thanks,
Moazzam
On Mon, Jun 7, 2010 at 10:07 AM, Israel Ekpo israele...@gmail.com wrote:
I think you need a 1:1 mapping between the consultant and the company, else
how are you going to run your queries for let's say
The way I did it with SQL Server is like this:
Let's say you have a field called Company which is multivalued, then
you would declare it like this in schema.xml:
<field name="Company" type="text" indexed="true" stored="true"
       multiValued="true" />
in your SQL query, you would do this:
select
Hi All,
I've been running some tests using 6 shards, each one containing about 1
million documents.
Each shard is running in its own virtual machine with 7 GB of ram (5GB
allocated to the JVM).
After about 1100 unique queries the shards start to struggle and run out of
memory. I've reduced
Hi all,
We are about to test out various factors to try to speed up our indexing
process. One set of experiments will try various maxRamBufferSizeMB settings.
Since the factors we will be varying are at the Lucene level, we are
considering using the Lucene Benchmark utilities in
Hey Andrew,
Just wondering if you ever managed to run TextProfileSignature based
deduplication. I would appreciate it if you could send me the code fragment
for it from solrconfig.
I have currently something like this, but not sure if I am doing it right:
<updateRequestProcessorChain
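For comparison, a dedupe chain along the lines of the Solr wiki's
deduplication example (the signatureField and fields values here are
placeholders; adjust them to your schema):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">true</bool>
    <!-- fields that feed the signature; placeholder names -->
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">org.apache.solr.update.processor.TextProfileSignature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```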
Currently using solr 1.4,
I am looking for a way to conduct a Solr search with a single query
and multiple locations. The goal is not to find the intersection of these
locations (so I can't just apply multiple filter queries) but to
return documents in range 1 OR range 2.
I am currently working with
On Tue, Jun 8, 2010 at 11:00 AM, K Wong wongo...@gmail.com wrote:
Okay. I've been running multicore Solr 1.4 on Tomcat 5.5/OpenJDK 6
straight out of the centos repo and I've not had any issues. We're not
doing anything wild and crazy with it though.
It's nice to know that the wiki's advice
2010/5/22 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
Just copying the dih-extras jar file from the nightly should be fine.
Now that I've finally got a server on which to attempt to set these
things up... this turns out not to be a viable solution. The extras
jar does contain the
When I wanted to add some content to the solrj wiki for glassfish, I had a
problem in that their anti-spam measures broke the ability to create a new
account. Someone here (Chris I think) was kind enough to create a ticket in
the correct place:
https://issues.apache.org/jira/browse/INFRA-2726
Andrew Clegg wrote:
Re. your config, I don't see a minTokenLength in the wiki page for
deduplication; is this a recent addition that's not documented yet?
Sorry about this -- stupid question -- I should have read back through the
thread and refreshed my memory.
So let me understand what you said. You went through the trouble to
implement a geospatial
solution using Solr 1.5, it worked really well. You saw no signs of
instability, but decided not to use it anyway?
Did you put it through a routine of tests and witness some stability
problem? Or just
The following should work on centos/redhat, don't forget to edit the paths,
user, and java options for your environment. You can use chkconfig to add it
to your startup.
Note, this script assumes that the Solr webapp is configured using JNDI in a
tomcat context fragment. If not you will need to
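The script itself was cut from this digest, so here is a generic sketch of
such a chkconfig-style init script (not the original author's; TOMCAT_HOME,
TOMCAT_USER, and the memory option are placeholders to edit for your host):

```shell
#!/bin/sh
# chkconfig: 345 80 20
# description: Tomcat (Solr) init script -- generic sketch, not the original.
# Placeholders: edit TOMCAT_HOME, TOMCAT_USER, and JAVA_OPTS for your environment.
TOMCAT_HOME=${TOMCAT_HOME:-/opt/tomcat}
TOMCAT_USER=${TOMCAT_USER:-tomcat}
export JAVA_OPTS="-Xmx1024m"

tomcat_ctl() {
  case "$1" in
    start)   su - "$TOMCAT_USER" -c "$TOMCAT_HOME/bin/startup.sh" ;;
    stop)    su - "$TOMCAT_USER" -c "$TOMCAT_HOME/bin/shutdown.sh" ;;
    restart) tomcat_ctl stop; sleep 5; tomcat_ctl start ;;
    *)       echo "Usage: {start|stop|restart}"; return 1 ;;
  esac
}
```

Wire `tomcat_ctl "$1"` at the bottom to dispatch, drop the file into
/etc/init.d/, and `chkconfig --add` it.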
Hi folks,
We have a data cleanup effort going on here, and I thought I would
share some information about how to poke around your facet values.
Most of this comes from:
http://wiki.apache.org/solr/SimpleFacetParameters
Exploring Facet Values:
---
facet field to examine:
Hi Group,
I have been trying to index about 70 million records in the Solr index; the
data is coming from the MySQL database, and I am using the DataImportHandler
with batchSize set to -1. When I perform a full-import, it indexes about 27
million records then throws the following exception:
Any
As the index gets larger, the underlying housekeeping of the Lucene
index sometimes causes pauses in the indexing. The JDBC connection
(and/or the underlying socket) to the MySql database can time out
during these pauses.
- If it is not set, you should add this to your JDBC url:
Hi Glen,
Thank you very much for the quick response. I would like to try increasing
netTimeoutForStreamingResults; is that something I can do on the
MySQL side, or on the Solr side?
Giri
On Tue, Jun 8, 2010 at 6:17 PM, Glen Newton glen.new...@gmail.com wrote:
As the index gets larger,
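(The property in question is a MySQL Connector/J connection parameter, so it
goes on the JDBC URL in the DIH dataSource on the Solr side; host, database,
credentials, and the timeout value below are placeholders:)

```xml
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb?netTimeoutForStreamingResults=3600"
            batchSize="-1" user="dbuser" password="dbpass"/>
```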
On 6/8/10 3:36 PM, Dragisa Krsmanovic wrote:
When we send <optimize waitFlush="false" waitSearcher="false"/> the HTTP
response sometimes takes more than 60s and our client times out after
that. The whole operation takes 200+ seconds. Aren't waitFlush=false and
waitSearcher=false supposed to tell Solr to
(10/06/09 7:36), Dragisa Krsmanovic wrote:
When we send <optimize waitFlush="false" waitSearcher="false"/> the HTTP
response sometimes takes more than 60s and our client times out after
that. The whole operation takes 200+ seconds. Aren't waitFlush=false and
waitSearcher=false supposed to tell Solr to
Ah, I didn't know this. This should be much simpler. Thank you very much!
On Tue, Jun 8, 2010 at 12:57 AM, Ahmet Arslan iori...@yahoo.com wrote:
Meanwhile, I'd like to try using POST, but I didn't find information
about how to do this. Could someone point me to a link to some sample code?
Also, you may need this system property in your client app:
java -Dfile.encoding=utf-8 ..
On Tue, Jun 8, 2010 at 4:52 PM, jlist9 jli...@gmail.com wrote:
Ah, I didn't know this. This should be much simpler. Thank you very much!
On Tue, Jun 8, 2010 at 12:57 AM, Ahmet Arslan
To get the schema.xml file, look at how Solr's admin/index.jsp fetches
it under the Schema button.
You cannot get a nice, cleanly parsed schema object tree from SolrJ.
On Tue, Jun 8, 2010 at 5:16 AM, Peter Karich peat...@yahoo.de wrote:
Hi Raakhi,
I am not sure if I understand your usecase