Seems to be a rather innocent network issue based on your stacktrace:
Caused by: java.sql.SQLException: Network error IOException: Address
already in use: connect
Can you recheck connections and retry?
Sent from my iPhone
On Sep 23, 2011, at 3:34 PM, Vazquez, Maria (STM)
On Fri, Sep 23, 2011 at 11:59 AM, nagarjuna nagarjuna.avul...@gmail.com wrote:
Yes, Gora, I set up an RSS feed for my blog, and I have the following URL
for the RSS feed of my blog
It would be best if you stated your exact problem up front,
rather than having to dig through to find where exactly the
I do not know how to search both cores without defining the shards
parameter. Could you show me a solution to my issue?
On 9/24/11, Yury Kats [via Lucene]
ml-node+s472066n3363164...@n3.nabble.com wrote:
On 9/23/2011 6:00 PM, hadi wrote:
I index my files with solrj and crawl my sites with
Hi,
sorry for this question but I am hoping it has a quick solution.
I am sending multiple GET request queries to Solr, but Solr is not
returning the responses in the sequence I send the requests.
The shortest responses arrive back first
I am wondering whether I can add a tag to the
Thanks Otis,
this helps me tremendously.
Kind regards,
Roland
Otis Gospodnetic wrote:
Hi Roland,
I did this:
http://search-lucene.com/?q=sort+by+function&fc_project=Solr&fc_type=wiki
Which took me to this:
http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function
And further on that page
Hi all
I want to send a PDF file to Solr for indexing. There is a command to send
Solr a file via HTTP POST:
http://wiki.apache.org/solr/ExtractingRequestHandler#Getting_Started_with_the_Solr_Example
but *curl* is for Linux, and I want to use Solr on Windows.
thanks a lot.
Also, when I use that command on Linux, I see this error:
---
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 400 ERROR:unknown field 'ignored_meta'</title>
</head>
<body><h2>HTTP ERROR 400</h2>
<p>Problem accessing
On 9/24/2011 3:09 AM, hadi wrote:
I do not know how to search both cores without defining the shards
parameter. Could you show me a solution to my issue?
See this: http://wiki.apache.org/solr/DistributedSearch
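As an alternative to passing shards on every request, the parameter can be baked into a search handler's defaults in solrconfig.xml. A sketch (the handler name, core names, and ports here are illustrative assumptions; verify against your setup):

```xml
<!-- Hypothetical handler: any request to /multicore fans out to both
     cores without the client sending a shards parameter. -->
<requestHandler name="/multicore" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">localhost:8983/solr/core0,localhost:8983/solr/core1</str>
  </lst>
</requestHandler>
```

Note the usual distributed-search caveats on that wiki page still apply (unique keys across shards, no distributed MLT, etc.).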
hello
Solr Tutorial page explains how to index an XML file, but when I try to index
an XML file with this command:
~/Desktop/apache-solr-3.3.0/example/exampledocs$ java -jar post.jar solr.xml
I get this error:
SimplePostTool: FATAL: Solr returned an error #400 ERROR:unknown field
'name'
can anyone
I think the XML to be indexed has to follow a certain schema, defined
in schema.xml under the conf directory. Maybe your solr.xml is not doing
that.
On 24 Sep 2011, at 18:15, ahmad ajiloo ahmad.aji...@gmail.com wrote:
hello
Solr Tutorial page explains about index a xml
I read the link but the
'http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr&indent=true&q=ipod+solr'
returns an XML response that is not useful for me. I want to create the query
in solr/browse, so I need to change the template engine. Do you
know how to change that to
You should get cygwin for windows and make sure to select curl as one of the
many packages that come with cygwin when its installer runs.
On Sep 24, 2011, at 5:29 AM, ahmad ajiloo ahmad.aji...@gmail.com wrote:
Also, when I use that command on Linux, I see this error:
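If installing cygwin isn't an option, any HTTP client can do what curl does here. Below is a rough stdlib-only Python sketch of posting a file to the ExtractingRequestHandler; the port, core path, and literal.id value are assumptions taken from the wiki example, not requirements:

```python
# Rough curl replacement for Windows: POST a PDF's raw bytes to Solr's
# /update/extract endpoint. Adjust the base URL for your install.
import urllib.parse
import urllib.request

def build_extract_url(base="http://localhost:8983/solr", doc_id="doc1",
                      commit=True):
    """Build the /update/extract URL with literal.id and commit params."""
    params = {"literal.id": doc_id, "commit": "true" if commit else "false"}
    return base + "/update/extract?" + urllib.parse.urlencode(params)

def post_pdf(path, doc_id="doc1"):
    """POST the raw PDF body; Tika on the Solr side detects the format."""
    with open(path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(build_extract_url(doc_id=doc_id), data=body,
                                 headers={"Content-Type": "application/pdf"})
    return urllib.request.urlopen(req).read()
```

PowerShell or any Java HTTP client would work equally well; the point is only that nothing about the extract handler requires curl.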
OK, this is a very basic question, so please bear with me.
I see where the velocity templates are and I have looked at the
documentation and get the idea of how to write them.
It looks to me as if Solr just brings back the URLs. What I want to do is to
get the actual documents in the answer set,
Does wrapping your content in CDATAs work?
Best
Erick
On Mon, Sep 19, 2011 at 6:39 PM, chadsteele.com c...@chadsteele.com wrote:
It seems XML docs that use <doc> fail to be indexed properly, and I've
recently discovered the following fails on my installation.
/solr/update?stream.body=<doc></doc>
I have 300 cores so I feel your pain :-)
What we do is use a relative path for the file. It works if you use
../../common/schema.xml for each core, then just create a common directory
off your solr home and put your schema file there.
I found this works great with solrconfig.xml and all of its
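For reference, a sketch of what that shared-schema setup can look like in solr.xml. The core names and layout are made up, and the `schema` attribute on the core element is the old-style solr.xml syntax; verify the attribute names against your Solr version:

```xml
<!-- Hypothetical solr.xml: every core points its schema at a single
     common file via a relative path from its instanceDir. -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" schema="../../common/schema.xml"/>
    <core name="core1" instanceDir="core1" schema="../../common/schema.xml"/>
  </cores>
</solr>
```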
I don't think you can do this.
If you are sending multiple GET requests, you are doing it across different
HTTP connections. The web service has no way of knowing these are related.
One solution would be to pass a spare, unused parameter to your request,
like sequenceId=NNN and get the response
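The sequenceId idea can be sketched like this (Python, with the actual HTTP call left out; Solr echoes unrecognized parameters back in the responseHeader, so each response carries its tag):

```python
# Sketch: tag each request with a spare sequenceId parameter, collect
# responses as they arrive, and reorder them into send order.
def reorder_responses(responses):
    """Sort (sequence_id, body) pairs back into the order they were sent."""
    return [body for _, body in sorted(responses, key=lambda p: p[0])]

# Responses arriving shortest-first, out of send order:
arrived = [(2, "short"), (0, "longest response"), (1, "medium one")]
in_order = reorder_responses(arrived)
```

This keeps the client stateless about ordering: nothing on the server side needs to know the requests are related.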
Hi all, I am testing various versions of solr from trunk, I am finding that
often times the example doesn't build and I can't test out the version. Is
there a resource that shows which versions build correctly so that we can
test it out?
My guess on this is that you're making a LOT of database requests and have a
million TIME-WAIT connections, and your port range for local ports is
running out.
You should first confirm that's true by running netstat on the machine while
the load is running. See if it gives a lot of output.
One
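The netstat check above can be sketched in a few lines; feed it the output of `netstat -an` (column layout varies by OS, but the state name is enough to count on):

```python
# Sketch: count TIME_WAIT sockets from netstat output to confirm
# ephemeral-port exhaustion is the culprit.
def count_time_wait(netstat_output):
    return sum(1 for line in netstat_output.splitlines()
               if "TIME_WAIT" in line)

# Abbreviated sample of `netstat -an` output:
sample = """\
Active Internet connections
tcp 0 0 127.0.0.1:54321 127.0.0.1:3306 TIME_WAIT
tcp 0 0 127.0.0.1:54322 127.0.0.1:3306 TIME_WAIT
tcp 0 0 127.0.0.1:8983  127.0.0.1:4567 ESTABLISHED
"""
```

A count in the tens of thousands during load is the symptom to look for.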
I can't imagine that the ( or ) is a problem. So I think we need to see
how you're using SolrJ. In particular, are you asking for the
field in question to be returned (e.g. SolrQuery.setFields or addField)?
Second question: Are you sure your SolrJ is connecting to the server you
connect to with
Why is it important? What are you worried about that this implementation
detail is necessary to know about?
But the short answer is that the fq's are calculated against the whole index
and the results are efficiently cached. That's the only way that the fq can
be re-used against a different
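A toy sketch of that caching idea follows. This is not Solr's actual filterCache (which stores bitsets keyed by the filter), just the principle that a filter computed once over the whole index can be intersected with any later query:

```python
# Sketch: each fq string maps to a cached set of matching doc ids;
# the main query's results are intersected with that cached set.
filter_cache = {}

def apply_fq(fq, compute_matches):
    """Return the cached doc-id set for fq, computing it only once."""
    if fq not in filter_cache:
        filter_cache[fq] = frozenset(compute_matches(fq))
    return filter_cache[fq]

def search(q_matches, fq, compute_matches):
    """Intersect the main query's matches with the cached filter set."""
    return set(q_matches) & apply_fq(fq, compute_matches)
```

The second request with the same fq never recomputes the filter, which is exactly why an fq is cheap to reuse across different q values.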
Hmmm, what advantage does JSON have over the SolrDocument
you get back? Perhaps if you describe that we can offer better
suggestions.
Best
Erick
On Wed, Sep 21, 2011 at 5:01 AM, Kissue Kissue kissue...@gmail.com wrote:
Hi,
I am using solr 3.3 with SolrJ. Does anybody have any idea how i can
You don't do anything special for facet at index time unless you, say,
wanted to remove some value from the facet field, but then it would
NEVER be available. So if you're saying that at index time you have
certain documents 'New Year's Offers' that ONLY EVER want to
map to NEWA, NEWB, NEWY, you
You might want to review:
http://wiki.apache.org/solr/UsingMailingLists
There's really not much to go on here.
Best
Erick
On Wed, Sep 21, 2011 at 12:13 PM, roz dev rozde...@gmail.com wrote:
Hi All
We are getting this error in our Production Solr Setup.
Message: Element type "t_sort" must be
No G. The problem is that number of documents isn't a reliable
indicator of resource consumption. Consider the difference between
indexing a twitter message and a book. I can put a LOT more docs
of 140 chars on a single machine of size X than I can books.
Unfortunately, the only way I know of is
Solr dates are very specific, and your parsing exception is expected. See:
http://lucene.apache.org/solr/api/org/apache/solr/schema/DateField.html
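For example, a minimal sketch of producing the canonical form Solr's DateField expects (1995-12-31T23:59:59Z, always UTC with a trailing Z):

```python
# Sketch: render a Python datetime in Solr's required date format.
from datetime import datetime, timezone

def to_solr_date(dt):
    """Convert an aware datetime to Solr's canonical UTC form."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```

Any other format in an indexed document will produce exactly the parsing exception described.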
Best
Erick
On Thu, Sep 22, 2011 at 6:28 AM, mechravi25 mechrav...@yahoo.co.in wrote:
Hi,
Thanks for the suggestions. This is the option I tried.
What version of Solr? When you copied the default, did you set up
default values for MLT?
Showing us the request you used and the relevant portions of your
solrconifg file would help a lot, you might want to review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
On Thu, Sep 22, 2011
Just looking for hints on where to look...
We were testing single threaded ingest rate on solr, trunk version on
atypical collection (a lot of small documents), and we noticed
something we are not able to explain.
Setup:
We use defaults for index settings, windows 64 bit, jdk 7 U2. on SSD,
I suspect this is an issue with, say, your servlet container truncating
the response or some such, but that's a guess...
Best
Erick
On Thu, Sep 22, 2011 at 9:09 PM, roz dev rozde...@gmail.com wrote:
Wanted to update the list with our finding.
We reduced the number of documents which are
In general, you flatten the data when you put things into Solr. I know
that's anathema
to DB training, but this is searching <G>...
If you have a reasonable number of distinct column names, you could
just define your
schema to have an entry for each and index the associated values that way. Then
Really, really, get in the habit of looking at your query with
debugQuery=on appended; it'll save you a world of pain <G>..
customer_name:John Do*
doesn't do what you think. It parses into
customer_name:John OR default_search_field:Do*
you want something like customer_name:(+John +Do*) or
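A small helper for building that grouped form, so every term stays on the intended field instead of falling through to the default search field (the names are illustrative):

```python
# Sketch: produce field:(+term1 +term2) so the query parser keeps
# all terms on the named field.
def fielded_query(field, terms):
    """Group terms as required clauses on one field."""
    return "%s:(%s)" % (field, " ".join("+" + t for t in terms))
```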
Hmmm. I'm a little confused. Are you sure your log is going
somewhere and that you are NOT seeing any stack traces?
Because it looks like you *are* seeing them. In which case
re-throwing an error breaks your file fetch loop and stops
your processing.
I'd actually expect that you're losing
Could you please add some details here? It's really hard to figure
out what the problem is. Perhaps you could review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
On Fri, Sep 23, 2011 at 9:28 AM, Ahson Iqbal mianah...@yahoo.com wrote:
Hi
I have indexed some 1M documents, just for
There are really no differences between dynamic and static
fields performance-wise that I know of.
Personally, though, I tend to prefer static over dynamic from
a maintenance/debugging perspective. At issue is tracking
down why results weren't as expected, then spending several
days discovering
I think you should step back and consider what you're asking
for as Ken pointed out. You have different schemas. And
presumably different documents in each schema. The scores
from the different cores are NOT comparable. So how could
you combine them meaningfully? Further, assuming that the
Hmmm, why are you doing this? Why not use the latest
successful trunk build?
You can get a series of built artifacts at:
https://builds.apache.org//view/S-Z/view/Solr/job/Solr-trunk/
but I'm not sure how far back they go. How are you getting
the trunk source code? And *how* don't they build?
But
And to complete the answer of Erick,
in this search,
customer_name:Joh*
* is not considered a wildcard; it is an exact search.
Another thing (it is not your problem...):
words with wildcards are not analyzed,
so if your analyzer contains a lowercase filter,
in the index these words
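A common client-side workaround, sketched here: since wildcard terms bypass analysis, lowercase only the terms that carry wildcards before sending the query (this assumes the index side used a lowercase filter):

```python
# Sketch: lowercase wildcard/prefix terms so they match an index built
# with a lowercase filter; leave plain terms to the analyzer.
def prepare_term(term):
    return term.lower() if ("*" in term or "?" in term) else term
```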
Thanks Ludovic, you're absolutely right, I should have added that.
BTW, there are patches that haven't been committed, see:
https://issues.apache.org/jira/browse/SOLR-1604
and similar.
Best
Erick
On Sat, Sep 24, 2011 at 1:32 PM, lboutros boutr...@gmail.com wrote:
And to complete the answer of
Erik,
Unfortunately the facet fields are not static. The fields are dynamic Solr
fields and are generated by different applications.
The field names will be populated into a data store (like memcache) and
facets have to be driven from that data store.
I need to write a Custom FacetComponent which
Hey, the more hammering on trunk the better!
On Sep 24, 2011, at 13:31 , Erick Erickson wrote:
Hmmm, why are you doing this? Why not use the latest
successful trunk build?
You can get a series of built artifacts at:
https://builds.apache.org//view/S-Z/view/Solr/job/Solr-trunk/
but I'm
Agreed, but I'd rather see hammering on latest code <G>
On Sat, Sep 24, 2011 at 1:53 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
Hey, the more hammering on trunk the better!
On Sep 24, 2011, at 13:31 , Erick Erickson wrote:
Hmmm, why are you doing this? Why not use the latest
Send us the example solr.xml and schema.xml. You are missing fields
in the schema.xml that you are referencing.
On 9/24/11 8:15 AM, ahmad ajiloo ahmad.aji...@gmail.com wrote:
hello
Solr Tutorial page explains how to index an XML file, but when I try to
index
an XML file with this command:
eks,
This is clear as day - you're using Winblows! Kidding.
I'd:
* watch IO with something like vmstat 2 and see if the rate drops correlate to
increased disk IO or IO wait time
* monitor the DB from which you were pulling the data - maybe the DB or the
server that runs it had issues
*
Hi Roland,
Check this:
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">0</int>
<lst name="params">
<str name="indent">on</str>
<str name="start">0</str>
<str name="q">solr</str>
<str name="foo">1</str>  <=== from foo=1
<str name="version">2.2</str>
<str name="rows">10</str>
</lst>
I added foo=1 to the
What is the best algorithm for escaping strings before sending to Solr? Does
someone have some code?
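One common approach, sketched here: backslash-escape the Lucene query-syntax special characters before putting user text into a query (the character set follows the Lucene query syntax; the two-character operators && and || are handled by escaping the single characters, much as SolrJ's ClientUtils.escapeQueryChars does):

```python
# Sketch: escape Lucene/Solr query-syntax metacharacters so user input
# is treated as literal text, not operators.
LUCENE_SPECIALS = set('+-&|!(){}[]^"~*?:\\')

def escape_query(s):
    """Prefix each special character with a backslash."""
    return "".join("\\" + c if c in LUCENE_SPECIALS else c for c in s)
```

Note this is separate from URL-encoding, which still has to happen when the escaped query goes into an HTTP request.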
A few things I have witnessed in q using DIH handler
* Double quotes - that are not balanced can cause several issues from an
error (strip the double quote?), to no results.
* Should we use + or
Yes. It appears that & cannot be encoded in the URL, or there are really
bad results.
For example we get an error on first request, but if we refresh it goes
away.
On 9/23/11 2:57 PM, hadi md.anb...@gmail.com wrote:
When I create a query like something&fl=content in solr/browse the
& and = in the URL
Thanks a lot for your response!
I think that is exactly what's happening. It runs OK for a short time and
then starts throwing that error while some of the queries run successfully.
I had it set up with 10 threads; maybe that was too much.
I'd be very interested in that code if you don't mind sharing.