Thank you.
I tried Luke with IndexReader disabled, but it seems the index is completely broken, as it complains: ERROR: java.lang.Exception: there is no valid Lucene index in this directory.
Sounds like I am out of luck, is that so?
1. No, if IndexReader is on I get the same error message from CheckIndex.
2. It doesn't do anything but print the error message I posted before, then quits. The full error trace is:
Opening index @ E:\...\zookeeper\solr\collection1\data\index
ERROR: could not read any segments
My Lucene index - built with Solr using Lucene 4.1 - is corrupted. When I try to read the index using the following code, I get an org.apache.solr.common.SolrException: No such core: collection1 exception:
File configFile = new File(cacheFolder + File.separator + "solr.xml");
CoreContainer container =
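For reference, the usual Solr 4.x embedded setup looks roughly like the sketch below; the load() and EmbeddedSolrServer lines are my reconstruction of where the snippet breaks off, and the paths are placeholders, not the original ones.

import java.io.File;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.core.CoreContainer;

public class OpenCore {
    public static void main(String[] args) throws Exception {
        String cacheFolder = args[0];                        // the solr home directory
        File configFile = new File(cacheFolder, "solr.xml");
        CoreContainer container = new CoreContainer();
        container.load(cacheFolder, configFile);             // parses solr.xml and creates the cores
        // "No such core: collection1" usually means the core failed to
        // register during load(), e.g. because its index would not open.
        EmbeddedSolrServer server = new EmbeddedSolrServer(container, "collection1");
        server.shutdown();
        container.shutdown();
    }
}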
Hi
Thanks. But I am already using CheckIndex, and that error comes from the CheckIndex utility itself: it could not even continue after reporting "could not read any segments file in directory".
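For anyone following along, the invocation looks roughly like this (jar version and index path are placeholders):

  java -cp lucene-core-4.1.0.jar org.apache.lucene.index.CheckIndex /path/to/index
  java -cp lucene-core-4.1.0.jar org.apache.lucene.index.CheckIndex /path/to/index -fix

-fix rewrites the segments file, dropping any unreadable segments (and the documents in them); but when no segments_N file can be read at all, as reported above, CheckIndex has nothing left to repair from.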
Hi
I need to store and retrieve some custom Java objects using Solr, and I have used ByteField and Java serialisation for this. Using the embedded Jetty server I can see these byte data, but when I use the SolrJ API to retrieve the data they are not available. Details are below:
My schema:
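For illustration, a minimal SolrJ sketch of the write path being described; the field names, and the use of solr.BinaryField (Solr's type for raw byte[] values, as distinct from the single-byte numeric ByteField), are my assumptions rather than the poster's actual setup:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.apache.solr.common.SolrInputDocument;

public class SerialiseForSolr {
    // Serialise a custom object into a byte[] destined for a binary-typed field.
    public static SolrInputDocument toDoc(String id, Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);
        out.close();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", id);
        doc.addField("payload", bytes.toByteArray()); // hypothetical solr.BinaryField field
        return doc;
    }
}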
Hi, the full stack trace is below.
SEVERE: Unable to create core: collection1
org.apache.solr.common.SolrException: Error opening new searcher
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:794)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:607)
Hi
Sorry, I couldn't do this directly... the way I do this is by subscribing to a cluster of computers in our organisation and sending the job with the required memory. It gets randomly allocated to a node (one single server in the cluster) once executed, and it is not possible to connect to that specific
Hi
I am really frustrated by this problem.
I have built an index of 1.5 billion data records, with a size of about 170GB. It has been optimised and has 12 separate files in the index directory, looking like this:
_2.fdt --- 58G
_2.fdx --- 80M
_2.fnm --- 900 bytes
_2.si --- 380 bytes
Hi, thanks for your advice!
I have deliberately allocated 32G to the JVM, with the command java -Xmx32000m -jar start.jar etc. I am using our server, which I think has a total of 48G. However it still crashes with that error whenever I specify any keywords in my query. The only query that worked, as
Thanks again for your kind input!
I followed Tim's advice and tried to use MMapDirectory. Then I get an OutOfMemoryError on Solr startup (tried giving only 8G, then 4G, to the JVM).
I guess this truly indicates that there isn't sufficient memory for such a huge index.
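For reference, MMapDirectory is normally selected in solrconfig.xml along these lines (a sketch, not the poster's actual config):

  <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>

Note that mmap consumes virtual address space and the OS file cache rather than Java heap, so an OutOfMemoryError at startup with a small -Xmx more likely points at heap users such as sort or field caches than at the mapped index itself.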
On another thread I posted days before,
Hi
I have built a 300GB index using Lucene 4.1 and now it is too big to query efficiently. I wonder if it is possible to split it into shards and then use a SolrCloud configuration?
I have looked around the forum but was unable to find any tips on this. Any help please?
Many thanks!
Hi all
I am learning to use the MoreLikeThis handler, which seems very straightforward, but I ran into some problems when testing it and I wonder if you could help me.
In my schema I have
<field name="page_content" type="text" indexed="true" stored="false" required="false" multiValued="false" termVectors="true"/>
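With termVectors enabled as above, a MoreLikeThis query over that field can be issued roughly like this (handler path and parameter values are illustrative, not from the original setup):

  http://localhost:8983/solr/select?q=id:doc1&mlt=true&mlt.fl=page_content&mlt.mintf=1&mlt.mindf=1

mlt.fl names the field(s) to mine for interesting terms, and mlt.mintf/mlt.mindf set the minimum term and document frequencies a term needs before it is used.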
With the fields, I think it should be ok because I have another two fields using sfloat that are multivalued, and the range queries work ok.
Any hints are appreciated! Thanks!
zqzuk wrote:
Hi all,
in my schema I have two multivalued fields as
<field name="start_year" type="sfloat" indexed="true" stored
Hi all,
in my schema I have two multivalued fields as
<field name="start_year" type="sfloat" indexed="true" stored="true" multiValued="true"/>
<field name="end_year" type="sfloat" indexed="true" stored="true" multiValued="true"/>
and I issued a query as start_year:[400 TO *]; the result seems to be incorrect because
Hi,
I'm trying to stress test Solr as well, and I would love some advice on how to manage it properly.
I'm using Solr 1.3 and Tomcat 5.5.
Thanks a lot,
zqzuk wrote:
Hi, I am doing a stress test of my Solr application to see how many concurrent requests it can handle and how long
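One common way to drive that kind of concurrent load is Apache Bench; a sketch, with the URL, counts, and port (Tomcat's default 8080) as placeholders:

  ab -n 1000 -c 20 "http://localhost:8080/solr/select?q=video&rows=10"

-c sets the number of concurrent clients and -n the total number of requests, which together exercise exactly the how-many-and-how-long question being asked.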
Hi all, in my application I need to index some seminar data. The basic assumption is that each seminar can be allocated to multiple time slots, each with a start time and an end time. For example, on 1st March it is allocated to 14:00 - 16:00; then on 1st April it is reallocated to 10:00 - 11:30.
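One possible mapping, sketched as a pair of parallel multivalued date fields (names are hypothetical), where slot i of a seminar occupies position i in both fields:

  <field name="slot_start" type="date" indexed="true" stored="true" multiValued="true"/>
  <field name="slot_end" type="date" indexed="true" stored="true" multiValued="true"/>

The usual caveat is that a range query such as slot_start:[X TO Y] matches a document if any one value qualifies; Solr cannot pair slot_start[i] with slot_end[i] inside a single document, so constraints that span both ends of one slot may need one document per slot instead.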
Hi, Solr has reserved some special chars for building its queries, such as +, * and :, so any query must escape these chars or exceptions will occur. I wonder where I can find a complete list of the chars I need to escape in a query, and what the encoding/decoding method (URL?) is.
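For what it's worth, the query parser's reserved characters are + - && || ! ( ) { } [ ] ^ " ~ * ? : \, and the escaping happens at two separate layers; a sketch, assuming a SolrJ version that ships ClientUtils.escapeQueryChars:

import java.net.URLEncoder;
import org.apache.solr.client.solrj.util.ClientUtils;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        // Layer 1: backslash-escape Lucene query syntax.
        String escaped = ClientUtils.escapeQueryChars("C++:tutorial"); // C\+\+\:tutorial
        // Layer 2: ordinary URL encoding when the query travels over HTTP.
        String url = "http://localhost:8983/solr/select?q=" + URLEncoder.encode(escaped, "UTF-8");
        System.out.println(url);
    }
}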
will be served first, and in the worst case the last request may have to wait for a long time until all preceding requests have been answered?
Thanks
zqzuk wrote:
Hi, I am doing a stress test of my Solr application to see how many concurrent requests it can handle and how long it takes
Hi, is it possible to have append-like updates, where if two records with the same id are posted to Solr, the contents of the two merge into a single record with that id? I am asking because my program works in a multi-threaded manner where several threads produce several parts of a final
Hi, I am using the SimplePostTool to post files to Solr. I have encountered a problem with the content of the xml files. I noticed that if my xml file has fields whose values contain the character <, > or &, the post fails and I get the exception:
javax.xml.stream.XMLStreamException: ParseError at
Thanks for the quick advice!
pbinkley wrote:
You should encode those three characters, and it doesn't hurt to encode
the ampersand and double-quote characters too:
http://en.wikipedia.org/wiki/XML#Entity_references
Peter
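If the XML is being assembled by hand rather than via a library, a small helper along these lines covers all five predefined entities (a sketch, not part of the original thread):

public final class XmlEscape {
    // Escape the five characters that have predefined XML entities.
    public static String escape(String s) {
        // '&' must be replaced first, otherwise the entities added by the
        // later replacements would themselves get escaped.
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }
}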
Hi, I am using the post.jar tool to post files to Solr. I'd like to post everything in a folder, e.g. myfolder, so I typed the command:
java -jar post.jar c:/myfolder/*.xml
This works perfectly when I test on a sample of 100k xml files. But when I work on the real dataset, there are over 1m files
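If the limit being hit is the expanded argument list (a million file names on one command line), one crude workaround is to batch the invocation, sketched here for the Windows shell; %f becomes %%f inside a .bat file, and a JVM launch per file is slow but bounded:

  for %f in (c:\myfolder\*.xml) do java -jar post.jar "%f"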
of Analyzers, have a look at the schema.xml file for defining your fields.
On Nov 18, 2007, at 11:03 AM, zqzuk wrote:
Hi, I understand that in Solr we index documents by issuing a command to post xml documents to the Solr server. But how do I do the same indexing without using the solr
Thanks for your tips Chris, I really appreciate it!
hossman wrote:
: Hi, I have played with the solr example web app, it works well. I wonder how
: do I do the same searching, or faceted searching, without relying on the web
: application, i.e., sending requests by urls etc. In other words,
Hi, I have been seeing tutorials and messages discussing SolrJ, the magic client package which eases the tasks of building Solr-powered applications... but I have been searching around without success. Could you please give me some directions?
Many thanks!
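A minimal sketch of the usual SolrJ entry point, assuming the SolrJ that ships with Solr 1.3 (CommonsHttpSolrServer; later releases renamed it); the URL and facet field are placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrjHello {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("video");
        query.setFacet(true).addFacetField("cat"); // facet field from the example schema
        QueryResponse rsp = server.query(query);
        System.out.println(rsp.getResults().getNumFound() + " hits");
    }
}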
Hi, I have played with the solr example web app and it works well. I wonder how I can do the same searching, or faceted searching, without relying on the web application, i.e., sending requests by urls etc. In other words, essentially how does the search and faceting work? Could you please point me to
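For what it's worth, the example app's search and facet pages are themselves just HTTP requests against the select handler, so any client can issue them directly; an illustrative request using the example schema's cat field:

  http://localhost:8983/solr/select?q=solr&facet=true&facet.field=cat

The web application only renders what comes back; the searching and faceting happen inside the select request itself.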