Hello,
Thanks for your reply.
That's what I understand when I look at the exception.
However, if you look at my XML update command, there's no empty string
anywhere. That's why I don't understand why this exception is raised.
Thanks,
Ben
Ok, sorry but the issue was located between my keyboard and my chair.
The field _collection_id is required in the schema and not filled in my
update request.
As the exception didn't warn me about any required field, I didn't look at
this.
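For anyone hitting the same thing, a minimal XML update that fills the required field might look like the sketch below (only _collection_id comes from this thread; the other field names are made-up examples):

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="_collection_id">collection-1</field>
    <field name="title_s">Example title</field>
  </doc>
</add>
```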
Thanks anyway,
Ben
Hi Gora,
Thank you for the reply
LFH_SIG? Isn't LFH_SIG an internal reference in commons-compress rather than a
schema item?
http://commons.apache.org/proper/commons-compress/apidocs/org/apache/commons/compress/archivers/zip/ZipLong.html#LFH_SIG
regards,
Joel
On 18 December 2014 at 07:45, Gora Mohanty
On 17 December 2014 at 18:08, Erick Erickson erickerick...@gmail.com wrote:
This is seeming like a puzzler...
I’ve got to the point that I do get suggestions if I find no document
at all. The problem was seemingly caused by the way I quoted my search
queries.
Still I don’t get suggestions for
To be honest, I don't have a clue what the syntax would be.
I tried something like
{!type=join from=PersonIdsS to=PersonID fromIndex=assignment}({!type=join
from=CompanyID to=CompIDS fromIndex=company v='NationalitySFD:Canada'})
AND type_level:parent
but this is two joins from Person to Company
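For reference, the general shape of a single cross-core join with the Solr join parser is sketched below; all field and core names here are placeholders, not the ones from this thread:

```
{!join from=id_in_from_core to=id_in_to_core fromIndex=other_core}some_field:value
```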
Hi all,
you are right, I was doing everything right but I wasn't using facets for
seeing the result.
I was mixing indexing and analysis.
Now I'm working on the next problem: having keepwords that consist of more
than one word... but this is another problem :)
thank you all, your hints were
The problem arises when a Solr core is added over the network and the core uses
the DIH (DataImportHandler).
I tried to add two identical cores to Solr via the Web Interface. The first
is placed on the local machine,
while the second is placed on a remote machine.
In the first case it works, no
Aagh... now I see the problem: join has no 'toIndex' parameter, and now
I'm not able to come up with any solution
On Thu, Dec 18, 2014 at 11:57 AM, marotosg marot...@gmail.com wrote:
To be honest, I don't have a clue what the syntax would be.
I tried something like
{!type=join
Hello,
Is it possible to differentiate direction in one field?
I have an interview, and in it there are tags: <d1>Talking first person</d1>
<d2>Talking second person</d2><d1>First person</d1><d2>Second person</d2>
etc.
When I want to search only the replies from the first person,
must I split into more fields, or should I use some
Kind of depends on how you're going to query.
If you're always going to query with a direction, then you can probably prefix
all tokens with the direction.
If you're always going to query simple text bits, then using phrase search with
d1 and d2 being words might also work.
If you're going for
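To make the first suggestion concrete, here is a minimal sketch (in Python, purely illustrative; only the d1/d2 tags come from the question) of prefixing tokens with their direction before indexing:

```python
# Sketch: prefix every token with its speaker direction before indexing,
# so a search for first-person replies can target "d1_" tokens only.
def prefix_tokens(segments):
    """segments: list of (direction, text) pairs -> list of prefixed tokens."""
    tokens = []
    for direction, text in segments:
        tokens.extend(f"{direction}_{word.lower()}" for word in text.split())
    return tokens

print(prefix_tokens([("d1", "Talking first person"), ("d2", "Second person")]))
# → ['d1_talking', 'd1_first', 'd1_person', 'd2_second', 'd2_person']
```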
On 12/18/2014 1:30 AM, bengates wrote:
Ok, sorry but the issue was located between my keyboard and my chair.
The field _collection_id is required in the schema and not filled in my
update request.
As the exception didn't warn me about any required field, I didn't look at
this.
The reason
On 12/18/2014 12:35 AM, rashi gandhi wrote:
Also, as per our investigation, there is currently ongoing work in the Solr
community to support this concept of distributed/global IDF. But I wanted
to know if there is any solution possible right now to manage/control the
score of the documents during
What's the full stack trace in your server logs?
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 17 December 2014 at 16:58, Trilok Prithvi trilok.prit...@gmail.com wrote:
When I run the following query (Solr 4.10.2) with edit-distance, I'm
Martin,
If you would like to get suggestions even for terms occurring in the index, set
spellcheck.alternativeTermCount to a value > 0. You can use the same value
as for spellcheck.count, or a lower value if you want fewer results than for
terms not in the index.
See
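A sketch of the relevant request-handler defaults (the spellcheck parameter names are from the spellcheck component; the values here are only examples):

```xml
<lst name="defaults">
  <str name="spellcheck">true</str>
  <str name="spellcheck.count">5</str>
  <str name="spellcheck.alternativeTermCount">5</str>
</lst>
```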
Your description and your stacktrace seem to mismatch.
You say you upload a plain text file, yet the stacktrace is for sending
a zip file to an Extract (Tika) update handler. And the error is most
probably for some meta fields that Tika generates in the process.
Regards,
Alex.
Hi guys,
I have this field in my schema:
<field name="ds_orgao_julgador" type="string" indexed="true" stored="true" />
And I need to use this field as a facet but with a different display name,
meaning that instead of displaying ds_orgao_julgador I'd like to display
Órgão Julgador.
I tried this:
str
How would you solve this problem if it were a database? Internal field
names are not something that needs to be exposed directly to the user.
If you need to map them, map them in your client, since you are
hardcoding the field names anyway.
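As a sketch, such a client-side mapping (in Python, purely illustrative; the label comes from the question) could be as simple as:

```python
# Map internal Solr field names to user-facing labels in the client.
FACET_LABELS = {"ds_orgao_julgador": "Órgão Julgador"}

def display_name(field_name):
    # Fall back to the raw field name when no label is defined.
    return FACET_LABELS.get(field_name, field_name)

print(display_name("ds_orgao_julgador"))
# → Órgão Julgador
```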
Regards,
Alex.
Do you have the libraries that DIH requires in both machines at the
same path? They are defined near the top of solrconfig.xml
Regards,
Alex.
On 18 December 2014 at 04:37, Axel Burandt
How do you _know_ when something is English or Spanish? You didn't
describe your logic. Or do you need language auto-detect?
One place you could start looking is UpdateRequestProcessors, they go
between your handler's work and the schema level processing and you
can insert auto-detect, field
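For the auto-detect route, here is a sketch of an update chain using Solr's language-identifier update processor; the field names (title, body, language_s) are assumptions:

```xml
<updateRequestProcessorChain name="langid">
  <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">title,body</str>
    <str name="langid.langField">language_s</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```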
Is it possible for Solr's SpellCheckComponent to suggest "Rockpoint" if the
user mistypes "Rock piont"? Currently I have it making the correct
suggestions when I have "Rockpiont" or "Rock point", but not the example I
gave. Here are the relevant parts of my config files:
Matt,
Unfortunately this kind of correction is not supported. The word break spell
checker works independently from the distance-based spellcheckers so it cannot
correct both whitespace problems and other misspellings together.
If you really need this, then you'll need to go with the
Hi Erick,
This question came to my mind some time after seeing your reply: why are Solr
configurations kept on ZooKeeper?
As far as I know, ZooKeeper is a generic system that can be used for any
cross-node configuration, not only with Solr. Solr configurations are
Solr-specific, so how does ZooKeeper know/read
: When I run the following query (Solr 4.10.2) with edit-distance, I'm
: getting a null pointer exception:
:
: *host/solr/select?q=fld:(Event ID)&fl=strdist(eventid,fld_alphaonly,edit)*
probably this bug: https://issues.apache.org/jira/browse/SOLR-6540
: <response><lst name="error"><str
Zookeeper knows nothing at all about Solr, it's fully generic. The code
for SolrCloud on each of the Solr instances _does_ know about Zookeeper,
and where to expect certain information, specifically where the configurations
are stored on Zookeeper. So on startup, the Solr instance queries
I have not tried this as of yet, but is there any limitation to the nesting
of documents? Specifically can sub documents have their own sub
documents? Are there any practical limits on this or performance impacts
from a search/indexing perspective to consider?
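To illustrate the nesting question, Solr's JSON update format nests children under the reserved "_childDocuments_" key, and a child can itself carry children; here is a sketch of such a body (all field names besides "_childDocuments_" are made up):

```python
import json

# Build a parent -> child -> grandchild nesting, as in Solr's JSON update format.
grandchild = {"id": "gc-1", "level_s": "grandchild"}
child = {"id": "c-1", "level_s": "child", "_childDocuments_": [grandchild]}
parent = {"id": "p-1", "level_s": "parent", "_childDocuments_": [child]}

# The update body is a JSON array of root documents.
body = json.dumps([parent])
```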
Thanks a lot, and it's clear to me now, Erick.
On 19 December 2014 at 11:08, Erick Erickson erickerick...@gmail.com
wrote:
Zookeeper knows nothing at all about Solr, it's fully generic. The code
for SolrCloud on each of the Solr instances _does_ know about Zookeeper,
and where to expect certain
Hi,
We have 2 shards, each one has 2 replicas and each Solr instance has a
single thread that constantly uses 100% of CPU:
http://lucene.472066.n3.nabble.com/file/n4175088/Screenshot_896.png
After restart it is running normally for some time (approximately until Solr
comes close to Xmx limit),
I've been experiencing this problem. Running VisualVM on my instances
shows that they spend a lot of time creating WeakReferences
(org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference that is).
I think what's happening here is the heap's not big enough for Lucene's
caches and it ends up
Here is the stack trace...
java.lang.NullPointerException
    at org.apache.lucene.search.spell.LevensteinDistance.getDistance(LevensteinDistance.java:66)
    at org.apache.solr.search.function.distance.StringDistanceFunction$1.floatVal(StringDistanceFunction.java:54)
    at
Thanks Hoss.
But how do we avoid this error?
Is there any way to tweak the query and return an empty result instead of a
null pointer exception?
On Thu, Dec 18, 2014 at 4:31 PM, Trilok Prithvi trilok.prit...@gmail.com
wrote:
Here is the stack trace...
java.lang.NullPointerException at
: But how do we avoid this error?
: Is there anyway to tweak the query and return empty result instead of null
: pointer exception?
Did you look at the issue I linked to?
: probably this bug: https://issues.apache.org/jira/browse/SOLR-6540
A workaround in some contexts can be to wrap the
I should have googled first; I've read it is possible to index to arbitrary
depths.
I've done a little looking into the query syntax. Is there any work or
interest in supporting an API similar to what Elasticsearch supports for
these types of queries? They seem much simpler to read/write/understand.
Right, I've seen situations where, as Solr uses a high percentage of the
available memory, Java spends more and more time in GC cycles. Say
you've allocated 8G to the heap. Say further that the steady state for
Solr needs 7.5g (numbers made up...). Now the GC algorithm only has
0.5G to play with
Hi,
I'm trying to index documents using SolrJ. I'm getting duplicate documents
when adding a child document to a parent in the scenario below. I have a
uniqueKey configured in schema.xml.
1) Adding child to parent. If the parent already has a child then I can just
retrieve that parent and add child