How are you constructing the Stream with classes or using a Streaming
Expression?
In either case can you post either the code or expression?
Are there more errors in the logs? The place where this NPE occurs is
where an underlying stream is null, which leads me to believe there would be
some
Can you provide a sample expression that would be able to reproduce this?
Are you able to try a newer version by chance - I know we've fixed a few
NPEs recently, maybe https://issues.apache.org/jira/browse/SOLR-14700
On Thu, Jan 21, 2021 at 4:13 PM ufuk yılmaz
wrote:
> Solr version 8.4. I’m gett
that syntax isn’t a syntax that Solr intentionally recognizes at all.
At very best, the select handler is using the default field (if it’s defined).
Although not having a “q” parameter means all bets are off. On a local copy of
7.3
I have lying around I get a valid response, but using a Matc
Yes, we are sure that this is not a typo.
Actually we did more experiments and found that
1) https://hostname:8983/solr/my_collection/select?ids=169455599|1
2) https://hostname:8983/solr/my_collection/select?q=id:169455599|1
3) https://hostname:8983/solr/my_collection/get?ids=169455599|1
1) thro
Hmm, an NPE is weird in any case, but assuming “ids” is a field, your syntax
is wrong
q=ids:111|222
or
q=ids: (111 222) would do too.
Are you sure you used this syntax before? Or is it a typo?
Erick
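A quick sketch of how those corrected forms look once percent-encoded into a /select URL (host, port and collection name here are hypothetical):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QuerySyntax {
    // Build a /select URL with the q parameter percent-encoded.
    public static String selectUrl(String q) throws UnsupportedEncodingException {
        return "http://localhost:8983/solr/my_collection/select?q="
                + URLEncoder.encode(q, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Explicit OR between the two ids:
        System.out.println(selectUrl("ids:111 OR ids:222"));
        // Equivalent grouped form:
        System.out.println(selectUrl("ids:(111 222)"));
    }
}
```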
> On Sep 3, 2020, at 1:02 PM, Louis wrote:
>
> We are using SolrCloud 7.7.2 and having some tr
This should be considered a bug. Feel free to file a JIRA for this.
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Jun 4, 2019 at 9:16 AM aus...@3bx.org.INVALID
wrote:
> Just wanted to provide a bit more information on this issue after
> experimenting a bit more.
>
> The error I've describe
Just wanted to provide a bit more information on this issue after
experimenting a bit more.
The error I've described below only seems to occur when I'm
collapsing/expanding on an integer field. If I switch the field type to a
string, no errors occur if there are missing field values within the
do
Probably SOLR-11770 and/or SOLR-11792.
In the meantime, ensure that the field has stored=true set and
that term vectors are enabled.
You'll probably have to re-index, though.
Best,
Erick
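For reference, getting tf-idf values back via the term vector component needs the field stored and with term vectors enabled; a hedged sketch of what the schema.xml entry might look like (field name and type are placeholders):

```xml
<field name="my_text" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```

termPositions and termOffsets are only needed if you also request positions/offsets; termVectors alone is enough for plain tf-idf.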
On Wed, Jul 18, 2018 at 10:38 AM, babuasian wrote:
> Hi, Running Solr version 6.5. Trying to get tf-idf values of a term ‘
I went ahead and resolved the jira - it was never seen again by us in later
versions of Solr. There are a number of bug fixes since the 6.2 release, so
I personally recommend updating!
On Wed, Nov 22, 2017 at 11:48 AM, Pushkar Raste
wrote:
> As mentioned in the JIRA, exception seems to be coming
As mentioned in the JIRA, exception seems to be coming from a log
statement. The issue was fixed in 6.3, here is the relevant line from 6.3
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.3.0/solr/core/src/java/org/apache/solr/update/PeerSync.java#L707
On Wed, Nov 22, 2017 at 1:18
Right, if there's no "fixed version" mentioned and if the resolution
is "unresolved", it's not in the code base at all. But that JIRA is
not apparently reproducible, especially on versions more recent than
6.2. Is it possible to test a more recent version? (6.6.2 would be my
recommendation.)
Erick
My bad. I found it at https://issues.apache.org/jira/browse/SOLR-9453
But I could not find it in CHANGES.txt, perhaps because it's not yet resolved.
On Tue, Nov 21, 2017 at 9:15 AM, Erick Erickson
wrote:
> Did you check the JIRA list? Or CHANGES.txt in more recent versions?
>
> On Tue, Nov 21, 201
Did you check the JIRA list? Or CHANGES.txt in more recent versions?
On Tue, Nov 21, 2017 at 1:13 AM, S G wrote:
> Hi,
>
> We are running 6.2 version of Solr and hitting this error frequently.
>
> Error while trying to recover. core=my_core:java.lang.NullPointerException
> at org.apache.s
Joel:
Would it make sense to throw a more informative error when the stream
context wasn't set? Maybe an explicit check in open() or some such?
Erick
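A hypothetical illustration of that kind of guard (not actual Solr code — the class name and message are made up):

```java
// Fail fast in open() when the context was never set, instead of letting a
// later dereference of the underlying stream throw a bare NPE.
public class GuardedStream {
    // Stands in for org.apache.solr.client.solrj.io.stream.StreamContext.
    private Object streamContext;

    public void setStreamContext(Object context) {
        this.streamContext = context;
    }

    public void open() {
        if (streamContext == null) {
            throw new IllegalStateException(
                "StreamContext must be set (setStreamContext) before open() is called");
        }
        // ... open the underlying stream ...
    }
}
```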
On Fri, Jul 14, 2017 at 8:25 AM, Joe Obernberger
wrote:
> Still stuck on this one. I suspect there is something I'm not setting in
> the StreamC
Still stuck on this one. I suspect there is something I'm not setting
in the StreamContext. I'm not sure what to put for these two?
context.put("core", this.coreName);
context.put("solr-core", req.getCore());
Also not sure which class to use for ClassifyStream? The error I'm
getting is:
ja
If you can include the stack trace and version of Solr we can see what's
causing the exception.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jul 13, 2017 at 4:33 PM, Joe Obernberger <
joseph.obernber...@gmail.com> wrote:
> Thanks for this. I'm now trying to use stream for classify, but
Thanks for this. I'm now trying to use stream for classify, but am
getting an ArrayIndexOutOfBounds error on the stream.open(). I'm
setting the streamFactory up, and including
.withFunctionName("classify", ClassifyStream.class) - but is that class
in org.apache.solr.handler?
-
StringBu
Thank you Joel - that was it.
context = new StreamContext();
context.setSolrClientCache(StaticInfo.getSingleton(props).getClientCache());
context.workerID = 0;
context.numWorkers = 1;
context.setModelCache(StaticInfo.getSingleton(props).getModelCache());
Then:
This is the working code snippet I have, if that helps:
public static void main(String[] args) throws IOException {
    String clause;
    TupleStream stream;
    List<Tuple> tuples;
    StreamContext streamContext = new StreamContext();
    SolrClientCache solrClientCache = new SolrClientCache();
    streamContext.setSolrClientCache(solrClientCache);
    // ...
}
It's most likely that you're not setting the StreamContext. New versions of
Solr expect the StreamContext to be set before the stream is opened. The
SolrClientCache also needs to be present in the StreamContext. You can take a
look at how the StreamHandler does this for an example:
https://github.com/
Yes, I'm aware that building an index is expensive and I will remove
"buildOnStartup" once I have it working. The field I added was an
attempt to get it working...
I have attached my latest version of solrconfig.xml and schema.xml (both
are in the same attachment), except that I have removed
> Data Scientist
> Toxicology and Health Sciences
> Syngenta UK
> Email: geraint.d...@syngenta.com
>
>
> -Original Message-
> From: Mark Fenbers [mailto:mark.fenb...@noaa.gov]
> Sent: 12 October 2015 12:14
> To: solr-user@lucene.apache.org
> Subject: Re: Nul
Message-
From: Mark Fenbers [mailto:mark.fenb...@noaa.gov]
Sent: 12 October 2015 12:14
To: solr-user@lucene.apache.org
Subject: Re: NullPointerException
On 10/12/2015 5:38 AM, Duck Geraint (ext) GBJH wrote:
> "When I use the Admin UI (v5.3.0), and check the spellcheck.build box"
On 10/12/2015 5:38 AM, Duck Geraint (ext) GBJH wrote:
"When I use the Admin UI (v5.3.0), and check the spellcheck.build box"
Out of interest, where is this option within the Admin UI? I can't find
anything like it in mine...
This is in the expanded options that open up once I put a checkmark in
"When I use the Admin UI (v5.3.0), and check the spellcheck.build box"
Out of interest, where is this option within the Admin UI? I can't find
anything like it in mine...
Do you get the same issue by submitting the build command directly with
something like this instead:
http://localhost:8983/so
Thanks Markus. I initially interpreted the line "It's OK to have a keyField
value that can't be found in the index" as meaning that the key field value
in the external file does not have to exist as a term in the index.
On 8 October 2014 23:56, Markus Jelsma wrote:
> Hi - yes it is worth a t
Hi - yes it is worth a ticket as the javadoc says it is ok:
http://lucene.apache.org/solr/4_10_1/solr-core/org/apache/solr/schema/ExternalFileField.html
-Original message-
> From:Matthew Nigl
> Sent: Wednesday 8th October 2014 14:48
> To: solr-user@lucene.apache.org
> Subject: NullPoin
It seems to be a modified row and referenced in EvaluatorBag.
I am not familiar with either.
Sent from my iPad
> On Nov 22, 2013, at 3:05 AM, Adrien RUFFIE wrote:
>
> Hello all,
>
> I have performed a full indexing with Solr, but when I try to perform an
> incremental indexing I get the
If this went away when you made your "id" field into a string type rather
than analyzed then it's probably not worth a JIRA...
Erick
On Thu, Nov 8, 2012 at 11:39 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Looks like a bug. If Solr 4.0, maybe this needs to be in JIRA along with
Looks like a bug. If Solr 4.0, maybe this needs to be in JIRA along with
some sample data you indexed + your schema, so one can reproduce it.
Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html
On Thu, Nov 8, 201
well, to sum it up... it doesn't really matter if I use standard or
dismax, at the moment both give me NullPointers for the same query,
although I didn't change anything since it was working ... it seems
totally random, sometimes it works a couple of times, sometimes it
doesn't :(
Weird...
Hi all,
wow, this is weird...
now before I file the JIRA issue - one thing I forgot to mention is that
I am using the edismax query parser.
I've just done the following:
1) searched with edismax parser:
/select?indent=on&version=2.2&q=beffy&fq=&start=0&rows=10&fl=*%2Cscore&qt=dismax&wt=standa
Sure, no problem, I'll submit a JIRA entry :)
Am 21.09.2010 16:13, schrieb Robert Muir:
I don't think you should get an error like this from SynonymFilter... would
you mind opening a JIRA issue?
On Tue, Sep 21, 2010 at 9:49 AM, Stefan Moises wrote:
Hi again,
well it turns out that it sti
I don't think you should get an error like this from SynonymFilter... would
you mind opening a JIRA issue?
On Tue, Sep 21, 2010 at 9:49 AM, Stefan Moises wrote:
> Hi again,
>
> well it turns out that it still doesn't work ...
> Sometimes it works (i.e. for some cores), sometimes I still get the
Hi again,
well it turns out that it still doesn't work ...
Sometimes it works (i.e. for some cores), sometimes I still get the
nullpointer - e.g. if I create a new core and use the same settings as a
working one, but index different data, then I add a synonym (e.g. "foo
=> bar") and activate
doh, looks like I only forgot to add the spellcheck component to my
edismax request handler... now it works with:
...
spellcheck
elevator
What's strange is that spellchecking seemed to work *without* that
entry, too
Cheers,
Stefan
Am 01.09.2010 13:33, schrieb Stefan Moises:
Hi ther
Ouch! Absolutely correct - quoting the URL fixed it. Thanks for saving me a
sleepless night!
cheers - rene
2010/7/26 Chris Hostetter
>
> : However, when I'm trying this very URL with curl within my (perl) script,
> I
> : receive a NullPointerException:
> : CURL-COMMAND: curl -sL
> :
> http://lo
: However, when I'm trying this very URL with curl within my (perl) script, I
: receive a NullPointerException:
: CURL-COMMAND: curl -sL
:
http://localhost:8983/solr/select?indent=on&version=2.2&q=*&fq=ListId%3A881&start=0&rows=0&fl=*%2Cscore&qt=standard&wt=standard
it appears you aren't quoting
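The problem here is shell quoting: an unquoted & ends the command, so everything after the first parameter never reaches Solr. Quoting the whole URL fixes it; when building such URLs programmatically, the parameter values should also be percent-encoded — a sketch (host and field taken from the example above):

```java
import java.net.URLEncoder;

public class FilterQueryUrl {
    // Percent-encode the fq value so ':' becomes %3A, as in the URL above.
    public static String url(String fq) throws Exception {
        return "http://localhost:8983/solr/select?q=*&rows=0&fq="
                + URLEncoder.encode(fq, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(url("ListId:881"));
    }
}
```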
On Sat, Jan 30, 2010 at 5:08 AM, Chris Hostetter
wrote:
>
> : never keep a 0.
> :
> : It is better to not mention the deletionPolicy at all. The
> : defaults are usually fine.
>
> if setting the "keep" values to 0 results in NPEs we should do one (if not
> both) of the following...
>
> 1) ch
: never keep a 0.
:
: It is better to not mention the deletionPolicy at all. The
: defaults are usually fine.
if setting the "keep" values to 0 results in NPEs we should do one (if not
both) of the following...
1) change the init code to warn/fail if the values are 0 (not sure if
there
never keep a 0.
It is better to not mention the deletionPolicy at all. The
defaults are usually fine.
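For reference, the deletion policy being discussed lives in solrconfig.xml; the stock configuration keeps at least one commit point. A sketch of that fragment (as noted above, omitting the element entirely is the safest choice):

```xml
<deletionPolicy class="solr.SolrDeletionPolicy">
  <!-- keep at least one commit point; a value of 0 here is what leads to the NPE -->
  <str name="maxCommitsToKeep">1</str>
</deletionPolicy>
```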
On Fri, Jan 22, 2010 at 11:12 AM, Stephen Weiss wrote:
> Hi Shalin,
>
> Thanks for your reply. Please see below.
>
>
> On Jan 18, 2010, at 4:19 AM, Shalin Shekhar Mangar wrote:
>
>> On We
Hi Shalin,
Thanks for your reply. Please see below.
On Jan 18, 2010, at 4:19 AM, Shalin Shekhar Mangar wrote:
On Wed, Jan 13, 2010 at 12:51 AM, Stephen Weiss
wrote:
...
When we replicate
manually (via the admin page) things seem to go well. However, when
replication is triggered by a
When you copy-paste config from the wiki, just copy what you need,
excluding documentation and comments.
On Wed, Jan 13, 2010 at 12:51 AM, Stephen Weiss wrote:
> Hi Solr List,
>
> We're trying to set up java-based replication with Solr 1.4 (dist tarball).
> We are running this to start with on a pair
On Wed, Jan 13, 2010 at 12:51 AM, Stephen Weiss wrote:
> Hi Solr List,
>
> We're trying to set up java-based replication with Solr 1.4 (dist tarball).
> We are running this to start with on a pair of test servers just to see how
> things go.
>
> There's one major problem we can't seem to get past
Just to clarify, the error is being thrown FROM a search, DURING an update.
This error is making distributed SOLR close to unusable for me. Any ideas?
Does SOLR fail on searches if one node takes too long to respond?
hossman wrote:
>
> : Hi,
> : I'm running a distributed solr index (3 nod
: Hi,
: I'm running a distributed solr index (3 nodes) and have noticed frequent
: exceptions thrown during updates. The exception (see below for full trace)
what do you mean "during updates" ? ... QueryComponent isn't used at all
when updating the index, so there may be a misunderstanding here
I'm using a very slightly modified version of a nightly build from the middle
of September.
It seems like this isn't the only report of problems of this sort:
http://markmail.org/message/zept5u3hv4ytunfv#query:solr%20commit%20connection%20reset%20mergeids+page:1+mid:aqzaaphbuow4sa5o+state:resu
What version are you using? If a nightly build, from when?
Thanks
Erick
On Wed, Dec 2, 2009 at 12:53 PM, smock wrote:
>
> Hi,
> I'm running a distributed solr index (3 nodes) and have noticed frequent
> exceptions thrown during updates. The exception (see below for full trace)
> occurs in the
I think it might be to do with the library itself
I downloaded semanticvectors-1.22 and compiled from source. Then created a demo
corpus using
java org.apache.lucene.demo.IndexFiles against the lucene src directory
I then ran a java pitt.search.semanticvectors.BuildIndex against the index and
got
On Thu, Jul 30, 2009 at 9:45 PM, Andrew Clegg wrote:
>
>
> Erik Hatcher wrote:
>>
>>
>> On Jul 30, 2009, at 11:54 AM, Andrew Clegg wrote:
>>> >> url="${domain.pdb_code}-noatom.xml" processor="XPathEntityProcessor"
>>> forEach="/">
>>> >> xpath="//*[local-name()='structCate
It's very easy to write your own entity processor. At least, that is my
experience with extending the SQLEntityProcessor to my needs. So, maybe
you'd be better off subclassing the xpath processor and handling the
xpath in a way you can keep your configuration straight forward.
Andrew Clegg sc
On Jul 30, 2009, at 12:19 PM, Andrew Clegg wrote:
Don't worry -- your hints put me on the right track :-)
I got it working with:
Now, to get it to ignore missing files without an error... Hmm...
onError="skip" or abort, or continue
Erik
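For reference, onError is an attribute on the DIH entity; a sketch against the entity from this thread (the entity name is hypothetical, the url comes from the config quoted earlier):

```xml
<entity name="pdbXml"
        processor="XPathEntityProcessor"
        url="${domain.pdb_code}-noatom.xml"
        forEach="/"
        onError="skip">
  <!-- other accepted values: "continue" and "abort" (the default) -->
</entity>
```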
Chantal Ackermann wrote:
>
>
> my experience with XPathEntityProcessor is non-existent. ;-)
>
>
Don't worry -- your hints put me on the right track :-)
I got it working with:
Now, to get it to ignore missing files without an error... Hmm...
Che
Erik Hatcher wrote:
>
>
> On Jul 30, 2009, at 11:54 AM, Andrew Clegg wrote:
>>> url="${domain.pdb_code}-noatom.xml" processor="XPathEntityProcessor"
>> forEach="/">
>>> xpath="//*[local-name()='structCategory']/*[local-name()='struct']/
>> *[local-name()='title']"
>
Hi Andrew,
my experience with XPathEntityProcessor is non-existent. ;-)
Just after a quick look at the method that throws the exception:
private void addField0(String xpath, String name, boolean multiValued,
boolean isRecord) {
List paths = new
LinkedList(Arrays.
On Jul 30, 2009, at 11:54 AM, Andrew Clegg wrote:
xpath="//*[local-name()='structCategory']/*[local-name()='struct']/
*[local-name()='title']"
/>
The XPathEntityProcessor doesn't support that fancy of an xpath - it
supports only a limited subset. Try /structCate
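As a quick stdlib sanity check (on a tiny namespace-free sample, which is an assumption about the data), the local-name() form and a simple path of the kind the XPathEntityProcessor supports select the same text:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathSubset {
    // Evaluate an XPath expression against an XML string and return the text result.
    public static String eval(String xml, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<structCategory><struct><title>Example structure</title></struct></structCategory>";
        // The local-name() form from the thread and the simple form give the same text:
        System.out.println(eval(xml,
            "//*[local-name()='struct']/*[local-name()='title']"));
        System.out.println(eval(xml, "/structCategory/struct/title"));
    }
}
```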
Chantal Ackermann wrote:
>
> Hi Andrew,
>
> your inner entity uses an XML type datasource. The default entity
> processor is the SQL one, however.
>
> For your inner entity, you have to specify the correct entity processor
> explicitly. You do that by adding the attribute "processor", and th
Hi Andrew,
your inner entity uses an XML type datasource. The default entity
processor is the SQL one, however.
For your inner entity, you have to specify the correct entity processor
explicitly. You do that by adding the attribute "processor", and the
value is the classname of the processor
Do you have an index where this exception happens consistently, eg
when you try to optimize? Can you post that somewhere?
Also, which exact JRE version are you using?
Mike
On Sun, Mar 29, 2009 at 1:28 PM, Sameer Maggon wrote:
> In our application, we are getting NullPointerExceptions very freq
Hi,
Yes, cdt & mdt are the date in MYSQL DB
> Date: Fri, 26 Sep 2008 13:58:24 +0530
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: NullPointerException
>
> I dunno if the problem is w/ date. are cdt and mdt date fields in the DB?
>
> O
I dunno if the problem is w/ date. are cdt and mdt date fields in the DB?
On Fri, Sep 26, 2008 at 12:58 AM, Shalin Shekhar Mangar
<[EMAIL PROTECTED]> wrote:
> I'm not sure about why the NullPointerException is coming. Is that the whole
> stack trace?
>
> The mdt and cdt are date in schema.xml but
I'm not sure about why the NullPointerException is coming. Is that the whole
stack trace?
The mdt and cdt are date in schema.xml but the format that is in the log is
wrong. Look at the DateFormatTransformer in DataImportHandler which can
format strings in your database to the correct date format n
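DateFormatTransformer essentially does a conversion like the one below — parse the database string with a declared pattern and emit Solr's ISO-8601 form (the MySQL pattern and the UTC assumption are guesses about this particular setup):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateConvert {
    // Parse a MySQL-style datetime string and format it the way Solr expects.
    public static String toSolrDate(String mysqlDate) throws Exception {
        SimpleDateFormat in = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        in.setTimeZone(TimeZone.getTimeZone("UTC"));
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date d = in.parse(mysqlDate);
        return out.format(d);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toSolrDate("2008-09-26 13:58:24"));
    }
}
```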
> : I'm just looking into transitioning from solr 1.2 to 1.3 (trunk). I
> : have some legacy handler code (called "AdvancedRequestHandler") that
> : used to work with 1.2 but now throws an exception using 1.3 (latest
> : nightly build).
> This is an interesting use case that wasn't really conside
: I'm just looking into transitioning from solr 1.2 to 1.3 (trunk). I
: have some legacy handler code (called "AdvancedRequestHandler") that
: used to work with 1.2 but now throws an exception using 1.3 (latest
: nightly build). The exception is this:
The short answer is: right after you call "s
Otis,
Thanks for the response, that list should be very useful!
Charlie
-Original Message-
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 02, 2007 11:13 AM
To: solr-user@lucene.apache.org
Subject: Re: NullPointerException (not schema related)
Charlie,
There is
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: Charlie Jackson <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, May 1, 2007 5:31:13 PM
Subject: RE: NullPointerException (not schema related)
I went with the first approach which go
to be asked.
Thanks,
Charlie
-Original Message-
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 01, 2007 3:20 PM
To: solr-user@lucene.apache.org
Subject: RE: NullPointerException (not schema related)
:
: snapshooter
: /usr/local/Production/solr/so
On 5/1/07, Charlie Jackson <[EMAIL PROTECTED]> wrote:
This is what came in the solrconfig.xml file with just a minor tweak to
the directory. However, when I committed data to the index, I was
getting "No such file or directory" errors from the Runtime.exec call. I
verified all of the permissions,
:
: snapshooter
: /usr/local/Production/solr/solr/bin/
: true
:
: the directory. However, when I committed data to the index, I was
: getting "No such file or directory" errors from the Runtime.exec call. I
: verified all of the permissions, etc, with the user I was tr
Nevermind this...looks like my problem was tagging the "args" as an
node instead of an node. Thanks anyway!
Charlie
-Original Message-
From: Charlie Jackson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 01, 2007 12:02 PM
To: solr-user@lucene.apache.org
Subject: NullPointerException (not