Issue with Atomic update on boolean fields

2022-08-29 Thread Rahul Goswami
Hello,
I submitted the below JIRA after discussing the issue on the user list.

https://issues.apache.org/jira/browse/SOLR-16360

I have tried the fix locally and it's working as expected. I would
like to contribute it as well. Since this will be my first time
contributing to the open source community, I would like to know if the JIRA
needs to be assigned to me or if I can just go ahead and submit a PR
against it?

Thanks,
Rahul
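
For context, an atomic update on a boolean field is issued roughly like this
through SolrJ (a minimal sketch; the URL, collection, and field names are
illustrative and not taken from SOLR-16360):

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BooleanAtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      // Atomic "set" of a boolean field on an existing document; SOLR-16360
      // concerns how such updates are applied to boolean fields.
      doc.addField("inStock", Collections.singletonMap("set", true));
      client.add("mycollection", doc);
      client.commit("mycollection");
    }
  }
}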


Re: Lucene (unexpected ) fsync on existing segments

2021-03-27 Thread Rahul Goswami
Hello,
Opened the below JIRA for this issue. I will work on this and try to submit
a patch.
[LUCENE-9889] Lucene (unexpected ) fsync on existing segments - ASF JIRA
(apache.org) <https://issues.apache.org/jira/browse/LUCENE-9889>

Thanks,
Rahul

On Fri, Mar 26, 2021 at 9:56 AM Rahul Goswami  wrote:

> Mike,
>
>  >> "But, I believe you (system locks up with MMapDirectory for your
> use-case), so there is a bug somewhere!  And I wish we could get to the
> bottom of that, and fix it."
>
> Yes that's true for Windows for sure. I haven't tested it on Unix-like
> systems to that scale, so don't have any observations to report there.
>
> >> "Also, this (system locks up when using MMapDirectory) sounds different
> from the "Lucene fsyncs files that it doesn't need to" bug, right?"
>
> That's correct, they are separate issues. I just brought up the
> system-freezing-up-on-Windows point in response to Uwe's explanation
> earlier.
>
> I know I had taken it upon myself to open up a Jira for the fsync issue,
> but it got delayed on my side as I got occupied with other things
> in my day job. Will open up one later today.
>
> Thanks,
> Rahul
>
>
> On Wed, Mar 24, 2021 at 12:58 PM Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> MMapDirectory really should be (is supposed to be) better than
>> SimpleFSDirectory for your use case.
>>
>> Memory mapped pages do not have to fit into your 64 GB physical space,
>> but the "hot" pages (parts of the index that you are actively querying)
>> ideally would fit mostly in free RAM on your box to have OK search
>> performance.  Run with as small a JVM heap as possible so the OS has the
>> most RAM to keep such pages hot.  Since you are getting OK performance with
>> SimpleFSDirectory it sounds like you do have enough free RAM for the parts
>> of the index you are searching...
>>
>> But, I believe you (system locks up with MMapDirectory for your use-case),
>> so there is a bug somewhere!  And I wish we could get to the bottom of
>> that, and fix it.
>>
>> Also, this (system locks up when using MMapDirectory) sounds different
>> from the "Lucene fsyncs files that it doesn't need to" bug, right?
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Mon, Mar 15, 2021 at 4:28 PM Rahul Goswami 
>> wrote:
>>
>>> Uwe,
>>> I understand that mmap would only map *a part* of the index from virtual
>>> address space to physical memory as and when the pages are requested.
>>> However the limitation on our side is that in most cases, we cannot ask for
>>> more than 128 GB RAM (and unfortunately even that would be a stretch) for
>>> the Solr machine.
>>>
>>> I have read and re-read the article you referenced in the past :) It's
>>> brilliantly written and did help clarify quite a few things for me I must
>>> say. However, at the end of the day, there is only so much the OS (at least
>>> Windows) can do before it starts to swap different pages in a 2-3 TB index
>>> into 64 GB of physical space, isn't that right? The CPU usage spikes to
>>> 100% at such times and the machine becomes totally unresponsive. Turning on
>>> SimpleFSDirectory at such times does rid us of this issue. I understand
>>> that we are losing out on performance by an order of magnitude compared to
>>> mmap, but I don't know any alternate solution. Also, since most of our use
>>> cases are more write-heavy than read-heavy, we can afford to compromise on
>>> the search performance due to SimpleFS.
>>>
>>> Please let me know still, if there is anything about my explanation that
>>> doesn't sound right to you.
>>>
>>> Thanks,
>>> Rahul
>>>
>>> On Mon, Mar 15, 2021 at 3:54 PM Uwe Schindler  wrote:
>>>
>>>> This is not true. Memory mapping does not need to load the index into
>>>> ram, so you don't need so much physical memory. Paging is done only between
>>>> index files and ram, that's what memory mapping is about.
>>>>
>>>> Please read the blog post:
>>>> https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>>>>
>>>> Uwe
>>>>
>>>> On March 15, 2021 at 7:43:29 PM UTC, Rahul Goswami <
>>>> rahul196...@gmail.com> wrote:
>>>>>
>>>>> Mike,
>>>>> Yes I am using a 64 bit JVM on Windows. I haven't tried reproducing
>>>>> the issue on Linux yet. In the 

Re: Lucene (unexpected ) fsync on existing segments

2021-03-26 Thread Rahul Goswami
Mike,

 >> "But, I believe you (system locks up with MMapDirectory for your
use-case), so there is a bug somewhere!  And I wish we could get to the
bottom of that, and fix it."

Yes that's true for Windows for sure. I haven't tested it on Unix-like
systems to that scale, so don't have any observations to report there.

>> "Also, this (system locks up when using MMapDirectory) sounds different
from the "Lucene fsyncs files that it doesn't need to" bug, right?"

That's correct, they are separate issues. I just brought up the
system-freezing-up-on-Windows point in response to Uwe's explanation
earlier.

I know I had taken it upon myself to open up a Jira for the fsync issue,
but it got delayed on my side as I got occupied with other things
in my day job. Will open up one later today.

Thanks,
Rahul


On Wed, Mar 24, 2021 at 12:58 PM Michael McCandless <
luc...@mikemccandless.com> wrote:

> MMapDirectory really should be (is supposed to be) better than
> SimpleFSDirectory for your use case.
>
> Memory mapped pages do not have to fit into your 64 GB physical space, but
> the "hot" pages (parts of the index that you are actively querying) ideally
> would fit mostly in free RAM on your box to have OK search performance.
> Run with as small a JVM heap as possible so the OS has the most RAM to keep
> such pages hot.  Since you are getting OK performance with
> SimpleFSDirectory it sounds like you do have enough free RAM for the parts
> of the index you are searching...
>
> But, I believe you (system locks up with MMapDirectory for your use-case),
> so there is a bug somewhere!  And I wish we could get to the bottom of
> that, and fix it.
>
> Also, this (system locks up when using MMapDirectory) sounds different
> from the "Lucene fsyncs files that it doesn't need to" bug, right?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Mon, Mar 15, 2021 at 4:28 PM Rahul Goswami 
> wrote:
>
>> Uwe,
>> I understand that mmap would only map *a part* of the index from virtual
>> address space to physical memory as and when the pages are requested.
>> However the limitation on our side is that in most cases, we cannot ask for
>> more than 128 GB RAM (and unfortunately even that would be a stretch) for
>> the Solr machine.
>>
>> I have read and re-read the article you referenced in the past :) It's
>> brilliantly written and did help clarify quite a few things for me I must
>> say. However, at the end of the day, there is only so much the OS (at least
>> Windows) can do before it starts to swap different pages in a 2-3 TB index
>> into 64 GB of physical space, isn't that right? The CPU usage spikes to
>> 100% at such times and the machine becomes totally unresponsive. Turning on
>> SimpleFSDirectory at such times does rid us of this issue. I understand
>> that we are losing out on performance by an order of magnitude compared to
>> mmap, but I don't know any alternate solution. Also, since most of our use
>> cases are more write-heavy than read-heavy, we can afford to compromise on
>> the search performance due to SimpleFS.
>>
>> Please let me know still, if there is anything about my explanation that
>> doesn't sound right to you.
>>
>> Thanks,
>> Rahul
>>
>> On Mon, Mar 15, 2021 at 3:54 PM Uwe Schindler  wrote:
>>
>>> This is not true. Memory mapping does not need to load the index into
>>> ram, so you don't need so much physical memory. Paging is done only between
>>> index files and ram, that's what memory mapping is about.
>>>
>>> Please read the blog post:
>>> https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>>>
>>> Uwe
>>>
>>> On March 15, 2021 at 7:43:29 PM UTC, Rahul Goswami <
>>> rahul196...@gmail.com> wrote:
>>>>
>>>> Mike,
>>>> Yes I am using a 64 bit JVM on Windows. I haven't tried reproducing the
>>>> issue on Linux yet. In the past we have had problems with mmap on Windows
>>>> with the machine freezing. The rationale I gave myself is that the amount of
>>>> disk and CPU activity for paging in and out must be intense for the OS
>>>> while trying to map an index that large into 64 GB of heap. Also, since it's
>>>> an on-premise deployment, we can't expect the customers of the product to
>>>> provide nodes with > 400 GB RAM, which is what *I think* would be required
>>>> to get decent performance with mmap. Hence we had to switch to
>>>> SimpleFSDirectory.
>>>>
>>>> As for the fsync behavior, you are right. I tried

Re: Lucene (unexpected ) fsync on existing segments

2021-03-15 Thread Rahul Goswami
Uwe,
I understand that mmap would only map *a part* of the index from virtual
address space to physical memory as and when the pages are requested.
However the limitation on our side is that in most cases, we cannot ask for
more than 128 GB RAM (and unfortunately even that would be a stretch) for
the Solr machine.

I have read and re-read the article you referenced in the past :) It's
brilliantly written and did help clarify quite a few things for me I must
say. However, at the end of the day, there is only so much the OS (at least
Windows) can do before it starts to swap different pages in a 2-3 TB index
into 64 GB of physical space, isn't that right? The CPU usage spikes to
100% at such times and the machine becomes totally unresponsive. Turning on
SimpleFSDirectory at such times does rid us of this issue. I understand
that we are losing out on performance by an order of magnitude compared to
mmap, but I don't know any alternate solution. Also, since most of our use
cases are more write-heavy than read-heavy, we can afford to compromise on
the search performance due to SimpleFS.

Please let me know still, if there is anything about my explanation that
doesn't sound right to you.

Thanks,
Rahul

On Mon, Mar 15, 2021 at 3:54 PM Uwe Schindler  wrote:

> This is not true. Memory mapping does not need to load the index into ram,
> so you don't need so much physical memory. Paging is done only between
> index files and ram, that's what memory mapping is about.
>
> Please read the blog post:
> https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
> Uwe
>
> On March 15, 2021 at 7:43:29 PM UTC, Rahul Goswami <
> rahul196...@gmail.com> wrote:
>>
>> Mike,
>> Yes I am using a 64 bit JVM on Windows. I haven't tried reproducing the
>> issue on Linux yet. In the past we have had problems with mmap on Windows
>> with the machine freezing. The rationale I gave myself is that the amount of
>> disk and CPU activity for paging in and out must be intense for the OS
>> while trying to map an index that large into 64 GB of heap. Also, since it's
>> an on-premise deployment, we can't expect the customers of the product to
>> provide nodes with > 400 GB RAM, which is what *I think* would be required
>> to get decent performance with mmap. Hence we had to switch to
>> SimpleFSDirectory.
>>
>> As for the fsync behavior, you are right. I tried with
>> NRTCachingDirectoryFactory as well which defaults to using mmap underneath
>> and still makes fsync calls for already existing index files.
>>
>> Thanks,
>> Rahul
>>
>> On Mon, Mar 15, 2021 at 3:15 PM Michael McCandless <
>> luc...@mikemccandless.com> wrote:
>>
>>> Thanks Rahul.
>>>
>>> > primary reason being that memory mapping multi-terabyte indexes is not
>>> feasible through mmap
>>>
>>> Hmm, that is interesting -- are you using a 64 bit JVM?  If so, what
>>> goes wrong with such large maps?  Lucene's MMapDirectory should chunk the
>>> mapping to deal with ByteBuffer's int-only address space.
>>>
>>> SimpleFSDirectory usually has substantially worse performance than
>>> MMapDirectory.
>>>
>>> Still, I suspect you would hit the same issue if you used other
>>> FSDirectory implementations -- the fsync behavior should be the same.
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Fri, Mar 12, 2021 at 1:46 PM Rahul Goswami 
>>> wrote:
>>>
>>>> Thanks Michael. For your question...yes I am running Solr on Windows
>>>> and running it with SimpleFSDirectoryFactory (primary reason being that
>>>> memory mapping multi-terabyte indexes is not feasible through mmap). I will
>>>> create a Jira later today with the details in this thread and assign it to
>>>> myself. Will take a shot at the fix.
>>>>
>>>> Thanks,
>>>> Rahul
>>>>
>>>> On Fri, Mar 12, 2021 at 10:00 AM Michael McCandless <
>>>> luc...@mikemccandless.com> wrote:
>>>>
>>>>> I think long ago we used to track which files were actually dirty (we
>>>>> had written bytes to) and only fsync those ones.  But something went wrong
>>>>> with that, and at some point we "simplified" this logic, I think on the
>>>>> assumption that asking the OS to fsync a file that does in fact exist yet
>>>>> has not changed would be harmless?  But somehow it is not in your
>>>>> case?  Are you on Windows?
>>>>>
>>>>> I tried to do a bit of dig

Re: Lucene (unexpected ) fsync on existing segments

2021-03-15 Thread Rahul Goswami
Mike,
Yes I am using a 64 bit JVM on Windows. I haven't tried reproducing the
issue on Linux yet. In the past we have had problems with mmap on Windows
with the machine freezing. The rationale I gave myself is that the amount of
disk and CPU activity for paging in and out must be intense for the OS
while trying to map an index that large into 64 GB of heap. Also, since it's
an on-premise deployment, we can't expect the customers of the product to
provide nodes with > 400 GB RAM, which is what *I think* would be required
to get decent performance with mmap. Hence we had to switch to
SimpleFSDirectory.
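
For reference, the directory implementations being discussed are chosen roughly
like this at the Lucene level (a minimal sketch; the path and chunk size are
illustrative, and in Solr the choice is made through the directoryFactory
setting in solrconfig.xml rather than in code):

import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.MMapDirectory;
import org.apache.lucene.store.SimpleFSDirectory;

public class DirectoryChoiceExample {
  public static void main(String[] args) throws Exception {
    Path indexPath = Paths.get("/path/to/index");
    // Memory-mapped reads; large files are mapped in chunks to work around
    // ByteBuffer's int-only addressing.
    Directory mmapDir = new MMapDirectory(indexPath, 256 * 1024 * 1024);
    // Plain positional reads, no memory mapping.
    Directory simpleDir = new SimpleFSDirectory(indexPath);
    // The default factory method typically picks MMapDirectory on 64-bit JVMs.
    Directory defaultDir = FSDirectory.open(indexPath);
    mmapDir.close();
    simpleDir.close();
    defaultDir.close();
  }
}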

As for the fsync behavior, you are right. I tried with
NRTCachingDirectoryFactory as well which defaults to using mmap underneath
and still makes fsync calls for already existing index files.

Thanks,
Rahul

On Mon, Mar 15, 2021 at 3:15 PM Michael McCandless <
luc...@mikemccandless.com> wrote:

> Thanks Rahul.
>
> > primary reason being that memory mapping multi-terabyte indexes is not
> feasible through mmap
>
> Hmm, that is interesting -- are you using a 64 bit JVM?  If so, what goes
> wrong with such large maps?  Lucene's MMapDirectory should chunk the
> mapping to deal with ByteBuffer's int-only address space.
>
> SimpleFSDirectory usually has substantially worse performance than
> MMapDirectory.
>
> Still, I suspect you would hit the same issue if you used other
> FSDirectory implementations -- the fsync behavior should be the same.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Mar 12, 2021 at 1:46 PM Rahul Goswami 
> wrote:
>
>> Thanks Michael. For your question...yes I am running Solr on Windows and
>> running it with SimpleFSDirectoryFactory (primary reason being that memory
>> mapping multi-terabyte indexes is not feasible through mmap). I will create
>> a Jira later today with the details in this thread and assign it to myself.
>> Will take a shot at the fix.
>>
>> Thanks,
>> Rahul
>>
>> On Fri, Mar 12, 2021 at 10:00 AM Michael McCandless <
>> luc...@mikemccandless.com> wrote:
>>
>>> I think long ago we used to track which files were actually dirty (we
>>> had written bytes to) and only fsync those ones.  But something went wrong
>>> with that, and at some point we "simplified" this logic, I think on the
>>> assumption that asking the OS to fsync a file that does in fact exist yet
>>> has not changed would be harmless?  But somehow it is not in your
>>> case?  Are you on Windows?
>>>
>>> I tried to do a bit of digital archaeology and remember what
>>> happened here, and I came across this relevant looking issue:
>>> https://issues.apache.org/jira/browse/LUCENE-2328.  That issue moved
>>> tracking of which files have been written but not yet fsync'd down from
>>> IndexWriter into FSDirectory.
>>>
>>> But there was another change that then removed staleFiles from
>>> FSDirectory entirely; still trying to find that.  Aha, found it!
>>> https://issues.apache.org/jira/browse/LUCENE-6150.  Phew, Uwe was really
>>> quite upset in that issue ;)
>>>
>>> I also came across this delightful related issue, showing how a massive
>>> hurricane (Irene) can lead to finding and fixing a bug in Lucene!
>>> https://issues.apache.org/jira/browse/LUCENE-3418
>>>
>>> > The assumption is that while the commit point is saved, no changes
>>> happen to the segment files in the saved generation.
>>>
>>> This assumption should really be true.  Lucene writes the files append-only,
>>> once, and then never changes them once they are closed.  Pulling a
>>> commit point from Solr should further ensure that, even as indexing
>>> continues and new segments are written, the old segments referenced in that
>>> commit point will not be deleted.  But apparently this "harmless fsync"
>>> Lucene is doing is not so harmless in your use case.  Maybe open an issue
>>> and pull out the details from this discussion onto it?
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Fri, Mar 12, 2021 at 9:03 AM Michael Sokolov 
>>> wrote:
>>>
>>>> Also - I should have said - I think the first step here is to write a
>>>> focused unit test that demonstrates the existence of the extra fsyncs
>>>> that we want to eliminate. It would be awesome if you were able to
>>>> create such a thing.
>>>>
>>>> On Fri, Mar 12, 2021 at 9:00 AM Michael Sokolov 
>>>> wrote:
>>>>

Re: Lucene (unexpected ) fsync on existing segments

2021-03-12 Thread Rahul Goswami
Thanks Michael. For your question...yes I am running Solr on Windows and
running it with SimpleFSDirectoryFactory (primary reason being that memory
mapping multi-terabyte indexes is not feasible through mmap). I will create
a Jira later today with the details in this thread and assign it to myself.
Will take a shot at the fix.

Thanks,
Rahul

On Fri, Mar 12, 2021 at 10:00 AM Michael McCandless <
luc...@mikemccandless.com> wrote:

> I think long ago we used to track which files were actually dirty (we had
> written bytes to) and only fsync those ones.  But something went wrong with
> that, and at some point we "simplified" this logic, I think on the
> assumption that asking the OS to fsync a file that does in fact exist yet
> has not changed would be harmless?  But somehow it is not in your
> case?  Are you on Windows?
>
> I tried to do a bit of digital archaeology and remember what
> happened here, and I came across this relevant looking issue:
> https://issues.apache.org/jira/browse/LUCENE-2328.  That issue moved
> tracking of which files have been written but not yet fsync'd down from
> IndexWriter into FSDirectory.
>
> But there was another change that then removed staleFiles from FSDirectory
> entirely; still trying to find that.  Aha, found it!
> https://issues.apache.org/jira/browse/LUCENE-6150.  Phew, Uwe was really
> quite upset in that issue ;)
>
> I also came across this delightful related issue, showing how a massive
> hurricane (Irene) can lead to finding and fixing a bug in Lucene!
> https://issues.apache.org/jira/browse/LUCENE-3418
>
> > The assumption is that while the commit point is saved, no changes
> happen to the segment files in the saved generation.
>
> This assumption should really be true.  Lucene writes the files append-only,
> once, and then never changes them once they are closed.  Pulling a
> commit point from Solr should further ensure that, even as indexing
> continues and new segments are written, the old segments referenced in that
> commit point will not be deleted.  But apparently this "harmless fsync"
> Lucene is doing is not so harmless in your use case.  Maybe open an issue
> and pull out the details from this discussion onto it?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Mar 12, 2021 at 9:03 AM Michael Sokolov 
> wrote:
>
>> Also - I should have said - I think the first step here is to write a
>> focused unit test that demonstrates the existence of the extra fsyncs
>> that we want to eliminate. It would be awesome if you were able to
>> create such a thing.
>>
>> On Fri, Mar 12, 2021 at 9:00 AM Michael Sokolov 
>> wrote:
>> >
>> > Yes, please go ahead and open an issue. TBH I'm not sure why this is
>> > happening - there may be a good reason?? But let's explore it using an
>> > issue, thanks.
>> >
>> > On Fri, Mar 12, 2021 at 12:16 AM Rahul Goswami 
>> wrote:
>> > >
>> > > I can create a Jira and assign it to myself if that's ok (?). I think
>> this can help improve commit performance.
>> > > Also, to answer your question, we have indexes sometimes going into
>> multiple terabytes. Using the replication handler for backup would mean
>> requiring a disk capacity more than 2x the index size on the machine at all
>> times, which might not be feasible. So we directly back the index up from
>> the Solr node to a remote repository.
>> > >
>> > > Thanks,
>> > > Rahul
>> > >
>> > > On Thu, Mar 11, 2021 at 4:09 PM Michael Sokolov 
>> wrote:
>> > >>
>> > >> Well, it certainly doesn't seem necessary to fsync files that are
>> > >> unchanged and have already been fsync'ed. Maybe there's an
>> opportunity
>> > >> to improve it? On the other hand, support for external processes
>> > >> reading Lucene index files isn't likely to become a feature of
>> Lucene.
>> > >> You might want to consider using Solr replication to power your
>> > >> backup?
>> > >>
>> > >> On Thu, Mar 11, 2021 at 2:52 PM Rahul Goswami 
>> wrote:
>> > >> >
>> > >> > Thanks Michael. I thought since this discussion is closer to the
>> code than most discussions on the solr-users list, it seemed like a more
>> appropriate forum. Will be mindful going forward.
>> > >> > On your point about new segments, I attached a debugger and tried
>> to do a new commit (just pure Solr commit, no backup process running), and
>> the code indeed does fsync on a pre-existing segm
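
A focused test for the extra fsyncs, as Michael Sokolov suggests above, could
record which files IndexWriter asks the directory to sync. A rough sketch of
such a wrapper (not from the thread; the class and field names are made up):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;

// Records every file name passed to sync(); a test could commit twice without
// touching old segments and assert that the second commit syncs only new files.
class SyncRecordingDirectory extends FilterDirectory {
  final List<String> syncedFiles = new ArrayList<>();

  SyncRecordingDirectory(Directory in) {
    super(in);
  }

  @Override
  public void sync(Collection<String> names) throws IOException {
    syncedFiles.addAll(names);
    super.sync(names);
  }
}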

Re: Lucene (unexpected ) fsync on existing segments

2021-03-11 Thread Rahul Goswami
I can create a Jira and assign it to myself if that's ok (?). I think this
can help improve commit performance.
Also, to answer your question, we have indexes sometimes going into
multiple terabytes. Using the replication handler for backup would mean
requiring a disk capacity more than 2x the index size on the machine at all
times, which might not be feasible. So we directly back the index up from
the Solr node to a remote repository.

Thanks,
Rahul

On Thu, Mar 11, 2021 at 4:09 PM Michael Sokolov  wrote:

> Well, it certainly doesn't seem necessary to fsync files that are
> unchanged and have already been fsync'ed. Maybe there's an opportunity
> to improve it? On the other hand, support for external processes
> reading Lucene index files isn't likely to become a feature of Lucene.
> You might want to consider using Solr replication to power your
> backup?
>
> On Thu, Mar 11, 2021 at 2:52 PM Rahul Goswami 
> wrote:
> >
> > Thanks Michael. I thought since this discussion is closer to the code
> than most discussions on the solr-users list, it seemed like a more
> appropriate forum. Will be mindful going forward.
> > On your point about new segments, I attached a debugger and tried to do
> a new commit (just pure Solr commit, no backup process running), and the
> code indeed does fsync on a pre-existing segment file. Hence I was a bit
> baffled since it challenged my fundamental understanding that segment files
> once written are immutable, no matter what (unless picked up for a merge of
> course). Hence I thought of reaching out, in case there are scenarios where
> this might happen which I might be unaware of.
> >
> > Thanks,
> > Rahul
> >
> > On Thu, Mar 11, 2021 at 2:38 PM Michael Sokolov 
> wrote:
> >>
> >> This isn't a support forum; solr-users@ might be more appropriate. On
> >> that list someone might have a better idea about how the replication
> >> handler gets its list of files. This would be a good list to try if
> >> you wanted to propose a fix for the problem you're having. But since
> >> you're here -- it looks to me as if IndexWriter indeed syncs all "new"
> >> files in the current segments being committed; look in
> >> IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
> >> looking at this code for the first time, and (2) things may have been
> >> different in 7.7.2? Sorry I don't know for sure, but are you sure that
> >> your backup process is not attempting to copy one of the new files?
> >>
> >> On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami 
> wrote:
> >> >
> >> > Hello,
> >> > Just wanted to follow up one more time to see if this is the right
> forum for my question? Or is this suitable for some other mailing list?
> >> >
> >> > Best,
> >> > Rahul
> >> >
> >> > On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami 
> wrote:
> >> >>
> >> >> Hello everyone,
> >> >> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
> >> >>
> >> >> Thanks,
> >> >> Rahul
> >> >>
> >> >> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
> >> >>>
> >> >>> Hello,
> >> >>> We have a process which backs up the index (Solr 7.7.2) on a
> schedule. The way we do it is we first save a commit point on the index and
> then using Solr's /replication handler, get the list of files in that
> generation. After the backup completes, we release the commit point (Please
> note that this is a separate backup process outside of Solr and not the
> backup command of the /replication handler)
> >> >>> The assumption is that while the commit point is saved, no changes
> happen to the segment files in the saved generation.
> >> >>>
> >> >>> Now the issue... The backup process opens the index files in a
> shared READ mode, preventing writes. This is causing any parallel commits
> to fail as it seems to be complaining that the index files are locked by
> another process (the backup process). Upon debugging, I see that fsync is
> being called during commit on already existing segment files which is not
> expected. So, my question is, is there any reason for lucene to call f

Re: Lucene (unexpected ) fsync on existing segments

2021-03-11 Thread Rahul Goswami
Thanks Michael. I thought since this discussion is closer to the code than
most discussions on the solr-users list, it seemed like a more appropriate
forum. Will be mindful going forward.
On your point about new segments, I attached a debugger and tried to do a
new commit (just pure Solr commit, no backup process running), and the code
indeed does fsync on a pre-existing segment file. Hence I was a bit baffled
since it challenged my fundamental understanding that segment files once
written are immutable, no matter what (unless picked up for a merge of
course). Hence I thought of reaching out, in case there are scenarios where
this might happen which I might be unaware of.

Thanks,
Rahul

On Thu, Mar 11, 2021 at 2:38 PM Michael Sokolov  wrote:

> This isn't a support forum; solr-users@ might be more appropriate. On
> that list someone might have a better idea about how the replication
> handler gets its list of files. This would be a good list to try if
> you wanted to propose a fix for the problem you're having. But since
> you're here -- it looks to me as if IndexWriter indeed syncs all "new"
> files in the current segments being committed; look in
> IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
> looking at this code for the first time, and (2) things may have been
> different in 7.7.2? Sorry I don't know for sure, but are you sure that
> your backup process is not attempting to copy one of the new files?
>
> On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami 
> wrote:
> >
> > Hello,
> > Just wanted to follow up one more time to see if this is the right forum
> for my question? Or is this suitable for some other mailing list?
> >
> > Best,
> > Rahul
> >
> > On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami 
> wrote:
> >>
> >> Hello everyone,
> >> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
> >>
> >> Thanks,
> >> Rahul
> >>
> >> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
> >>>
> >>> Hello,
> >>> We have a process which backs up the index (Solr 7.7.2) on a schedule.
> The way we do it is we first save a commit point on the index and then
> using Solr's /replication handler, get the list of files in that
> generation. After the backup completes, we release the commit point (Please
> note that this is a separate backup process outside of Solr and not the
> backup command of the /replication handler)
> >>> The assumption is that while the commit point is saved, no changes
> happen to the segment files in the saved generation.
> >>>
> >>> Now the issue... The backup process opens the index files in a shared
> READ mode, preventing writes. This is causing any parallel commits to fail
> as it seems to be complaining that the index files are locked by another
> process (the backup process). Upon debugging, I see that fsync is being
> called during commit on already existing segment files which is not
> expected. So, my question is, is there any reason for lucene to call fsync
> on already existing segment files?
> >>>
> >>> The line of code I am referring to is as below:
> >>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
> StandardOpenOption.READ : StandardOpenOption.WRITE))
> >>>
> >>> in method fsync(Path fileToSync, boolean isDir) of the class file
> >>>
> >>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
> >>>
> >>> Thanks,
> >>> Rahul
>
>
>


Re: Lucene (unexpected ) fsync on existing segments

2021-03-11 Thread Rahul Goswami
Hello,
Just wanted to follow up one more time to see if this is the right forum for
my question? Or is this suitable for some other mailing list?

Best,
Rahul

On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami  wrote:

> Hello everyone,
> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
>
> Thanks,
> Rahul
>
> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
>
>> Hello,
>> We have a process which backs up the index (Solr 7.7.2) on a schedule.
>> The way we do it is we first save a commit point on the index and then
>> using Solr's /replication handler, get the list of files in that
>> generation. After the backup completes, we release the commit point (Please
>> note that this is a separate backup process outside of Solr and not
>> the backup command of the /replication handler)
>> The assumption is that while the commit point is saved, no changes happen
>> to the segment files in the saved generation.
>>
>> Now the issue... The backup process opens the index files in a shared
>> READ mode, preventing writes. This is causing any parallel commits to fail
>> as it seems to be complaining that the index files are locked by another
>> process (the backup process). Upon debugging, I see that fsync is being
>> called during commit on already existing segment files which is not
>> expected. So, my question is, is there any reason for lucene to call fsync
>> on already existing segment files?
>>
>> The line of code I am referring to is as below:
>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
>> StandardOpenOption.READ : StandardOpenOption.WRITE))
>>
>> in method fsync(Path fileToSync, boolean isDir) of the class file
>>
>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
>>
>> Thanks,
>> Rahul
>>
>


Re: Lucene (unexpected ) fsync on existing segments

2021-03-06 Thread Rahul Goswami
Hello everyone,
Following up on my question in case anyone has any idea. Why it's important
to know this is because I am thinking of allowing the backup process to not
hold any lock on the index files, which should allow the fsync during
parallel commits. BUT, in case doing an fsync on existing segment files in
a saved commit point DOES have an effect, it might render the backed up
index in a corrupt state.

Thanks,
Rahul

On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami  wrote:

> Hello,
> We have a process which backs up the index (Solr 7.7.2) on a schedule. The
> way we do it is we first save a commit point on the index and then using
> Solr's /replication handler, get the list of files in that generation.
> After the backup completes, we release the commit point (Please note that
> this is a separate backup process outside of Solr and not the backup
> command of the /replication handler)
> The assumption is that while the commit point is saved, no changes happen
> to the segment files in the saved generation.
>
> Now the issue... The backup process opens the index files in a shared READ
> mode, preventing writes. This is causing any parallel commits to fail as it
> seems to be complaining that the index files are locked by another
> process (the backup process). Upon debugging, I see that fsync is being
> called during commit on already existing segment files which is not
> expected. So, my question is, is there any reason for lucene to call fsync
> on already existing segment files?
>
> The line of code I am referring to is as below:
> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
> StandardOpenOption.READ : StandardOpenOption.WRITE))
>
> in method fsync(Path fileToSync, boolean isDir) of the class file
>
> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
>
> Thanks,
> Rahul
>


Lucene (unexpected ) fsync on existing segments

2021-03-05 Thread Rahul Goswami
Hello,
We have a process which backs up the index (Solr 7.7.2) on a schedule. The
way we do it is we first save a commit point on the index and then using
Solr's /replication handler, get the list of files in that generation.
After the backup completes, we release the commit point (Please note that
this is a separate backup process outside of Solr and not the backup
command of the /replication handler)
The assumption is that while the commit point is saved, no changes happen
to the segment files in the saved generation.
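
At the Lucene level, pinning a commit point while its files are copied looks
roughly like this (a minimal sketch of the analogous API; the path is
illustrative, and the backup described above goes through Solr's /replication
handler rather than this code):

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CommitSnapshotExample {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
    SnapshotDeletionPolicy snapshots =
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    IndexWriterConfig cfg =
        new IndexWriterConfig(new StandardAnalyzer()).setIndexDeletionPolicy(snapshots);
    try (IndexWriter writer = new IndexWriter(dir, cfg)) {
      IndexCommit commit = snapshots.snapshot();   // pin the current commit point
      try {
        // Copy these files elsewhere; indexing and new commits may continue,
        // but the pinned commit's files will not be deleted.
        for (String file : commit.getFileNames()) {
          System.out.println("would back up: " + file);
        }
      } finally {
        snapshots.release(commit);                 // allow normal deletion again
      }
    }
    dir.close();
  }
}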

Now the issue... The backup process opens the index files in a shared READ
mode, preventing writes. This is causing any parallel commits to fail as it
seems to be complaining that the index files are locked by another
process (the backup process). Upon debugging, I see that fsync is being
called during commit on already existing segment files which is not
expected. So, my question is, is there any reason for lucene to call fsync
on already existing segment files?

The line of code I am referring to is as below:
try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
StandardOpenOption.READ : StandardOpenOption.WRITE))

in method fsync(Path fileToSync, boolean isDir) of the class file

lucene\core\src\java\org\apache\lucene\util\IOUtils.java

Thanks,
Rahul


Re: SOLR: Why do we have a CHANGES.txt/md to maintain?

2020-12-02 Thread Rahul Goswami
Couldn't help pitching in here and making a humble request. The CHANGES.txt
has been of immense help for us for determining the right upgrade version
for our production deployments. So CHANGES.txt or no CHANGES.txt, I hope
we'll retain a mechanism that lets us clearly track the changes in
subsequent versions.

Thanks,
Rahul

On Mon, Nov 30, 2020 at 9:04 AM David Smiley  wrote:

> I get your point on different audiences... sometimes I peer-review us on
> dubiously written CHANGES.txt entries to be more user friendly.  However,
> this attention could and should be given to JIRA issue summaries as well.
> We all benefit from that.  Also, for Solr in particular, the need for
> examining CHANGES / JIRA is reduced because we have a solr-upgrade-notes.adoc
> which is editorialized and covers just the important stuff; no minor
> matters.  We link to this from release announcements.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Mon, Nov 30, 2020 at 3:29 AM Adrien Grand  wrote:
>
>> I have a preference for maintaining a separate CHANGES file because it
>> allows us to keep JIRA focused for a committer/contributor audience while
>> the CHANGES file can describe changes that matter for users. Elasticsearch
>> uses a similar mechanism for release notes to what you are proposing, using
>> GitHub instead of JIRA. It works well, but in my opinion the Lucene/Solr
>> process produces better curated release notes.
>>
>> On Mon, Nov 30, 2020 at 12:25 AM David Smiley  wrote:
>>
>>> Well the commit history remains there as well and was converted from SVN
>>> and may eventually be converted to something else.  My point is that it has
>>> been retained.  On release boundaries, we could not only distribute
>>> Changes.html (a JIRA export) in the assembly (tar.gz) but we could also
>>> commit it to source control on each release branch, and thus will transfer
>>> along with source control into the future, which is way more convenient
>>> than digging up an old binary.
>>>
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>>
>>>
>>> On Sun, Nov 29, 2020 at 5:55 AM Dawid Weiss 
>>> wrote:
>>>
 Changes in the repository stay there forever (think
 cvs/svn/git/whatever comes next...). External tools change all the
 time. This is the benefit I see.

 Dawid

 On Sun, Nov 29, 2020 at 6:32 AM David Smiley 
 wrote:
 >
 > After recently proposing per-module CHANGES.md... I think I'd
 actually rather not have any CHANGES file at all to maintain.  I'd rather
 go to JIRA with a bit better hygiene for metadata like
 components==contrib/module, and have some convenient links sprinkled about
 so that it's a convenient click away from each module.  This proposal may
 not be as compelling for Lucene which has no solr-upgrade-notes.adoc file.
 >
 > Maintaining this CHANGES file (or files) is a pain.  Formatting it
 just-so & conversion to HTML & other scripts manipulating it in dev-tools
 (e.g. add version), and branch syncing.  It's commonly a source of merge
 conflicts more than any other file.  It's an annoying step with GitHub PRs
 in particular.  Why do we bother?  Instead, on releases, provide a JIRA
 link to display all fixed issues grouped by issue type.  We could export it
 to a file for direct inclusion in the distribution.  JIRA even has a
 feature for this -- here's a direct link for 8.7:
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310230=12348463
 Notice the HTML version at the bottom.  It could be dumped into the release
 binaries.
 > Issue summaries tend to be much shorter than CHANGES.txt bullets but
 I think that's okay because it's not the only information available for
 those who want to know more.  Remember there is also all the other metadata
 in JIRA a user can examine, there are commit messages, sometimes PRs, and
 there's solr-upgrade-notes.adoc which ought to be the starting point for
 someone interested in a release.
 >
 > It's been argued that contributors should get attribution here but we
 could maintain a separate contributors file to acknowledge people by name
 for inclusion with the Solr distribution -- one that has a link to JIRA and
 GitHub even.
 >
 > ~ David Smiley
 > Apache Lucene/Solr Search Developer
 > http://www.linkedin.com/in/davidwsmiley



>>
>> --
>> Adrien
>>
>


[jira] [Commented] (SOLR-12550) ConcurrentUpdateSolrClient doesn't respect timeouts for commits and optimize

2019-06-24 Thread Rahul Goswami (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870859#comment-16870859 ]

Rahul Goswami commented on SOLR-12550:
--

As discussed on the Solr user list, I found this issue on Solr 7.2.1. Providing 
a patch on 7.2 with GitHub pull request #740 as attached to this Jira with the 
appropriate description of and solution to the problem.
I tried the patch in pull request #417 as submitted by Marc, but it won't work; 
the reason is that the builder object which is used to instantiate a 
ConcurrentUpdateSolrClient doesn't itself contain the timeout values.

> ConcurrentUpdateSolrClient doesn't respect timeouts for commits and optimize
> 
>
> Key: SOLR-12550
> URL: https://issues.apache.org/jira/browse/SOLR-12550
> Project: Solr
>  Issue Type: Bug
>Reporter: Marc Morissette
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We're in a situation where we need to optimize some of our collections. These 
> optimizations are done with waitSearcher=true as a simple throttling 
> mechanism to prevent too many collections from being optimized at once.
> We're seeing these optimize commands return without error after 10 minutes 
> but well before the end of the operation. Our Solr logs show errors with 
> socketTimeout stack traces. Setting distribUpdateSoTimeout to a higher value 
> has no effect.
> See the links section for my patch.
> It turns out that ConcurrentUpdateSolrClient delegates commit and optimize 
> commands to a private HttpSolrClient but fails to pass along its builder's 
> timeouts to that client.
> A patch is attached in the links section.
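
For illustration, the builder timeouts the issue refers to are the ones set like
this (a hedged sketch against the 7.x SolrJ builder API; the URL and values are
made up, and the exact builder methods available vary across 7.x releases):

import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;

public class ConcurrentClientTimeoutExample {
  public static void main(String[] args) throws Exception {
    try (ConcurrentUpdateSolrClient client =
             new ConcurrentUpdateSolrClient.Builder("http://localhost:8983/solr/mycollection")
                 .withQueueSize(100)
                 .withThreadCount(4)
                 .withConnectionTimeout(15000)       // ms
                 .withSocketTimeout(30 * 60 * 1000)  // ms, long enough for a slow optimize
                 .build()) {
      // Per the issue, commit/optimize are delegated to an internal HttpSolrClient
      // that is built without these timeouts, so long optimizes still time out early.
      client.optimize();
    }
  }
}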






[jira] [Updated] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)


 [ https://issues.apache.org/jira/browse/SOLR-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rahul Goswami updated SOLR-13217:
-
Description: 
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 
,sort="field1 asc,field2 
asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR 
(field4:2))",fq="

{!collapse field=id_field sort='field3 desc'}

")

 

I made sure that collapse parser here is the problem by removing all other 
filter queries and retaining only collapse filter query. The stacktrace is as 
below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Thread.java:834)

  was:
A NullPointerException is obtained when trying to use the /export handler with 
a search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 

[jira] [Updated] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)


 [ https://issues.apache.org/jira/browse/SOLR-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rahul Goswami updated SOLR-13217:
-
Description: 
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 
,sort="field1 asc,field2 
asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR 
(field4:2))",fq="

{!collapse field=id_field sort='field3 desc'}")

 

I made sure that collapse parser here is the problem by removing all other 
filter queries and retaining only collapse filter query. The stacktrace is as 
below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Thread.java:834)

  was:
A NullPointerException is thrown when trying to use the /export handler with a 
search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 [http://local

[jira] [Created] (SOLR-13217) collapse parser with /export request handler throws NPE

2019-02-04 Thread Rahul Goswami (JIRA)
Rahul Goswami created SOLR-13217:


 Summary: collapse parser with /export request handler throws NPE
 Key: SOLR-13217
 URL: https://issues.apache.org/jira/browse/SOLR-13217
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2.1
Reporter: Rahul Goswami


A NullPointerException is obtained when trying to use the /export handler with 
a search() streaming expression containing an fq which uses collapse parser. 
Below is the format of the complete query:

 [http://localhost:8983/solr/mycollection/stream/?expr=search(mycollection] 
,sort="field1 asc,field2 
asc",fl="fileld1,field2,field3",qt="/export",q="*:*",fq="((field4:1) OR 
(field4:2))",fq="{!collapse field=id_field sort='field3 desc'}")

 

I made sure that collapse parser here is the problem by removing all other 
filter queries and retaining only collapse filter query. The stacktrace is as 
below: 

 

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:539)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base/java.lang.Thread.run(Th

[jira] [Comment Edited] (SOLR-8291) NPE calling export handler when useFilterForSortedQuery=true

2019-01-17 Thread Rahul Goswami (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16744483#comment-16744483
 ] 

Rahul Goswami edited comment on SOLR-8291 at 1/17/19 9:44 PM:
--

Facing the same issue on 7.2.1 while using the /export handler.
 In my case useFilterForSortedQuery=false, and I get the NPE in 
ExportWriter.java when the /stream handler is invoked with a search() streaming 
expression with qt="/export" containing fq="{!collapse field=id_field 
sort='time desc'}" (among other fq's; I tried eliminating one fq at a time to 
find the problematic one, and the one with the collapse parser is what makes it 
fail). Below is the stack trace (same as Ron's above):

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
 at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
 at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
 at 
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
 at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
 at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
 at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
 at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
 at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)



 Is Ron's patch the way to go?

 


was (Author: rahul196...@gmail.com):
Facing the same issue on 7.2.1 while using /export handler.
 In my case useFilterForSortedQuery=false and I get the NPE in 
ExportWriter.java when the /stream handler is invoked with a search() streaming 
expression with qt="/export" containing fq="{!collapse field=id_field 
sort='time desc'}" (among other fq's. I tried eliminating one fq at a time to 
find the problematic one. The one with collapse parser is what makes it fail)
 Is Ron's patch the way to go?

 

> NPE calling export handler when useFilterForSortedQuery=true
> 
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>Priority: Major
> Attachments: SOLR-8291.patch, solr.log
>
>
> *Updated*: The stacktrace below was created when the solrconfig.xml has the 
> following element:
> {code}
>  <useFilterForSortedQuery>true</useFilterForSortedQuery>
> {code}
> It was determined that useFilterForSortedQuery is incompatible with the 
> /export handler.
> See the comments near the end of the ticket for a potential work around if 
> this flag needs to be set.
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
>

[jira] [Comment Edited] (SOLR-8291) NPE calling export handler when useFilterForSortedQuery=true

2019-01-17 Thread Rahul Goswami (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16744483#comment-16744483
 ] 

Rahul Goswami edited comment on SOLR-8291 at 1/17/19 9:42 PM:
--

Facing the same issue on 7.2.1 while using /export handler.
 In my case useFilterForSortedQuery=false and I get the NPE in 
ExportWriter.java when the /stream handler is invoked with a search() streaming 
expression with qt="/export" containing fq="{!collapse field=id_field 
sort='time desc'}" (among other fq's. I tried eliminating one fq at a time to 
find the problematic one. The one with collapse parser is what makes it fail)
 Is Ron's patch the way to go?

 


was (Author: rahul196...@gmail.com):
Facing the same issue on 7.2.1 while using /export handler.
In my case useFilterForSortedQuery=false and I get the NPE in ExportWriter.java 
when the stream request with /export handler contains an fq="{!collapse 
field=id_field sort='time desc'}"
 Is Ron's patch the way to go?

 

> NPE calling export handler when useFilterForSortedQuery=true
> 
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>Priority: Major
> Attachments: SOLR-8291.patch, solr.log
>
>
> *Updated*: The stacktrace below was created when the solrconfig.xml has the 
> following element:
> {code}
>  <useFilterForSortedQuery>true</useFilterForSortedQuery>
> {code}
> It was determined that useFilterForSortedQuery is incompatible with the 
> /export handler.
> See the comments near the end of the ticket for a potential work around if 
> this flag needs to be set.
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint

[jira] [Comment Edited] (SOLR-8291) NPE calling export handler when useFilterForSortedQuery=true

2019-01-17 Thread Rahul Goswami (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16744483#comment-16744483
 ] 

Rahul Goswami edited comment on SOLR-8291 at 1/17/19 9:36 PM:
--

Facing the same issue on 7.2.1 while using /export handler.
In my case useFilterForSortedQuery=false and I get the NPE in ExportWriter.java 
when the stream request with /export handler contains an fq="{!collapse 
field=id_field sort='time desc'}"
 Is Ron's patch the way to go?

 


was (Author: rahul196...@gmail.com):
Facing the same issue on 7.2.1 while using /export handler... Is Ron's patch 
the way to go? 

> NPE calling export handler when useFilterForSortedQuery=true
> 
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>Priority: Major
> Attachments: SOLR-8291.patch, solr.log
>
>
> *Updated*: The stacktrace below was created when the solrconfig.xml has the 
> following element:
> {code}
>  <useFilterForSortedQuery>true</useFilterForSortedQuery>
> {code}
> It was determined that useFilterForSortedQuery is incompatible with the 
> /export handler.
> See the comments near the end of the ticket for a potential work around if 
> this flag needs to be set.
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null
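
As an illustrative aside on the mechanism suggested above (a minimal Java 
sketch, not Solr code; the class name is hypothetical): BitSetIterator's 
constructor reads the length of the bit set it is handed, so a null FixedBitSet 
fails immediately with a NullPointerException at the constructor frame seen in 
the traces.

import org.apache.lucene.util.BitSetIterator;
import org.apache.lucene.util.FixedBitSet;

// Minimal, self-contained sketch: constructing a BitSetIterator over a null
// FixedBitSet throws an NPE, matching the top frame of the stack traces above.
public class NullBitSetDemo {
  public static void main(String[] args) {
    FixedBitSet bits = null;       // stands in for a filter bit set that was never populated
    new BitSetIterator(bits, 0);   // throws java.lang.NullPointerException here
  }
}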






[jira] [Commented] (SOLR-8291) NPE calling export handler when useFilterForSortedQuery=true

2019-01-16 Thread Rahul Goswami (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16744483#comment-16744483
 ] 

Rahul Goswami commented on SOLR-8291:
-

Facing the same issue on 7.2.1 while using /export handler... Is Ron's patch 
the way to go? 

> NPE calling export handler when useFilterForSortedQuery=true
> 
>
> Key: SOLR-8291
> URL: https://issues.apache.org/jira/browse/SOLR-8291
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Ray
>Priority: Major
> Attachments: SOLR-8291.patch, solr.log
>
>
> *Updated*: The stacktrace below was created when the solrconfig.xml has the 
> following element:
> {code}
>  <useFilterForSortedQuery>true</useFilterForSortedQuery>
> {code}
> It was determined that useFilterForSortedQuery is incompatible with the 
> /export handler.
> See the comments near the end of the ticket for a potential work around if 
> this flag needs to be set.
> Get NPE during calling export handler, here is the stack trace:
>   at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:58)
>   at 
> org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:53)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:727)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
>   at 
> org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
>   at 
> org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
>   at 
> org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
>   at 
> org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
>   at 
> org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:489)
>   at 
> org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:452)
>   at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:2019)
>   at java.lang.Thread.run(Thread.java:745)
> It seems that some FixedBitSet was set to null


